Inflation and cosmological parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought to be ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models in which the slow-roll conditions are briefly violated due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum, and we perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on the quantities that parameterise the step. (orig.)
Cosmological parameter estimation using Particle Swarm Optimization
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. Many theoretically motivated models demand a greater number of cosmological parameters than the standard model of cosmology uses, which makes the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods such as the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
Cosmological parameter estimation using particle swarm optimization
Prasad, Jayanti; Souradeep, Tarun
2012-06-01
Constraining theoretical models, which are represented by a set of parameters, using observational data is an important exercise in cosmology. In the Bayesian framework this is done by finding the probability distribution of the parameters that best fits the observational data, using sampling-based methods such as Markov chain Monte Carlo (MCMC). It has been argued that MCMC may not be the best option for problems in which the target function (likelihood) has local maxima or very high dimensionality. Apart from this, there are cases in which we are mainly interested in finding the point in parameter space at which the probability distribution attains its largest value. In this situation the problem of parameter estimation becomes an optimization problem. In the present work we show that particle swarm optimization (PSO), an artificial-intelligence-inspired population-based search procedure, can also be used for cosmological parameter estimation. Using PSO we were able to recover the best-fit Λ cold dark matter (LCDM) model parameters from the WMAP seven-year data without using any prior guess value or any other property of the probability distribution of the parameters, such as the standard deviation, as is common in MCMC. We also report the results of an exercise in which we consider a binned primordial power spectrum (to increase the dimensionality of the problem) and find that a power spectrum with features gives a lower chi-square than the standard power law. Since PSO does not sample the likelihood surface in a fair way, we follow a fitting procedure to find the spread of the likelihood function around the best-fit point.
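The core PSO update described above (particles pulled toward their personal best and the swarm's global best) can be sketched in a few lines. This is a generic illustration, not the authors' code: the toy Gaussian "likelihood" below stands in for a real CMB likelihood, and all parameter names and settings are assumptions.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: returns the best point found for f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # per-particle best points
    pval = np.array([f(p) for p in x])                 # per-particle best values
    gbest = pbest[np.argmax(pval)].copy()              # swarm-wide best point
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[np.argmax(pval)].copy()
    return gbest

# Toy log-likelihood peaked at (0.3, 0.7), standing in for a real CMB likelihood.
loglike = lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2) / (2 * 0.01 ** 2)
best = pso_maximize(loglike, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Note that, as the abstract says, PSO locates the maximum but does not sample the surface fairly, so uncertainties must be estimated separately.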
Assumptions of the primordial spectrum and cosmological parameter estimation
International Nuclear Information System (INIS)
Shafieloo, Arman; Souradeep, Tarun
2011-01-01
The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large-scale structure, depend on a set of cosmological parameters as well as on the assumed nature of the primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the functional form assumed for the PPS in cosmological parameter estimation can affect the best-fit parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters, where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained by allowing a free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises the concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of the PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early-universe scenarios that produce features in the PPS. (paper)
SCoPE: an efficient method of Cosmological Parameter Estimation
International Nuclear Information System (INIS)
Das, Santanu; Souradeep, Tarun
2014-01-01
The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching, which helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One current research interest in cosmology is quantifying the nature of dark energy. We analyse the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis on the one hand help us to understand the workings of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
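The adaptive covariance update mentioned in the abstract can be illustrated with a bare-bones random-walk Metropolis sampler whose proposal covariance is re-estimated from the chain as it runs (Haario-style scaling). This is a generic sketch, not SCoPE itself, which additionally implements delayed rejection and pre-fetching; the toy posterior and all settings are assumptions.

```python
import numpy as np

def adaptive_mh(logpost, x0, n_steps=5000, adapt_start=500, seed=0):
    """Random-walk Metropolis with a running proposal-covariance update."""
    rng = np.random.default_rng(seed)
    dim = len(x0)
    cov = np.eye(dim) * 0.01                      # initial proposal covariance
    chain = [np.asarray(x0, float)]
    lp = logpost(chain[0])
    for i in range(n_steps):
        prop = rng.multivariate_normal(chain[-1], cov)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1].copy())
        if i >= adapt_start:
            # re-estimate the proposal covariance from the chain so far,
            # with the standard 2.38^2/dim scaling and a small jitter term
            cov = np.cov(np.array(chain).T) * 2.38 ** 2 / dim + 1e-8 * np.eye(dim)
    return np.array(chain)

# Toy Gaussian posterior centred at (0.3, 0.7), standing in for a CMB likelihood.
logpost = lambda p: -0.5 * np.sum((p - np.array([0.3, 0.7])) ** 2 / 0.05 ** 2)
chain = adaptive_mh(logpost, x0=[0.5, 0.5])
```

The covariance re-estimation plays the role of the automatic update described above; SCoPE's inter-chain version shares this information across parallel chains.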
Impact of relativistic effects on cosmological parameter estimation
Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.
2018-01-01
Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s (z ), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s (z ) to the ˜5 %- 10 % level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.
Learn-as-you-go acceleration of cosmological parameter estimates
International Nuclear Information System (INIS)
Aslanyan, Grigor; Easther, Richard; Price, Layne C.
2015-01-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning-based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm, implemented in the Cosmo++ code, that (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how it can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set, and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We obtain a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update it on the fly.
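The learn-as-you-go idea — emulate when the estimate is trustworthy, fall back to the exact calculation otherwise, and grow the training set from the fallbacks — can be sketched with a deliberately crude nearest-neighbour emulator. This is an illustration of the control flow only, not the Cosmo++ implementation (which models the emulation error explicitly rather than using a distance cut); the `exact_loglike` function and all thresholds are assumptions.

```python
import numpy as np

def exact_loglike(p):
    """Stand-in for an expensive likelihood call (e.g. a Boltzmann-code run)."""
    return -0.5 * np.sum((np.asarray(p) - 0.3) ** 2) / 0.02 ** 2

class LearnAsYouGo:
    """Caches exact evaluations; answers from the nearest cached point when it
    is close enough, otherwise computes exactly and grows the training set."""
    def __init__(self, exact, tol=0.02):
        self.exact, self.tol = exact, tol
        self.X, self.y = [], []
        self.n_exact = 0            # counts how often the slow path was taken
    def __call__(self, p):
        p = np.asarray(p, float)
        if self.X:
            d = np.linalg.norm(np.array(self.X) - p, axis=1)
            i = np.argmin(d)
            if d[i] < self.tol:     # trust the emulated (cached) value
                return self.y[i]
        val = self.exact(p)         # unreliable estimate: fall back to exact
        self.X.append(p)
        self.y.append(val)
        self.n_exact += 1
        return val

# Evaluate a cluster of nearby points; most calls should hit the cache.
emu = LearnAsYouGo(exact_loglike, tol=0.02)
pts = 0.3 + 0.005 * np.random.default_rng(2).standard_normal((200, 2))
vals = [emu(p) for p in pts]
```

In the paper's scheme the distance cut is replaced by a learned error model whose uncertainty is propagated into the posterior; the sketch only shows the accelerate-or-fall-back structure.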
Cosmological Parameter Estimation with Large Scale Structure Observations
Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien
2014-01-01
We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution, and we study the monopole, $C_0(z_1,z_2)$.
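Fisher matrix forecasting of the kind used above can be sketched generically: for a Gaussian likelihood with independent noise $\sigma$ per data point, $F_{ij} = \sum \partial\mu/\partial\theta_i\,\partial\mu/\partial\theta_j/\sigma^2$, and the marginalized forecast errors are the square roots of the diagonal of $F^{-1}$. The toy two-parameter power-law observable below is an assumption standing in for the paper's $C_\ell(z_1,z_2)$.

```python
import numpy as np

def fisher(model, theta0, sigma, eps=1e-6):
    """Fisher matrix for a Gaussian likelihood with observable model(theta)
    and per-point noise sigma, using central finite differences."""
    theta0 = np.asarray(theta0, float)
    n = len(theta0)
    derivs = []
    for i in range(n):
        dp = np.zeros(n)
        dp[i] = eps
        derivs.append((model(theta0 + dp) - model(theta0 - dp)) / (2 * eps))
    return np.array([[np.sum(di * dj / sigma ** 2) for dj in derivs]
                     for di in derivs])

# Toy observable: a two-parameter "spectrum" A * k**n over a band of scales.
k = np.linspace(0.02, 0.2, 50)
model = lambda th: th[0] * k ** th[1]
F = fisher(model, theta0=[1.0, -1.0], sigma=0.05)
errs = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma forecasts
```

Inverting the full matrix (rather than taking 1/F_ii) is what marginalizes each parameter over the others, which is the quantity survey forecasts quote.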
Energy Technology Data Exchange (ETDEWEB)
Huang, Qing-Guo; Wang, Ke, E-mail: huangqg@itp.ac.cn, E-mail: wangke@itp.ac.cn [CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Zhong Guan Cun East Street 55 #, Beijing 100190 (China)
2017-07-01
Early reionization (ERE) is supposed to be a physical process that happens after recombination but before the instantaneous reionization caused by the first generation of stars. We investigate the effect of the ERE on the temperature and polarization power spectra of the cosmic microwave background (CMB), and adopt principal component analysis (PCA) to reconstruct the ionization history during the ERE in a model-independent way. In addition, we discuss how the ERE affects cosmological parameter estimates, and find that the ERE does not have any significant influence on the tensor-to-scalar ratio r or the neutrino mass at the sensitivities of current experiments. Better CMB polarization data can be used to place a tighter constraint on the ERE and might be important for constraining cosmological parameters more precisely in the future.
Primack, Joel R.
2000-01-01
The cosmological parameters that I emphasize are the age of the universe $t_0$, the Hubble parameter $H_0 \equiv 100 h$ km s$^{-1}$ Mpc$^{-1}$, the average matter density $\Omega_m$, the baryonic matter density $\Omega_b$, the neutrino density $\Omega_\
International Nuclear Information System (INIS)
Tegmark, Max; Zaldarriaga, Matias
2002-01-01
We present a method for measuring the cosmic matter budget without assumptions about speculative early-Universe physics, and for measuring the primordial power spectrum P*(k) non-parametrically, either by combining CMB and LSS information or by using CMB polarization. Our method complements currently fashionable 'black box' cosmological parameter analysis, constraining cosmological models in a more physically intuitive fashion by mapping measurements of CMB, weak lensing and cluster abundance into k space, where they can be directly compared with each other and with galaxy and Lyα forest clustering. Including the new CBI results, we find that CMB measurements of P(k) overlap with those from 2dF galaxy clustering by over an order of magnitude in scale, and even overlap with weak lensing measurements. We describe how our approach can be used to raise the ambition level beyond cosmological parameter fitting as data improve, testing rather than assuming the underlying physics.
Energy Technology Data Exchange (ETDEWEB)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward-model simulation of the data which can include systematics. We introduce astroABC, an open-source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high-dimensional, possibly correlated parameter spaces. With this in mind, astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward-model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimates using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files backed up every iteration; user-defined metric and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
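The likelihood-free idea behind ABC can be shown with the simplest variant, rejection ABC: draw parameters from the prior, forward-simulate data, and keep draws whose simulation lies within a tolerance of the observation. This is a generic sketch of the principle, not astroABC's SMC sampler (which adapts the tolerance over iterations); the toy mean-estimation problem and all settings are assumptions.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_accept, seed=0):
    """Accept prior draws whose simulated data lie within eps of the observation."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)                # likelihood never evaluated
    return np.array(accepted)

# Toy setup: estimate a mean from noisy data without writing down a likelihood.
rng0 = np.random.default_rng(1)
obs = 0.3 + 0.05 * rng0.standard_normal(100)      # "observed" data, true mean 0.3

posterior = abc_rejection(
    observed=obs,
    simulate=lambda th, rng: th + 0.05 * rng.standard_normal(100),
    prior_draw=lambda rng: rng.uniform(0.0, 1.0),
    distance=lambda sim, data: abs(sim.mean() - data.mean()),  # summary statistic
    eps=0.01,
    n_accept=200)
```

Because the forward model can include systematics, the accepted sample approximates the posterior under whatever messiness the simulator encodes; SMC variants like astroABC make the rejection step efficient by shrinking `eps` adaptively.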
Cosmological Parameter Estimation Using the Genus Amplitude—Application to Mock Galaxy Catalogs
Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan
2018-01-01
We study the topology of the matter density field in two-dimensional slices and consider how we can use the amplitude A of the genus for cosmological parameter estimation. Using the latest Horizon Run 4 simulation data, we calculate the genus of the smoothed density field constructed from light-cone mock galaxy catalogs. Information can be extracted from the amplitude of the genus by considering both its redshift evolution and its magnitude. The constancy of the genus amplitude with redshift can be used as a standard population, from which we derive constraints on the equation of state of dark energy w_de: by measuring A at z ∼ 0.1 and z ∼ 1, we can place an order Δw_de ∼ O(15%) constraint on w_de. By comparing A to its Gaussian expectation value, we can potentially derive an additional stringent constraint on the matter density, ΔΩ_mat ∼ 0.01. We discuss the primary sources of contamination associated with the two measurements: redshift-space distortion (RSD) and shot noise. With accurate knowledge of galaxy bias, we can successfully remove the effect of RSD, and the combined effect of shot noise and nonlinear gravitational evolution is suppressed by smoothing over suitably large scales R_G ≥ 15 Mpc/h. Without knowledge of the bias, we discuss how joint measurements of the two- and three-dimensional genus can be used to constrain the growth factor β = f/b. The method can be applied optimally to redshift slices of a galaxy distribution generated using the drop-off technique.
Cai, Rong-Gen; Yang, Tao
2017-02-01
We investigate the ability of gravitational waves (GWs), used as standard sirens, to constrain cosmological parameters with the third-generation gravitational wave detector, the Einstein Telescope. The binary merger of a neutron star with either a neutron star or a black hole is hypothesized to be the progenitor of a short and intense burst of γ rays; some fraction of those binary mergers could be detected both through electromagnetic radiation and gravitational waves, so we can determine the luminosity distance and the redshift of the source separately. We simulate luminosity distance and redshift measurements from 100 to 1000 GW events. We use two different algorithms to constrain the cosmological parameters. For the Hubble constant H0 and the dark matter density parameter Ωm, we adopt the Markov chain Monte Carlo approach. We find that with about 500-600 GW events we can constrain the Hubble constant with an accuracy comparable to the combined Planck temperature and Planck lensing results, while for the dark matter density, GWs alone seem unable to provide constraints as good as those for the Hubble constant; the sensitivity of 1000 GW events is a little lower than that of Planck data, and more than 1000 events would be required to match the Planck sensitivity. For analyzing the more complex dynamical properties of dark energy, i.e., the equation of state w, we adopt a powerful nonparametric method, the Gaussian process, with which we can reconstruct w directly from the observed luminosity distance at every redshift. In the low-redshift region, we find that about 700 GW events can constrain w(z) at a level comparable to the constraint on a constant w from Planck data with type-Ia supernovae. These results show that GWs used as standard sirens to probe cosmological parameters can provide an independent and complementary alternative to current experiments.
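The standard-siren logic above — simulate events with luminosity distances and errors, then constrain H0 by comparing to model distances — can be sketched with a simple chi-square grid in place of the paper's MCMC. Everything here is a toy assumption (300 events, flat 5% distance errors, fixed Ωm); it illustrates only the structure of the inference.

```python
import numpy as np

C = 299792.458  # speed of light in km/s

def lum_dist(z, H0, Om, n=256):
    """Luminosity distance in flat LCDM via the trapezoid rule on 1/E(z)."""
    zz = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(Om * (1 + zz) ** 3 + 1 - Om)
    integral = np.sum((f[1:] + f[:-1]) * (zz[1] - zz[0]) / 2)
    return (1 + z) * C / H0 * integral

# Simulate standard-siren events: redshifts, distances and 5% distance errors.
rng = np.random.default_rng(0)
H0_true, Om_true = 70.0, 0.3
zs = rng.uniform(0.1, 2.0, 300)
dl_fid = np.array([lum_dist(z, H0_true, Om_true) for z in zs])
obs = dl_fid * (1 + 0.05 * rng.standard_normal(300))
sig = 0.05 * dl_fid

# Grid chi-square in H0 at fixed Om: d_L scales as 1/H0, so reuse dl_fid.
H0_grid = np.linspace(60.0, 80.0, 201)
chi2 = np.array([np.sum((obs - dl_fid * H0_true / H0) ** 2 / sig ** 2)
                 for H0 in H0_grid])
H0_best = H0_grid[np.argmin(chi2)]
```

With a few hundred events the fractional H0 error scales roughly as 0.05/√N, which is the qualitative reason the paper finds 500-600 events competitive with Planck.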
On estimating cosmology-dependent covariance matrices
International Nuclear Information System (INIS)
Morrison, Christopher B.; Schneider, Michael D.
2013-01-01
We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a number of simulations comparable to what would be needed to estimate the covariance for a fixed cosmology. Our framework is a necessary first step in planning a simulation campaign for analyzing the next generation of cosmological surveys.
Cosmological Constraints on Mirror Matter Parameters
International Nuclear Information System (INIS)
Wallemacq, Quentin; Ciarcelluti, Paolo
2014-01-01
Up-to-date estimates of the cosmological parameters are presented, resulting from numerical simulations of the cosmic microwave background and large-scale structure, for a flat Universe in which the dark matter is made entirely or partly of mirror matter and the primordial perturbations are scalar, adiabatic and in the linear regime. A statistical analysis using the Markov Chain Monte Carlo method allows us to obtain constraints on the cosmological parameters. As a result, we show that a Universe with pure mirror dark matter is statistically equivalent to the case of an admixture with cold dark matter. The upper limit on the ratio of the temperatures of the ordinary and mirror sectors is around 0.3 for both cosmological models, which show the presence of a dominant fraction of mirror matter, 0.06 ≲ Ω_mirror h² ≲ 0.12.
International Nuclear Information System (INIS)
Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y Y
2008-01-01
We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model, also because of its physical transparency.
Higgs field and cosmological parameters in the fractal quantum system
Directory of Open Access Journals (Sweden)
Abramov Valeriy
2017-01-01
For the fractal model of the Universe, relations between the cosmological parameters and the Higgs field are established. Estimates were performed of the critical density, the expansion and acceleration parameters of the Universe (the Hubble constant and the cosmological redshift), and the temperature and anisotropy of the cosmic microwave background radiation.
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
Rocha, G.; Pagano, L.; Górski, K. M.; Huffenberger, K. M.; Lawrence, C. R.; Lange, A. E.
2010-04-01
We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to cosmological parameters determined from those measurements. The method, called Markov chain beam randomization (MCBR), randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic “nuisance” parameters, and is not restricted to simple, idealized cases as analytic marginalization is. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite and consider future experiments. Beam measurement errors should have a small effect on cosmological parameters as long as the beam fitting is performed after removal of 1/f noise.
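The randomization idea can be sketched as a Metropolis sampler that redraws one of the supplied systematic templates at every step and evaluates both current and proposed points under that template, so the template uncertainty is folded into the chain by sampling rather than by an explicit nuisance-parameter integral. This is a loose illustration of the principle, not the MCBR implementation; the toy likelihood, shift-style templates and all settings are assumptions.

```python
import numpy as np

def randomized_mh(loglike_given_tmpl, templates, propose, x0, n_steps=4000, seed=0):
    """Metropolis sampler that draws a random systematic template each step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    chain = [x]
    for _ in range(n_steps):
        tmpl = templates[rng.integers(len(templates))]  # random template draw
        xp = propose(x, rng)
        # evaluate current and proposed points under the same template
        lp_cur = loglike_given_tmpl(x, tmpl)
        lp_prop = loglike_given_tmpl(xp, tmpl)
        if np.log(rng.random()) < lp_prop - lp_cur:
            x = xp
        chain.append(x)
    return np.array(chain)

# Toy problem: each "beam template" shifts the inferred parameter slightly.
templates = np.linspace(-0.02, 0.02, 5)
loglike = lambda x, b: -0.5 * (x[0] - 0.3 - b) ** 2 / 0.05 ** 2
chain = randomized_mh(loglike, templates,
                      propose=lambda x, rng: x + 0.05 * rng.standard_normal(1),
                      x0=[0.5])
```

Averaging over random template draws broadens the chain around the same central value, which is the marginalization effect the method is after.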
Chandra Cluster Cosmology Project III: Cosmological Parameter Constraints
DEFF Research Database (Denmark)
Vikhlinin, A.; Kravtsov, A. V.; Burenin, R. A.
2009-01-01
…function evolution to be used as a useful growth-of-structure based dark energy probe. In this paper, we present cosmological parameter constraints obtained from Chandra observations of 37 clusters with ⟨z⟩ = 0.55 derived from the 400 deg² ROSAT serendipitous survey and 49 brightest z ≈ 0.05 clusters…
Constraints on cosmological parameters in power-law cosmology
International Nuclear Information System (INIS)
Rani, Sarita; Singh, J.K.; Altaibayeva, A.; Myrzakulov, R.; Shahalam, M.
2015-01-01
In this paper, we examine observational constraints on power-law cosmology, which depends essentially on two parameters: H0 (the Hubble constant) and q (the deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and the 580 points of the Union2.1 compilation data, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it fits the Union2.1 compilation data well but not the H(z) data. However, the constraints obtained on the average values of H0 and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform a statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
International Nuclear Information System (INIS)
Novikov, I.D.
1979-01-01
Progress made by this Commission over the period 1976-1978 is reviewed. Topics include the Hubble constant, deceleration parameter, large-scale distribution of matter in the universe, radio astronomy and cosmology, space astronomy and cosmology, formation of galaxies, physics near the cosmological singularity, and unconventional cosmological models. (C.F.)
Bias-limited extraction of cosmological parameters
Energy Technology Data Exchange (ETDEWEB)
Shimon, Meir; Itzhaki, Nissan; Rephaeli, Yoel, E-mail: meirs@wise.tau.ac.il, E-mail: nitzhaki@post.tau.ac.il, E-mail: yoelr@wise.tau.ac.il [School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel)
2013-03-01
It is known that modeling uncertainties and astrophysical foregrounds can potentially introduce appreciable bias in the deduced values of cosmological parameters. While it is commonly assumed that these uncertainties will be accounted for to a sufficient level of precision, the level of bias has not been properly quantified in most cases of interest. We show that the requirement that the bias in derived values of cosmological parameters does not surpass the nominal statistical error translates into a maximal level of overall error O(N^{-1/2}) on |ΔP(k)|/P(k) and |ΔC_ℓ|/C_ℓ, where P(k), C_ℓ, and N are the matter power spectrum, angular power spectrum, and number of (independent Fourier) modes at a given scale ℓ or k probed by the cosmological survey, respectively. This required level has important consequences for the precision with which cosmological parameters are hoped to be determined by future surveys: in virtually all ongoing and near-future surveys N typically falls in the range 10^6-10^9, implying that the required overall theoretical modeling and numerical precision is already very high. Future redshifted-21-cm observations, projected to sample ∼10^14 modes, will require knowledge of the matter power spectrum to a fantastic 10^{-7} precision level. We conclude that realizing the expected potential of future cosmological surveys, which aim at detecting 10^6-10^14 modes, sets the formidable challenge of reducing the overall level of uncertainty to 10^{-3}-10^{-7}.
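The abstract's scaling is simple arithmetic worth making explicit: for a survey probing N independent modes, the statistical error on a band power shrinks as N^{-1/2}, so keeping the modeling bias below it requires fractional spectrum accuracy of the same order. A two-line check of the quoted numbers:

```python
# Required fractional modeling accuracy ~ N**-0.5 for the mode counts quoted
# in the abstract: N = 1e6 -> 1e-3, N = 1e9 -> ~3e-5, N = 1e14 -> 1e-7.
required = {N: N ** -0.5 for N in (1e6, 1e9, 1e14)}
for N, acc in required.items():
    print(f"N = {N:.0e} modes  ->  required |dP/P| ~ {acc:.1e}")
```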
Cosmological parameters from SDSS and WMAP
International Nuclear Information System (INIS)
Tegmark, Max; Strauss, Michael A.; Bahcall, Neta A.; Schlegel, David; Finkbeiner, Douglas; Gunn, James E.; Ostriker, Jeremiah P.; Seljak, Uros; Ivezic, Zeljko; Knapp, Gillian R.; Lupton, Robert H.; Blanton, Michael R.; Scoccimarro, Roman; Hogg, David W.; Abazajian, Kevork; Xu Yongzhong; Dodelson, Scott; Sandvik, Havard; Wang Xiaomin; Jain, Bhuvnesh
2004-01-01
We measure cosmological parameters using the three-dimensional power spectrum P(k) from over 200 000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with Wilkinson Microwave Anisotropy Probe (WMAP) and other data. Our results are consistent with a 'vanilla' flat adiabatic cold dark matter model with a cosmological constant without tilt (n s = 1), running tilt, tensor modes, or massive neutrinos. Adding SDSS information more than halves the WMAP-only error bars on some parameters, tightening 1σ constraints on the Hubble parameter from h ≅ 0.74 -0.07 +0.18 to h ≅ 0.70 -0.03 +0.04 , on the matter density from Ω m ≅ 0.25±0.10 to Ω m ≅ 0.30±0.04 (1σ), on neutrino masses, and on the age of the universe from t 0 ≅ 16.3 -1.8 +2.3 Gyr to t 0 ≅ 14.1 -0.9 +1.0 Gyr by adding SDSS and SN Ia data. Including tensors, running tilt, neutrino mass and equation of state in the list of free parameters, many constraints are still quite weak, but future cosmological measurements from SDSS and other sources should allow these to be substantially tightened.
Cosmological parameters from large scale structure - geometric versus shape information
Hamann, Jan; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y Y
2010-01-01
The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that, contrary to popular belief, the upper limit on the neutrino mass m_ν
Planck 2013 results. XVI. Cosmological parameters
Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cappellini, B.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.R.; Chen, X.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Haissinski, J.; Hamann, J.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D.J.; 
Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Melin, J.B.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Pearson, T.J.; Peiris, H.V.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Platania, P.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reach, W.T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-10-29
We present the first results based on Planck measurements of the CMB temperature and lensing-potential power spectra. The Planck spectra at high multipoles are extremely well described by the standard spatially-flat six-parameter LCDM cosmology. In this model Planck data determine the cosmological parameters to high precision. We find a low value of the Hubble constant, H0=67.3+/-1.2 km/s/Mpc and a high value of the matter density parameter, Omega_m=0.315+/-0.017 (+/-1 sigma errors) in excellent agreement with constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find that the Universe is consistent with spatial flatness to percent-level precision using Planck CMB data alone. We present results from an analysis of extensions to the standard cosmology, using astrophysical data sets in addition to Planck and high-resolution CMB data. None of these models are favoured significantly over standard LCDM. The deviation of the scalar spectral index from unity is insensitive to the additi...
Constraining cosmological parameter with SN Ia
International Nuclear Information System (INIS)
Putri, A N Indra; Wulandari, H R Tri
2016-01-01
A Type Ia supernova (SN Ia) is an exploding white dwarf whose mass exceeds the Chandrasekhar limit (1.44 solar masses). If a white dwarf is in a binary system, it may accrete matter from its companion, resulting in an excess mass that cannot be balanced by the pressure of degenerate electrons in the core. SNe Ia are highly luminous objects, visible from very large distances. After some corrections (stretch (s), colour (c), K-corrections, etc.), the variations in the light curves of SNe Ia can be suppressed to no more than 10%. Their high luminosity and almost uniform intrinsic brightness at peak light, i.e. M B ∼ -19, make SNe Ia ideal standard candles. Because of their visibility from large distances, SNe Ia can be employed as a cosmological measuring tool. It was the analysis of SNe Ia data that indicated for the first time that the universe is not only expanding, but also accelerating. This work analyzed a compilation of SNe Ia data to determine several cosmological parameters (H 0 , Ω m , Ω Λ , and w). It can be concluded from the analysis that our universe is a flat, dark energy dominated universe, and that the cosmological constant Λ is a suitable candidate for dark energy. (paper)
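The measuring-tool idea can be sketched numerically: in a flat ΛCDM universe the luminosity distance follows from an integral over the expansion history, and the distance modulus μ = m − M relates it to the standard-candle brightness. A minimal illustration (the H0 and Ωm values are assumed fiducial inputs, not the fit results of this paper):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def luminosity_distance(z, h0=70.0, omega_m=0.3, steps=1000):
    """Luminosity distance [Mpc] in a flat LCDM universe:
    d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z'),
    with E(z) = sqrt(Om (1+z)^3 + OL)."""
    omega_l = 1.0 - omega_m
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz  # midpoint rule
        integral += dz / math.sqrt(omega_m * (1.0 + zi) ** 3 + omega_l)
    return (1.0 + z) * (C_KM_S / h0) * integral

def distance_modulus(z, **kw):
    """mu = m - M = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * math.log10(luminosity_distance(z, **kw)) + 25.0

print(distance_modulus(0.5))  # predicted mu for an SN Ia at z = 0.5
```

Comparing such predicted μ(z) curves for different (Ωm, ΩΛ, w) against observed SN Ia magnitudes is exactly the kind of fit the abstract describes.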
Planck 2015 results. XIII. Cosmological parameters
Ade, P.A.R.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.R.; Chiang, H.C.; Chluba, J.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Desert, F.X.; Di Valentino, E.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Ducout, A.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Farhang, M.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Gerbino, M.; Ghosh, T.; Giard, M.; Giraud-Heraud, Y.; Giusarma, E.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hamann, J.; Hansen, F.K.; Hanson, D.; Harrison, D.L.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K.M.; Hurier, G.; Jaffe, A.H.; Jaffe, T.R.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Lawrence, C.R.; Leahy, J.P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maggio, G.; Mandolesi, N.; Mangilli, A.; Marchini, A.; Martin, P.G.; 
Martinelli, M.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; McGehee, P.; Meinhold, P.R.; Melchiorri, A.; Melin, J.B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J.A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T.J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reach, W.T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; d'Orfeuil, B.Rouille; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Said, N.; Salvatelli, V.; Salvati, L.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M.D.; Serra, P.; Shellard, E.P.S.; Spencer, L.D.; Spinelli, M.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Turler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-01-01
We present results based on full-mission Planck observations of temperature and polarization anisotropies of the CMB. These data are consistent with the six-parameter inflationary LCDM cosmology. From the Planck temperature and lensing data, for this cosmology we find a Hubble constant, H0 = (67.8 +/- 0.9) km/s/Mpc, a matter density parameter Omega_m = 0.308 +/- 0.012, and a scalar spectral index n_s = 0.968 +/- 0.006. (We quote 68% errors on measured parameters and 95% limits on other parameters.) Combined with Planck temperature and lensing data, Planck LFI polarization measurements lead to a reionization optical depth of tau = 0.066 +/- 0.016. Combining Planck with other astrophysical data we find N_eff = 3.15 +/- 0.23 for the effective number of relativistic degrees of freedom, and the sum of neutrino masses is constrained to ∑mν < 0.23 eV. Spatial curvature is found to be |Omega_K| < 0.005. For LCDM we find a limit on the tensor-to-scalar ratio of r < 0.11, consistent with the B-mode constraints fr...
The Atacama Cosmology Telescope: Cosmological Parameters from the 2008 Power Spectrum
Dunkley, J.; Hlozek, R.; Sievers, J.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.;
2011-01-01
We present cosmological parameters derived from the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz and 218 GHz over 296 deg(exp 2) with the Atacama Cosmology Telescope (ACT) during its 2008 season. ACT measures fluctuations at scales l > 500. We estimate cosmological parameters from the less contaminated 148 GHz spectrum, marginalizing over SZ and source power. The ΛCDM cosmological model is a good fit to the data (chi square/dof = 29/46), and ΛCDM parameters estimated from ACT+Wilkinson Microwave Anisotropy Probe (WMAP) are consistent with the seven-year WMAP limits, with a scale invariant n(sub s) = 1 excluded at 99.7% confidence level (CL) (3 sigma). A model with no CMB lensing is disfavored at 2.8 sigma. By measuring the third to seventh acoustic peaks, and probing the Silk damping regime, the ACT data improve limits on cosmological parameters that affect the small-scale CMB power. The ACT data combined with WMAP give a 6 sigma detection of primordial helium, with Y(sub p) = 0.313 +/- 0.044, and a 4 sigma detection of relativistic species, assumed to be neutrinos, with N(sub eff) = 5.3 +/- 1.3 (4.6 +/- 0.8 with BAO+H(sub 0) data). From the CMB alone the running of the spectral index is constrained to dn(sub s)/d ln k = -0.034 +/- 0.018, the limit on the tensor-to-scalar ratio is r < 0.25 (95% CL), and the possible contribution of Nambu cosmic strings to the power spectrum is constrained to a string tension G(sub mu) < 1.6 x 10(exp -7) (95% CL).
CHANDRA CLUSTER COSMOLOGY PROJECT III: COSMOLOGICAL PARAMETER CONSTRAINTS
International Nuclear Information System (INIS)
Vikhlinin, A.; Forman, W. R.; Jones, C.; Murray, S. S.; Kravtsov, A. V.; Burenin, R. A.; Voevodkin, A.; Ebeling, H.; Hornstrup, A.; Nagai, D.; Quintana, H.
2009-01-01
Chandra observations of large samples of galaxy clusters detected in X-rays by ROSAT provide a new, robust determination of the cluster mass functions at low and high redshifts. Statistical and systematic errors are now sufficiently small, and the redshift leverage sufficiently large, for the mass function evolution to be used as a useful growth-of-structure-based dark energy probe. In this paper, we present cosmological parameter constraints obtained from Chandra observations of 37 clusters with ⟨z⟩ = 0.55 derived from the 400 deg 2 ROSAT serendipitous survey and 49 brightest z ∼ 0.05 clusters detected in the All-Sky Survey. Evolution of the mass function between these redshifts requires Ω Λ > 0 with a ∼5σ significance, and constrains the dark energy equation-of-state parameter to w 0 = -1.14 ± 0.21, assuming a constant w and a flat universe. Cluster information also significantly improves constraints when combined with other methods. Fitting our cluster data jointly with the latest supernovae, Wilkinson Microwave Anisotropy Probe, and baryonic acoustic oscillation measurements, we obtain w 0 = -0.991 ± 0.045 (stat) ± 0.039 (sys), a factor of 1.5 reduction in statistical uncertainties, and nearly a factor of 2 improvement in systematics compared with constraints that can be obtained without clusters. The joint analysis of these four data sets puts a conservative upper limit on the sum of the light neutrino masses, Σm ν . We also present measurements of Ω M h and σ 8 derived from the low-redshift cluster mass function.
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
International Nuclear Information System (INIS)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.
2013-01-01
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N par . One of the main goals of the present paper is to determine how large N par can be, while still maintaining reasonable computational efficiency; we find that N par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme
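The grid-cell-by-grid-cell idea can be sketched as a best-first search: start at the likelihood peak, always expand the unvisited neighbour cell with the highest likelihood, and stop once the frontier drops a fixed number of log-units below the peak. This toy version uses a 2D Gaussian log-likelihood; the names and the threshold are illustrative, not the actual Snake implementation:

```python
import heapq

def log_like(i, j):
    """Toy 2D Gaussian log-likelihood evaluated on an integer grid."""
    return -0.5 * ((i / 10.0) ** 2 + (j / 10.0) ** 2)

def explore(start=(0, 0), drop=10.0):
    """Map the likelihood cell by cell in order of decreasing likelihood,
    stopping once the best frontier cell is `drop` log-units below the peak."""
    peak = log_like(*start)
    frontier = [(-peak, start)]  # max-heap via negated log-likelihoods
    visited = {}
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)
        if cell in visited:
            continue
        if -neg_ll < peak - drop:  # everything left is below threshold: stop
            break
        visited[cell] = -neg_ll
        i, j = cell
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb not in visited:
                heapq.heappush(frontier, (-log_like(*nb), nb))
    return visited

cells = explore()
print(len(cells), "cells mapped instead of a full grid")
```

Only cells inside the high-likelihood region are ever evaluated, which is how this approach sidesteps the curse of dimensionality that a brute-force grid suffers from.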
Exploring cosmic origins with CORE: Cosmological parameters
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed
Cosmological Parameters and Hyper-Parameters: The Hubble Constant from Boomerang and Maxima
Lahav, Ofer
Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalise this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint likelihood function a set of `Hyper-Parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the Hyper-Parameters, is very simple to implement. We illustrate the method by estimating the Hubble constant H0 from different sets of recent CMB experiments (including Saskatoon, Python V, MSAM1, TOCO, Boomerang and Maxima). The approach can be generalised for a combination of cosmic probes, and for other priors on the Hyper-Parameters. Reference: Lahav, Bridle, Hobson, Lasenby & Sodre, 2000, MNRAS, in press (astro-ph/9912105)
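The effect of the hyper-parameters can be made concrete. For Gaussian data, marginalising each probe's weight with a uniform prior on its log replaces the usual joint statistic χ² = Σ_j χ²_j by Σ_j N_j ln χ²_j, where N_j is the number of data points in probe j, so probes in tension with the model are automatically down-weighted. A toy comparison (the two "probes" below are hypothetical datasets, not the CMB compilations used in the paper):

```python
import math

def chi2(data, model, sigma):
    """Standard chi-squared for a constant model value."""
    return sum(((d - model) / s) ** 2 for d, s in zip(data, sigma))

def combined_standard(chi2s):
    """Usual joint statistic: every probe enters with unit weight."""
    return sum(chi2s)

def combined_hyper(chi2s, npoints):
    """Hyper-parameter statistic: -2 ln P = sum_j N_j ln(chi2_j), obtained by
    marginalising the probe weights with a uniform prior on their log."""
    return sum(n * math.log(c) for n, c in zip(npoints, chi2s))

# Probe A fits the model H0 = 70 well; probe B is discrepant.
h0 = 70.0
probe_a = ([69.0, 71.0, 70.5], [1.0, 1.0, 1.0])
probe_b = ([75.0, 76.0, 74.0], [1.0, 1.0, 1.0])
c_a = chi2(probe_a[0], h0, probe_a[1])  # small: good fit
c_b = chi2(probe_b[0], h0, probe_b[1])  # large: tension
print(combined_standard([c_a, c_b]), combined_hyper([c_a, c_b], [3, 3]))
```

Because the discrepant probe enters only through ln χ², it pulls the joint fit far less than in the standard sum, which is the behaviour the abstract describes.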
Cosmological perturbation effects on gravitational-wave luminosity distance estimates
Bertacca, Daniele; Raccanelli, Alvise; Bartolo, Nicola; Matarrese, Sabino
2018-06-01
Waveforms of gravitational waves provide information about a variety of parameters of the merging binary system. However, standard calculations have been performed assuming an FLRW universe with no perturbations. In reality this assumption should be dropped: we show that the inclusion of cosmological perturbations translates into corrections to the estimates of astrophysical parameters derived for merging binary systems. We compute corrections to the estimate of the luminosity distance due to velocity, volume, lensing and gravitational potential effects. Our results show that the amplitude of the corrections will be negligible for current instruments, mildly important for experiments like the planned DECIGO, and very important for future ones such as the Big Bang Observer.
DIRECTIONAL DEPENDENCE OF ΛCDM COSMOLOGICAL PARAMETERS
International Nuclear Information System (INIS)
Axelsson, M.; Fantaye, Y.; Hansen, F. K.; Eriksen, H. K.; Banday, A. J.; Gorski, K. M.
2013-01-01
We study hemispherical power asymmetry in the Wilkinson Microwave Anisotropy Probe 9 yr data. We analyze the combined V- and W-band sky maps, after application of the KQ85 mask, and find that the asymmetry is statistically significant at the 3.4σ confidence level for l = 2-600, where the data are signal-dominated, with a preferred asymmetry direction (l, b) = (227, –27). Individual asymmetry axes estimated from six independent multipole ranges are all consistent with this direction. Subsequently, we estimate cosmological parameters on different parts of the sky and show that the parameters A s , n s , and Ω b are the most sensitive to this power asymmetry. In particular, for the two opposite hemispheres aligned with the preferred asymmetry axis, we find n s = 0.959 ± 0.022 and n s = 0.989 ± 0.024, respectively
Planck 2015 results: XIII. Cosmological parameters
DEFF Research Database (Denmark)
Ade, P. A. R.; Aghanim, N.; Arnaud, M.
2016-01-01
is constrained to w = -1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on extensions of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑mν < 0.23 eV.
Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters
Kim, A. G.
2011-02-01
I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
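The single-fit idea can be sketched with a Gaussian likelihood in which the intrinsic dispersion σ_int enters both the χ² term and the normalisation, so the data constrain it alongside the model parameters. A minimal sketch with a constant-magnitude "model" and synthetic data (this is an illustration of the principle, not the fitter of the paper; a real analysis would use an optimiser rather than a grid):

```python
import math
import random

def neg_log_like(params, mags, sigmas):
    """-2 ln L for magnitudes with measurement errors `sigmas` plus an
    intrinsic dispersion fitted simultaneously with the mean magnitude."""
    mean, sigma_int = params
    total = 0.0
    for m, s in zip(mags, sigmas):
        var = s * s + sigma_int * sigma_int
        total += (m - mean) ** 2 / var + math.log(var)  # log term constrains sigma_int
    return total

def grid_fit(mags, sigmas):
    """Crude grid minimisation over (mean magnitude, sigma_int)."""
    best = None
    for i in range(-100, 101):
        for j in range(1, 101):
            p = (-19.0 + 0.01 * i, 0.005 * j)
            v = neg_log_like(p, mags, sigmas)
            if best is None or v < best[0]:
                best = (v, p)
    return best[1]

# Synthetic standard candles: M = -19, sigma_int = 0.12, 0.05 mag errors.
random.seed(1)
true_mean, true_sint = -19.0, 0.12
mags = [random.gauss(true_mean, math.hypot(true_sint, 0.05)) for _ in range(300)]
sigmas = [0.05] * 300
mean_hat, sint_hat = grid_fit(mags, sigmas)
print(mean_hat, sint_hat)
```

Without the log(var) normalisation term the fit would drive σ_int to arbitrarily large values; including it is what makes the simultaneous, unbiased estimate of the dispersion possible.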
Stochastic evolution of cosmological parameters in the early universe
Indian Academy of Sciences (India)
We develop a stochastic formulation of cosmology in the early universe, after considering the scatter in the redshift-apparent magnitude diagram in the early epochs as observational evidence for the non-deterministic evolution of the early universe. We consider the stochastic evolution of the density parameter in the early ...
Data-constrained reionization and its effects on cosmological parameters
International Nuclear Information System (INIS)
Pandolfi, S.; Ferrara, A.; Choudhury, T. Roy; Mitra, S.; Melchiorri, A.
2011-01-01
We perform an analysis of the recent WMAP7 data considering physically motivated and viable reionization scenarios with the aim of assessing their effects on cosmological parameter determinations. The main novelties are: (i) the combination of cosmic microwave background data with astrophysical results from quasar absorption line experiments; (ii) the joint variation of both the cosmological and the astrophysical parameters [the latter governing the evolution of the free electron fraction x e (z)]. Including a realistic, data-constrained reionization history in the analysis induces appreciable changes in the cosmological parameter values deduced through a standard WMAP7 analysis. Particularly noteworthy are the variations in Ω b h 2 = 0.02258 -0.00056 +0.00057 [WMAP7 (Sudden)] vs Ω b h 2 = 0.02183±0.00054 [WMAP7+ASTRO (CF)] and the new constraints for the scalar spectral index, for which WMAP7+ASTRO (CF) excludes the Harrison-Zel'dovich value n s = 1 at >3σ. Finally, the electron-scattering optical depth value is considerably decreased with respect to the standard WMAP7 analysis, i.e. τ e = 0.080±0.012. We conclude that including astrophysical data sets, which allow us to robustly constrain the reionization history, in the extraction procedure of cosmological parameters leads to relatively important differences in the final determination of their values.
Precision Parameter Estimation and Machine Learning
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up the sequential exploration of parameter space. This strategy efficiently combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to explore a likelihood function, posterior distribution or χ2-surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
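The core idea, precomputing expensive likelihood evaluations in parallel and then letting a cheap learned approximation drive the sequential exploration, can be sketched in one dimension with a least-squares quadratic surrogate. Everything here is illustrative: the real PICo/RICO codes use far more sophisticated interpolators, and the toy χ² function is our own stand-in:

```python
import math

def expensive_chi2(theta):
    """Stand-in for a costly likelihood evaluation (e.g. a Boltzmann-code call)."""
    return (theta - 0.96) ** 2 / 0.001 + 0.1 * math.sin(40.0 * theta) ** 2

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coeffs = []
    for col in range(3):  # Cramer's rule, one column at a time
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = t[r]
        coeffs.append(det3(mc) / d)
    return coeffs  # [a, b, c]

# Step 1: trivially parallel precomputation of the costly function.
train_x = [0.90 + 0.005 * i for i in range(25)]
train_y = [expensive_chi2(x) for x in train_x]  # each call could go to a worker

# Step 2: "learn" a cheap surrogate; centre x for numerical stability.
mid = 0.5 * (train_x[0] + train_x[-1])
a, b, c = fit_quadratic([x - mid for x in train_x], train_y)

# Step 3: sequential exploration now queries only the surrogate.
theta_best = mid - b / (2.0 * c)
print("surrogate minimum near", theta_best)
```

The serial exploration (step 3) never touches the expensive function again, which is how inherently sequential MCMC-style runs can profit from massively parallel precomputation.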
Planck 2013 results. XVI. Cosmological parameters
DEFF Research Database (Denmark)
Planck Collaboration,; Ade, P. A. R.; Aghanim, N.
2013-01-01
parameters to high precision. We find a low value of the Hubble constant, H0 = 67.3±1.2 km/s/Mpc, and a high value of the matter density parameter, Omega_m = 0.315±0.017 (±1σ errors), in excellent agreement with constraints from baryon acoustic oscillation (BAO) surveys. Including curvature, we find ... over standard LCDM. The deviation of the scalar spectral index from unity is insensitive to the addition of tensor modes and to changes in the matter content of the Universe. We find a 95% upper limit of r...
Optomechanical parameter estimation
International Nuclear Information System (INIS)
Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)
Ranking as parameter estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Guy, Tatiana Valentine
2009-01-01
Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf
Liguori, M
2008-01-01
We study the impact of cosmological parameter uncertainties on estimates of the primordial non-Gaussianity parameter f_NL in local and equilateral models of non-Gaussianity. We show that propagating these errors increases the relative uncertainty on f_NL by 16% for WMAP and 5% for Planck in the local case, whereas for equilateral configurations the corrections are 14% and 4%, respectively. If we assume for local f_NL a central value of order 60, in line with recent WMAP 5-year estimates, we obtain for Planck a final correction Δf_NL = 3. Although not dramatic, this correction is at the level of the expected estimator uncertainty for Planck, and should therefore be taken into account when quoting the significance of a possible future detection. In current estimates of f_NL the cosmological parameters are held fixed at their best-fit values. We finally note that the impact of uncertainties in the cosmological parameters on the final f_NL error bar would become totally negligible if the parameters were allowed to vary...
Improved constraints on cosmological parameters from SNIa data
International Nuclear Information System (INIS)
March, M.C.; Trotta, R.
2011-02-01
We present a new method based on a Bayesian hierarchical model to extract constraints on cosmological parameters from SNIa data obtained with the SALT-II lightcurve fitter. We demonstrate with simulated data sets that our method delivers considerably tighter statistical constraints on the cosmological parameters and that it outperforms the usual χ² approach 2/3 of the time. As a further benefit, a full posterior probability distribution for the dispersion of the intrinsic magnitude of SNe is obtained. We apply this method to recent SNIa data and find that it improves statistical constraints on cosmological parameters from SNIa data alone by about 40% with respect to the standard approach. From the combination of SNIa, CMB and BAO data we obtain Ω_m = 0.29±0.01, Ω_Λ = 0.72±0.01 (assuming w = -1) and Ω_m = 0.28±0.01, w = -0.90±0.04 (assuming flatness; statistical uncertainties only). We constrain the intrinsic dispersion of the B-band magnitude of the SNIa population, obtaining σ_μ^int = 0.13±0.01 mag. Applications to systematic uncertainties will be discussed in a forthcoming paper. (orig.)
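For orientation, the "usual χ² approach" that the hierarchical method is benchmarked against can be sketched on simulated data. The toy distance-modulus model, sample size, and scatter below are invented for illustration and are not taken from the paper:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def mu_model(z, om, h0=70.0):
    """Distance modulus mu(z) in a flat LambdaCDM universe (toy accuracy)."""
    zz = np.linspace(0.0, z, 257)
    f = 1.0 / np.sqrt(om * (1 + zz) ** 3 + (1 - om))
    dz = zz[1] - zz[0]
    # Trapezoidal comoving distance [Mpc], then luminosity distance -> mag.
    dc = C_KM_S / h0 * dz * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())
    return 5 * np.log10((1 + z) * dc) + 25

# Simulate a toy SN Ia sample with true Omega_m = 0.3 and 0.15 mag scatter.
rng = np.random.default_rng(1)
z_obs = rng.uniform(0.02, 1.0, 300)
mu_obs = np.array([mu_model(z, 0.3) for z in z_obs]) + 0.15 * rng.normal(size=300)

# The standard chi^2 fit: grid over Omega_m with H0 held fixed.
om_grid = np.linspace(0.1, 0.5, 81)
chi2 = np.array([np.sum(((mu_obs - [mu_model(z, om) for z in z_obs]) / 0.15) ** 2)
                 for om in om_grid])
om_best = om_grid[int(np.argmin(chi2))]
```

The hierarchical approach of the paper goes beyond this by also treating the intrinsic dispersion and lightcurve parameters probabilistically rather than fixing them.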
Improved constraints on cosmological parameters from SNIa data
Energy Technology Data Exchange (ETDEWEB)
March, M.C.; Trotta, R. [Imperial College, London (United Kingdom). Astrophysics Group; Berkes, P. [Brandeis Univ., Waltham (United States). Volen Centre for Complex Systems; Starkman, G.D. [Case Western Reserve Univ., Cleveland (United States). CERCA and Dept. of Physics; Vaudrevange, P.M. [Case Western Reserve Univ., Cleveland (United States). CERCA and Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2011-02-15
We present a new method based on a Bayesian hierarchical model to extract constraints on cosmological parameters from SNIa data obtained with the SALT-II lightcurve fitter. We demonstrate with simulated data sets that our method delivers considerably tighter statistical constraints on the cosmological parameters and that it outperforms the usual χ² approach 2/3 of the time. As a further benefit, a full posterior probability distribution for the dispersion of the intrinsic magnitude of SNe is obtained. We apply this method to recent SNIa data and find that it improves statistical constraints on cosmological parameters from SNIa data alone by about 40% with respect to the standard approach. From the combination of SNIa, CMB and BAO data we obtain Ω_m = 0.29±0.01, Ω_Λ = 0.72±0.01 (assuming w = -1) and Ω_m = 0.28±0.01, w = -0.90±0.04 (assuming flatness; statistical uncertainties only). We constrain the intrinsic dispersion of the B-band magnitude of the SNIa population, obtaining σ_μ^int = 0.13±0.01 mag. Applications to systematic uncertainties will be discussed in a forthcoming paper. (orig.)
Testing general relativity at cosmological scales: Implementation and parameter correlations
International Nuclear Information System (INIS)
Dossett, Jason N.; Ishak, Mustapha; Moldenhauer, Jacob
2011-01-01
The testing of general relativity at cosmological scales has become a possible and timely endeavor, motivated not only by the pressing question of cosmic acceleration but also by proposed extensions to general relativity that would manifest themselves at large distance scales. We analyze here correlations between modified-gravity growth parameters and some core cosmological parameters using the latest cosmological data sets, including the refined Cosmic Evolution Survey 3D weak lensing. We provide the parametrized modified growth equations and their evolution. We implement known functional and binning approaches, and propose a new hybrid approach to evolve the modified gravity parameters in redshift (time) and scale. The hybrid parametrization combines a binned redshift dependence with a smooth evolution in scale, avoiding a jump in the matter power spectrum. The formalism developed to test the consistency of current and future data with general relativity is implemented in a package that we make publicly available and call ISiTGR (Integrated Software in Testing General Relativity), an integrated set of modified modules for the publicly available packages CosmoMC and CAMB, including a modified version of the integrated Sachs-Wolfe-galaxy cross-correlation module of Ho et al. and a new weak-lensing likelihood module for the refined Hubble Space Telescope Cosmic Evolution Survey weak gravitational lensing tomography data. We obtain parameter constraints and correlation coefficients, finding that modified gravity parameters are significantly correlated with σ_8 and mildly correlated with Ω_m for all evolution methods. The degeneracies between σ_8 and the modified gravity parameters are found to be substantial for the functional form and also for some specific bins in the hybrid and binned methods, indicating that these degeneracies will need to be taken into consideration when using future high-precision data.
Reionization history and CMB parameter estimation
International Nuclear Information System (INIS)
Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y.
2013-01-01
We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from a high-resolution numerical simulation. We perform parameter estimation using a simple sudden-reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden-reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.
Reionization history and CMB parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Dizgah, Azadeh Moradinezhad; Gnedin, Nickolay Y.; Kinney, William H.
2013-05-01
We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.
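The PCA step discussed in these two records can be illustrated with a toy ensemble of ionization histories x_e(z). The ensemble, grid, and perturbation model below are invented for illustration; the real analysis uses the Mortonson-Hu eigenmodes, not these:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble of ionization histories x_e(z) on a redshift grid:
# a sudden-reionization template plus smooth random perturbations.
z = np.linspace(6.0, 30.0, 60)
base = 0.5 * (1.0 - np.tanh(z - 10.0))  # steps from ~1 to ~0 near z = 10
walks = rng.normal(size=(500, 60)).cumsum(axis=1) / np.sqrt(60)
histories = base + 0.05 * walks

# PCA: diagonalize the covariance of the histories about their mean.
mean = histories.mean(axis=0)
cov = np.cov(histories - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # sort modes by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of the ensemble variance captured by the first 5 components.
frac = eigvals[:5].sum() / eigvals.sum()
```

A few leading modes capture most of the variance of smooth histories, which is exactly why a truncated PCA basis can describe x_e(z) compactly, yet (as the abstract notes) it does not guarantee a faithful reconstruction of any particular realistic history.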
The Atacama Cosmology Telescope: cosmological parameters from three seasons of data
International Nuclear Information System (INIS)
Sievers, Jonathan L.; Appel, John William; Hlozek, Renée A.; Nolta, Michael R.; Battaglia, Nick; Bond, J. Richard; Acquaviva, Viviana; Addison, Graeme E.; Amiri, Mandana; Battistelli, Elia S.; Burger, Bryce; Ade, Peter A. R.; Aguirre, Paula; Barrientos, L. Felipe; Brown, Ben; Calabrese, Erminia; Chervenak, Jay; Crichton, Devin; Das, Sudeep; Devlin, Mark J.
2013-01-01
We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ²C_ℓ/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4±1.4 μK² at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK². Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be N_eff = 2.79±0.56, in agreement with the canonical value of N_eff = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σm_ν < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Y_p = 0.225±0.034, and measure no variation in the fine structure constant α since recombination, with α/α₀ = 1.004±0.005. We also find no evidence for any running of the scalar spectral index, dn_s/d ln k = -0.004±0.012.
Measuring Cosmological Parameters with Photometrically Classified Pan-STARRS Supernovae
Jones, David; Scolnic, Daniel; Riess, Adam; Rest, Armin; Kirshner, Robert; Berger, Edo; Kessler, Rick; Pan, Yen-Chen; Foley, Ryan; Chornock, Ryan; Ortega, Carolyn; Challis, Peter; Burgett, William; Chambers, Kenneth; Draper, Peter; Flewelling, Heather; Huber, Mark; Kaiser, Nick; Kudritzki, Rolf; Metcalfe, Nigel; Tonry, John; Wainscoat, Richard J.; Waters, Chris; Gall, E. E. E.; Kotak, Rubina; McCrum, Matt; Smartt, Stephen; Smith, Ken
2018-01-01
We use nearly 1,200 supernovae (SNe) from Pan-STARRS and ~200 low-z (z < 0.1) SNe Ia to measure the dark energy equation of state parameter w to be -0.986±0.058 (stat+sys). If we allow w to evolve with redshift as w(a) = w_0 + w_a(1-a), we find w_0 = -0.923±0.148 and w_a = -0.404±0.797. These results are consistent with measurements of cosmological parameters from the JLA and from a new analysis of 1049 spectroscopically confirmed SNe Ia (Scolnic et al. 2017). We try four different photometric classification priors for Pan-STARRS SNe and two alternate ways of modeling the CC SN contamination, finding that none of these variants gives a w that differs by more than 1% from the baseline measurement. The systematic uncertainty on w due to marginalizing over the CC SN contamination, σ_w^CC = 0.019, is approximately equal to the photometric calibration uncertainty and is lower than the systematic uncertainty in the SN Ia dispersion model (σ_w^disp = 0.024). Our data provide one of the best current constraints on w, demonstrating that samples with ~5% CC SN contamination can give competitive cosmological constraints when the contaminating distribution is marginalized over in a Bayesian framework.
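Marginalizing over core-collapse contamination in a Bayesian framework amounts to a mixture likelihood, in the spirit of (but much simpler than) the analysis above. All numbers, rates, and the one-parameter "offset" below are our own toy illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Hubble-residual sample: 95% SNe Ia centered on 0 with 0.15 mag
# scatter, 5% core-collapse contaminants offset faint by ~1 mag.
n = 2000
is_ia = rng.uniform(size=n) > 0.05
resid = np.where(is_ia, 0.15 * rng.normal(size=n),
                 1.0 + 0.5 * rng.normal(size=n))
# Imperfect photometric-classifier probabilities for each SN.
p_ia = np.clip(np.where(is_ia, 0.9, 0.3) + 0.05 * rng.normal(size=n),
               0.01, 0.99)

def log_post(offset):
    """Mixture likelihood: marginalize the unknown class of each SN."""
    like_ia = np.exp(-0.5 * ((resid - offset) / 0.15) ** 2) / 0.15
    like_cc = np.exp(-0.5 * ((resid - offset - 1.0) / 0.5) ** 2) / 0.5
    return np.sum(np.log(p_ia * like_ia + (1 - p_ia) * like_cc))

grid = np.linspace(-0.1, 0.1, 201)
best = grid[int(np.argmax([log_post(x) for x in grid]))]

naive = resid.mean()  # ignoring contamination biases the estimate faint
```

The mixture posterior recovers an offset near zero while the naive mean is pulled faint by the contaminants, which is the qualitative point of marginalizing over the CC distribution.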
Big Bang Nucleosynthesis and Cosmological Constraints on Neutrino Oscillation Parameters
Kirilova, Daniela P; Kirilova, Daniela; Chizhov, Mihail
2001-01-01
We present a review of cosmological nucleosynthesis (CN) with neutrino oscillations, discussing the different effects of oscillations on CN, namely: the increase of the effective degrees of freedom during CN, spectrum distortion of the oscillating neutrinos, neutrino number density depletion, and growth of neutrino-antineutrino asymmetry due to active-sterile oscillations. We discuss the importance of these effects for the primordial yield of helium-4. The primordially produced He-4 abundance is obtained in a self-consistent study of the nucleons and the oscillating neutrinos. The effects of spectrum distortion, depletion and neutrino-antineutrino asymmetry growth on helium-4 production are explicitly calculated. An update of the cosmological constraints on active-sterile neutrino oscillation parameters is presented, giving the values: δm² sin⁸(2θ) < … eV² for δm² > 0, and |δm²| < 8.2 × 10⁻¹⁰ eV² at large mixing angles for δm² < 0. According to these constraints, besides the active-sterile LMA solution,...
The Atacama Cosmology Telescope: Cosmological Parameters from Three Seasons of Data
Sievers, Jonathan L.; Hlozek, Renée A.; Nolta, Michael R.; Acquaviva, Viviana; Addison, Graeme E.; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe;
2013-01-01
We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ²C_ℓ/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4 ± 1.4 μK² at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK². Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be N_eff = 2.79 ± 0.56, in agreement with the canonical value of N_eff = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σm_ν < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Y_p = 0.225 ± 0.034, and measure no variation in the fine structure constant α since recombination, with α/α₀ = 1.004 ± 0.005. We also find no evidence for any running of the scalar spectral index, dn_s/d ln k = -0.004 ± 0.012.
Constraining cosmological parameters with observational data including weak lensing effects
Energy Technology Data Exchange (ETDEWEB)
Li Hong [Institute of High Energy Physics, Chinese Academy of Science, PO Box 918-4, Beijing 100049 (China); Theoretical Physics Center for Science Facilities (TPCSF), Chinese Academy of Science (China)], E-mail: hongli@mail.ihep.ac.cn; Liu Jie [Institute of High Energy Physics, Chinese Academy of Science, PO Box 918-4, Beijing 100049 (China); Xia Junqing [Scuola Internazionale Superiore di Studi Avanzati, Via Beirut 2-4, I-34014 Trieste (Italy); Sun Lei; Fan Zuhui [Department of Astronomy, School of Physics, Peking University, Beijing 100871 (China); Tao Charling; Tilquin, Andre [Centre de Physique des Particules de Marseille, CNRS/IN2P3-Luminy and Universite de la Mediterranee, Case 907, F-13288 Marseille Cedex 9 (France); Zhang Xinmin [Institute of High Energy Physics, Chinese Academy of Science, PO Box 918-4, Beijing 100049 (China); Theoretical Physics Center for Science Facilities (TPCSF), Chinese Academy of Science (China)
2009-05-11
In this Letter, we study the cosmological implications of the 100 square degree Weak Lensing survey (the CFHTLS-Wide, RCS, VIRMOS-DESCART and GaBoDS surveys). We combine these weak lensing data with the cosmic microwave background (CMB) measurements from WMAP5, BOOMERanG, CBI, VSA, ACBAR, the SDSS LRG matter power spectrum and the Type Ia Supernovae (SNIa) data from the 'Union' compilation (307 samples), using the Markov Chain Monte Carlo method to determine the cosmological parameters, such as the equation of state (EoS) of dark energy w, the density fluctuation amplitude σ_8, the total neutrino mass Σm_ν and the parameters associated with the power spectrum of the primordial fluctuations. Our results show that the ΛCDM model remains a good fit to all of these data. In a flat universe, we obtain a tight limit on the constant EoS of dark energy, w = -0.97±0.041 (1σ). For the dynamical dark energy model with a time-evolving EoS parameterized as w_de(a) = w_0 + w_a(1-a), we find best-fit values w_0 = -1.064 and w_a = 0.375, implying a mild preference for the Quintom model, whose EoS crosses the cosmological constant boundary during evolution. Regarding the total neutrino mass, we obtain the upper limit Σm_ν < 0.471 eV (95% C.L.) within the framework of the flat ΛCDM model. Due to the obvious degeneracies between the neutrino mass and the EoS of dark energy, this upper limit is relaxed by a factor of 2 in the framework of dynamical dark energy models. Assuming that the primordial fluctuations are adiabatic with a power-law spectrum, within the ΛCDM model we find that the upper limit on the tensor-to-scalar ratio is r < 0.35 (95% C.L.) and inflationary models with slope n_s ≥ 1 are excluded at more than 2σ confidence level. In this Letter we pay particular attention to the contribution from the weak lensing data and...
Improved Estimates of Thermodynamic Parameters
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
International Nuclear Information System (INIS)
Contopoulos, G.; Kotsakis, D.
1987-01-01
An extensive first part on a wealth of observational results relevant to cosmology lays the foundation for the second and central part of the book; the chapters on general relativity, the various cosmological theories, and the early universe. The authors present in a complete and almost non-mathematical way the ideas and theoretical concepts of modern cosmology including the exciting impact of high-energy particle physics, e.g. in the concept of the ''inflationary universe''. The final part addresses the deeper implications of cosmology, the arrow of time, the universality of physical laws, inflation and causality, and the anthropic principle
Integrated Sachs-Wolfe effect versus redshift test for the cosmological parameters
Kantowski, R.; Chen, B.; Dai, X.
2015-04-01
We describe a method using the integrated Sachs-Wolfe (ISW) effect caused by individual inhomogeneities to determine the cosmological parameters H_0, Ω_m, and Ω_Λ, etc. This ISW-redshift test requires detailed knowledge of the internal kinematics of a set of individual density perturbations, e.g., galaxy clusters and/or cosmic voids, in particular their density and velocity profiles, and their mass accretion rates. It assumes the density perturbations are isolated and embedded (equivalently compensated) and makes use of the newly found relation between the ISW temperature perturbation of the cosmic microwave background (CMB) and the Fermat potential of the lens. Given measurements of the amplitudes of the temperature variations in the CMB caused by such clusters or voids at various redshifts and estimates of their angular sizes or masses, one can constrain the cosmological parameters. More realistically, the converse is more likely, i.e., if the background cosmology is sufficiently constrained, measurement of ISW profiles of clusters and voids (e.g., hot and cold spots and rings) can constrain dynamical properties of the dark matter, including accretion, associated with such lenses and thus constrain the evolution of these objects with redshift.
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
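The regression estimate of beta described here is a one-line least-squares fit. The simulated returns and the true beta of 1.2 below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated weekly returns: a market index, and a stock whose returns
# load on the market with a true beta of 1.2 plus idiosyncratic noise.
market = 0.002 + 0.02 * rng.normal(size=260)
stock = 0.001 + 1.2 * market + 0.01 * rng.normal(size=260)

# Beta is the slope of the regression of stock returns on market returns.
slope, intercept = np.polyfit(market, stock, deg=1)
```

The recovered slope approximates the true beta, with sampling noise set by the idiosyncratic volatility and the length of the return history.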
CosmoSIS: A System for MC Parameter Estimation
Energy Technology Data Exchange (ETDEWEB)
Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab
2015-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.
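The modularity argument can be illustrated schematically: interchangeable units that read and write declared quantities on a shared datablock. The module names and the dict-based datablock below are invented for illustration and are not the real CosmoSIS interfaces (see the CosmoSIS documentation for those):

```python
def background_module(block):
    # Pretend "theory" stage: computes and stores a derived quantity.
    block["h0"] = block["H0"] / 100.0
    return block

def likelihood_module(block):
    # Consumes the earlier module's output; writes a log-likelihood.
    block["loglike"] = -0.5 * ((block["h0"] - 0.67) / 0.012) ** 2
    return block

def run_pipeline(params, modules):
    """Each module reads/writes a shared datablock with declared keys,
    so modules can be swapped without touching the rest of the chain."""
    block = dict(params)
    for module in modules:
        block = module(block)
    return block

result = run_pipeline({"H0": 70.0}, [background_module, likelihood_module])
```

Because each unit only touches named datablock entries, a different Boltzmann code or likelihood can replace a module without changing any other part of the pipeline, which is the design point the abstract makes.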
Determining cosmological parameters with the latest observational data
International Nuclear Information System (INIS)
Xia Junqing; Li Hong; Zhao Gongbo; Zhang Xinmin
2008-01-01
In this paper, we combine the latest observational data, including the WMAP five-year data (WMAP5), BOOMERanG, CBI, VSA, ACBAR, as well as the baryon acoustic oscillations (BAO) and type Ia supernovae (SN) 'union' compilation (307 sample), and use the Markov Chain Monte Carlo method to determine the cosmological parameters, such as the equation of state (EoS) of dark energy, the curvature of the universe, the total neutrino mass, and the parameters associated with the power spectrum of primordial fluctuations. In a flat universe, we obtain a tight limit on the constant EoS of dark energy as w = -0.977±0.056(stat)±0.057(sys). For the dynamical dark energy models with the time-evolving EoS parametrized as w_de(a) = w_0 + w_1(1-a), we find that the best-fit values are w_0 = -1.08 and w_1 = 0.368, while the ΛCDM model remains a good fit to the current data. For the curvature of the universe Ω_k, our results give -0.012 < Ω_k < … for w_de = -1. When considering the dynamics of dark energy, the flat universe is still a good fit to the current data, -0.015 < Ω_k < …, and inflationary models with n_s ≥ 1 are excluded at more than 2σ confidence level. However, in the framework of dynamical dark energy models, the allowed region in the parameter space of (n_s, r) is enlarged significantly. Finally, we find no strong evidence for a large running of the spectral index.
García-Bellido, J
2015-01-01
In these lectures I review the present status of the so-called Standard Cosmological Model, based on the hot Big Bang Theory and the Inflationary Paradigm. I place special emphasis on the recent developments in observational cosmology, mainly the acceleration of the universe, the precise measurements of the microwave background anisotropies, and the formation of structure like galaxies and clusters of galaxies from tiny primordial fluctuations generated during inflation.
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-03-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
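For a linear-Gaussian model, the "one number per parameter" compression referred to above reduces to the score statistic t = ∇μᵀ C⁻¹ (d − μ), which is lossless in that case. A toy sketch (the sine template, amplitude, and noise level are our own illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: d = A * template + Gaussian noise with known covariance.
n, a_true = 100, 1.7
template = np.sin(np.linspace(0.0, 4.0 * np.pi, n))
cov = np.diag(np.full(n, 0.25))          # noise variance 0.5^2 per point
data = a_true * template + 0.5 * rng.normal(size=n)

# Score compression: one summary per parameter,
#   t = (dmu/dA)^T C^-1 (d - mu(A_fid)), at a fiducial amplitude A_fid.
cinv = np.linalg.inv(cov)
a_fid = 1.0
grad = template                          # dmu/dA for this linear model
t = grad @ cinv @ (data - a_fid * template)

# In the linear-Gaussian case t is sufficient: the maximum-likelihood
# amplitude is recovered from the single compressed number alone.
fisher = grad @ cinv @ grad
a_mle = a_fid + t / fisher
```

The 100-dimensional data vector has been compressed to one number per parameter without losing constraining power, which is what makes the subsequent density-estimation step over (t, parameters) tractable.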
A small cosmological constant and backreaction of non-finetuned parameters
International Nuclear Information System (INIS)
Krause, Axel
2003-01-01
We include the backreaction on the warped geometry induced by non-finetuned parameters in a two-domain-wall set-up to obtain an exponentially small cosmological constant Λ_4. The mechanism to suppress the cosmological constant involves one classical fine-tuning, as compared to an infinity of fine-tunings at the quantum level in standard D = 4 field theory. (author)
How to fool cosmic microwave background parameter estimation
International Nuclear Information System (INIS)
Kinney, William H.
2001-01-01
With the release of the data from the Boomerang and MAXIMA-1 balloon flights, estimates of cosmological parameters based on the cosmic microwave background (CMB) have reached unprecedented precision. In this paper I show that it is possible for these estimates to be substantially biased by features in the primordial density power spectrum. I construct primordial power spectra which mimic to within cosmic variance errors the effect of changing parameters such as the baryon density and neutrino mass, meaning that even an ideal measurement would be unable to resolve the degeneracy. Complementary measurements are necessary to resolve this ambiguity in parameter estimation efforts based on CMB temperature fluctuations alone.
The Atacama Cosmology Telescope: Two-Season ACTPol Spectra and Parameters
Louis, Thibaut; Grace, Emily; Hasselfield, Matthew; Lungu, Marius; Maurin, Loic; Addison, Graeme E.; Adem Peter A. R.; Aiola, Simone; Allison, Rupert; Amiri, Mandana;
2017-01-01
We present the temperature and polarization angular power spectra measured by the Atacama Cosmology Telescope Polarimeter (ACTPol). We analyze night-time data collected during 2013-14 using two detector arrays at 149 GHz, from 548 deg² of sky on the celestial equator. We use these spectra, and the spectra measured with the MBAC camera on ACT from 2008-10, in combination with Planck and WMAP data to estimate cosmological parameters from the temperature, polarization, and temperature-polarization cross-correlations. We find the new ACTPol data to be consistent with the ΛCDM model. The ACTPol temperature-polarization cross-spectrum now provides stronger constraints on multiple parameters than the ACTPol temperature spectrum, including the baryon density, the acoustic peak angular scale, and the derived Hubble constant. The new ACTPol data provide information on damping tail parameters. The joint uncertainty on the number of neutrino species and the primordial helium fraction is reduced by 20% when adding ACTPol to Planck temperature data alone.
The Atacama Cosmology Telescope: two-season ACTPol spectra and parameters
Energy Technology Data Exchange (ETDEWEB)
Louis, Thibaut [UPMC Univ Paris 06, UMR7095, Institut d' Astrophysique de Paris, F-75014, Paris (France); Grace, Emily; Aiola, Simone; Choi, Steve K. [Joseph Henry Laboratories of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Hasselfield, Matthew [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Lungu, Marius; Angile, Elio [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Maurin, Loïc [Instituto de Astrofísica and Centro de Astro-Ingeniería, Facultad de Física, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, 7820436 Macul, Santiago (Chile); Addison, Graeme E. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218-2686 (United States); Ade, Peter A. R. [School of Physics and Astronomy, Cardiff University, The Parade, Cardiff, Wales, CF24 3AA (United Kingdom); Allison, Rupert; Calabrese, Erminia [Sub-Department of Astrophysics, University of Oxford, Keble Road, Oxford, OX1 3RH (United Kingdom); Amiri, Mandana [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z4 (Canada); Battaglia, Nicholas [Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544 (United States); Beall, James A.; Britton, Joe; Cho, Hsiao-mei [NIST Quantum Devices Group, 325 Broadway Mailcode 817.03, Boulder, CO 80305 (United States); De Bernardis, Francesco [Department of Physics, Cornell University, Ithaca, NY 14853 (United States); Bond, J Richard, E-mail: louis@iap.fr [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON, M5S 3H8 (Canada); and others
2017-06-01
We present the temperature and polarization angular power spectra measured by the Atacama Cosmology Telescope Polarimeter (ACTPol). We analyze night-time data collected during 2013–14 using two detector arrays at 149 GHz, from 548 deg² of sky on the celestial equator. We use these spectra, and the spectra measured with the MBAC camera on ACT from 2008–10, in combination with Planck and WMAP data to estimate cosmological parameters from the temperature, polarization, and temperature-polarization cross-correlations. We find the new ACTPol data to be consistent with the ΛCDM model. The ACTPol temperature-polarization cross-spectrum now provides stronger constraints on multiple parameters than the ACTPol temperature spectrum, including the baryon density, the acoustic peak angular scale, and the derived Hubble constant. The new ACTPol data provide information on damping tail parameters. The joint uncertainty on the number of neutrino species and the primordial helium fraction is reduced by 20% when adding ACTPol to Planck temperature data alone.
Vittorio, Nicola
2018-01-01
Modern cosmology has changed significantly over the years, from the discovery to the precision measurement era. The data now available provide a wealth of information, mostly consistent with a model where dark matter and dark energy are in a rough proportion of 3:7. The time is right for a fresh new textbook which captures the state of the art in cosmology. Written by one of the world's leading cosmologists, this brand new, thoroughly class-tested textbook provides graduate and undergraduate students with coverage of the very latest developments and experimental results in the field. Prof. Nicola Vittorio shows what is meant by precision cosmology, from both theoretical and observational perspectives.
Parameter estimation in plasmonic QED
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. One-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.
Cosmological parameters from pre-planck cosmic microwave background measurements
Calabrese, E.; Hlozek, R.; Battaglia, N.; Battistelli, E.; Bond, J.; Chluba, J.; Crichton, D.; Das, S.; Devlin, M.; Dunkley, J.; Dünner, R.; Farhang, M.; Gralla, M.; Hajian, A.; Halpern, M.; Hasselfield, M.; Hincks, A.; Irwin, K.; Kosowsky, A.; Louis, T.; Marriage, T.; Moodley, K.; Newburgh, L.; Niemack, M.; Nolta, M.; Page, L.; Sehgal, N.; Sherwin, B.; Sievers, J.; Sifón, C.; Spergel, D.; Staggs, S.; Switzer, E.; Wollack, E.
2013-01-01
Recent data from the WMAP, ACT and SPT experiments provide precise measurements of the cosmic microwave background temperature power spectrum over a wide range of angular scales. The combination of these observations is well fit by the standard, spatially flat ΛCDM cosmological model,
KiDS-450: the tomographic weak lensing power spectrum and constraints on cosmological parameters
Köhlinger, F.; Viola, M.; Joachimi, B.; Hoekstra, H.; van Uitert, E.; Hildebrandt, H.; Choi, A.; Erben, T.; Heymans, C.; Joudaki, S.; Klaes, D.; Kuijken, K.; Merten, J.; Miller, L.; Schneider, P.; Valentijn, E. A.
2017-11-01
We present measurements of the weak gravitational lensing shear power spectrum based on 450 deg² of imaging data from the Kilo Degree Survey. We employ a quadratic estimator in two and three redshift bins and extract band powers of redshift autocorrelation and cross-correlation spectra in the multipole range 76 ≤ ℓ ≤ 1310. The cosmological interpretation of the measured shear power spectra is performed in a Bayesian framework assuming a ΛCDM model with spatially flat geometry, while accounting for small residual uncertainties in the shear calibration and redshift distributions as well as marginalizing over intrinsic alignments, baryon feedback and an excess-noise power model. Moreover, massive neutrinos are included in the modelling. The cosmological main result is expressed in terms of the parameter combination S_8 ≡ σ_8 √(Ω_m/0.3), yielding S_8 = 0.651 ± 0.058 (three z-bins), confirming the recently reported tension in this parameter with constraints from Planck at 3.2σ (three z-bins). We cross-check the results of the three z-bin analysis with the weaker constraints from the two z-bin analysis and find them to be consistent. The high-level data products of this analysis, such as the band power measurements, covariance matrices, redshift distributions and likelihood evaluation chains are available at http://kids.strw.leidenuniv.nl.
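As a sketch of the arithmetic behind the parameter combination and the tension quoted above, the snippet below computes S_8 and the Gaussian significance of a difference between two independent measurements. The Planck-like comparison value (0.851 ± 0.024) is an illustrative assumption, not taken from this record:

```python
import math

def s8(sigma8, omega_m):
    """Parameter combination S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

def tension_sigma(a, sigma_a, b, sigma_b):
    """Significance (in sigma) of the difference between two independent
    Gaussian measurements of the same parameter."""
    return abs(a - b) / math.hypot(sigma_a, sigma_b)

# KiDS-450 three-z-bin value from the abstract vs an assumed
# Planck-like S_8 (hypothetical comparison numbers).
print(round(tension_sigma(0.651, 0.058, 0.851, 0.024), 1))  # ~3.2
```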
Cosmological-model-parameter determination from satellite-acquired type Ia and IIP Supernova Data
International Nuclear Information System (INIS)
Podariu, Silviu; Nugent, Peter; Ratra, Bharat
2000-01-01
We examine the constraints that satellite-acquired Type Ia and IIP supernova apparent magnitude versus redshift data will place on cosmological model parameters in models with and without a constant or time-variable cosmological constant Λ. High-quality data which could be acquired in the near future will result in tight constraints on these parameters. For example, if all other parameters of a spatially-flat model with a constant Λ are known, the supernova data should constrain the non-relativistic matter density parameter Ω_m to better than 1% (2%, 0.5%) at 1σ with neutral (worst-case, best-case) assumptions about data quality.
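The magnitude-redshift constraint described above rests on the luminosity distance of a spatially flat model with a constant Λ. A minimal sketch (trapezoidal integration, illustrative step count and fiducial h) of the distance modulus μ(z) = 5 log₁₀ d_L + 25, with d_L in Mpc:

```python
import math

def luminosity_distance(z, omega_m, h=0.7, n=2000):
    """Luminosity distance (Mpc) in flat LambdaCDM: d_L = (1+z) * (c/H0) *
    integral of dz'/E(z'), with E(z') = sqrt(Om (1+z')^3 + (1 - Om))."""
    c = 299792.458          # speed of light, km/s
    H0 = 100.0 * h          # Hubble constant, km/s/Mpc
    dz = z / n
    integral = 0.0
    for i in range(n + 1):  # composite trapezoidal rule
        zp = i * dz
        E = math.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
        w = 0.5 if i in (0, n) else 1.0
        integral += w / E * dz
    return (1 + z) * c / H0 * integral

def distance_modulus(z, omega_m, h=0.7):
    """Predicted apparent-minus-absolute magnitude at redshift z."""
    return 5.0 * math.log10(luminosity_distance(z, omega_m, h)) + 25.0
```

Comparing predicted distance moduli against measured supernova magnitudes is what drives the Ω_m constraint: a lower matter density gives larger distances, hence fainter supernovae at fixed redshift.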
Load Estimation from Modal Parameters
DEFF Research Database (Denmark)
Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández
2007-01-01
In Natural Input Modal Analysis the modal parameters are estimated just from the responses, while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from an FRF m...
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Determination of cosmological parameters: An introduction for non ...
Indian Academy of Sciences (India)
Then I show how the age of the universe depends on them, followed by the evolution of the scale parameter of the universe for various values of the density parameters. Then I define strategies for measuring them, and show the results for the recent determination of these parameters from measurements on supernovas of ...
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models (linear regression); Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynamics.
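The Gauss-Newton entries in this contents list can be illustrated with a short sketch: a damped Gauss-Newton fit of the simple algebraic model y = θ₁x/(θ₂ + x). The model and the step-halving safeguard are illustrative choices, not taken from the book:

```python
def sse(x, y, t1, t2):
    """Sum of squared residuals for the model y = t1*x/(t2 + x)."""
    return sum((yi - t1 * xi / (t2 + xi)) ** 2 for xi, yi in zip(x, y))

def gauss_newton(x, y, t1, t2, iters=30):
    """Damped Gauss-Newton estimation of (t1, t2) in y = t1*x/(t2 + x)."""
    for _ in range(iters):
        r = [yi - t1 * xi / (t2 + xi) for xi, yi in zip(x, y)]
        j1 = [xi / (t2 + xi) for xi in x]             # d(model)/d(t1)
        j2 = [-t1 * xi / (t2 + xi) ** 2 for xi in x]  # d(model)/d(t2)
        # Normal equations (J^T J) d = J^T r, written out for the 2x2 case.
        a11 = sum(a * a for a in j1)
        a12 = sum(a * b for a, b in zip(j1, j2))
        a22 = sum(b * b for b in j2)
        b1 = sum(a * ri for a, ri in zip(j1, r))
        b2 = sum(b * ri for b, ri in zip(j2, r))
        det = a11 * a22 - a12 * a12
        d1 = (a22 * b1 - a12 * b2) / det
        d2 = (a11 * b2 - a12 * b1) / det
        # Step halving: shrink the update until the fit improves.
        lam = 1.0
        while sse(x, y, t1 + lam * d1, t2 + lam * d2) > sse(x, y, t1, t2) and lam > 1e-8:
            lam *= 0.5
        t1, t2 = t1 + lam * d1, t2 + lam * d2
    return t1, t2
```

On noise-free data generated from (θ₁, θ₂) = (2, 0.5), the iteration recovers the true parameters from a rough starting guess; the damping keeps early full-length steps from overshooting into an unphysical θ₂.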
International Nuclear Information System (INIS)
Gomez Martinez, Silvina Paola; Madriz Aguilar, Jose Edgar; Bellini, Mauricio
2007-01-01
We study gravitational waves generated during the inflationary epoch in the presence of a decaying cosmological parameter on a 5D geometrical background which is Riemann-flat. Two examples are considered: one with a constant cosmological parameter and a second with a decreasing one.
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal ... engineers, and professionals. However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject. For the models selected to interpret the experimental data, this chapter uses available models from the literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii) ...
Joint cosmic microwave background and weak lensing analysis: constraints on cosmological parameters.
Contaldi, Carlo R; Hoekstra, Henk; Lewis, Antony
2003-06-06
We use cosmic microwave background (CMB) observations together with the Red-Sequence Cluster Survey weak lensing results to derive constraints on a range of cosmological parameters. This particular choice of observations is motivated by their robust physical interpretation and complementarity. Our combined analysis, including a weak nucleosynthesis constraint, yields accurate determinations of a number of parameters, including the amplitude of fluctuations σ_8 = 0.89 ± 0.05 and matter density Ω_m = 0.30 ± 0.03. We also find a value for the Hubble parameter of H_0 = 70 ± 3 km s⁻¹ Mpc⁻¹, in good agreement with the Hubble Space Telescope key-project result. We conclude that the combination of CMB and weak lensing data provides some of the most powerful constraints available in cosmology today.
International Nuclear Information System (INIS)
Yao, Ji; Ishak, Mustapha; Lin, Weikang; Troxel, Michael
2017-01-01
Intrinsic alignments (IA) of galaxies have been recognized as one of the most serious contaminants to weak lensing. These systematics need to be isolated and mitigated in order for ongoing and future lensing surveys to reach their full potential. The IA self-calibration (SC) method was shown in previous studies to be able to reduce the GI contamination by up to a factor of 10 for the 2-point and 3-point correlations. The SC method does not require the assumption of an IA model in its working and can extract the GI signal from the same photo-z survey offering the possibility to test and understand structure formation scenarios and their relationship to IA models. In this paper, we study the effects of the IA SC mitigation method on the precision and accuracy of cosmological parameter constraints from future cosmic shear surveys LSST, WFIRST and Euclid. We perform analytical and numerical calculations to estimate the loss of precision and the residual bias in the best fit cosmological parameters after the self-calibration is performed. We take into account uncertainties from photometric redshifts and the galaxy bias. We find that the confidence contours are slightly inflated from applying the SC method itself while a significant increase is due to the inclusion of the photo-z uncertainties. The bias of cosmological parameters is reduced from several-σ, when IA is not corrected for, to below 1-σ after SC is applied. These numbers are comparable to those resulting from applying the method of marginalizing over IA model parameters despite the fact that the two methods operate very differently. We conclude that implementing the SC for these future cosmic-shear surveys will not only allow one to efficiently mitigate the GI contaminant but also help to understand their modeling and link to structure formation.
Energy Technology Data Exchange (ETDEWEB)
Yao, Ji; Ishak, Mustapha; Lin, Weikang [Department of Physics, The University of Texas at Dallas, Dallas, TX 75080 (United States); Troxel, Michael, E-mail: jxy131230@utdallas.edu, E-mail: mxi054000@utdallas.edu, E-mail: wxl123830@utdallas.edu, E-mail: michael.a.troxel@gmail.com [Department of Physics, Ohio State University, Columbus, OH 43210 (United States)
2017-10-01
Evolution of the Brans-Dicke Parameter in Generalized Chameleon Cosmology
International Nuclear Information System (INIS)
Jamil, Mubasher; Momeni, D.
2011-01-01
Motivated by an earlier study of Sahoo and Singh [Mod. Phys. Lett. A 17 (2002) 2409], we investigate the time dependence of the Brans-Dicke parameter ω(t) for an expanding Universe in the generalized Brans-Dicke Chameleon cosmology, and obtain an explicit dependence of ω(t) in different expansion phases of the Universe. Also, we discuss how the observed accelerated expansion of the observable Universe can be accommodated in the present formalism. (geophysics, astronomy, and astrophysics)
International Nuclear Information System (INIS)
Lusset, Vincent
2006-01-01
The Supernova Legacy Survey is a second-generation experiment for the measurement of cosmological parameters using type-Ia supernovae. It follows the discovery of the acceleration of the expansion of the Universe, attributed to an unknown 'dark energy'. This thesis presents a type-Ia supernova search using an offline analysis of SNLS data. It makes it possible to detect the supernovae that were missed online and to study possible selection biases. One of its principal characteristics is that it uses entirely automatic selection criteria. This type of automated offline analysis had never been carried out before for data reaching this redshift. This analysis enabled us to discover 73 additional SNIa candidates compared to those identified in the real-time analysis of the same data, representing an increase of more than 50% in the number of supernovae. The final Hubble diagram contains 262 SNIa, which gives us, for a flat ΛCDM model, the following values for the cosmological parameters: Ω_M = 0.31 ± 0.028 (stat) ± 0.036 (syst) and Ω_Λ = 0.69. This offline analysis of SNLS data opens new horizons, both by checking for possible biases in current measurements of cosmological parameters by supernova experiments and by preparing the third-generation experiments, on the ground or in space, which will detect thousands of SNIa. (author) [fr]
Hannachi, Zitouni; Guessoum, Nidhal; Azzam, Walid
2016-07-01
Context: We use the correlation relations between the energy emitted by GRBs in their prompt phases and the X-ray afterglow fluxes, in an effort to constrain cosmological parameters and construct a Hubble diagram at high redshifts, i.e. beyond those found in Type Ia supernovae. Methods: We use a sample of 128 Swift GRBs, which we have selected among more than 800 observed until July 2015. The selection is based on a few observational constraints: GRB flux higher than 0.4 photons/cm^2/s in the band 15-150 keV; spectrum fitted with a simple power law; redshift accurately known and given; and X-ray afterglow observed and flux measured. The statistical method of maximum likelihood is then used to determine the cosmological parameters (Ω_M, Ω_Λ) that give the best correlation between the isotropic gamma energies E_{iso} and the afterglow fluxes at the break time t_{b}. The χ^2 statistical test is also used as a way to compare results from the two methods. Results & Conclusions: Although the number of GRBs with high redshifts is rather small, and despite the notable dispersion found in the data, the results we have obtained are quite encouraging and promising. The values of the cosmological parameters obtained here are close to those currently used.
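The maximum-likelihood procedure described in the Methods section can be sketched as a simple χ² grid scan. The snippet below fits only a flat-model Ω_M to hypothetical distance moduli; the actual analysis uses GRB energies and afterglow fluxes, which are not reproduced here:

```python
import math

C = 299792.458  # speed of light, km/s

def mu_model(z, om, h=0.7, n=1000):
    """Distance modulus in a flat LCDM model (trapezoidal integration)."""
    dz = z / n
    I = sum((0.5 if i in (0, n) else 1.0)
            / math.sqrt(om * (1 + i * dz) ** 3 + 1 - om) * dz
            for i in range(n + 1))
    dl = (1 + z) * C / (100 * h) * I   # luminosity distance, Mpc
    return 5 * math.log10(dl) + 25

def best_fit_om(zs, mus, sigmas, grid=None):
    """chi^2 grid scan over Omega_M; returns the minimizing grid value."""
    grid = grid or [0.05 * k for k in range(1, 20)]
    def chi2(om):
        return sum(((m - mu_model(z, om)) / s) ** 2
                   for z, m, s in zip(zs, mus, sigmas))
    return min(grid, key=chi2)
```

For a two-parameter (Ω_M, Ω_Λ) fit as in the record, the same scan would run over a 2D grid, with the luminosity distance generalized to curved geometries.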
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
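For a single-degree-of-freedom idealization, estimating mass, damping and stiffness from sampled response and load histories reduces to linear least squares on m·a + c·v + k·x = f. A minimal sketch with synthetic data (not the bridge measurements of the paper):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def estimate_mck(acc, vel, disp, force):
    """Least-squares (m, c, k) from samples of m*a + c*v + k*x = f,
    via the normal equations (A^T A) theta = A^T f."""
    rows = list(zip(acc, vel, disp))
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atf = [sum(r[i] * fi for r, fi in zip(rows, force)) for i in range(3)]
    return solve3(ata, atf)
```

With noise-free synthetic histories the normal equations recover (m, c, k) exactly; with measured data the same fit returns the least-squares estimates.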
Sunyaev-Zeldovich effect in WMAP and its effect on cosmological parameters
International Nuclear Information System (INIS)
Huffenberger, Kevin M.; Seljak, Uros; Makarov, Alexey
2004-01-01
We use multifrequency information in first-year Wilkinson Microwave Anisotropy Probe (WMAP) data to search for the Sunyaev-Zeldovich (SZ) effect. WMAP has sufficiently broad frequency coverage to constrain the SZ effect without the addition of higher-frequency data: the SZ power spectrum amplitude is expected to increase by 50% from W to Q frequency band. This, in combination with the low noise in WMAP, allows us to strongly constrain the SZ contribution. We derive an optimal frequency combination of WMAP cross-spectra to extract the SZ effect in the presence of noise, cosmic microwave background (CMB), and radio point sources, which are marginalized over. We find that the SZ contribution is less than 2% (95% C.L.) at the first acoustic peak in W band. Under the assumption that the removed radio point sources are not correlated with the SZ effect, this limit implies σ_8 < 1.07 at 95% C.L. We investigate the effect on the cosmological parameters of allowing an SZ component. We run Monte Carlo Markov chains with and without an SZ component and find that the addition of the SZ effect does not affect any of the cosmological conclusions. We conclude that the SZ effect does not contaminate the WMAP CMB or change cosmological parameters, refuting the recent claims that they may be corrupted.
Effects of the interaction between dark energy and dark matter on cosmological parameters
International Nuclear Information System (INIS)
He, Jian-Hua; Wang, Bin
2008-01-01
We examine the effects of possible phenomenological interactions between dark energy and dark matter on cosmological parameters and their efficiency in solving the coincidence problem. We work with two simple parameterizations of the dynamical dark energy equation of state and the constant dark energy equation of state. Using observational data coming from the new 182 Gold Type Ia supernova sample, the shift parameter of the cosmic microwave background given by the three-year Wilkinson Microwave Anisotropy Probe observations, and the baryon acoustic oscillation measurement from the Sloan Digital Sky Survey, we perform a statistical joint analysis of different forms of phenomenological interaction between dark energy and dark matter.
Statistics of Parameter Estimates: A Concrete Example
Aguilar, Oscar; Allmaras, Moritz; Bangerth, Wolfgang; Tenorio, Luis
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.
2013-01-01
PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving the PDE numerically under thousands of candidate parameter values, and thus
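The "repeatedly solve the PDE under candidate parameter values" approach can be sketched for a toy case: estimating the diffusion coefficient D in u_t = D u_xx by an explicit finite-difference solve inside a grid search. Grid sizes, time step and candidate values are illustrative:

```python
import math

def solve_heat(D, nx=21, nt=200, dt=5e-4):
    """Explicit finite-difference solution of u_t = D u_xx on [0, 1] with
    zero boundaries and initial condition u = sin(pi x); returns the final
    profile. Stable while D*dt/dx^2 <= 0.5."""
    dx = 1.0 / (nx - 1)
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(nt):
        u = ([0.0]
             + [u[i] + D * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, nx - 1)]
             + [0.0])
    return u

def fit_D(data, candidates):
    """Grid search: re-solve the PDE for each candidate D and pick the one
    minimizing the squared misfit to the observed final profile."""
    def misfit(D):
        u = solve_heat(D)
        return sum((a - b) ** 2 for a, b in zip(u, data))
    return min(candidates, key=misfit)
```

The cost noted in the abstract is visible even here: every candidate value triggers a full PDE solve, which is what the authors' alternative methods aim to avoid.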
Energy Technology Data Exchange (ETDEWEB)
Sobreira, F.; Rosenfeld, R. [Universidade Estadual Paulista Julio de Mesquita Filho (IFT/UNESP), Sao Paulo, SP (Brazil). Inst. Fisica Teorica; Simoni, F. de; Costa, L.A.N. da; Gaia, M.A.G.; Ramos, B.; Ogando, R.; Makler, M. [Laboratorio Interinstitucional de e-Astronomia (LIneA), Rio de Janeiro, RJ (Brazil)
2011-07-01
Full text: We study the cosmological constraints expected for the upcoming Dark Energy Survey (DES) with the full functional form of the 2-point angular correlation function. The angular correlation function model applied in this work includes the effects of linear redshift-space distortion, photometric redshift errors (assumed to be Gaussian) and non-linearities arising from gravitational infall. The Fisher information matrix is constructed with the full covariance matrix, which properly accounts for the correlation between nearby redshift shells. The survey was sliced into 20 redshift shells in the range 0.4 ≤ z ≤ 1.40, with a variable angular scale in order to probe only the scales around the signal from the baryon acoustic oscillation, and therefore well within the validity of the non-linear model employed. We found that under those assumptions and with a flat ΛCDM WMAP7 fiducial model, the DES will be able to constrain the dark energy equation-of-state parameter w with a precision of ~20% and the cold dark matter density with ~11% when marginalizing over the other 25 parameters (bias is treated as a free parameter for each shell). When applying WMAP7 priors on Ω_baryon, Ω_cdm and n_s, and HST priors on the Hubble parameter, w is constrained with ~9% precision. This shows that the full shape of the angular correlation function with DES data will be a powerful probe to constrain cosmological parameters. (author)
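The Fisher-matrix forecasting used above can be sketched generically for Gaussian data: F_ij is built from derivatives of the model with respect to the parameters, and marginalized 1σ errors come from the inverse. The helper below uses numerical derivatives and a two-parameter toy model as a stand-in for the 26-parameter DES analysis:

```python
def fisher_matrix(model, theta, xs, sigma, eps=1e-5):
    """Fisher matrix for Gaussian data y = model(x, theta) + noise:
    F_ij = sum_x (dm/dtheta_i)(dm/dtheta_j) / sigma^2,
    with central-difference derivatives."""
    p = len(theta)
    def deriv(i, x):
        tp, tm = list(theta), list(theta)
        tp[i] += eps
        tm[i] -= eps
        return (model(x, tp) - model(x, tm)) / (2 * eps)
    return [[sum(deriv(i, x) * deriv(j, x) for x in xs) / sigma ** 2
             for j in range(p)] for i in range(p)]

def marginalized_errors_2x2(F):
    """1-sigma marginalized errors sqrt(diag F^-1) for a 2x2 Fisher matrix."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return ((F[1][1] / det) ** 0.5, (F[0][0] / det) ** 0.5)
```

Marginalizing (inverting the full matrix before reading off the diagonal) rather than fixing the other parameters is what inflates the forecast errors, exactly as in the 25-parameter marginalization quoted in the abstract.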
On the impact of large angle CMB polarization data on cosmological parameters
Energy Technology Data Exchange (ETDEWEB)
Lattanzi, Massimiliano; Mandolesi, Nazzareno; Natoli, Paolo [Dipartimento di Fisica e Scienze della Terra, Università di Ferrara, Via Giuseppe Saragat 1, I-44122 Ferrara (Italy); Burigana, Carlo; Gruppuso, Alessandro; Trombetti, Tiziana [Istituto Nazionale di Astrofisica, Istituto di Astrofisica Spaziale e Fisica Cosmica di Bologna, Via Piero Gobetti 101, I-40129 Bologna (Italy); Gerbino, Martina [The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, AlbaNova, SE-106 91 Stockholm (Sweden); Polenta, Gianluca [Agenzia Spaziale Italiana Science Data Center, Via del Politecnico snc, 00133, Roma (Italy); Salvati, Laura, E-mail: lattanzi@fe.infn.it, E-mail: burigana@iasfbo.inaf.it, E-mail: martina.gerbino@fysik.su.se, E-mail: gruppuso@iasfbo.inaf.it, E-mail: nazzareno.mandolesi@unife.it, E-mail: paolo.natoli@unife.it, E-mail: gianluca.polenta@asdc.asi.it, E-mail: laura.salvati@ias.u-psud.fr, E-mail: trombetti@iasfbo.inaf.it [Dipartimento di Fisica, Università La Sapienza, Piazzale Aldo Moro 2, I-00185 Roma (Italy)
2017-02-01
We study the impact of the large-angle CMB polarization datasets publicly released by the WMAP and Planck satellites on the estimation of cosmological parameters of the ΛCDM model. To complement large-angle polarization, we consider the high resolution (or 'high-ℓ') CMB datasets from either WMAP or Planck, as well as CMB lensing as traced by Planck's measured four-point correlation function. In the case of WMAP, we compute the large-angle polarization likelihood starting over from low resolution frequency maps and their covariance matrices, and perform our own foreground mitigation technique, which includes as a possible alternative Planck 353 GHz data to trace polarized dust. We find that the latter choice induces a downward shift in the optical depth τ, roughly of order 2σ, robust to the choice of the complementary high resolution dataset. When the Planck 353 GHz data are consistently used to minimize polarized dust emission, WMAP and Planck 70 GHz large-angle polarization data are in remarkable agreement: by combining them we find τ = 0.066^{+0.012}_{−0.013}, again very stable against the particular choice for high-ℓ data. We find that the amplitude of primordial fluctuations A_s, notoriously degenerate with τ, is the parameter second most affected by the assumptions on polarized dust removal, but the other parameters are also affected, typically between 0.5 and 1σ. In particular, cleaning dust with Planck's 353 GHz data imposes a 1σ downward shift in the value of the Hubble constant H_0, significantly contributing to the tension reported between CMB-based and direct measurements of the present expansion rate. On the other hand, we find that the appearance of the so-called low-ℓ anomaly, a well-known tension between the high- and low-resolution CMB anisotropy amplitude, is not significantly affected by the details of large-angle polarization, or by the particular high-ℓ dataset employed.
Determination of the cosmological parameters and the nature of dark energy
International Nuclear Information System (INIS)
Linden, S.
2010-04-01
Since the measured properties of the dark energy component are consistent with a cosmological constant, Λ, the cosmological standard model is referred to as the Λ-Cold-Dark-Matter (ΛCDM) model. Despite its overall success, this model suffers from various problems. The existence of a cosmological constant raises fundamental questions: attempts to describe it as the energy contribution from the vacuum, as follows from Quantum Field Theory, have failed quantitatively. In consequence, a large number of alternative models have been developed to describe the dark energy component: modified gravity, additional dimensions, quintessence models. Astrophysical effects have also been considered as a way to mimic an accelerated expansion. The basics of the ΛCDM model and the various attempts at explaining dark energy are outlined in this thesis. Another major problem of the model comes from the dependence of the fit results on a number of a priori assumptions and parameterization effects. Today, combined analyses of the various cosmological probes are performed to extract the parameters of the model. Due to a wrong model assumption or a bad parameterization of the real physics, one might end up measuring with high precision something which is not there. We show that, indeed, due to the high precision of modern cosmological measurements, purely kinematic approaches to distance measurements no longer yield valid fit results except for accidental special cases, and that a fit of the exact (integral) redshift-distance relation is necessary. The main results of this work concern the use of the CPL parameterization of dark energy when coping with the dynamics of tracker solutions of quintessence models, and the risk of introducing biases on the parameters due to the possibly prohibited extrapolation to arbitrarily high redshifts of the SN type Ia magnitude calibration relation, which is obtained in the low-redshift regime. Whereas the risks of applying CPL show up to be small for a wide range of
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Jones, D. O.; Scolnic, D. M.; Riess, A. G.; Rest, A.; Kirshner, R. P.; Berger, E.; Kessler, R.; Pan, Y.-C.; Foley, R. J.; Chornock, R.; Ortega, C. A.; Challis, P. J.; Burgett, W. S.; Chambers, K. C.; Draper, P. W.; Flewelling, H.; Huber, M. E.; Kaiser, N.; Kudritzki, R.-P.; Metcalfe, N.; Tonry, J.; Wainscoat, R. J.; Waters, C.; Gall, E. E. E.; Kotak, R.; McCrum, M.; Smartt, S. J.; Smith, K. W.
2018-04-01
We use 1169 Pan-STARRS supernovae (SNe) and 195 low-z (z < 0.1) SNe to infer unbiased cosmological parameters, using a Bayesian methodology that marginalizes over core-collapse (CC) SN contamination. Our sample contains nearly twice as many SNe as the largest previous SN Ia compilation. Combining SNe with cosmic microwave background (CMB) constraints from Planck, we measure the dark energy equation-of-state parameter w to be -0.989 ± 0.057 (stat+sys). If w evolves with redshift as w(a) = w_0 + w_a(1 - a), we find w_0 = -0.912 ± 0.149 and w_a = -0.513 ± 0.826. These results are consistent with cosmological parameters from the Joint Light-curve Analysis and the Pantheon sample. We try four different photometric classification priors for Pan-STARRS SNe and two alternate ways of modeling CC SN contamination, finding that no variant gives a w differing by more than 2% from the baseline measurement. The systematic uncertainty on w due to marginalizing over CC SN contamination, σ_w^CC = 0.012, is the third-smallest source of systematic uncertainty in this work. We find limited (1.6σ) evidence for evolution of the SN color-luminosity relation with redshift, a possible systematic that could constitute a significant uncertainty in future high-z analyses. Our data provide one of the best current constraints on w, demonstrating that samples with ∼5% CC SN contamination can give competitive cosmological constraints when the contaminating distribution is marginalized over in a Bayesian framework.
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
Muhammad Zahid Rashid
2011-04-01
The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with the estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
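The maximum likelihood and moment estimators surveyed above are easy to sketch for the two-parameter exponential. The following minimal illustration (not from the paper; the simulated data, true parameter values, and seed are assumptions for the demo) uses only the Python standard library:

```python
import random
import statistics

def mle_two_param_exp(xs):
    # MLE for the two-parameter exponential: location = sample minimum,
    # scale = mean excess over that minimum.
    loc = min(xs)
    return loc, statistics.fmean(xs) - loc

def moment_two_param_exp(xs):
    # Moment estimators: mean = location + scale and sd = scale,
    # so location = mean - sd and scale = sd.
    sd = statistics.pstdev(xs)
    return statistics.fmean(xs) - sd, sd

random.seed(0)
# Synthetic lifetimes with assumed true location 2.5 and true scale 0.8.
data = [2.5 + random.expovariate(1 / 0.8) for _ in range(20000)]
loc_mle, scale_mle = mle_two_param_exp(data)
loc_me, scale_me = moment_two_param_exp(data)
print(loc_mle, scale_mle, loc_me, scale_me)
```

Comparing the two estimates against the known simulation truth is exactly the kind of MSE comparison the paper carries out across its seven methods.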
Cosmological parameters from CMB and other data: A Monte Carlo approach
International Nuclear Information System (INIS)
Lewis, Antony; Bridle, Sarah
2002-01-01
We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent cosmic microwave background (CMB) experiments and provide parameter constraints, including σ_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae type Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_ν ≲ 3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendixes we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
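MCMC explorations of this kind are typically built on random-walk Metropolis updates. The sketch below is a minimal, self-contained illustration of the idea on a toy one-dimensional Gaussian "posterior" (the target, step size, seed, and chain length are assumptions for the demo; this is not the CosmoMC implementation):

```python
import math
import random

def log_post(theta):
    # Toy one-dimensional Gaussian "posterior" N(3, 2^2), known up to a constant.
    return -0.5 * ((theta - 3.0) / 2.0) ** 2

def metropolis(log_target, start, step, n_keep, burn=1000):
    # Random-walk Metropolis: propose theta' ~ N(theta, step^2) and accept
    # with probability min(1, target(theta') / target(theta)).
    theta, lp = start, log_target(start)
    chain = []
    for i in range(n_keep + burn):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i >= burn:
            chain.append(theta)
    return chain

random.seed(1)
chain = metropolis(log_post, start=0.0, step=2.5, n_keep=50000)
post_mean = sum(chain) / len(chain)
print(post_mean)  # should be close to the target mean of 3
```

Parameter constraints are then read off as moments or quantiles of the chain, which is how sample-based analyses like this one convert samples into confidence intervals.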
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
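The computational burden described above (one full numerical PDE solve per candidate parameter value) can be made concrete with a toy example. The sketch below is an assumption-laden illustration, not the parameter cascading or Bayesian method of the paper: it fits the diffusion coefficient of a 1D heat equation by brute-force grid search over explicit finite-difference solves, with all grid sizes and the true coefficient chosen for the demo.

```python
import math

def heat_solve(d, nx=21, nt=200, dx=0.05, dt=0.0002):
    # Explicit finite-difference solve of u_t = d * u_xx on [0, 1] with
    # u = 0 at both boundaries, starting from a sine bump.
    # Stability requires d * dt / dx^2 <= 0.5.
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    r = d * dt / dx ** 2
    for _ in range(nt):
        interior = [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                    for i in range(1, nx - 1)]
        u = [0.0] + interior + [0.0]
    return u

observed = heat_solve(1.0)  # synthetic "data" generated with true d = 1.0

def sse(d):
    # Each candidate d requires a full numerical PDE solve.
    return sum((a - b) ** 2 for a, b in zip(heat_solve(d), observed))

candidates = [0.5 + 0.01 * i for i in range(101)]
d_hat = min(candidates, key=sse)
print(d_hat)
```

Even this tiny problem performs 101 PDE solves; the basis-expansion methods of the paper exist precisely to avoid this repeated-solve loop.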
Application of spreadsheet to estimate infiltration parameters
Directory of Open Access Journals (Sweden)
Mohammad Zakwan
2016-09-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach for estimating infiltration parameters often fails to estimate them precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for the estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most of the cases, based on both approaches to parameter estimation.
Comparison of sampling techniques for Bayesian parameter estimation
Allison, Rupert; Dunkley, Joanna
2014-02-01
The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hasting sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
NINE-YEAR WILKINSON MICROWAVE ANISOTROPY PROBE (WMAP) OBSERVATIONS: COSMOLOGICAL PARAMETER RESULTS
International Nuclear Information System (INIS)
Hinshaw, G.; Halpern, M.; Larson, D.; Bennett, C. L.; Weiland, J. L.; Komatsu, E.; Spergel, D. N.; Dunkley, J.; Nolta, M. R.; Hill, R. S.; Odegard, N.; Page, L.; Jarosik, N.; Smith, K. M.; Gold, B.; Kogut, A.; Wollack, E.; Limon, M.; Meyer, S. S.; Tucker, G. S.
2013-01-01
We present cosmological parameter constraints based on the final nine-year Wilkinson Microwave Anisotropy Probe (WMAP) data, in conjunction with a number of additional cosmological data sets. The WMAP data alone, and in combination, continue to be remarkably well fit by a six-parameter ΛCDM model. When WMAP data are combined with measurements of the high-l cosmic microwave background anisotropy, the baryon acoustic oscillation scale, and the Hubble constant, the matter and energy densities, Ω_b h², Ω_c h², and Ω_Λ, are each determined to a precision of ∼1.5%. The amplitude of the primordial spectrum is measured to within 3%, and there is now evidence for a tilt in the primordial spectrum at the 5σ level, confirming the first detection of tilt based on the five-year WMAP data. At the end of the WMAP mission, the nine-year data decrease the allowable volume of the six-dimensional ΛCDM parameter space by a factor of 68,000 relative to pre-WMAP measurements. We investigate a number of data combinations and show that their ΛCDM parameter fits are consistent. New limits on deviations from the six-parameter model are presented, for example: the fractional contribution of tensor modes is limited to r < 0.13 (95% CL); the spatial curvature parameter is limited to Ω_k = -0.0027 +0.0039/-0.0038; the summed mass of neutrinos is limited to Σm_ν < 0.44 eV (95% CL); and the number of relativistic species is found to lie within N_eff = 3.84 ± 0.40, when the full data are analyzed. The joint constraint on N_eff and the primordial helium abundance, Y_He, agrees with the prediction of standard big bang nucleosynthesis. We compare recent Planck measurements of the Sunyaev-Zel'dovich effect with our seven-year measurements, and show their mutual agreement. Our analysis of the polarization pattern around temperature extrema is updated. This confirms a fundamental prediction of the standard cosmological model and provides a striking illustration of acoustic oscillations and adiabatic initial conditions in the early universe.
Directory of Open Access Journals (Sweden)
Gregory Beskin
2014-08-01
The results of a study of 43 peaked R-band light curves of optical counterparts of gamma-ray bursts with known redshifts are presented. The parameters of optical transients were calculated in the comoving frame, and then a search for pair correlations between them was conducted. A statistical analysis showed a strong correlation between the peak luminosity and the redshift, both for pure afterglows and for events with residual gamma activity, which cannot be explained as an effect of observational selection. This suggests a cosmological evolution of the parameters of the local interstellar medium around the sources of the gamma-ray bursts. In the models of forward and reverse shock waves, a relation between the density of the interstellar medium and the redshift was built for gamma-ray burst afterglows, leading to a power-law dependence of the star-formation rate in regions around GRBs on redshift, with a slope of about 6.
Parameter Estimation of Nonlinear Models in Forestry.
Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.
1999-01-01
Partial derivatives of the negative exponential, monomolecular, Mitcherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and the Richard’s nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
Estimate of the cosmological bispectrum from the MAXIMA-1 cosmic microwave background map.
Santos, M G; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Magueijo, J; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D; Wu, J H P
2002-06-17
We use the measurement of the cosmic microwave background taken during the MAXIMA-1 flight to estimate the bispectrum of cosmological perturbations. We propose an estimator for the bispectrum that is appropriate in the flat sky approximation, apply it to the MAXIMA-1 data, and evaluate errors using bootstrap methods. We compare the estimated value with what would be expected if the sky signal were Gaussian and find that it is indeed consistent, with a χ² per degree of freedom of approximately unity. This measurement places constraints on models of inflation.
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Statistics of Parameter Estimates: A Concrete Example
Aguilar, Oscar
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.
Parameter estimation in X-ray astronomy
International Nuclear Information System (INIS)
Lampton, M.; Margon, B.; Bowyer, S.
1976-01-01
The problems of model classification and parameter estimation are examined, with the objective of establishing the statistical reliability of inferences drawn from X-ray observations. For testing the validity of classes of models, the procedure based on minimizing the χ² statistic is recommended; it provides a rejection criterion at any desired significance level. Once a class of models has been accepted, a related procedure based on the increase of χ² gives a confidence region for the values of the model's adjustable parameters. The procedure allows the confidence level to be chosen exactly, even for highly nonlinear models. Numerical experiments confirm the validity of the prescribed technique. The χ²_min + 1 error estimation method is evaluated and found unsuitable when several parameter ranges are to be derived, because it substantially underestimates their joint errors. The ratio-of-variances method, while formally correct, gives parameter confidence regions which are more variable than necessary.
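The Δχ² construction described above is easy to demonstrate for a one-parameter model. The sketch below is illustrative, not from the paper: it fits a constant μ to made-up Gaussian-error data and reads off the 68% confidence interval as the region where χ² ≤ χ²_min + 1.

```python
def chi2(mu, data, sigma):
    # Chi-squared for a constant model mu fit to data with common error sigma.
    return sum(((x - mu) / sigma) ** 2 for x in data)

data = [4.8, 5.3, 4.9, 5.6, 5.1, 4.7, 5.2, 5.4]  # assumed measurements
sigma = 0.3                                       # assumed common error

# Scan the parameter, locate chi^2_min, and keep the Delta-chi^2 <= 1 region:
# for a single parameter this is the 68% confidence interval.
grid = [4.0 + 0.001 * i for i in range(2001)]
chi_min = min(chi2(mu, data, sigma) for mu in grid)
inside = [mu for mu in grid if chi2(mu, data, sigma) <= chi_min + 1.0]
lo_bound, hi_bound = min(inside), max(inside)
print(lo_bound, hi_bound)  # centered on the sample mean, half-width sigma/sqrt(N)
```

For this linear model the interval reproduces the analytic error σ/√N; the paper's point is that the same one-parameter-at-a-time construction underestimates joint errors when several parameters are derived together.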
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (i.e., when, in expectation, each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
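For the pair-comparison special case mentioned above (the Bradley-Terry model), the maximum likelihood strengths can be computed with the classical minorization-maximization (Zermelo) updates. The sketch below is a generic illustration under assumed win counts, not the estimator analysis of the paper:

```python
def bradley_terry(n_items, wins, n_iters=200):
    # Minorization-maximization updates for Bradley-Terry strengths.
    # wins[(i, j)] is the number of times item i beat item j.
    p = [1.0] * n_items
    for _ in range(n_iters):
        new = []
        for i in range(n_items):
            w_i = sum(w for (a, _), w in wins.items() if a == i)  # total wins of i
            denom = 0.0
            for j in range(n_items):
                if j != i:
                    n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                    if n_ij:
                        denom += n_ij / (p[i] + p[j])
            new.append(w_i / denom if denom > 0 else p[i])
        total = sum(new)
        p = [x * n_items / total for x in new]  # strengths are relative; fix the scale
    return p

# Assumed data: item 0 usually beats 1 and 2; item 1 usually beats 2.
wins = {(0, 1): 8, (1, 0): 2, (1, 2): 7, (2, 1): 3, (0, 2): 9, (2, 0): 1}
strengths = bradley_terry(3, wins)
print(strengths)
```

The recovered ordering of strengths matches the win counts; the mean squared error of such estimates, as a function of comparison-set cardinality, is what the paper characterizes.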
Multi-Parameter Estimation for Orthorhombic Media
Masmoudi, Nabil; Alkhalifah, Tariq Ali
2015-01-01
Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.
Bayesian estimation of Weibull distribution parameters
International Nuclear Information System (INIS)
Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.
1994-11-01
In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method, with real data provided by nuclear power plant operational feedback analysis, has been carried out. (authors). 8 refs., 2 figs., 2 tabs
Iterative importance sampling algorithms for parameter estimation
Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.
2016-01-01
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
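A minimal self-normalized importance sampling estimate of a posterior mean illustrates the idea: draw independent samples from a proposal, weight each by the target-to-proposal density ratio, and average. The Gaussian target and proposal below are assumptions for the demo, not the iterative algorithms of the paper.

```python
import math
import random

def log_target(x):
    # Unnormalized log "posterior": N(2, 1).
    return -0.5 * (x - 2.0) ** 2

def log_proposal(x):
    # Log density of the N(0, 2^2) proposal, up to a constant.
    return -0.5 * (x / 2.0) ** 2

random.seed(2)
xs = [random.gauss(0.0, 2.0) for _ in range(100000)]  # independent draws
log_w = [log_target(x) - log_proposal(x) for x in xs]
m = max(log_w)
w = [math.exp(lw - m) for lw in log_w]  # subtract the max before exp for stability
post_mean = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
print(post_mean)  # close to the target mean of 2
```

Because the draws are independent, each sample (unlike an MCMC step) can be evaluated on a separate core, which is the scaling property the abstract highlights; the catch, as the truncated last sentence begins to say, is finding a proposal whose weights do not degenerate.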
Bayesian parameter estimation in probabilistic risk assessment
International Nuclear Information System (INIS)
Siu, Nathan O.; Kelly, Dana L.
1998-01-01
Bayesian statistical methods are widely used in probabilistic risk assessment (PRA) because of their ability to provide useful estimates of model parameters when data are sparse and because the subjective probability framework, from which these methods are derived, is a natural framework to address the decision problems motivating PRA. This paper presents a tutorial on Bayesian parameter estimation especially relevant to PRA. It summarizes the philosophy behind these methods, approaches for constructing likelihood functions and prior distributions, some simple but realistic examples, and a variety of cautions and lessons regarding practical applications. References are also provided for more in-depth coverage of various topics
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
MCMC for parameters estimation by bayesian approach
International Nuclear Information System (INIS)
Ait Saadi, H.; Ykhlef, F.; Guessoum, A.
2011-01-01
This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. The MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate convergence.
Parameter estimation for an expanding universe
Directory of Open Access Journals (Sweden)
Jieci Wang
2015-03-01
We study parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the ratio estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.
Nonparametric estimation of location and scale parameters
Potgieter, C.J.; Lombard, F.
2012-01-01
Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal
Sensor Placement for Modal Parameter Subset Estimation
DEFF Research Database (Denmark)
Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars
2016-01-01
The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency resp...
Postprocessing MPEG based on estimated quantization parameters
DEFF Research Database (Denmark)
Forchhammer, Søren
2009-01-01
the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...... in the postprocessing. We focus on deringing and present a scheme which aims at suppressing ringing artifacts, while maintaining the sharpness of the texture. The goal is to improve the visual quality, so perceptual blur and ringing metrics are used in addition to PSNR evaluation. The performance of the new `pure......' postprocessing compares favorable to a reference postprocessing filter which has access to the quantization parameters not only for I-frames but also on P and B-frames....
Estimating physiological skin parameters from hyperspectral signatures
Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe
2013-05-01
We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.
Energy Technology Data Exchange (ETDEWEB)
Villani, Mattia, E-mail: villani@fi.infn.it [Sezione INFN di Firenze, Polo Scientifico Via Sansone 1, 50019, Sesto Fiorentino (Italy)
2014-06-01
We consider the Goode-Wainwright representation of the Szekeres cosmological models and calculate the Taylor expansion of the luminosity distance in order to study the effects of the inhomogeneities on cosmographic parameters. Without making a particular choice for the arbitrary functions defining the metric, we Taylor expand up to the second order in redshift for Family I and up to the third order for Family II Szekeres metrics, under the hypothesis, based on observation, that local structure formation is over. In a conservative fashion, we also allow for the existence of a non-null cosmological constant.
Aylor, K.; Hou, Z.; Knox, L.; Story, K. T.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H.-M.; Chown, R.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Everett, W. B.; George, E. M.; Halverson, N. W.; Harrington, N. L.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Keisler, R.; Lee, A. T.; Leitch, E. M.; Luong-Van, D.; Marrone, D. P.; McMahon, J. J.; Meyer, S. S.; Millea, M.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Omori, Y.; Padin, S.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Staniszewski, Z.; Stark, A. A.; Vanderlinde, K.; Vieira, J. D.; Williamson, R.
2017-11-01
The Planck cosmic microwave background temperature data are best fit with a ΛCDM model that mildly contradicts constraints from other cosmological probes. The South Pole Telescope (SPT) 2540 deg² SPT-SZ survey offers measurements on sub-degree angular scales (multipoles 650 ≤ ℓ ≤ 2500) with sufficient precision to use as an independent check of the Planck data. Here we build on the recent joint analysis of the SPT-SZ and Planck data in Hou et al. by comparing ΛCDM parameter estimates using the temperature power spectrum from both data sets in the SPT-SZ survey region. We also restrict the multipole range used in parameter fitting to focus on modes measured well by both SPT and Planck, thereby greatly reducing sample variance as a driver of parameter differences and creating a stringent test for systematic errors. We find no evidence of systematic errors from these tests. When we expand the maximum multipole of SPT data used, we see low-significance shifts in the angular scale of the sound horizon and the physical baryon and cold dark matter densities, with a resulting trend to higher Hubble constant. When we compare SPT and Planck data on the SPT-SZ sky patch to Planck full-sky data but keep the multipole range restricted, we find differences in the parameters n_s and A_s e^(−2τ). We perform further checks, investigating instrumental effects and modeling assumptions, and we find no evidence that the effects investigated are responsible for any of the parameter shifts. Taken together, these tests reveal no evidence for systematic errors in SPT or Planck data in the overlapping sky coverage and multipole range, and at most weak evidence for a breakdown of ΛCDM or systematic errors influencing either the Planck data outside the SPT-SZ survey area or the SPT data at ℓ > 2000.
Energy Technology Data Exchange (ETDEWEB)
Romano, Antonio Enea [University of Crete, Department of Physics and CCTP, Heraklion (Greece); Kyoto University, Yukawa Institute for Theoretical Physics, Kyoto (Japan); Universidad de Antioquia, Instituto de Fisica, Medellin (Colombia); Vallejo, Sergio Andres [Kyoto University, Yukawa Institute for Theoretical Physics, Kyoto (Japan); Universidad de Antioquia, Instituto de Fisica, Medellin (Colombia)
2016-04-15
In order to estimate the effects of a local structure on the Hubble parameter, we calculate the low-redshift expansion of H(z) and δH/H for an observer at the center of a spherically symmetric matter distribution in the presence of a cosmological constant. We then test the accuracy of the formulas by comparing them with fully relativistic, non-perturbative numerical calculations for different choices of the density profile. The low-redshift expansion we obtain is more precise than perturbation theory, since it is based on an exact solution of Einstein's field equations. For larger density contrasts, the accuracy of the low-redshift formulas improves relative to that of perturbation theory, because the latter assumes a small density contrast while the former does not. The formulas can be used to take into account the effects on the Hubble expansion parameter due to the monopole component of the local structure. If H(z) observations show deviations from the ΛCDM prediction compatible with the formulas we have derived, this could be considered independent evidence of the existence of a local inhomogeneity, and the formulas could be used to determine the characteristics of this local structure. (orig.)
Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models
International Nuclear Information System (INIS)
Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Kessler, R.; Frieman, J. A.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.
2014-01-01
We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w input – w recovered ) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.
Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models
Energy Technology Data Exchange (ETDEWEB)
Mosher, J. [Pennsylvania U.; Guy, J. [LBL, Berkeley; Kessler, R. [Chicago U., KICP; Astier, P. [Paris U., VI-VII; Marriner, J. [Fermilab; Betoule, M. [Paris U., VI-VII; Sako, M. [Pennsylvania U.; El-Hage, P. [Paris U., VI-VII; Biswas, R. [Argonne; Pain, R. [Paris U., VI-VII; Kuhlmann, S. [Argonne; Regnault, N. [Paris U., VI-VII; Frieman, J. A. [Fermilab; Schneider, D. P. [Penn State U.
2014-08-29
We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w(input) – w(recovered)) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.
Nonparametric estimation of location and scale parameters
Potgieter, C.J.
2012-12-01
Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations.
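The location-scale setup above can be illustrated with a simple distribution-free estimator. This quantile-matching sketch is not the asymptotic-likelihood method of the paper; it only shows that μ and σ are estimable under minimal assumptions about the base distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw X, and Y = mu + sigma * X' from the same (unknown) base distribution.
x = rng.standard_normal(5000)
y = 2.0 + 1.5 * rng.standard_normal(5000)  # true mu = 2.0, sigma = 1.5

def iqr(z):
    """Interquartile range: a scale measure needing no density assumption."""
    q75, q25 = np.percentile(z, [75, 25])
    return q75 - q25

# Quantile matching: sigma from the ratio of IQRs, mu from the medians.
sigma_hat = iqr(y) / iqr(x)
mu_hat = np.median(y) - sigma_hat * np.median(x)
```

More efficient nonparametric estimators (such as the likelihood-based one in the paper) refine this idea by using many quantiles at once.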
Estimating RASATI scores using acoustical parameters
International Nuclear Information System (INIS)
Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J
2011-01-01
Acoustical analysis of speech using computers has reached an important level of development in recent years. The subjective evaluation of a clinician is complemented with an objective measure of relevant parameters of the voice. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimating the subjective characteristics of the RASATI scale given objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that this approach gives correct evaluations, with ±1 error, in 80% of the cases.
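The first of the two approaches, linear regression with non-negativity constraints, can be sketched on synthetic data. The feature count, the score model, and the projected-gradient solver (used here in place of a library NNLS routine) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data (an assumption): 80 voices, 5 acoustic features
# (jitter, shimmer, etc.), mapped linearly to a RASATI trait score.
A = rng.random((80, 5))
w_true = np.array([1.2, 0.0, 0.8, 0.0, 0.5])   # non-negative by construction
y = A @ w_true + 0.01 * rng.standard_normal(80)

# Non-negative least squares by projected gradient descent:
# minimise ||A w - y||^2 subject to w >= 0.
w = np.zeros(5)
step = 1.0 / np.linalg.norm(A.T @ A, 2)         # 1 / Lipschitz constant
for _ in range(5000):
    w = np.clip(w - step * (A.T @ (A @ w - y)), 0.0, None)
```

The non-negativity constraint encodes the prior that worse acoustic measures cannot lower a pathology score, which keeps the fitted weights interpretable.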
Optimal design criteria - prediction vs. parameter estimation
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction: it seeks to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is computationally costly, and finding the maximum kriging variance in high-dimensional regions is so time-demanding that the G-optimal design cannot, in practice, be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on this Pareto frontier yields almost as good results as searching in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon, nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
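The D-optimality criterion mentioned above can be made concrete with an exhaustive toy search: for a linear trend model over a tiny candidate grid (both assumptions here, far smaller than any real design problem), a D-optimal 4-point design maximizes det(XᵀX):

```python
import numpy as np
from itertools import combinations

# Candidate design points on a 3x3 grid over the unit square.
cand = [(s, t) for s in (0.0, 0.5, 1.0) for t in (0.0, 0.5, 1.0)]

def info_det(points):
    """Determinant of the information matrix X^T X for the
    linear trend model y = b0 + b1*s + b2*t."""
    X = np.array([[1.0, s, t] for s, t in points])
    return np.linalg.det(X.T @ X)

# Exhaustive search over all 4-point subsets (126 of them).
best = max(combinations(cand, 4), key=info_det)
```

For this model the optimum pushes points to the corners of the region, the classic D-optimal behaviour; real covariance-parameter D-optimality requires the (much costlier) Fisher information of the covariance model instead.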
Variational estimates of point-kinetics parameters
International Nuclear Information System (INIS)
Favorite, J.A.; Stacey, W.M. Jr.
1995-01-01
Variational estimates of the effect of flux shifts on the integral reactivity parameter of the point-kinetics equations and on regional power fractions were calculated for a variety of localized perturbations in two light water reactor (LWR) model problems representing a small, tightly coupled core and a large, loosely coupled core. For the small core, the flux shifts resulting from even relatively large localized reactivity changes (∼600 pcm) were small, and the standard point-kinetics approximation estimates of reactivity were in error by only ∼10% or less, while the variational estimates were accurate to within ∼1%. For the larger core, significant (>50%) flux shifts occurred in response to local perturbations, leading to errors of the same magnitude in the standard point-kinetics approximation of the reactivity worth. For positive reactivity, the error in the variational estimate of reactivity was only a few percent in the larger core, and the resulting transient power prediction was 1 to 2 orders of magnitude more accurate than with the standard point-kinetics approximation. For a large, local negative reactivity insertion resulting in a large flux shift, the accuracy of the variational estimate broke down. The variational estimate of the effect of flux shifts on reactivity in point-kinetics calculations of transients in LWR cores was found to generally result in greatly improved accuracy, relative to the standard point-kinetics approximation, the exception being for large negative reactivity insertions with large flux shifts in large, loosely coupled cores
Thompson, Rodger I.
2018-04-01
This investigation explores using the beta function formalism to calculate analytic solutions for the observable parameters in rolling scalar field cosmologies. The beta function in this case is the derivative of the scalar φ with respect to the natural log of the scale factor a, β(φ) = dφ/d ln(a). Once the beta function is specified, modulo a boundary condition, the evolution of the scalar φ as a function of the scale factor is completely determined. A rolling scalar field cosmology is defined by its action, which can contain a range of physically motivated dark energy potentials. The beta function is chosen so that the associated "beta potential" is an accurate, but not exact, representation of the appropriate dark energy model potential. The basic concept is that the action with the beta potential is so similar to the action with the model potential that solutions using the beta action are accurate representations of solutions using the model action. The beta function provides an extra equation with which to calculate analytic expressions for the cosmology's parameters as functions of the scale factor that are not calculable using only the model action. As an example, this investigation uses a quintessence cosmology to demonstrate the method for power-law and inverse-power-law dark energy potentials. An interesting result of the investigation is that the Hubble parameter H is almost completely insensitive to the power of the potentials, and that ΛCDM is part of the family of quintessence cosmology power-law potentials, with a power of zero.
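The core mechanism, that fixing β(φ) = dφ/d ln a determines φ(a) by quadrature, can be checked numerically. The particular choice β(φ) = β₀/φ below is an illustrative assumption (not one of the paper's potentials), picked because it has a closed-form solution to compare against:

```python
import numpy as np

# Once beta(phi) = d phi / d ln(a) is fixed, phi(a) follows by integration.
# Illustrative choice (an assumption): beta(phi) = beta0 / phi, which
# integrates in closed form to phi(a)^2 = phi0^2 + 2 * beta0 * ln(a).
beta0, phi0 = 0.1, 1.0

def beta(phi):
    return beta0 / phi

# RK4 integration in N = ln(a), from N = 0 (a = 1, boundary condition)
# to N = 2.
N_end, steps = 2.0, 2000
h = N_end / steps
phi = phi0
for _ in range(steps):
    k1 = beta(phi)
    k2 = beta(phi + 0.5 * h * k1)
    k3 = beta(phi + 0.5 * h * k2)
    k4 = beta(phi + h * k3)
    phi += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Closed-form answer for comparison.
phi_exact = np.sqrt(phi0**2 + 2 * beta0 * N_end)
```

With φ(a) in hand, quantities such as H(a) follow from the chosen action, which is what makes the extra β equation useful.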
Directory of Open Access Journals (Sweden)
Lorenzo Iorio
2018-03-01
Independent tests aiming to constrain the value of the cosmological constant Λ are usually difficult because of its extreme smallness (Λ ≃ 1 × 10⁻⁵² m⁻², or 2.89 × 10⁻¹²² in Planck units). Bounds on it from Solar System orbital motions determined with spacecraft tracking are currently at the ≃ 10⁻⁴³–10⁻⁴⁴ m⁻² (5–1 × 10⁻¹¹³ in Planck units) level, but they may turn out to be optimistic, since Λ has not yet been explicitly modeled in the planetary data reductions. Accurate (σ_τp ≃ 1–10 μs) timing of expected pulsars orbiting the Black Hole at the Galactic Center, preferably along highly eccentric and wide orbits, might, at least in principle, improve the planetary constraints by several orders of magnitude. By looking at the average time shift per orbit Δδτ̄_p(Λ), an S2-like orbital configuration with e = 0.8839, P_b = 16 yr would permit a preliminary upper bound of the order of Λ ≲ 9 × 10⁻⁴⁷ m⁻² (≲ 2 × 10⁻¹¹⁶ in Planck units) if only σ_τp were to be considered. Our results can easily be extended to modified models of gravity using Λ-type parameters.
FORECASTING COSMOLOGICAL PARAMETER CONSTRAINTS FROM NEAR-FUTURE SPACE-BASED GALAXY SURVEYS
International Nuclear Information System (INIS)
Pavlov, Anatoly; Ratra, Bharat; Samushia, Lado
2012-01-01
The next generation of space-based galaxy surveys is expected to measure the growth rate of structure to a level of about one percent over a range of redshifts. The rate of growth of structure as a function of redshift depends on the behavior of dark energy and so can be used to constrain parameters of dark energy models. In this work, we investigate how well these future data will be able to constrain the time dependence of the dark energy density. We consider parameterizations of the dark energy equation of state, such as XCDM and ωCDM, as well as a consistent physical model of time-evolving scalar field dark energy, φCDM. We show that if the standard, spatially flat cosmological model is taken as a fiducial model of the universe, these near-future measurements of structure growth will be able to constrain the time dependence of scalar field dark energy density to a precision of about 10%, which is almost an order of magnitude better than what can be achieved from a compilation of currently available data sets.
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W of microwave power. The model parameters were estimated in a stepwise procedure: first the heat- and mass-transfer parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behavior well under dynamic convective operation and under combined convective and microwave operation. It is expected that the fit between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
Parameter estimation in tree graph metabolic networks
Directory of Open Access Journals (Sweden)
Laura Astola
2016-09-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimating time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
Parameter estimation in tree graph metabolic networks.
Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J
2016-01-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimating time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
Parameter estimation for lithium ion batteries
Santhanagopalan, Shriram
With an increase in the demand for lithium-based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models, ranging from simple empirical models to complicated physics-based models, to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics-based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of
Composite likelihood estimation of demographic parameters
Directory of Open Access Journals (Sweden)
Garrigan Daniel
2009-11-01
Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of the effective population size for the X chromosome to that of the autosomes. The method is shown to estimate, with reasonable
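The composite-likelihood idea, multiplying marginal likelihoods across loci as if independent, can be sketched for a single parameter. The Poisson model for per-locus segregating sites (Watterson's approximation) and the sample sizes below are illustrative assumptions, far simpler than the paper's two-population model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Under the infinite-sites approximation, the segregating-site count at
# locus i is ~ Poisson(theta * a_n) for a sample of size n, where
# a_n = sum_{k=1}^{n-1} 1/k (Watterson's constant).
n, L, theta_true = 10, 500, 5.0
a_n = sum(1.0 / k for k in range(1, n))
counts = rng.poisson(theta_true * a_n, size=L)

def composite_loglik(theta):
    """Sum of per-locus Poisson log-likelihoods (constants dropped):
    the composite likelihood treats loci as independent."""
    lam = theta * a_n
    return np.sum(counts * np.log(lam) - lam)

# Maximise over a grid of theta values.
grid = np.linspace(1.0, 10.0, 901)
theta_hat = grid[np.argmax([composite_loglik(t) for t in grid])]
```

The same structure (sum of per-region log-likelihoods inside an MCMC or grid search) is what makes composite likelihood scale to whole-genome data.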
Preliminary Estimation of Kappa Parameter in Croatia
Stanko, Davor; Markušić, Snježana; Ivančić, Ines; Mario, Gazdek; Gülerce, Zeynep
2017-12-01
The spectral parameter kappa (κ) is used to describe the decay ("crash syndrome") of spectral amplitudes at high frequencies. The purpose of this research is to estimate the spectral parameter kappa for the first time in Croatia, based on small and moderate earthquakes. Recordings of local earthquakes with magnitudes higher than 3, epicentre distances less than 150 km, and focal depths less than 30 km from seismological stations in Croatia are used. The value of kappa was estimated from the acceleration amplitude spectrum of shear waves, from the slope of the high-frequency part where the spectrum starts to decay rapidly to a noise floor. Kappa models as a function of site and distance were derived from a standard linear regression of the kappa-distance dependence. Site kappa was determined from the extrapolation of the regression line to zero distance. The preliminary results of site kappa across Croatia are promising. In this research, these results are compared with local site condition parameters for each station, e.g. shear wave velocity in the upper 30 m from geophysical measurements, and with existing global shear wave velocity - site kappa values. The spatial distribution of individual kappa values is compared with the azimuthal distribution of earthquake epicentres. These results are significant for a couple of reasons: to extend the knowledge of the attenuation of near-surface crust layers of the Dinarides, and to provide additional information on the local earthquake parameters for updating seismic hazard maps of the studied area. Site kappa can be used in the re-creation and re-calibration of attenuation of peak horizontal and/or vertical acceleration in the Dinarides area, since information on the local site conditions was not included in the previous studies.
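The slope-based estimate of kappa can be sketched on a synthetic spectrum. The exponential spectral model A(f) = A₀ exp(−πκf) follows the standard Anderson-Hough convention, which is an assumption about the exact form used in this study; the frequency band and noise level are likewise illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Above the corner frequency the S-wave acceleration spectrum falls off as
# A(f) = A0 * exp(-pi * kappa * f), so kappa is read off the slope of
# ln A(f) versus f by linear regression.
kappa_true, A0 = 0.04, 100.0
f = np.linspace(5.0, 25.0, 200)          # high-frequency band in Hz
amp = (A0 * np.exp(-np.pi * kappa_true * f)
       * np.exp(0.01 * rng.standard_normal(f.size)))  # multiplicative noise

slope, intercept = np.polyfit(f, np.log(amp), 1)
kappa_hat = -slope / np.pi
```

In practice the band must start above the source corner frequency and stop before the noise floor, which is the delicate part of real kappa measurements.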
Parameter estimation techniques for LTP system identification
Nofrarias Serra, Miquel
LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the milliHertz band. The mission is challenging not only in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, unlike on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods in a series of mock data exercises with the following objectives: to cross-check the parameter estimation methods and compare the achievable accuracy of each, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model, and we report on the latest results of these mock data exercises.
Statistical distributions applications and parameter estimates
Thomopoulos, Nick T
2017-01-01
This book gives a description of the group of statistical distributions that have ample application to studies in statistics and probability. Understanding statistical distributions is fundamental for researchers in almost all disciplines. The informed researcher will select the statistical distribution that best fits the data in the study at hand. Some of the distributions are well known to the general researcher and are in use in a wide variety of ways. Other useful distributions are less understood and are not in common use. The book describes when and how to apply each of the distributions in research studies, with a goal to identify the distribution that best applies to the study. The distributions are for continuous, discrete, and bivariate random variables. In most studies, the parameter values are not known a priori, and sample data is needed to estimate parameter values. In other scenarios, no sample data is available, and the researcher seeks some insight that allows the estimate of ...
Statistical estimation of nuclear reactor dynamic parameters
International Nuclear Information System (INIS)
Cummins, J.D.
1962-02-01
This report discusses the study of noise in nuclear reactors and associated power plant. The report is divided into three distinct parts. In the first part, parameters which influence the dynamic behaviour of some reactors will be specified and their effect on dynamic performance described. Methods of estimating dynamic parameters using statistical signals will be described in detail, together with descriptions of the usefulness of the results, the accuracy and related topics. Some experiments which have been, and which might be, performed on nuclear reactors will be described. In the second part of the report a digital computer programme will be described. The computer programme derives the correlation functions and the spectra of signals. The programme will compute the frequency response, both gain and phase, for physical items of plant for which simultaneous recordings of input and output signal variations have been made. Estimates of the accuracy of the correlation functions and the spectra may be computed using the programme, and the amplitude distribution of signals may also be computed. The programme is written in autocode for the Ferranti Mercury computer. In the third part of the report a practical example of the use of the method and the digital programme is presented. In order to eliminate difficulties of interpretation a very simple plant model was chosen, i.e. a simple first-order lag. Several interesting properties of statistical signals were measured and will be discussed. (author)
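The correlation/spectral approach described in the second part can be sketched as follows (an illustrative stand-in, not the Mercury autocode programme): the frequency response, both gain and phase, is estimated from simultaneous input/output recordings via averaged cross-spectra:

```python
import numpy as np

# Sketch of spectral frequency-response estimation from simultaneous
# input/output recordings: H(f) = Sxy(f) / Sxx(f), averaged over segments.
def frequency_response(x, y, nseg=64):
    seglen = len(x) // nseg
    Sxx = np.zeros(seglen)
    Sxy = np.zeros(seglen, dtype=complex)
    for k in range(nseg):
        xs = x[k * seglen:(k + 1) * seglen]
        ys = y[k * seglen:(k + 1) * seglen]
        X, Y = np.fft.fft(xs), np.fft.fft(ys)
        Sxx += (X.conj() * X).real       # input auto-spectrum
        Sxy += X.conj() * Y              # input-output cross-spectrum
    return Sxy / Sxx                     # complex H: gain and phase

rng = np.random.default_rng(1)
x = rng.standard_normal(64 * 256)                  # noise-like input
y = np.concatenate([[0.0], 0.5 * x[:-1]])          # plant: gain 0.5, one-sample lag
H = frequency_response(x, y)
gain = np.abs(H[1:len(H) // 2]).mean()             # should be close to 0.5
print(gain)
```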
Parameter Estimation of Spacecraft Fuel Slosh Model
Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles
2004-01-01
Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
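As a hedged, much-simplified stand-in for the mass-spring-damper modeling described above (all numbers hypothetical), the sketch below estimates the frequency and damping ratio of a single decaying mode from a simulated free-decay response via the logarithmic decrement:

```python
import numpy as np

# One-degree-of-freedom spring-mass-damper surrogate for a slosh mode.
# From a simulated free-decay response, estimate the damping ratio and
# natural frequency (ingredients of a nutation time constant).
zeta, wn = 0.05, 2.0 * np.pi * 1.5      # assumed damping ratio, natural freq (rad/s)
wd = wn * np.sqrt(1 - zeta**2)          # damped frequency
t = np.linspace(0, 10, 20000)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)   # free-decay response

# Locate successive peaks (local maxima of the oscillation).
peaks = [i for i in range(1, len(t) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
amp = x[peaks]

# Logarithmic decrement over m cycles: delta = ln(a0 / am) / m
m = len(amp) - 1
delta = np.log(amp[0] / amp[-1]) / m
zeta_hat = delta / np.sqrt(4 * np.pi**2 + delta**2)
wn_hat = 2 * np.pi / np.mean(np.diff(t[peaks])) / np.sqrt(1 - zeta_hat**2)
print(zeta_hat, wn_hat)
```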
Mandelbaum, Rachel; Slosar, Anže; Baldauf, Tobias; Seljak, Uroš; Hirata, Christopher M.; Nakajima, Reiko; Reyes, Reinabelle; Smith, Robert E.
2013-06-01
Recent studies have shown that the cross-correlation coefficient between galaxies and dark matter is very close to unity on scales outside a few virial radii of galaxy haloes, independent of the details of how galaxies populate dark matter haloes. This finding makes it possible to determine the dark matter clustering from measurements of galaxy-galaxy weak lensing and galaxy clustering. We present new cosmological parameter constraints based on large-scale measurements of spectroscopic galaxy samples from the Sloan Digital Sky Survey (SDSS) data release 7. We generalize the approach of Baldauf et al. to remove small-scale information (below 2 and 4 h⁻¹ Mpc for lensing and clustering measurements, respectively), where the cross-correlation coefficient differs from unity. We derive constraints for three galaxy samples covering 7131 deg², containing 69 150, 62 150 and 35 088 galaxies with mean redshifts of 0.11, 0.28 and 0.40. We clearly detect scale-dependent galaxy bias for the more luminous galaxy samples, at a level consistent with theoretical expectations. When we vary both σ8 and Ωm (and marginalize over non-linear galaxy bias) in a flat Λ cold dark matter model, the best-constrained quantity is σ8(Ωm/0.25)^0.57 = 0.80 ± 0.05 (1σ, stat. + sys.), where statistical and systematic errors (photometric redshift and shear calibration) have comparable contributions, and we have fixed ns = 0.96 and h = 0.7. These strong constraints on the matter clustering suggest that this method is competitive with cosmic shear in current data, while having very complementary and in some ways less serious systematics. We therefore expect that this method will play a prominent role in future weak lensing surveys. When we combine these data with Wilkinson Microwave Anisotropy Probe 7-year (WMAP7) cosmic microwave background (CMB) data, constraints on σ8, Ωm, H0, wde and ∑mν become 30-80 per cent tighter than with CMB data alone, since our data break several parameter degeneracies.
Bayesian `hyper-parameters' approach to joint estimation: the Hubble constant from CMB measurements
Lahav, O.; Bridle, S. L.; Hobson, M. P.; Lasenby, A. N.; Sodré, L.
2000-07-01
Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalize this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint χ² function a set of `hyper-parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the hyper-parameters, is very simple: instead of minimizing Σ_j χ_j² (where χ_j² is the chi-squared per data set j) we propose to minimize Σ_j N_j ln(χ_j²) (where N_j is the number of data points per data set j). We illustrate the method by estimating the Hubble constant H0 from different sets of recent cosmic microwave background (CMB) experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang). The approach can be generalized for combinations of cosmic probes, and for other priors on the hyper-parameters.
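A toy numerical sketch of the hyper-parameter rule (with two invented mock data sets, not the CMB experiments used in the paper): the standard joint estimate minimizes the sum of the chi-squares, while the hyper-parameter estimate minimizes Σ_j N_j ln χ_j², which automatically downweights poorly fitting data sets:

```python
import numpy as np

# Two mock data sets prefer different H0 values; compare the standard
# joint estimate, min sum_j chi2_j, with the hyper-parameter estimate,
# min sum_j N_j * ln(chi2_j). All numbers are invented for illustration.
H0 = np.linspace(50, 90, 4001)
N1, N2 = 24, 12                          # numbers of data points per set
chi2_1 = N1 + ((H0 - 65.0) / 2.0) ** 2   # mock set 1 prefers H0 ~ 65
chi2_2 = N2 + ((H0 - 75.0) / 1.0) ** 2   # mock set 2 prefers H0 ~ 75

standard = chi2_1 + chi2_2               # equal-weight combination
hyper = N1 * np.log(chi2_1) + N2 * np.log(chi2_2)   # hyper-parameter weighting
print(H0[np.argmin(standard)], H0[np.argmin(hyper)])
```

Both estimates land between the two individual best fits, but the hyper-parameter rule effectively reweights each set by N_j/χ_j², so a badly fitting probe pulls the joint answer less.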
Parameter estimation in fractional diffusion models
Kubilius, Kęstutis; Ralchenko, Kostiantyn
2017-01-01
This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...
Pollen parameters estimates of genetic variability among newly ...
African Journals Online (AJOL)
Pollen parameters estimates of genetic variability among newly selected Nigerian roselle (Hibiscus sabdariffa L.) genotypes. ... Estimates of some pollen parameters were used to assess the genetic diversity among ...
Estimation of light transport parameters in biological media using ...
Indian Academy of Sciences (India)
Estimation of light transport parameters in biological media using coherent backscattering ... backscattered light for estimating the light transport parameters of biological media has been investigated. (Pramana – Journal of Physics)
Scalar-tensor cosmology with cosmological constant
International Nuclear Information System (INIS)
Maslanka, K.
1983-01-01
The equations of the scalar-tensor theory of gravitation with a cosmological constant, in the case of a homogeneous and isotropic cosmological model, can be reduced to a dynamical system of three differential equations with unknown functions H = Ṙ/R, Θ = φ̇/φ, S = ė/φ. When new variables are introduced, the system becomes more symmetrical and cosmological solutions R(t), φ(t), e(t) are found. It is shown that when the cosmological constant is introduced, a large class of solutions which depend also on the Dicke-Brans parameter can be obtained. Investigation of these solutions gives general limits for the cosmological constant and the mean density of matter in the flat model. (author)
Bhattacharjya, Rajib Kumar
2018-05-01
The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generations of the genetic algorithm, required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and also has potential for field application.
Application of spreadsheet to estimate infiltration parameters
Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed
2016-01-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...
Estimates for the parameters of the heavy quark expansion
Energy Technology Data Exchange (ETDEWEB)
Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)
2015-07-01
We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.
Enqvist, K
2012-01-01
The very basics of cosmological inflation are discussed. We derive the equations of motion for the inflaton field, introduce the slow-roll parameters, and present the computation of the inflationary perturbations and their connection to the temperature fluctuations of the cosmic microwave background.
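The slow-roll parameters referred to above are conventionally defined from the inflaton potential V(φ); in the usual reduced-Planck-mass conventions (standard definitions, not quoted from this lecture), they are:

```latex
\epsilon \equiv \frac{M_{\rm Pl}^2}{2}\left(\frac{V'}{V}\right)^2 ,
\qquad
\eta \equiv M_{\rm Pl}^2\,\frac{V''}{V} ,
\qquad
\epsilon,\ |\eta| \ll 1 ,
```

and to leading order they fix the observables of the perturbation spectrum:

```latex
n_s - 1 \simeq 2\eta - 6\epsilon ,
\qquad
r \simeq 16\epsilon .
```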
Multi-objective optimization in quantum parameter estimation
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data
Directory of Open Access Journals (Sweden)
Xueqin Zhou
2017-01-01
This article considers the estimation of the unknown numerical parameters and the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function is estimated by the kernel method. A set of simulations was conducted, which shows that the estimates perform well.
International Nuclear Information System (INIS)
Wesson, P.S.
1979-01-01
The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed. It is called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution
Parameter estimation and testing of hypotheses
International Nuclear Information System (INIS)
Fruhwirth, R.
1996-01-01
This lecture presents the basic mathematical ideas underlying the concept of a random variable and the construction and analysis of estimators and test statistics. The material presented is based mainly on four books given in the references: the general exposition of estimators and test statistics follows Kendall and Stuart, which is a comprehensive review of the field; the book by Eadie et al. contains selected topics of particular interest to experimental physicists and a host of illuminating examples from experimental high-energy physics; for the presentation of numerical procedures, the Press et al. and Thisted books have been used. The last section deals with estimation in dynamic systems. In most books the Kalman filter is presented in a Bayesian framework, often obscured by cumbrous notation. In this lecture, the link to classical least-squares estimators and regression models is stressed, with the aim of facilitating access to this less familiar topic. References are given for specific applications to track and vertex fitting and for extended expositions of these topics. In the appendix, the link between Bayesian decision rules and feed-forward neural networks is presented. (J.S.). 10 refs., 5 figs., 1 appendix
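The link between the Kalman filter and classical least squares can be made concrete in a few lines: for a static scalar state, the filter's recursive update reproduces the batch least-squares estimate (the sample mean). A minimal sketch with invented numbers:

```python
import numpy as np

# For a constant scalar state with noisy measurements, the Kalman filter
# reduces to recursive least squares: its final estimate matches the
# batch least-squares answer, i.e. the sample mean.
rng = np.random.default_rng(2)
truth, r = 3.0, 0.5**2                  # true state, measurement variance
z = truth + 0.5 * rng.standard_normal(500)

x_hat, p = 0.0, 1e6                     # diffuse prior (large variance)
for zk in z:
    k = p / (p + r)                     # Kalman gain
    x_hat = x_hat + k * (zk - x_hat)    # measurement update
    p = (1 - k) * p                     # variance update

print(x_hat, z.mean())                  # essentially identical
```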
Parameter estimation in tree graph metabolic networks
Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; Eeuwijk, van Fred; Hall, Robert D.; Groenenboom, Marian; Molenaar, Jaap J.
2016-01-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability
Directory of Open Access Journals (Sweden)
Rózsás Árpád
2015-12-01
Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that the neglect of parameter estimation uncertainty might lead to an order-of-magnitude underestimation of the failure probability.
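A minimal numerical sketch of this point (hypothetical numbers, and a normal approximation in place of the full Bayesian treatment): inflating the predictive spread to account for the uncertain mean raises the estimated failure probability noticeably, and the exact Student-t predictive tail is heavier still:

```python
from math import erf, sqrt

# Resistance R is normal; its mean and standard deviation are estimated
# from n samples. Compare the plug-in failure probability with a
# predictive one that carries the estimation uncertainty of the mean.
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu_hat, sigma_hat, n = 40.0, 4.0, 10    # hypothetical sample estimates
load = 25.0                             # deterministic load effect

# Plug-in: treat the estimates as exact.
p_plugin = norm_cdf((load - mu_hat) / sigma_hat)

# Predictive (normal approximation): the uncertain mean inflates the
# spread by a factor sqrt(1 + 1/n).
p_pred = norm_cdf((load - mu_hat) / (sigma_hat * sqrt(1.0 + 1.0 / n)))

print(p_plugin, p_pred)   # predictive failure probability is larger
```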
minimum variance estimation of yield parameters of rubber tree
African Journals Online (AJOL)
2013-03-01
It is our opinion that the Kalman filter is a robust estimator of the ... Keywords: Kalman filter, parameter estimation, rubber clones, Chow failure test, autocorrelation, STAMP.
Quintessential brane cosmology
International Nuclear Information System (INIS)
Kunze, K.E.; Vazquez-Mozo, M.A.
2002-01-01
We study a class of braneworlds where the cosmological evolution arises as the result of the movement of a three-brane in a five-dimensional static dilatonic bulk, with and without reflection symmetry. The resulting four-dimensional Friedmann equation includes a term which, for a certain range of the parameters, effectively works as a quintessence component, producing an acceleration of the universe at late times. Using current observations and bounds derived from big-bang nucleosynthesis, we estimate the parameters that characterize the model
Estimation of a collision impact parameter
International Nuclear Information System (INIS)
Shmatov, S.V.; Zarubin, P.I.
2001-01-01
We demonstrate that the nuclear collision geometry (i.e. impact parameter) can be determined in an event-by-event analysis by measuring the transverse energy flow in the pseudorapidity region 3≤|η|≤5 with a minimal dependence on collision dynamics details at the LHC energy scale. Using the HIJING model we have illustrated our calculation by a simulation of events of nucleus-nucleus interactions at the c.m.s. energy from 1 up to 5.5 TeV per nucleon and various types of nuclei
Novel Method for 5G Systems NLOS Channels Parameter Estimation
Directory of Open Access Journals (Sweden)
Vladeta Milenkovic
2017-01-01
For the development of new 5G systems to operate in mm-wave bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach to NLOS channel parameter estimation is presented. Estimation is performed based on the LCR performance measure, which enables us to estimate propagation parameters in real time and to avoid the weaknesses of ML and moment-method estimation approaches.
Parameter Estimation for Improving Association Indicators in Binary Logistic Regression
Directory of Open Access Journals (Sweden)
Mahdi Bashiri
2012-02-01
The aim of this paper is the estimation of binary logistic regression parameters by maximizing the log-likelihood function, with improved association indicators. The parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, new related indicators based on membership degree level are presented. Association measures quantify the number of success responses occurring against failures in a certain number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. Proposing a new association indicator of binary logistic regression with more sensitivity to the estimated parameters when maximizing the log-likelihood in an iterative procedure is the innovation of this study.
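The maximum-likelihood step described above is commonly solved by Newton-Raphson (equivalently, iteratively reweighted least squares). A self-contained sketch on synthetic data (not the paper's indicators or data):

```python
import numpy as np

# Newton-Raphson (IRLS) maximization of the binary logistic
# log-likelihood on synthetic data with known coefficients.
rng = np.random.default_rng(3)
n = 400
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])         # intercept + one covariate
beta_true = np.array([-0.5, 1.5])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)        # Bernoulli responses

beta = np.zeros(2)
for _ in range(25):                          # Newton iterations
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
    W = mu * (1.0 - mu)                      # IRLS weights
    grad = X.T @ (y - mu)                    # score vector
    H = X.T @ (X * W[:, None])               # observed information
    beta = beta + np.linalg.solve(H, grad)

print(beta)   # close to beta_true for large n
```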
Estimation of gloss from rough surface parameters
Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin
2005-12-01
Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation function dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to access the quality of this approximation, and good agreement is observed over large regions of parameter space.
A new Bayesian recursive technique for parameter estimation
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
Control and Estimation of Distributed Parameter Systems
Kappel, F; Kunisch, K
1998-01-01
Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.
Gravity Field Parameter Estimation Using QR Factorization
Klokocnik, J.; Wagner, C. A.; McAdoo, D.; Kostelecky, J.; Bezdek, A.; Novak, P.; Gruber, C.; Marty, J.; Bruinsma, S. L.; Gratton, S.; Balmino, G.; Baboulin, M.
2007-12-01
This study compares the accuracy of the estimated geopotential coefficients when QR factorization is used instead of the classical method applied at our institute, namely the generation of normal equations that are solved by means of Cholesky decomposition. The objective is to evaluate the gain in numerical precision, which is obtained at considerable extra cost in terms of computer resources. Therefore, a significant increase in precision must be realized in order to justify the additional cost. Numerical simulations were done in order to examine the performance of both solution methods. Reference gravity gradients were simulated, using the EIGEN-GL04C gravity field model to degree and order 300, every 3 seconds along a near-circular, polar orbit at 250 km altitude. The simulation spanned a total of 60 days. A polar orbit was selected in this simulation in order to avoid the 'polar gap' problem, which causes inaccurate estimation of the low-order spherical harmonic coefficients. Regularization is required in that case (e.g., the GOCE mission), which is not the subject of the present study. The simulated gravity gradients, to which white noise was added, were then processed with the GINS software package, applying EIGEN-CG03 as the background gravity field model, followed either by the usual normal equation computation or using the QR approach for incremental linear least squares. The accuracy assessment of the gravity field recovery consists in computing the median error degree-variance spectra, accumulated geoid errors, geoid errors due to individual coefficients, and geoid errors calculated on a global grid. The performance, in terms of memory usage, required disk space, and CPU time, of the QR versus the normal equation approach is also evaluated.
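The numerical point at issue can be reproduced at toy scale (an illustration, not the GINS software): forming normal equations squares the condition number of the design matrix, while QR factorization works with the matrix directly:

```python
import numpy as np

# Solve an ill-conditioned least-squares problem two ways:
# (a) normal equations + Cholesky (condition number of A^T A is cond(A)^2)
# (b) QR factorization (works with A directly)
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)
A = np.vander(t, 10, increasing=True)    # ill-conditioned monomial design
coef_true = rng.standard_normal(10)
b = A @ coef_true                        # consistent right-hand side

# (a) normal equations with Cholesky decomposition
L = np.linalg.cholesky(A.T @ A)
x_ne = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))

# (b) QR factorization
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.linalg.norm(x_ne - coef_true), np.linalg.norm(x_qr - coef_true))
```

The QR solution recovers the true coefficients to near machine precision relative to cond(A), while the normal-equations error is governed by cond(A)², which is the gain in numerical precision the study quantifies.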
Energy Technology Data Exchange (ETDEWEB)
Ernazarov, K.K. [RUDN University, Institute of Gravitation and Cosmology, Moscow (Russian Federation); Ivashchuk, V.D. [RUDN University, Institute of Gravitation and Cosmology, Moscow (Russian Federation); VNIIMS, Center for Gravitation and Fundamental Metrology, Moscow (Russian Federation)
2017-06-15
We consider a D-dimensional gravitational model with a Gauss-Bonnet term and the cosmological term Λ. We restrict the metrics to diagonal cosmological ones and find, for certain Λ, a class of solutions with exponential time dependence of three scale factors, governed by three non-coinciding Hubble-like parameters H > 0, h₁ and h₂, corresponding to factor spaces of dimensions m > 2, k₁ > 1 and k₂ > 1, respectively, with k₁ ≠ k₂ and D = 1 + m + k₁ + k₂. Any of these solutions describes an exponential expansion of the 3d subspace with Hubble parameter H and zero variation of the effective gravitational constant G. We prove the stability of these solutions in a class of cosmological solutions with diagonal metrics. (orig.)
Online State Space Model Parameter Estimation in Synchronous Machines
Directory of Open Access Journals (Sweden)
Z. Gallehdari
2014-06-01
The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.
Parameter Estimates in Differential Equation Models for Chemical Kinetics
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
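For the n = 1 case mentioned above, a minimal sketch (invented data) of estimating the rate constant: first-order decay C(t) = C0·e^(−kt) becomes a straight line in log-concentration, so k falls out of a linear regression:

```python
import numpy as np

# First-order kinetics: C(t) = C0 * exp(-k t). Taking logarithms turns
# the rate constant k into the slope of a linear regression.
rng = np.random.default_rng(5)
k_true, C0 = 0.8, 2.0                   # assumed rate constant, initial conc.
t = np.linspace(0, 4, 40)
C = C0 * np.exp(-k_true * t) * np.exp(0.01 * rng.standard_normal(len(t)))

slope, intercept = np.polyfit(t, np.log(C), 1)
k_hat, C0_hat = -slope, np.exp(intercept)
print(k_hat, C0_hat)
```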
Estimation of ground water hydraulic parameters
Energy Technology Data Exchange (ETDEWEB)
Hvilshoej, Soeren
1998-11-01
The main objective was to assess field methods to determine ground water hydraulic parameters and to develop and apply new analysis methods to selected field techniques. A field site in Vejen, Denmark, which previously has been intensively investigated on the basis of a large number of mini slug tests and tracer tests, was chosen for experimental application and evaluation. Particular interest was in analysing partially penetrating pumping tests and a recently proposed single-well dipole test. Three wells were constructed in which partially penetrating pumping tests and multi-level single-well dipole tests were performed. In addition, multi-level slug tests, flow meter tests, gamma-logs, and geologic characterisation of soil samples were carried out. In addition to the three Vejen analyses, data from previously published partially penetrating pumping tests were analysed assuming homogeneous anisotropic aquifer conditions. In the present study methods were developed to analyse partially penetrating pumping tests and multi-level single-well dipole tests based on an inverse numerical model. The obtained horizontal hydraulic conductivities from the partially penetrating pumping tests were in accordance with measurements obtained from multi-level slug tests and mini slug tests. Accordance was also achieved between the anisotropy ratios determined from partially penetrating pumping tests and multi-level single-well dipole tests. It was demonstrated that the partially penetrating pumping test analysed by an inverse numerical model is a very valuable technique that may provide hydraulic information on the storage terms and the vertical distribution of the horizontal and vertical hydraulic conductivity under both confined and unconfined aquifer conditions. (EG) 138 refs.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
Energy Technology Data Exchange (ETDEWEB)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
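The accept/reject rule described above is a standard random-walk Metropolis sampler. A minimal Python sketch of the idea follows; the road-load form, the noise level, and every number here are illustrative assumptions, not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical road-load model: F = m*a + m*g*Crr + 0.5*rho*CdA*v^2
def road_load(params, v, a, rho=1.2, g=9.81):
    m, cda, crr = params
    return m * a + m * g * crr + 0.5 * rho * cda * v**2

# Synthetic "logged" driving data with known true parameters
v = rng.uniform(5, 25, 200)                 # speed, m/s
a = rng.normal(0, 0.5, 200)                 # acceleration, m/s^2
true_p = np.array([15000.0, 5.0, 0.007])    # mass (kg), CdA (m^2), Crr (-)
f_meas = road_load(true_p, v, a) + rng.normal(0, 200, 200)   # noisy load, N

def log_prob(p):
    if np.any(p <= 0):
        return -np.inf                      # flat prior on positive values
    resid = f_meas - road_load(p, v, a)
    return -0.5 * np.sum((resid / 200.0) ** 2)

# Random-walk Metropolis: accept by the probability ratio to the current state
chain = [np.array([14000.0, 4.5, 0.008])]
lp = log_prob(chain[-1])
step = np.array([30.0, 0.05, 0.0002])
for _ in range(5000):
    prop = chain[-1] + rng.normal(0, step)
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        chain.append(prop)
        lp = lp_prop
    else:
        chain.append(chain[-1])

post = np.array(chain[2000:])   # chain history after burn-in
print(post.mean(axis=0))        # posterior means of (mass, CdA, Crr)
```

The chain history, rather than a single optimum, is what supplies the distribution of parameter sets the abstract refers to.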
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
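Once the states are eliminated, the model is linear in its parameters and ordinary least squares applies directly. A small sketch on a hypothetical second-order system already reduced to input-output form (the coefficients and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2nd-order system in input-output form (states eliminated):
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
a1, a2, b1, b2 = 1.5, -0.7, 1.0, 0.5
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 0.01*rng.normal()

# Least squares over stacked regressors of past outputs and inputs
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
print(theta)   # estimates of (a1, a2, b1, b2)
```

With the parameters in hand, the states can then be reconstructed from the input-output data, as the abstract describes.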
Parameter estimation and prediction of nonlinear biological systems: some examples
Doeswijk, T.G.; Keesman, K.J.
2006-01-01
Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (x_k = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which
A Novel Nonlinear Parameter Estimation Method of Soft Tissues
Directory of Open Access Journals (Sweden)
Qianqian Tong
2017-12-01
Full Text Available The elastic parameters of soft tissues are important for medical diagnosis and virtual surgery simulation. In this study, we propose a novel nonlinear parameter estimation method for soft tissues. Firstly, an in-house data acquisition platform was used to obtain external forces and their corresponding deformation values. To provide highly precise data for estimating nonlinear parameters, the measured forces were corrected using the constructed weighted combination forecasting model based on a support vector machine (WCFM_SVM). Secondly, a tetrahedral finite element parameter estimation model was established to describe the physical characteristics of soft tissues, using the substitution parameters of Young’s modulus and Poisson’s ratio to avoid solving complicated nonlinear problems. To improve the robustness of our model and avoid poor local minima, the initial parameters solved by a linear finite element model were introduced into the parameter estimation model. Finally, a self-adapting Levenberg–Marquardt (LM) algorithm was presented, which is capable of adaptively adjusting iterative parameters to solve the established parameter estimation model. The maximum absolute error of our WCFM_SVM model was less than 0.03 Newton, resulting in more accurate forces in comparison with other correction models tested. The maximum absolute error between the calculated and measured nodal displacements was less than 1.5 mm, demonstrating that our nonlinear parameters are precise.
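The self-adapting Levenberg–Marquardt idea, adjusting the damping factor according to whether a trial step reduces the residual, can be sketched on a toy nonlinear force-deformation law. The exponential model and all values below are illustrative stand-ins, not the paper's finite element tissue model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical nonlinear force-deformation law: f(x) = A*(exp(b*x) - 1)
def model(p, x):
    A, b = p
    return A * (np.exp(b * x) - 1.0)

def jac(p, x):
    A, b = p
    return np.column_stack([np.exp(b * x) - 1.0, A * x * np.exp(b * x)])

x = np.linspace(0.0, 1.0, 50)
p_true = np.array([0.8, 2.0])
f = model(p_true, x) + 0.01 * rng.normal(size=50)

# Minimal self-adapting Levenberg-Marquardt loop
p, lam = np.array([0.3, 1.0]), 1e-3
for _ in range(50):
    r = f - model(p, x)
    J = jac(p, x)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.sum((f - model(p + step, x)) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5    # step accepted: relax damping
    else:
        lam *= 2.0                      # step rejected: increase damping

print(p)   # fitted (A, b)
```

Small damping recovers fast Gauss-Newton steps near the optimum, while large damping falls back to cautious gradient-like steps far from it, which is what makes the adaptation worthwhile.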
International Nuclear Information System (INIS)
Chow, Nathan; Khoury, Justin
2009-01-01
We study the cosmology of a galileon scalar-tensor theory, obtained by covariantizing the decoupling Lagrangian of the Dvali-Gabadadze-Porrati (DGP) model. Despite being local in 3+1 dimensions, the resulting cosmological evolution is remarkably similar to that of the full 4+1-dimensional DGP framework, both for the expansion history and the evolution of density perturbations. As in the DGP model, the covariant galileon theory yields two branches of solutions, depending on the sign of the galileon velocity. Perturbations are stable on one branch and ghostlike on the other. An interesting effect uncovered in our analysis is a cosmological version of the Vainshtein screening mechanism: at early times, the galileon dynamics are dominated by self-interaction terms, resulting in its energy density being suppressed compared to matter or radiation; once the matter density has redshifted sufficiently, the galileon becomes an important component of the energy density and contributes to dark energy. We estimate conservatively that the resulting expansion history is consistent with the observed late-time cosmology, provided that the scale of modification satisfies r_c ≳ 15 Gpc.
Robust Parameter and Signal Estimation in Induction Motors
DEFF Research Database (Denmark)
Børsting, H.
This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters...... in nonlinear systems, have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods...... for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers a higher flexibility than the model programmed in PSIM software.
Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters
Shi, L.
2015-12-01
This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a poor initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
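A scalar sketch of the EnKF parameter update used in studies of this kind follows. The forward model here is a simple monotone stand-in, not a Richards-equation solver, and the parameter, noise level, and ensemble size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in forward model: predicted observation as a function of the
# (hypothetical) soil parameter theta; monotone and mildly nonlinear
def forward(theta):
    return np.tanh(theta) + 0.5 * theta

theta_true, obs_err = 0.8, 0.02
ens = rng.normal(0.0, 1.0, 100)       # prior parameter ensemble

for _ in range(10):                   # assimilate ten noisy observations
    d = forward(theta_true) + rng.normal(0, obs_err)
    hx = forward(ens)
    cov_th = np.cov(ens, hx)[0, 1]    # parameter/prediction cross-covariance
    gain = cov_th / (np.var(hx, ddof=1) + obs_err**2)
    # Perturbed-observation EnKF update of each ensemble member
    ens = ens + gain * (d + rng.normal(0, obs_err, ens.size) - hx)

print(ens.mean(), ens.std())   # ensemble mean and shrinking spread
```

The same cross-covariance structure is what lets EnKF weigh heterogeneous data types (heads, water contents, groundwater levels) against one another in the full problem.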
Combination and interpretation of observables in Cosmology
Directory of Open Access Journals (Sweden)
Virey Jean-Marc
2010-04-01
Full Text Available The standard cosmological model has deep theoretical foundations but needs the introduction of two major unknown components, dark matter and dark energy, to be in agreement with various observations. Dark matter describes a non-relativistic collisionless fluid of (non-baryonic) matter which amounts to 25% of the total density of the universe. Dark energy is a new kind of fluid, not of matter type, representing 70% of the total density, which should explain the recent acceleration of the expansion of the universe. Alternatively, one can reject the idea of adding one or two new components and argue instead that the equations used to make the interpretation should be modified on cosmological scales. Instead of dark matter one can invoke a failure of Newton's laws. Instead of dark energy, two approaches are proposed: general relativity (in terms of the Einstein equation) should be modified, or the cosmological principle which fixes the metric used for cosmology should be abandoned. One of the main objectives of the community is to find the path to the relevant interpretations thanks to the next generation of experiments, which should provide large statistics of observational data. Unfortunately, cosmological information is difficult to pin down directly from the measurements, and it is mandatory to combine the various observables to get the cosmological parameters. This is not problematic from the statistical point of view, but assumptions and approximations made for the analysis may bias our interpretation of the data. Consequently, strong attention should be paid to the statistical methods used for parameter estimation and model testing. After a review of the basics of cosmology where the cosmological parameters are introduced, we discuss the various cosmological probes and their associated observables used to extract cosmological information. We present the results obtained from several statistical analyses combining data of different nature but
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and the number and location of sensor readings can be determined from the accuracies of the parameter estimates.
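The simulate-then-fit loop described above can be illustrated compactly with a 1D instantaneous-release advection-diffusion solution and a transparent grid search standing in for the batch least-squares processor (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D instantaneous-release advection-diffusion solution (unit mass at x=0, t=0)
def conc(x, t, u, D):
    return np.exp(-(x - u*t)**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)

# Simulated remote-sensed data: transport model output plus Gaussian noise
x = np.linspace(-2.0, 8.0, 40)
t, u_true, D_true = 2.0, 1.5, 0.4
data = conc(x, t, u_true, D_true) + rng.normal(0, 0.005, x.size)

# Batch least squares over a parameter grid (coarse but transparent)
us = np.linspace(0.5, 2.5, 81)
Ds = np.linspace(0.1, 1.0, 91)
sse = np.array([[np.sum((data - conc(x, t, u, D))**2) for D in Ds] for u in us])
i, j = np.unravel_index(sse.argmin(), sse.shape)
print(us[i], Ds[j])   # grid estimates of advection speed u and diffusivity D
```

Repeating the fit at different noise levels or sensor spacings is then a direct way to study how sensor design drives parameter accuracy, as the abstract suggests.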
State Estimation-based Transmission line parameter identification
Directory of Open Access Journals (Sweden)
Fredy Andrés Olarte Dussán
2010-01-01
Full Text Available This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.
A variational approach to parameter estimation in ordinary differential equations
Directory of Open Access Journals (Sweden)
Kaschek Daniel
2012-08-01
Full Text Available Abstract Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
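For the simplest reaction network, a first-order reaction A → B, conventional parameter estimation from time-resolved data reduces to linear least squares in log-space. A minimal sketch (rate constant, noise, and sampling times are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# First-order reaction A -> B with rate constant k: a(t) = a0*exp(-k*t)
t = np.linspace(0.0, 5.0, 30)
k_true, a0 = 0.7, 2.0
a = a0 * np.exp(-k_true * t) * np.exp(rng.normal(0, 0.02, t.size))  # noisy data

# In log-space the model is linear in (log a0, k): log a = log a0 - k*t
A = np.column_stack([np.ones_like(t), -t])
coef = np.linalg.lstsq(A, np.log(a), rcond=None)[0]
print(np.exp(coef[0]), coef[1])   # estimates of (a0, k)
```

The variational approach in the paper generalises far beyond this: it augments the ODE system so that entire unknown input courses, not just constants like k, become estimable quantities.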
Estimating Soil Hydraulic Parameters using Gradient Based Approach
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from forward solution (numerical or analytical) of differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, it has been never demonstrated. This study extends AGM approach to PDEs and applies it for estimating parameters of Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real world application of Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
Kinetic parameter estimation from attenuated SPECT projection measurements
International Nuclear Information System (INIS)
Reutter, B.W.; Gullberg, G.T.
1998-01-01
Conventional analysis of dynamically acquired nuclear medicine data involves fitting kinetic models to time-activity curves generated from regions of interest defined on a temporal sequence of reconstructed images. However, images reconstructed from the inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system can contain artifacts that lead to biases in the estimated kinetic parameters. To overcome this problem the authors investigated the estimation of kinetic parameters directly from projection data by modeling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated transverse slice, kinetic parameters were estimated for simple one compartment models for three myocardial regions of interest, as well as for the liver. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated data had biases ranging between 1--63%. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Predicted uncertainties (standard deviations) of the parameters obtained for 500,000 detected events ranged between 2--31% for the myocardial uptake parameters and 2--23% for the myocardial washout parameters
Models for estimating photosynthesis parameters from in situ production profiles
Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana
2017-12-01
The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
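Recovering the initial slope and the assimilation number from a production profile can be illustrated with the Jassby-Platt tanh formulation, one of the usual photosynthesis-irradiance functions; the profile, light attenuation, and noise below are synthetic, not the Station Aloha data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Jassby-Platt photosynthesis-irradiance function:
#   P(I) = Pm * tanh(alpha * I / Pm)
def p_i(I, alpha, pm):
    return pm * np.tanh(alpha * I / pm)

# Synthetic daily production profile: irradiance decays with depth
z = np.arange(0.0, 60.0, 2.0)                  # depth, m
I = 1500.0 * np.exp(-0.1 * z)                  # light at each depth
alpha_true, pm_true = 0.05, 4.0
P = p_i(I, alpha_true, pm_true) * (1 + rng.normal(0, 0.03, I.size))

# Recover the initial slope and assimilation number by least squares
alphas = np.linspace(0.01, 0.1, 91)
pms = np.linspace(1.0, 8.0, 141)
sse = np.array([[np.sum((P - p_i(I, a, pm))**2) for pm in pms] for a in alphas])
i, j = np.unravel_index(sse.argmin(), sse.shape)
print(alphas[i], pms[j])   # estimated (initial slope, assimilation number)
```

Swapping a different photosynthesis-irradiance function into `p_i` while keeping the same fitting loop is precisely the parameter-exchange experiment the abstract describes: the recovered parameter values shift with the functional form.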
REML estimates of genetic parameters of sexual dimorphism for ...
Indian Academy of Sciences (India)
Administrator
Full and half sibs were distinguished, in contrast to usual isofemale studies in which animals ... studies. Thus, the aim of this study was to estimate genetic parameters of sexual dimorphism in isofemale lines using ..... Muscovy ducks. Genet.
A distributed approach for parameter estimation in Systems Biology models
International Nuclear Information System (INIS)
Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.
2009-01-01
Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
Kinetic parameter estimation from SPECT cone-beam projection measurements
International Nuclear Information System (INIS)
Huesman, Ronald H.; Reutter, Bryan W.; Zeng, G. Larry; Gullberg, Grant T.
1998-01-01
Kinetic parameters are commonly estimated from dynamically acquired nuclear medicine data by first reconstructing a dynamic sequence of images and subsequently fitting the parameters to time-activity curves generated from regions of interest overlaid upon the image sequence. Biased estimates can result from images reconstructed using inconsistent projections of a time-varying distribution of radiopharmaceutical acquired by a rotating SPECT system. If the SPECT data are acquired using cone-beam collimators wherein the gantry rotates so that the focal point of the collimators always remains in a plane, additional biases can arise from images reconstructed using insufficient, as well as truncated, projection samples. To overcome these problems we have investigated the estimation of kinetic parameters directly from SPECT cone-beam projection data by modelling the data acquisition process. To accomplish this it was necessary to parametrize the spatial and temporal distribution of the radiopharmaceutical within the SPECT field of view. In a simulated chest image volume, kinetic parameters were estimated for simple one-compartment models for four myocardial regions of interest. Myocardial uptake and washout parameters estimated by conventional analysis of noiseless simulated cone-beam data had biases ranging between 3-26% and 0-28%, respectively. Parameters estimated directly from the noiseless projection data were unbiased as expected, since the model used for fitting was faithful to the simulation. Statistical uncertainties of parameter estimates for 10 000 000 events ranged between 0.2-9% for the uptake parameters and between 0.3-6% for the washout parameters. (author)
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
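The hybrid state-and-parameter idea can be shown in its simplest joint form: append the unknown parameter to the state vector and run an extended Kalman filter. The scalar toy system below is an illustration only, not the chaotic meteorological model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy scalar system x[k+1] = a*x[k] + w, observed as y = x + v.
# The unknown parameter a is appended to the state: z = [x, a] (joint EKF).
a_true, q, r = 0.9, 0.05, 0.02
x = 1.0
z = np.array([1.0, 0.5])            # initial guesses for x and a
P = np.diag([1.0, 1.0])
H = np.array([[1.0, 0.0]])          # only x is observed

for _ in range(500):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r))
    # Predict: f(z) = [a*x, a]; the Jacobian linearises the product a*x
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[0] * z[1], z[1]])
    P = F @ P @ F.T + np.diag([q, 1e-6])   # tiny noise keeps a adjustable
    # Update: standard Kalman correction of the augmented state
    S = H @ P @ H.T + r
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z)   # [state estimate, parameter estimate]
```

In the ensemble setting of the paper the same coupling arises through sample cross-covariances between parameters and observed variables rather than an explicit Jacobian.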
Kalman filter data assimilation: Targeting observations and parameter estimation
International Nuclear Information System (INIS)
Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex
2014-01-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation
Kalman filter estimation of RLC parameters for UMP transmission line
Directory of Open Access Journals (Sweden)
Mohd Amin Siti Nur Aishah
2018-01-01
Full Text Available This paper presents the development of a Kalman filter that enables estimation of the resistance (R), inductance (L), and capacitance (C) values for the Universiti Malaysia Pahang (UMP) short transmission line. To overcome the weaknesses of the existing system, such as power losses in the transmission line, a Kalman filter can be a better solution to estimate the parameters. The aim of this paper is to estimate the RLC values by using a Kalman filter, which in the end can increase the system efficiency in UMP. In this research, a MATLAB Simulink model is developed to analyse the UMP short transmission line under different noise conditions, to represent certain unknown parameters which are difficult to predict. The data are then used for comparison purposes between calculated and estimated values. The results illustrate that the Kalman filter estimates the RLC parameters accurately with small error. A comparison of accuracy between the Kalman filter and the Least Squares method is also presented to evaluate their performances.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
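The pure-birth model mentioned above is the analytically tractable end of the spectrum: its MLE needs none of the rare-event machinery because it has a closed form. A small sketch with an invented birth rate:

```python
import numpy as np

rng = np.random.default_rng(8)

# Gillespie simulation of a pure-birth process: with n individuals alive,
# the waiting time to the next birth is exponential with rate lam * n.
lam_true, n = 0.5, 10
exposure, births = 0.0, 0
while births < 2000:
    dt = rng.exponential(1.0 / (lam_true * n))
    exposure += n * dt          # accumulated "individual-time" at risk
    n += 1
    births += 1

# For this analytically tractable model the MLE has a closed form:
#   lam_hat = number of births / total exposure
lam_hat = births / exposure
print(lam_hat)
```

For the more complex systems in the paper no such closed form exists, which is why consistent trajectories must be simulated and why accelerating that simulation matters.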
State and parameter estimation in biotechnical batch reactors
Keesman, K.J.
2000-01-01
In this paper the problem of state and parameter estimation in biotechnical batch reactors is considered. Models describing the biotechnical process behaviour are usually nonlinear with time-varying parameters. Hence, the resulting large dimensions of the augmented state vector, roughly > 7, in
On the Nature of SEM Estimates of ARMA Parameters.
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
On robust parameter estimation in brain-computer interfacing
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
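The hierarchical view of outlierness can be illustrated with trial-structured data: an artifact that corrupts whole trials is handled naturally by robustifying across trials rather than across raw samples. This toy example is not the paper's minimum-divergence estimator, only a demonstration of why trial structure matters:

```python
import numpy as np

rng = np.random.default_rng(9)

# Structured data: 50 trials x 200 samples each; the first 5 trials are
# corrupted wholesale (e.g. a loose electrode during those trials)
data = rng.normal(2.0, 1.0, size=(50, 200))
data[:5] += 20.0

naive = data.mean()                   # grand mean: pulled toward outlier trials
trial_means = data.mean(axis=1)       # respect the trial structure...
robust = np.median(trial_means)       # ...and robustify across trials

print(naive, robust)
```

A sample-level robust estimator would have to reject a quarter-million points one by one; the trial-level view rejects five coherent units instead, which is the structural information the abstract argues should not be ignored.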
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Estimation of genetic parameters for body weights of Kurdish sheep ...
African Journals Online (AJOL)
Genetic parameters and (co)variance components were estimated by the restricted maximum likelihood (REML) procedure, using six animal models (models 1 to 6), for body weight at birth and at three, six, nine and 12 months of age in a Kurdish sheep flock. Direct and maternal breeding values were estimated using the best ...
Aircraft parameter estimation - A tool for development of ...
Indian Academy of Sciences (India)
In addition, actuator performance and controller gains may be flight condition dependent. Moreover, this approach may result in open-loop parameter estimates with low accuracy. 6. Aerodynamic databases for high fidelity flight simulators. Estimation of a comprehensive aerodynamic model suitable for a flight simulator is an.
Audren, Benjamin; Bird, Simeon; Haehnelt, Martin G.; Viel, Matteo
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservat...
A Note On the Estimation of the Poisson Parameter
Directory of Open Access Journals (Sweden)
S. S. Chitgopekar
1985-01-01
This note considers the Poisson distribution when there are errors in observing the zeros and ones, and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that neither method gives unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
In this study, parameter identification of a damped compound pendulum system is carried out using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. Model performance is validated using the mean squared error (MSE) between the actual and predicted output responses. Finally, a comparative study is conducted between BA and a conventional estimation method, least squares (LS). Based on the results obtained, the MSE produced by BA outperforms that of the LS method.
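A minimal Python/NumPy sketch of the least-squares ARX baseline this kind of study compares against; the first-order model, PRBS-like input, true coefficients, and noise level are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Least-squares estimation of a first-order ARX model
#   y[t] = a*y[t-1] + b*u[t-1] + e[t]
# All parameters below are assumed for illustration.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.choice([-1.0, 1.0], size=500)          # PRBS-like input signal
y = np.zeros(500)
for t in range(1, 500):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.standard_normal()

# Regressor matrix: one row per sample, columns [y[t-1], u[t-1]]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

# Validate with the mean squared error between actual and predicted output
y_pred = Phi @ theta
mse = np.mean((y[1:] - y_pred) ** 2)
```

A swarm-based variant such as BA would replace the closed-form `lstsq` step with a population search minimizing the same MSE objective.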
Iterative methods for distributed parameter estimation in parabolic PDE
Energy Technology Data Exchange (ETDEWEB)
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Method for Estimating the Parameters of LFM Radar Signal
Directory of Open Access Journals (Sweden)
Tan Chuan-Zhang
2017-01-01
In order to obtain reliable parameter estimates, it is very important to preserve the integrity of the linear frequency modulation (LFM) signal. Therefore, in practical LFM radar signal processing, the length of the data frame is often greater than the pulse width (PW) of the signal. In this condition, estimating the parameters by the fractional Fourier transform (FrFT) causes the signal-to-noise ratio (SNR) to decrease. To address this problem, we multiply the data frame by a Gaussian window to improve the SNR. Furthermore, to improve the precision of parameter estimation, a novel algorithm is derived via Lagrange interpolation polynomials, and we enhance the algorithm by a logarithmic transformation. Simulation results demonstrate that the derived algorithm significantly reduces the estimation errors of the chirp rate and initial frequency.
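The paper's FrFT/Lagrange method is not reproduced here, but the two quantities being estimated can be illustrated with a simple sketch: for a clean complex chirp, a quadratic fit to the unwrapped phase recovers the chirp rate and initial frequency. All signal parameters are assumptions:

```python
import numpy as np

# Estimate LFM chirp rate and initial frequency from a noiseless chirp
# via a quadratic fit to the unwrapped phase (illustrative method only).
fs = 1000.0
t = np.arange(1000) / fs
f0, k = 50.0, 200.0                    # initial frequency (Hz), chirp rate (Hz/s)
s = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# phase(t) = pi*k*t^2 + 2*pi*f0*t, so a degree-2 polyfit exposes both
phase = np.unwrap(np.angle(s))
c2, c1, _ = np.polyfit(t, phase, 2)
k_est = c2 / np.pi
f0_est = c1 / (2 * np.pi)
```

In noisy data this naive phase fit degrades quickly, which is exactly why windowing and more robust estimators such as the FrFT are used in practice.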
Axions in inflationary cosmology
International Nuclear Information System (INIS)
Linde, A.
1991-01-01
The problem of the cosmological constraints on the axion mass is re-examined. It is argued that in the context of inflationary cosmology the constraint m a > or approx.10 -5 eV can be avoided even when the axion perturbations produced during inflation are taken into account. It is shown also that in most axion models the effective parameter f a rapidly changes during inflation. This modifies some earlier statements concerning isothermal perturbations in the axion cosmology. A hybrid inflation scenario is proposed which combines some advantages of chaotic inflation with specific features of new and/or extended inflation. Its implications for the axion cosmology are discussed. (orig.)
Simple method for quick estimation of aquifer hydrogeological parameters
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and can reliably identify them from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
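For context, a conventional baseline that the abstract's regression shortcut is meant to replace is nonlinear curve fitting of the Theis well function. A sketch with synthetic, noiseless drawdown data; the pumping rate, radius, and aquifer parameters are assumed values:

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

# Theis solution: s(t) = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t),
# where W is the exponential integral (scipy's exp1).
Q, r = 0.02, 50.0                      # pumping rate (m^3/s), radius (m)
T_true, S_true = 1e-3, 2e-4            # transmissivity, storativity (assumed)

def theis(t, T, S):
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t = np.logspace(2, 5, 30)              # observation times (s)
s_obs = theis(t, T_true, S_true)       # synthetic "observed" drawdowns

(T_est, S_est), _ = curve_fit(
    theis, t, s_obs, p0=(5e-4, 1e-4),
    bounds=([1e-6, 1e-8], [1.0, 1.0]),
)
```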
A software for parameter estimation in dynamic models
Directory of Open Access Journals (Sweden)
M. Yuceer
2008-12-01
Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lack in generality, while others do not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES has been developed in MATLAB environment. When tested with extensive example problems from literature, the suggested approach is proven to provide good agreement between predicted and observed data within relatively less computing time and iterations.
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models, in the sense of discretely, partially observed Itô stochastic differential equations with measurement noise, is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.
Traveltime approximations and parameter estimation for orthorhombic media
Masmoudi, Nabil
2016-05-30
Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to the Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous-medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.
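The Shanks transform mentioned above is a generic sequence-acceleration device. A sketch of its mechanics on a slowly converging alternating series (unrelated to the seismic application, purely to show how one Shanks pass sharpens a sequence of partial sums):

```python
import numpy as np

def shanks(s):
    """One Shanks pass on a sequence of partial sums s_0, s_1, ..."""
    s = np.asarray(s, dtype=float)
    num = s[2:] * s[:-2] - s[1:-1] ** 2
    den = s[2:] + s[:-2] - 2.0 * s[1:-1]
    return num / den

# Alternating harmonic series: sum (-1)^(n+1)/n -> ln 2, slowly
terms = np.array([(-1) ** (n + 1) / n for n in range(1, 12)])
partial = np.cumsum(terms)
accelerated = shanks(partial)
```

The accelerated sequence lands far closer to ln 2 than the raw partial sums, which is the same effect exploited to tighten the traveltime expansions.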
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
International Nuclear Information System (INIS)
Wainwright, J.
1990-01-01
The workshop on mathematical cosmology was devoted to four topics of current interest. This report contains a brief discussion of the historical background of each topic and a concise summary of the content of each talk. The topics were; the observational cosmology program, the cosmological perturbation program, isotropic singularities, and the evolution of Bianchi cosmologies. (author)
Small sample GEE estimation of regression parameters for longitudinal data.
Paul, Sudhir; Zhang, Xuemao
2014-09-28
Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
Pattern statistics on Markov chains and sensitivity to parameter estimation
Directory of Open Access Journals (Sweden)
Nuel Grégory
2006-10-01
Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition, and its parameters usually must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, etc.). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes, due to the high sensitivity of pattern statistics to parameter estimation.
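A toy illustration of the pipeline the abstract studies: fit first-order Markov parameters from a sequence, then plug them into a simple expected pattern count. The sequence and the 2-letter word are made-up examples, and the count formula is the crude binomial-style approximation, not the paper's overlap-corrected statistics:

```python
import numpy as np

# Fit a first-order Markov chain from a DNA-like sequence (illustrative data)
seq = "ATATGCATATATGCGC" * 50
alphabet = sorted(set(seq))
idx = {c: i for i, c in enumerate(alphabet)}

counts = np.zeros((len(alphabet), len(alphabet)))
for a, b in zip(seq[:-1], seq[1:]):
    counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # estimated transition matrix
mu = counts.sum(axis=1) / counts.sum()           # empirical letter frequencies

# Approximate expected count of the 2-letter word "AT" under the fitted model;
# any error in mu or P propagates directly into this statistic.
n = len(seq)
expected_AT = (n - 1) * mu[idx["A"]] * P[idx["A"], idx["T"]]
```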
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent variables and a dependent variable. In a logistic regression model the dependent variable is categorical; when the categories of the dependent variable are ordered, the model is an ordinal logistic regression model. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed to determine population values from a sample. The purpose of this research is parameter estimation of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages in Semarang City as observation units. The research yields a local GWOLR model for each village, together with the probability of each category of the number of dengue fever patients.
International Nuclear Information System (INIS)
Raychaudhuri, A.K.
1979-01-01
The subject is covered in chapters, entitled; introduction; Newtonian gravitation and cosmology; general relativity and relativistic cosmology; analysis of observational data; relativistic models not obeying the cosmological principle; microwave radiation background; thermal history of the universe and nucleosynthesis; singularity of cosmological models; gravitational constant as a field variable; cosmological models based on Einstein-Cartan theory; cosmological singularity in two recent theories; fate of perturbations of isotropic universes; formation of galaxies; baryon symmetric cosmology; assorted topics (including extragalactic radio sources; Mach principle). (U.K.)
Parameter Estimation of Damped Compound Pendulum Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure consists of input-output data collection, ARX model order selection, and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10-5. Based on the results obtained, DE has a lower MSE than the LS method.
Cosmology and the early universe
Di Bari, Pasquale
2018-01-01
This book discusses cosmology from both an observational and a strong theoretical perspective. The first part focuses on gravitation, notably the expansion of the universe and determination of cosmological parameters, before moving onto the main emphasis of the book, the physics of the early universe, and the connections between cosmological models and particle physics. Readers will gain a comprehensive account of cosmology and the latest observational results, without requiring prior knowledge of relativistic theories, making the text ideal for students.
CTER—Rapid estimation of CTF parameters with error assessment
Energy Technology Data Exchange (ETDEWEB)
Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)
2014-05-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of the estimated defocus and of the astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.
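The bootstrap idea CTER relies on for error assessment can be sketched generically: resample the data with replacement, recompute the estimator each time, and take the spread of the resampled estimates as the standard error. The data here are synthetic stand-ins and the estimator is a plain mean, not a CTF fit:

```python
import numpy as np

# Bootstrap standard error of an estimator (here: the sample mean).
rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, size=200)            # synthetic measurements

boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])
se_boot = boot.std(ddof=1)                       # bootstrap standard error

# Analytic check, available only because the estimator is a simple mean
se_theory = data.std(ddof=1) / np.sqrt(data.size)
```

The appeal of the bootstrap, as in CTER, is that `se_boot` is available even for estimators with no closed-form error expression.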
An approach of parameter estimation for non-synchronous systems
International Nuclear Information System (INIS)
Xu Daolin; Lu Fangfang
2005-01-01
Synchronization-based parameter estimation is simple and effective but only applicable to synchronous systems. To overcome this limitation, we propose a technique whereby the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems.
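As a point of contrast to the synchronization-based approach, here is a sketch of direct least-squares identification from a scalar time series, using a logistic map whose parameter enters linearly; the map and its parameter value are illustrative assumptions, not a system from the paper:

```python
import numpy as np

# Generate a chaotic time series from the logistic map x[t+1] = r*x[t]*(1-x[t])
r_true = 3.9
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = r_true * x[t] * (1.0 - x[t])

# The model is linear in r, so least squares has a closed form:
#   r = <x[t+1], g[t]> / <g[t], g[t]>,  g[t] = x[t]*(1 - x[t])
g = x[:-1] * (1.0 - x[:-1])
r_est = np.dot(x[1:], g) / np.dot(g, g)
```

When the parameters enter nonlinearly or the model structure is only partially known, such closed forms disappear, which is where synchronization-control minimization becomes attractive.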
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all ... the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series ...
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals, which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
Application of genetic algorithms for parameter estimation in liquid chromatography
International Nuclear Information System (INIS)
Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes
2012-01-01
In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are general-purpose approximate algorithms that seek, and hopefully find, good solutions at a reasonable computational cost; they are iterative procedures that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated.
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are statistically more rigorous likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
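The MCMC machinery referred to above can be sketched with a minimal Metropolis sampler for a single parameter under a flat prior and Gaussian likelihood; the data, proposal scale, and chain length are illustrative assumptions, not the Ecomag setup:

```python
import numpy as np

# Minimal Metropolis sampler for the mean of Gaussian "simulation errors".
rng = np.random.default_rng(3)
obs = rng.normal(1.5, 1.0, size=100)            # synthetic residual series

def log_post(mu):
    # Flat prior + Gaussian likelihood with known unit variance
    return -0.5 * np.sum((obs - mu) ** 2)

chain = np.empty(5000)
mu = 0.0
for i in range(5000):
    prop = mu + 0.3 * rng.standard_normal()     # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                               # accept
    chain[i] = mu

post_mean = chain[1000:].mean()                 # discard burn-in
```

Real hydrological applications sample many parameters jointly and use the AR(1) error likelihood; only the accept/reject skeleton carries over.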
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
Revisiting Boltzmann learning: parameter estimation in Markov random fields
DEFF Research Database (Denmark)
Hansen, Lars Kai; Andersen, Lars Nonboe; Kjems, Ulrik
1996-01-01
This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization and generalization in the context of Boltzmann machines. We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov field. The regularized adaptation produces a parameter set that closely resembles the "teacher" parameters and, hence, will produce segmentations that closely reproduce ...
Estimation of Compaction Parameters Based on Soil Classification
Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.
2018-02-01
Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These considerations raise the question of how to estimate the density of the soil quickly, economically, and with a proper implementation system. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), from the soil classification. Each of 30 samples was tested for its index properties and its compaction behaviour. All of the laboratory test results were then used to estimate the compaction parameter values by linear regression and by the Goswami model. The soils were classified as A-4, A-6, and A-7 according to AASHTO, and as SC, SC-SM, and CL according to USCS. Linear regression gave the estimates γdmax* = 1.862 - 0.005·FINES - 0.003·LL and wopt* = -0.607 + 0.362·FINES + 0.161·LL. The Goswami model (of the form Y = m·log G + k) gave m = -0.376 and k = 2.482 for γdmax*, and m = 21.265 and k = -32.421 for wopt*. For both equations a 95% confidence interval was obtained.
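Assuming the reported coefficients are read with decimal points (the abstract uses decimal commas) and FINES and LL are in percent, the two regression equations can be transcribed directly; the units follow the study (γdmax presumably in g/cm³, wopt in %):

```python
def gamma_dmax_est(fines, ll):
    """Estimated maximum dry unit weight from the study's linear regression fit."""
    return 1.862 - 0.005 * fines - 0.003 * ll

def w_opt_est(fines, ll):
    """Estimated optimum water content (%) from the study's linear regression fit."""
    return -0.607 + 0.362 * fines + 0.161 * ll

# hypothetical soil with 50% fines and a liquid limit of 40
print(gamma_dmax_est(50, 40), w_opt_est(50, 40))
```

Such regressions are only as good as the calibration data, so estimates outside the fines/LL range of the 30 tested samples should be treated with caution.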
Low Complexity Parameter Estimation For Off-the-Grid Targets
Jardak, Seifallah
2015-10-05
In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced complexity super resolution algorithm is proposed. For off-the-grid targets, it uses a low order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation errors of the proposed estimators achieve the Cramér-Rao lower bound. © 2015 IEEE.
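The coarse-grid-then-refine idea can be sketched in one dimension: an FFT supplies an on-grid frequency estimate for a complex sinusoid, and a local search then refines it off the grid. This is a simplified stand-in for the paper's 2D angle-Doppler scheme, with all signal parameters invented:

```python
import numpy as np

def estimate_freq(x):
    """Coarse FFT peak, then bracket-shrinking refinement of the periodogram."""
    n = len(x)
    k = int(np.argmax(np.abs(np.fft.fft(x))))
    lo, hi = (k - 1) / n, (k + 1) / n        # one-bin bracket around the coarse peak
    t = np.arange(n)
    for _ in range(40):                      # shrink the bracket around the maximum
        grid = np.linspace(lo, hi, 5)
        vals = [abs(np.dot(x, np.exp(-2j * np.pi * f * t))) for f in grid]
        b = int(np.argmax(vals))
        lo, hi = grid[max(b - 1, 0)], grid[min(b + 1, 4)]
    return (lo + hi) / 2

true_f = 0.1234                              # deliberately off the 1/64 FFT grid
t = np.arange(64)
x = np.exp(2j * np.pi * true_f * t)
print(estimate_freq(x))
```

The FFT costs O(n log n) instead of evaluating the cost function on a dense grid, and the refinement touches only a handful of candidate frequencies per iteration, which is the complexity saving the abstract refers to.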
Estimation of object motion parameters from noisy images.
Broida, T J; Chellappa, R
1986-01-01
An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
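A linear Kalman filter for a constant-velocity target, tracking position and velocity from noisy position-only measurements, illustrates the recursive estimation machinery in its simplest form. The paper's setting calls for an iterated extended Kalman filter on a nonlinear projection model; the linear model and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, meas_sd = 1.0, 40, 0.5
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # observe position only
Q = 1e-6 * np.eye(2)                    # small process noise
R = np.array([[meas_sd ** 2]])

truth = np.array([0.0, 0.7])            # true position and velocity
x = np.array([0.0, 0.0])                # filter state estimate
P = np.eye(2)                           # state covariance
for _ in range(n_steps):
    truth = F @ truth
    z = H @ truth + rng.normal(0, meas_sd, 1)
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update with the new measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print(x)                                # velocity estimate should approach 0.7
```

As in the abstract, the smoothing effect grows with the number of images: each additional measurement tightens the velocity estimate even though velocity itself is never observed.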
Revised models and genetic parameter estimates for production and ...
African Journals Online (AJOL)
Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for the traits considered. Traits of the ...
A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation
DEFF Research Database (Denmark)
Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri
2014-01-01
This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...
Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems
DEFF Research Database (Denmark)
Knudsen, Morten
variance and confidence ellipsoid is demonstrated. The relation is based on a new theorem on maxima of an ellipsoid. The procedure for input signal design and physical parameter estimation is tested on a number of examples, linear as well as nonlinear and simulated as well as real processes, and it appears...
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
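In the spirit of the article's Mathematica gradient-search code, here is a Python sketch that fits the logistic parameters r and K to synthetic data by a derivative-free nested grid search; the data, initial box, and search scheme are invented stand-ins, and with real, noisy data a proper optimizer would be preferable:

```python
import numpy as np

def logistic(t, r, K, P0=2.0):
    """Solution of dP/dt = r P (1 - P/K) with P(0) = P0."""
    return K / (1 + (K / P0 - 1) * np.exp(-r * t))

t = np.linspace(0, 10, 30)
obs = logistic(t, r=0.8, K=50.0)        # synthetic, noise-free "observations"

def sse(r, K):
    return np.sum((logistic(t, r, K) - obs) ** 2)

# nested grid search: repeatedly evaluate a 7x7 grid and shrink the box
rlo, rhi, Klo, Khi = 0.1, 2.0, 10.0, 100.0
for _ in range(60):
    rs, Ks = np.linspace(rlo, rhi, 7), np.linspace(Klo, Khi, 7)
    E = np.array([[sse(r, K) for K in Ks] for r in rs])
    i, j = np.unravel_index(np.argmin(E), E.shape)
    rlo, rhi = rs[max(i - 1, 0)], rs[min(i + 1, 6)]
    Klo, Khi = Ks[max(j - 1, 0)], Ks[min(j + 1, 6)]
r_hat, K_hat = (rlo + rhi) / 2, (Klo + Khi) / 2
print(r_hat, K_hat)
```

This mirrors the student exercise: given population observations, recover the growth rate and carrying capacity by minimizing the sum of squared residuals.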
Parameter extraction and estimation based on the PV panel outdoor ...
African Journals Online (AJOL)
The experimental data obtained are validated and compared with the estimated results obtained through simulation based on the manufacture's data sheet. The simulation is based on the Newton-Raphson iterative method in MATLAB environment. This approach aids the computation of the PV module's parameters at any ...
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of recipients without power supply. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used to minimize the objective function, and a filter system was used to filter the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
MPEG2 video parameter and no reference PSNR estimation
DEFF Research Database (Denmark)
Li, Huiying; Forchhammer, Søren
2009-01-01
MPEG coded video may be processed for quality assessment or postprocessed to reduce coding artifacts or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access t...
NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION
Directory of Open Access Journals (Sweden)
Roman L. Leibov
2017-09-01
This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to the formation of a piecewise-continuous model of an aircraft turbofan engine are presented.
Estimates Of Genetic Parameters Of Body Weights Of Different ...
African Journals Online (AJOL)
Forty-four (44) farrowings were used to estimate the genetic parameters (heritability and repeatability) of body weight of pigs. Results obtained from the study showed that the heritability (h2) of birth and weaning weights were moderate (0.33±0.16 ...
Estimation of stature from facial parameters in adult Abakaliki people ...
African Journals Online (AJOL)
This study was carried out in order to estimate the height of adult Igbo people of the Abakaliki ethnic group in South-Eastern Nigeria from their facial morphology. The parameters studied include Facial Length, Bizygomatic Diameter, Bigonial Diameter, Nasal Length, and Nasal Breadth. A total of 1000 subjects comprising 669 ...
On Modal Parameter Estimates from Ambient Vibration Tests
DEFF Research Database (Denmark)
Agneni, A.; Brincker, Rune; Coppotelli, B.
2004-01-01
Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...
Measuring, calculating and estimating PEP's parasitic mode loss parameters
International Nuclear Information System (INIS)
Weaver, J.N.
1981-01-01
This note discusses various ways the parasitic mode losses from a bunched beam to a vacuum chamber can be measured, calculated or estimated. A listing of the parameter, k, for the various PEP ring components is included. A number of formulas for calculating multiple and single pass losses are discussed and evaluated for several cases. 25 refs., 1 fig., 1 tab
Visco-piezo-elastic parameter estimation in laminated plate structures
DEFF Research Database (Denmark)
Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.
2009-01-01
A parameter estimation technique is presented in this article, for identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method presented uses experimental data in the form of a set of measu...
Estimates of genetic parameters and genetic gains for growth traits ...
African Journals Online (AJOL)
Estimates of genetic parameters and genetic gains for growth traits of two Eucalyptus ... In South Africa, Eucalyptus urophylla is an important species due to its ... as hybrid parents to cross with E. grandis was 59.8% over the population mean.
Estimation of riverbank soil erodibility parameters using genetic ...
Indian Academy of Sciences (India)
Tapas Karmaker
2017-11-07
Nov 7, 2017 ... process. Therefore, this is a study to verify the applicability of inverse parameter ... successful modelling of the riverbank erosion, precise estimation of ... For this simulation, about 40 iterations are found to attain the convergence. ... algorithm for function optimization: a Matlab implementation. NCSU-IE TR ...
estimation of shear strength parameters of lateritic soils using
African Journals Online (AJOL)
... a tool to estimate the ... Nigerian Journal of Technology (NIJOTECH), Vol. ... modeling tools for the prediction of shear strength parameters for lateritic ... 2.2 Geotechnical Analysis of the Soils ... The back propagation learning algorithm is the most popular and ... [10] Alsaleh, M. I., Numerical modeling for strain localization in ...
Estimation of genetic parameters for carcass traits in Japanese quail ...
African Journals Online (AJOL)
The aim of this study was to estimate genetic parameters of some carcass characteristics in the Japanese quail. For this aim, carcass weight (Cw), breast weight (Bw), leg weight (Lw), abdominal fat weight (AFw), carcass yield (CP), breast percentage (BP), leg percentage (LP) and abdominal fat percentage (AFP) were ...
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-12-01
Tsunami concerns have increased worldwide after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011), which is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines the tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother: the filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
Dual ant colony operational modal analysis parameter estimation method
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification; some operate in the time domain, others in the frequency domain. The former use correlation functions, the latter spectral density functions. Moreover, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated: it uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on the ant colony optimisation algorithm. The combination of deterministic methods restricting the parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring the Thomsen parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known; still greater challenges are expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error, and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, the method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers; an isotropic formation can even be misinterpreted as a strongly anisotropic one. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
Estimation of parameter sensitivities for stochastic reaction networks
Gupta, Ankit
2016-01-07
Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
Estimation of Parameters in Mean-Reverting Stochastic Systems
Directory of Open Access Journals (Sweden)
Tianhai Tian
2014-01-01
Stochastic differential equations (SDEs) are a very important mathematical tool for describing complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate unlimited numbers of trajectories, it is difficult to estimate model parameters based on experimental observations, which may represent only one trajectory of the stochastic model. Although substantial research efforts have been made to develop effective methods, it is still a challenge to infer unknown parameters in SDE models from observations that may have large variations. Using an interest rate model as a test problem, in this work we use Bayesian inference and the Markov chain Monte Carlo method to estimate unknown parameters in SDE models.
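As a simpler point of comparison to full Bayesian MCMC, the mean-reverting Vasicek/Ornstein-Uhlenbeck model dX = θ(μ − X)dt + σ dW can be estimated from a single trajectory by least squares on its exact AR(1) discretization. All parameter values below are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, mu, sigma, dt, n = 2.0, 1.0, 0.3, 0.01, 20000

# simulate dX = theta*(mu - X) dt + sigma dW using the exact transition density
a = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1 - a ** 2) / (2 * theta))
X = np.empty(n)
X[0] = mu
for i in range(1, n):
    X[i] = a * X[i - 1] + mu * (1 - a) + noise_sd * rng.normal()

# AR(1) least squares: the slope estimates a, the intercept estimates mu*(1-a)
A = np.column_stack([X[:-1], np.ones(n - 1)])
slope, intercept = np.linalg.lstsq(A, X[1:], rcond=None)[0]
theta_hat = -np.log(slope) / dt
mu_hat = intercept / (1 - slope)
print(theta_hat, mu_hat)
```

The abstract's point about single-trajectory variability shows up here directly: the mean-reversion rate θ is much harder to pin down than the long-run mean μ, which is one motivation for the Bayesian treatment.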
Estimating Arrhenius parameters using temperature programmed molecular dynamics
International Nuclear Information System (INIS)
Imandi, Venkataramana; Chatterjee, Abhijit
2016-01-01
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
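The waiting-time analysis can be sketched as follows: at each sampled temperature the maximum likelihood estimate of an exponential escape rate is simply the reciprocal mean waiting time, and an Arrhenius fit of ln k against 1/T recovers the barrier and prefactor. The system parameters below are invented, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
A_true, Ea = 1e13, 0.4          # prefactor (1/s) and barrier (eV), invented
kB = 8.617e-5                   # Boltzmann constant in eV/K
temps = np.array([800.0, 1000.0, 1200.0, 1500.0])

rates = []
for T in temps:
    k = A_true * np.exp(-Ea / (kB * T))
    waits = rng.exponential(1.0 / k, size=1000)   # 1000 waiting times per T
    rates.append(1.0 / waits.mean())              # exponential MLE of the rate
rates = np.array(rates)

# Arrhenius fit: ln k = ln A - Ea / (kB T)
slope, icpt = np.polyfit(1.0 / temps, np.log(rates), 1)
Ea_hat, A_hat = -slope * kB, np.exp(icpt)
print(Ea_hat, A_hat)
```

This is the payoff the abstract describes: once Ea and A are in hand, the fitted line extrapolates to low temperatures where direct simulation of rare transitions would be prohibitively expensive.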
Using Genetic Algorithm to Estimate Hydraulic Parameters of Unconfined Aquifers
Directory of Open Access Journals (Sweden)
Asghar Asghari Moghaddam
2009-03-01
Nowadays, optimization techniques such as Genetic Algorithms (GA) have attracted wide attention among scientists for solving complicated engineering problems. In this article, pumping test data are used to assess the efficiency of GA in estimating unconfined aquifer parameters, and a sensitivity analysis is carried out to propose an optimal arrangement of GA. For this purpose, the hydraulic parameters of three sets of pumping test data are calculated by GA and compared with the results of graphical methods. The results indicate that the GA technique is an efficient, reliable, and powerful method for estimating the hydraulic parameters of unconfined aquifers and, further, that in cases of deficient pumping test data it performs better than graphical methods.
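A compact GA-style search (truncation selection plus Gaussian mutation, without crossover) can be sketched against synthetic pumping-test data; the Cooper-Jacob approximation stands in for the full well-function model, and all well and aquifer values are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
Q, r = 0.01, 30.0                      # pumping rate (m^3/s), observation well distance (m)
t = np.logspace(3, 5, 20)              # observation times (s)

def drawdown(logT, logS):
    """Cooper-Jacob approximation to the Theis drawdown (valid for small u)."""
    T, S = 10.0 ** logT, 10.0 ** logS
    return Q / (4 * np.pi * T) * np.log(2.25 * T * t / (r ** 2 * S))

obs = drawdown(-3.0, -4.0)             # synthetic data: T = 1e-3 m^2/s, S = 1e-4

def fitness(pop):
    return np.array([np.sum((drawdown(*p) - obs) ** 2) for p in pop])

# GA-style search in log-parameter space: keep the fittest, mutate around them
pop = rng.uniform([-5, -6], [-1, -1], size=(60, 2))
for gen in range(200):
    parents = pop[np.argsort(fitness(pop))[:15]]
    sigma = 0.5 * 0.97 ** gen          # annealed mutation width
    children = np.repeat(parents, 3, axis=0) + rng.normal(0, sigma, (45, 2))
    pop = np.vstack([parents, children])
best = pop[np.argmin(fitness(pop))]
T_hat = 10.0 ** best[0]
print(T_hat)
```

Searching in log space is the key design choice here: transmissivity and storativity span orders of magnitude, and a GA mutating raw values would waste most of its population on implausible regions.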
Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models
Directory of Open Access Journals (Sweden)
Jonathan R Karr
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques, including whole genome sequencing and transcriptome analysis, have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
Estimating model parameters in nonautonomous chaotic systems using synchronization
International Nuclear Information System (INIS)
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-01-01
In this Letter, a technique is presented for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. The technique uses an adaptive strategy for tracking unknown parameters, in addition to a linear feedback coupling for synchronizing systems; general conditions ensuring precise evaluation of the unknown parameters and identical synchronization between the experimental system concerned and its corresponding receiver are then derived analytically, by means of the periodic version of the LaSalle invariance principle for differential equations. Examples are presented employing a parametrically excited 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise: when the experimental system is corrupted by bounded disturbance, the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation
Influence of measurement errors and estimated parameters on combustion diagnosis
International Nuclear Information System (INIS)
Payri, F.; Molina, S.; Martin, J.; Armas, O.
2006-01-01
Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors.
Stable Parameter Estimation for Autoregressive Equations with Random Coefficients
Directory of Open Access Journals (Sweden)
V. B. Goryainov
2014-01-01
In recent years there has been a growing interest in nonlinear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example in physics, biology, economics, and finance. The model parameters are the mean values of the autoregressive coefficients, and their evaluation is the main task of model identification. The basic method of estimation is still the least squares method, which gives good results for Gaussian time series but is quite sensitive to even small disturbances in the assumption of Gaussian observations. In this paper we propose estimates which generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex and even function. A reasonable choice of objective function allows one to keep the benefits of the least squares estimate while eliminating its shortcomings; in particular, the estimates can be made almost as efficient as the least squares estimate in the Gaussian case while losing almost no accuracy under small deviations of the probability distribution of the observations from the Gaussian distribution. The main result is the proof of consistency and asymptotic normality of the proposed estimates in the particular case of the one-parameter model describing a stationary process with finite variance. Another important result is the derivation of the asymptotic relative efficiency of the proposed estimates with respect to the least squares estimate. This allows one to compare the two estimates depending on the probability distributions of the innovation process and of the autoregressive coefficients. The results can be used to identify an autoregressive process, especially one of non-Gaussian nature, and/or autoregressive processes observed with gross
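The idea of replacing the quadratic objective by a robust convex loss can be illustrated on a plain AR(1) model (a simplification of the random-coefficient setting): an M-estimate with the Huber function, computed by iteratively reweighted least squares, resists heavy-tailed innovations while staying close to least squares for Gaussian data. All constants below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n, b_true = 5000, 0.5
# AR(1) with heavy-tailed (Student-t, 2.5 dof) innovations
eps = rng.standard_t(2.5, size=n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = b_true * x[i - 1] + eps[i]

def huber_ar1(x, c=1.345, iters=50):
    """M-estimate of the AR(1) coefficient via iteratively reweighted least squares."""
    y, z = x[1:], x[:-1]
    b = np.dot(z, y) / np.dot(z, z)          # least squares starting point
    for _ in range(iters):
        r = y - b * z
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))  # Huber weights
        b = np.dot(w * z, y) / np.dot(w * z, z)
    return b

print(huber_ar1(x))
```

Because the Huber loss is convex and even, the symmetry argument behind the paper's consistency result applies: residuals from the true coefficient are symmetric, so the downweighted normal equations are still unbiased.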
Pedotransfer functions estimating soil hydraulic properties using different soil parameters
DEFF Research Database (Denmark)
Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye
2008-01-01
Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic conductivity parameters. A larger data set (1618 horizons) with a broader textural range was used in the development of PTFs to predict the van Genuchten parameters. The PTFs using either three or seven textural classes combined with soil organic matter and bulk density gave the most reliable predictions ...
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used to estimate consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet, current approaches often implicitly assume that the transfer functions themselves are known. As a matter of fact, in most cases these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, the distributed parameters of the rainfall-runoff model are estimated as well. The method combines two steps to achieve this: the first generates different possible transfer functions; the second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context-free grammar concept. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, have so far not been used in hydrology. Therefore, the contribution gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a
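As a hedged illustration of the grammar-based structural search, the toy context-free grammar below generates candidate transfer-function expressions over two spatial predictors. The production rules, the predictor names `elev` and `slope`, and the coefficient placeholders are invented for this sketch; the abstract does not specify the authors' actual grammar.

```python
import random

# A toy context-free grammar for transfer functions: each candidate maps
# spatial predictors ("elev", "slope") to a model parameter via coefficient
# placeholders c0..c2 that a second step would then calibrate.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<coef>", "*", "<var>"], ["<coef>"]],
    "<op>": [["+"], ["*"]],
    "<var>": [["elev"], ["slope"]],
    "<coef>": [["c0"], ["c1"], ["c2"]],
}

def derive(symbol="<expr>", rng=None, depth=0, max_depth=4):
    """Expand a nonterminal by randomly chosen productions, capping recursion."""
    rng = rng or random.Random(42)
    if symbol not in GRAMMAR:
        return symbol  # terminal: emit as-is
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # Near the depth cap, prefer non-recursive productions so expansion ends.
        rules = [r for r in rules if "<expr>" not in r] or rules
    rule = rng.choice(rules)
    return " ".join(derive(s, rng, depth + 1, max_depth) for s in rule)

rng = random.Random(42)
candidates = [derive(rng=rng) for _ in range(5)]
```

In grammatical evolution the choice of production at each step is driven by an evolved integer genome rather than a random generator, but the derivation mechanics are the same.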
METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS
Directory of Open Access Journals (Sweden)
V. Panteleev Andrei
2017-01-01
Full Text Available The article considers the use of metaheuristic methods of constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The parameter values are obtained by minimizing a criterion that describes the total squared error between the state vector coordinates and the precisely observed values at different points in time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving these problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem, differing in their mathematical models, are given. In the first example, a linear mathematical model describes changes in chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are presented, along with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The derived parameter approximations differ only slightly from the best known solutions, which were obtained differently. To refine the results one should apply hybrid schemes that combine classical zero-, first- and second-order optimization methods and
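The estimation setup described above (criterion = total squared error of the state vector, parallelepiped constraints on the parameters) can be sketched with a stochastic global optimizer. Differential evolution stands in here for the article's specific metaheuristics, and the first-order kinetics model with true values k = 0.7, x0 = 2.0 is an assumption for the demo, loosely echoing the linear chemical-reaction example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

# Synthetic "observations" of a first-order kinetics model dx/dt = -k*x.
k_true, x0_true = 0.7, 2.0
t_obs = np.linspace(0.0, 5.0, 20)
x_obs = x0_true * np.exp(-k_true * t_obs)

def sse(theta):
    """Total squared error between the model trajectory and the observations."""
    k, x0 = theta
    sol = solve_ivp(lambda t, x: -k * x, (0.0, 5.0), [x0], t_eval=t_obs)
    return float(np.sum((sol.y[0] - x_obs) ** 2))

# Parallelepiped (box) restrictions on the parameters, as in the article;
# the global optimizer needs no gradients, only criterion evaluations.
result = differential_evolution(sse, bounds=[(0.01, 5.0), (0.1, 10.0)], seed=1)
k_hat, x0_hat = result.x
```

Any of the article's metaheuristics could be dropped in place of `differential_evolution`; the interface — a black-box criterion plus box bounds — is the same.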
Directory of Open Access Journals (Sweden)
A. Elsonbaty
2014-10-01
Full Text Available In this article, the adaptive chaos synchronization technique is implemented by an electronic circuit and applied to the hyperchaotic system proposed by Chen et al. We consider the more realistic and practical case where all the parameters of the master system are unknown. We propose and implement an electronic circuit that performs the estimation of the unknown parameters and the updating of the parameters of the slave system automatically, and hence achieves the synchronization. To the best of our knowledge, this is the first attempt to implement a circuit that estimates the values of the unknown parameters of a chaotic system and achieves synchronization. The proposed circuit has a variety of suitable real applications related to chaos encryption and cryptography. The outputs of the implemented circuits and numerical simulation results are shown to demonstrate the performance of the synchronized system and the proposed circuit.
Parameter estimation in nonlinear models for pesticide degradation
International Nuclear Information System (INIS)
Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.
1991-01-01
A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, its wide use has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space), which leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
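A minimal instance of nonlinear parameter estimation in this setting is fitting a first-order degradation curve to residue data. The measurements and the simple exponential model below are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pesticide residue measurements (mg/kg) over time (days).
days = np.array([0, 3, 7, 14, 21, 30, 45, 60], dtype=float)
residue = np.array([10.0, 8.1, 6.4, 4.1, 2.7, 1.5, 0.7, 0.3])

def first_order(t, c0, k):
    """C(t) = C0 * exp(-k t): the simplest nonlinear degradation model."""
    return c0 * np.exp(-k * t)

# Nonlinear least squares; p0 is a rough starting guess for (C0, k).
(c0_hat, k_hat), cov = curve_fit(first_order, days, residue, p0=(10.0, 0.1))
half_life = np.log(2.0) / k_hat  # the DT50 follows directly from k
```

Unlike a log-linearized regression, the nonlinear fit weights the data on the original concentration scale, which is exactly the distinction the abstract draws between linearization and genuinely nonlinear estimation.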
Estimation of common cause failure parameters with periodic tests
Energy Technology Data Exchange (ETDEWEB)
Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)
2009-04-15
In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, while the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived consistently with US Technical Specifications constraints that do not apply to French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF requires the development of methodologies that integrate the specificities of the testing schemes. This paper formally proposes a solution for the estimation of CCF parameters given two distinct difficulties, related respectively to a mixed testing scheme and to consistency with EDF's specific practices, which induce systematic non-simultaneity of the observed failures in a staggered testing scheme.
DEFF Research Database (Denmark)
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik
1995-01-01
Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...... and the growth of the biomass are described by the Monod model consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values....... Estimation of the parameters was obtained using an iterative maximum likelihood method and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% alpha level....
PWR system simulation and parameter estimation with neural networks
International Nuclear Information System (INIS)
Akkurt, Hatice; Colak, Uener
2002-01-01
A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of the models for individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimates are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146% and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.
PWR system simulation and parameter estimation with neural networks
Energy Technology Data Exchange (ETDEWEB)
Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr
2002-11-01
A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of the models for individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimates are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146% and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.
Tracking of nuclear reactor parameters via recursive non linear estimation
International Nuclear Information System (INIS)
Pages Fita, J.; Alengrin, G.; Aguilar Martin, J.; Zwingelstein, M.
1975-01-01
The usefulness of nonlinear estimation in the supervision of nuclear reactors is illustrated, both for reactivity determination and for on-line modelling aimed at detecting possible unwanted changes in operating conditions. Reactivity estimation uses an a priori dynamical model under the hypothesis of one group of delayed neutrons (measurements were made with an ionisation chamber). Determining the reactivity from such measurements appears as a nonlinear estimation procedure derived from a particular form of nonlinear filter. With the demand for power and the inside temperature as observed inputs, and the reactivity balance as output, a recursive algorithm is derived for estimating the parameters that define the actual behavior of the reactor. An example of the treatment of real data is given [fr
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine
Directory of Open Access Journals (Sweden)
Jeremy T. Howard
2018-02-01
Full Text Available In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and the plasma-drug-concentration-across-time models, a moderate heritability was estimated. The model that utilized the plasma drug
Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine
Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian
2018-01-01
In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and the plasma-drug-concentration-across-time models, a moderate heritability was estimated. The model that utilized the plasma drug
ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST
Energy Technology Data Exchange (ETDEWEB)
Carlin, Jeffrey L.; Newberg, Heidi Jo [Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States); Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong [Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Beers, Timothy C. [Department of Physics and JINA: Joint Institute for Nuclear Astrophysics, University of Notre Dame, 225 Nieuwland Science Hall, Notre Dame, IN 46556 (United States); Chen, Li; Hou, Jinliang; Smith, Martin C. [Shanghai Astronomical Observatory, 80 Nandan Road, Shanghai 200030 (China); Guhathakurta, Puragra [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Hou, Yonghui [Nanjing Institute of Astronomical Optics and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042 (China); Lépine, Sébastien [Department of Physics and Astronomy, Georgia State University, 25 Park Place, Suite 605, Atlanta, GA 30303 (United States); Yanny, Brian [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Zheng, Zheng, E-mail: jeffreylcarlin@gmail.com [Department of Physics and Astronomy, University of Utah, UT 84112 (United States)
2015-07-15
We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
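The isochrone-comparison step at the heart of this method can be sketched in a few lines: weight each point of a model grid by its Gaussian likelihood given the measured stellar parameters, then convert the implied absolute magnitudes into a posterior-weighted distance. The tiny grid, the measured values, and the apparent magnitude below are invented for illustration; a real isochrone grid has many thousands of points and additional dimensions such as metallicity.

```python
import numpy as np

# Toy "isochrone grid": (Teff, logg, absolute V magnitude) of model stars.
grid_teff = np.array([5800.0, 5600.0, 5000.0, 4800.0])
grid_logg = np.array([4.4, 4.5, 3.0, 2.5])
grid_M = np.array([4.8, 5.2, 1.0, 0.3])

# Spectroscopically measured parameters with their uncertainties (assumed).
teff_obs, sig_teff = 5750.0, 100.0
logg_obs, sig_logg = 4.35, 0.15
m_app = 12.0  # apparent magnitude

# Gaussian likelihood of each grid point given the measured parameters.
like = np.exp(-0.5 * (((grid_teff - teff_obs) / sig_teff) ** 2
                      + ((grid_logg - logg_obs) / sig_logg) ** 2))
post = like / like.sum()

# Posterior-weighted distance (pc) from the distance modulus m - M.
dist_pc = np.sum(post * 10 ** ((m_app - grid_M + 5.0) / 5.0))
```

The survey-specific refinement described in the abstract enters through the prior: the luminosity function and selection effects of each LAMOST plate reweight `post` before the distance is computed.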
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.
2016-11-25
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar
2016-01-01
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant --- the Ensemble Kalman Filter (EnKF) --- is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
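The Monte Carlo variant mentioned above, the EnKF, can be sketched for a scalar parameter observed through a linear map. The map y = 2θ + noise, the prior, and all numerical values are illustrative assumptions chosen so the update has a clear answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameter and a single noisy observation through y = 2*theta + noise.
theta_true, obs_sigma = 1.5, 0.1
y_obs = 2.0 * theta_true + rng.normal(0.0, obs_sigma)

N = 500
theta = rng.normal(0.0, 1.0, N)  # prior ensemble for the parameter
y_pred = 2.0 * theta             # ensemble of model-predicted observations

# Kalman gain from ensemble statistics: K = Cov(theta, y) / (Var(y) + R).
gain = np.cov(theta, y_pred)[0, 1] / (float(np.cov(y_pred)) + obs_sigma**2)

# Perturbed-observation update applied to every ensemble member.
theta_post = theta + gain * (y_obs + rng.normal(0.0, obs_sigma, N) - y_pred)
theta_hat = theta_post.mean()
```

The functional-approximation version the authors sketch replaces the sampled ensemble with a polynomial chaos representation of the same random variables; the update formula is unchanged.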
Parameter and state estimation in nonlinear dynamical systems
Creveling, Daniel R.
This thesis is concerned with the problem of state and parameter estimation in nonlinear systems. The need to evaluate unknown parameters in models of nonlinear physical, biophysical and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. When verifying and validating these models, it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, this thesis develops a framework for presenting data to a candidate model of a physical process in a way that makes efficient use of the measured data while allowing for estimation of the unknown parameters in the model. The approach presented here builds on existing work that uses synchronization as a tool for parameter estimation. Some critical issues of stability in that work are addressed and a practical framework is developed for overcoming these difficulties. The central issue is the choice of coupling strength between the model and data. If the coupling is too strong, the model will reproduce the measured data regardless of the adequacy of the model or correctness of the parameters. If the coupling is too weak, nonlinearities in the dynamics could lead to complex dynamics rendering any cost function comparing the model to the data inadequate for the determination of model parameters. Two methods are introduced which seek to balance the need for coupling with the desire to allow the model to evolve in its natural manner without coupling. One method, 'balanced' synchronization, adds to the synchronization cost function a requirement that the conditional Lyapunov exponents of the model system, conditioned on being driven by the data, remain negative but small in magnitude. Another method allows the coupling between the data and the model to vary in time according to a specific form of differential equation. The coupling dynamics is damped to allow for a tendency toward zero coupling
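The synchronization idea, coupling a model to measured data while adapting its parameters, can be sketched for a one-dimensional system. Here the coupling gain is fixed rather than balanced or time-varying as in the thesis, and the system, gains, and true parameter a = 2.0 are illustrative assumptions; the adaptation law da_hat/dt = -gamma*(x - y)*y is the standard Lyapunov-based choice for this system.

```python
import math

# "Data" system: dx/dt = -a*x + sin(t) with unknown a.
# Coupled model:  dy/dt = -a_hat*y + sin(t) + k*(x - y).
a_true, k, gamma, dt, steps = 2.0, 5.0, 20.0, 1e-3, 200_000

x, y, a_hat = 1.0, 0.0, 0.0
for i in range(steps):
    t = i * dt
    e = x - y                                            # synchronization error
    x_new = x + dt * (-a_true * x + math.sin(t))         # data (Euler step)
    y_new = y + dt * (-a_hat * y + math.sin(t) + k * e)  # coupled model
    a_hat += dt * (-gamma * e * y)                       # parameter adaptation
    x, y = x_new, y_new
```

If k is made very large the model reproduces the data regardless of a_hat (the over-coupling failure mode discussed above); if k is too small the error dynamics need not contract, which is exactly the trade-off the thesis's balanced schemes address.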
Estimation of Medium Voltage Cable Parameters for PD Detection
DEFF Research Database (Denmark)
Villefrance, Rasmus; Holbøll, Joachim T.; Henriksen, Mogens
1998-01-01
Medium voltage cable characteristics have been determined with respect to the parameters having influence on the evaluation of results from PD-measurements on paper/oil and XLPE-cables. In particular, parameters essential for discharge quantification and location were measured. In order to relate...... and phase constants. A method to estimate this propagation constant, based on high frequency measurements, will be presented and will be applied to different cable types under different conditions. The influence of temperature and test voltage was investigated. The relevance of the results for cable...
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
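The EM step for match probabilities can be sketched on synthetic binary agreement vectors. This shows plain Fellegi-Sunter EM on three comparison fields, not the Bloom-filter encoding or the threshold-selection extension described in the paper; the true m, u, and match proportion are assumptions used only to generate the data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generate agreement vectors gamma (1 = fields agree) for matched and
# unmatched record pairs, with assumed true m-, u-probabilities and
# match proportion p.
m_true = np.array([0.95, 0.90, 0.85])
u_true = np.array([0.20, 0.10, 0.15])
p_true, n = 0.3, 20000
is_match = rng.random(n) < p_true
probs = np.where(is_match[:, None], m_true, u_true)
gamma = (rng.random((n, 3)) < probs).astype(float)

# EM: E-step computes each pair's match responsibility w; M-step
# re-estimates m, u, and the match proportion p.
m, u, p = np.full(3, 0.8), np.full(3, 0.3), 0.5
for _ in range(200):
    lm = p * np.prod(m**gamma * (1 - m) ** (1 - gamma), axis=1)
    lu = (1 - p) * np.prod(u**gamma * (1 - u) ** (1 - gamma), axis=1)
    w = lm / (lm + lu)
    p = w.mean()
    m = (w[:, None] * gamma).sum(axis=0) / w.sum()
    u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()
```

The point of the paper is that this computation needs only the agreement vectors, which remain computable from Bloom-filter-encoded fields, so the probabilities can be estimated without ever seeing the clear-text data.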
Estimation of economic parameters of U.S. hydropower resources
Energy Technology Data Exchange (ETDEWEB)
Hall, Douglas G. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Hunt, Richard T. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Reeves, Kelly S. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Carroll, Greg R. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)
2003-06-01
Tools for estimating the cost of developing, operating, and maintaining hydropower resources, in the form of regression curves, were developed based on historical plant data. Development costs that were addressed included licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed, along with additional tools to estimate the cost of upgrading a turbine or a generator. The cost estimating tools and the generation estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”
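A regression curve of the kind described, cost as a function of plant capacity, is commonly fit as a power law, which is linear in log-log space. The cost/capacity pairs below are invented for illustration and are not the INL historical data.

```python
import numpy as np

# Hypothetical historical plant data: capacity (MW) vs development cost (M$).
capacity_mw = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0])
cost_musd = np.array([12.0, 21.0, 44.0, 78.0, 140.0, 290.0])

# Power law cost = a * capacity^b, i.e. log(cost) = log(a) + b*log(capacity),
# so an ordinary linear fit in log-log space recovers a and b.
b, log_a = np.polyfit(np.log(capacity_mw), np.log(cost_musd), 1)
a = np.exp(log_a)

def predict_cost(mw):
    """Estimated development cost (M$) for a plant of the given capacity."""
    return a * mw**b
```

An exponent b below 1 encodes economies of scale: doubling capacity less than doubles the cost, which is the typical shape of such curves.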
Probabilistic estimation of the constitutive parameters of polymers
Directory of Open Access Journals (Sweden)
Siviour C.R.
2012-08-01
Full Text Available The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION
Directory of Open Access Journals (Sweden)
Samir Kamel Ashour
2010-12-01
Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties that arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
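For a complete (uncensored) sample with known scale, the Lomax shape parameter even has a closed-form maximum likelihood estimate, which makes a convenient sanity check. The sketch below uses that textbook simplification; under hybrid censoring, as in the paper, the likelihood must instead be maximised numerically.

```python
import math
import random

def lomax_shape_mle(data, lam):
    """Closed-form MLE of the Lomax shape alpha for a complete sample with
    known scale lam: alpha_hat = n / sum(log(1 + x_i/lam)).
    (A simplification -- the paper treats hybrid-censored data, where the
    likelihood has no closed-form maximiser.)"""
    s = sum(math.log(1.0 + x / lam) for x in data)
    return len(data) / s

random.seed(1)
lam, alpha = 2.0, 3.0
# Inverse-CDF sampling: F(x) = 1 - (1 + x/lam)**(-alpha)
sample = [lam * (random.random() ** (-1.0 / alpha) - 1.0) for _ in range(20000)]
alpha_hat = lomax_shape_mle(sample, lam)
print(round(alpha_hat, 2))  # close to the true shape 3.0
```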
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
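To show the calibration problem in its simplest form, the sketch below fits the transmission rate β of a toy discrete-time SIR model to synthetic incidence data by minimising the sum of squared errors over a grid. The article's framework is Bayesian and model-agnostic; all numbers here are assumptions.

```python
def sir_incidence(beta, gamma=0.5, s0=0.99, i0=0.01, steps=30, dt=1.0):
    """Discrete-time SIR; returns the new-infection incidence per step."""
    s, i = s0, i0
    inc = []
    for _ in range(steps):
        new = beta * s * i * dt
        s -= new
        i += new - gamma * i * dt
        inc.append(new)
    return inc

# Synthetic "observed" incidence generated with beta = 1.5
observed = sir_incidence(1.5)

def sse(beta):
    return sum((m - o) ** 2 for m, o in zip(sir_incidence(beta), observed))

# Crude grid search over beta; a full Bayesian fit would yield a posterior
# distribution instead of a point estimate
best = min((b / 100 for b in range(50, 300)), key=sse)
print(best)  # -> 1.5
```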
Estimating parameters of chaotic systems synchronized by external driving signal
International Nuclear Information System (INIS)
Wu Xiaogang; Wang Zuxi
2007-01-01
Noise-induced synchronization (NIS) has evoked great research interest recently. Two uncoupled identical chaotic systems can achieve complete synchronization (CS) when fed a common noise of appropriate intensity. Actually, NIS belongs to the category of external feedback control (EFC). The significance of applying EFC in secure communication lies in the fact that the trajectory of the chaotic systems is disturbed so strongly by the external driving signal that phase-space reconstruction attacks fail. In this paper, however, we propose an approach that can accurately estimate the parameters of chaotic systems synchronized by an external driving signal, using the chaotic transmitted signal, the driving signal and their derivatives. Numerical simulation indicates that this approach can estimate the system parameters and the external coupling strength very rapidly under two driving modes, which implies that EFC is not superior to other methods in secure communication.
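The idea of recovering parameters from the transmitted signal, the driving signal and their derivatives can be sketched on a toy driven system in which the unknown parameter enters linearly. The ODE, the drive, and all numbers below are illustrative assumptions; the paper's transmitter/receiver systems are genuinely chaotic.

```python
import math

# Toy driven system x' = -a*x + d(t): since the unknown a enters the ODE
# linearly, it can be solved for by least squares from the signal, the
# drive and numerical derivatives of the signal.
a_true, dt = 2.5, 1e-3
d = lambda t: math.sin(3.7 * t) + 0.5 * math.sin(9.1 * t + 1.0)

# Forward-Euler integration generates the "transmitted" signal
xs, t, x = [], 0.0, 0.3
for _ in range(5000):
    xs.append(x)
    x += (-a_true * x + d(t)) * dt
    t += dt

# Least-squares estimate of a from x'(t_k) ~ (x[k+1]-x[k])/dt = -a*x[k] + d(t_k)
num = den = 0.0
for k in range(len(xs) - 1):
    xdot = (xs[k + 1] - xs[k]) / dt
    num += (d(k * dt) - xdot) * xs[k]
    den += xs[k] ** 2
a_hat = num / den
print(round(a_hat, 2))  # -> 2.5
```

Because the finite-difference derivative here matches the Euler discretisation that generated the data, the parameter is recovered essentially exactly; with measured signals, derivative noise would dominate the error.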
On Using Exponential Parameter Estimators with an Adaptive Controller
Patre, Parag; Joshi, Suresh M.
2011-01-01
Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.
Basic Earth's Parameters as estimated from VLBI observations
Directory of Open Access Journals (Sweden)
Ping Zhu
2017-11-01
Full Text Available Global Very Long Baseline Interferometry (VLBI) observations for measuring the Earth's rotation parameters began around the 1970s. Since then, the precision of the measurements has continuously improved by taking into account various instrumental and environmental effects. The MHB2000 nutation model was introduced in 2002; it is constructed from a revised nutation series derived from 20 years of VLBI observations (1980–1999). In this work, we first estimated the amplitudes of all nutation terms from the IERS-EOP-C04 VLBI global solutions w.r.t. IAU1980, then we further inferred the BEPs (Basic Earth's Parameters) by fitting the major nutation terms. Meanwhile, the BEPs were obtained from the same nutation time series using Bayesian Inversion (BI). The corrections to the precession rate and the estimated BEPs are in agreement, independent of which method was applied.
Estimation of parameters of interior permanent magnet synchronous motors
International Nuclear Information System (INIS)
Hwang, C.C.; Chang, S.M.; Pan, C.T.; Chang, T.Y.
2002-01-01
This paper presents a magnetic circuit model for the estimation of the machine parameters of an interior permanent magnet synchronous machine. It extends the earlier work of Hwang and Cho, which focused mainly on the magnetic aspects of motor design. The proposed model is used to calculate the EMF and the d- and q-axis reactances. These calculations are compared to those from finite element analysis and measurement, with good agreement.
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens
2016-01-01
…be used directly for accurate full-scale transient simulations. The model was validated against full-scale data with an engine following the European Transient Cycle. The validation showed that the predictive capability for nitrogen oxides (NOx) was satisfactory. After re-estimation of the adsorption and desorption parameters with full-scale transient data, the fit for both NOx and NH3-slip was satisfactory.
Fundamental limits of radio interferometers: calibration and source parameter estimation
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.
2012-01-01
We use information theory to derive fundamental limits on the capacity to calibrate next-generation radio interferometers, and measure parameters of point sources for instrument calibration, point source subtraction, and data deconvolution. We demonstrate the implications of these fundamental limits, with particular reference to estimation of the 21cm Epoch of Reionization power spectrum with next-generation low-frequency instruments (e.g., the Murchison Widefield Array -- MWA, Precision Arra...
Robust estimation of track parameters in wire chambers
International Nuclear Information System (INIS)
Bogdanova, N.B.; Bourilkov, D.T.
1988-01-01
The aim of this paper is to compare numerically the performance of the least squares fit (LSF) and robust methods on modelled and real track data, for determining the linear regression parameters of charged particles in wire chambers. It is shown that the Tukey robust estimate is superior to the more standard LSF variants. The efficiency of the method is illustrated by tables and figures for some important physical characteristics.
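A minimal illustration of why a robust estimator beats the LSF when a track contains a wild hit: iteratively reweighted least squares with Tukey's bisquare weight. The synthetic "track" below is an assumption; the chamber analysis itself is more involved.

```python
def wls_line(xs, ys, ws):
    """Weighted least-squares straight-line fit; returns (slope, intercept)."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return slope, my - slope * mx

def tukey_line(xs, ys, c=4.685, iters=20):
    """Iteratively reweighted least squares with Tukey's bisquare weight.
    The scale is re-estimated each pass from the median absolute residual."""
    ws = [1.0] * len(xs)
    for _ in range(iters):
        b, a = wls_line(xs, ys, ws)
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        s = max(1.4826 * sorted(abs(r) for r in res)[len(res) // 2], 1e-6)
        ws = [(1 - (r / (c * s)) ** 2) ** 2 if abs(r) < c * s else 0.0
              for r in res]
    return b, a

# Straight track y = 2x + 1 with one wild hit at x = 5
xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]
ys[5] = 40.0
b_ols, a_ols = wls_line(xs, ys, [1.0] * len(xs))  # plain LSF, pulled off by the outlier
b_rob, a_rob = tukey_line(xs, ys)                 # robust fit zero-weights it
print(round(b_rob, 3), round(a_rob, 3))  # -> 2.0 1.0
```

The bisquare weight drives the outlier's weight to exactly zero once its residual exceeds c·s, which is what makes the Tukey estimate superior to simple reweighting schemes.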
Factorized Estimation of Partially Shared Parameters in Diffusion Networks
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
2017-01-01
Roč. 65, č. 19 (2017), s. 5153-5163 ISSN 1053-587X R&D Projects: GA ČR(CZ) GP14-06678P; GA ČR GA16-09848S Institutional support: RVO:67985556 Keywords : Diffusion network * Diffusion estimation * Heterogeneous parameters * Multitask networks Subject RIV: BD - Theory of Information OBOR OECD: Applied mathematics Impact factor: 4.300, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/dedecius-0477044.pdf
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for the estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
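The baseline that the paper's piece-wise ML method improves upon can be sketched directly: a naive one-step conditional least-squares estimate of the logistic map parameter from a noisy series. All numbers below are assumptions; at larger noise levels this naive estimator becomes biased, which is exactly the difficulty the segmentation fitting method addresses.

```python
import random

def logistic_series(a, x0, n):
    """Iterate the logistic map x[t+1] = a*x[t]*(1-x[t])."""
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x = a * x * (1.0 - x)
    return xs

random.seed(7)
a_true = 3.8
clean = logistic_series(a_true, 0.3, 2000)
noisy = [x + random.gauss(0.0, 0.001) for x in clean]  # observational noise

# Naive one-step least squares: minimise sum (y[t+1] - a*y[t]*(1-y[t]))**2.
# Closed form: a_hat = sum(y[t+1]*g[t]) / sum(g[t]**2), g[t] = y[t]*(1-y[t]).
g = [y * (1.0 - y) for y in noisy[:-1]]
a_hat = sum(yn * gt for yn, gt in zip(noisy[1:], g)) / sum(gt * gt for gt in g)
print(round(a_hat, 2))  # close to 3.8 at this small noise level
```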
CTER-rapid estimation of CTF parameters with error assessment.
Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T
2014-05-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance both for the initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new-generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the need for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of the estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal-space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters.
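The bootstrap step can be illustrated on a toy estimator: resample the data with replacement, re-estimate, and take the standard deviation of the replicates. The data and estimator below are assumptions; the defocus/astigmatism fitting itself is not reproduced here.

```python
import math
import random

def bootstrap_std(data, estimator, n_boot=2000, seed=0):
    """Standard deviation of an estimator via bootstrap resampling,
    as CTER does for defocus and astigmatism (here on a toy 1-D mean)."""
    rng = random.Random(seed)
    n = len(data)
    stats = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        stats.append(estimator(resample))
    m = sum(stats) / n_boot
    return math.sqrt(sum((s - m) ** 2 for s in stats) / (n_boot - 1))

rng = random.Random(42)
measurements = [2.0 + rng.gauss(0.0, 0.3) for _ in range(200)]
se = bootstrap_std(measurements, lambda d: sum(d) / len(d))
print(round(se, 3))  # near the analytic standard error 0.3/sqrt(200) ~ 0.021
```

A micrograph whose bootstrap standard deviation is unusually large is a natural candidate for automatic rejection, which is how CTER uses these numbers.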
Estimation of solid earth tidal parameters and FCN with VLBI
International Nuclear Information System (INIS)
Krásná, H.
2012-01-01
Measurements of the space-geodetic technique VLBI (Very Long Baseline Interferometry) are influenced by a variety of processes which have to be modelled and supplied as a priori information in the analysis of the space-geodetic data. The increasing accuracy of the VLBI measurements allows access to these parameters and provides the possibility of validating them directly from the measured data. The gravitational attraction of the Moon and the Sun causes a deformation of the Earth's surface which can reach several decimetres in the radial direction during a day. The displacement is a function of the so-called Love and Shida numbers. Owing to the present accuracy of the VLBI measurements, the parameters have to be specified as complex numbers, where the imaginary parts describe the anelasticity of the Earth's mantle. Moreover, it is necessary to distinguish between the single tides within the various frequency bands. In this thesis, complex Love and Shida numbers of twelve diurnal and five long-period tides included in the solid Earth tidal displacement modelling are estimated directly from 27 years of VLBI measurements (1984.0 - 2011.0). Furthermore, the period of the Free Core Nutation (FCN) is estimated, which shows up in the frequency-dependent solid Earth tidal displacement as well as in the nutation model describing the motion of the Earth's axis in space. The FCN period in both models is treated as a single parameter and is estimated in a rigorous global adjustment of the VLBI data. The obtained value of -431.18 ± 0.10 sidereal days differs slightly from the conventional value of -431.39 sidereal days given in the IERS Conventions 2010. An empirical FCN model based on variable amplitude and phase is determined, whose parameters are estimated in yearly steps directly within VLBI global solutions. (author)
Directory of Open Access Journals (Sweden)
Akatsuki Kimura
2015-03-01
Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize the likelihood. A (local) maximum of the likelihood or (local) minimum of the SSE can be identified efficiently using gradient approaches. The addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into the phenomena of interest.
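The gradient approach described above can be sketched on a one-parameter model: gradient descent on the SSE for y(t) = exp(-k·t). The synthetic data and step size are assumptions; the article's examples involve richer biological models, but the update rule is the same in spirit.

```python
import math

# Synthetic observations from a one-parameter decay model y(t) = exp(-k*t)
ts = [0.1 * i for i in range(50)]
k_true = 1.3
data = [math.exp(-k_true * t) for t in ts]

def sse_grad(k):
    # d/dk of sum (exp(-k*t) - y)**2
    return sum(2.0 * (math.exp(-k * t) - y) * (-t) * math.exp(-k * t)
               for t, y in zip(ts, data))

k, lr = 0.5, 0.02          # start away from the optimum; small fixed step
for _ in range(2000):
    k -= lr * sse_grad(k)  # steepest-descent update on the SSE
print(round(k, 4))  # -> 1.3
```

With noisy data or multiple parameters the SSE surface can have local minima, which is where the stochastic and sampling approaches mentioned above become necessary.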
Model parameters estimation and sensitivity by genetic algorithms
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca
2003-01-01
In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation, and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as with most optimization procedures, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and only later those which have little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also provides a qualitative ranking of their importance in contributing to the model output.
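The four GA operations named above (parent selection, crossover, replacement, mutation) can be sketched on a toy parameter-estimation problem: recovering the two coefficients of a line from data. Population size, rates, and the model are illustrative assumptions, not the reactor study's settings.

```python
import random

random.seed(3)
xs = [i / 10.0 for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]          # data generated with a=2, b=1

def fitness(ind):
    a, b = ind
    return -sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))  # maximise

def evolve(pop_size=60, gens=150, p_mut=0.15, elite=0.2):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(pop_size)]
    n_elite = int(elite * pop_size)
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:n_elite]]            # elitist replacement
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # parent selection
            child = [random.choice(g) for g in zip(p1, p2)]  # uniform crossover
            for i in range(2):                               # Gaussian mutation
                if random.random() < p_mut:
                    child[i] += random.gauss(0.0, 0.2)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

a_hat, b_hat = evolve()
print(round(a_hat, 1), round(b_hat, 1))  # best individual nears a=2, b=1
```

Logging the elite archive at each generation, as the paper suggests, would show the more sensitive parameter stabilising earlier than the less influential one.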
International Nuclear Information System (INIS)
Landsberg, P.T.; Evans, D.A.
1977-01-01
The subject is dealt with in chapters entitled: cosmology - some fundamentals; Newtonian gravitation - some fundamentals; the cosmological differential equation - the particle model and the continuum model; some simple Friedmann models; the classification of the Friedmann models; the steady-state model; universes with pressure; optical effects of the expansion according to various theories of light; optical observations and cosmological models. (U.K.)
Energy Technology Data Exchange (ETDEWEB)
Simard, G.; et al.
2017-12-20
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($\Lambda$CDM), and to models with single-parameter extensions to $\Lambda$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $\sigma_8 \Omega_{\rm m}^{0.25}=0.598 \pm 0.024$ from the lensing data alone with relatively weak priors placed on the other $\Lambda$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $\Lambda$CDM model. We find $\Omega_k = -0.012^{+0.021}_{-0.023}$ or $M_{\
Applicability of genetic algorithms to parameter estimation of economic models
Directory of Open Access Journals (Sweden)
Marcel Ševela
2004-01-01
Full Text Available The paper concentrates on the capability of genetic algorithms for parameter estimation of non-linear economic models. We test the ability of genetic algorithms to estimate the parameters of a demand function for durable goods, and simultaneously search for the parameters of the genetic algorithm that lead to the maximum effectiveness of the computation. Genetic algorithms combine deterministic iterative computation methods with stochastic methods. In the genetic algorithm approach, each possible solution is represented by one individual, and the lives of all generations of individuals unfold under a few parameters of the genetic algorithm. Our simulations resulted in an optimal mutation rate of 15% of all bits in the chromosomes and an optimal elitism rate of 20%. We could not determine an optimal generation size, because it correlates positively with the effectiveness of the genetic algorithm over the whole range studied, although its impact is decreasing. The genetic algorithm used was most sensitive to the mutation rate, and less so to the generation size; the sensitivity to the elitism rate is not as strong.
Automatic estimation of elasticity parameters in breast tissue
Skerl, Katrin; Cochran, Sandy; Evans, Andrew
2014-03-01
Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, and then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet which also contains the patient's study ID. This spreadsheet is readily available to physicians and clinical staff for further evaluation, which increases efficiency. The algorithm simplifies handling, especially for the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies the evaluation of data in clinical trials. Furthermore, reproducibility will be improved.
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
Basic MR sequence parameters systematically bias automated brain volume estimation
International Nuclear Information System (INIS)
Haller, Sven; Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte; Meuli, Reto; Thiran, Jean-Philippe; Krueger, Gunnar; Lovblad, Karl-Olof; Kober, Tobias
2016-01-01
Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 1.2 mm iso-voxel, no image filtering, LOCAL- 1.0 mm iso-voxel no image filtering, LOCAL+ 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume difference of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume difference of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry of up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)
Chloramine demand estimation using surrogate chemical and microbiological parameters.
Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose
2017-07-01
A model is developed to enable estimation of the chloramine demand in full-scale drinking water supplies, based on chemical and microbiological factors that affect the chloramine decay rate, via a nonlinear regression analysis method. The model is based on the organic character (specific ultraviolet absorbance, SUVA) of the water samples and a laboratory measure of the microbiological decay (Fm) of chloramine. The applicability of the model for estimation of the chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical tests comparing the experimental and predicted data. Results showed that the model was able to simulate and estimate the chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both the fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that, for the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool for managing chloramine disinfection.
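The fast/slow two-pathway idea can be sketched as a sum of two exponentials fitted to decay data. Everything below (rates, fractions, sampling times) is a schematic assumption rather than the paper's calibrated model, which additionally couples the rates to SUVA and the microbial factor Fm.

```python
import math

def chloramine(t, kf, ks, f=0.3, c0=2.0):
    """Two-pathway decay: a fraction f of the initial residual c0 decays
    at the fast rate kf, the rest at the slow rate ks."""
    return c0 * (f * math.exp(-kf * t) + (1 - f) * math.exp(-ks * t))

ts = [0, 1, 2, 4, 8, 16, 24, 48, 72]            # hours
obs = [chloramine(t, 0.30, 0.01) for t in ts]   # synthetic "plant" data

def sse(kf, ks):
    return sum((chloramine(t, kf, ks) - c) ** 2 for t, c in zip(ts, obs))

# Coarse grid search for the two rate constants (the paper uses nonlinear
# regression; a grid keeps this sketch dependency-free)
best = min(((i / 100.0, j / 1000.0) for i in range(5, 80) for j in range(1, 40)),
           key=lambda p: sse(*p))
print(best)  # -> (0.3, 0.01)
```

The chloramine demand over any interval then follows directly as the difference of the fitted residuals at the two times.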
Estimation of Snow Parameters from Dual-Wavelength Airborne Radar
Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew
1997-01-01
Estimation of snow characteristics from airborne radar measurements would complement in situ measurements. While in situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate two parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in situ measurements, examination of the large-scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.
Energy Technology Data Exchange (ETDEWEB)
Linden, S.
2010-04-15
Because the measured properties of the dark energy component are consistent with a cosmological constant, Λ, the cosmological standard model is referred to as the Λ-Cold-Dark-Matter (ΛCDM) model. Despite its overall success, this model suffers from various problems. The existence of a cosmological constant raises fundamental questions. Attempts to describe it as the energy contribution of the vacuum, as follows from Quantum Field Theory, have failed quantitatively. In consequence, a large number of alternative models have been developed to describe the dark energy component: modified gravity, additional dimensions, Quintessence models. Astrophysical effects have also been considered as a way to mimic an accelerated expansion. The basics of the ΛCDM model and the various attempts at explaining dark energy are outlined in this thesis. Another major problem of the model comes from the dependence of the fit results on a number of a priori assumptions and parameterization effects. Today, combined analyses of the various cosmological probes are performed to extract the parameters of the model. Due to a wrong model assumption or a bad parameterization of the real physics, one might end up measuring with high precision something which is not there. We show that, indeed, due to the high precision of modern cosmological measurements, purely kinematic approaches to distance measurements no longer yield valid fit results except for accidental special cases, and that a fit of the exact (integral) redshift-distance relation is necessary. The main results of this work concern the use of the CPL parameterization of dark energy when coping with the dynamics of tracker solutions of Quintessence models, and the risk of introducing biases on the parameters due to the possibly prohibited extrapolation to arbitrarily high redshifts of the SN type Ia magnitude calibration relation, which is obtained in the low-redshift regime. Whereas the risks of applying CPL shows up to be
A parameter tree approach to estimating system sensitivities to parameter sets
International Nuclear Information System (INIS)
Jarzemba, M.S.; Sagar, B.
2000-01-01
A post-processing technique for determining relative system sensitivity to groups of parameters and system components is presented. It is assumed that an appropriate parametric model is used to simulate system behavior using Monte Carlo techniques and that a set of realizations of system output(s) is available. The objective of our technique is to analyze the input vectors and the corresponding output vectors (that is, post-process the results) to estimate the relative sensitivity of the output to input parameters (taken singly and as a group) and thereby rank them. This technique differs from design-of-experiments techniques in that a partitioning of the parameter space is not required before the simulation. A tree structure (which looks similar to an event tree) is developed to better explain the technique. Each limb of the tree represents a particular combination of parameters or a combination of system components. For convenience, and to distinguish it from the event tree, we call it the parameter tree. To construct the parameter tree, the samples of input parameter values are treated as either a '+' or a '-' based on whether the sampled parameter value is greater than or less than a specified branching criterion (e.g., mean, median, percentile of the population). The corresponding system outputs are also segregated into similar bins. Partitioning the first parameter into a '+' or a '-' bin creates the first level of the tree, containing two branches. At the next level, realizations associated with each first-level branch are further partitioned into two bins using the branching criterion on the second parameter, and so on until the tree is fully populated. Relative sensitivities are then inferred from the number of samples associated with each branch of the tree. The parameter tree approach is illustrated by applying it to a number of preliminary simulations of the proposed high-level radioactive waste repository at Yucca Mountain, NV. Using a
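A two-parameter sketch of the tree construction, using the median as the branching criterion. Comparing branch means (a simplification of the paper's count-based inference, and a toy model rather than the repository simulations) correctly ranks the influential parameter:

```python
import random
import statistics

random.seed(0)
# toy Monte Carlo run: output depends strongly on x1, weakly on x2
samples = [(random.random(), random.random()) for _ in range(1000)]
outputs = [10 * x1 + 0.1 * x2 for x1, x2 in samples]

med1 = statistics.median(x1 for x1, _ in samples)
med2 = statistics.median(x2 for _, x2 in samples)

# populate the four leaves of a two-level parameter tree ('+' = above median)
tree = {}
for (x1, x2), y in zip(samples, outputs):
    branch = ('+' if x1 > med1 else '-') + ('+' if x2 > med2 else '-')
    tree.setdefault(branch, []).append(y)

# sensitivity inferred from how strongly each branching splits the mean output
mean = lambda b: statistics.fmean(tree[b])
split_x1 = abs((mean('++') + mean('+-')) / 2 - (mean('-+') + mean('--')) / 2)
split_x2 = abs((mean('++') + mean('-+')) / 2 - (mean('+-') + mean('--')) / 2)
```

Here `split_x1` far exceeds `split_x2`, so x1 is ranked as the sensitive parameter, mirroring how the tree's branch populations reveal influence.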
Cosmological applications in Kaluza—Klein theory
International Nuclear Information System (INIS)
Wanas, M.I.; Nashed, Gamal G. L.; Nowaya, A.A.
2012-01-01
The field equations of Kaluza—Klein (KK) theory have been applied in the domain of cosmology. These equations are solved for a flat universe by taking the gravitational and cosmological constants as functions of time t. We use a Taylor expansion of the cosmological function, Λ(t), up to first order in the time t. The cosmological parameters are calculated and some cosmological problems are discussed. (geophysics, astronomy, and astrophysics)
Estimating unknown parameters in haemophilia using expert judgement elicitation.
Fischer, K; Lewandowski, D; Janssen, M P
2013-09-01
The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18 years, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
Observable cosmology and cosmological models
International Nuclear Information System (INIS)
Kardashev, N.S.; Lukash, V.N.; Novikov, I.D.
1987-01-01
The modern state of observational cosmology is briefly discussed. Among other things, the problem of determining the Hubble constant and the deceleration parameter is considered. Within the 'pancake' theory, the hot (neutrino) cosmological model explains the large-scale structure of the Universe well but does not explain galaxy formation. A cold cosmological model explains the formation of light objects well but contradicts data on the large-scale structure.
NEWBOX: A computer program for parameter estimation in diffusion problems
International Nuclear Information System (INIS)
Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.
1989-01-01
In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least three important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, the authors calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the three considerations listed at the outset, they developed a computer program, NEWBOX, usable on a personal computer, to calculate the fractional release of material from four different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions.
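A minimal sketch of the fitting idea, assuming the standard short-time release formula for a slab and replacing the Box constrained-simplex search with a grid search over log10(D), which enforces the positivity constraint by construction (NEWBOX itself is not reproduced; all values are illustrative):

```python
import math

L = 1.0  # slab half-thickness (cm), hypothetical

def frac_released(t, D):
    """Short-time fractional release from a slab (standard diffusion result)."""
    return 2.0 * math.sqrt(D * t / (math.pi * L * L))

times = [1, 2, 4, 8, 16]                 # sampling times
D_true = 1e-6                            # "unknown" diffusion coefficient
data = [frac_released(t, D_true) for t in times]

# searching over p = log10(D) keeps D > 0 automatically, mimicking the
# positivity constraint handled by the Box complex method in the paper
best = min((sum((frac_released(t, 10 ** p) - y) ** 2
                for t, y in zip(times, data)), p)
           for p in [x / 50 for x in range(-500, -101)])   # D in 1e-10..1e-2
D_hat = 10 ** best[1]
```

The log-parameter trick generalizes: any simplex or gradient-free optimizer run in log space respects the positivity of diffusion coefficients and rate constants.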
Zhang Yuan Zhong
2002-01-01
This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The ...
Statistical estimation of ultrasonic propagation path parameters for aberration correction.
Waag, Robert C; Astheimer, Jeffrey P
2005-05-01
Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION
Directory of Open Access Journals (Sweden)
S. Kalaivani
2012-07-01
In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. A control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and is the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to doing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proved effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, a Stenman model with known nonlinear structure and unknown parameters can be estimated.
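The Stenman model is a one-parameter stiction description: the valve holds its position until the controller output leaves a dead-band around it. A sketch with the ant-colony search replaced by a plain grid search over the band width, to keep the example short (the paper's ACO machinery is not reproduced; data are synthetic):

```python
import random

def stenman(u, d):
    """One-parameter Stenman stiction model: the valve position y holds until
    the controller output u escapes a dead-band of width d around it."""
    y, out = 0.0, []
    for ut in u:
        if abs(ut - y) > d:
            y = ut
        out.append(y)
    return out

random.seed(1)
u = [random.uniform(-1, 1) for _ in range(200)]   # synthetic controller output
d_true = 0.25
y_obs = stenman(u, d_true)                        # "measured" valve positions

# colony-free stand-in for ACO: score candidate band widths on a grid and
# keep the one minimizing the squared output error, which is exactly the
# objective the paper's ant search minimizes
errs = {k / 100: sum((a - b) ** 2 for a, b in zip(stenman(u, k / 100), y_obs))
        for k in range(1, 51)}
d_hat = min(errs, key=errs.get)
```

Any global optimizer (ACO included) can replace the grid; the Stenman model's single parameter just makes exhaustive search trivial here.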
Sensitivity and parameter-estimation precision for alternate LISA configurations
International Nuclear Information System (INIS)
Vallisneri, Michele; Crowder, Jeff; Tinto, Massimo
2008-01-01
We describe a simple framework to assess the LISA scientific performance (more specifically, its sensitivity and expected parameter-estimation precision for prescribed gravitational-wave signals) under the assumption of failure of one or two inter-spacecraft laser measurements (links) and of one to four intra-spacecraft laser measurements. We apply the framework to the simple case of measuring the LISA sensitivity to monochromatic circular binaries, and the LISA parameter-estimation precision for the gravitational-wave polarization angle of these systems. Compared to the six-link baseline configuration, the five-link case is characterized by a small loss in signal-to-noise ratio (SNR) in the high-frequency section of the LISA band; the four-link case shows a reduction by a factor of √2 at low frequencies, and by up to ∼2 at high frequencies. The uncertainty in the estimate of polarization, as computed in the Fisher-matrix formalism, also worsens when moving from six to five, and then to four links: this can be explained by the reduced SNR available in those configurations (except for observations shorter than three months, where five and six links do better than four even with the same SNR). In addition, we prove (for generic signals) that the SNR and Fisher matrix are invariant with respect to the choice of a basis of TDI observables; rather, they depend only on which inter-spacecraft and intra-spacecraft measurements are available
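The Fisher-matrix formalism used above for the polarization-angle precision can be illustrated on a toy monochromatic signal; the model h(t) = A sin(ωt + φ), the sampling, and the noise level below are invented for illustration and are not the LISA response:

```python
import math

# toy monochromatic signal h(t) = A*sin(w*t + phi) in white noise of std sigma
w, A, phi, sigma = 2.0, 1.0, 0.3, 0.1
ts = [k * 0.01 for k in range(1000)]          # 10 s sampled at 100 Hz

def derivs(t):
    """Partial derivatives of h with respect to (A, phi)."""
    return (math.sin(w * t + phi), A * math.cos(w * t + phi))

# Fisher matrix F_ij = sum_t (dh/dtheta_i)(dh/dtheta_j) / sigma^2
F = [[0.0, 0.0], [0.0, 0.0]]
for t in ts:
    d = derivs(t)
    for i in range(2):
        for j in range(2):
            F[i][j] += d[i] * d[j] / sigma ** 2

# invert the 2x2 Fisher matrix; the square roots of the diagonal of F^-1
# give the 1-sigma parameter uncertainties
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
sigma_A = math.sqrt(F[1][1] / det)
sigma_phi = math.sqrt(F[0][0] / det)
```

As in the paper, a lower SNR (larger sigma, shorter observation) directly inflates these Fisher-matrix uncertainties.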
Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors
Directory of Open Access Journals (Sweden)
Manoela Ojeda
2014-01-01
Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
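Stroke counting from an arm-worn accelerometer essentially reduces to peak detection; a sketch on synthetic data (the signal shape, sampling rate, and threshold are invented, not taken from the study):

```python
import math

# synthetic upper-arm acceleration: one dominant peak per propulsion stroke
fs = 40                       # sampling rate in Hz (hypothetical)
push_freq = 1.0               # true push frequency in strokes per second
duration = 10                 # seconds of data
acc = [math.sin(2 * math.pi * push_freq * k / fs) ** 3
       for k in range(duration * fs)]

def count_strokes(sig, thresh=0.5):
    """Count local maxima above a threshold -- one count per stroke."""
    return sum(1 for i in range(1, len(sig) - 1)
               if sig[i] > thresh and sig[i - 1] < sig[i] >= sig[i + 1])

strokes = count_strokes(acc)
est_push_freq = strokes / duration    # strokes per second
```

Real accelerometer data would first need gravity removal and band-pass filtering; the counting step stays the same.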
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
A method for model identification and parameter estimation
International Nuclear Information System (INIS)
Bambach, M; Heinkenschloss, M; Herty, M
2013-01-01
We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)
Transport parameter estimation from lymph measurements and the Patlak equation.
Watson, P D; Wolf, M B
1992-01-01
Two methods of estimating protein transport parameters for plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares parameters for the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and the speed and convenience of this are compared with a commercially available gradient method. The results from both of these methods were different from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]. It is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role for transmembrane transport at the exit end of a membrane channel under all conditions of lymph flow rate and that the statement that diffusion becomes zero at high lymph flow rate depends on a mathematical definition of diffusion.
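A sketch of the matrix (grid) search idea applied to the standard Patlak sieving relation, recovering the reflection coefficient σ and permeability-surface area product PS from synthetic lymph data; the grids and flow values are illustrative only:

```python
import math

def sieving(jv, sigma, ps):
    """Patlak relation: lymph-to-plasma concentration ratio at lymph flow jv."""
    pe = jv * (1 - sigma) / ps                    # Peclet number
    return (1 - sigma) / (1 - sigma * math.exp(-pe))

flows = [0.5, 1, 2, 5, 10, 20]                    # lymph flow rates
true_sigma, true_ps = 0.8, 2.0
data = [sieving(j, true_sigma, true_ps) for j in flows]

# exhaustive matrix search over (sigma, PS), keeping the least-squares best
best = min(
    (sum((sieving(j, s / 100, p / 10) - y) ** 2 for j, y in zip(flows, data)),
     s / 100, p / 10)
    for s in range(1, 100) for p in range(1, 100))
sse, sigma_hat, ps_hat = best
```

Note the sieving ratio tends to 1 - σ only as jv grows; at finite flows the exponential (diffusive) term still matters, which is the point the abstract makes about diffusion at high lymph flow.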
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
Synchronization and parameter estimations of an uncertain Rikitake system
International Nuclear Information System (INIS)
Aguilar-Ibanez, Carlos; Martinez-Guerra, Rafael; Aguilar-Lopez, Ricardo; Mata-Machuca, Juan L.
2010-01-01
In this Letter we address the synchronization and parameter estimation of the uncertain Rikitake system, under the assumption that the state is partially known. To this end we use the master/slave scheme in conjunction with the adaptive control technique. Our control approach consists of proposing a slave system which has to follow asymptotically the uncertain Rikitake system, referred to as the master system. The gains of the slave system are adjusted continually according to a convenient adaptation control law, until the measurable output errors converge to zero. The convergence analysis is carried out by using Barbalat's lemma. In this context, uncertainty means that although the system structure is known, only partial knowledge of the corresponding parameter values is available.
Multivariate phase type distributions - Applications and parameter estimation
DEFF Research Database (Denmark)
Meisch, David
The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution alternative distributions are at hand and well understood, many of these belonging...... and statistical inference, is the multivariate normal distribution. Unfortunately only little is known about the general class of multivariate phase type distribution. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate...... projects and depend on reliable cost estimates. The Successive Principle is a group analysis method primarily used for analyzing medium to large projects in relation to cost or duration. We believe that the mathematical modeling used in the Successive Principle can be improved. We suggested a novel...
Energy parameter estimation in solar powered wireless sensor networks
Mousa, Mustafa
2014-02-24
The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.
Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation, based on indicial functions, separated the resulting aerodynamic forces and moments into static terms, purely rotary terms, and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.
Optimization-based particle filter for state and parameter estimation
Institute of Scientific and Technical Information of China (English)
Li Fu; Qi Fei; Shi Guangming; Zhang Li
2009-01-01
In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. The algorithm is applied in a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
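For reference, the standard (bootstrap) particle filter that the optimized proposal improves upon can be sketched on a scalar linear-Gaussian toy problem (model and noise levels are invented):

```python
import math
import random

random.seed(42)
N = 2000                      # number of particles
q, r = 0.1, 0.5               # process and measurement noise std

# simulate a scalar random-walk state with noisy observations
x, truth, obs = 0.0, [], []
for _ in range(50):
    x += random.gauss(0, q)
    truth.append(x)
    obs.append(x + random.gauss(0, r))

# bootstrap particle filter: propagate, weight by likelihood, resample
particles = [0.0] * N
est = []
for y in obs:
    particles = [p + random.gauss(0, q) for p in particles]
    weights = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
    total = sum(weights)
    particles = random.choices(particles, [w_ / total for w_ in weights], k=N)
    est.append(sum(particles) / N)

rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, truth)) / len(truth))
```

The paper's method replaces the blind transition-prior proposal above with one shifted toward the observation by steepest descent, which concentrates particles in the high-likelihood region.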
Fast optimization algorithms and the cosmological constant
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10⁻¹²⁰ in a randomly generated 10⁹-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
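A toy illustration of "better than brute force" (this is not the paper's algorithm; the landscape size, charge ranges, and greedy heuristic are all invented): in a Bousso-Polchinski-like discretuum with only 200 directions, a greedy sign-flip descent finds |Λ| well below the bare scale while visiting only a vanishing fraction of the 2²⁰⁰ vacua.

```python
import random

random.seed(7)
D = 200                                      # toy number of flux directions
q = [random.uniform(0.5, 1.5) for _ in range(D)]   # charge-like increments
lam0 = -sum(q) / 2                           # bare negative vacuum energy

def cc(signs):
    """Cosmological constant of the vacuum labelled by a sign vector."""
    return lam0 + sum(s * qi for s, qi in zip(signs, q))

# greedy local search: flip any sign that reduces |Lambda|; repeat until a
# local optimum is reached -- far cheaper than enumerating all 2^D vacua
signs = [1] * D
current = cc(signs)
improved = True
while improved:
    improved = False
    for i in range(D):
        signs[i] *= -1
        trial = cc(signs)
        if abs(trial) < abs(current):
            current, improved = trial, True
        else:
            signs[i] *= -1               # undo the flip
```

Reaching 10⁻¹²⁰ requires the far more refined methods of the paper; the point here is only that structured search beats enumeration by an enormous margin.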
Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control
Eshak, Peter B.
Research efforts have increased in recent years toward the development of intelligent fault-tolerant control laws, which are capable of helping the pilot to safely maintain aircraft control at post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault-tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online PID to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws. This controller is known as the hybrid adaptive controller. This last control design outperformed the two earlier designs in terms of less neural network effort and better tracking quality. The performance of the online PID has an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, PID is not perfect and the online estimation process has some inherent issues; the online PID estimates are primarily affected by delays and biases. In order to ensure that reliable estimates are passed to the controller, the estimator consumes some time to converge. Moreover, the estimator will often converge to a biased value. This thesis conducts a sensitivity analysis for the estimation issues, delay and bias, and their effect on the tracking quality. In addition, the performance of the hybrid controller as compared to the direct adaptive controller is explored. In order to serve this purpose, a simulation environment in MATLAB/SIMULINK has been created. The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to
Estimation of modal parameters using bilinear joint time frequency distributions
Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.
2007-07-01
In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The smoothed pseudo Wigner-Ville distribution, a member of Cohen's class of distributions, is used to decouple vibration modes completely so that each mode can be studied separately. This distribution reduces the cross-terms that are troublesome in the Wigner-Ville distribution while retaining its resolution. The method was applied to highly damped systems, and the results were superior to those obtained via other conventional methods.
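For orientation, a minimal NumPy sketch of the discrete Wigner-Ville distribution for a complex analytic signal; the time and frequency smoothing windows that make it the *smoothed pseudo* WVD used in the paper are omitted for brevity, and the signal is illustrative:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex analytic signal.

    Returns an (N, N) array: rows are time instants, columns are frequency
    bins; a tone at normalized frequency f0 appears at bin 2*f0*N because
    the WVD doubles the frequency axis.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)              # lags available at time n
        tau = np.arange(-tau_max, tau_max + 1)
        acf = x[n + tau] * np.conj(x[n - tau])   # instantaneous autocorrelation
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = acf                    # wrap negative lags for the FFT
        W[n] = np.fft.fft(kernel).real           # FFT over the lag variable
    return W

# A pure tone at normalized frequency f0 = 0.125 concentrates at bin 2*f0*N = 16.
N, f0 = 64, 0.125
x = np.exp(2j * np.pi * f0 * np.arange(N))
W = wigner_ville(x)
print(int(np.argmax(W[N // 2])))  # -> 16
```

For multi-component signals this plain WVD produces the cross-terms mentioned above; the smoothing of the SPWVD suppresses them.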
Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver
Kang, Ling; Zhou, Liwei
2018-02-01
The Muskingum model is an effective flood routing technique in hydrology and water resources engineering. With the development of optimization technology, a growing number of variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria, which describe the quality of the estimated outflows and allow the accuracy of flood routing to be compared across models, and its estimated outflows were closer to the observed outflows than those of the other models.
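A minimal sketch of routing with a nonlinear Muskingum model of the common form S = K[xI + (1-x)O]^m; the NVPNLMM of the paper additionally lets the parameters vary during the event, which is omitted here, and the parameter values are illustrative:

```python
def muskingum_route(inflow, K, x, m, dt=1.0, S0=None):
    """Route an inflow hydrograph through nonlinear storage S = K*[x*I + (1-x)*O]**m."""
    if S0 is None:                 # start at equilibrium with the first inflow value
        S0 = K * inflow[0] ** m
    S, outflow = S0, []
    for I in inflow:
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)  # invert the storage relation
        outflow.append(O)
        S = S + dt * (I - O)                            # continuity: dS/dt = I - O
    return outflow

# Sanity check: with constant inflow and equilibrium initial storage,
# the routed outflow stays equal to the inflow.
out = muskingum_route([10.0] * 5, K=0.8, x=0.2, m=1.3)
print(out)  # -> approximately [10.0, 10.0, 10.0, 10.0, 10.0]
```

Parameter estimation, whether with Excel Solver as in the paper or any other optimizer, then amounts to minimizing the misfit between routed and observed outflows over (K, x, m).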
Estimating the parameters of a generalized lambda distribution
International Nuclear Information System (INIS)
Fournier, B.; Rupin, N.; Najjar, D.; Iost, A.; Bigerelle, M.; Wilcox, R.
2007-01-01
The method of moments is a popular technique for estimating the parameters of a generalized lambda distribution (GLD), but published results suggest that the percentile method gives superior results. However, the percentile method cannot be implemented in an automatic fashion, and automatic methods, like the starship method, can lead to prohibitive execution times with large sample sizes. A new estimation method is proposed that is automatic (it does not require the use of special tables or graphs) and reduces the computational time. Based partly on the usual percentile method, this new method also requires choosing which quantile u to use when fitting a GLD to data. The choice of u is studied, and it is found that the best choice depends on the final goal of the modeling process. The sampling distribution of the new estimator is studied and compared to the sampling distributions of previously proposed estimators. Naturally, all estimators are biased, and here it is found that the bias becomes negligible with sample sizes n ≥ 2×10³. The 0.025 and 0.975 quantiles of the sampling distribution are investigated, and the difference between these quantiles is found to decrease proportionally to 1/√n. The same results hold for the moment and percentile estimates. Finally, the influence of the sample size is studied when a normal distribution is modeled by a GLD. Both bounded and unbounded GLDs are used, and the bounded GLD turns out to be the most accurate. Indeed, it is shown that, up to n = 10⁶, bounded GLD modeling cannot be rejected by the usual goodness-of-fit tests. (authors)
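For reference, the GLD is defined through its quantile function, which is why percentile-type estimation is natural: model quantiles are available in closed form. A short sketch, assuming the Ramberg-Schmeiser parameterization (the parameter values below are illustrative):

```python
def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function Q(u) of the Ramberg-Schmeiser generalized lambda distribution."""
    return lam1 + (u ** lam3 - (1.0 - u) ** lam4) / lam2

# When lam3 == lam4 the distribution is symmetric about lam1, so the median is lam1.
print(gld_quantile(0.5, lam1=2.0, lam2=1.0, lam3=0.14, lam4=0.14))  # -> 2.0
```

Random variates follow by inverse-transform sampling, X = Q(U) with U uniform on (0, 1), which is also how goodness-of-fit studies like the one above are typically set up.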
Analytic continuation by duality estimation of the S parameter
International Nuclear Information System (INIS)
Ignjatovic, S. R.; Wijewardhana, L. C. R.; Takeuchi, T.
2000-01-01
We investigate the reliability of the analytic continuation by duality (ACD) technique in estimating the electroweak S parameter for technicolor theories. The ACD technique, which is an application of finite energy sum rules, relates the S parameter for theories with unknown particle spectra to known OPE coefficients. We identify the sources of error inherent in the technique and evaluate them for several toy models to see if they can be controlled. The evaluation of errors is done analytically and all relevant formulas are provided in appendixes including analytical formulas for approximating the function 1/s with a polynomial in s. The use of analytical formulas protects us from introducing additional errors due to numerical integration. We find that it is very difficult to control the errors even when the momentum dependence of the OPE coefficients is known exactly. In realistic cases in which the momentum dependence of the OPE coefficients is only known perturbatively, it is impossible to obtain a reliable estimate. (c) 2000 The American Physical Society
A robust methodology for modal parameters estimation applied to SHM
Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio
2017-10-01
The subject of structural health monitoring has drawn increasing attention in recent years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced in successive studies. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, this first clustering analysis indicates which estimates correspond to physical modes. To avoid the detection of spurious modes or the loss of physical ones, a second clustering step is then performed, consisting in the data mining of information gathered from the first step. To demonstrate the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results proved more robust and accurate compared to those obtained from methods based on one-step clustering analysis.
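The first clustering step can be illustrated with a toy example: frequency estimates from parametric models of increasing order scatter tightly around the physical modes, so grouping them is a clustering problem. In one dimension, single-linkage clustering with a distance threshold reduces to splitting the sorted estimates at large gaps (the data and the 1 Hz threshold are illustrative, not the authors' metric):

```python
import numpy as np

rng = np.random.default_rng(0)
true_freqs = [5.0, 12.0, 30.0]   # Hz, synthetic "physical" modes

# Each of 10 model orders yields a slightly perturbed estimate of every mode.
estimates = np.sort(np.array(
    [f + rng.normal(0.0, 0.05) for _ in range(10) for f in true_freqs]))

# 1-D single-linkage clustering: start a new cluster wherever the gap exceeds 1 Hz.
n_clusters = 1 + int(np.sum(np.diff(estimates) > 1.0))
print(n_clusters)  # -> 3
```

In practice the clustering is done on several modal features at once (frequency, damping, mode-shape correlation), which is where the paper's correlation metrics enter.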
Parameter estimation in space systems using recurrent neural networks
Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.
1991-01-01
The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while, through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network for two configurations corresponding to the early and late stages of the spacecraft's on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted to train and test the predictive capabilities of these networks beyond nominal spacecraft operations.
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility for errors remains with the author. I also want to express my thanks...
Periodic orbits of hybrid systems and parameter estimation via AD
International Nuclear Information System (INIS)
Guckenheimer, John; Phipps, Eric Todd; Casey, Richard
2004-01-01
Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method (GM00, Phi03). Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
Axion cold dark matter in nonstandard cosmologies
International Nuclear Information System (INIS)
Visinelli, Luca; Gondolo, Paolo
2010-01-01
We study the parameter space of cold dark matter axions in two cosmological scenarios with nonstandard thermal histories before big bang nucleosynthesis: the low-temperature reheating (LTR) cosmology and the kination cosmology. If the Peccei-Quinn symmetry breaks during inflation, we find more allowed parameter space in the LTR cosmology than in the standard cosmology and less in the kination cosmology. On the contrary, if the Peccei-Quinn symmetry breaks after inflation, the Peccei-Quinn scale is orders of magnitude higher than standard in the LTR cosmology and lower in the kination cosmology. We show that the axion velocity dispersion may be used to distinguish some of these nonstandard cosmologies. Thus, axion cold dark matter may be a good probe of the history of the Universe before big bang nucleosynthesis.
DEFF Research Database (Denmark)
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik
1995-01-01
Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
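The reduction of the multi-criterion problem to a single criterion by weighted summation can be sketched as follows; the two quadratic criteria and the weights are illustrative stand-ins, not the authors' kinetic objective:

```python
def weighted_objective(k, weights, criteria):
    """Scalar objective: weighted sum of the individual criteria."""
    return sum(w * c(k) for w, c in zip(weights, criteria))

# Two illustrative quadratic criteria standing in for, e.g., the conversion
# misfit and a thermodynamic-consistency penalty.
criteria = (lambda k: (k - 2.0) ** 2, lambda k: (k - 4.0) ** 2)
weights = (0.75, 0.25)

# Minimize by ternary search on [0, 10]; any 1-D minimizer would do here.
lo, hi = 0.0, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
    if weighted_objective(m1, weights, criteria) < weighted_objective(m2, weights, criteria):
        hi = m2
    else:
        lo = m1
k_opt = 0.5 * (lo + hi)
print(round(k_opt, 6))  # -> 2.5, the weighted average 0.75*2 + 0.25*4
```

The weights encode the trade-off between fitting the conversion data and honoring the thermodynamic constraints; shifting them moves the optimum along the Pareto front of the underlying multi-criterion problem.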
Estimation of Parameters of CCF with Staggered Testing
International Nuclear Information System (INIS)
Kim, Myung-Ki; Hong, Sung-Yull
2006-01-01
Common cause failures are extremely important in reliability analysis and can be a dominant risk contributor in a highly reliable system such as a nuclear power plant. Of particular concern are common cause failures (CCF) that degrade the redundancy or diversity implemented to improve the reliability of systems. Most analyses of the parameters of CCF models such as the beta factor model, the alpha factor model, and the MGL (Multiple Greek Letters) model deal with a system under a non-staggered testing strategy. In non-staggered testing, all components are tested at the same time (or at least in the same shift); in staggered testing, if there is a failure in the first component, all the other components are tested immediately, and if it succeeds, no more is done until the next scheduled testing time. Both strategies are applied in nuclear power plants. The strategy, however, is not described explicitly in the technical specifications, but implicitly in the periodic test procedure. For example, some redundant components particularly important to safety are tested with a staggered testing strategy, while others are tested with a non-staggered testing strategy. This paper presents parameter estimators for CCF models such as the beta factor model, the MGL model, and the alpha factor model under a staggered testing strategy. In addition, a new CCF model, the rho factor model, is proposed and its parameter estimator is presented under a staggered testing strategy
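For orientation, the standard point estimates of the alpha factors are simple event-count ratios, α_k = n_k / Σ_j n_j, where n_k is the number of events failing exactly k components; a minimal sketch with illustrative counts (the staggered-testing corrections derived in the paper are not included):

```python
def alpha_factors(event_counts):
    """Point estimates alpha_k = n_k / sum_j n_j from CCF event counts.

    event_counts[k-1] is the number of observed events failing exactly
    k components of a common cause component group.
    """
    total = sum(event_counts)
    return [n / total for n in event_counts]

# Illustrative counts for a group of 3 components: 20 single failures,
# 4 double CCF events, 1 triple CCF event.
alphas = alpha_factors([20, 4, 1])
print(alphas)  # -> [0.8, 0.16, 0.04]
```

The beta factor of the simpler model plays the analogous role of the fraction of failures that are common cause; the testing strategy changes how these ratios map onto CCF probabilities, which is the subject of the paper.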
Estimating negative binomial parameters from occurrence data with detection times.
Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub
2016-11-01
The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available, then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for, provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
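The identifiability problem can be made concrete: occurrence data only determine P(presence) = 1 - P(0), and for a negative binomial with mean m and aggregation index k, P(0) = (k/(k+m))^k, so different (m, k) pairs can produce identical occurrence probabilities. A small numerical check (parameter values illustrative):

```python
def nb_prob_zero(mean, k):
    """P(X = 0) for a negative binomial with given mean and aggregation index k."""
    return (k / (k + mean)) ** k

# Two distributions with different means but the same occurrence probability:
p0_a = nb_prob_zero(mean=1.0, k=1.0)                    # P(0) = 0.5
p0_b = nb_prob_zero(mean=2.0 / 0.5 ** 0.5 - 2.0, k=2.0)  # m solves (2/(2+m))^2 = 0.5
print(abs(p0_a - p0_b) < 1e-12)  # -> True
```

Recording detection times breaks this degeneracy because, under proportionate sampling, the waiting time to first detection carries information about abundance beyond mere presence.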
Estimation Parameters And Modelling Zero Inflated Negative Binomial
Directory of Open Access Journals (Sweden)
Cindy Cahyaning Astuti
2016-11-01
Regression analysis is used to determine the relationship between one or several response variables (Y) and one or several predictor variables (X). A regression model between predictor variables and a Poisson-distributed response variable is called a Poisson regression model. Since Poisson regression requires equality of mean and variance, it is not appropriate for overdispersed data (variance higher than the mean). The Poisson regression model is commonly used to analyze count data, but in count data one often encounters a large proportion of observations with zero value in the response variable (zero inflation), and Poisson regression cannot handle this excess of zeros. An alternative model, more suitable for overdispersed data and able to handle excess zeros in the response variable, is the zero-inflated negative binomial (ZINB) model. In this research, the ZINB model is applied to the case of Tetanus Neonatorum in East Java. The aim of this research is to examine the likelihood function, to formulate an algorithm to estimate the parameters of the ZINB model, and to apply the model to the case of Tetanus Neonatorum in East Java. The maximum likelihood estimation (MLE) method is used to estimate the parameters of the ZINB model, and the likelihood function is maximized using the Expectation Maximization (EM) algorithm. Test results of the ZINB regression model showed that the predictor variables with a partial significant effect in the negative binomial component are the percentage of visits by pregnant women and the percentage of deliveries assisted by maternal health personnel, while the predictor variable with a partial significant effect in the zero-inflation component is the percentage of neonatal visits.
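The probability mass function underlying the ZINB likelihood mixes a point mass at zero with a negative binomial: P(0) = π + (1-π)·NB(0) and P(y) = (1-π)·NB(y) for y > 0. A minimal sketch of direct pmf evaluation (the regression structure and the EM maximization of the paper are not reproduced; parameter values are illustrative):

```python
from math import exp, lgamma, log

def zinb_pmf(y, pi, r, p):
    """P(Y = y) for a zero-inflated negative binomial: a point mass at 0
    with weight pi, mixed with NB(r, p) (failures before the r-th success)
    with weight 1 - pi."""
    log_nb = (lgamma(y + r) - lgamma(r) - lgamma(y + 1)
              + r * log(p) + y * log(1.0 - p))
    nb = exp(log_nb)
    return pi + (1.0 - pi) * nb if y == 0 else (1.0 - pi) * nb

# The pmf sums to 1, and the excess zeros come from the pi component.
total = sum(zinb_pmf(y, pi=0.3, r=2.0, p=0.4) for y in range(200))
print(round(total, 6))  # -> 1.0
```

The log-likelihood of a sample is the sum of log pmf values; the EM algorithm referenced above treats the latent "structural zero vs. count zero" indicator as missing data.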
International Nuclear Information System (INIS)
Meusburger, C.; Schroers, B. J.
2008-01-01
Each of the local isometry groups arising in three-dimensional (3d) gravity can be viewed as a group of unit (split) quaternions over a ring which depends on the cosmological constant. In this paper we explain and prove this statement and use it as a unifying framework for studying Poisson structures associated with the local isometry groups. We show that, in all cases except for the case of Euclidean signature with positive cosmological constant, the local isometry groups are equipped with the Poisson-Lie structure of a classical double. We calculate the dressing action of the factor groups on each other and find, among others, a simple and unified description of the symplectic leaves of SU(2) and SL(2,R). We also compute the Poisson structure on the dual Poisson-Lie groups of the local isometry groups and on their Heisenberg doubles; together, they determine the Poisson structure of the phase space of 3d gravity in the so-called combinatorial description
Automated modal parameter estimation using correlation analysis and bootstrap sampling
Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.
2018-02-01
The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
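The role of bootstrap resampling can be illustrated on a scalar case: resampling a set of per-record natural-frequency estimates with replacement quantifies the spread of their mean. This is a toy stand-in (the paper resamples FRF datasets and identifies a full model on each resample); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-record estimates of one natural frequency (Hz), true value 10.
records = 10.0 + rng.normal(0.0, 0.1, size=25)

# Bootstrap: resample the records with replacement and re-estimate the mean.
boot_means = [rng.choice(records, size=records.size, replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo < records.mean() < hi)  # -> True
```

The spread (hi - lo) is an empirical confidence band obtained without any distributional assumption, which is what makes the technique attractive for judging which clustered modes are stable.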
Cosmological Probes for Supersymmetry
Directory of Open Access Journals (Sweden)
Maxim Khlopov
2015-05-01
The multi-parameter character of supersymmetric dark-matter models implies the combination of their experimental studies with astrophysical and cosmological probes. The physics of the early Universe provides nontrivial effects of non-equilibrium particles and primordial cosmological structures. Primordial black holes (PBHs) are a profound signature of such structures that may arise as a cosmological consequence of supersymmetric (SUSY) models. SUSY-based mechanisms of baryosynthesis can lead to the possibility of antimatter domains in a baryon asymmetric Universe. In the context of cosmoparticle physics, which studies the fundamental relationship of the micro- and macro-worlds, the development of SUSY illustrates the main principles of this approach, as the physical basis of the modern cosmology provides cross-disciplinary tests in physical and astronomical studies.
Jones, Bernard J. T.
2017-04-01
Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.
Conformal Cosmology and Supernova Data
Behnke, Danilo; Blaschke, David; Pervushin, Victor; Proskurin, Denis
2000-01-01
We define the cosmological parameters $H_{c,0}$, $\\Omega_{m,c}$ and $\\Omega_{\\Lambda, c}$ within the Conformal Cosmology as obtained by the homogeneous approximation to the conformal-invariant generalization of Einstein's General Relativity theory. We present the definitions of the age of the universe and of the luminosity distance in the context of this approach. A possible explanation of the recent data from distant supernovae Ia without a cosmological constant is presented.
Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation
Jardak, Seifallah
2014-04-01
Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased-array radar by providing better parametric identifiability, achieving higher spatial resolution, and enabling the design of complex beampatterns. To avoid jamming and enhance the signal-to-noise ratio, it is often desirable to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite-alphabet constant- and non-constant-envelope symbols. To generate finite-alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For this mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of symbols in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of the Gaussian and finite-alphabet symbols is derived. The second part of this thesis covers target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance; however, it requires a two-dimensional search, so its computational complexity is prohibitively high. Therefore, we propose a reduced-complexity, optimum-performance algorithm which uses the two-dimensional fast Fourier transform to jointly estimate the spatial location
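The Gaussian-to-finite-alphabet mapping can be sketched by partitioning the Gaussian density into M equiprobable regions via its CDF and assigning each region one PSK phase. This is a simplified, memoryless version of the idea (the thesis additionally shapes the cross-correlations of the resulting symbols):

```python
import math
import random

def gaussian_to_psk(samples, M):
    """Map standard Gaussian samples onto M-PSK symbols by splitting the
    Gaussian density into M equiprobable regions using the CDF."""
    symbols = []
    for x in samples:
        u = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # Phi(x), in (0, 1)
        region = min(int(u * M), M - 1)                   # equiprobable region index
        angle = 2.0 * math.pi * region / M
        symbols.append(complex(math.cos(angle), math.sin(angle)))
    return symbols

random.seed(0)
syms = gaussian_to_psk([random.gauss(0.0, 1.0) for _ in range(4000)], M=4)
# Constant envelope, and each of the 4 phases appears with probability ~1/4.
print(all(abs(abs(s) - 1.0) < 1e-12 for s in syms))  # -> True
```

Because the regions are equiprobable, the symbol stream is uniform over the alphabet while inheriting its ordering, and hence its correlation structure, from the underlying Gaussian process.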
Estimation of the Alpha Factor Parameters Using the ICDE Database
Energy Technology Data Exchange (ETDEWEB)
Kang, Dae Il; Hwang, M. J.; Han, S. H
2007-04-15
Detailed common cause failure (CCF) analysis generally needs data on the CCF events of other nuclear power plants, because CCF events occur rarely. KAERI has participated in the International Common Cause Failure Data Exchange (ICDE) project to obtain data on CCF events. The operation office of the ICDE project sent the CCF event data for EDGs to KAERI in December 2006. As a pilot study, we performed a detailed CCF analysis of the EDGs of Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs at each NPP. When offsite power and the two onsite EDGs are unavailable, one alternate AC diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the alpha factors and the CCF probabilities both for the case where the three EDGs were assumed to be identically designed and for the case where they were not. For the case where the three EDGs were assumed to be identically designed, the double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively, and the triple CCF probabilities as 2.39E-4 and 2.42E-4, respectively. As neither NPP has experienced 'fails to run' events, Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probability: the estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the case where the three EDGs are identical is higher than that where they are different; the estimated system unavailability of the former case is 3.4% higher than that of the latter. As future work, the estimation of the CCF parameters will be computerized.
Estimation of genetic parameters for reproductive traits in Shall sheep.
Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein
2013-06-01
The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in the Northwest of Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for the inclusion of fixed effects in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P < 0.05).
Multiphase flow parameter estimation based on laser scattering
Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.
2015-07-01
The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time.
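A toy version of the machine-learning step: classify synthetic scattering-signal features, here mean intensity and fluctuation level, into flow regimes with a nearest-centroid rule. The features, regime names, and numbers are illustrative, not the sensor's actual outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training features per regime: (mean scattered intensity, fluctuation std).
centers = {"bubbly": (1.0, 0.1), "slug": (3.0, 0.8), "annular": (6.0, 0.3)}
train = {name: c + rng.normal(0.0, 0.05, size=(50, 2)) for name, c in centers.items()}

# Nearest-centroid classifier: assign a new sample to the closest class mean.
centroids = {name: pts.mean(axis=0) for name, pts in train.items()}

def classify(sample):
    return min(centroids, key=lambda name: np.linalg.norm(sample - centroids[name]))

print(classify(np.array([2.9, 0.75])))  # -> slug
```

Any standard classifier could replace the nearest-centroid rule; the point is that regime monitoring needs only a decision boundary in feature space, not absolute flow-rate calibration.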
Estimating Phenomenological Parameters in Multi-Assets Markets
Raffaelli, Giacomo; Marsili, Matteo
Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
Multiphase flow parameter estimation based on laser scattering
International Nuclear Information System (INIS)
Vendruscolo, Tiago P; Fischer, Robert; Martelli, Cicero; Da Silva, Marco J; Rodrigues, Rômulo L P; Morales, Rigoberto E M
2015-01-01
The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many branches of industry. The measurement of the individual flow rates in such flows is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time. (paper)
Review of methods for level density estimation from resonance parameters
International Nuclear Information System (INIS)
Froehner, F.H.
1983-01-01
A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths taking account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)
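The threshold-corrected maximum-likelihood idea in category (iii) can be illustrated with a small simulation. The sketch below is a hypothetical minimal example, not taken from the review: it draws Porter-Thomas distributed neutron widths, discards those below a detection cutoff, recovers the mean width by maximizing the truncated likelihood over a grid, and infers the fraction of missed levels from the survival probability. The sample size, cutoff and grid are illustrative assumptions.

```python
import math
import random

def pt_loglik(widths, mean_w, cutoff):
    """Log-likelihood of Porter-Thomas neutron widths observed above a
    detection cutoff.  Widths follow mean_w * chi-squared(1 dof); the
    truncation is handled by dividing by the survival probability."""
    surv = math.erfc(math.sqrt(cutoff / (2.0 * mean_w)))
    logp = 0.0
    for g in widths:
        logp += -0.5 * math.log(2.0 * math.pi * g * mean_w) - g / (2.0 * mean_w)
        logp -= math.log(surv)
    return logp

rng = random.Random(3)
true_mean, cutoff = 1.0, 0.2
# Simulate widths and keep only those above the detection threshold.
all_widths = [true_mean * rng.gauss(0.0, 1.0) ** 2 for _ in range(2000)]
seen = [g for g in all_widths if g > cutoff]
# 1-D grid search for the maximum-likelihood mean width.
grid = [0.2 + 0.01 * i for i in range(300)]
mean_hat = max(grid, key=lambda m: pt_loglik(seen, m, cutoff))
# Estimated fraction of levels lost below the threshold.
missed = 1.0 - math.erfc(math.sqrt(cutoff / (2.0 * mean_hat)))
print(round(mean_hat, 2), round(missed, 2))
```

The recovered mean width should land near the true value of 1.0 even though roughly a third of the simulated levels fall below the cutoff.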
MANOVA, LDA, and FA criteria in clusters parameter estimation
Directory of Open Access Journals (Sweden)
Stan Lipovetsky
2015-12-01
Multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) apply such well-known criteria as Wilks' lambda, the Lawley–Hotelling trace, and Pillai's trace test for checking the quality of the solutions. The current paper suggests using these criteria to build objectives for finding cluster parameters, because optimizing such objectives corresponds to the best distinguishing between the clusters. The relation to Joreskog's classification of factor analysis (FA) techniques is also considered. The problem can be reduced to a multinomial parameterization, and a solution can be found by a nonlinear optimization procedure which yields estimates for the cluster centers and sizes. This approach to clustering works with data compressed into a covariance matrix, so it can be especially useful for big data.
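As a toy illustration of using such criteria as clustering objectives, the sketch below computes the univariate special case of Wilks' lambda, the ratio of within-group to total sum of squares; this is an assumed simplification for illustration, not the paper's multivariate procedure. A good cluster assignment drives the criterion toward zero.

```python
def wilks_lambda_1d(xs, labels):
    """Univariate special case of Wilks' lambda: within-group sum of
    squares divided by total sum of squares.  Values near 0 indicate
    well-separated clusters."""
    mean = sum(xs) / len(xs)
    sst = sum((x - mean) ** 2 for x in xs)
    ssw = 0.0
    for g in set(labels):
        grp = [x for x, lab in zip(xs, labels) if lab == g]
        gm = sum(grp) / len(grp)
        ssw += sum((x - gm) ** 2 for x in grp)
    return ssw / sst

# Two well-separated 1-D clusters: the correct assignment scores
# far lower (better) than a scrambled one.
xs = [0.1, 0.2, 0.0, 5.1, 5.0, 4.9]
good = [0, 0, 0, 1, 1, 1]
bad = [0, 1, 0, 1, 0, 1]
print(wilks_lambda_1d(xs, good) < wilks_lambda_1d(xs, bad))  # True
```

Minimizing this ratio over assignments (or over parametric cluster centers and sizes, as the paper proposes) is what "optimizing such objectives" means in practice.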
Transient analysis of intercalation electrodes for parameter estimation
Devan, Sheba
An essential part of integrating batteries as power sources in any application, be it a large scale automotive application or a small scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with the microprocessor based BMS (called "smart battery") helps prolong the life of the battery by operating in the optimal regime and provides accurate information regarding the battery to the end user. The main purposes of BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS should be prompt, which requires the methodology of extracting the parameters to be efficient in time. The traditional transient techniques applied so far may not be suitable due to reasons such as the inability to apply these techniques when the battery is under operation, long experimental time, etc. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short time response to a sinusoidal input perturbation, in the time domain is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short time response and the input perturbation are transformed into the frequency domain using Fast Fourier Transform
Energy Technology Data Exchange (ETDEWEB)
Maurya, D. Ch., E-mail: dcmaurya563@gmail.com; Zia, R., E-mail: rashidzya@gmail.com; Pradhan, A., E-mail: pradhan.anirudh@gmail.com [GLA University, Department of Mathematics, Institute of Applied Sciences and Humanities (India)
2016-10-15
We discuss spatially homogeneous and anisotropic string cosmological models in the Brans–Dicke theory of gravitation. For a spatially homogeneous metric, it is assumed that the expansion scalar θ is proportional to the shear scalar σ. This condition leads to A = kB{sup m}, where k and m are constants. With these assumptions, and also assuming a variable scale factor a = a(t), we find solutions of the Brans–Dicke field equations. Various phenomena like the Big Bang, the expanding universe, and the shift from anisotropy to isotropy are observed in the model. It can also be seen that in the early stages of the evolution of the universe, strings dominate over particles, whereas the universe is dominated by massive strings at late times. Some physical and geometrical behaviors of the models are also discussed and observed to be in good agreement with recent observations of SNe Ia supernovae.
Smoothing of, and parameter estimation from, noisy biophysical recordings.
Directory of Open Access Journals (Sweden)
Quentin J M Huys
2009-05-01
Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo ("particle filtering") methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximization. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing of noisy biophysical data and for inference of biophysical parameters in the face of recording noise.
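A minimal sketch of the sequential Monte Carlo idea, run on a linear-Gaussian toy model rather than a detailed biophysical cell model (the dynamics, noise levels and particle count here are illustrative assumptions): propagate particles through the dynamics, weight them by the observation likelihood, and resample.

```python
import math
import random

def particle_filter(obs, n_particles, a, q, r, rng):
    """Bootstrap particle filter for the toy state-space model
    x_t = a*x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns the filtered posterior mean of the state at each step."""
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # Propagate every particle through the dynamics.
        parts = [a * x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # Weight by the Gaussian observation likelihood.
        w = [math.exp(-(y - x) ** 2 / (2.0 * r)) for x in parts]
        tot = sum(w)
        if tot == 0.0:  # guard against weight underflow
            w, tot = [1.0] * n_particles, float(n_particles)
        w = [wi / tot for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # Multinomial resampling keeps the ensemble in likely regions.
        parts = rng.choices(parts, weights=w, k=n_particles)
    return means

rng = random.Random(0)
a, q, r = 0.95, 0.1, 0.5
# Simulate a latent trajectory and noisy observations of it.
x, xs, ys = 0.0, [], []
for _ in range(100):
    x = a * x + rng.gauss(0.0, math.sqrt(q))
    xs.append(x)
    ys.append(x + rng.gauss(0.0, math.sqrt(r)))
est = particle_filter(ys, 500, a, q, r, rng)
rmse_raw = (sum((y - t) ** 2 for y, t in zip(ys, xs)) / len(xs)) ** 0.5
rmse_pf = (sum((e - t) ** 2 for e, t in zip(est, xs)) / len(xs)) ** 0.5
print(rmse_pf < rmse_raw)  # filtering beats the raw noisy observations
```

In the paper's setting the propagation step would integrate the biophysical cell model, and an outer expectation-maximization loop would update its parameters.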
Project Parameter Estimation on the Basis of an Erp Database
Directory of Open Access Journals (Sweden)
Relich Marcin
2013-12-01
Nowadays, more and more enterprises are using Enterprise Resource Planning (ERP) systems that can also be used to plan and control the development of new products. In order to obtain a project schedule, certain parameters (e.g. duration) have to be specified in an ERP system. These parameters can be defined by the employees according to their knowledge, or can be estimated on the basis of data from previously completed projects. This paper investigates using an ERP database to identify those variables that have a significant influence on the duration of a project phase. In the paper, a model of knowledge discovery from an ERP database is proposed. The presented method contains the four stages of the knowledge discovery process: data selection, data transformation, data mining, and interpretation of patterns in the context of new product development. Among data mining techniques, a fuzzy neural system is chosen to seek relationships on the basis of data from completed projects stored in an ERP system.
Bayesian parameter estimation for stochastic models of biological cell migration
Dieterich, Peter; Preuss, Roland
2013-08-01
Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
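For the linear Gaussian case of Brownian motion with drift, the position-based estimation advocated above reduces to simple closed-form estimators. The sketch below is an illustrative reconstruction, not the authors' code: the drift is taken from the net displacement and the diffusion coefficient from the quadratic variation of the increments of a single simulated path; the true parameter values are made up for the demonstration.

```python
import math
import random

def estimate_drift_diffusion(path, dt):
    """Closed-form estimators for 1-D Brownian motion with drift,
    dx = v*dt + sqrt(2*D)*dW: drift from the net displacement,
    diffusion coefficient from the quadratic variation."""
    n = len(path) - 1
    v = (path[-1] - path[0]) / (n * dt)
    d = sum((path[i + 1] - path[i] - v * dt) ** 2 for i in range(n)) / (2.0 * n * dt)
    return v, d

rng = random.Random(42)
v_true, d_true, dt = 1.5, 0.8, 0.01
# Simulate a single cell path by Euler-Maruyama integration.
x, path = 0.0, [0.0]
for _ in range(20000):
    x += v_true * dt + math.sqrt(2.0 * d_true * dt) * rng.gauss(0.0, 1.0)
    path.append(x)
v_hat, d_hat = estimate_drift_diffusion(path, dt)
print(round(v_hat, 2), round(d_hat, 2))
```

Unlike a fit to the mean square displacement of a cell group, these estimators use every increment of one trajectory, which is the point the abstract makes about working directly with positions.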
Estimation of fracture parameters using elastic full-waveform inversion
Zhang, Zhendong
2017-08-17
Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution and suffer from uncertainties in the inverted parameters. Here, we propose to estimate the spatial distribution and physical properties of fractures using full-waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. A shape regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve a faster convergence. To better understand the inversion results, we analyze the radiation patterns induced by the perturbations in the fracture weaknesses and orientation. Due to the high-resolution potential of elastic FWI, the developed algorithm can recover the spatial fracture distribution and identify localized “sweet spots” of intense fracturing. However, the fracture azimuth can be resolved only using long-offset data.
Sanders, RH; Papantonopoulos, E
2005-01-01
I discuss the classical cosmological tests, i.e., angular size-redshift, flux-redshift, and galaxy number counts, in the light of the cosmology prescribed by the interpretation of the CMB anisotropies. The discussion is somewhat of a primer for physicists, with emphasis upon the possible systematic
Anisotropic cosmological constant and the CMB quadrupole anomaly
International Nuclear Information System (INIS)
Rodrigues, Davi C.
2008-01-01
There is evidence that the cosmic microwave background (CMB) large-angle anomalies imply a departure from statistical isotropy and hence from the standard cosmological model. We propose a ΛCDM model extension whose dark energy component preserves its nondynamical character but wields anisotropic vacuum pressure. Exact solutions for the cosmological scale factors are presented, upper bounds for the deformation parameter are evaluated, and its value is estimated considering the elliptical universe proposal to solve the quadrupole anomaly. This model can be constructed from a Bianchi I cosmology with a cosmological constant in two different ways: (i) a straightforward anisotropic modification of the vacuum pressure consistent with energy-momentum conservation; (ii) a Poisson structure deformation between canonical momenta such that the dynamics remain invariant under rescalings of the scale factors.
Estimation of genetic parameters for reproductive traits in alpacas.
Cruz, A; Cervantes, I; Burgos, A; Morante, R; Gutiérrez, J P
2015-12-01
One of the main deficiencies affecting animal breeding programs in Peruvian alpacas is the low reproductive performance, leading to a low number of animals available to select from and strongly decreasing the selection intensity. Some reproductive traits could be improved by artificial selection, but very little information about genetic parameters exists for these traits in this species. The aim of this study was to estimate genetic parameters for six reproductive traits in alpacas of both the Suri (SU) and Huacaya (HU) ecotypes, as well as their genetic relationship with fiber and morphological traits. A dataset from the Pacomarca experimental farm collected between 2000 and 2014 was used. The numbers of records for age at first service (AFS), age at first calving (AFC), copulation time (CT), pregnancy diagnosis (PD), gestation length (GL), and calving interval (CI) were, respectively, 1704, 854, 19,770, 5874, 4290 and 934. The pedigree consisted of 7742 animals. Regarding reproductive traits, the model of analysis included additive and residual random effects for all traits, and also a permanent environmental effect for the CT, PD, GL and CI traits, with color and year of recording as fixed effects for all the reproductive traits, and also age at mating and sex of calf for the GL trait. Estimated heritabilities, respectively for HU and SU, were 0.19 and 0.09 for AFS, 0.45 and 0.59 for AFC, 0.04 and 0.05 for CT, 0.07 and 0.05 for PD, 0.12 and 0.20 for GL, and 0.14 and 0.09 for CI. Genetic correlations between them ranged from -0.96 to 0.70. No important genetic correlations were found between reproductive traits and fiber or morphological traits in HU. However, some moderate favorable genetic correlations were found between reproductive and both fiber and morphological traits in SU. According to the estimated genetic correlations, some reproductive traits might be included as additional selection criteria in HU. Copyright © 2015 Elsevier B.V. All rights reserved.
Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich
2016-01-01
The CUSUM algorithm for detecting chain state switching in a Markov modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to the characteristics of the process. A procedure for estimating the process parameters was described.
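A minimal sketch of a one-sided CUSUM detector for a rate switch in a Poisson count stream (the rates, threshold and Poisson sampler here are illustrative assumptions, not parameters studied in the paper): accumulate the clipped log-likelihood-ratio and flag the first time it crosses a threshold.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cusum_detect(counts, mu0, mu1, threshold):
    """One-sided CUSUM for a rate shift mu0 -> mu1 (mu1 > mu0) in a
    stream of Poisson counts: accumulate the log-likelihood-ratio
    increment, clipped at zero, and flag the first crossing."""
    s = 0.0
    for i, x in enumerate(counts):
        s = max(0.0, s + x * math.log(mu1 / mu0) - (mu1 - mu0))
        if s > threshold:
            return i
    return None

rng = random.Random(7)
# 200 counts at rate 2, then the chain switches the rate to 5.
counts = [poisson_sample(2.0, rng) for _ in range(200)]
counts += [poisson_sample(5.0, rng) for _ in range(200)]
alarm = cusum_detect(counts, mu0=2.0, mu1=5.0, threshold=10.0)
print(alarm)  # an index shortly after the change point at sample 200
```

The trade-off the paper's recommendations address shows up directly in `threshold`: raising it lengthens the detection delay but suppresses false alarms while the chain stays in the low-rate state.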
Perturbations in loop quantum cosmology
International Nuclear Information System (INIS)
Nelson, W; Agullo, I; Ashtekar, A
2014-01-01
The era of precision cosmology has allowed us to accurately determine many important cosmological parameters, in particular via the CMB. Confronting Loop Quantum Cosmology with these observations provides us with a powerful test of the theory. For this to be possible, we need a detailed understanding of the generation and evolution of inhomogeneous perturbations during the early, quantum gravity phase of the universe. Here, we describe how Loop Quantum Cosmology provides a completion of the inflationary paradigm that is consistent with the observed power spectra of the CMB.
Genetic parameter estimation of reproductive traits of Litopenaeus vannamei
Tan, Jian; Kong, Jie; Cao, Baoxiang; Luo, Kun; Liu, Ning; Meng, Xianhong; Xu, Shengyu; Guo, Zhaojia; Chen, Guoliang; Luan, Sheng
2017-02-01
In this study, the heritability, repeatability, phenotypic correlation, and genetic correlation of the reproductive and growth traits of L. vannamei were investigated and estimated. Eight traits of 385 shrimps from forty-two families, including the number of eggs (EN), number of nauplii (NN), egg diameter (ED), spawning frequency (SF), spawning success (SS), female body weight (BW) and body length (BL) at insemination, and condition factor (K), were measured. A total of 519 spawning records, including multiple spawnings, and 91 non-spawning records were collected. The genetic parameters were estimated using an animal model, a multinomial logit model (for SF), and a sire-dam and probit model (for SS). Because there were repeated records, permanent environmental effects were included in the models. The heritability estimates for BW, BL, EN, NN, ED, SF, SS, and K were 0.49 ± 0.14, 0.51 ± 0.14, 0.12 ± 0.08, 0, 0.01 ± 0.04, 0.06 ± 0.06, 0.18 ± 0.07, and 0.10 ± 0.06, respectively. The genetic correlation was 0.99 ± 0.01 between BW and BL, 0.90 ± 0.19 between BW and EN, 0.22 ± 0.97 between BW and ED, -0.77 ± 1.14 between EN and ED, and -0.27 ± 0.36 between BW and K. The heritability of EN estimated without a covariate was 0.12 ± 0.08, and the genetic correlation was 0.90 ± 0.19 between BW and EN, indicating that improving BW may be used in selection programs to genetically improve the reproductive output of L. vannamei during breeding. For EN, the data were also analyzed using body weight as a covariate (EN-2). The heritability of EN-2 was 0.03 ± 0.05, indicating that it is difficult to improve the reproductive output by genetic improvement. Furthermore, excessive pursuit of this selection is often at the expense of growth speed. Therefore, the selection of high-performance spawners using BW and SS may be an important strategy to improve nauplii production.
Modified General Relativity and Cosmology
Abdel-Rahman, A.-M. M.
1997-10-01
Aspects of the modified general relativity theory of Rastall, Al-Rawaf and Taha are discussed in both the radiation- and matter-dominated flat cosmological models. A nucleosynthesis constraint on the theory's free parameter is obtained and the implication for the age of the Universe is discussed. The consistency of the modified matter-dominated model with the neoclassical cosmological tests is demonstrated.
Szalay, Alexander S.; Matsubara, Takahiko; Scranton, Ryan; Vogeley, Michael S.; Connolly, Andrew; Dodelson, Scott; Eisenstein, Daniel; Frieman, Joshua A.; Gunn, James E.; Hui, Lam; Johnston, David; Kent, Stephen M.; Kerscher, Martin; Loveday, Jon; Meiksin, Avery; Narayanan, Vijay; Nichol, Robert C.; O'Connell, Liam; Pope, Adrian; Scoccimarro, Roman; Sheth, Ravi K.; Stebbins, Albert; Strauss, Michael A.; Szapudi, Istvan; Tegmark, Max; Zehavi, Idit; Annis, James; Bahcall, Neta A.; Brinkmann, Jon; Csabai, Istvan; Fukugita, Masataka; Hennessy, Greg; Hogg, David W.; Ivezic, Zeljko; Knapp, Gillian R.; Kunszt, Peter Z.; Lamb, Don Q.; Lee, Brian C.; Lupton, Robert H.; Munn, Jeffrey R.; Peoples, John; Pier, Jeffrey R.; Rockosi, Constance; Schlegel, David; Stoughton, Christopher; Tucker, Douglas L.; Yanny, Brian; York, Donald G.
2002-01-01
We present measurements of parameters of the 3-dimensional power spectrum of galaxy clustering from 222 square degrees of early imaging data in the Sloan Digital Sky Survey. The projected galaxy distribution on the sky is expanded over a set of Karhunen-Loeve eigenfunctions, which optimize the signal-to-noise ratio in our analysis. A maximum likelihood analysis is used to estimate parameters that set the shape and amplitude of the 3-dimensional power spectrum. Our best estimates are Gamma=0.188 +/- 0.04 and sigma_8L = 0.915 +/- 0.06 (statistical errors only), for a flat Universe with a cosmological constant. We demonstrate that our measurements contain signal from scales at or beyond the peak of the 3D power spectrum. We discuss how the results scale with systematic uncertainties, like the radial selection function. We find that the central values satisfy the analytically estimated scaling relation. We have also explored the effects of evolutionary corrections, various truncations of the KL basis, seeing, sam...
AUTOMATIC ESTIMATION OF SIZE PARAMETERS USING VERIFIED COMPUTERIZED STEREOANALYSIS
Directory of Open Access Journals (Sweden)
Peter R Mouton
2011-05-01
State-of-the-art computerized stereology systems combine high-resolution video microscopy and hardware-software integration with stereological methods to assist users in quantifying multidimensional parameters of importance to biomedical research, including volume, surface area, length, number, and their variation and spatial distribution. The requirement for constant interaction between a trained, non-expert user and the targeted features of interest currently limits the throughput efficiency of these systems. To address this issue we developed a novel approach for automatic stereological analysis of 2-D images, Verified Computerized Stereoanalysis (VCS). The VCS approach minimizes the need for user interactions with high-contrast [high signal-to-noise ratio (S:N)] biological objects of interest. Performance testing of the VCS approach confirmed dramatic increases in the efficiency of total object volume (size) estimation, without a loss of accuracy or precision compared to conventional computerized stereology. The broad application of high-efficiency VCS to high-contrast biological objects on tissue sections could reduce labor costs, enhance hypothesis testing, and accelerate the progress of biomedical research focused on improvements in health and the management of disease.
International Nuclear Information System (INIS)
Leibundgut, B.
2005-01-01
Supernovae have developed into a versatile tool for cosmology. Their impact on the cosmological model has been profound and led to the discovery of the accelerated expansion. The current status of the cosmological model as perceived through supernova observations will be presented. Supernovae are currently the only astrophysical objects that can measure the dynamics of the cosmic expansion during the past eight billion years. Ongoing experiments are trying to determine the characteristics of the accelerated expansion and give insight into what might be the physical explanation for the acceleration. (author)
International Nuclear Information System (INIS)
Berstein, J.
1984-01-01
These lectures offer a self-contained review of the role of neutrinos in cosmology. The first part deals with the question 'What is a neutrino?' and describes in a historical context the theoretical ideas and experimental discoveries related to the different types of neutrinos and their properties. The basic differences between the Dirac neutrino and the Majorana neutrino are pointed out and the evidence for different neutrino 'flavours', neutrino mass, and neutrino oscillations is discussed. The second part summarizes current views on cosmology, particularly as they are affected by recent theoretical and experimental advances in high-energy particle physics. Finally, the close relationship between neutrino physics and cosmology is brought out in more detail, to show how cosmological constraints can limit the various theoretical possibilities for neutrinos and, more particularly, how increasing knowledge of neutrino properties can contribute to our understanding of the origin, history, and future of the Universe. The level is that of the beginning graduate student. (orig.)
International Nuclear Information System (INIS)
Khalatnikov, I.M.; Belinskij, V.A.
1984-01-01
The application of the qualitative theory of dynamical systems to the analysis of homogeneous cosmological models is described. Together with the well-known cases involving an ideal fluid, the properties of the cosmological evolution of matter with dissipative processes due to viscosity are considered. New cosmological effects occur when the viscosity terms are of the same order as the remaining terms in the gravitational equations, or even exceed them. In these cases the description of the dissipative process by means of only two viscosity coefficients (bulk and shear) may become inapplicable, because all the remaining terms in the expansion of the dissipative contribution to the energy-momentum tensor in velocity gradients can be large; the application of equations with hydrodynamic viscosity should then be considered as a model of dissipative effects in cosmology.
Lesgourgues, Julien; Miele, Gennaro; Pastor, Sergio
2013-01-01
The role that neutrinos have played in the evolution of the Universe is the focus of one of the most fascinating research areas that has stemmed from the interplay between cosmology, astrophysics and particle physics. In this self-contained book, the authors bring together all aspects of the role of neutrinos in cosmology, spanning from leptogenesis to primordial nucleosynthesis, their role in CMB and structure formation, to the problem of their direct detection. The book starts by guiding the reader through aspects of fundamental neutrino physics, such as the standard cosmological model and the statistical mechanics in the expanding Universe, before discussing the history of neutrinos in chronological order from the very early stages until today. This timely book will interest graduate students and researchers in astrophysics, cosmology and particle physics, who work with either a theoretical or experimental focus.
International Nuclear Information System (INIS)
Zeldovich, Y.B.
1983-01-01
This paper gives a general review of modern cosmology. The following subjects are discussed: the hot big bang and periodization of the evolution; Hubble expansion; the structure of the universe (pancake theory); baryon asymmetry; the inflationary universe. (Auth.)
Automated Modal Parameter Estimation of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, Palle; Brincker, Rune; Goursat, Maurice
In this paper the problem of automatic modal parameter extraction for ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation...
Estimation of uranium migration parameters in sandstone aquifers.
Malov, A I
2016-03-01
The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L(-1), and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The (14)C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the (234)U:(238)U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and (234)U:(238)U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a(-1)) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The (14)C age of the water was estimated to be between modern and >35,000 years. The (234)U-(238)U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10(-5) to n × 10(-7) a(-1). This finding is observed because the TDS increases in that direction from 0.2 to 9 g L(-1), and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼ 24, i.e., sorption processes are
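The conventional radiocarbon ages quoted above follow from the measured activity in percent modern carbon via the Libby mean life of 8033 a. The helper below is a hedged illustration of that standard conversion; the 1.3 pmC input is a made-up value chosen to sit near the quoted >35,000-year limit, not a sample from the study.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional 14C ages use the Libby half-life

def c14_age(pmc):
    """Conventional radiocarbon age (years) from the measured activity
    in percent modern carbon (pmC)."""
    if pmc <= 0.0:
        return float("inf")  # below detection: only an 'older than' limit
    return LIBBY_MEAN_LIFE * math.log(100.0 / pmc)

# An activity of ~1.3 pmC corresponds to roughly the 35,000-year
# practical dating limit mentioned in the abstract.
print(round(c14_age(1.3)))  # ~35,000 years
```

A modern sample (100 pmC) returns an age of zero, and ages diverge as the activity approaches the detection limit, which is why the oldest waters above are reported only as >35,000 years.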
Energy Technology Data Exchange (ETDEWEB)
Zhang Yuanzhong
2002-06-21
This book is one of a series in the areas of high-energy physics, cosmology and gravitation published by the Institute of Physics. It includes courses given at a doctoral school on 'Relativistic Cosmology: Theory and Observation' held in Spring 2000 at the Centre for Scientific Culture 'Alessandro Volta', Italy, sponsored by SIGRAV-Societa Italiana di Relativita e Gravitazione (Italian Society of Relativity and Gravitation) and the University of Insubria. This book collects 15 review reports given by a number of outstanding scientists. They touch upon the main aspects of modern cosmology from observational matters to theoretical models, such as cosmological models, the early universe, dark matter and dark energy, modern observational cosmology, cosmic microwave background, gravitational lensing, and numerical simulations in cosmology. In particular, the introduction to the basics of cosmology includes the basic equations, covariant and tetrad descriptions, Friedmann models, observation and horizons, etc. The chapters on the early universe involve inflationary theories, particle physics in the early universe, and the creation of matter in the universe. The chapters on dark matter (DM) deal with experimental evidence of DM, neutrino oscillations, DM candidates in supersymmetry models and supergravity, structure formation in the universe, dark-matter search with innovative techniques, and dark energy (cosmological constant), etc. The chapters about structure in the universe consist of the basis for structure formation, quantifying large-scale structure, cosmic background fluctuation, galaxy space distribution, and the clustering of galaxies. In the field of modern observational cosmology, galaxy surveys and cluster surveys are given. The chapter on gravitational lensing describes the lens basics and models, galactic microlensing and galaxy clusters as lenses. The last chapter, 'Numerical simulations in cosmology', deals with spatial and
International Nuclear Information System (INIS)
Zeldovich, Ya.
1984-01-01
The knowledge is summed up of contemporary cosmology on the universe and its development resulting from a great number of highly sensitive observations and the application of contemporary physical theories to the entire universe. The questions are assessed of mass density in the universe, the structure and origin of the universe, its baryon asymmetry and the quantum explanation of the origin of the universe. Physical problems are presented which should be resolved for the future development of cosmology. (Ha)
CERN. Geneva
2007-01-01
The understanding of the Universe at the largest and smallest scales traditionally has been the subject of cosmology and particle physics, respectively. Studying the evolution of the Universe connects today's large scales with the tiny scales in the very early Universe and provides the link between the physics of particles and of the cosmos. This series of five lectures aims at a modern and critical presentation of the basic ideas, methods, models and observations in today's particle cosmology.
Energy Technology Data Exchange (ETDEWEB)
Sefusatti, Emiliano; /Fermilab /CCPP, New York; Crocce, Martin; Pueblas, Sebastian; Scoccimarro, Roman; /CCPP, New York
2006-04-01
The present spatial distribution of galaxies in the Universe is non-Gaussian, with 40% skewness in 50 h⁻¹ Mpc spheres, and remarkably little is known about the information encoded in it about cosmological parameters beyond the power spectrum. In this work they present an attempt to bridge this gap by studying the bispectrum, paying particular attention to a joint analysis with the power spectrum and their combination with CMB data. They address the covariance properties of the power spectrum and bispectrum including the effects of beat coupling that lead to interesting cross-correlations, and discuss how baryon acoustic oscillations break degeneracies. They show that the bispectrum has significant information on cosmological parameters well beyond its power in constraining galaxy bias, and when combined with the power spectrum is more complementary than combining power spectra of different samples of galaxies, since non-Gaussianity provides a somewhat different direction in parameter space. In the framework of flat cosmological models they show that most of the improvement of adding bispectrum information corresponds to parameters related to the amplitude and effective spectral index of perturbations, which can be improved by almost a factor of two. Moreover, they demonstrate that the expected statistical uncertainties in σ_8 of a few percent are robust to relaxing the dark energy beyond a cosmological constant.
Joudaki, Shahab; Blake, Chris; Johnson, Andrew; Amon, Alexandra; Asgari, Marika; Choi, Ami; Erben, Thomas; Glazebrook, Karl; Harnois-Déraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Mead, Alexander; Miller, Lance; Parkinson, David; Poole, Gregory B.; Schneider, Peter; Viola, Massimo; Wolf, Christian
2018-03-01
We perform a combined analysis of cosmic shear tomography, galaxy-galaxy lensing tomography, and redshift-space multipole power spectra (monopole and quadrupole) using 450 deg² of imaging data by the Kilo Degree Survey (KiDS-450) overlapping with two spectroscopic surveys: the 2-degree Field Lensing Survey (2dFLenS) and the Baryon Oscillation Spectroscopic Survey (BOSS). We restrict the galaxy-galaxy lensing and multipole power spectrum measurements to the overlapping regions with KiDS, and self-consistently compute the full covariance between the different observables using a large suite of N-body simulations. We methodically analyse different combinations of the observables, finding that the galaxy-galaxy lensing measurements are particularly useful in improving the constraint on the intrinsic alignment amplitude, while the multipole power spectra are useful in tightening the constraints along the lensing degeneracy direction. The fully combined constraint on S_8 ≡ σ_8√(Ω_m/0.3) = 0.742 ± 0.035, which is an improvement by 20 per cent compared to KiDS alone, corresponds to a 2.6σ discordance with Planck, and is not significantly affected by fitting to a more conservative set of scales. Given the tightening of the parameter space, we are unable to resolve the discordance with an extended cosmology that is simultaneously favoured in a model selection sense, including the sum of neutrino masses, curvature, evolving dark energy and modified gravity. The complementarity of our observables allows for constraints on modified gravity degrees of freedom that are not simultaneously bounded with either probe alone, and up to a factor of three improvement in the S_8 constraint in the extended cosmology compared to KiDS alone.
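The combined constraint above is quoted in terms of S_8 ≡ σ_8√(Ω_m/0.3). A minimal sketch of that definition (the input values below are illustrative, not the KiDS-450 posterior means):

```python
import math

def s8(sigma8: float, omega_m: float) -> float:
    """Lensing amplitude parameter: S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# With Omega_m exactly 0.3, S_8 reduces to sigma_8 itself:
value = s8(sigma8=0.745, omega_m=0.30)
```

The √(Ω_m/0.3) scaling is what makes S_8 the best-constrained direction of the lensing degeneracy between σ_8 and Ω_m.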
Exploring Cosmology with Supernovae
DEFF Research Database (Denmark)
Li, Xue
distribution of strong gravitational lensing is developed. For Type Ia supernovae (SNe Ia), the rate is lower than that of core-collapse supernovae (CC SNe). The rate of SNe Ia declines beyond z ≈ 1.5. For these reasons, we investigate a potential candidate to measure cosmological distance: GRB-SNe. They are a subclass of CC SNe. Light curves of GRB-SNe are obtained and their properties are studied. We ascertain that the properties of GRB-SNe make them another candidate for standardizable candles in measuring the cosmic distance. Cosmological parameters Ω_M and Ω_Λ are constrained with the help of GRB-SNe. The first...
Recursive Parameter Identification for Estimating and Displaying Maneuvering Vessel Path
National Research Council Canada - National Science Library
Pullard, Stephen
2003-01-01
...). The extended least squares (ELS) parameter identification approach allows the system to be installed on most platforms without prior knowledge of system dynamics provided vessel states are available...
Multiple-hit parameter estimation in monolithic detectors.
Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S
2013-02-01
We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.
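The MAP positioning idea can be illustrated with a toy one-dimensional detector: grid-search the log-posterior (a Gaussian sensor-noise likelihood plus a broad position prior) for the interaction position. The sensor layout, light-response model and noise level below are hypothetical stand-ins for the SCOUT simulation, not the paper's detector model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D toy: an event at true position x0 produces noisy sensor
# signals; we grid-search the maximum-a-posteriori position estimate.
sensors = np.linspace(-25.0, 25.0, 8)            # sensor positions (mm)

def mean_response(x):                            # light seen by each sensor
    return np.exp(-0.5 * ((sensors - x) / 15.0) ** 2)

x_true = 4.0
signal = mean_response(x_true) + rng.normal(0.0, 0.01, sensors.size)

grid = np.linspace(-25.0, 25.0, 501)
log_like = np.array([-0.5 * np.sum((signal - mean_response(x)) ** 2) / 0.01**2
                     for x in grid])
log_prior = -0.5 * (grid / 25.0) ** 2            # broad Gaussian prior on position
x_map = grid[np.argmax(log_like + log_prior)]    # MAP position estimate
```

A multiple-hit estimator generalizes this by searching over two (or more) positions plus an energy split, at the cost of a larger search space.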
Modified geodetic brane cosmology
International Nuclear Information System (INIS)
Cordero, Rubén; Cruz, Miguel; Molgado, Alberto; Rojas, Efraín
2012-01-01
We explore the cosmological implications provided by the geodetic brane gravity action corrected by an extrinsic curvature brane term, describing a codimension-1 brane embedded in a 5D fixed Minkowski spacetime. In the geodetic brane gravity action, we accommodate the correction term through a linear term in the extrinsic curvature swept out by the brane. We study the resulting geodetic-type equation of motion. Within a Friedmann–Robertson–Walker metric, we obtain a generalized Friedmann equation describing the associated cosmological evolution. We observe that, when the radiation-like energy contribution from the extra dimension is vanishing, this effective model leads to a self-(non-self)-accelerated expansion of the brane-like universe in dependence on the nature of the concomitant parameter β associated with the correction, which resembles an analogous behaviour in the DGP brane cosmology. Several possibilities in the description for the cosmic evolution of this model are embodied and characterized by the involved density parameters related in turn to the cosmological constant, the geometry characterizing the model, the introduced β parameter as well as the dark-like energy and the matter content on the brane. (paper)
Single-Channel Blind Estimation of Reverberation Parameters
DEFF Research Database (Denmark)
Doire, C.S.J.; Brookes, M. D.; Naylor, P. A.
2015-01-01
The reverberation of an acoustic channel can be characterised by two frequency-dependent parameters: the reverberation time and the direct-to-reverberant energy ratio. This paper presents an algorithm for blindly determining these parameters from a single-channel speech signal. The algorithm uses...
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamic excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...
Estimation of source parameters of Chamoli Earthquake, India
Indian Academy of Sciences (India)
R. Narasimhan, Krishtel eMaging Solutions
meter studies, in different parts of the world. Singh et al (1979) and Sharma and Wason (1994, 1995) have calculated source parameters for Himalayan and nearby regions. To the best of the authors' knowledge, source parameter studies using strong motion data have not been carried out in India so far, though similar ...
Estimation of the petrophysical parameters of sediments from Chad ...
African Journals Online (AJOL)
Porosity was estimated from three methods, and polynomial trends having fits ranging between 0.0604 and 0.478 describe depth - porosity variations. Interpretation of the trends revealed a lithology trend that agrees with the trends of shaliness. Estimates of average effective porosities of formations compared favorably with ...
Hill, Bryon K.; Walker, Bruce K.
1991-01-01
When using parameter estimation methods based on extended Kalman filter (EKF) theory, it is common practice to assume that the unknown parameter values behave like a random process, such as a random walk, in order to guarantee their identifiability by the filter. The present work is the result of an ongoing effort to quantitatively describe the effect that the assumption of a fictitious noise (called pseudonoise) driving the unknown parameter values has on the parameter estimate convergence rate in filter-based parameter estimators. The initial approach is to examine a first-order system described by one state variable with one parameter to be estimated. The intent is to derive analytical results for this simple system that might offer insight into the effect of the pseudonoise assumption for more complex systems. Such results would make it possible to predict the estimator error convergence behavior as a function of the assumed pseudonoise intensity, and this leads to the natural application of the results to the design of filter-based parameter estimators. The results obtained show that the analytical description of the convergence behavior is very difficult.
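The pseudonoise assumption discussed above can be sketched for the scalar case: the unknown parameter a of a first-order system x[k+1] = a·x[k] + w[k] is appended to the state as a random walk driven by an assumed pseudonoise intensity q_a, and a standard EKF estimates both. All numerical values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

a_true, q_w, r = 0.8, 0.05, 0.01   # true parameter, process and measurement noise vars
q_a = 1e-4                         # assumed pseudonoise intensity (a design choice)

# simulate the true system and noisy measurements
n = 2000
x = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k-1] + rng.normal(0.0, np.sqrt(q_w))
y = x + rng.normal(0.0, np.sqrt(r), n)

# EKF on the augmented state z = [x, a], with a modeled as a random walk
z = np.array([0.0, 0.5])           # initial state and parameter guesses
P = np.diag([1.0, 1.0])
Q = np.diag([q_w, q_a])
H = np.array([[1.0, 0.0]])
for k in range(1, n):
    F = np.array([[z[1], z[0]],    # Jacobian of f(x, a) = [a*x, a]
                  [0.0,  1.0]])
    z = np.array([z[1] * z[0], z[1]])          # predict
    P = F @ P @ F.T + Q
    S = (H @ P @ H.T)[0, 0] + r                # innovation variance
    K = (P @ H.T).ravel() / S                  # Kalman gain
    z = z + K * (y[k] - z[0])                  # update
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
a_hat = z[1]
```

Raising q_a speeds initial convergence of a_hat but inflates its steady-state variance; that trade-off is exactly the effect the abstract sets out to quantify analytically.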
Rajantie, Arttu
2018-03-06
The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available.This article is part of the Theo Murphy meeting issue 'Higgs cosmology'. © 2018 The Author(s).
Rajantie, Arttu
2018-01-01
The discovery of the Higgs boson in 2012 and other results from the Large Hadron Collider have confirmed the standard model of particle physics as the correct theory of elementary particles and their interactions up to energies of several TeV. Remarkably, the theory may even remain valid all the way to the Planck scale of quantum gravity, and therefore it provides a solid theoretical basis for describing the early Universe. Furthermore, the Higgs field itself has unique properties that may have allowed it to play a central role in the evolution of the Universe, from inflation to cosmological phase transitions and the origin of both baryonic and dark matter, and possibly to determine its ultimate fate through the electroweak vacuum instability. These connections between particle physics and cosmology have given rise to a new and growing field of Higgs cosmology, which promises to shed new light on some of the most puzzling questions about the Universe as new data from particle physics experiments and cosmological observations become available. This article is part of the Theo Murphy meeting issue `Higgs cosmology'.
ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION
International Nuclear Information System (INIS)
Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki
2017-01-01
The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.
ASTROPHYSICAL PRIOR INFORMATION AND GRAVITATIONAL-WAVE PARAMETER ESTIMATION
Energy Technology Data Exchange (ETDEWEB)
Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki [Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States)
2017-01-10
The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.
Hierarchical parameter estimation of DFIG and drive train system in a wind turbine generator
Institute of Scientific and Technical Information of China (English)
Xueping PAN; Ping JU; Feng WU; Yuqing JIN
2017-01-01
A new hierarchical parameter estimation method for doubly fed induction generator (DFIG) and drive train system in a wind turbine generator (WTG) is proposed in this paper. Firstly, the parameters of the DFIG and the drive train are estimated locally under different types of disturbances. Secondly, a coordination estimation method is further applied to identify the parameters of the DFIG and the drive train simultaneously with the purpose of attaining the global optimal estimation results. The main benefit of the proposed scheme is the improved estimation accuracy. Estimation results confirm the applicability of the proposed estimation technique.
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders
1990-01-01
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as system identification method. It is concluded that both the sampling interval and number of sampled points may play a significant role with respect to the statistical errors. Furthermore......, it is shown that the model errors may also contribute significantly to the uncertainty....
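Once an ARMA model is fitted, eigenfrequencies and damping ratios follow from its discrete-time poles via s = ln(λ)/Δt. A sketch of that pole-to-modal conversion, round-tripping a known mode (the sampling interval and modal values are arbitrary):

```python
import numpy as np

# A discrete-time pole lambda at sampling interval dt maps to a continuous
# pole s = ln(lambda)/dt = -zeta*omega_n + i*omega_n*sqrt(1 - zeta^2).
def modal_from_pole(lam: complex, dt: float):
    s = np.log(lam) / dt
    omega_n = abs(s)                  # |s| equals the natural frequency
    zeta = -s.real / omega_n          # damping ratio from the real part
    return omega_n / (2 * np.pi), zeta  # eigenfrequency (Hz), damping ratio

# Round-trip check with a known mode: f = 2 Hz, zeta = 1%
f0, z0, dt = 2.0, 0.01, 0.05
wn = 2 * np.pi * f0
s = complex(-z0 * wn, wn * np.sqrt(1 - z0**2))
f_hat, z_hat = modal_from_pole(np.exp(s * dt), dt)
```

The statistical uncertainty the abstract studies enters through the fitted ARMA coefficients, which perturb the pole locations and hence these two quantities.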
DEFF Research Database (Denmark)
Aghanim, N.; Akrami, Y.; Ashdown, M.
2017-01-01
The six parameters of the standard ΛCDM model have best-fit values derived from the Planck temperature power spectrum that are shifted somewhat from the best-fit values derived from WMAP data. These shifts are driven by features in the Planck temperature power spectrum at angular scales that had ...
Energy Technology Data Exchange (ETDEWEB)
Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)
2013-12-31
This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real time dynamic model of a wide area system with update rates better than 60 times per second is achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity generating unit parameter estimation and (b) energy-function-based transient stability monitoring of a wide area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Also, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable and that better results were obtained with the removal of these parameters from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing itself to be an important procedure to take. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-01-01
parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while
Estimation of parameter sensitivities for stochastic reaction networks
Gupta, Ankit
2016-01-01
Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a
A novel parameter estimation method for metal oxide surge arrester ...
Indian Academy of Sciences (India)
the program, which is based on the MAPSO algorithm and can determine the fitness and parameters .... to solve many optimization problems (Kennedy & Eberhart 1995; Eberhart & Shi 2001; Gaing 2003) ... describe the content of this concept.
BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM
Institute of Scientific and Technical Information of China (English)
Xu Benlian; Wang Zhiquan
2007-01-01
According to the biased angles provided by the bistatic sensors, the necessary condition of observability and the Cramér-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of biased angles on the state variable estimation. Finally, Monte-Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and that the dual Kalman filter method can estimate the state variable and biased angles simultaneously. Furthermore, the estimated results achieve their Cramér-Rao lower bounds.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
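The chain above, damage accumulation via Miner's rule followed by a Weibull life model, can be sketched numerically. The load blocks and shape parameter below are invented for illustration, and the Rainflow counting step is assumed to have already produced the cycle counts:

```python
import math

# Miner's rule: failure when accumulated damage D = sum(n_i / N_i) reaches 1.
# Hypothetical load blocks: (cycles applied per year n_i, cycles-to-failure N_i).
blocks = [(2.0e5, 1.0e7), (5.0e4, 1.0e6), (1.0e3, 5.0e4)]
damage_per_year = sum(n / N for n, N in blocks)   # D accumulated each year
char_life_years = 1.0 / damage_per_year           # deterministic life at D = 1

# Treating that deterministic life as the Weibull characteristic life eta,
# the failure probability at time t is F(t) = 1 - exp(-(t/eta)**beta).
beta = 2.5                                        # assumed shape parameter
F_at_char_life = 1.0 - math.exp(-1.0)             # ~0.632 at t = eta, for any beta
```

This mirrors the abstract's observation: the deterministic damage-at-unity life plays the role of the Weibull characteristic (63.2nd percentile) fatigue life.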
Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.
Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S
2018-02-05
To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
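The linear-algebra transformation described, converting a set of linearly interdependent parameters into a reduced independent set, can be sketched as a null-space reparameterization. The constraint matrix below is a hypothetical example, not one of the paper's ion-channel models:

```python
import numpy as np

# Enforce linear constraints C @ theta = d by reparameterizing
# theta = theta_p + N @ u, where N spans the null space of C and u is free.
# Hypothetical example: 4 rate parameters with two constraints
#   theta0 - theta1 = 0        (two rates forced equal)
#   theta2 + theta3 = 1.0      (a fixed sum)
C = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 1.0]])
d = np.array([0.0, 1.0])

theta_p = np.linalg.lstsq(C, d, rcond=None)[0]  # a particular solution
_, s, Vt = np.linalg.svd(C)
N = Vt[len(s):].T                               # null-space basis, shape (4, 2)

# Any free vector u yields a theta satisfying the constraints exactly:
u = np.array([0.3, -1.2])
theta = theta_p + N @ u
residual = np.linalg.norm(C @ theta - d)
```

An optimizer then searches over the two entries of u instead of the four entries of theta, and every candidate it visits respects the constraints by construction.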
Stellar atmospheric parameter estimation using Gaussian process regression
Bu, Yude; Pan, Jingchang
2015-02-01
As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
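The core of GPR is the posterior mean k_*ᵀ(K + σ²I)⁻¹y. A minimal numpy sketch on synthetic 1-D data — not the SDSS/MILES pipeline, and without the PCA preprocessing step the abstract recommends:

```python
import numpy as np

# Minimal GP regression with an RBF kernel: predict a target quantity
# (standing in for a stellar parameter) from a 1-D synthetic feature.
def rbf(a, b, length=1.0, var=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(2)
x_train = np.linspace(0.0, 5.0, 30)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, x_train.size)

noise = 0.05 ** 2
K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
alpha = np.linalg.solve(K, y_train)     # (K + sigma^2 I)^-1 y

x_test = np.array([2.0])
k_star = rbf(x_test, x_train)
y_pred = (k_star @ alpha)[0]            # GP posterior mean at x_test
```

The point the abstract makes is that the few kernel hyperparameters (length scale, signal and noise variances) are easier to tune than an ANN architecture, and can themselves be optimized by maximizing the marginal likelihood.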
Retrospective forecast of ETAS model with daily parameters estimate
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic-Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real time and the revised catalogs. The main cause of the failure was the underestimation of the number of forecast events, due to model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.
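For reference, the conditional intensity such fits optimize has the standard ETAS form λ(t) = μ + Σ_{t_i<t} K·e^{α(m_i−m₀)}·(t−t_i+c)^{−p}. A sketch with illustrative parameter values, not the fitted L'Aquila or Emilia values:

```python
import math

# ETAS conditional intensity (temporal form); parameters are illustrative.
def etas_intensity(t, events, mu=0.2, K=0.05, alpha=1.2, c=0.01, p=1.1, m0=3.0):
    rate = mu                                   # background rate
    for t_i, m_i in events:
        if t_i < t:                             # only past events trigger
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

events = [(0.0, 6.3), (0.5, 4.1)]               # (time in days, magnitude)
lam = etas_intensity(1.0, events)
```

Daily updating, as in the abstract, means re-estimating (μ, K, α, c, p) each day from the catalog available so far before issuing the next 1-day forecast.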
Sanders, Robert H
2016-01-01
The advent of sensitive high-resolution observations of the cosmic microwave background radiation and their successful interpretation in terms of the standard cosmological model has led to great confidence in this model's reality. The prevailing attitude is that we now understand the Universe and need only work out the details. In this book, Sanders traces the development and successes of Lambda-CDM, and argues that this triumphalism may be premature. The model's two major components, dark energy and dark matter, have the character of the pre-twentieth-century luminiferous aether. While there is astronomical evidence for these hypothetical fluids, their enigmatic properties call into question our assumptions of the universality of locally determined physical law. Sanders explains how modified Newtonian dynamics (MOND) is a significant challenge for cold dark matter. Overall, the message is hopeful: the field of cosmology has not become frozen, and there is much fundamental work ahead for tomorrow's cosmologists.
Low Complexity Parameter Estimation For Off-the-Grid Targets
Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim
2015-01-01
In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms
Development of simple kinetic models and parameter estimation for ...
African Journals Online (AJOL)
PANCHIGA
2016-09-28
Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ... SDS-PAGE and showed the same molecular size as.
The effect of selection on genetic parameter estimates
African Journals Online (AJOL)
Unknown
The South African Journal of Animal Science is available online at ... A simulation study was carried out to investigate the effect of selection on the estimation of genetic ... The model contained a fixed effect, random genetic and random.
(Co) variance Components and Genetic Parameter Estimates for Re
African Journals Online (AJOL)
Mapula
The magnitude of heritability estimates obtained in the current study ... traits were recently introduced to supplement progeny testing programmes or for usage as sole source of ... VCE-5 User's Guide and Reference Manual Version 5.1.
International Nuclear Information System (INIS)
Dickau, Jonathan J.
2009-01-01
The use of fractals and fractal-like forms to describe or model the universe has had a long and varied history, which begins long before the word fractal was actually coined. Since the introduction of mathematical rigor to the subject of fractals, by Mandelbrot and others, there have been numerous cosmological theories and analyses of astronomical observations which suggest that the universe exhibits fractality or is by nature fractal. In recent years, the term fractal cosmology has come into usage, as a description for those theories and methods of analysis whereby a fractal nature of the cosmos is shown.
Cosmological constraints with clustering-based redshifts
Kovetz, Ely D.; Raccanelli, Alvise; Rahman, Mubdi
2017-07-01
We demonstrate that observations lacking reliable redshift information, such as photometric and radio continuum surveys, can produce robust measurements of cosmological parameters when empowered by clustering-based redshift estimation. This method infers the redshift distribution based on the spatial clustering of sources, using cross-correlation with a reference data set with known redshifts. Applying this method to the existing Sloan Digital Sky Survey (SDSS) photometric galaxies, and projecting to future radio continuum surveys, we show that sources can be efficiently divided into several redshift bins, increasing their ability to constrain cosmological parameters. We forecast constraints on the dark-energy equation of state and on local non-Gaussianity parameters. We explore several pertinent issues, including the trade-off between including more sources and minimizing the overlap between bins, the shot-noise limitations on binning and the predicted performance of the method at high redshifts, and most importantly pay special attention to possible degeneracies with the galaxy bias. Remarkably, we find that once this technique is implemented, constraints on dynamical dark energy from the SDSS imaging catalogue can be competitive with, or better than, those from the spectroscopic BOSS survey and even future planned experiments. Further, constraints on primordial non-Gaussianity from future large-sky radio-continuum surveys can outperform those from the Planck cosmic microwave background experiment and rival those from future spectroscopic galaxy surveys. The application of this method thus holds tremendous promise for cosmology.
Empirical estimation of school siting parameter towards improving children's safety
Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.
2014-02-01
Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to make sure that a particular school is located in a safe environment. They are issued by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect the safety and reputation of the school, not to mention the perception of its pupils and parents. There have been many studies reviewing school siting parameters, since these change in conjunction with this ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income who live in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods, on site and off site. The on-site method is to give questionnaires to people, and the off-site method is to use a Geographic Information System (GIS) and the Statistical Product and Service Solutions (SPSS) package to analyse the results obtained from the questionnaires. The output is a map of suitable safe distances from school to house. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Efficient estimates of cochlear hearing loss parameters in individual listeners
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According to Jepsen and Dau (2011), IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence, having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001) ... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed...
Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie
2018-04-01
Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) in the redshift range 0 measurements. The binning methods turn out to be promising and are considered robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh^2 fluctuate as the continuous redshift intervals change, on average they are consistent with being constant within the 1σ confidence interval. We therefore conclude that the ΛCDM model cannot be ruled out.
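The two-point Omh^2 diagnostic applied to the binned data has a simple closed form; a minimal sketch, using an exact flat ΛCDM H(z) as test input (an assumption for illustration, not the actual OHD compilation):

```python
def omh2(z1, h1, z2, h2):
    """Two-point Omh^2 diagnostic:
    Omh^2(z1; z2) = (h(z1)^2 - h(z2)^2) / ((1+z1)^3 - (1+z2)^3),
    with h(z) = H(z) / (100 km/s/Mpc). For flat LambdaCDM it is constant
    and equals Omega_m * h0^2 for every redshift pair; deviations from a
    constant would signal a departure from LambdaCDM."""
    return (h1 ** 2 - h2 ** 2) / ((1 + z1) ** 3 - (1 + z2) ** 3)

def h_lcdm(z, om=0.3, h0=0.7):
    """Exact flat LambdaCDM h(z) for checking the diagnostic (illustrative
    Omega_m = 0.3, h0 = 0.7)."""
    return h0 * (om * (1 + z) ** 3 + 1 - om) ** 0.5

print(omh2(0.5, h_lcdm(0.5), 1.5, h_lcdm(1.5)))  # ≈ 0.147 = 0.3 * 0.7**2
```

With real OHD, h1 and h2 carry errors, so Omh^2 fluctuates from pair to pair; the abstract's point is that the fluctuations stay within 1σ of a constant.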
Sugarcane maturity estimation through edaphic-climatic parameters
Directory of Open Access Journals (Sweden)
Scarpari Maximiliano Salles
2004-01-01
Full Text Available Sugarcane (Saccharum officinarum L. grows under different weather conditions directly affecting crop maturation. Raw material quality predicting models are important tools in sugarcane crop management; the goal of these models is to provide productivity estimates during harvesting, increasing the efficiency of strategical and administrative decisions. The objective of this work was developing a model to predict Total Recoverable Sugars (TRS during harvesting, using data related to production factors such as soil water storage and negative degree-days. The database of a sugar mill for the crop seasons 1999/2000, 2000/2001 and 2001/2002 was analyzed, and statistical models were tested to estimate raw material. The maturity model for a one-year old sugarcane proved to be significant, with a coefficient of determination (R² of 0.7049*. No differences were detected between measured and estimated data in the simulation (P < 0.05.
An Introduction to Goodness of Fit for PMU Parameter Estimation
Energy Technology Data Exchange (ETDEWEB)
Riepnieks, Artis; Kirkham, Harold
2017-10-01
New results of measurements of phasor-like signals are presented, based on our previous work on the topic. An improved estimation method is described, and the algorithm (realized in MATLAB) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data, with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.
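The kind of fit involved can be illustrated with a least-squares sinusoid estimator at known frequency; the residual used here is only a simple stand-in for the paper's Goodness of Fit metric, and the 50 Hz example values are assumptions:

```python
import math

def fit_phasor(samples, times, freq):
    """Least-squares fit of x(t) ~ a*cos(w t) + b*sin(w t) at a known
    frequency, via the 2x2 normal equations. Returns magnitude, phase
    (of mag*cos(w t + phase)) and the residual sum of squares, a crude
    proxy for a goodness-of-fit measure."""
    w = 2 * math.pi * freq
    scc = sum(math.cos(w * t) ** 2 for t in times)
    sss = sum(math.sin(w * t) ** 2 for t in times)
    scs = sum(math.cos(w * t) * math.sin(w * t) for t in times)
    sxc = sum(x * math.cos(w * t) for x, t in zip(samples, times))
    sxs = sum(x * math.sin(w * t) for x, t in zip(samples, times))
    det = scc * sss - scs * scs
    a = (sxc * sss - sxs * scs) / det
    b = (sxs * scc - sxc * scs) / det
    resid = sum((x - a * math.cos(w * t) - b * math.sin(w * t)) ** 2
                for x, t in zip(samples, times))
    return math.hypot(a, b), math.atan2(-b, a), resid

# Clean 50 Hz signal, 5 samples per cycle, window shorter than half a cycle
f = 50.0
ts = [i / (5 * f) for i in range(3)]
xs = [2.0 * math.cos(2 * math.pi * f * t + 0.5) for t in ts]
mag, phase, resid = fit_phasor(xs, ts, f)
print(round(mag, 6), round(phase, 6))  # -> 2.0 0.5
```

With clean data even three samples recover the phasor exactly; noise and distortion inflate the residual, which is the intuition behind a goodness-of-fit indicator.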
Response-Based Estimation of Sea State Parameters
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2007-01-01
... of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by the FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence ... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements, it is seen that the wave estimations based on closed-form expressions exhibit a reasonable energy content, but the distribution of energy...
Application of Parameter Estimation for Diffusions and Mixture Models
DEFF Research Database (Denmark)
Nolsøe, Kim
The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples ... with the posterior score function. From an application point of view this methodology is easy to apply, since the optimal estimating function G(·; Xt1, ..., Xtn) is equal to the classical optimal estimating function, plus a correction term which takes into account the prior information. The methodology is particularly...
Estimation of beech pyrolysis kinetic parameters by Shuffled Complex Evolution.
Ding, Yanming; Wang, Changjian; Chaos, Marcos; Chen, Ruiyu; Lu, Shouxiang
2016-01-01
The pyrolysis kinetics of a typical biomass energy feedstock, beech, was investigated based on thermogravimetric analysis over a wide heating rate range from 5 K/min to 80 K/min. A three-component (corresponding to hemicellulose, cellulose and lignin) parallel decomposition reaction scheme was applied to describe the experimental data. The resulting kinetic reaction model was coupled to an evolutionary optimization algorithm (Shuffled Complex Evolution, SCE) to obtain model parameters. To the authors' knowledge, this is the first study in which SCE has been used in the context of thermogravimetry. The kinetic parameters were simultaneously optimized against data for the 10, 20 and 60 K/min heating rates, providing excellent fits to experimental data. Furthermore, it was shown that the optimized parameters were applicable to heating rates (5 and 80 K/min) beyond those used to generate them. Finally, the predicted results based on optimized parameters were contrasted with those based on the literature.
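A minimal sketch of the forward model that such an optimizer is wrapped around: three parallel first-order Arrhenius decompositions at a fixed heating rate. The kinetic parameters below are illustrative placeholders, not the fitted beech values, and simple Euler integration stands in for a proper ODE solver:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mass_fraction(T_end, beta, components, dT=0.5):
    """Residual mass fraction of a parallel first-order decomposition scheme,
    d(alpha_i)/dT = (A_i / beta) * exp(-E_i / (R T)) * (1 - alpha_i),
    integrated from 300 K with an explicit Euler step.
    beta: heating rate in K/s; components: list of (mass fraction, A, E)."""
    mass = 0.0
    for frac, A, E in components:
        alpha, T = 0.0, 300.0
        while T < T_end:
            alpha += dT * (A / beta) * math.exp(-E / (R * T)) * (1.0 - alpha)
            alpha = min(alpha, 1.0)   # clamp against Euler overshoot
            T += dT
        mass += frac * (1.0 - alpha)
    return mass

# Illustrative (not fitted) hemicellulose / cellulose / lignin parameters
comps = [(0.3, 1e10, 1.2e5), (0.5, 1e14, 1.9e5), (0.2, 1e3, 6.0e4)]
beta = 20.0 / 60.0  # 20 K/min expressed in K/s
print(mass_fraction(600.0, beta, comps), mass_fraction(900.0, beta, comps))
```

An optimizer such as SCE would repeatedly call this forward model and minimize the misfit to the measured thermogravimetric curves across several heating rates simultaneously.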
LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES
DEFF Research Database (Denmark)
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2004-01-01
The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure, to obtain knowledge about the parameters of other important, but not monitored, response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.
Parameter extraction and estimation based on the PV panel outdoor ...
African Journals Online (AJOL)
userpc
The five parameters in Equation (1) depend on the incident solar irradiance, the cell temperature, and on their reference values. These reference values are generally provided by manufacturers of PV modules for specified operating conditions such as STC (Standard Test Conditions), for which the irradiance is 1000 W/m² and the...
Unconstrained parameter estimation for assessment of dynamic cerebral autoregulation
International Nuclear Information System (INIS)
Chacón, M; Nuñez, N; Henríquez, C; Panerai, R B
2008-01-01
Measurement of dynamic cerebral autoregulation (CA), the transient response of cerebral blood flow (CBF) to changes in arterial blood pressure (ABP), has been performed with an index of autoregulation (ARI), related to the parameters of a second-order differential equation model, namely gain (K), damping factor (D) and time constant (T). Limitations of the ARI were addressed by increasing its numerical resolution and generalizing the parameter space. In 16 healthy subjects, recordings of ABP (Finapres) and CBF velocity (ultrasound Doppler) were performed at rest, before, during and after 5% CO2 breathing, and for six repeated thigh cuff maneuvers. The unconstrained model produced lower predictive error (p < 0.001) than the original model. Unconstrained parameters (K'–D'–T') were significantly different from K–D–T but were still sensitive to different measurement conditions, such as the under-regulation induced by hypercapnia. The intra-subject variability of K' was significantly lower than that of the ARI, and this parameter did not show the unexpected occurrences of zero values observed with the ARI and the classical value of K. These results suggest that K' could be considered a more stable and reliable index of dynamic autoregulation than the ARI. Further studies are needed to validate this new index under different clinical conditions.
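The second-order model behind the ARI can be sketched with the discrete equations of Tiecks et al. (1995); the K, D, T values below are illustrative textbook-style values, not those fitted in this study:

```python
def tiecks_response(dP, fs, K=0.9, D=0.75, T=1.9):
    """Discrete second-order autoregulation model (after Tiecks et al., 1995),
    the basis of the ARI. The states x1, x2 are driven by the normalized
    pressure deviation dP[n]; the model velocity is
    V[n] = 1 + dP[n] - K * x2[n].  K is the gain, D the damping factor and
    T the time constant; the values here are illustrative."""
    x1 = x2 = 0.0
    V = []
    for p in dP:
        x2 += (x1 - 2.0 * D * x2) / (fs * T)
        x1 += (p - x2) / (fs * T)
        V.append(1.0 + p - K * x2)
    return V

fs = 10.0                            # sampling rate, Hz
dP = [0.0] * 10 + [-0.2] * 200       # 20% step drop in pressure at t = 1 s
V = tiecks_response(dP, fs)
print(V[10], V[-1])  # immediate drop, then recovery toward baseline when K > 0
```

In steady state x2 tracks dP, so V settles at 1 + dP(1 - K): the closer K is to 1, the more completely flow recovers from the pressure step, which is exactly what the ARI grades.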
Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters
Directory of Open Access Journals (Sweden)
Bin Chen
2017-03-01
Measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert combined effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters, which result in measurement errors. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. After software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT causes an appreciable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real power measurement error.
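The link between divider parameters and measurement error can be illustrated with a lumped lossy-divider model; this is a simplification of the paper's full CVT equivalent circuit, and the capacitance and loss values below are assumptions for illustration:

```python
import cmath, math

def divider_ratio(C1, C2, tan_d1=0.0, tan_d2=0.0, w=2 * math.pi * 50):
    """Complex ratio V_out/V_in = Y1 / (Y1 + Y2) of a capacitive divider,
    modelling each capacitor as a lossy admittance
    Y = j*w*C*(1 - j*tan_delta).  This lumped model is illustrative and
    omits the tuning reactor and intermediate transformer of a real CVT."""
    Y1 = 1j * w * C1 * (1 - 1j * tan_d1)
    Y2 = 1j * w * C2 * (1 - 1j * tan_d2)
    return Y1 / (Y1 + Y2)

C1, C2 = 10e-9, 100e-9           # high- and low-voltage capacitors (assumed)
r0 = divider_ratio(C1, C2)
r1 = divider_ratio(C1 * 1.002, C2)               # +0.2% change in C1
mag_err = (abs(r1) - abs(r0)) / abs(r0) * 100    # percent
r2 = divider_ratio(C1, C2, tan_d1=0.002)         # dielectric loss in C1
phase_err = cmath.phase(r2 / r0) * 180 / math.pi * 60  # arc-minutes
print(round(mag_err, 3), round(phase_err, 2))    # ~0.18 % and ~ -6.3 arcmin
```

Even in this toy model, a capacitance change maps almost directly into a ratio (magnitude) error while a loss-factor change maps into a phase error, which matches the qualitative split reported in the abstract.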
Estimation of Aerodynamic Parameters in Conditions of Measurement
Directory of Open Access Journals (Sweden)
Htang Om Moung
2017-01-01
The paper discusses the problem of aircraft parameter identification in conditions of measurement noise. It is assumed that all the signals involved in the identification process are subject to measurement noise, that is, normally distributed random measurement errors. The results of simulation are presented, which show the relation between the noise standard deviations and the accuracy of identification.
A general method of estimating stellar astrophysical parameters from photometry
Belikov, A. N.; Roeser, S.
2008-01-01
Context. Applying photometric catalogs to the study of the population of the Galaxy is hampered by the impossibility of mapping photometric colors directly into astrophysical parameters. Most all-sky catalogs like ASCC or 2MASS are based upon broad-band photometric systems, and the use of broad
Hierarchical Bayesian parameter estimation for cumulative prospect theory
Nilsson, H.; Rieskamp, J.; Wagenmakers, E.-J.
2011-01-01
Cumulative prospect theory (CPT Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and
Parameter estimation in stochastic mammogram model by heuristic optimization techniques.
Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.
2006-01-01
The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or
EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES
Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc-second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...
Online Parameter Estimation for a Centrifugal Decanter System
DEFF Research Database (Denmark)
Larsen, Jesper Abildgaard; Alstrøm, Preben
2014-01-01
In many processing plants, decanter systems are used for separation of heterogeneous mixtures, and even though they account for a large fraction of the energy consumption, most decanters just run at a fixed setpoint. Here, multi-model estimation is applied to a waste water treatment plant, and it...
Estimates of selection parameters in protein mutants of spring barley
International Nuclear Information System (INIS)
Gaul, H.; Walther, H.; Seibold, K.H.; Brunner, H.; Mikaelsen, K.
1976-01-01
Detailed studies have been made with induced protein mutants regarding a possible genetic advance in selection, including the estimation of the genetic variation and heritability coefficients. Estimates were obtained for protein content and protein yield. The variation of mutant lines in different environments was found to be many times as large as the variation of the line means. The detection of improved protein mutants therefore seems possible only in trials with more than one environment. The heritability of protein content and protein yield was estimated in different sets of environments and was found to be low; however, higher values were found with an increasing number of environments. At least four environments seem to be necessary to obtain reliable heritability estimates. The genetic component of the variation between lines was significant for protein content in all environmental combinations. For protein yield, only some environmental combinations showed significant differences. The expected genetic advance with one selection step was small for both protein traits. Genetically significant differences between protein micromutants give, however, a first indication that selection among protein mutants with small differences also seems possible. (author)
On Structure, Family and Parameter Estimation of Hierarchical Archimedean Copulas
Czech Academy of Sciences Publication Activity Database
Górecki, J.; Hofert, M.; Holeňa, Martin
2017-01-01
Roč. 87, č. 17 (2017), s. 3261-3324 ISSN 0094-9655 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : copula estimation * goodness-of-fit * Hierarchical Archimedean copula * structure determination Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016
Estimation of reservoir parameter using a hybrid neural network
Energy Technology Data Exchange (ETDEWEB)
Aminzadeh, F. [FACT, Suite 201-225, 1401 S.W. FWY Sugarland, TX (United States); Barhen, J.; Glover, C.W. [Center for Engineering Systems Advanced Research, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Toomarian, N.B. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA (United States)
1999-11-01
Estimation of an oil field's reservoir properties using seismic data is a crucial issue. The accuracy of those estimates and the associated uncertainty are also important information. This paper demonstrates the use of the k-fold cross validation technique to obtain confidence bounds on an Artificial Neural Network's (ANN) accuracy statistic from a finite sample set. In addition, we also show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space to a dimensionally smaller, new input space. The new input space represents a feature space that maximizes the linear separation between classes. Thus, the ANN's convergence time and accuracy are improved because the ANN must merely find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and feature space transformations are demonstrated on the problem of estimating the sand thickness in an oil field reservoir based only on remotely sensed seismic data.
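The k-fold idea can be sketched with a stand-in classifier; the nearest-centroid rule and the synthetic 1-D "sand" feature below replace the paper's ANN and seismic attributes, but the fold-wise accuracy statistics are computed the same way:

```python
import random, statistics

def kfold_accuracy(data, labels, k=5, seed=0):
    """k-fold cross validation of a nearest-centroid classifier.
    Returns the mean fold accuracy and its standard error, which is the
    basic ingredient of a confidence bound on an accuracy statistic
    (the classifier is a stand-in for the paper's ANN)."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        # class centroids from the training part only
        cents = {}
        for lab in set(labels):
            pts = [data[i] for i in train if labels[i] == lab]
            cents[lab] = sum(pts) / len(pts)
        correct = sum(
            min(cents, key=lambda lab: abs(data[i] - cents[lab])) == labels[i]
            for i in fold)
        accs.append(correct / len(fold))
    mean = statistics.mean(accs)
    sem = statistics.stdev(accs) / len(accs) ** 0.5
    return mean, sem

# Two well-separated synthetic classes (e.g. "thin" vs "thick" sand)
rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(5, 1) for _ in range(100)]
ys = [0] * 100 + [1] * 100
mean, sem = kfold_accuracy(xs, ys, k=5)
print(mean, "+/-", sem)
```

A confidence bound then follows as, e.g., mean ± 2·sem; the key point of the abstract is that the spread across folds, not a single held-out score, quantifies the reliability of the accuracy estimate.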
Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2010-01-01
Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficul...
Parameters estimation for X-ray sources: positions
International Nuclear Information System (INIS)
Avni, Y.
1977-01-01
It is shown that the sizes of the positional error boxes for x-ray sources can be determined by using an estimation method which we have previously formulated generally and applied in spectral analyses. It is explained how this method can be used by scanning x-ray telescopes, by rotating modulation collimators, and by HEAO-A (author)
Parameter estimation of electricity spot models from futures prices
Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.
We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate
Estimation of fracture parameters using elastic full-waveform inversion
Zhang, Zhendong; Alkhalifah, Tariq Ali; Oh, Juwon; Tsvankin, Ilya
2017-01-01
regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve
Cosmological helium production simplified
International Nuclear Information System (INIS)
Bernstein, J.; Brown, L.S.; Feinberg, G.
1988-01-01
We present a simplified model of helium synthesis in the early universe. The purpose of the model is to explain clearly the physical ideas relevant to the cosmological helium synthesis, in a manner that does not overlay these ideas with complex computer calculations. The model closely follows the standard calculation, except that it neglects the small effect of Fermi-Dirac statistics for the leptons. We also neglect the temperature difference between photons and neutrinos during the period in which neutrons and protons interconvert. These approximations allow us to express the neutron-proton conversion rates in a closed form, which agrees to 10% accuracy or better with the exact rates. Using these analytic expressions for the rates, we reduce the calculation of the neutron-proton ratio as a function of temperature to a simple numerical integral. We also estimate the effect of neutron decay on the helium abundance. Our result for this quantity agrees well with precise computer calculations. We use our semi-analytic formulas to determine how the predicted helium abundance varies with such parameters as the neutron life-time, the baryon to photon ratio, the number of neutrino species, and a possible electron-neutrino chemical potential. 19 refs., 1 fig., 1 tab
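The spirit of the semi-analytic estimate can be captured in a back-of-envelope calculation; the freeze-out temperature and deuterium-formation time below are rough illustrative numbers, not the paper's integrated rates:

```python
import math

def helium_mass_fraction(T_freeze=0.8, Q=1.293, t_nuc=200.0, tau_n=879.4):
    """Back-of-envelope helium estimate in the style of the abstract:
    the n/p ratio freezes out at its equilibrium value exp(-Q/T_freeze)
    (Q = m_n - m_p in MeV, T_freeze in MeV), free neutron decay then
    depletes it until deuterium formation at t_nuc seconds, and essentially
    all surviving neutrons end up in 4He, so
    Y_p = 2 (n/p) / (1 + n/p).  All numbers are illustrative."""
    n_over_p = math.exp(-Q / T_freeze) * math.exp(-t_nuc / tau_n)
    return 2.0 * n_over_p / (1.0 + n_over_p)

print(helium_mass_fraction())  # roughly 0.27, in the neighborhood of Y_p ~ 0.25
```

The parameter dependences discussed in the abstract are visible here: a longer neutron lifetime tau_n or earlier nucleosynthesis (smaller t_nuc) leaves more neutrons and hence more helium.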
Ellis, G F R
1993-01-01
Many topics were covered in the submitted papers, showing much life in this subject at present. They ranged from conventional calculations in specific cosmological models to provocatively speculative work. Space and time restrictions required selecting among them for the summary given here; the book of Abstracts should be consulted for a full overview.
DEFF Research Database (Denmark)
Aghanim, N.; Akrami, Y.; Ashdown, M.
2017-01-01
never before been measured to cosmic-variance-level precision. We have investigated these shifts to determine whether they are within the range of expectation and to understand their origin in the data. Taking our parameter set to be the optical depth of the reionized intergalactic medium τ, the baryon density ωb, the matter density ωm, the angular size of the sound horizon θ∗, the spectral index of the primordial power spectrum ns, and Ase−2τ (where As is the amplitude of the primordial power spectrum), we have examined the change in best-fit values between a WMAP-like large angular-scale data set...
International Nuclear Information System (INIS)
Bachoc, Francois
2014-01-01
Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)
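Maximum Likelihood estimation of a covariance parameter on a perturbed regular grid can be sketched for a 1-D exponential kernel; the grid search below stands in for a proper optimizer, and all numerical settings are illustrative:

```python
import math
import random

def chol(A):
    """Lower Cholesky factor of a symmetric positive-definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def gp_loglik(x, y, theta, sigma2=1.0, nugget=1e-8):
    """Gaussian log-likelihood of observations y at 1-D points x under a
    zero-mean GP with exponential covariance k(s,t) = sigma2*exp(-|s-t|/theta).
    Maximizing this over theta is the Maximum Likelihood covariance-parameter
    estimation analyzed in the abstract."""
    n = len(x)
    K = [[sigma2 * math.exp(-abs(x[i] - x[j]) / theta)
          + (nugget if i == j else 0.0) for j in range(n)] for i in range(n)]
    L = chol(K)
    z = []                         # forward substitution: solve L z = y
    for i in range(n):
        z.append((y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return -0.5 * (logdet + sum(v * v for v in z) + n * math.log(2 * math.pi))

# Randomly perturbed regular grid, as in the abstract's sampling scheme
rng = random.Random(0)
x = sorted(i + 0.4 * rng.uniform(-1, 1) for i in range(25))
u = [rng.gauss(0, 1) for _ in x]
Lt = chol([[math.exp(-abs(a - b) / 2.0) + (1e-8 if i == j else 0.0)
            for j, b in enumerate(x)] for i, a in enumerate(x)])
y = [sum(Lt[i][k] * u[k] for k in range(i + 1)) for i in range(len(x))]
best = max([0.5, 1.0, 2.0, 4.0, 8.0], key=lambda th: gp_loglik(x, y, th))
print("ML range parameter on the grid:", best)
```

The strength of the grid perturbation (here the 0.4 jitter amplitude) plays the role of the abstract's regularity parameter: small spacings between some pairs of points are precisely what improves the covariance-parameter estimate.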
Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix
International Nuclear Information System (INIS)
Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.
2012-01-01
An uncertainty estimation method is proposed for core safety parameters for which measurement values are not available. We empirically recognize correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is available. Furthermore, the correlations can also be used for the reduction of uncertainties of core safety parameters. (authors)
Nonparametric Estimation of Regression Parameters in Measurement Error Models
Czech Academy of Sciences Publication Activity Database
Ehsanes Saleh, A.K.M.D.; Picek, J.; Kalina, Jan
2009-01-01
Roč. 67, č. 2 (2009), s. 177-200 ISSN 0026-1424 Grant - others:GA AV ČR(CZ) IAA101120801; GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z10300504 Keywords : asymptotic relative efficiency(ARE) * asymptotic theory * emaculate mode * Me model * R-estimation * Reliabilty ratio(RR) Subject RIV: BB - Applied Statistics, Operational Research
Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme
DEFF Research Database (Denmark)
Li, Changgang; Zhang, Yaping; Zhang, Hengxu
2017-01-01
Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter...
Marker-based estimation of genetic parameters in genomics.
Directory of Open Access Journals (Sweden)
Zhiqiu Hu
Linear mixed model (LMM) analysis has recently been used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While LMM analysis is computationally less intensive than Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS), as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match the precision of estimation of the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative, particularly when analyzing 'big' genomic data sets.
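The abstract does not spell out the SDS formulas, but the general idea of estimating genetic variance from least-squares regression on pairwise quantities can be sketched with a Haseman–Elston-type regression of squared phenotype differences on genomic relatedness; this is a method in the same least-squares spirit, not the SDS procedure itself:

```python
def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def pairwise_variance_estimate(y, G):
    """Regress squared phenotype differences on genomic relatedness:
    under y_i = g_i + e_i with cov(g_i, g_j) = sigma_g^2 * G_ij,
    E[(y_i - y_j)^2] = const - 2 * sigma_g^2 * G_ij,
    so -slope/2 estimates the additive genetic variance sigma_g^2.
    (Haseman-Elston-type regression, used here as an illustrative
    stand-in for the SDS idea.)"""
    xs, ds = [], []
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            xs.append(G[i][j])
            ds.append((y[i] - y[j]) ** 2)
    return -ols_slope(xs, ds) / 2.0

# Tiny hand-checkable example: three individuals, relatedness matrix G
G = [[1.0, 0.5, 0.1],
     [0.5, 1.0, 0.3],
     [0.1, 0.3, 1.0]]
print(pairwise_variance_estimate([0.0, 1.0, 2.0], G))  # ≈ 3.75
```

Like the SDS procedure described in the abstract, this needs only least squares over pairs, so its cost scales with simple regression operations rather than with the repeated large-matrix factorizations of an LMM fit.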
Reservoir parameter estimation using a hybrid neural network
Energy Technology Data Exchange (ETDEWEB)
Aminzadeh, F. [DGB USA and FACT Inc., Sugarland, TX (United States); Barhen, J.; Glover, C.W. [Oak Ridge National Laboratory (United States). Center for Engineering Systems Advanced Resesarch; Toomarian, N.B. [California Institute of Technology (United States). Jet Propulsion Laboratory
2000-10-01
The accuracy of an artificial neural network (ANN) algorithm is a crucial issue in the estimation of an oil field's reservoir properties from the log and seismic data. This paper demonstrates the use of the k-fold cross validation technique to obtain confidence bounds on an ANN's accuracy statistic from a finite sample set. In addition, we also show that an ANN's classification accuracy is dramatically improved by transforming the ANN's input feature space to a dimensionally smaller new input space. The new input space represents a feature space that maximizes the linear separation between classes. Thus, the ANN's convergence time and accuracy are improved because the ANN must merely find nonlinear perturbations to the starting linear decision boundaries. These techniques for estimating ANN accuracy bounds and feature space transformations are demonstrated on the problem of estimating the sand thickness in an oil field reservoir based only on remotely sensed seismic data. (author)
International Nuclear Information System (INIS)
Amendola, Luca; Campos, Gabriela Camargo; Rosenfeld, Rogerio
2007-01-01
Models where the dark matter component of the Universe interacts with the dark energy field have been proposed as a solution to the cosmic coincidence problem, since in the attractor regime both dark energy and dark matter scale in the same way. In these models the mass of the cold dark matter particles is a function of the dark energy field responsible for the present acceleration of the Universe, and different scenarios can be parametrized by how the mass of the cold dark matter particles evolves with time. In this article we study the impact of a constant coupling δ between dark energy and dark matter on the determination of a redshift dependent dark energy equation of state w_DE(z) and on the dark matter density today from SNIa data. We derive an analytical expression for the luminosity distance in this case. In particular, we show that the presence of such a coupling increases the tension between the cosmic microwave background data from the analysis of the shift parameter in models with constant w_DE and SNIa data for realistic values of the present dark matter density fraction. Thus, an independent measurement of the present dark matter density can place constraints on models with interacting dark energy.
Parameter estimation in a simple stochastic differential equation for phytoplankton modelling
DEFF Research Database (Denmark)
Møller, Jan Kloppenborg; Madsen, Henrik; Carstensen, Jacob
2011-01-01
The use of stochastic differential equations (SDEs) for simulation of aquatic ecosystems has attracted increasing attention in recent years. The SDE setting also provides the opportunity for statistical estimation of ecosystem parameters. We present an estimation procedure, based on Kalman...
Time-course window estimator for ordinary differential equations linear in the parameters
Vujacic, Ivan; Dattner, Itai; Gonzalez, Javier; Wit, Ernst
In many applications, obtaining ordinary differential equation descriptions of dynamic processes is scientifically important. In both Bayesian and likelihood approaches for estimating parameters of ordinary differential equations, the speed and the convergence of the estimation procedure may
Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd
In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)
Adaptive parameter estimation of person recognition model in a stochastic human tracking process
W. Nakanishi; T. Fuse; T. Ishikawa
2015-01-01
This paper aims at estimating the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, these parameters may change according to the observation conditions and the difficulty of predicting human positions. In this paper we therefore formulate an adaptive parameter estimation ...
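A minimal sketch of sequential Bayesian parameter estimation with a bootstrap (SIR) particle filter. The scalar observation model and all numbers below are hypothetical; the paper's actual person-recognition likelihood would replace the Gaussian used here.

```python
import numpy as np

def particle_filter(obs, n_particles=2000, drift=0.05, obs_std=0.5, seed=0):
    """Sequential Bayesian (SIR particle filter) estimate of a slowly varying
    model parameter theta from noisy per-frame observations.
    Observation model (hypothetical): y_t = theta_t + N(0, obs_std^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 2.0, n_particles)   # diffuse prior over theta
    estimates = []
    for y in obs:
        # random-walk prior lets the parameter adapt over time
        particles = particles + rng.normal(0.0, drift, n_particles)
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)   # likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * particles))               # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
    return np.array(estimates)

# parameter drifts from 1.0 to 2.0 over 200 "frames"
rng = np.random.default_rng(1)
true_theta = np.linspace(1.0, 2.0, 200)
obs = true_theta + rng.normal(0, 0.5, 200)
est = particle_filter(obs)
```

The posterior mean tracks the drifting parameter online, which is the adaptivity the abstract argues for over fixed, preset parameters.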
On the estimation of water pure compound parameters in association theories
DEFF Research Database (Denmark)
Grenner, Andreas; Kontogeorgis, Georgios; Michelsen, Michael Locht
2007-01-01
Determination of the appropriate number of association sites and estimation of parameters for association (SAFT-type) theories is not a trivial matter. Building further on a recently published manuscript by Clark et al., this work investigates aspects of the parameter estimation for water using two different association theories. Their performance for various properties, as well as against the results presented earlier, is demonstrated.
Estimation of atomic interaction parameters by photon counting
DEFF Research Database (Denmark)
Kiilerich, Alexander Holm; Mølmer, Klaus
2014-01-01
Detection of radiation signals is at the heart of precision metrology and sensing. In this article we show how the fluctuations in photon counting signals can be exploited to optimally extract information about the physical parameters that govern the dynamics of the emitter. For a simple two-level emitter subject to photon counting, we show that the Fisher information and the Cramér-Rao sensitivity bound based on the full detection record can be evaluated from the waiting time distribution in the fluorescence signal, which can, in turn, be calculated for both perfect and imperfect detectors...
Parameter estimation via conditional expectation: a Bayesian inversion
Matthies, Hermann G.; Zander, Elmar; Rosić, Bojana V.; Litvinenko, Alexander
2016-01-01
When a mathematical or computational model is used to analyse some system, it is usual that some parameters, functions, or fields in the model are not known, and hence uncertain. These parametric quantities are then identified from actual observations of the response of the real system. In a probabilistic setting, Bayes's theory is the proper mathematical background for this identification process. The possibility of computing a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.
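In the linear-Gaussian special case, the conditional expectation reduces to the best linear map fitted over prior samples, which can be sketched directly. This is only the simplest instance of the functional approximation the authors discuss; the forward model and all numbers below are illustrative.

```python
import numpy as np

def linear_conditional_expectation(prior_samples, sim_obs, y_obs):
    """Approximate E[q | y] by the best *linear* map phi(y) = a + K*y,
    fitted by least squares over an ensemble of prior samples.
    A sketch of the MMSE idea behind ensemble Bayesian inversion,
    not the authors' full functional-approximation scheme."""
    q = np.asarray(prior_samples, float)   # parameter samples from the prior
    y = np.asarray(sim_obs, float)         # corresponding simulated noisy observations
    K = np.cov(q, y)[0, 1] / np.var(y)     # optimal linear gain
    a = q.mean() - K * y.mean()
    return a + K * y_obs                   # estimate of the conditional mean

# toy inverse problem: forward model y = 2q + noise, prior q ~ N(1, 1)
rng = np.random.default_rng(0)
q = rng.normal(1.0, 1.0, 5000)
y = 2.0 * q + rng.normal(0, 0.5, 5000)
q_post = linear_conditional_expectation(q, y, y_obs=4.0)
```

For this linear-Gaussian toy case the exact posterior mean is 33/17 ≈ 1.94, which the ensemble estimate approaches as the sample size grows.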
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
This paper is concerned with modifications of the maximum likelihood, moments, and percentile estimators of the two-parameter power function distribution. Sampling behavior of the estimators is examined by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators perform better than the traditional maximum likelihood, moments, and percentile estimators with respect to bias, mean square error, and total deviation.
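A Monte Carlo comparison of the kind described can be sketched as follows, for the simplified case where the scale parameter b = 1 is known. The paper's modified estimators are not reproduced here; only the plain maximum likelihood and method-of-moments estimators are shown.

```python
import numpy as np

def simulate(a=2.0, n=50, reps=2000, seed=0):
    """Monte Carlo bias/MSE comparison of the ML and moment estimators of the
    shape parameter a of a power function distribution on (0, 1),
    with scale b = 1 assumed known (a simplification of the paper's setting)."""
    rng = np.random.default_rng(seed)
    # inverse-CDF sampling: F(x) = x^a  =>  x = u^(1/a)
    x = rng.uniform(size=(reps, n)) ** (1.0 / a)
    a_mle = -n / np.log(x).sum(axis=1)     # maximum likelihood estimator
    m = x.mean(axis=1)
    a_mom = m / (1.0 - m)                  # method of moments: E[X] = a/(a+1)
    stats = {}
    for name, est in (("mle", a_mle), ("mom", a_mom)):
        stats[name] = {"bias": est.mean() - a, "mse": ((est - a) ** 2).mean()}
    return stats

stats = simulate()
```

Tabulating bias and mean square error over many replications is exactly the sampling-behavior comparison the abstract refers to.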
Parameters influencing deposit estimation when using water sensitive papers
Directory of Open Access Journals (Sweden)
Emanuele Cerruto
2013-10-01
The aim of the study was to assess the possibility of using water sensitive papers (WSP) to estimate the amount of deposit on the target when varying the spray characteristics. To identify the main quantities influencing the deposit, some simplifying hypotheses were applied to simulate WSP behaviour: log-normal distribution of the drop diameters and circular stains randomly placed on the images. A very large number (4704) of WSP images were produced by means of simulation. The images were obtained by simulating drops of different arithmetic mean diameter (40-300 μm), different coefficient of variation (0.1-1.5), and different percentage of covered surface (2-100%, not considering overlaps). These images were treated as effective WSP images and then analysed using image processing software in order to measure the percentage of covered surface, the number of particles, and the area of each particle; the deposit was then calculated. These data were correlated with those used to produce the images, varying the spray characteristics. As far as the drop populations are concerned, a classification based on the volume median diameter alone should be avoided, especially in the case of high variability; it results in classifying sprays with a very low arithmetic mean diameter as extremely or ultra coarse. The WSP image analysis shows that the relation between the simulated and computed percentage of covered surface is independent of the type of spray, whereas impact density and unitary deposit can be estimated from the computed percentage of covered surface only if the spray characteristics (arithmetic mean and coefficient of variation of the drop diameters) are known. These characteristics can in turn be estimated by analysing the particles on the WSP images. The results of a validation test show good agreement between simulated and computed deposits, with a high coefficient of determination (0.93).
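The simulation described above can be sketched as follows. The raster resolution, paper patch size, and drop count below are illustrative choices for speed, not the paper's settings, and edge effects and stain-spread factors are ignored.

```python
import numpy as np

def simulate_wsp(mean_diam_um=150, cv=0.5, n_drops=400, paper_mm=10, seed=0):
    """Rasterized sketch of a WSP simulation: circular stains with
    log-normally distributed diameters placed at random on a paper patch,
    from which the covered-surface fraction is measured (overlaps allowed)."""
    rng = np.random.default_rng(seed)
    # log-normal parameterized by arithmetic mean and coefficient of variation
    sigma2 = np.log(1.0 + cv ** 2)
    mu = np.log(mean_diam_um) - 0.5 * sigma2
    diam = rng.lognormal(mu, np.sqrt(sigma2), n_drops)   # stain diameters, um
    px_per_um = 0.05                                     # raster resolution
    side = int(paper_mm * 1000 * px_per_um)              # patch side in pixels
    yy, xx = np.mgrid[0:side, 0:side]
    covered = np.zeros((side, side), bool)
    cx, cy = rng.uniform(0, side, (2, n_drops))
    for x0, y0, d in zip(cx, cy, diam):
        r = 0.5 * d * px_per_um
        covered |= (xx - x0) ** 2 + (yy - y0) ** 2 <= r ** 2
    return covered.mean()                                # covered-surface fraction

frac = simulate_wsp()
```

Repeating this over a grid of mean diameters, coefficients of variation, and drop densities reproduces, in miniature, the image bank the study analyses.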
Graviton fluctuations erase the cosmological constant
Wetterich, C.
2017-10-01
Graviton fluctuations induce strong non-perturbative infrared renormalization effects for the cosmological constant. The functional renormalization flow drives a positive cosmological constant towards zero, solving the cosmological constant problem without the need to tune parameters. We propose a simple computation of the graviton contribution to the flow of the effective potential for scalar fields. Within variable gravity, with effective Planck mass proportional to the scalar field, we find that the potential increases asymptotically at most quadratically with the scalar field. The solutions of the derived cosmological equations lead to an asymptotically vanishing cosmological "constant" in the infinite future, providing for dynamical dark energy in the present cosmological epoch. Beyond a solution of the cosmological constant problem, our simplified computation also entails a sizeable positive graviton-induced anomalous dimension for the quartic Higgs coupling in the ultraviolet regime, substantiating the successful prediction of the Higgs boson mass within the asymptotic safety scenario for quantum gravity.
Estimation of atomic interaction parameters by quantum measurements
DEFF Research Database (Denmark)
Kiilerich, Alexander Holm; Mølmer, Klaus
Quantum systems, ranging from atomic systems to field modes and mechanical devices, are useful precision probes for a variety of physical properties and phenomena. Measurements by which we extract information about the evolution of single quantum systems yield random results and cause a back action...... strategies, we address the Fisher information and the Cramér-Rao sensitivity bound. We investigate monitoring by photon counting, homodyne detection and frequent projective measurements respectively, and exemplify by Rabi frequency estimation in a driven two-level system....
Estimation of common cause failure parameters for diesel generators
International Nuclear Information System (INIS)
Tirira, J.; Lanore, J.M.
2002-10-01
This paper presents a summary of some results concerning the feedback analysis of French emergency diesel generators (EDG). The database of common cause failures for EDG has been updated; the data collected cover a period of 10 years. Several tens of latent common cause failure (CCF) events are identified. In fact, of the events collected, most are potential CCF. Of the events identified, 15% are characterized as complete CCF. The database is organised following the structure proposed by the 'International Common Cause Data Exchange' (ICDE) project. Events collected are analyzed by failure mode and degree of failure. Qualitative analysis of root causes, coupling factors and corrective actions is presented. Quantitative analysis is in progress for evaluating CCF parameters, taking into account the average impact vector and the rate of independent failures. The interest of the average impact vector approach is that it makes it possible to take into account a wide experience feedback, not limited to complete CCF but also including many events related to partial or potential CCF. It has to be noted that there are no finalized quantitative conclusions yet, as analysis is in progress for evaluating diesel CCF parameters. In fact, the numerical CCF coding of the events involves an element of subjective analysis, which requires a complete and detailed examination of each event. (authors)
Catalytic hydrolysis of ammonia borane: Intrinsic parameter estimation and validation
Energy Technology Data Exchange (ETDEWEB)
Basu, S.; Gore, J.P. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Zheng, Y. [School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907-2088 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States); Varma, A.; Delgass, W.N. [School of Chemical Engineering, Purdue University, West Lafayette, IN 47907-2100 (United States); Energy Center in Discovery Park, Purdue University, West Lafayette, IN 47907-2022 (United States)
2010-04-02
Ammonia borane (AB) hydrolysis is a potential process for on-board hydrogen generation. This paper presents isothermal hydrogen release rate measurements of dilute AB (1 wt%) hydrolysis in the presence of carbon supported ruthenium catalyst (Ru/C). The ranges of investigated catalyst particle sizes and temperatures were 20-181 μm and 26-56 °C, respectively. The obtained rate data included both kinetic and diffusion-controlled regimes, where the latter was evaluated using the catalyst effectiveness approach. A Langmuir-Hinshelwood kinetic model was adopted to interpret the data, with intrinsic kinetic and diffusion parameters determined by a nonlinear fitting algorithm. The AB hydrolysis was found to have an activation energy of 60.4 kJ mol^-1, pre-exponential factor of 1.36 × 10^10 mol (kg-cat)^-1 s^-1, adsorption energy of -32.5 kJ mol^-1, and effective mass diffusion coefficient of 2 × 10^-10 m^2 s^-1. These parameters, obtained under dilute AB conditions, were validated by comparing measurements with simulations of AB consumption rates during the hydrolysis of concentrated AB solutions (5-20 wt%), and also with the axial temperature distribution in a 0.5 kW continuous-flow packed-bed reactor. (author)
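In the kinetically controlled regime, the reported activation energy and pre-exponential factor relate the rate constant to temperature through the Arrhenius law, which can be checked with a simple linear regression of ln k against 1/T. The data below are synthetic and noiseless, generated from the reported values; this is not the authors' fitting code.

```python
import numpy as np

# reported intrinsic parameters (used here only to generate synthetic data)
Ea_true = 60.4e3                 # activation energy, J/mol
A_true = 1.36e10                 # pre-exponential factor, mol (kg-cat)^-1 s^-1
R = 8.314                        # gas constant, J/(mol K)

T = np.array([299.0, 309.0, 319.0, 329.0])   # K, spanning roughly 26-56 C
k = A_true * np.exp(-Ea_true / (R * T))      # ideal kinetic-regime rate constants

# Arrhenius regression: ln k = ln A - (Ea/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R
A_fit = np.exp(intercept)
```

With real, noisy rate data the same regression gives Ea and A with uncertainties; the paper instead fits the full Langmuir-Hinshelwood model nonlinearly, which also recovers the adsorption energy.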
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
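The sparsity-promoting MAP idea can be sketched with a generic nonnegative ISTA solver for an L1-regularized least-squares problem. This is a stand-in for the authors' iterative algorithm, and the random dictionary below is illustrative, not an MRF dictionary.

```python
import numpy as np

def sparse_unmix(D, y, lam=0.05, n_iter=500):
    """Sparsity-promoting recovery of dictionary weights:
    minimize 0.5*||D x - y||^2 + lam*||x||_1 subject to x >= 0,
    by projected ISTA (gradient step, soft threshold, projection).
    A generic sketch of the MAP idea, not the authors' exact algorithm."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the data-fit term
        x = np.maximum(x - (g + lam) / L, 0.0)
    return x

# dictionary of 20 normalized "fingerprints"; a mixed voxel made of two of them
rng = np.random.default_rng(0)
D = rng.normal(size=(100, 20))
D /= np.linalg.norm(D, axis=0)
y = 0.7 * D[:, 3] + 0.3 * D[:, 7]          # partial-volume mixture
x = sparse_unmix(D, y)
```

The recovered weight vector is sparse, with its two largest entries on the true component fingerprints, which is exactly the per-voxel tissue separation the abstract describes.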
Estimating Stellar Parameters and Interstellar Extinction from Evolutionary Tracks
Directory of Open Access Journals (Sweden)
Sichevsky S.
2016-03-01
Developing methods for analyzing and extracting information from modern sky surveys is a challenging task in astrophysical studies. We study possibilities of parameterizing stars and the interstellar medium from multicolor photometry performed in three modern photometric surveys: GALEX, SDSS, and 2MASS. For this purpose, we have developed a method to estimate stellar radius from effective temperature and gravity with the help of evolutionary tracks and model stellar atmospheres. In accordance with the evolution rate at every point of the evolutionary track, the star formation rate, and the initial mass function, a weight is assigned to the resulting value of radius, which allows us to estimate the radius more accurately. The method is verified for the most populated areas of the Hertzsprung-Russell diagram, main-sequence stars and red giants, and it was found to be rather precise (for main-sequence stars, the average relative error of the radius and its standard deviation are 0.03% and 3.87%, respectively).
A practical approach to parameter estimation applied to model predicting heart rate regulation
DEFF Research Database (Denmark)
Olufsen, Mette; Ottesen, Johnny T.
2013-01-01
Mathematical models have long been used for prediction of dynamics in biological systems. Recently, several efforts have been made to render these models patient specific. One way to do so is to employ techniques to estimate parameters that enable model based prediction of observed quantities. Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate a subset of model parameters, reducing the complexity of the problem. In this study, we compare three methods that allow identification of parameter subsets that can be estimated given a model and a set of data. These methods will be used to estimate patient-specific parameters in a model predicting...
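One common recipe for choosing an estimable parameter subset is greedy selection on the model's sensitivity matrix, sketched below. This is offered as a generic example of the idea, not one of the three methods the study compares.

```python
import numpy as np

def identifiable_subset(S, k):
    """Greedy selection of k identifiable parameters from a sensitivity
    matrix S (rows: observations, columns: parameters): repeatedly pick the
    column with the largest norm, then project that direction out of the
    remaining columns, so redundant (near-collinear) parameters are skipped."""
    S = np.array(S, float)
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(S, axis=0)
        norms[chosen] = -1.0                    # never re-pick a parameter
        j = int(np.argmax(norms))
        chosen.append(j)
        v = S[:, j] / np.linalg.norm(S[:, j])
        S = S - np.outer(v, v @ S)              # deflate the selected direction
    return chosen

# toy sensitivities: columns 0 and 1 are perfectly collinear (redundant pair),
# column 2 carries independent information
t = np.linspace(0, 1, 20)
S = np.column_stack([np.exp(-t), 2 * np.exp(-t), t])
subset = identifiable_subset(S, 2)
```

The procedure keeps one member of the collinear pair and the independent parameter, mirroring the subset-selection goal described in the abstract.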
Estimations of parameters in Pareto reliability model in the presence of masked data
International Nuclear Information System (INIS)
Sarhan, Ammar M.
2003-01-01
Estimations of parameters included in the individual distributions of the lifetimes of system components in a series system are considered in this paper, based on masked system life test data. We consider a series system of two independent components, each with a Pareto distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are carried out in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained.
Directory of Open Access Journals (Sweden)
Chuii Khim Chong
2012-06-01
This paper introduces an improved Differential Evolution algorithm (IDE) which aims at improving its performance in estimating the relevant parameters for metabolic pathway data, in order to simulate the glycolysis pathway for yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools in kinetic modeling and parameter estimation platforms. Many computational algorithms face obstacles due to noisy data and the difficulty of estimating a myriad of system parameters, and require long computation times. The proposed algorithm (IDE) is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is shown to be superior to that of a Genetic Algorithm (GA) and DE. Experimental results for IDE show estimated optimal kinetic parameter values, shorter computation times, and increased accuracy of simulated results compared with other estimation algorithms.
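The base algorithm, classic DE/rand/1/bin, can be sketched as follows. The Kalman-filter hybridization that distinguishes IDE is omitted, and a toy two-parameter kinetic curve stands in for the glycolysis model.

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
    """Classic DE/rand/1/bin minimizer: mutate with scaled difference vectors,
    binomial crossover, greedy selection. The base algorithm the paper's IDE
    hybridizes with a Kalman filter (KF step omitted in this sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    X = rng.uniform(lo, hi, (pop, d))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True      # guarantee at least one mutant gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                   # greedy selection
                X[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return X[best], fit[best]

# estimate (amplitude, rate) of a toy kinetic curve from noiseless data
t = np.linspace(0, 5, 30)
data = 1.2 * np.exp(-0.4 * t)
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)
params, err = differential_evolution(sse, [(0, 5), (0, 5)])
```

Minimizing the sum of squared errors between simulated and observed time courses is the same objective the paper's estimators optimize over the pathway model.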
Data adaptive control parameter estimation for scaling laws
Energy Technology Data Exchange (ETDEWEB)
Dinklage, Andreas [Max-Planck-Institut fuer Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstrasse 1, D-17491 Greifswald (Germany); Dose, Volker [Max-Planck- Institut fuer Plasmaphysik, Boltzmannstrasse 2, D-85748 Garching (Germany)
2007-07-01
Bayesian experimental design quantifies the utility of data by the expected information gain. Data-adaptive exploration determines the expected utility of a single new measurement using the existing data and a data-descriptive model; in other words, the method can be used for experimental planning. As an example of a multivariate linear case, we apply this method to the construction of scaling laws for fusion devices. In detail, the scaling of the stellarator W7-AS is examined for a subset of ι = 1/3 data. The impact of the existing data on the scaling exponents is presented. Furthermore, regions of high utility in control parameter space are identified which improve the accuracy of the scaling law. This approach is not restricted to the presented example, but can also be extended to non-linear models.
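For a linear model with Gaussian prior and noise, the expected information gain of a single new measurement has a closed form, which makes the "utility map over control-parameter space" idea easy to sketch. The numbers below are illustrative; the paper's scaling-law setting is log-linear but otherwise analogous.

```python
import numpy as np

def information_gain(x, Sigma, sigma2):
    """Expected information gain (in nats) of one new measurement
    y = x . beta + eps for a linear model with Gaussian prior
    beta ~ N(m, Sigma) and noise variance sigma2:
    gain = 0.5 * log(1 + x^T Sigma x / sigma2)."""
    return 0.5 * np.log1p(x @ Sigma @ x / sigma2)

# posterior covariance of two scaling exponents after some data:
Sigma = np.diag([0.5, 0.05])    # exponent 0 is still poorly constrained
sigma2 = 0.1                    # measurement noise variance
# candidate control-parameter settings (design vectors in log space):
candidates = np.array([[1.0, 0.0],    # probes the uncertain exponent
                       [0.0, 1.0]])   # probes the well-constrained one
gains = np.array([information_gain(x, Sigma, sigma2) for x in candidates])
```

The candidate that probes the poorly constrained exponent has the higher expected utility, which is the mechanism by which the method steers new measurements toward the most informative regions of control-parameter space.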
Improving Distribution Resiliency with Microgrids and State and Parameter Estimation
Energy Technology Data Exchange (ETDEWEB)
Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-09-30
Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and improving insight into operations so that failures or mis-operations are detected before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected the majority of the time, and implementing and operating an islanded microgrid is much different from grid-connected operation. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using
Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model
Directory of Open Access Journals (Sweden)
Kese Pontes Freitas Alberton
2015-01-01
This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated, composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since fitting the model to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial parameter estimates were not available.
Estimation of human body shape parameters using the Microsoft Kinect sensor
Directory of Open Access Journals (Sweden)
D. M. Vasilkov
2017-01-01
In this paper a human body shape estimation technology based on scan data acquired from the Microsoft Kinect sensor controller is described. This device includes an RGB camera and a depth sensor that provides, for each pixel of the image, the distance from the camera focus to the object. A scan session produces a triangulated high-density surface contaminated by oscillations, isolated fragments, and holes. When scanning a human, additional noise comes from garment folds and wrinkles. An algorithm is proposed for creating a sparse and regular 3D human body model (avatar) free of these defects, which approximates the shape, posture, and basic metrics of the scanned body. This solution finds application in the made-to-measure clothing industry and in computer games as well.
International Nuclear Information System (INIS)
Zeng, G.L.; Gullberg, G.T.
1995-01-01
It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional reconstructions and then fitting the parameters to time activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires knowledge of the relationship between the tissue regions of interest or voxels with particular kinetic parameters and the projection measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper whereby the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than estimation from the reconstructed images.
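The "decay parameters first, linear coefficients second" structure can be illustrated with a separable least-squares sketch: candidate exponents are scanned, and the multiplicative coefficients are obtained by ordinary least squares at each candidate. A grid search stands in here for the paper's linear time-invariant system-theory estimation of the exponents.

```python
import numpy as np
from itertools import combinations

def fit_exponentials(t, y, n_terms, rates_grid):
    """Separable least-squares fit of y(t) = sum_i c_i * exp(-lam_i * t):
    try combinations of candidate decay rates; for each, the coefficients c
    follow from a *linear* least-squares solve, so only the rates need a
    nonlinear search (here a plain grid)."""
    best = None
    for lams in combinations(rates_grid, n_terms):
        A = np.exp(-np.outer(t, lams))             # design matrix of exponentials
        c, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear coefficient estimate
        sse = np.sum((A @ c - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, np.array(lams), c)
    return best[1], best[2]

# noiseless two-compartment "time activity curve"
t = np.linspace(0, 4, 80)
y = 3.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)
grid = np.arange(0.1, 3.01, 0.1)
lams, coeffs = fit_exponentials(t, y, 2, grid)
```

Decoupling the nonlinear exponent search from the linear coefficient solve is what makes the otherwise complicated nonlinear estimation problem tractable.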
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information about the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
How accurately can 21cm tomography constrain cosmology?
Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver
2008-07-01
There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from redshifts 6 ≲ z ≲ 20; we consider the sensitivity to experimental noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral-index running predicted by the simplest inflation models.
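Sensitivity forecasts of this kind are typically computed with a Fisher matrix. A minimal sketch, using a toy two-parameter power-spectrum model in place of the full 21 cm calculation:

```python
import numpy as np

def fisher_forecast(derivs, sigma):
    """Fisher-matrix forecast: F_ij = sum_k (dmu_k/dp_i)(dmu_k/dp_j) / sigma_k^2.
    The 1-sigma marginalized error on parameter i is sqrt((F^-1)_ii)."""
    D = derivs / sigma[:, None]      # weight each data point by its noise
    F = D.T @ D                      # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))

# toy "power spectrum" mu(k) = A * k^n over a band, parameters (A, n)
k = np.linspace(0.1, 1.0, 50)
A, n = 1.0, -1.5
mu = A * k ** n
derivs = np.column_stack([k ** n,            # dmu/dA
                          mu * np.log(k)])   # dmu/dn
sigma = 0.05 * mu                            # 5% errors per band
errs = fisher_forecast(derivs, sigma)
```

Marginalizing over nuisance parameters, as the abstract describes, amounts to adding them as extra columns of derivatives before inverting F.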
Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation
DEFF Research Database (Denmark)
Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt
2015-01-01
We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....
International Nuclear Information System (INIS)
Fliche, H.-H.; Souriau, J.-M.
1978-03-01
On the basis of colorimetric data, a composite spectrum of quasars is established from the visible to the Lyman limit. Its agreement with the directly obtained spectrum of the quasar 3C273 confirms the homogeneity of these objects. The compatibility of two hypotheses, negligible evolution of quasars and a Friedmann-type model of the universe with a cosmological constant, is studied by means of two tests: a non-correlation test adapted to the observation conditions, and the construction of (absolute magnitude, volume) diagrams using the K-correction deduced from the composite spectrum. This procedure yields relatively well-defined parameter values; the central values of the density parameter, the reduced curvature, and the reduced cosmological constant are Ω₀ = 0.053, k₀ = 0.245 and λ₀ = 1.19, which correspond to a big bang model, eternally expanding and spatially finite, in which the Hubble parameter H is presently increasing. This model responds well to different cosmological tests: density of matter, diameter of radio sources, age of the universe. Its characteristics suggest various cosmogonic mechanisms, especially mass formation by growth of empty spherical bubbles [fr]
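Under one common convention for these reduced parameters (conventions vary between authors, so this is only a plausibility check, not a statement about the paper's definitions), the quoted central values can be tested against the Friedmann sum rule:

```latex
\Omega_0 + \lambda_0 - k_0 \;=\; 0.053 + 1.19 - 0.245 \;=\; 0.998 \;\approx\; 1
```

The near-unity sum is consistent with the three quantities describing a single Friedmann model, as the abstract claims.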
Grant, E.; Murdin, P.
2000-11-01
During the early Middle Ages (ca 500 to ca 1130) scholars with an interest in cosmology had little useful and dependable literature. They relied heavily on a partial Latin translation of Plato's Timaeus by Chalcidius (4th century AD), and on a series of encyclopedic treatises associated with the names of Pliny the Elder (ca AD 23-79), Seneca (4 BC-AD 65), Macrobius (fl 5th century AD), Martianus ...
iCosmo: an interactive cosmology package
Refregier, A.; Amara, A.; Kitching, T. D.; Rassat, A.
2011-04-01
Aims: The interactive software package iCosmo, designed to perform cosmological calculations, is described. Methods: iCosmo is a software package to perform interactive cosmological calculations for the low-redshift universe. Computing distance measures, the matter power spectrum, and the growth factor is supported for any values of the cosmological parameters. It also computes derived observed quantities for several cosmological probes such as cosmic shear, baryon acoustic oscillations, and type Ia supernovae. The associated errors for these observable quantities can be derived for customised surveys, or for pre-set values corresponding to current or planned instruments. The code also allows for calculation of cosmological forecasts with Fisher matrices, which can be manipulated to combine different surveys and cosmological probes. The code is written in the IDL language and thus benefits from the convenient interactive features and scientific libraries available in this language. iCosmo can also be used as an engine to perform cosmological calculations in batch mode, and forms a convenient adaptive platform for the development of further cosmological modules. With its extensive documentation, it may also serve as a useful resource for teaching and for newcomers to the field of cosmology. Results: The iCosmo package is described with a number of examples and command sequences. The code is freely available with documentation at http://www.icosmo.org, along with an interactive web interface and is part of the Initiative for Cosmology, a common archive for cosmological resources.
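The simplest of the distance measures such a package computes can be sketched in a few lines. This is not iCosmo code (iCosmo itself is IDL); it is a minimal Python sketch assuming a flat ΛCDM universe with illustrative parameter values.

```python
import math

def comoving_distance(z, h=0.7, omega_m=0.3, n=1000):
    """Line-of-sight comoving distance in Mpc for flat LCDM:
    D_C = (c/H0) * integral_0^z dz'/E(z'), via the trapezoidal rule.
    Parameter defaults are illustrative, not iCosmo's."""
    c = 299792.458           # speed of light, km/s
    H0 = 100.0 * h           # Hubble constant, km/s/Mpc
    omega_l = 1.0 - omega_m  # flatness fixes the dark-energy density

    def E(zp):
        return math.sqrt(omega_m * (1 + zp) ** 3 + omega_l)

    dz = z / n
    s = 0.5 * (1 / E(0) + 1 / E(z))
    for i in range(1, n):
        s += 1 / E(i * dz)
    return (c / H0) * s * dz
```

For these parameters the distance to z = 1 comes out near 3300 Mpc; other probes in such a package (shear, BAO, supernovae) build on integrals of the same kind.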
Mathematical properties and parameter estimation for transit compartment pharmacodynamic models.
Yates, James W T
2008-07-03
One feature of recent research in pharmacodynamic modelling has been the move towards more mechanistically based model structures. However, all of these models share common sub-systems, such as feedback loops and time-delays, whose properties and contribution to the model behaviour merit some mathematical analysis. In this paper a common pharmacodynamic model sub-structure is considered: the linear transit compartment. These models have a number of interesting properties as the length of the cascade chain is increased. In the limiting case a pure time-delay is achieved [Milsum, J.H., 1966. Biological Control Systems Analysis. McGraw-Hill Book Company, New York], and the initial behaviour becomes increasingly sensitive to parameter value perturbation. It is also shown that the modelled drug effect is attenuated, though the duration of action is longer. Through this analysis the range of behaviours that such models are capable of reproducing is characterised. The properties of these models and the experimental requirements are discussed in order to highlight how mathematical analysis prior to experimentation can enhance the utility of mathematical modelling.
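The approach to a pure time-delay is easy to see numerically. The sketch below (hypothetical rate constant, explicit Euler integration, not the paper's analysis) pushes a unit bolus through n linear transit compartments, da_i/dt = k(a_{i-1} - a_i); the terminal compartment peaks near t = n/k, so lengthening the chain shifts the response later, toward a delayed impulse.

```python
def transit_chain(n, k=1.0, dt=0.001, t_end=10.0):
    """Simulate a unit bolus through n linear transit compartments by
    explicit Euler and return the time at which the final compartment
    peaks. Illustrative sketch with a hypothetical rate constant k."""
    a = [0.0] * (n + 1)
    a[0] = 1.0  # bolus in the input compartment
    t, t_peak, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        # input compartment drains with rate k; each transit compartment
        # receives from its predecessor and drains at the same rate
        new = [a[0] - dt * k * a[0]]
        for i in range(1, n + 1):
            new.append(a[i] + dt * k * (a[i - 1] - a[i]))
        a = new
        t += dt
        if a[n] > peak:
            peak, t_peak = a[n], t
    return t_peak
```

Analytically a_n(t) = (kt)^n e^{-kt}/n!, an Erlang profile peaking at t = n/k, which is what the simulation reproduces.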
International Nuclear Information System (INIS)
Partridge, R.B.
1977-01-01
Some sixty years after the development of relativistic cosmology by Einstein and his colleagues, observations are finally beginning to have an important impact on our views of the Universe. The available evidence seems to support one of the simplest cosmological models, the hot Big Bang model. The aim of this paper is to assess the observational support for certain assumptions underlying the hot Big Bang model. These are that the Universe is isotropic and homogeneous on a large scale; that it is expanding from an initial state of high density and temperature; and that the proper theory to describe the dynamics of the Universe is unmodified General Relativity. The properties of the cosmic microwave background radiation and recent observations of the abundance of light elements, in particular, support these assumptions. Also examined here are the data bearing on the related questions of the geometry and the future of the Universe (is it ever-expanding, or fated to recollapse?). Finally, some difficulties and faults of the standard model are discussed, particularly various aspects of the 'initial condition' problem. It appears that the simplest Big Bang cosmological model calls for a highly specific set of initial conditions to produce the presently observed properties of the Universe. (Auth.)
Estimation of cauliflower mass transfer parameters during convective drying
Sahin, Medine; Doymaz, İbrahim
2017-02-01
The study was conducted to evaluate the effect of pre-treatments such as citric acid and hot water blanching and air temperature on drying and rehydration characteristics of cauliflower slices. Experiments were carried out at four different drying air temperatures of 50, 60, 70 and 80 °C with the air velocity of 2.0 m/s. It was observed that drying and rehydration characteristics of cauliflower slices were greatly influenced by air temperature and pre-treatment. Six commonly used mathematical models were evaluated to predict the drying kinetics of cauliflower slices. The Midilli et al. model described the drying behaviour of cauliflower slices at all temperatures better than other models. The values of effective moisture diffusivities (D_eff) were determined using Fick's law of diffusion and were between 4.09 × 10⁻⁹ and 1.88 × 10⁻⁸ m²/s. Activation energy was estimated by an Arrhenius-type equation and was 23.40, 29.09 and 26.39 kJ/mol for citric acid, blanch and control samples, respectively.
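The Arrhenius-type estimation mentioned at the end of the abstract amounts to a linear fit of ln(D_eff) against 1/T, whose slope is -Ea/R. The sketch below uses synthetic diffusivities (generated from an assumed Ea of 25 kJ/mol, not the paper's measured values) to show the procedure recovering the activation energy.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_activation_energy(temps_c, d_eff):
    """Least-squares fit of ln(D_eff) vs 1/T (T in kelvin);
    the slope equals -Ea/R, so Ea = -slope * R (J/mol)."""
    x = [1.0 / (t + 273.15) for t in temps_c]
    y = [math.log(d) for d in d_eff]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R

# Synthetic data from an assumed Arrhenius law: D_eff = D0 exp(-Ea/(R T))
Ea_true, D0 = 25000.0, 1e-4
temps = [50.0, 60.0, 70.0, 80.0]      # the paper's drying temperatures, °C
d = [D0 * math.exp(-Ea_true / (R * (t + 273.15))) for t in temps]
Ea_hat = fit_activation_energy(temps, d)
```

With noise-free synthetic data the fit returns the assumed Ea exactly; with real drying data the scatter about the Arrhenius line sets the uncertainty.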
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
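The bootstrap bias-correction idea the abstract relies on, subtracting the average bias seen across resamples, can be sketched generically. This is the general technique, not Raykov's covariance-structure setting; the data and the deliberately biased estimator are invented for illustration.

```python
import random
import statistics

def bootstrap_bias_corrected(data, estimator, n_boot=2000, seed=0):
    """Bias-corrected estimate: theta_bc = 2*theta_hat - mean(theta*),
    where theta* are estimates on bootstrap resamples of the data."""
    rng = random.Random(seed)
    theta_hat = estimator(data)
    boots = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        boots.append(estimator(sample))
    return 2 * theta_hat - statistics.mean(boots)

# Deliberately biased estimator: plug-in variance (divides by n, not n-1)
def biased_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

data = [1.0, 2.0, 4.0, 7.0, 11.0, 3.0, 5.0, 2.0]
corrected = bootstrap_bias_corrected(data, biased_var)
```

The correction pushes the downward-biased plug-in variance upward, in the direction of the unbiased estimate, which is exactly the mechanism applied to noncentrality parameters in the paper.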
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
Maximum-likelihood estimation of the hyperbolic parameters from grouped observations
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1988-01-01
a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...
International Nuclear Information System (INIS)
Volkman, Y.
1980-07-01
The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method.
Application of isotopic information for estimating parameters in Philip infiltration model
Directory of Open Access Journals (Sweden)
Tao Wang
2016-10-01
Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model using an isotopic mass balance approach for estimating parameters in the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine both the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic compositions and uniform initial soil water content conditions. Experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in a manner that matched measured values well. The results of parameter estimation of approach (a) were better than those of approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotopes had a significant effect on parameter estimation using isotopic information.
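Philip's two-term model writes cumulative infiltration as I(t) = S·sqrt(t) + K·t, with sorptivity S and a transmission term K. Fitting both from hydrologic data alone, the paper's approach (b), is a linear least-squares problem; the sketch below solves the 2x2 normal equations on synthetic data with invented parameter values.

```python
import math

def fit_philip(times, infil):
    """Least squares for I(t) = S*sqrt(t) + K*t via the 2x2 normal
    equations with basis functions x1 = sqrt(t), x2 = t."""
    a11 = sum(times)                    # sum of sqrt(t)^2 = sum of t
    a12 = sum(t ** 1.5 for t in times)  # sum of sqrt(t)*t
    a22 = sum(t ** 2 for t in times)
    b1 = sum(math.sqrt(t) * i for t, i in zip(times, infil))
    b2 = sum(t * i for t, i in zip(times, infil))
    det = a11 * a22 - a12 * a12
    S = (b1 * a22 - b2 * a12) / det
    K = (a11 * b2 - a12 * b1) / det
    return S, K

# Synthetic observations from assumed values (cm/min^0.5 and cm/min)
S_true, K_true = 0.8, 0.05
times = [1, 4, 9, 16, 25, 36]  # minutes
infil = [S_true * math.sqrt(t) + K_true * t for t in times]
S_hat, K_hat = fit_philip(times, infil)
```

With noise-free data the fit is exact; the paper's point is that on real data the two terms are hard to separate, and isotopic information helps pin down the transmission term independently.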
Estimating 3D Object Parameters from 2D Grey-Level Images
Houkes, Z.
2000-01-01
This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts