Multi-fidelity stochastic collocation method for computation of statistical moments
Energy Technology Data Exchange (ETDEWEB)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu [Department of Mathematics, University of Iowa, Iowa City, IA 52242 (United States); Linebarger, Erin M., E-mail: aerinline@sci.utah.edu [Department of Mathematics, University of Utah, Salt Lake City, UT 84112 (United States); Xiu, Dongbin, E-mail: xiu.16@osu.edu [Department of Mathematics, The Ohio State University, Columbus, OH 43210 (United States)
2017-07-15
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Tuyen, Nguyen Viet; Hai, Tran Thi; Hieu, Ho Khac
2018-03-01
The thermodynamic and mechanical properties of III-V zinc-blende AlP and InP semiconductors and their alloys have been studied in detail with the statistical moment method, taking into account the anharmonicity effects of lattice vibrations. The nearest-neighbor distance, thermal expansion coefficient, bulk moduli, and specific heats at constant volume and constant pressure of zinc-blende AlP, InP and AlyIn1-yP alloys are calculated as functions of temperature. The statistical moment method calculations are performed using the many-body Stillinger-Weber potential. The concentration dependences of the thermodynamic quantities of zinc-blende AlyIn1-yP crystals have also been discussed and compared with experimental results. Our results are in reasonable agreement with earlier density functional theory calculations and can provide useful qualitative information for future experiments. The moment method can then be developed further for studying the atomistic structure and thermodynamic properties of nanoscale materials as well.
Directory of Open Access Journals (Sweden)
Ramon F. Alvarez-Estrada
2012-02-01
We consider non-equilibrium open statistical systems, subject to potentials and to external "heat baths" (hb) at thermal equilibrium at temperature T (either with ab initio dissipation or without it). Boltzmann's classical equilibrium distributions generate, as Gaussian weight functions in momenta, orthogonal polynomials in momenta (the position-independent Hermite polynomials Hn's). The moments of non-equilibrium classical distributions, implied by the Hn's, fulfill a hierarchy: for long times, the lowest moment dominates the evolution towards thermal equilibrium, either with dissipation or without it (but under a certain approximation). We revisit that hierarchy, whose solution depends on operator continued fractions. We review our generalization of that moment method to classical closed many-particle interacting systems with neither a hb nor ab initio dissipation: with initial states describing thermal equilibrium at T at large distances but non-equilibrium at finite distances, the moment method yields, approximately, irreversible thermalization of the whole system at T for long times. Generalizations to non-equilibrium quantum interacting systems meet additional difficulties. Three of them are: (i) equilibrium distributions (represented through Wigner functions) are neither Gaussian in momenta nor known in closed form; (ii) they may depend on dissipation; and (iii) the orthogonal polynomials in momenta generated by them depend also on positions. We generalize the moment method, dealing with (i), (ii) and (iii), to some non-equilibrium one-particle quantum interacting systems. Open problems are discussed briefly.
Additivity of statistical moments in the exponentially modified Gaussian model of chromatography
International Nuclear Information System (INIS)
Howerton, Samuel B.; Lee Chomin; McGuffin, Victoria L.
2002-01-01
A homologous series of saturated fatty acids ranging from C10 to C22 was separated by reversed-phase capillary liquid chromatography. The resultant zone profiles were found to be fit best by an exponentially modified Gaussian (EMG) function. To compare the EMG function and statistical moments for the analysis of the experimental zone profiles, a series of simulated profiles was generated by using fixed values for retention time and different values for the symmetrical (σ) and asymmetrical (τ) contributions to the variance. The simulated profiles were modified with respect to the integration limits, the number of points, and the signal-to-noise ratio. After modification, each profile was analyzed by using statistical moments and an iteratively fit EMG equation. These data indicate that the statistical moment method is much more susceptible to error when the degree of asymmetry is large, when the integration limits are inappropriately chosen, when the number of points is small, and when the signal-to-noise ratio is small. The experimental zone profiles were then analyzed by using the statistical moment and EMG methods. Although care was taken to minimize the sources of error discussed above, significant differences were found between the two methods. The differences in the second moment suggest that the symmetrical and asymmetrical contributions to broadening in the experimental zone profiles are not independent. As a consequence, the second moment is not equal to the sum of σ² and τ², as is commonly assumed. This observation has important implications for the elucidation of thermodynamic and kinetic information from chromatographic zone profiles.
Stochastic Generalized Method of Moments
Yin, Guosheng; Ma, Yanyuan; Liang, Faming; Yuan, Ying
2011-01-01
The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.
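The record above describes the GMM objective built from moment conditions. As a minimal illustration (not the stochastic GMM sampler proposed in the paper), the sketch below estimates the shape and scale of a Gamma distribution from two moment conditions by minimizing the quadratic form g'Wg with an identity weighting matrix; the synthetic data, starting values and optimizer choice are illustrative assumptions.

```python
# Minimal two-moment GMM sketch: estimate the shape k and scale s of a Gamma
# distribution by minimizing g(theta)' W g(theta) with identity weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=5000)   # synthetic data

def moment_conditions(theta, x):
    k, s = theta
    # E[X] = k*s and E[X^2] = k*(k+1)*s^2 for a Gamma(k, s) variable
    g1 = np.mean(x) - k * s
    g2 = np.mean(x**2) - k * (k + 1) * s**2
    return np.array([g1, g2])

def gmm_objective(theta, x):
    g = moment_conditions(theta, x)
    return g @ g                                  # identity weighting matrix

res = minimize(gmm_objective, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("estimated (shape, scale):", res.x)
```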
Statistical moments of the Strehl ratio
Yaitskova, Natalia; Esselborn, Michael; Gladysz, Szymon
2012-07-01
Knowledge of the statistical characteristics of the Strehl ratio is essential for the performance assessment of existing and future adaptive optics systems. For a full assessment, not only the mean value of the Strehl ratio but also the higher statistical moments are important. Variance is related to the stability of an image, and skewness reflects the chance of having, in a set of short-exposure images, more or fewer images with quality exceeding the mean. Skewness is a central parameter in the domain of lucky imaging. We present a rigorous theory for the calculation of the mean value, the variance and the skewness of the Strehl ratio. In our approach we represent the residual wavefront as being formed by independent cells. The level of the adaptive optics correction defines the number of cells and the variance of the cells, which are the two main parameters of our theory. The deliverables are the values of the three moments as functions of the correction level. We make no further assumptions except for the statistical independence of the cells.
Higher moments method for generalized Pareto distribution in flood frequency analysis
Zhou, C. R.; Chen, Y. F.; Huang, Q.; Gu, S. H.
2017-08-01
The generalized Pareto distribution (GPD) has proven to be an ideal distribution for fitting peak-over-threshold series in flood frequency analysis. Several moments-based estimators are applied to estimate the parameters of the GPD. Higher linear moments (LH moments) and higher probability weighted moments (HPWM) are linear combinations of probability weighted moments (PWM). In this study, the relationship between them is explored. A series of statistical experiments and a case study are used to compare their performances. The results show that if the same PWM are used in the LH moments and HPWM methods, the parameters estimated by these two methods are unbiased. In particular, when the same PWM are used, the PWM method (or the HPWM method when the order equals 0) gives results identical to those of the linear moments (L-moments) method in parameter estimation. This effect is also significant when r ≥ 1 and the same-order PWM are used in the HPWM and LH moments methods.
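Since the abstract turns on the relationship between probability weighted moments (PWM) and L-moments, the following sketch shows the standard unbiased sample PWM estimator and the usual PWM-to-L-moment conversion (Hosking's formulas). The synthetic Gumbel sample is an assumption for illustration only.

```python
# Minimal sketch: unbiased sample probability weighted moments b_r and the
# L-moments derived from them.
import numpy as np
from math import comb

def sample_pwm(x, r):
    """Unbiased estimator of beta_r from the ordered sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    w = np.array([comb(j, r) for j in range(n)]) / comb(n - 1, r)
    return np.mean(w * x)

def l_moments(x):
    b0, b1, b2, b3 = (sample_pwm(x, r) for r in range(4))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2      # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(1)
peaks = rng.gumbel(loc=100.0, scale=20.0, size=60)   # synthetic flood peaks
print(l_moments(peaks))
```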
Score Function of Distribution and Revival of the Moment Method
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2016-01-01
Roč. 45, č. 4 (2016), s. 1118-1136 ISSN 0361-0926 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: characteristics of distributions * data characteristics * general moment method * Huber moment estimator * parametric methods * score function Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016
Regional frequency analysis of extreme rainfalls using partial L moments method
Zakaria, Zahrahtul Amani; Shabri, Ani
2013-07-01
An approach to regional frequency analysis based on L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the methods of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed to determine the best-fit distribution. Comparison between the three approaches showed that the GLO and GEV distributions were identified as suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments outperforms the L and LH moments methods for the estimation of large-return-period events.
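A common follow-on step in such regional analyses is fitting the GEV distribution from L-moments. The sketch below uses Hosking's well-known approximation for the GEV shape parameter; it is a generic illustration, not the PL-moments estimator developed in the paper, and the numeric L-moment values are invented for the example.

```python
# Minimal sketch, assuming Hosking's approximation for fitting the GEV
# distribution from the first three L-moments (lambda1, lambda2, tau3).
# Hydrological parameterization: F(x) = exp(-[1 - k*(x - xi)/alpha]**(1/k)).
import numpy as np
from scipy.special import gamma as Gamma

def gev_from_lmoments(l1, l2, t3):
    c = 2.0 / (3.0 + t3) - np.log(2.0) / np.log(3.0)
    k = 7.8590 * c + 2.9554 * c**2                            # shape
    alpha = l2 * k / ((1.0 - 2.0**(-k)) * Gamma(1.0 + k))     # scale
    xi = l1 - alpha * (1.0 - Gamma(1.0 + k)) / k              # location
    return xi, alpha, k

def gev_quantile(F, xi, alpha, k):
    """Quantile for non-exceedance probability F (e.g. 0.99 for the 100-year event)."""
    return xi + alpha / k * (1.0 - (-np.log(F))**k)

xi, alpha, k = gev_from_lmoments(l1=120.0, l2=25.0, t3=0.18)  # illustrative values
print(gev_quantile(0.99, xi, alpha, k))
```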
Level set method for image segmentation based on moment competition
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
Regional analysis of annual maximum rainfall using TL-moments method
Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd
2011-06-01
Information related to distributions of rainfall amounts is of great importance for the design of water-related structures. One of the concerns of hydrologists and engineers is the probability distribution for modeling of regional data. In this study, an approach to regional frequency analysis using L-moments is revisited. Subsequently, an alternative regional frequency analysis using the TL-moments method is employed. The results from both methods were then compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor, Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. The TL-moment ratio diagram and Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. The GLO and GEV distributions were identified as the most suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the method of TL-moments was more efficient for lower-quantile estimation than the method of L-moments.
International Nuclear Information System (INIS)
Muhtadan
2009-01-01
The purpose of this research is to perform feature extraction on weld defects in digital images of radiographic film using the geometric invariant moment and statistical texture methods. The extracted feature values can be used for classification and pattern recognition in the automatic computer interpretation of weld defects in digital radiographic images. The weld defect types used in this research are longitudinal crack, transversal crack, distributed porosity, clustered porosity, wormhole, and no defect. The research methodology consists of developing a program to read the digital image, cropping the image to localize the weld position, and then applying the geometric invariant moment and statistical texture formulas to obtain feature values. The resulting feature values were tested under RST (rotation, scale, transformation) treatment; the moment values that are most invariant are ϕ3, ϕ4 and ϕ5 from the geometric invariant moment method. The feature values from statistical texture, namely average intensity, average contrast, smoothness, 3rd moment, uniformity, and entropy, are used as feature extraction values. (author)
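As a rough illustration of the two feature groups named above, the sketch below computes the Hu-type invariant moments ϕ3 to ϕ5 and the first-order statistical texture descriptors (average intensity, average contrast, smoothness, third moment, uniformity, entropy) for a grey-level image array. The exact formulas used in the study are not given in the abstract, so these are textbook definitions; `img` is an assumed cropped weld-region array with values in 0-255.

```python
# Minimal sketch of Hu-style geometric moment invariants and first-order
# statistical texture measures computed from the grey-level histogram.
import numpy as np

def central_moment(img, p, q, xbar, ybar):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return np.sum(((x - xbar) ** p) * ((y - ybar) ** q) * img)

def hu_invariants(img):
    img = img.astype(float)
    m00 = img.sum()
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):                      # normalized central moments
        return central_moment(img, p, q, xbar, ybar) / m00 ** (1 + (p + q) / 2)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi5 = ((n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return phi3, phi4, phi5

def texture_features(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    z = np.arange(256, dtype=float)
    m = np.sum(z * p)                                   # average intensity
    mu2 = np.sum((z - m) ** 2 * p)
    contrast = np.sqrt(mu2)                             # average contrast
    smoothness = 1 - 1 / (1 + mu2 / 255.0 ** 2)         # normalized variance
    mu3 = np.sum(((z - m) / 255.0) ** 3 * p)            # third moment
    uniformity = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return m, contrast, smoothness, mu3, uniformity, entropy
```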
Higher-Order Moment Characterisation of Rogue Wave Statistics in Supercontinuum Generation
DEFF Research Database (Denmark)
Sørensen, Simon Toft; Bang, Ole; Wetzel, Benjamin
2012-01-01
The noise characteristics of supercontinuum generation are characterized using higher-order statistical moments. Measures of skew and kurtosis, and the coefficient of variation, allow quantitative identification of spectral regions dominated by rogue-wave-like behaviour.
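A minimal sketch of this kind of higher-order moment analysis is given below: per-wavelength skewness, excess kurtosis and coefficient of variation computed over an ensemble of single-shot spectra. The synthetic gamma-distributed ensemble is a placeholder, not measured supercontinuum data.

```python
# Per-wavelength higher-order moments from an ensemble of spectra with
# shape (n_shots, n_wavelengths).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
spectra = rng.gamma(shape=1.5, scale=1.0, size=(1000, 512))  # placeholder data

mean = spectra.mean(axis=0)
cv = spectra.std(axis=0) / mean                 # coefficient of variation
sk = skew(spectra, axis=0)                      # third standardized moment
ku = kurtosis(spectra, axis=0)                  # excess kurtosis

# Long-tailed (rogue-wave-like) spectral regions show up as simultaneously
# large skew and kurtosis.
print(np.argmax(sk), np.argmax(ku))
```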
International Nuclear Information System (INIS)
Huh, Jae Sung; Kwak, Byung Man
2011-01-01
Robust optimization and reliability-based design optimization are methodologies employed to take the uncertainties of a system into account at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; furthermore, the results of the sensitivity analysis, which is needed to determine the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity because no additional function evaluations are needed once the failure probability or statistical moments have been calculated.
Rapid objective measurement of gamma camera resolution using statistical moments.
Hander, T A; Lancaster, J L; Kopp, D T; Lasher, J C; Blumhardt, R; Fox, P T
1997-02-01
An easy and rapid method for the measurement of the intrinsic spatial resolution of a gamma camera was developed. The measurement is based on the first and second statistical moments of regions of interest (ROIs) applied to bar phantom images. This leads to an estimate of the modulation transfer function (MTF) and the full-width-at-half-maximum (FWHM) of a line spread function (LSF). Bar phantom images were acquired using four large field-of-view (LFOV) gamma cameras (Scintronix, Picker, Searle, Siemens). The following factors important for routine measurements of gamma camera resolution with this method were tested: ROI placement and shape, phantom orientation, spatial sampling, and procedural consistency. A 0.2% coefficient of variation (CV) between repeat measurements of MTF was observed for a circular ROI. CVs of less than 2% were observed for measured MTF values for bar orientations ranging from -10 degrees to +10 degrees with respect to the x and y axes of the camera acquisition matrix. A 256 x 256 matrix (1.6 mm pixel spacing) was judged sufficient for routine measurements, giving an estimate of the FWHM to within 0.1 mm of manufacturer-specified values (3% difference). Under simulated clinical conditions, the variation in measurements attributable to procedural effects yielded a CV of less than 2% in newer generation cameras. The moments method for determining MTF correlated well with a peak-valley method, with an average difference of 0.03 across the range of spatial frequencies tested (0.11-0.17 line pairs/mm, corresponding to 4.5-3.0 mm bars). When compared with the NEMA method for measuring intrinsic spatial resolution, the moments method was found to be within 4% of the expected FWHM.
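The core moments-to-resolution step can be sketched as follows, assuming an approximately Gaussian line spread function so that FWHM = 2*sqrt(2 ln 2)*sigma and MTF(f) = exp(-2*pi^2*sigma^2*f^2); the sampled LSF and the 1.6 mm spacing are illustrative, not the study's data.

```python
# Centroid and second central moment of a sampled line spread function (LSF),
# with FWHM and MTF estimates under a Gaussian-LSF assumption.
import numpy as np

def lsf_moments(x_mm, counts):
    p = counts / counts.sum()
    mu = np.sum(x_mm * p)                   # first moment (centroid)
    var = np.sum((x_mm - mu) ** 2 * p)      # second central moment
    sigma = np.sqrt(var)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
    return mu, sigma, fwhm

def gaussian_mtf(freq_lp_per_mm, sigma_mm):
    return np.exp(-2.0 * (np.pi * sigma_mm * freq_lp_per_mm) ** 2)

x = np.arange(-10, 10.1, 1.6)                  # 1.6 mm pixel spacing
counts = np.exp(-x**2 / (2 * 1.5**2))          # synthetic LSF, sigma = 1.5 mm
mu, sigma, fwhm = lsf_moments(x, counts)
print(fwhm, gaussian_mtf(0.15, sigma))         # ~3.5 mm, MTF at 0.15 lp/mm
```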
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Moment methods for nonlinear maps
International Nuclear Information System (INIS)
Pusch, G.D.; Atomic Energy of Canada Ltd., Chalk River, ON
1993-01-01
It is shown that Differential Algebra (DA) may be used to push moments of distributions through a map, at a computational cost per moment comparable to pushing a single particle. The algorithm is independent of order, and of whether or not the map is symplectic. Starting from the known result that moment-vectors transform linearly - like a tensor - even under a nonlinear map, I suggest that the form of the moment transformation rule indicates that the moment-vectors are elements of the dual of the DA vector space. I propose several methods of manipulating moments and constructing invariants using DA. I close with speculations on how DA might be used to "close the circle" to solve the inverse moment problem, yielding an entirely DA-and-moment-based space-charge code. (Author)
Moment methods and Lanczos methods
International Nuclear Information System (INIS)
Whitehead, R.R.
1980-01-01
In contrast to many of the speakers at this conference I am less interested in average properties of nuclei than in detailed spectroscopy. I will try to show, however, that the two are very closely connected and that shell-model calculations may be used to give a great deal of information not normally associated with the shell-model. It has been demonstrated clearly to us that the level spacing fluctuations in nuclear spectra convey very little physical information. This is true when the fluctuations are averaged over the entire spectrum but not if one's interest is in the lowest few states, whose spacings are relatively large. If one wishes to calculate a ground state (say) accurately, that is with an error much smaller than the excitation energy of the first excited state, very high moments, μ_n with n ≈ 200, are needed. As I shall show, we use such moments as a matter of course, albeit without actually calculating them; in fact I will try to show that, if at all possible, the actual calculation of moments is to be avoided like the plague. At the heart of the new shell-model methods embodied in the Glasgow shell-model program and one or two similar ones is the so-called Lanczos method and this, it turns out, has many deep and subtle connections with the mathematical theory of moments. It is these connections that I will explore here.
Directory of Open Access Journals (Sweden)
Gökhan Gökdere
2014-05-01
In this paper, closed-form expressions for the moments of the truncated Pareto order statistics are obtained by using the conditional distribution. We also derive some results for the moments which will be useful for moment computations based on ordered data.
Information content in B→VV decays and the angular moments method
International Nuclear Information System (INIS)
Dighe, A.; Sen, S.
1998-10-01
The time-dependent angular distributions of decays of neutral B mesons into two vector mesons contain information about the lifetimes, mass differences, strong and weak phases, form factors, and CP violating quantities. A statistical analysis of the information content is performed by giving the "information" a quantitative meaning. It is shown that, for some parameters of interest, the information content in time and angular measurements combined may be orders of magnitude greater than the information from time measurements alone, and hence the angular measurements are highly recommended. The method of angular moments is compared with the (maximum) likelihood method and is found to work almost as well in the region of interest for the one-angle distribution. For the complete three-angle distribution, an estimate of the possible statistical errors expected on the observables of interest is obtained. It indicates that the three-angle distribution, unraveled by the method of angular moments, would be able to nail down many quantities of interest and will help in pointing unambiguously to new physics. (author)
Higher order statistical moment application for solar PV potential analysis
Basri, Mohd Juhari Mat; Abdullah, Samizee; Azrulhisham, Engku Ahmad; Harun, Khairulezuan
2016-10-01
Solar photovoltaic energy could serve as an alternative to fossil fuels, which are being depleted and pose a global warming problem. However, this renewable energy source is variable and intermittent and cannot be relied upon directly. Knowledge of the energy potential is therefore very important for any site where a solar photovoltaic power generation system is to be built. Here, the application of a higher-order statistical moment model is analyzed using data collected from a 5 MW grid-connected photovoltaic system. Because of the dynamic changes in the skewness and kurtosis of the AC power and solar irradiance distributions of the solar farm, the Pearson system, in which the probability distribution is selected by matching its theoretical moments with the empirical moments of the data, is suitable for this purpose. Building on the Pearson system available in MATLAB, software has been developed to help in data processing for distribution fitting and potential analysis for future projection of the amount of AC power and solar irradiance available.
Method of moments in electromagnetics
Gibson, Walton C
2007-01-01
Responding to the need for a clear, up-to-date introduction to the field, The Method of Moments in Electromagnetics explores surface integral equations in electromagnetics and presents their numerical solution using the method of moments (MOM) technique. It provides the numerical implementation aspects at a nuts-and-bolts level while discussing integral equations and electromagnetic theory at a higher level. The author covers a range of topics in this area, from the initial underpinnings of the MOM to its current applications. He first reviews the frequency-domain electromagnetic theory and …
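To make the MOM idea concrete, here is a minimal electrostatic example in the spirit of the classic textbook problems (not code from the book): the charge distribution on a thin straight wire held at 1 V, using pulse basis functions and point matching. The wire dimensions and segment count are arbitrary illustrative choices.

```python
# Method-of-moments sketch: pulse basis, point matching, thin wire at 1 V.
import numpy as np

eps0 = 8.854e-12
L, a, N = 1.0, 1e-3, 40                 # wire length (m), radius (m), segments
dz = L / N
z = (np.arange(N) + 0.5) * dz           # segment centres (match points)

Z = np.empty((N, N))
for m in range(N):
    for n in range(N):
        if m == n:
            # self term: integral of 1/sqrt(z'^2 + a^2) over the segment
            Z[m, n] = 2.0 * np.log((dz / 2 + np.sqrt((dz / 2) ** 2 + a ** 2)) / a)
        else:
            Z[m, n] = dz / abs(z[m] - z[n])
Z /= 4.0 * np.pi * eps0

V = np.ones(N)                          # 1 V at every match point
q = np.linalg.solve(Z, V)               # line charge density per segment (C/m)
print("total charge:", np.sum(q) * dz, "C")
```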
Collective vector method for calculation of E1 moments in atomic transition arrays
International Nuclear Information System (INIS)
Bloom, S.D.; Goldberg, A.
1985-10-01
The CV (collective vector) method for calculating E1 moments for a transition array is described and applied in two cases, herein denoted Z26A and Z26B, pertaining to two different configurations of iron VI. The basic idea of the method is to create a CV from each of the parent ("initial state") state-vectors of the transition array by application of the E1 operator. The moments of each of these CVs, referred to the parent energy, are then the rigorous moments for that parent, requiring no state decomposition of the manifold of daughter state-vectors. Since, in cases of practical interest, the daughter manifold can be orders of magnitude larger in size than the parent manifold, this makes possible the calculation of many moments higher than the second in situations hitherto unattainable via standard methods. The combination of the moments of all the parents, with proper statistical weighting, then yields the transition array moments, from which the transition strength distribution can be derived by various procedures. We describe two of these procedures: (1) the well-known GC (Gram-Charlier) expansion in terms of Hermite polynomials, and (2) the Lanczos algorithm or Stieltjes imaging method, also called herein the delta expansion. Application is made in the cases of Z26A (50 lines) and Z26B (5523 lines), and the relative merits and shortcomings of the two procedures are discussed. 10 refs., 15 figs., 2 tabs
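A small sketch of the Gram-Charlier reconstruction step mentioned in the abstract: given the mean, variance, skewness and excess kurtosis of a strength distribution, approximate its density with Hermite-polynomial corrections to a Gaussian. This is the generic GC A-series with invented input moments, not the paper's Z26A/Z26B data.

```python
# Gram-Charlier A-series density from the first four moments.
import numpy as np

def gram_charlier_pdf(x, mu, sigma, skew, ex_kurt):
    z = (x - mu) / sigma
    he3 = z**3 - 3 * z                    # probabilists' Hermite polynomials
    he4 = z**4 - 6 * z**2 + 3
    gauss = np.exp(-z**2 / 2) / (np.sqrt(2 * np.pi) * sigma)
    return gauss * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

x = np.linspace(-5, 5, 201)
pdf = gram_charlier_pdf(x, mu=0.0, sigma=1.0, skew=0.4, ex_kurt=0.8)
print(pdf.max())
```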
Moments method in the theory of accelerators
International Nuclear Information System (INIS)
Perel'shtejn, Eh.A.
1984-01-01
The moments method is widely used for the solution of various physical and computational problems in the theory of accelerators, magnetic optics and the dynamics of high-current beams. Techniques using moments of the second order (mean-square characteristics of charged particle beams) are shown to be the most developed. The moments method is suitable, and sometimes even the only technique applicable, for the solution of computational problems on the optimization of accelerating structures, beam transport channels, matching and other systems taking the space charge of the beam into account.
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate its application to theoretical medical research. The model was constructed by combining the statistical moment principle with the properties of the normal distribution probability density function, and was then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction under three data analytical methods, and using chromatographic fingerprint analysis of extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu decoction extract. The established model consists of the following main parameters: (1) the total quantum statistical moment similarity ST, the area of overlap between the two normal distribution probability density curves obtained by converting the two TQSM parameters; (2) the total variability DT, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities integrated up to the intersection of the curves; (3) the total variable probability 1-Ss, the standard normal distribution probability within the interval DT; (4) the total variable probability (1-beta)alpha; and (5) the stable confidence probability beta(1-alpha), the probability of correctly making positive and negative conclusions under confidence coefficient alpha. With the model, the TQSMSS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction under three data analytical methods were found to be in the range 0.3852-0.9875, illustrating their different pharmacokinetic behaviors; and the TQSMSS similarities (ST) of the chromatographic fingerprints of extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu decoction extract were in the range 0.6842-0.9992, reflecting their different constituents.
Expert judgement combination using moment methods
International Nuclear Information System (INIS)
Wisse, Bram; Bedford, Tim; Quigley, John
2008-01-01
Moment methods have been employed in decision analysis, partly to avoid the computational burden that decision models involving continuous probability distributions can suffer from. In the Bayes linear (BL) methodology prior judgements about uncertain quantities are specified using expectation (rather than probability) as the fundamental notion. BL provides a strong foundation for moment methods, rooted in work of De Finetti and Goldstein. The main objective of this paper is to discuss in what way expert assessments of moments can be combined, in a non-Bayesian way, to construct a prior assessment. We show that the linear pool can be justified in an analogous but technically different way to linear pools for probability assessments, and that this linear pool has a very convenient property: a linear pool of experts' assessments of moments is coherent if each of the experts has given coherent assessments. To determine the weights of the linear pool we give a method of performance based weighting analogous to Cooke's classical model and explore its properties. Finally, we compare its performance with the classical model on data gathered in applications of the classical model
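The convenient pooling property described above can be illustrated in a few lines: pooling densities linearly pools their raw moments linearly, so a weighted combination of expert means and variances gives the pooled mean and variance directly. The weights and expert assessments below are invented for the example.

```python
# Linear pool of expert moment assessments: raw moments combine linearly, so
# the pooled variance includes both the experts' variances and the spread
# between their means.
import numpy as np

def pool_moments(weights, means, variances):
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    pooled_mean = np.sum(w * mu)
    pooled_m2 = np.sum(w * (var + mu**2))          # pooled raw second moment
    pooled_var = pooled_m2 - pooled_mean**2
    return pooled_mean, pooled_var

# three experts with performance-based weights (illustrative numbers)
print(pool_moments([0.5, 0.3, 0.2], [10.0, 12.0, 9.0], [4.0, 9.0, 1.0]))
```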
Directory of Open Access Journals (Sweden)
V. E. Merzlikin
2015-01-01
The article deals with the search for optimal estimation of the parameters of the homogenization process for dairy products. A theoretical basis is provided for the relationship between the relaxation time of the fat globules and the attenuation coefficient of ultrasonic oscillations in dairy products. It is suggested that the mass distribution of fat globules can be calculated from the measured acoustic properties of milk, and studies were carried out to test this hypothesis. A morphological analysis procedure was carried out for milk samples homogenized at different pressures, as well as for non-homogenized samples. As a result, histograms of the fat globule distribution as a function of homogenization pressure were obtained. Acoustic studies were also performed to obtain the frequency characteristics of the loss modulus as a function of homogenization pressure. For further research, the dependences were approximated using the statistical moments of the distributions. The parameters for the approximation of the fat globule distribution and of the loss modulus versus homogenization pressure were obtained. The hypothesis of a relationship between the approximation parameters of the fat globule distribution and of the loss modulus as a function of homogenization pressure was tested. Correlation analysis showed a clear dependence of the first and second statistical moments of the distributions on the homogenization pressure. The obtained dependence is consistent with the physical meaning of the first two moments of a statistical distribution. Correlation analysis was also carried out between the statistical moments of the fat globule distribution and the moments of the loss modulus. It is concluded that ultrasonic testing of the degree of homogenization and of the mass distribution of the fat globules of milk products is possible.
Moment methods with effective nuclear Hamiltonians; calculations of radial moments
International Nuclear Information System (INIS)
Belehrad, R.H.
1981-02-01
A truncated orthogonal polynomial expansion is used to evaluate the expectation value of the radial moments of the one-body density of nuclei. The expansion contains the configuration moments ⟨R^(k)⟩, ⟨R^(k)H⟩, and ⟨R^(k)H²⟩, where R^(k) is the operator for the k-th power of the radial coordinate r, and H is the effective nuclear Hamiltonian, which is the sum of the relative kinetic energy operator and the Brueckner G matrix. Configuration moments are calculated using trace reduction formulae where the proton and neutron orbitals are treated separately in order to find expectation values of good total isospin. The operator averages are taken over many-body shell model states in the harmonic oscillator basis where all particles are active and single-particle orbitals through six major shells are included. The radial moment expectation values are calculated for the nuclei ¹⁶O, ⁴⁰Ca, and ⁵⁸Ni; one of the configuration moments is usually found to be the largest term in the expansion, giving a large model space dependence to the results. For each of the three nuclei, a model space is found which gives the desired rms radius, and the other five lowest moments then compare favorably with other theoretical predictions. Finally, we use a method of Gordon (5) to employ the lowest six radial moment expectation values in the calculation of elastic electron scattering from these nuclei. For low to moderate momentum transfer, the results compare favorably with the experimental data.
Extension of moment projection method to the fragmentation process
Energy Technology Data Exchange (ETDEWEB)
Wu, Shaohua [Department of Mechanical Engineering, National University of Singapore, Engineering Block EA, Engineering Drive 1, 117576 (Singapore); Yapp, Edward K.Y.; Akroyd, Jethro; Mosbach, Sebastian [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge, CB2 3RA (United Kingdom); Xu, Rong [School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459 (Singapore); Yang, Wenming [Department of Mechanical Engineering, National University of Singapore, Engineering Block EA, Engineering Drive 1, 117576 (Singapore); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge, CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459 (Singapore)
2017-04-15
The method of moments is a simple but efficient method of solving the population balance equation which describes particle dynamics. Recently, the moment projection method (MPM) was proposed and validated for particle inception, coagulation, growth and, more importantly, shrinkage; here the method is extended to include the fragmentation process. The performance of MPM is tested for 13 different test cases for different fragmentation kernels, fragment distribution functions and initial conditions. Comparisons are made with the quadrature method of moments (QMOM), hybrid method of moments (HMOM) and a high-precision stochastic solution calculated using the established direct simulation algorithm (DSA) and advantages of MPM are drawn.
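For readers unfamiliar with moment methods for population balances, the sketch below shows the simplest closed case, pure coagulation with a constant kernel, where the moment equations close exactly and the number density M0 has an analytic solution. It is a generic illustration, not the MPM algorithm of the paper, and the kernel and initial moments are arbitrary.

```python
# Method-of-moments solution of the population balance equation for pure
# coagulation with a constant kernel beta:
#   dM0/dt = -0.5*beta*M0**2,   dM1/dt = 0.
import numpy as np
from scipy.integrate import solve_ivp

beta, M0_init, M1_init = 1.0e-9, 1.0e10, 1.0   # kernel, number and mass densities

def rhs(t, M):
    M0, M1 = M
    return [-0.5 * beta * M0**2, 0.0]

t_end = 1000.0
sol = solve_ivp(rhs, (0.0, t_end), [M0_init, M1_init])

analytic = M0_init / (1.0 + 0.5 * beta * M0_init * t_end)
print(sol.y[0, -1], analytic)                  # numerical vs analytic M0(t_end)
```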
The Method of Moments in electromagnetics
Gibson, Walton C
2014-01-01
Now Covers Dielectric Materials in Practical Electromagnetic Devices. The Method of Moments in Electromagnetics, Second Edition explains the solution of electromagnetic integral equations via the method of moments (MOM). While the first edition exclusively focused on integral equations for conducting problems, this edition extends the integral equation framework to treat objects having conducting as well as dielectric parts. New to the second edition: expanded treatment of coupled surface integral equations for conducting and composite conducting/dielectric objects, including objects having multiple …
International Nuclear Information System (INIS)
Deco, Gustavo; Marti, Daniel
2007-01-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability
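The classical (unimodal Gaussian) moment closure that the paper generalizes can be sketched for a one-dimensional bistable system dx = (x - x³)dt + σ dW: truncating at second order gives coupled ODEs for the mean and variance. The drift, noise level and initial moments below are illustrative assumptions.

```python
# Gaussian moment closure for dx = f(x) dt + sigma dW with f(x) = x - x**3:
#   dmu/dt = f(mu) + 0.5 * f''(mu) * v,    dv/dt = 2 * f'(mu) * v + sigma**2.
import numpy as np
from scipy.integrate import solve_ivp

sigma = 0.3

def rhs(t, y):
    mu, v = y
    f   = mu - mu**3
    fp  = 1.0 - 3.0 * mu**2
    fpp = -6.0 * mu
    return [f + 0.5 * fpp * v, 2.0 * fp * v + sigma**2]

sol = solve_ivp(rhs, (0.0, 20.0), [0.5, 0.01], max_step=0.01)
print(sol.y[0, -1], sol.y[1, -1])   # approximate stationary mean and variance
```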
Comparing the index-flood and multiple-regression methods using L-moments
Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.
In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of the factor analysis showed that the length of the main waterway, the compactness coefficient, the mean annual precipitation, and the mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's method of clustering. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust distribution among the five candidate distributions for all the proposed sub-regions of the study area; in general, it was concluded that the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimates for various flood magnitudes at different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin.
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1 considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
International Nuclear Information System (INIS)
Ouarda, T.B.M.J.; Charron, C.; Chebana, F.
2016-01-01
Highlights: • Review of criteria used to select probability distributions to model wind speed data. • Classical and L-moment ratio diagrams are applied to wind speed data. • The diagrams allow selection of the best distribution to model each wind speed sample. • The goodness-of-fit statistics are more consistent with the L-moment ratio diagram. - Abstract: This paper reviews the different criteria used in the field of wind energy to compare the goodness of fit of candidate probability density functions (pdfs) to wind speed records, and discusses their advantages and disadvantages. The moment ratio and L-moment ratio diagram methods are also proposed as alternative methods for the choice of the pdfs. These two methods have the advantage of allowing an easy comparison of the fit of several pdfs for several time series (stations) on a single diagram. Plotting the position of a given wind speed data set in these diagrams is instantaneous and provides more information than a goodness-of-fit criterion, since it provides knowledge about such characteristics as the skewness and kurtosis of the station data set. In this paper, it is proposed to study the applicability of these two methods for the selection of pdfs for wind speed data. Both types of diagrams are used to assess the fit of the pdfs for wind speed series in the United Arab Emirates. The analysis of the moment ratio diagrams reveals that the Kappa, Log-Pearson type III and Generalized Gamma are the distributions that best fit all wind speed series. The Weibull represents the best distribution among those with only one shape parameter. Results obtained with the diagrams are compared with those obtained with goodness-of-fit statistics, and good agreement is observed, especially in the case of the L-moment ratio diagram. It is concluded that these diagrams can represent a simple and efficient approach to be used as a complementary method to goodness-of-fit criteria.
Computing moment to moment BOLD activation for real-time neurofeedback
Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.
2013-01-01
Estimating moment-to-moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment-to-moment activation changes by averaging several acquisitions into a single activation estimate, or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment-to-moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI time series, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
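A rough sketch of the incremental-GLM idea (not the authors' implementation) is given below: fit nuisance regressors (constant plus linear drift) to the acquisitions collected so far, predict the newest point, and return its variance-scaled residual as the feedback statistic. The synthetic ROI time series is a placeholder.

```python
# Variance-scaled residual of the newest acquisition against a GLM fit to the
# preceding time points (constant + linear drift as nuisance regressors).
import numpy as np

def feedback_statistic(roi_signal):
    """roi_signal: 1-D array of ROI means, most recent acquisition last."""
    n = len(roi_signal)
    X = np.column_stack([np.ones(n - 1), np.arange(n - 1)])   # constant + drift
    y = roi_signal[:-1]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    expected = np.array([1.0, n - 1]) @ beta                  # prediction for new point
    sd = resid.std(ddof=X.shape[1])
    return (roi_signal[-1] - expected) / sd                   # z-like statistic

rng = np.random.default_rng(3)
ts = 100.0 + 0.02 * np.arange(80) + rng.normal(0, 0.5, 80)
ts[-1] += 2.0                                                 # simulated activation
print(feedback_statistic(ts))
```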
A multivariate quadrature based moment method for LES based modeling of supersonic combustion
Donde, Pratik; Koo, Heeseok; Raman, Venkat
2012-07-01
The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.
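The common computational kernel of quadrature-based moment methods is recovering quadrature abscissas and weights from a finite set of moments. The sketch below does this for a 3-node rule via the monic orthogonal polynomial obtained from a Hankel system; production codes typically use the more robust product-difference or Wheeler algorithms, and the exponential-distribution moments are just a test case.

```python
# Recover an n-point Gaussian quadrature from the first 2n raw moments.
import numpy as np
from math import factorial

def quadrature_from_moments(m, n):
    """m: raw moments m_0 ... m_{2n-1}; returns n abscissas and weights."""
    H = np.array([[m[i + j] for j in range(n)] for i in range(n)])
    b = -np.array([m[n + i] for i in range(n)])
    c = np.linalg.solve(H, b)                # coefficients of the monic polynomial
    poly = np.concatenate(([1.0], c[::-1]))  # x^n + c_{n-1} x^{n-1} + ... + c_0
    nodes = np.roots(poly).real
    V = np.vander(nodes, n, increasing=True).T   # V[k, i] = nodes[i]**k
    weights = np.linalg.solve(V, m[:n])
    return nodes, weights

# moments of the standard exponential distribution: m_k = k!
moments = np.array([factorial(k) for k in range(6)], dtype=float)
nodes, weights = quadrature_from_moments(moments, 3)
print(nodes, weights)       # should reproduce 3-point Gauss-Laguerre quadrature
```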
An optimal method of moments to measure the charge asymmetry at the Z0
International Nuclear Information System (INIS)
Bruemmer, N.C.
1994-02-01
Parity violation at LEP or SLC can be measured through the charge asymmetry. An optimal method of moments is developed here to measure this asymmetry, as well as similar asymmetries. This method is equivalent to the likelihood fit. It is simpler to use, as it gives analytical formulas for both the asymmetry and its statistical error. These formulas give the dependence of the accuracy on the experimental angular acceptance explicitly. (orig.)
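The simplest (non-optimal) moment estimator for a forward-backward asymmetry can be written down directly: for dN/d cosθ proportional to (3/8)(1 + cos²θ) + A_FB cosθ, the first moment gives ⟨cosθ⟩ = (2/3)A_FB. The sketch below implements this plain first-moment version on a toy accept-reject sample; the paper's optimal method additionally weights each event.

```python
# First-moment estimate of a forward-backward asymmetry with its statistical error.
import numpy as np

def afb_from_moments(cos_theta):
    c = np.asarray(cos_theta, dtype=float)
    afb = 1.5 * c.mean()
    err = 1.5 * c.std(ddof=1) / np.sqrt(len(c))
    return afb, err

rng = np.random.default_rng(4)
# toy event sample generated by accept-reject from the distribution above
A_true = 0.07
c = rng.uniform(-1, 1, 200000)
u = rng.uniform(0, 1, c.size)
f = (3 / 8) * (1 + c**2) + A_true * c
keep = u < f / f.max()
print(afb_from_moments(c[keep]))
```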
Variational-moment method for computing magnetohydrodynamic equilibria
International Nuclear Information System (INIS)
Lao, L.L.
1983-08-01
A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.
Arismendi, Ivan; Johnson, Sherri L.; Dunham, Jason B.
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical moments (skewness and kurtosis) to examine potential changes of empirical distributions at decadal extents. Second, we adapt a statistical procedure combining a non-metric multidimensional scaling technique and higher density region plots to detect potentially anomalous years. We illustrate the use of these approaches by examining long-term stream temperature data from minimally and highly human-influenced streams. In particular, we contrast predictions about thermal regime responses to changing climates and human-related water uses. Using these methods, we effectively diagnose years with unusual thermal variability and patterns in variability through time, as well as spatial variability linked to regional and local factors that influence stream temperature. Our findings highlight the complexity of responses of thermal regimes of streams and reveal their differential vulnerability to climate warming and human-related water uses. The two approaches presented here can be applied with a variety of other continuous phenomena to address historical changes, extreme events, and their associated ecological responses.
Method of moments analysis of the Twin Lake tracer test data
International Nuclear Information System (INIS)
Moltyaner, G.L.; Wills, C.A.
1987-09-01
The two-dimensional transport of radioiodine in the Twin Lake aquifer at CRNL is investigated at the full-aquifer-thickness scale using curve fitting procedures and the method of statistical moments. The observed concentration-time data were analysed using temporal moments and were converted to concentration-distance data sets. The converted data were then analysed using spatial moments and curve fitting procedures. It was concluded that over the 40 m flow path the areal two-dimensional model of mean concentration distribution does not adequately describe the essentially three-dimensional nature of the dispersion process, probably because of the relatively short travel distances compared to the full-aquifer-thickness scale of description of the transport process (∼ 10 m). Much larger travel distances are required before the effect of irregular flow geometry at this scale is smoothed out. Based on the previous analysis of the dispersion process at the macroscopic scale of description of the transport processes, it was concluded that the macroscopic-scale characteristics, to a large extent, possess a universal character and that the macroscopic-scale dispersion model adequately describes the field-scale dispersion process. 20 refs
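The temporal-moment step used in such analyses can be sketched as follows: the zeroth, first and second central moments of a concentration-time breakthrough curve give mass recovery, mean arrival time and spread, from which a mean velocity and a longitudinal dispersion coefficient follow under a pulse-input advection-dispersion assumption. The breakthrough curve and travel distance below are synthetic.

```python
# Temporal moments of a breakthrough curve and derived transport parameters.
import numpy as np

def temporal_moments(t, c, travel_distance):
    m0 = np.trapz(c, t)                           # zeroth moment (area)
    t_mean = np.trapz(t * c, t) / m0              # mean arrival time
    var_t = np.trapz((t - t_mean) ** 2 * c, t) / m0
    v = travel_distance / t_mean                  # mean pore-water velocity
    D = var_t * v ** 3 / (2.0 * travel_distance)  # longitudinal dispersion coeff.
    return m0, t_mean, var_t, v, D

# synthetic breakthrough curve at x = 20 m (illustrative numbers)
t = np.linspace(0.1, 40.0, 400)                   # days
c = np.exp(-(t - 10.0) ** 2 / (2 * 2.0 ** 2))     # roughly Gaussian pulse
print(temporal_moments(t, c, travel_distance=20.0))
```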
Moments, Mixed Methods, and Paradigm Dialogs
Denzin, Norman K.
2010-01-01
I reread the 50-year-old history of the qualitative inquiry that calls for triangulation and mixed methods. I briefly visit the disputes within the mixed methods community, asking how we got to where we are today, the period of mixed-multiple-methods advocacy, and Teddlie and Tashakkori's third methodological moment. (Contains 10 notes.)
A chronicle of permutation statistical methods 1920–2000, and beyond
Berry, Kenneth J; Mielke Jr , Paul W
2014-01-01
The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, ana...
A moment projection method for population balance dynamics with a shrinkage term
Energy Technology Data Exchange (ETDEWEB)
Wu, Shaohua [Department of Mechanical Engineering, National University of Singapore, Engineering Block EA, Engineering Drive 1, 117576 (Singapore); Yapp, Edward K.Y.; Akroyd, Jethro; Mosbach, Sebastian [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge, CB2 3RA (United Kingdom); Xu, Rong [School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459 (Singapore); Yang, Wenming [Department of Mechanical Engineering, National University of Singapore, Engineering Block EA, Engineering Drive 1, 117576 (Singapore); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge, CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459 (Singapore)
2017-02-01
A new method of moments for solving the population balance equation is developed and presented. The moment projection method (MPM) is numerically simple and easy to implement and attempts to address the challenge of particle shrinkage due to processes such as oxidation, evaporation or dissolution. It directly solves the moment transport equation for the moments and tracks the number of the smallest particles using the algorithm by Blumstein and Wheeler (1973) . The performance of the new method is measured against the method of moments (MOM) and the hybrid method of moments (HMOM). The results suggest that MPM performs much better than MOM and HMOM where shrinkage is dominant. The new method predicts mean quantities which are almost as accurate as a high-precision stochastic method calculated using the established direct simulation algorithm (DSA).
A Comparison of Moments-Based Logo Recognition Methods
Directory of Open Access Journals (Sweden)
Zili Zhang
2014-01-01
Full Text Available Logo recognition is an important issue in document imaging, advertisement, and intelligent transportation. Although there are many approaches to studying logos in these fields, logo recognition is an essential subprocess, and the choice of descriptor is vital. The performance of moments as powerful descriptors had not previously been discussed in terms of logo recognition, so it is unclear which moments are more appropriate for recognizing which kinds of logos. In this paper we examine the relations between moments and logos subjected to different transforms, i.e., which moments are suited to which transforms. The open datasets employed are from the University of Maryland. The moment-based comparisons are carried out for logos with noise, rotation, scaling, and combined rotation and scaling.
Energy Technology Data Exchange (ETDEWEB)
Densmore, J.D., E-mail: jeffery.densmore@unnpp.gov [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Park, H., E-mail: hkpark@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Wollaber, A.B., E-mail: wollaber@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Rauenzahn, R.M., E-mail: rick@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Knoll, D.A., E-mail: nol@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States)
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
International Nuclear Information System (INIS)
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-01-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm
International Nuclear Information System (INIS)
Tsuchida, Takahiro; Kimura, Koji
2016-01-01
An equivalent non-Gaussian excitation method is proposed to obtain the response moments up to the 4th order of dynamic systems under non-Gaussian random excitation. The non-Gaussian excitation is prescribed by the probability density and the power spectrum, and is described by an Ito stochastic differential equation. Generally, moment equations for the response, which are derived from the governing equations for the excitation and the system, are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation, even though the system is linear. In the equivalent non-Gaussian excitation method, the diffusion coefficient is replaced with an equivalent diffusion coefficient approximately in order to obtain a closed set of moment equations. The square of the equivalent diffusion coefficient is expressed by a quadratic polynomial. In numerical examples, a linear system subjected to non-Gaussian excitations with bimodal and Rayleigh distributions is analyzed by using the present method. The results show that the method yields the variance, skewness and kurtosis of the response with high accuracy for non-Gaussian excitations with widely different probability densities and bandwidths. The statistical moments of the equivalent non-Gaussian excitation are also investigated to describe the features of the method. (paper)
Alexiadis, Alessio; Vanni, Marco; Gardin, Pascal
2004-08-01
The method of moments (MOM) is a powerful tool for solving population balance equations. Nevertheless, it cannot be used in every circumstance. Sometimes, in fact, it is not possible to write the governing equations in closed form: higher moments, for instance, can appear in the evolution equations of the lower ones. This obstacle has often been resolved by prescribing some functional form for the particle size distribution. Another example is the occurrence of fractional moments, usually connected with the presence of fractal aggregates. For this case we propose a procedure that does not need any assumption on the form of the distribution but is based on the moment generating function (that is, the Laplace transform of the distribution). An important result of probability theory is that the kth derivative of the moment generating function represents the kth moment of the original distribution. This result concerns integer moments but, taking into account the Weyl fractional derivative, can be extended to fractional orders. Approximating the fractional derivative makes it possible to express the fractional moments in terms of the integer ones and so to use the method of moments in the usual way.
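The integer-order property that the procedure builds on can be checked directly: the kth derivative of the moment generating function at zero is the kth moment. The sketch below verifies this numerically for an exponential distribution, whose MGF is known in closed form; the fractional (Weyl-derivative) extension described in the abstract is not reproduced here.

```python
import numpy as np

def mgf(t):
    return 1.0 / (1.0 - t)          # MGF of an exponential(1) variable, E[X^k] = k!

h = 1e-3                            # finite-difference step (assumed small enough)
m1 = (mgf(h) - mgf(-h)) / (2.0 * h)                    # ~1 = 1!
m2 = (mgf(h) - 2.0 * mgf(0.0) + mgf(-h)) / h ** 2      # ~2 = 2!
print(m1, m2)
```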
Theory and applications of moment methods in many-fermion systems
International Nuclear Information System (INIS)
Dalton, B.J.; Grimes, S.M.; Vary, J.P.; Williams, S.A.
1980-01-01
This book contains the proceedings of a conference on applications of the moment problem which was held at Ames, Iowa, September 10-13, 1979. It is, generally speaking, a well-printed book consisting of photo-offset reproductions of typed contributions. First of all, there are articles on the general method of moments, such as the ones by French. Secondly, there are articles on how to actually calculate these moments. Considerable progress has been made in recent years on this computational endeavor, which is what makes the moment method particularly useful and interesting now. The articles by Ginnochio, Bloom and Hausman, and Vary are representative of these techniques. Thirdly, there are articles on what to do with the moments once you obtain them. Articles by Langhoff, Whitehead, and Bessis are representative here. Of particular interest to this reviewer is the fact that all of these methods seem to be mathematically quite closely related to various Pade approximant techniques. Finally, there are articles on the problems from which these moment problems arise. Mainly nuclear physics examples are described in this book, although some mention is made of other topics. De Facio et al. discuss application to the Ising model.
Sun, Dan; Garmory, Andrew; Page, Gary J.
2017-02-01
For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
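The full 13-moment, two-dimensional construction is beyond a short snippet, but its univariate building block, recovering a two-node quadrature (weights and abscissas) from the first four raw moments, can be sketched in closed form as below; the variable names and the normal-distribution check are illustrative assumptions.

```python
import numpy as np

def two_node_quadrature(m0, m1, m2, m3):
    """Two-node quadrature (weight, abscissa) pairs matching raw moments m0..m3."""
    mu = m1 / m0                                          # mean abscissa
    var = m2 / m0 - mu ** 2                               # central second moment
    c3 = m3 / m0 - 3.0 * mu * (m2 / m0) + 2.0 * mu ** 3   # central third moment
    q = c3 / var ** 1.5                                   # skewness-like parameter
    s = np.sqrt(1.0 + 0.25 * q ** 2)
    a1 = np.sqrt(var) * (0.5 * q - s)                     # offsets of the two abscissas
    a2 = np.sqrt(var) * (0.5 * q + s)
    w1 = m0 * a2 / (a2 - a1)
    w2 = -m0 * a1 / (a2 - a1)
    return (w1, mu + a1), (w2, mu + a2)

# check: standard-normal moments (1, 0, 1, 0) give abscissas -1, +1 with weights 1/2
print(two_node_quadrature(1.0, 0.0, 1.0, 0.0))
```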
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
The verification of the Taylor-expansion moment method in solving aerosol breakage
Directory of Open Access Journals (Sweden)
Yu Ming-Zhou
2012-01-01
Full Text Available The combination of the method of moments, characterizing the particle population balance, with computational fluid dynamics has been an emerging research issue in studies of aerosol science and multiphase flow. The difficulty of solving the moment equations arises mainly from the closure of some fractal moment variables which appear in the transformation from the non-linear integro-differential population balance equation to the moment equations. Within the Taylor-expansion moment method, the breakage-dominated Taylor-expansion moment equation is first derived here for the case in which the symmetric fragmentation mechanism is involved. Owing to its high efficiency and high precision, this proposed moment model is expected to become an important tool for solving population balance equations.
Polynomial factor models : non-iterative estimation via method-of-moments
Schuberth, Florian; Büchner, Rebecca; Schermelleh-Engel, Karin; Dijkstra, Theo K.
2017-01-01
We introduce a non-iterative method-of-moments estimator for non-linear latent variable (LV) models. Under the assumption of joint normality of all exogenous variables, we use the corrected moments of linear combinations of the observed indicators (proxies) to obtain consistent path coefficient and
Magnetic moments of J^P = 3/2^+ decuplet baryons using the statistical model
Energy Technology Data Exchange (ETDEWEB)
Kaur, Amanpreet; Upadhyay, Alka [Thapar University, School of Physics and Materials Science, Patiala (India)
2016-04-15
A suitable wave function for the baryon decuplet is framed with the inclusion of the sea containing quark-gluon Fock states. The relevant operator formalism is applied to calculate the magnetic moments of the J^P = 3/2^+ baryon decuplet. The statistical model assumes the decomposition of the baryonic state into various quark-gluon Fock states and is used in combination with the detailed balance principle to find the relative probabilities of these Fock states in flavor, spin and color space. The number of gluons is restricted to at most three, with the possibility of emission of quark-antiquark pairs. We study the importance of strangeness in the sea (scalar, vector and tensor) and its contribution to the magnetic moments. Our approach confirms the dominance of the scalar-tensor sea over the vector sea. Various modifications of the model are used to check the validity of the statistical approach. The results are compared with the available theoretical data. Good consistency with the experimental data has been achieved for Δ^{++}, Δ^{+} and Ω^{-}. (orig.)
Fradin, Cécile
2013-01-01
Magnetotactic bacteria possess organelles called magnetosomes that confer a magnetic moment on the cells, resulting in their partial alignment with external magnetic fields. Here we show that analysis of the trajectories of cells exposed to an external magnetic field can be used to measure the average magnetic dipole moment of a cell population in at least five different ways. We apply this analysis to movies of Magnetospirillum magneticum AMB-1 cells, and compare the values of the magnetic moment obtained in this way to that obtained by direct measurements of magnetosome dimension from electron micrographs. We find that methods relying on the viscous relaxation of the cell orientation give results comparable to that obtained by magnetosome measurements, whereas methods relying on statistical mechanics assumptions give systematically lower values of the magnetic moment. Since the observed distribution of magnetic moments in the population is not sufficient to explain this discrepancy, our results suggest that non-thermal random noise is present in the system, implying that a magnetotactic bacterial population should not be considered as similar to a paramagnetic material. PMID:24349185
Bending Moment Calculations for Piles Based on the Finite Element Method
Directory of Open Access Journals (Sweden)
Yu-xin Jie
2013-01-01
Full Text Available Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, a pile, and a sheet pile wall were made to investigate methods of computing the bending moment. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil; therefore, higher-order elements are not always necessary in the computation. The number of grids across the pile section is important for the bending moment calculated from stress and less significant for that calculated from displacement. Although computing the bending moment from displacement requires fewer grids across the pile section, it sometimes produces fluctuating results. For displacement calculations, a pile row can be suitably represented by an equivalent sheet pile wall, whereas the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; therefore, a comparison of results is necessary when performing the analysis.
Introductory statistical inference
Mukhopadhyay, Nitis
2014-01-01
This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist
Stochastic development regression using method of moments
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite dimensional landmark manifolds.
Inference in partially identified models with many moment inequalities using Lasso
DEFF Research Database (Denmark)
Bugni, Federico A.; Caner, Mehmet; Kock, Anders Bredahl
This paper considers the problem of inference in a partially identified moment (in)equality model with possibly many moment inequalities. Our contribution is to propose a novel two-step inference method based on the combination of two ideas. On the one hand, our test statistic and critical...
Solution of the agglomerate Brownian coagulation using Taylor-expansion moment method.
Yu, Mingzhou; Lin, Jianzhong
2009-08-01
The newly proposed Taylor-expansion moment method (TEMOM) is extended to solve agglomerate coagulation in the free-molecule regime and in the continuum regime, respectively. The moment equations with respect to fractal dimension are derived based on a third-order Taylor-series expansion technique. The method is validated by comparing its results with published data in each limiting size regime. By comparison with the analytical method, the sectional method (SM) and the quadrature method of moments (QMOM), this new approach is shown to be the most efficient without losing much accuracy. In each limiting size regime, the effect of the fractal dimension on the decay of particle number and on particle size growth is investigated, and in the continuum regime in particular the relation between the mean diameters of size distributions with different fractal dimensions is proposed for the first time. The agglomerate size distribution is found to be sensitive to the fractal dimension and to the initial geometric mean deviation before the self-preserving size distribution is achieved in the continuum regime.
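The flavour of such moment formulations can be seen in the simplest case, coagulation with a constant kernel, where the first three moment equations close exactly; the sketch below integrates them and checks M0 against its analytic decay. The kernel value and initial moments are arbitrary illustrative numbers, and the fractal-dimension-dependent TEMOM equations of the paper are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta0 = 1.0e-3                        # constant coagulation kernel (assumed value)
M_init = [1.0e4, 1.0e2, 2.0]          # initial moments M0, M1, M2 (illustrative)

def rhs(t, M):
    M0, M1, M2 = M
    return [-0.5 * beta0 * M0 ** 2,   # particle number decays by coagulation
            0.0,                      # total mass (first moment) is conserved
            beta0 * M1 ** 2]          # second moment grows as particles merge

t_end = 50.0
sol = solve_ivp(rhs, (0.0, t_end), M_init, rtol=1e-8, atol=1e-10)
M0_exact = M_init[0] / (1.0 + 0.5 * beta0 * M_init[0] * t_end)   # analytic solution
print(sol.y[0, -1], M0_exact)         # the two values agree closely
```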
Moments Method for Shell-Model Level Density
International Nuclear Information System (INIS)
Zelevinsky, V; Horoi, M; Sen'kov, R A
2016-01-01
The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density agrees practically exactly with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to results different from those obtained by mean-field combinatorics. (paper)
Measurement of the second moment in NMR using instationary methods
International Nuclear Information System (INIS)
Fenzke, D.; Rinck, W.; Schneider, H.
1973-01-01
Different instationary methods for the determination of the second moment in NMR are tested. Measurements were carried out with a noncommercial solid-state pulse spectrometer with a fast analog transient memory (acquisition time >0.5 μs), with data processing by a "DIDAC 800" spectrum accumulator and a "NICOLET-1080" computer. Three methods for processing the signals are discussed: numerical differentiation, the least-squares method and an application of the sampling theorem. We determined the second moment by observing the "Free Induction Decay", "Solid Echo", "Magic Echo" and a special group of many pulse pairs. The "Magic Echo" with data processing by the least-squares method gave the best result, because only with this method can the influence of the apparatus dead time be completely eliminated. (author)
A moment-convergence method for stochastic analysis of biochemical reaction networks.
Zhang, Jiajun; Nie, Qing; Zhou, Tianshou
2016-05-21
Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution approximate to zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
A moment-convergence method for stochastic analysis of biochemical reaction networks
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jiajun [School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou 510275 (China); Nie, Qing [Department of Mathematics, University of California at Irvine, Irvine, California 92697 (United States); Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn [School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou 510275 (China); Guangdong Province Key Laboratory of Computational Science and School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou 510275 (China)
2016-05-21
Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution approximate to zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
Moment Restriction-based Econometric Methods: An Overview
N. Kunitomo (Naoto); M.J. McAleer (Michael); Y. Nishiyama (Yoshihiko)
2010-01-01
Moment restriction-based econometric modelling is a broad class which includes the parametric, semiparametric and nonparametric approaches. Moments and conditional moments themselves are nonparametric quantities. If a model is specified in part up to some finite dimensional parameters,
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Moments of inertia for solids of revolution and variational methods
International Nuclear Information System (INIS)
Diaz, Rodolfo A; Herrera, William J; Martinez, R
2006-01-01
We present some formulae for the moments of inertia of homogeneous solids of revolution in terms of the functions that generate the solids. The development of these expressions exploits the cylindrical symmetry of these objects and avoids the explicit use of multiple integration, providing an easy and pedagogical approach. The explicit use of the functions that generate the solid gives the possibility of writing the moment of inertia as a functional, which in turn allows us to utilize the calculus of variations to obtain new insight into some properties of this fundamental quantity. In particular, minimization of moments of inertia under certain restrictions is possible by using variational methods
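One standard result of this kind, which may or may not match the paper's exact notation, is that for a solid generated by rotating y = f(x) about the x axis the mass is M = pi*rho*int f(x)^2 dx and the moment of inertia about the symmetry axis is I = (pi*rho/2)*int f(x)^4 dx. The hedged sketch below checks this against the familiar (2/5) M R^2 of a solid sphere; the density and radius are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

rho, R = 1.0, 2.0                              # density and sphere radius (assumed)
f = lambda x: np.sqrt(R ** 2 - x ** 2)         # generating function of a solid sphere

M = np.pi * rho * quad(lambda x: f(x) ** 2, -R, R)[0]          # mass
I = 0.5 * np.pi * rho * quad(lambda x: f(x) ** 4, -R, R)[0]    # moment about the x axis
print(I / (M * R ** 2))                        # -> 0.4, i.e. (2/5) M R^2
```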
An improved method for calculating force distributions in moment-stiff timber connections
DEFF Research Database (Denmark)
Ormarsson, Sigurdur; Blond, Mette
2012-01-01
An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action...
The maximum entropy method of moments and Bayesian probability theory
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
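As background, a minimal sketch of the classical maximum entropy method of moments (without the Bayesian extension developed in the paper) is given below: the Lagrange multipliers are found by minimizing the convex dual on a truncated grid. The support, grid resolution and target moments are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 4001)            # truncated support grid (assumption)
dx = x[1] - x[0]
powers = np.vstack([x, x ** 2])             # moment functions x and x^2
target = np.array([0.0, 1.0])               # prescribed moments E[x], E[x^2]

def dual(lam):
    # convex dual of the maximum-entropy problem; its minimizer matches the moments
    z = np.sum(np.exp(-(lam @ powers))) * dx
    return np.log(z) + lam @ target

lam = minimize(dual, x0=np.array([0.0, 0.1]), method="Nelder-Mead").x
p = np.exp(-(lam @ powers))
p /= np.sum(p) * dx                         # normalized maximum-entropy density
print(np.sum(x * p) * dx, np.sum(x ** 2 * p) * dx)   # ~0 and ~1; a Gaussian emerges
```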
Directory of Open Access Journals (Sweden)
Abul Kalam Azad
2014-05-01
Full Text Available The best Weibull distribution methods for the assessment of wind energy potential at different altitudes in desired locations are statistically diagnosed in this study. Seven different methods, namely the graphical method (GM), method of moments (MOM), standard deviation method (STDM), maximum likelihood method (MLM), power density method (PDM), modified maximum likelihood method (MMLM) and equivalent energy method (EEM), were used to estimate the Weibull parameters, and six statistical tools, namely relative percentage of error, root mean square error (RMSE), mean percentage of error, mean absolute percentage of error, chi-square error and analysis of variance, were used to precisely rank the methods. The statistical fits of the measured and calculated wind speed data are assessed to justify the performance of the methods. The capacity factor and total energy generated by a small model wind turbine are calculated by numerical integration using trapezoidal sums and Simpson's rule. The results show that MOM and MLM are the most efficient methods for determining the values of k and c to fit Weibull distribution curves.
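Of the seven estimators compared, the method of moments admits a particularly compact sketch: match the sample coefficient of variation to the one implied by the shape parameter k, then recover the scale c from the mean. The snippet below does this on synthetic wind speeds; the generating parameters and sample size are assumptions, and the other six methods are not reproduced.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

rng = np.random.default_rng(7)
v = 8.0 * rng.weibull(2.2, size=8760)        # synthetic hourly wind speeds (c=8, k=2.2)
mean, std = v.mean(), v.std(ddof=1)

def cv_gap(k):
    # coefficient of variation implied by shape k, minus the sample value
    return np.sqrt(gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1.0) - std / mean

k = brentq(cv_gap, 0.2, 20.0)                # Weibull shape parameter
c = mean / gamma(1 + 1 / k)                  # Weibull scale parameter
print(k, c)                                  # close to the generating values 2.2 and 8
```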
An automatic formulation of inverse free second moment method for algebraic systems
International Nuclear Information System (INIS)
Shakshuki, Elhadi; Ponnambalam, Kumaraswamy
2002-01-01
In systems with probabilistic uncertainties, an estimation of reliability requires at least the first two moments. In this paper, we focus on probabilistic analysis of linear systems. The important tasks in this analysis are the formulation and the automation of the moment equations. The main objective of the formulation is to provide at least the means and variances of the output variables with at least second-order accuracy. The objective of the automation is to reduce the storage and computational complexities required for implementing (automating) those formulations. This paper extends recent work on calculating the first two moments of a set of random algebraic linear equations by developing a stamping procedure to facilitate its automation. The new method has the additional advantage of being able to solve problems when the mean matrix of a system is singular. Lastly, from the points of view of storage complexity, computational complexity and accuracy, a comparison between the new method and another recently developed first-order second-moment method is made with numerical examples.
Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows
Norris, Andrew T.; Hsu, Andrew T.
1994-01-01
In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2- air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.
An improved method in the measurement of the moment of inertia
Energy Technology Data Exchange (ETDEWEB)
Peng, Jun, E-mail: pengjun@cimm.com.cn [Key Laboratory for Metrology, Changcheng Institute of Metrology and Measurement (CIMM) Beijing (China); Zhang, Li, E-mail: zhangli@cimm.com.cn [Key Laboratory for Metrology, Changcheng Institute of Metrology and Measurement (CIMM) Beijing (China); School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing (China)
2016-06-28
The moment of inertia calibration system is developed by the Changcheng Institute of Metrology and Measurement (CIMM). A rotation table-torsional spring system is used to generate angular vibration, and a laser vibrometer is used to measure the rotational angle and the vibration period. The object to be measured is mounted on the top of the rotation table. The air-bearing system is elaborately manufactured, which reduces the friction of the angular movement and increases measurement accuracy. A heterodyne laser interferometer working with a column diffraction grating is used in the measurement of the angular movement. Experiments show that the method of measuring the oscillating angle and period introduced in this paper is stable and that the time resolution is high. When the air damping effect cannot be neglected in the moment of inertia measurement, the periodic waveform area ratio method is introduced to calculate the damping ratio and obtain the moment of inertia.
Assessment of drug disposition in the perfused rat brain by statistical moment analysis
International Nuclear Information System (INIS)
Sakane, T.; Nakatsu, M.; Yamamoto, A.; Hashida, M.; Sezaki, H.; Yamashita, S.; Nadai, T.
1991-01-01
Drug disposition in the brain was investigated by statistical moment analysis using an improved in situ brain perfusion technique. The right cerebral hemisphere of the rat was perfused in situ. The drug and inulin were injected into the right internal carotid artery as a rapid bolus and the venous outflow curve at the posterior facial vein was obtained. The infusion rate was adjusted to minimize the flow of perfusion fluid into the left hemisphere. The obtained disposition parameters were characteristic and considered to reflect the physicochemical properties of each drug. Antipyrine showed a small degree of initial uptake; therefore, its apparent distribution volume (Vi) and apparent intrinsic clearance (CLint,i) were small. Diazepam showed large degrees of both influx and efflux and, thus, a large Vi. Water showed parameters intermediate between those of antipyrine and those of diazepam. Imipramine, desipramine, and propranolol showed a large CLint,i compared with those of the other drugs. The extraction ratio of propranolol significantly decreased with increasing concentrations of unlabeled propranolol in the perfusion fluid. These findings may be explained partly by the tissue binding of these drugs. In conclusion, the present method is useful for studying drug disposition in the brain.
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
Energy Technology Data Exchange (ETDEWEB)
Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5
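The univariate core of this construction, turning a set of raw moments into Gaussian quadrature nodes and weights through the Hankel matrix of moments, can be sketched as follows; the multivariate, sparse (Smolyak-type) part of SAMBA is not reproduced, and the uniform-density check is just an illustration.

```python
import numpy as np

def gauss_from_moments(mu):
    """Gauss quadrature nodes/weights from raw moments mu_0..mu_{2n} (length 2n+1)."""
    n = (len(mu) - 1) // 2
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])  # Hankel matrix
    R = np.linalg.cholesky(H).T              # needs a strictly positive-definite moment matrix
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    alpha[0] = R[0, 1] / R[0, 0]
    for j in range(1, n):
        alpha[j] = R[j, j + 1] / R[j, j] - R[j - 1, j] / R[j - 1, j - 1]
        beta[j - 1] = R[j, j] / R[j - 1, j - 1]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = mu[0] * vecs[0, :] ** 2        # squared first components of eigenvectors
    return nodes, weights

# check with the uniform density on [-1, 1]: moments 2/(k+1) for even k, 0 for odd k
print(gauss_from_moments([2.0, 0.0, 2 / 3, 0.0, 2 / 5]))   # nodes +-1/sqrt(3), weights 1, 1
```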
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
International Nuclear Information System (INIS)
Ahlfeld, R.; Belkouchi, B.; Montomoli, F.
2016-01-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10
Structured methods and striking moments: using question sequences in "living" ways.
Lowe, Roger
2005-03-01
This article draws together two seemingly incompatible practices in social constructionist therapies: the use of structured questioning methods (associated with solution-focused and narrative therapies) and the poetic elaboration of "striking moments" (associated with conversational therapies). To what extent can we value and use both styles of practice? Beginning with practitioners' concerns about the use of structured question sequences, I explore possibilities for resituating these methods in different conceptual and metaphorical frames, selectively drawing on ideas from the philosophy of striking moments. The aim is not to reduce one therapeutic style to another, but to encourage the teaching and practice of structured methods in more creative, improvisational, and "living" ways.
The method of moments and nested Hilbert spaces in quantum mechanics
International Nuclear Information System (INIS)
Adeniyi Bangudu, E.
1980-08-01
It is shown how the structures of a nested Hilbert space H_I, associated with a given Hilbert space H_O, may be used to simplify our understanding of the effects of parameters, whose values have to be chosen rather than determined variationally, in the method of moments. The result, as applied to the non-relativistic quartic oscillator and helium atom, is to associate the parameters with sequences of Hilbert spaces, while the error of the method of moments relative to the variational method corresponds to a nesting operator of the nested Hilbert space. Difficulties hindering similar interpretations in terms of rigged Hilbert space structures are highlighted. (author)
Algorithm Indicating Moment of P-Wave Arrival Based on Second-Moment Characteristic
Directory of Open Access Journals (Sweden)
Jakub Sokolowski
2016-01-01
Full Text Available The moment of P-wave arrival can provide us with much information about the nature of a seismic event. Without adequate knowledge of the onset moment, many properties of the event related to location, polarization of the P-wave, and so forth are impossible to obtain. In order to save the time required to pick the P-wave arrival moment manually, one can benefit from automatic picking algorithms. In this paper two algorithms based on a method of finding a regime switch point are applied to seismic event data in order to find the P-wave arrival time. The algorithms operate on signals transformed via a basic transform rather than on raw recordings. They involve partitioning the transformed signal into two separate series and fitting a logarithm function to the first subset (which corresponds to pure noise and is therefore considered stationary), an exponent or power function to the second subset (which corresponds to the nonstationary seismic event), and finding the point at which these functions best fit the statistic in terms of the sum of squared errors. The effectiveness of the algorithms is tested on seismic data acquired from the O/ZG "Rudna" underground copper ore mine, with moments of P-wave arrival initially picked by the broadly known STA/LTA algorithm and then corrected by seismic station specialists. The results of the proposed algorithms are compared to those obtained using STA/LTA.
Theoretical study of fiber Raman amplifiers by broadband pumps through moment method
International Nuclear Information System (INIS)
Teimorpour, M. H.; Pourmoghadas, A.; Rahimi, L.; Farman, F.; Bahrampour, A.
2007-01-01
The governing equations of a Raman optical fiber amplifier with broadband pumps in the steady state form an uncountable system of nonlinear ordinary differential equations. In this paper, the moment method is used to reduce this uncountable system to a system of a finite number of nonlinear ordinary differential equations, which is then solved numerically. It is shown that the moment method is a precise and fast technique for the analysis of optical fiber Raman amplifiers with broadband pumps.
New method of measuring electric dipole moments in storage rings
Farley, FJM; Jungmann, K; Miller, JP; Morse, WM; Orlov, YF; Roberts, BL; Semertzidis, YK; Silenko, A; Stephenson, EJ
2004-01-01
A new highly sensitive method of looking for electric dipole moments of charged particles in storage rings is described. The major systematic errors inherent in the method are addressed and ways to minimize them are suggested. It seems possible to measure the muon EDM to levels that test speculative
Trunk muscle activation. The effects of torso flexion, moment direction, and moment magnitude.
Lavender, S; Trafimow, J; Andersson, G B; Mayer, R S; Chen, I H
1994-04-01
This study was performed to quantify the electromyographic trunk muscle activities in response to variations in moment magnitude and direction while in forward-flexed postures. Recordings were made over eight trunk muscles in 19 subjects who maintained forward-flexed postures of 30 degrees and 60 degrees. In each of the two flexed postures, external moments of 20 Nm and 40 Nm were applied via a chest harness. The moment directions were varied in seven 30 degrees increments to a subject's right side, such that the direction of the applied load ranged from the upper body's anterior midsagittal plane (0 degree) to the posterior midsagittal plane (180 degrees). Statistical analyses yielded significant moment magnitude by moment-direction interaction effects for the EMG output from six of the eight muscles. Trunk flexion by moment-direction interactions were observed in the responses from three muscles. In general, the primary muscle supporting the torso and the applied load was the contralateral (left) erector spinae. The level of electromyographic activity in the anterior muscles was quite low, even with the posterior moment directions.
Semiclassical moment of inertia shell-structure within the phase-space approach
International Nuclear Information System (INIS)
Gorpinchenko, D V; Magner, A G; Bartel, J; Blocki, J P
2015-01-01
The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the cranking model and the Strutinsky shell-correction method by using the non-perturbative periodic-orbit theory in the phase-space variables. This moment of inertia for adiabatic (statistical-equilibrium) rotations can be approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. A semiclassical phase-space trace formula allows us to express the shell components of the moment of inertia quite accurately in terms of the free-energy shell corrections for integrable and partially chaotic Fermi systems, which is in good agreement with the corresponding quantum calculations. (paper)
Puzzle of magnetic moments of Ni clusters revisited using quantum Monte Carlo method.
Lee, Hung-Wen; Chang, Chun-Ming; Hsing, Cheng-Rong
2017-02-28
The puzzle of the magnetic moments of small nickel clusters arises from the discrepancy between values predicted using density functional theory (DFT) and experimental measurements. Traditional DFT approaches underestimate the magnetic moments of nickel clusters. Two fundamental problems are associated with this puzzle, namely, calculating the exchange-correlation interaction accurately and determining the global minimum structures of the clusters. Theoretically, the two problems can be solved using quantum Monte Carlo (QMC) calculations and the ab initio random structure searching (AIRSS) method correspondingly. Therefore, we combined the fixed-moment AIRSS and QMC methods to investigate the magnetic properties of Ni_n (n = 5-9) clusters. The spin moments of the diffusion Monte Carlo (DMC) ground states are higher than those of the Perdew-Burke-Ernzerhof ground states and, in the case of Ni_8-9, two new ground-state structures have been discovered using the DMC calculations. The predicted results are closer to the experimental findings, unlike the results predicted in previous standard DFT studies.
Method of moments as applied to arbitrarily shaped bounded nonlinear scatterers
Caorsi, Salvatore; Massa, Andrea; Pastorino, Matteo
1994-01-01
In this paper, we explore the possibility of applying the moment method to determine the electromagnetic field distributions inside three-dimensional bounded nonlinear dielectric objects of arbitrary shapes. The moment method has usually been employed to solve linear scattering problems. We start with an integral equation formulation, and derive a nonlinear system of algebraic equations that allows us to obtain an approximate solution for the harmonic vector components of the electric field. Preliminary results of some numerical simulations are reported.
Spectral-Lagrangian methods for collisional models of non-equilibrium statistical states
International Nuclear Information System (INIS)
Gamba, Irene M.; Tharkabhushanam, Sri Harsha
2009-01-01
We propose a new spectral Lagrangian based deterministic solver for the non-linear Boltzmann transport equation (BTE) in d dimensions for variable hard sphere (VHS) collision kernels with conservative or non-conservative binary interactions. The method is based on symmetries of the Fourier transform of the collision integral, where the complexity of its computation is reduced to a separate integral over the unit sphere S^{d-1}. The conservation of moments is enforced by Lagrangian constraints. The resulting scheme, implemented in free space, is very versatile and adjusts in a very simple manner to several cases that involve energy dissipation due to local micro-reversibility (inelastic interactions) or elastic models of slowing-down processes. Our simulations are benchmarked with available exact self-similar solutions, exact moment equations and analytical estimates for the homogeneous Boltzmann equation, both for elastic and inelastic VHS interactions. Benchmarking of the simulations involves the selection of a time self-similar rescaling of the numerical distribution function, which is performed using the continuous spectrum of the equation for Maxwell molecules as studied first in Bobylev et al. [A.V. Bobylev, C. Cercignani, G. Toscani, Proof of an asymptotic property of self-similar solutions of the Boltzmann equation for granular materials, Journal of Statistical Physics 111 (2003) 403-417] and generalized to a wide range of related models in Bobylev et al. [A.V. Bobylev, C. Cercignani, I.M. Gamba, On the self-similar asymptotics for generalized non-linear kinetic Maxwell models, Communications in Mathematical Physics, in press]. The method also produces accurate results in the case of inelastic diffusive Boltzmann equations for hard spheres (inelastic collisions under a thermal bath), where overpopulated non-Gaussian exponential tails have been conjectured in computations by stochastic methods [T.V. Noije, M. Ernst, Velocity distributions in homogeneously
The method of moments and its application to the description of liquid He4
International Nuclear Information System (INIS)
Parlinski, K.
1974-01-01
The method of moments used to calculate time-dependent correlation functions is discussed. To reconstruct an approximate correlation function, a finite number of moments of the given function is needed. Every such approximation is an exact solution of the problem described by some model Hamiltonian. The formulae for any order of the approximation are given. Another way of using the moments is also described, which relies on the expansion of the Fourier transform of the correlation function into a series of Hermite polynomials, the coefficients of which are combinations of the moments. The method of moments was applied to the description of liquid He4 at absolute zero temperature. The calculated moments of the density-density correlation function were applied to the description of the experimentally observed oscillations of the width and average energy of the distribution of neutrons scattered by liquid helium as a function of the wave vector greater than 2 Å⁻¹. Good agreement between the calculated and experimentally observed oscillations was obtained. It was also shown that the dynamic structure factor is highly asymmetrical. Using the calculated moments of the velocity-velocity correlation function, the expansion coefficients of the incoherent double-differential scattering cross-section into a series over the inverse wave vector were found up to the term of third order. The coefficients of this expansion do not depend explicitly on the relative particle occupation fraction of the zero-momentum state, i.e. the condensate. This expansion describes well the experimentally observed distributions of scattered neutrons for the wave vector 14.33 Å⁻¹. The obtained results indicate that the inelastic neutron scattering method at high momentum transfers cannot be used as a straightforward method of measuring the relative occupation number of the zero-momentum state. The methods of elaboration of neutron scattering results at
Moment-based method for computing the two-dimensional discrete Hartley transform
Dong, Zhifang; Wu, Jiasong; Shu, Huazhong
2009-10-01
In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using a kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable for sequences of any length.
A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes
Energy Technology Data Exchange (ETDEWEB)
Pasyanos, M E
2009-11-19
This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional M_o or regional M_w. Conceptually, this method uses an earthquake source model along with an attenuation model and geometrical spreading, which account for the propagation, to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined with sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional M_o to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields a more meaningful quantity, the seismic moment, which can be recast as M_w. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
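The final step alluded to above, recasting a seismic moment as a moment magnitude, is commonly done with the Hanks-Kanamori relation; the following minimal Python sketch uses hypothetical moment values and omits the paper's attenuation and geometrical-spreading corrections, showing only how several phase-specific moment estimates could be converted and averaged.

```python
import numpy as np

def moment_magnitude(m0_newton_meters):
    """Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (np.log10(m0_newton_meters) - 9.1)

# Hypothetical seismic-moment estimates from different phases/frequencies (N*m).
m0_estimates = np.array([3.2e17, 2.8e17, 3.5e17])
mw_individual = moment_magnitude(m0_estimates)

print("individual Mw:", np.round(mw_individual, 2))
print("averaged   Mw:", round(float(np.mean(mw_individual)), 2))
```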
Directory of Open Access Journals (Sweden)
Elias Chaibub Neto
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time spent generating the weight matrices via multinomial sampling.
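As a rough illustration of the multinomial-weighting idea (a NumPy translation, not the authors' R code), the sketch below bootstraps Pearson's correlation entirely through weighted sample moments; the data and the number of replications are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)          # hypothetical correlated pair

B = 5000                                   # number of bootstrap replications
W = rng.multinomial(n, np.full(n, 1.0 / n), size=B)  # (B, n) multinomial weight matrix

# Weighted sample moments for all replications at once via matrix products.
mx  = W @ x / n
my  = W @ y / n
mxx = W @ (x * x) / n
myy = W @ (y * y) / n
mxy = W @ (x * y) / n

cov = mxy - mx * my
r_boot = cov / np.sqrt((mxx - mx**2) * (myy - my**2))

print("bootstrap mean of r:", r_boot.mean())
print("95% percentile interval:", np.percentile(r_boot, [2.5, 97.5]))
```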
Likelihood devices in spatial statistics
Zwet, E.W. van
1999-01-01
One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments
Consistent calculation of the polarization electric dipole moment by the shell-correction method
International Nuclear Information System (INIS)
Denisov, V.Yu.
1992-01-01
Macroscopic calculations of the polarization electric dipole moment which arises in nuclei with an octupole deformation are discussed in detail. This dipole moment is shown to depend on the position of the center of gravity. The conditions of consistency of the radii of the proton and neutron potentials and the radii of the proton and neutron surfaces, respectively, are discussed. These conditions must be incorporated in a shell-correction calculation of this dipole moment. A correct calculation of this moment by the shell-correction method is carried out. Dipole transitions between (on the one hand) levels belonging to an octupole vibrational band and (on the other) the ground state in rare-earth nuclei with a large quadrupole deformation are studied. 19 refs., 3 figs
Schmüdgen, Konrad
2017-01-01
This advanced textbook provides a comprehensive and unified account of the moment problem. It covers the classical one-dimensional theory and its multidimensional generalization, including modern methods and recent developments. In both the one-dimensional and multidimensional cases, the full and truncated moment problems are carefully treated separately. Fundamental concepts, results and methods are developed in detail and accompanied by numerous examples and exercises. Particular attention is given to powerful modern techniques such as real algebraic geometry and Hilbert space operators. A wide range of important aspects are covered, including the Nevanlinna parametrization for indeterminate moment problems, canonical and principal measures for truncated moment problems, the interplay between Positivstellensätze and moment problems on semi-algebraic sets, the fibre theorem, multidimensional determinacy theory, operator-theoretic approaches, and the existence theory and important special topics of multidime...
An evaluation of solutions to moment method of biochemical oxygen ...
African Journals Online (AJOL)
This paper evaluated selected solutions of the moment method with respect to Biochemical Oxygen Demand (BOD) kinetics, with the aim of ascertaining an error-free solution. Domestic-institutional wastewaters were collected bi-weekly for three months from waste-stabilization ponds at Obafemi Awolowo University, Ile-Ife.
Alvarez, Diego A.; Hurtado, Jorge E.; Bedoya-Ruíz, Daniel Alveiro
2012-07-01
Despite technological advances in seismic instrumentation, the assessment of the intensity of an earthquake using an observational scale, as given, for example, by the modified Mercalli intensity scale, is highly useful for practical purposes. In order to link the quantitative values extracted from the acceleration record of an earthquake and other instrumental data, such as peak ground velocity, epicentral distance, and moment magnitude, on the one hand, and the modified Mercalli intensity scale on the other, simple statistical regression has generally been employed. In this paper, we employ three methods of nonlinear regression, namely support vector regression, multilayer perceptrons, and genetic programming, in order to find a functional dependence between the instrumental records and the modified Mercalli intensity scale. The proposed methods predict the intensity of an earthquake while dealing with nonlinearity and the noise inherent to the data. The nonlinear regressions with good estimation results have been performed using the "Did You Feel It?" database of the US Geological Survey and the database of the Center for Engineering Strong Motion Data for the California region.
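A minimal sketch of one of the three regressions mentioned above, support vector regression, is given below; it uses scikit-learn on synthetic stand-in data (the made-up relation between peak ground velocity, distance, magnitude and intensity is purely illustrative and is not the "Did You Feel It?" calibration).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
log_pgv = rng.uniform(-1.0, 2.0, n)        # log10 peak ground velocity (cm/s), synthetic
dist    = rng.uniform(5.0, 200.0, n)       # epicentral distance (km), synthetic
mag     = rng.uniform(3.0, 7.5, n)         # moment magnitude, synthetic
# A made-up "true" relation plus noise, only to exercise the regression.
mmi = 3.0 + 1.5 * log_pgv + 0.3 * mag - 0.005 * dist + rng.normal(0, 0.4, n)

X = np.column_stack([log_pgv, dist, mag])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, mmi)

print("predicted MMI for log10(PGV)=1, 30 km, M6.0:",
      model.predict([[1.0, 30.0, 6.0]])[0])
```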
Validation of engineering methods for predicting hypersonic vehicle controls forces and moments
Maughmer, M.; Straussfogel, D.; Long, L.; Ozoroski, L.
1991-01-01
This work examines the ability of the aerodynamic analysis methods contained in an industry standard conceptual design code, the Aerodynamic Preliminary Analysis System (APAS II), to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds. Predicted control forces and moments generated by various control effectors are compared with previously published wind-tunnel and flight-test data for three vehicles: the North American X-15, a hypersonic research airplane concept, and the Space Shuttle Orbiter. Qualitative summaries of the results are given for each force and moment coefficient and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage.
Application of pedagogy reflective in statistical methods course and practicum statistical methods
Julie, Hongki
2017-08-01
The subjects Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip students of Mathematics Education with descriptive and inferential statistics. The students' understanding of descriptive and inferential statistics is important for students in the Mathematics Education Department, especially those whose final project involves quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from their quantitative data, and to establish relationships between the independent and dependent variables defined in their research. In fact, when students carried out final projects involving quantitative research, it was not rare to find them making mistakes in the steps of drawing conclusions and errors in choosing the hypothesis-testing procedure. As a result, they reached incorrect conclusions, which is a very serious mistake for those doing quantitative research. Several outcomes were gained from the implementation of reflective pedagogy in the teaching-learning process of the Statistical Methods and Statistical Methods Practicum courses, namely: 1. Twenty-two students passed the course and one student did not. 2. The highest grade was an A, achieved by 18 students. 3. According to all students, they could develop their critical stance and build care for one another through the learning process in this course. 4. All students agreed that through the learning process they underwent in the course, they can build care for one another.
International Nuclear Information System (INIS)
Tsuchida, Takahiro; Kimura, Koji
2015-01-01
An equivalent non-Gaussian excitation method is proposed to obtain the moments up to the fourth order of the response of systems under non-Gaussian random excitation. The excitation is prescribed by its probability density and power spectrum. Moment equations for the response can be derived from the stochastic differential equations for the excitation and the system. However, the moment equations are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation. In the proposed method, the diffusion coefficient is replaced approximately with an equivalent diffusion coefficient to obtain a closed set of moment equations. The square of the equivalent diffusion coefficient is expressed by a second-order polynomial. In order to demonstrate the validity of the method, a linear system subjected to non-Gaussian excitation with a generalized Gaussian distribution is analyzed. The results show that the method is applicable to non-Gaussian excitations with widely different kurtosis and bandwidth. (author)
International Nuclear Information System (INIS)
Yu Mingzhou; Lin Jianzhong; Jin Hanhui; Jiang Ying
2011-01-01
The closure of moment equations for nanoparticle coagulation due to Brownian motion in the entire size regime is performed using a newly proposed method of moments. The equations in the free molecular size regime and the continuum plus near-continuum regime are derived separately, in which the fractal moments are approximated by third-order Taylor-expansion series. The moment equations for coagulation in the entire size regime are obtained by the harmonic mean solution and Dahneke's solution. The results produced by the quadrature method of moments (QMOM), Pratsinis's log-normal moment method (PMM), the sectional method (SM), and the newly derived Taylor-expansion moment method (TEMOM) are presented and compared in accuracy and efficiency. The TEMOM method with Dahneke's solution produces the most accurate results with higher efficiency than the other existing moment models in the entire size regime, and thus it is recommended for use in subsequent studies of nanoparticle dynamics due to Brownian motion.
Llope, W. J.; STAR Collaboration
2013-10-01
Specific products of the statistical moments of the multiplicity distributions of identified particles can be directly compared to susceptibility ratios obtained from lattice QCD calculations. They may also diverge for nuclear systems formed close to a possible QCD critical point due to the phenomenon of critical opalescence. Of particular interest are the moments products for net-protons, net-kaons, and net-charge, as these are considered proxies for conserved quantum numbers. The moments products have been measured by the STAR experiment for Au+Au collisions at seven beam energies ranging from 7.7 to 200 GeV. In this presentation, the experimental results are compared to data-based calculations in which the intra-event correlations of the numbers of positive and negative particles are broken by construction. The importance of intra-event correlations to the moments products values for net-protons, net-kaons, and net-charge can thus be evaluated. Work supported by the U.S. Dept of Energy under grant DE-PS02-09ER09.
Directory of Open Access Journals (Sweden)
Cleather Daniel J
2010-11-01
Background: A vast number of biomechanical studies have employed inverse dynamics methods to calculate inter-segmental moments during movement. Although all inverse dynamics methods are rooted in classical mechanics and are thus theoretically the same, there exist a number of distinct computational methods. Recent research has demonstrated a key influence of the dynamics computation of the inverse dynamics method on the calculated moments, despite the theoretical equivalence of the methods. The purpose of this study was therefore to explore the influence of the choice of inverse dynamics method on the calculation of inter-segmental moments. Methods: An inverse dynamics analysis was performed to analyse vertical jumping and weightlifting movements using two distinct methods. The first method was the traditional inverse dynamics approach, characterized in this study as the 3-step method, where inter-segmental moments were calculated in the local coordinate system of each segment, thus requiring multiple coordinate system transformations. The second method (the 1-step method) was the recently proposed approach based on wrench notation that allows all calculations to be performed in the global coordinate system. In order to best compare the effect of the inverse dynamics computation, a number of the key assumptions and methods were harmonized; in particular, unit quaternions were used to parameterize rotation in both methods in order to standardize the kinematics. Results: Mean peak inter-segmental moments calculated by the two methods were found to agree to 2 decimal places in all cases and were not significantly different (p > 0.05). Equally, the normalized dispersions of the two methods were small. Conclusions: In contrast to previously documented research, the difference between the two methods was found to be negligible. This study demonstrates that the 1- and 3-step methods are computationally equivalent and can thus be used interchangeably in
A new method to assess the statistical convergence of monte carlo solutions
International Nuclear Information System (INIS)
Forster, R.A.
1991-01-01
Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig
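The idea of judging convergence from the shape of the empirical history-score PDF can be mimicked with a toy calculation: the sketch below draws heavy-tailed "scores" (a stand-in for a poorly converged tally, not an actual transport simulation) and checks how much of the result is carried by the few largest scores and how the estimated relative error behaves as the number of histories grows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Heavy-tailed "history scores" (shifted Pareto) stand in for a difficult tally.
scores = rng.pareto(2.2, size=100_000) + 1.0

# Shape of the empirical score PDF: how much of the answer do the largest
# few histories carry, and are the high-score bins still sparsely populated?
counts, edges = np.histogram(scores, bins=np.logspace(0, 3, 40))
top_fraction = np.sort(scores)[-100:].sum() / scores.sum()
print("counts in the five highest score bins:", counts[-5:])
print(f"fraction of the result carried by the 100 largest scores: {top_fraction:.3f}")

# The 1/sqrt(N) behaviour of the relative error is only trustworthy once the
# tail of the score PDF is adequately sampled.
for N in (1_000, 10_000, 100_000):
    s = scores[:N]
    rel_err = s.std(ddof=1) / (np.sqrt(N) * s.mean())
    print(f"N={N:>7d}  mean={s.mean():.3f}  estimated relative error={rel_err:.4f}")
```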
International Nuclear Information System (INIS)
Aistov, A.V.; Gavrilenko, V.G.
1996-01-01
The normal incidence of a small-amplitude electromagnetic wave upon a semi-infinite turbulent collisional plasma with an oblique external magnetic field is considered. Within a small-angle-scattering approximation of the radiative transport theory, a system of differential equations is derived for the statistical moments of the angular power spectrum of radiation. The dependences of the spectrum centroid, dispersion, and asymmetry on the depth of penetration are studied numerically. Nonmonotonic behavior of the dispersion is revealed, and an increase in the spectrum width with absorption anisotropy is found within some depth interval. It is shown that, at large depths, the direction of the displacement of the spectrum centroid does not always coincide with the direction of minimum absorption.
THE GROWTH POINTS OF STATISTICAL METHODS
Orlov A. I.
2014-01-01
On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods, we identify and discuss five topical areas in which modern applied statistics is developing, i.e. five "growth points": nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data.
Maughmer, Mark D.; Ozoroski, L.; Ozoroski, T.; Straussfogel, D.
1990-01-01
Many types of hypersonic aircraft configurations are currently being studied for feasibility of future development. Since the control of the hypersonic configurations throughout the speed range has a major impact on acceptable designs, it must be considered in the conceptual design stage. The ability of the aerodynamic analysis methods contained in an industry standard conceptual design system, APAS II, to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds is considered. Predicted control forces and moments generated by various control effectors are compared with previously published wind tunnel and flight test data for three configurations: the North American X-15, the Space Shuttle Orbiter, and a hypersonic research airplane concept. Qualitative summaries of the results are given for each longitudinal force and moment and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage. Results for most lateral/directional control derivatives are acceptable for conceptual design purposes; however, predictions at supersonic Mach numbers for the change in yawing moment due to aileron deflection and the change in rolling moment due to rudder deflection are found to be unacceptable. Including shielding effects in the analysis is shown to have little effect on lift and pitching moment predictions while improving drag predictions.
Approximating distributions from moments
Pawula, R. F.
1987-11-01
A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
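In the same spirit as these Pearson-type approximations (though not Pawula's construction itself), the sketch below matches a symmetric, platykurtic sample to a symmetric beta density on [-L, L] using only its variance and kurtosis; the arcsine-like test data are chosen so the fit can be checked against a known answer (the exact arcsine density corresponds to shape parameter a = 0.5 and L = 1).

```python
import numpy as np

rng = np.random.default_rng(3)

# Test data: a sinusoid sampled at a uniformly random phase; the resulting
# distribution is symmetric, bounded and platykurtic (kurtosis = 1.5).
u = rng.uniform(-np.pi, np.pi, 200_000)
x = np.sin(u)

var  = x.var()
kurt = np.mean(x**4) / var**2           # sample mean is ~0 by symmetry

# A symmetric Beta(a, a) mapped to [-1, 1] has variance 1/(2a+1) and
# kurtosis 3(2a+1)/(2a+3); matching both moments (valid only for kurt < 3):
a = 3.0 * (kurt - 1.0) / (2.0 * (3.0 - kurt))
L = np.sqrt(var * (2.0 * a + 1.0))      # support half-width from the variance
print(f"fitted symmetric beta: a = {a:.3f}, support half-width L = {L:.3f}")
```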
Statistical methods and materials characterisation
International Nuclear Information System (INIS)
Wallin, K.R.W.
2010-01-01
Statistics is a wide mathematical area, which covers a myriad of analysis and estimation options, some of which suit special cases better than others. A comprehensive coverage of the whole area of statistics would be an enormous effort and would also be outside the capabilities of this author. Therefore, this is not intended to be a textbook on the statistical methods available for general data analysis and decision making. Instead, it highlights a certain special statistical case applicable to mechanical materials characterization. The methods presented here do not in any way rule out other statistical methods by which to analyze mechanical property material data. (orig.)
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-05-01
The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. Different estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE_1 and MLE_2, respectively), and the Greenwood approximation (MLE_gw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE_1, the MLE_2 and MLE_gw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE_2 and MLE_gw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE_2 and MLE_gw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
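For reference, the standard moment-based (inverse normalized variance) estimator of the Nakagami parameters can be written in a few lines; the sketch below applies it to synthetic envelope samples and does not reproduce the MLE approximations compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Nakagami(m, omega) envelope samples via the Gamma relation:
# R = sqrt(G) with G ~ Gamma(shape=m, scale=omega/m).
m_true, omega_true, n = 1.5, 2.0, 4000
envelope = np.sqrt(rng.gamma(shape=m_true, scale=omega_true / m_true, size=n))

# Moment-based estimator: omega = E[R^2], m = (E[R^2])^2 / Var(R^2).
intensity = envelope**2
omega_hat = intensity.mean()
m_hat = omega_hat**2 / intensity.var(ddof=1)

print(f"m estimate: {m_hat:.3f} (true {m_true}), omega estimate: {omega_hat:.3f}")
```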
Uncertainty analysis with statistically correlated failure data
International Nuclear Information System (INIS)
Modarres, M.; Dezfuli, H.; Roush, M.L.
1987-01-01
The likelihood of occurrence of the top event of a fault tree or of sequences of an event tree is estimated from the failure probabilities of the components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top event failure probability of fault trees, or for the frequency of occurrence of event tree sequences, when event failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment-matching technique is used to obtain the probability distribution function of the top event by fitting the Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method developed. (author)
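A minimal sketch of the Taylor-series moment-propagation idea follows, using a hypothetical 2-out-of-3 top-event expression and assumed correlated failure-probability uncertainties; the Johnson S_B fitting step and the CORRELATE program itself are not reproduced.

```python
import numpy as np

# Hypothetical top-event probability for a 2-out-of-3 arrangement of components;
# this is a stand-in model, not the one analyzed with the CORRELATE program.
def g(p):
    p1, p2, p3 = p
    return p1 * p2 + p1 * p3 + p2 * p3 - 2.0 * p1 * p2 * p3

mu = np.array([0.01, 0.02, 0.015])              # assumed mean failure probabilities
cov = np.array([[1.0e-6, 4.0e-7, 2.0e-7],       # assumed (correlated) covariance matrix
                [4.0e-7, 4.0e-6, 3.0e-7],
                [2.0e-7, 3.0e-7, 2.0e-6]])

# Numerical gradient and Hessian of g at the mean point.
eps = 1e-4
grad = np.zeros(3)
hess = np.zeros((3, 3))
for i in range(3):
    ei = np.eye(3)[i] * eps
    grad[i] = (g(mu + ei) - g(mu - ei)) / (2 * eps)
    for j in range(3):
        ej = np.eye(3)[j] * eps
        hess[i, j] = (g(mu + ei + ej) - g(mu + ei - ej)
                      - g(mu - ei + ej) + g(mu - ei - ej)) / (4 * eps**2)

mean_top = g(mu) + 0.5 * np.trace(hess @ cov)   # mean with second-order correction
var_top = grad @ cov @ grad                     # first-order (delta-method) variance
print(f"top-event mean ~ {mean_top:.3e}, standard deviation ~ {np.sqrt(var_top):.3e}")
```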
Permutation statistical methods an integrated approach
Berry, Kenneth J; Johnston, Janis E
2016-01-01
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
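A toy permutation test of the kind treated in the monograph, comparing two group means without appeal to a theoretical distribution, can be sketched as follows (synthetic data, naive re-shuffling loop).

```python
import numpy as np

rng = np.random.default_rng(5)
group_a = rng.normal(0.0, 1.0, 30)          # synthetic samples, group A
group_b = rng.normal(0.5, 1.0, 30)          # synthetic samples, group B

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

n_perm, count = 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                     # relabel the pooled data at random
    diff = pooled[:30].mean() - pooled[30:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = (count + 1) / (n_perm + 1)        # add-one (permutation) correction
print(f"observed difference {observed:.3f}, permutation p-value {p_value:.4f}")
```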
International Nuclear Information System (INIS)
Freedman, M.S.; Peshkin, M.; Ringo, G.R.; Dombeck, T.W.
1989-08-01
The use of an ultracold neutron interferometer incorporating an electrostatic accelerator having a strong electric field gradient to accelerate neutrons by their possible electric dipole moment is proposed as a method of measuring the neutron electric dipole moment. The method appears to have the possibility of extending the sensitivity of the measurement by several orders of magnitude, perhaps to 10⁻³⁰ e·cm. 9 refs., 3 figs
International Nuclear Information System (INIS)
Hamoudi, A.; Shahaliev, E.; Nazmitdinov, R. G.; Alhassid, Y.
2002-01-01
We study the fluctuation properties of ΔT=0 electromagnetic transition intensities and electromagnetic moments in A∼60 nuclei within the framework of the interacting shell model, using a realistic effective interaction for pf-shell nuclei with a ⁵⁶Ni core. The distributions of the transition intensities and of the electromagnetic moments are well described by the Gaussian orthogonal ensemble of random matrices. In particular, the transition intensity distributions follow a Porter-Thomas distribution. When diagonal matrix elements (i.e., moments) are included in the analysis of transition intensities, the distributions remain Porter-Thomas except for the isoscalar M1. This deviation is explained in terms of the structure of the isoscalar M1 operator.
Modified generalized method of moments for a robust estimation of polytomous logistic model
Directory of Open Access Journals (Sweden)
Xiaoshan Wang
2014-07-01
The maximum likelihood estimation (MLE) method, typically used for polytomous logistic regression, is prone to bias due to both misclassification in the outcome and contamination in the design matrix. Hence, robust estimators are needed. In this study, we propose such a method for nominal response data with continuous covariates. A generalized method of weighted moments (GMWM) approach is developed for dealing with contaminated polytomous response data. In this approach, distances are calculated based on individual sample moments, and Huber weights are applied to those observations with large distances. Mallows-type weights are also used to downweight leverage points. We describe theoretical properties of the proposed approach. Simulations suggest that the GMWM performs very well in correcting contamination-caused biases. An empirical application of the GMWM estimator to data from a survey demonstrates its usefulness.
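The Huber-type downweighting of observations with large moment-based distances can be illustrated with a small helper function; the cutoff value and distances below are hypothetical, and the leverage-based (Mallows-type) weights are not shown.

```python
import numpy as np

def huber_weights(distances, c=1.345):
    """Huber-type weights: 1 inside the cutoff c, and c/d beyond it."""
    d = np.asarray(distances, dtype=float)
    w = np.ones_like(d)
    mask = d > c
    w[mask] = c / d[mask]
    return w

# Hypothetical moment-based distances for five observations; the two large
# distances (possible misclassified outcomes) are downweighted.
distances = np.array([0.3, 0.8, 1.2, 4.0, 9.5])
print(huber_weights(distances))
```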
Magnetic moments of light nuclei within the framework of reduced Hamiltonian method
Deveikis, A
1998-01-01
A new procedure for the evaluation of magnetic dipole moments of light atomic nuclei has been developed. The procedure presented obeys the principles of antisymmetry and translational invariance and is based on the reduced Hamiltonian method. The theoretical formulation has been illustrated by calculation of magnetic dipole moments for ²H, ³H, ³He, ⁴He, ⁵He, ⁵Li, ¹¹Li, and ⁶Li nuclei. The calculations were performed in a complete 0ℏω basis. The obtained results are in good agreement with the experimental data. (author)
Performance Evaluation of Moment Connections of Moment Resisting Frames Against Progressive Collapse
Directory of Open Access Journals (Sweden)
M. Mahmoudi
2017-02-01
When a primary structural element fails due to a sudden load such as an explosion, the building undergoes progressive collapse. The design of moment connections against progressive collapse differs from their seismic design, because in this case the axial force on the connections makes them behave differently. The purpose of this paper is to evaluate the performance of a variety of moment connections in preventing progressive collapse in steel moment frames. To achieve this goal, three prequalified moment connections (BSEEP, BFP and WUP-W) were designed according to seismic codes. These moment connections were analyzed numerically for progressive collapse using ABAQUS software. The results show that the BFP (bolted flange plate) connection has much greater capacity than the other connections because of the use of plates at the beam-column junction.
Photoelectric method for determination of the moment of formation of an anodic spot
International Nuclear Information System (INIS)
Barinov, V.N.; Goncharov, V.K.; Smirnov, A.V.
1986-01-01
In studying the problem of the effect of the amplitude and form of discharge current pulses on the time for transition from a diffuse discharge form to a contracted one, and on the value of the threshold current I_As for formation of an anodic spot, the authors used a photoelectric method for determining the moment of appearance of the anodic spot, based on determination of the spectral composition of the plasma at different moments of time after the beginning of discharge initiation. The photoelectric method can be used in studying emission processes on a cathode and also in those cases where both electrodes are made of the same material. An example shows synchronous oscillograms of I_p(τ) and J_i(τ) for copper electrodes. It is evident that during transition of the discharge to a contracted form with an anodic spot there was a sharp increase in the intensity of de-excitation of the ionic copper line. At the moment of extinction of the anodic spot, the amplitude values of J_i(τ) corresponded to a level characteristic of the diffuse form of arc burning.
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
A new computational method of a moment-independent uncertainty importance measure
International Nuclear Information System (INIS)
Liu Qiao; Homma, Toshimitsu
2009-01-01
For a risk assessment model, the uncertainty in input parameters is propagated through the model and leads to uncertainty in the model output. The study of how the uncertainty in the output of a model can be apportioned to the uncertainty in the model inputs is the job of sensitivity analysis. Saltelli [Sensitivity analysis for importance assessment. Risk Analysis 2002;22(3):579-90] pointed out that a good sensitivity indicator should be global, quantitative and model free. Borgonovo [A new uncertainty importance measure. Reliability Engineering and System Safety 2007;92(6):771-84] further extended these three requirements by adding a fourth feature, moment-independence, and proposed a new sensitivity measure, δ_i. It evaluates the influence of the input uncertainty on the entire output distribution without reference to any specific moment of the model output. In this paper, a new computational method for δ_i is proposed. It is conceptually simple and easier to implement. The feasibility of this new method is proved by applying it to two examples.
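Borgonovo's δ_i can also be estimated by brute-force Monte Carlo (this is not the paper's new computational method): partition each input into equal-probability classes, build histogram estimates of the conditional and unconditional output densities, and average half their L1 distance, as in the sketch below for a toy three-input model.

```python
import numpy as np

rng = np.random.default_rng(6)

def model(x1, x2, x3):
    # Toy model: x1 matters most, x3 hardly at all (illustration only).
    return x1**2 + 0.5 * x2 + 0.05 * x3

n = 200_000
X = rng.normal(size=(n, 3))
Y = model(X[:, 0], X[:, 1], X[:, 2])

bins = np.linspace(Y.min(), Y.max(), 60)
width = np.diff(bins)
f_y, _ = np.histogram(Y, bins=bins, density=True)      # unconditional output density

def delta(i, n_classes=20):
    # Average L1 distance between conditional and unconditional output densities
    # over equal-probability classes of X_i, times 1/2 (crude histogram estimate).
    edges = np.quantile(X[:, i], np.linspace(0.0, 1.0, n_classes + 1))
    cls = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, n_classes - 1)
    total = 0.0
    for k in range(n_classes):
        f_cond, _ = np.histogram(Y[cls == k], bins=bins, density=True)
        total += np.sum(np.abs(f_y - f_cond) * width)
    return 0.5 * total / n_classes

for i in range(3):
    print(f"delta_{i + 1} ~ {delta(i):.3f}")
```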
Analytic moment method calculations of the drift wave spectrum
International Nuclear Information System (INIS)
Thayer, D.R.; Molvig, K.
1985-11-01
A derivation and approximate solution of renormalized mode-coupling equations describing the turbulent drift wave spectrum is presented. Arguments are given which indicate that a weak turbulence formulation of the spectrum equations fails for a system with negative dissipation. The inadequacy of the weak turbulence theory is circumvented by utilizing a renormalized formulation. An analytic moment method is developed to approximate the solution of the nonlinear spectrum integral equations. The solution method employs trial functions to reduce the integral equations to algebraic equations in basic parameters describing the spectrum. An approximate solution of the spectrum equations is first obtained for a mode dissipation with known solution, and second for an electron dissipation in the NSA.
Solution Methods for Structures with Random Properties Subject to Random Excitation
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
This paper deals with the lower order statistical moments of the response of structures with random stiffness and random damping properties subject to random excitation. The arising stochastic differential equations (SDE) with random coefficients are solved by two methods, a second order...... the SDE with random coefficients with deterministic initial conditions to an equivalent nonlinear SDE with deterministic coefficient and random initial conditions. In both methods, the statistical moment equations are used. Hierarchy of statistical moments in the markovian approach is closed...... by the cumulant neglect closure method applied at the fourth order level....
Register-based statistics statistical methods for administrative data
Wallgren, Anders
2014-01-01
This book provides a comprehensive and up to date treatment of theory and practical implementation in Register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi
Statistics for experimentalists
Cooper, B E
2014-01-01
Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...
Towers, Sherry; Mubayi, Anuj; Castillo-Chavez, Carlos
2018-01-01
When attempting to statistically distinguish between a null and an alternative hypothesis, many researchers in the life and social sciences turn to binned statistical analysis methods, or methods that are simply based on the moments of a distribution (such as the mean, and variance). These methods have the advantage of simplicity of implementation, and simplicity of explanation. However, when null and alternative hypotheses manifest themselves in subtle differences in patterns in the data, binned analysis methods may be insensitive to these differences, and researchers may erroneously fail to reject the null hypothesis when in fact more sensitive statistical analysis methods might produce a different result when the null hypothesis is actually false. Here, with a focus on two recent conflicting studies of contagion in mass killings as instructive examples, we discuss how the use of unbinned likelihood methods makes optimal use of the information in the data; a fact that has been long known in statistical theory, but perhaps is not as widely appreciated amongst general researchers in the life and social sciences. In 2015, Towers et al published a paper that quantified the long-suspected contagion effect in mass killings. However, in 2017, Lankford & Tomek subsequently published a paper, based upon the same data, that claimed to contradict the results of the earlier study. The former used unbinned likelihood methods, and the latter used binned methods, and comparison of distribution moments. Using these analyses, we also discuss how visualization of the data can aid in determination of the most appropriate statistical analysis methods to distinguish between a null and alternate hypothesis. We also discuss the importance of assessment of the robustness of analysis results to methodological assumptions made (for example, arbitrary choices of number of bins and bin widths when using binned methods); an issue that is widely overlooked in the literature, but is critical
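The efficiency argument for unbinned likelihood can be illustrated on a toy problem unrelated to the mass-killings data: estimating an exponential rate from the same samples with the closed-form unbinned MLE and with a deliberately coarse binned fit, and comparing the spread of the two estimators over repeated simulations.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(7)
rate_true, n = 0.5, 200
edges = np.array([0.0, 2.0, 4.0, 8.0, 1e9])     # deliberately coarse bins; last is an overflow bin

def binned_estimate(sample):
    counts, _ = np.histogram(sample, bins=edges)
    def nll(rate):                               # multinomial (binned) negative log-likelihood
        p = np.diff(expon.cdf(edges, scale=1.0 / rate))
        return -np.sum(counts * np.log(p))
    return minimize_scalar(nll, bounds=(1e-3, 10.0), method="bounded").x

unbinned, binned = [], []
for _ in range(500):
    sample = rng.exponential(scale=1.0 / rate_true, size=n)
    unbinned.append(1.0 / sample.mean())         # closed-form unbinned MLE of the rate
    binned.append(binned_estimate(sample))

print(f"unbinned MLE: mean {np.mean(unbinned):.3f}, spread {np.std(unbinned):.4f}")
print(f"binned fit  : mean {np.mean(binned):.3f}, spread {np.std(binned):.4f}")
```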
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing us to use simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 segment inertial parameters out of the 150 in the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from only a force-plate and kinematics data, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
The method of arbitrarily large moments to calculate single scale processes in quantum field theory
Energy Technology Data Exchange (ETDEWEB)
Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)
2017-01-15
We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, which holds in the case of first-order factorizing systems. In any case, one may derive highly precise numerical representations in general using this method, which is otherwise completely analytic.
Statistical methods for nuclear material management
International Nuclear Information System (INIS)
Bowen, W.M.; Bennett, C.A.
1988-12-01
This book is intended as a reference manual of statistical methodology for nuclear material management practitioners. It describes statistical methods currently or potentially important in nuclear material management, explains the choice of methods for specific applications, and provides examples of practical applications to nuclear material management problems. Together with the accompanying training manual, which contains fully worked out problems keyed to each chapter, this book can also be used as a textbook for courses in statistical methods for nuclear material management. It should provide increased understanding and guidance to help improve the application of statistical methods to nuclear material management problems
Statistical methods for nuclear material management
Energy Technology Data Exchange (ETDEWEB)
Bowen W.M.; Bennett, C.A. (eds.)
1988-12-01
This book is intended as a reference manual of statistical methodology for nuclear material management practitioners. It describes statistical methods currently or potentially important in nuclear material management, explains the choice of methods for specific applications, and provides examples of practical applications to nuclear material management problems. Together with the accompanying training manual, which contains fully worked out problems keyed to each chapter, this book can also be used as a textbook for courses in statistical methods for nuclear material management. It should provide increased understanding and guidance to help improve the application of statistical methods to nuclear material management problems.
International Nuclear Information System (INIS)
Rebane, A.; Drobizhev, M.; Makarov, N.S.; Beuerman, E.; Tillo, S.; Hughes, T.
2010-01-01
The Stark effect, in combination with spectral hole burning and single-molecule spectroscopy, has been a fruitful technique to study the permanent electric dipole moment of molecules in the condensed phase. However, because measuring Stark shifts relies on external fields and narrow line- or hole-widths, the applicability of this method at the ambient conditions required by most biological systems has remained limited. Here we demonstrate a new all-optical method for measuring the molecular dipole moment difference between the ground and excited states using two-photon absorption (2PA) spectroscopy. We show that the value and orientation of the static dipole moment difference can be determined from the corresponding absolute 2PA cross-section. We use this new method to determine for the first time the strength of the local electric field, E_loc = 0.1-1.0×10⁸ V/cm, inside the beta-barrel of the Fruit series of red fluorescent proteins. Because our method does not rely on an external field and is applicable in liquid solutions, it is well suited for the study of biological systems.
Manning, Robert M.
2004-01-01
The extended wide-angle parabolic wave equation applied to electromagnetic wave propagation in random media is considered. A general operator equation is derived which gives the statistical moments of an electric field of a propagating wave. This expression is used to obtain the first and second order moments of the wave field and solutions are found that transcend those which incorporate the full paraxial approximation at the outset. Although these equations can be applied to any propagation scenario that satisfies the conditions of application of the extended parabolic wave equation, the example of propagation through atmospheric turbulence is used. It is shown that in the case of atmospheric wave propagation and under the Markov approximation (i.e., the delta-correlation of the fluctuations in the direction of propagation), the usual parabolic equation in the paraxial approximation is accurate even at millimeter wavelengths. The comprehensive operator solution also allows one to obtain expressions for the longitudinal (generalized) second order moment. This is also considered and the solution for the atmospheric case is obtained and discussed. The methodology developed here can be applied to any qualifying situation involving random propagation through turbid or plasma environments that can be represented by a spectral density of permittivity fluctuations.
Statistical methods in quality assurance
International Nuclear Information System (INIS)
Eckhard, W.
1980-01-01
During the different phases of a production process - planning, development and design, manufacturing, assembling, etc. - most decisions rest on a basis of statistics: the collection, analysis and interpretation of data. Statistical methods can be thought of as a kit of tools to help solve problems in the quality functions of the quality loop, with respect to producing quality products and reducing quality costs. Various statistical methods are presented, and typical examples of their practical application are demonstrated. (RW)
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables; the redundant parameters are also set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
International Nuclear Information System (INIS)
Theodorsen, A; Garcia, O E; Rypdal, M
2017-01-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type. (paper)
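A filtered Poisson process of the kind described, with exponential pulses and a purely additive noise term, is easy to synthesize; the sketch below generates one realization on a grid and prints its lowest-order moments (parameters are arbitrary, and the dynamical-noise variant is not shown).

```python
import numpy as np

rng = np.random.default_rng(8)

# One realization of a filtered Poisson (shot-noise) process: exponential pulses
# with random amplitudes arriving at Poisson times, plus an additive noise term.
dt, T, tau, rate = 0.01, 1000.0, 1.0, 0.5   # arbitrary illustrative parameters
t = np.arange(0.0, T, dt)
n_events = rng.poisson(rate * T)
arrival_times = rng.uniform(0.0, T, n_events)
amplitudes = rng.exponential(1.0, n_events)

signal = np.zeros_like(t)
for t_k, a_k in zip(arrival_times, amplitudes):
    mask = t >= t_k
    signal[mask] += a_k * np.exp(-(t[mask] - t_k) / tau)

noisy = signal + rng.normal(0.0, 0.2, t.size)   # purely additive noise variant

for name, x in (("pure ", signal), ("noisy", noisy)):
    m, v = x.mean(), x.var()
    skew = np.mean((x - m) ** 3) / v ** 1.5
    print(f"{name}: mean {m:.3f}, variance {v:.3f}, skewness {skew:.3f}")
```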
Moment magnitude determination of local seismic events recorded at selected Polish seismic stations
Wiejacz, Paweł; Wiszniowski, Jan
2006-03-01
The paper presents the method of local magnitude determination used at Polish seismic stations to report events originating in one of the four regions of induced seismicity in Poland or its immediate vicinity. The method is based on recalculation of the seismic moment into magnitude, where the seismic moment is obtained from spectral analysis. The method was introduced at Polish seismic stations in the late 1990s but had not yet been described in full because magnitude discrepancies had been found between the results of the individual stations. The authors have compiled statistics of these differences, provide their explanation, and calculate station corrections for each station and each event source region. The limitations of the method are also discussed. The method is found to be a good and reliable method of local magnitude determination provided the limitations are observed and station corrections are applied.
A Hybrid Joint Moment Ratio Test for Financial Time Series
Groenendijk, Patrick A.; Lucas, André; Vries, de Casper G.
1998-01-01
We advocate the use of absolute moment ratio statistics in conjunction with standard variance ratio statistics in order to disentangle linear dependence, non-linear dependence, and leptokurtosis in financial time series. Both statistics are computed for multiple return horizons simultaneously, and the
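The complementary behaviour of the two ratio statistics can be seen on simulated data: for serially independent but heavy-tailed returns, the variance ratio stays near one while a simple absolute-moment ratio does not. The sketch below uses non-overlapping aggregation and one plausible normalization, which need not coincide with the paper's exact statistics.

```python
import numpy as np

rng = np.random.default_rng(9)
n, q = 20_000, 10
returns = rng.standard_t(df=4, size=n) * 0.01     # heavy-tailed, serially independent

r_q = returns.reshape(-1, q).sum(axis=1)           # non-overlapping q-period returns

variance_ratio = r_q.var() / (q * returns.var())
abs_moment_ratio = np.abs(r_q).mean() / (np.sqrt(q) * np.abs(returns).mean())

print(f"variance ratio    VR({q}) = {variance_ratio:.3f}   (~1: no linear dependence)")
print(f"abs. moment ratio AR({q}) = {abs_moment_ratio:.3f}   (deviates from 1 under fat tails)")
```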
Equivalent circuit study of beam-loading using a moment method
International Nuclear Information System (INIS)
Wang, T.F.; Machida, S.; Mori, Y.; Ohmori, C.
1997-01-01
In this work, we present a formalism, based on the perturbations of the moments of a bunched beam, for the equivalent circuit model that includes all harmonics of the synchrotron oscillation in a beam-cavity interaction system. The linear coupling among all longitudinal modes under the influence of a narrow-band impedance can be naturally incorporated in this new approach. We used this method to re-examine the coupling between the dipole and the quadrupole modes. The dispersion relation obtained by this new method was compared with that derived from the linearized Vlasov equation up to the second harmonic of the synchrotron motion. We found excellent qualitative agreement between the two approaches.
The general 2-D moments via integral transform method for acoustic radiation and scattering
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.
DEFF Research Database (Denmark)
Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard
Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical...... and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives...
Finite moments approach to the time-dependent neutron transport equation
International Nuclear Information System (INIS)
Kim, Sang Hyun
1994-02-01
Currently, nodal techniques are widely used in solving the multidimensional diffusion equation because of savings in computing time and storage. Thanks to the development of computer technology, one can now solve the transport equation instead of the diffusion equation to obtain a more accurate solution. The finite moments method, one of the nodal methods, attempts to represent the fluxes in the cell and on cell surfaces more rigorously by retaining additional spatial moments. Generally, there are two finite moments schemes for solving the time-dependent transport equation. In one, the time variable is treated implicitly and the finite moments method is applied to the space variables (implicit finite moments method); the other uses the finite moments method in both space and time (space-time finite moments method). In this study, these two schemes are applied to two types of time-dependent neutron transport problems. One is a fixed source problem, the other a heterogeneous fast reactor problem with delayed neutrons. From the results, it is observed that the two finite moments methods give almost the same solutions in both benchmark problems. However, the space-time finite moments method requires somewhat longer computing time than the implicit finite moments method. In order to reduce this computing time in the space-time finite moments method, a new iteration strategy is exploited, in which a coarse time-stepwise calculation, where the original time steps are grouped into several coarse time divisions, is performed sequentially instead of performing iterations over the entire set of time steps. This strategy results in a significant reduction of the computing time, and we observe that a 2- or 3-stepwise calculation is preferable. In addition, we propose a new finite moments method, called the mixed finite moments method in this thesis. Asymptotic analysis for the finite moments method shows that the accuracy of the solution in a heterogeneous problem mainly depends on the accuracy of the
International Nuclear Information System (INIS)
French, J.B.
1974-01-01
The concepts of statistical behavior and symmetry are presented from the point of view of many body spectroscopy. Remarks are made on methods for the evaluation of moments, particularly widths, for the purpose of giving a feeling for the types of mathematical structures encountered. Applications involving ground state energies, spectra, and level densities are discussed. The extent to which Hamiltonian eigenstates belong to irreducible representations is mentioned. (4 figures, 1 table) (U.S.)
Statistical methods for ranking data
Alvo, Mayer
2014-01-01
This book introduces advanced undergraduate and graduate students and practitioners to statistical methods for ranking data. An important aspect of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm, and factor analysis. The book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypothesis testing. The techniques described in the book are illustrated with examples, and the statistical software is provided on the authors' website.
Statistical methods in nuclear theory
International Nuclear Information System (INIS)
Shubin, Yu.N.
1974-01-01
The paper outlines statistical methods which are widely used for describing properties of excited states of nuclei and nuclear reactions. It discusses physical assumptions lying at the basis of the known distributions of spacings between levels (Wigner and Poisson distributions) and of the widths of highly excited states (Porter-Thomas distribution), as well as assumptions used in the statistical theory of nuclear reactions and in fluctuation analysis. The author considers the random matrix method, which consists in replacing the matrix elements of a residual interaction by random variables with a simple statistical distribution. Experimental data are compared with results of calculations using the statistical model. The superfluid nucleus model is considered with regard to superconducting-type pair correlations.
The method of arbitrarily large moments to calculate single scale processes in quantum field theory
Directory of Open Access Journals (Sweden)
Johannes Blümlein
2017-08-01
We devise a new method to calculate a large number of Mellin moments of single-scale quantities using the systems of differential and/or difference equations obtained from integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered; this holds in the case of first-order factorizing systems. In any case, the method yields highly precise numerical representations in general, and it is otherwise completely analytic.
A Hybrid Joint Moment Ratio Test for Financial Time Series
P.A. Groenendijk (Patrick); A. Lucas (André); C.G. de Vries (Casper)
1998-01-01
We advocate the use of absolute moment ratio statistics in conjunction with standard variance ratio statistics in order to disentangle linear dependence, non-linear dependence, and leptokurtosis in financial time series. Both statistics are computed for multiple return horizons.
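A minimal sketch of the two statistics this abstract combines, assuming log returns in a NumPy array; the bias corrections and overlapping-sum refinements used by the authors are not reproduced here, and the non-overlapping aggregation is an assumption.

```python
import numpy as np

def moment_ratio_stats(returns, horizon):
    """Variance ratio and absolute-moment ratio over a multi-period horizon.

    Under i.i.d. returns both ratios are close to 1; departures point to
    linear dependence (variance ratio) or non-linear dependence and fat
    tails (absolute-moment ratio).
    """
    r = np.asarray(returns, dtype=float)
    # Aggregate returns over non-overlapping blocks of length `horizon`.
    n_blocks = len(r) // horizon
    r_h = r[: n_blocks * horizon].reshape(n_blocks, horizon).sum(axis=1)

    variance_ratio = r_h.var() / (horizon * r.var())
    abs_moment_ratio = np.mean(np.abs(r_h)) / (np.sqrt(horizon) * np.mean(np.abs(r)))
    return variance_ratio, abs_moment_ratio

# Example: white-noise returns should give both ratios near one.
rng = np.random.default_rng(0)
print(moment_ratio_stats(rng.normal(size=10_000), horizon=5))
```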
Moments of the very high multiplicity distributions
International Nuclear Information System (INIS)
Nechitailo, V.A.
2004-01-01
In experiment, the multiplicity distributions of inelastic processes are truncated due to finite energy, insufficient statistics, or a special choice of events. It is shown that the moments of such truncated multiplicity distributions possess some typical features. In particular, the oscillations of cumulant moments at high ranks and their negative values at the second rank can be considered the features most indicative of the specifics of these distributions. They allow one to distinguish between distributions of different types.
Statistical Methods in Integrative Genomics
Richardson, Sylvia; Tseng, George C.; Sun, Wei
2016-01-01
Statistical methods in integrative genomics aim to answer important biological questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531
Methods of statistical physics
Akhiezer, Aleksandr I
1981-01-01
Methods of Statistical Physics is an exposition of the tools of statistical mechanics, which evaluates the kinetic equations of classical and quantized systems. The book also analyzes the equations of macroscopic physics, such as the equations of hydrodynamics for normal and superfluid liquids and macroscopic electrodynamics. The text gives particular attention to the study of quantum systems. This study begins with a discussion of problems of quantum statistics with a detailed description of the basics of quantum mechanics along with the theory of measurement. An analysis of the asymptotic be
The Taylor-expansion method of moments for the particle system with bimodal distribution
Directory of Open Access Journals (Sweden)
Liu Yan-Hua
2013-01-01
This paper derives the multipoint Taylor-expansion method of moments for a bimodal particle system. Collision effects are modeled by internal and external coagulation terms. A simple theoretical analysis and numerical tests are performed to demonstrate the effectiveness of the present model.
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
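A sketch of the underlying idea as I read it, assuming statistically independent position noise and a true velocity that varies slowly over the largest time window: the apparent second moment of the velocity measured with time step dt contains a noise term scaling as 1/dt^2, so moments computed at several dt can be extrapolated to remove it. The fitting details of the published method are not reproduced; variable names are illustrative.

```python
import numpy as np

def denoised_velocity_variance(positions, dt_frame, steps=(1, 2, 3, 4)):
    """Estimate noise-free velocity variance from a particle track.

    For displacements over n frames, the apparent variance is roughly
        var_meas(n) = var_true + 2*sigma_noise**2 / (n*dt_frame)**2,
    assuming uncorrelated position noise of variance sigma_noise**2 and a
    true velocity that changes little over n*dt_frame.  Fitting var_meas
    against 1/(n*dt_frame)**2 yields var_true (intercept) and the noise.
    """
    x = np.asarray(positions, dtype=float)
    inv_dt2, variances = [], []
    for n in steps:
        v = (x[n:] - x[:-n]) / (n * dt_frame)   # finite-difference velocities
        inv_dt2.append(1.0 / (n * dt_frame) ** 2)
        variances.append(v.var())
    slope, intercept = np.polyfit(inv_dt2, variances, 1)
    return intercept, slope / 2.0               # (var_true, sigma_noise**2)
```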
International Nuclear Information System (INIS)
Schek, I.; Wyatt, R.E.
1986-01-01
Molecular multiphoton processes are treated in the Recursive Residue Generation Method (A. Nauts and R.E. Wyatt, Phys. Rev. Lett. 51, 2238 (1983)) by converting the molecular-field Hamiltonian matrix into tridiagonal form, using the Lanczos equations. In this study, the self-energy (diagonal) and linking (off-diagonal) terms in the tridiagonal matrix are obtained by comparing linked moment diagrams in both representations. The dynamics of the source state is introduced and computed in terms of the linked and the irreducible moments.
International Nuclear Information System (INIS)
Kim, Kyu Tae; Kim, Oh Hwan
1999-01-01
A simplified statistical methodology is developed in order both to reduce the over-conservatism of the deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and to simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum, over all input variables considered, of the maximum of the mean RIP value times the corresponding RIP sensitivity factor. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis, since it eliminates the numerous calculations required for the power-history-dependent RIP variance determination. The simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significance of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
QUANTIFYING THE SHORT LIFETIME WITH TCSPC-FLIM: FIRST MOMENT VERSUS FITTING METHODS
Directory of Open Access Journals (Sweden)
LINGLING XU
2013-10-01
Combining time-correlated single photon counting (TCSPC) with fluorescence lifetime imaging microscopy (FLIM) provides promising opportunities for revealing important information on the microenvironment of cells and tissues, but applications have thus far been limited mainly by the accuracy and precision of the TCSPC-FLIM technique. Here we present a comprehensive investigation of the performance of two data analysis methods, the first-moment (M1) method and the conventional least-squares (fitting) method, in quantifying fluorescence lifetime. We found that the M1 method is superior to the fitting method when the lifetime is short (70–400 ps) or the signal intensity is weak (<10^3 photons).
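A minimal sketch of the first-moment (M1) lifetime estimate from a TCSPC decay histogram, assuming a mono-exponential decay, negligible background, and a decay start time t0; these assumptions and the variable names are mine, not the paper's.

```python
import numpy as np

def m1_lifetime(counts, bin_width_ps, t0_bin=0):
    """First-moment (M1) estimate of a fluorescence lifetime.

    For an ideal mono-exponential decay starting at t0, the mean arrival
    time of the detected photons equals t0 + tau, so the lifetime is the
    first moment of the histogram minus the start time.
    """
    counts = np.asarray(counts, dtype=float)
    t = (np.arange(len(counts)) + 0.5) * bin_width_ps      # bin centres in ps
    first_moment = np.sum(t * counts) / np.sum(counts)
    return first_moment - (t0_bin + 0.5) * bin_width_ps

# Example: synthetic 300 ps decay sampled in 4 ps bins.
rng = np.random.default_rng(1)
photons = rng.exponential(scale=300.0, size=5000)          # arrival times in ps
hist, _ = np.histogram(photons, bins=np.arange(0, 2000, 4))
print(m1_lifetime(hist, bin_width_ps=4.0))                 # ~300 ps (slight truncation bias)
```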
Image object recognition based on the Zernike moment and neural networks
Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu
1998-03-01
This paper first gives a comprehensive discussion of the concept of the artificial neural network, its research methods, and its relation to information processing. On the basis of this discussion, we expound the mathematical similarity between artificial neural networks and information processing. The paper then presents a new method of image recognition based on invariant features and a neural network, using the image Zernike transform. The method not only is invariant to rotation, shift, and scale of the image object, but also has good fault tolerance and robustness. It is also compared with a statistical classifier and the invariant-moments recognition method.
Multivariate statistical methods a first course
Marcoulides, George A
2014-01-01
Multivariate statistics refer to an assortment of statistical methods that have been developed to handle situations in which multiple variables or measures are involved. Any analysis of more than two variables or measures can loosely be considered a multivariate statistical analysis. An introductory text for students learning multivariate statistical methods for the first time, this book keeps mathematical details to a minimum while conveying the basic principles. One of the principal strategies used throughout the book--in addition to the presentation of actual data analyses--is poin
Statistical methods for physical science
Stanford, John L
1994-01-01
This volume of Methods of Experimental Physics provides an extensive introduction to probability and statistics in many areas of the physical sciences, with an emphasis on the emerging area of spatial statistics. The scope of topics covered is wide-ranging: the text discusses a variety of the most commonly used classical methods and addresses newer methods that are applicable or potentially important. The chapter authors motivate readers with their insightful discussions. The text examines basic probability, including coverage of standard distributions, time series ...
The conditional moment closure method for modeling lean premixed turbulent combustion
Martin, Scott Montgomery
Natural gas fired lean premixed gas turbines have become the method of choice for new power generation systems due to their high efficiency and low pollutant emissions. As emission regulations for these combustion systems become more stringent, numerical modeling has become an important a priori tool in designing clean and efficient combustors. Here a new turbulent combustion model is developed in an attempt to improve the state of the art. The Conditional Moment Closure (CMC) method is a new theory that has been applied to non-premixed combustion with good success. The application of the CMC method to premixed systems has been proposed, but has not yet been carried out. The premixed CMC method replaces the species mass fractions as independent variables with species mass fractions conditioned on a reaction progress variable (RPV). Conservation equations for these new variables are then derived and solved. The general idea behind the CMC method is that the behavior of the chemical species is closely coupled to the reaction progress variable; thus, species conservation equations that are conditioned on the RPV have terms involving the fluctuating quantities that are much more likely to be negligible. The CMC method accounts for the interaction between scalar dissipation (micromixing) and chemistry, while de-coupling the kinetics from the bulk flow (macromixing). Here the CMC method is combined with a commercial computational fluid dynamics program, which calculates the large-scale fluid motions. The CMC model is validated by comparison to 2-D reacting backward-facing step data; predicted species, temperature, and velocity fields are compared to experimental data with good success. The CMC model is also validated against the University of Washington's 3-D jet stirred reactor (JSR) data, an idealized lean premixed combustor. The JSR results are encouraging, but not as good as those for the backward-facing step. The largest source of error is from
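The central object of CMC is the conditional mean of each species given the progress variable, Q(c) = <Y | c>. The sketch below only illustrates how such a conditional average would be formed from sampled data (the binning and variable names are assumptions); it does not implement the CMC transport equations the dissertation derives.

```python
import numpy as np

def conditional_mean(progress_var, mass_fraction, n_bins=20):
    """Conditional average Q(c) = <Y | c>, the basic object of CMC.

    Samples of a reaction progress variable c in [0, 1] and a species mass
    fraction Y (e.g. from a simulation snapshot) are binned in c, and the
    mean of Y is taken within each bin.
    """
    c = np.asarray(progress_var, dtype=float)
    y = np.asarray(mass_fraction, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(c, edges) - 1, 0, n_bins - 1)
    q = np.array([y[idx == k].mean() if np.any(idx == k) else np.nan
                  for k in range(n_bins)])
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, q
```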
Statistical Methods for Environmental Pollution Monitoring
Energy Technology Data Exchange (ETDEWEB)
Gilbert, Richard O. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
1987-01-01
The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation techniques, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs, and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see the methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.
Robust statistical methods with R
Jureckova, Jana
2005-01-01
Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameters, large sample properties, and goodness-of-fit tests. It...
Optimal moment determination in POME-copula based hydrometeorological dependence modelling
Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi
2017-07-01
Copulas have been commonly applied in multivariate modelling in various fields, where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework for hydrometeorological multivariate applications, the principle of maximum entropy (POME) is increasingly being coupled with copulas. However, in previous POME-based studies, the determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME, leading to a coupled optimal moment-POME-copula framework for modelling hydrometeorological multivariate events. In this framework, margins (marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas show good statistical performance in correlation simulation. Moreover, the derived copulas, capturing patterns which traditional correlation coefficients cannot reflect, provide an efficient approach for other applied scenarios concerning hydrometeorological multivariate modelling.
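To make the POME step concrete, here is a minimal sketch (not the authors' code) of deriving a margin from power-moment constraints by minimising the convex dual of the entropy problem; the support, grid, and optimiser choice are assumptions, and the selection of which moments are "optimal" is exactly what the paper addresses and is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def pome_density(sample_moments, support, n_grid=400):
    """Maximum-entropy (POME) density subject to power-moment constraints.

    Finds p(x) proportional to exp(-sum_k lam[k] * x**k), k = 1..K, whose
    first K moments match `sample_moments`, by minimising the convex dual
    ln Z(lam) + sum_k lam[k] * m_k on a discretised support.
    """
    a, b = support
    x = np.linspace(a, b, n_grid)
    dx = x[1] - x[0]
    powers = np.vstack([x**k for k in range(1, len(sample_moments) + 1)])
    m = np.asarray(sample_moments, dtype=float)

    def dual(lam):
        return np.log(np.sum(np.exp(-lam @ powers)) * dx) + lam @ m

    lam = minimize(dual, np.zeros(len(m)), method="Nelder-Mead").x
    p = np.exp(-lam @ powers)
    return x, p / (np.sum(p) * dx)

# Hypothetical usage: fit a margin to the first two moments of observed data.
data = np.random.default_rng(2).gamma(shape=3.0, scale=2.0, size=1000)
x, p = pome_density([data.mean(), np.mean(data**2)], support=(0.0, float(data.max()) * 1.5))
```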
Uncertainty analysis of reactor safety systems with statistically correlated failure data
International Nuclear Information System (INIS)
Dezfuli, H.; Modarres, M.
1985-01-01
The probability of occurrence of the top event of a fault tree is estimated from the failure probabilities of the components that constitute the fault tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. Most fault tree evaluations have so far been based on uncorrelated component failure data. The subject of this paper is the description of a method for assessing the probability intervals of the top-event failure probability of fault trees when component failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment matching technique is used to obtain the probability distribution function of the top event by fitting a Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method. The CORRELATE code is very efficient and consumes minimal computer time, primarily because it does not employ the time-consuming Monte Carlo method. (author)
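A sketch of first-order Taylor (system-moment) propagation of the mean and variance of a top-event function with correlated component probabilities; the second-order terms and the Johnson S_B moment-matching step used in the paper are not reproduced, and the example top-event function is hypothetical.

```python
import numpy as np

def moment_propagation(f, mean, cov, eps=1e-6):
    """First-order system-moment estimate of the mean and variance of f(x).

    mean : vector of component failure-probability means
    cov  : covariance matrix of the (correlated) component probabilities
    The variance follows var(f) ~ g^T C g, with g the gradient of f at the mean.
    """
    mean = np.asarray(mean, dtype=float)
    grad = np.zeros_like(mean)
    for i in range(len(mean)):
        step = np.zeros_like(mean)
        step[i] = eps
        grad[i] = (f(mean + step) - f(mean - step)) / (2 * eps)  # central difference
    return f(mean), grad @ np.asarray(cov, dtype=float) @ grad

# Hypothetical top event: two redundant trains in series plus a single component.
top = lambda p: p[0] * p[1] + p[2]
mu = np.array([1e-2, 2e-2, 1e-3])
C = np.array([[1e-6, 5e-7, 0.0],
              [5e-7, 4e-6, 0.0],
              [0.0,  0.0,  1e-8]])
print(moment_propagation(top, mu, C))
```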
Factorial-moment and fractal analyses of γ families from atmospheric cascades
International Nuclear Information System (INIS)
Kalmakhelidze, M. E.; Roinishvili, N. N.; Svanidze, M. S.; Khizanishvili, L. A.; Chadranyan, L. Kh.
1997-01-01
Methods of factorial moments and fractal dimensions are used to analyze γ families from nuclear-electromagnetic cascades in the atmosphere. The analysis aims at estimating the sensitivity of these methods to multiparticle density fluctuations in γ families as considered in spaces of various variables. The mean characteristics of factorial and fractal moments in the azimuthal plane are studied and compared with those of the statistical ensemble of random families. It is shown that fluctuations of the photon distribution in the azimuthal angle Φ are of a dynamic origin. The mean model parameters are analyzed as functions of the radius vector R, an analog of pseudorapidity, and the product ER (E is the energy of an individual photon), an analog of the transverse momentum. Particle densities for two-dimensional partitions into both rings (in the radius R) and sectors (in the azimuthal angle Φ), d²N/dΦ dR, are also considered. The distributions of various factorial and fractal features of individual γ families are compared with those for the statistical ensemble of random families. Correlations of these features for a γ family treated in terms of different variables (sectors and rings) are studied. Correlations between different factorial-fractal parameters of γ families are analyzed.
Workshop on Analytical Methods in Statistics
Jurečková, Jana; Maciak, Matúš; Pešta, Michal
2017-01-01
This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, eventually under shape or other constraints or long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.
Numerical approximation of the Boltzmann equation : moment closure
Abdel Malik, M.R.A.; Brummelen, van E.H.
2012-01-01
This work applies the moment method onto a generic form of kinetic equations to simplify kinetic models of particle systems. This leads to the moment closure problem which is addressed using entropy-based moment closure techniques utilizing entropy minimization. The resulting moment closure system
Statistical methods in personality assessment research.
Schinka, J A; LaLone, L; Broeckel, J A
1997-06-01
Emerging models of personality structure and advances in the measurement of personality and psychopathology suggest that research in personality and personality assessment has entered a stage of advanced development. In this article we examine whether researchers in these areas have taken advantage of new and evolving statistical procedures. We conducted a review of articles published in the Journal of Personality Assessment during the past 5 years. Of the 449 articles that included some form of data analysis, 12.7% used only descriptive statistics, most employed only univariate statistics, and fewer than 10% used multivariate methods of data analysis. We discuss the cost of using limited statistical methods, the possible reasons for the apparent reluctance to employ advanced statistical procedures, and potential solutions to this technical shortcoming.
Statistical Methods in Psychology Journals.
Wilkinson, Leland
1999-01-01
Proposes guidelines for revising the American Psychological Association (APA) publication manual or other APA materials to clarify the application of statistics in research reports. The guidelines are intended to induce authors and editors to recognize the thoughtless application of statistical methods. Contains 54 references. (SLD)
Development of atomic-beam resonance method to measure the nuclear moments of unstable nuclei
Energy Technology Data Exchange (ETDEWEB)
Sugimoto, T., E-mail: sugimoto@ribf.riken.jp [SPring-8 (Japan); Asahi, K. [Tokyo Institute of Technology, Department of Physics (Japan); Kawamura, H.; Murata, J. [Rikkyo University, Department of Physics (Japan); Nagae, D.; Shimada, K. [Tokyo Institute of Technology, Department of Physics (Japan); Ueno, H.; Yoshimi, A. [RIKEN Nishina Center (Japan)
2008-01-15
We have been working on the development of a new technique of the atomic-beam resonance method to measure the nuclear moments of unstable nuclei. In the present study, an ion-guiding system to be used as an atomic-beam source has been developed.
Method of moments approach to pricing double barrier contracts in polynomial jump-diffusion models
Eriksson, B.; Pistorius, M.
2011-01-01
We present a method of moments approach to pricing double barrier contracts when the underlying is modelled by a polynomial jump-diffusion. By general principles, the price is linked to certain infinite-dimensional linear programming problems. Subsequently approximating these by finite
Analyzed Using Statistical Moments
International Nuclear Information System (INIS)
Oltulu, O.
2004-01-01
Diffraction enhanced imaging (DEI) is a new x-ray imaging method derived from radiography. The method uses a monochromatic x-ray beam and introduces an analyzer crystal between the object and the detector. The narrow angular acceptance of the analyzer crystal generates improved contrast over conventional radiography. While standard radiography can produce an 'absorption image', DEI produces 'apparent absorption' and 'apparent refraction' images of superior quality. Objects with similar absorption properties may not be distinguished with conventional techniques because their absorption coefficients are close. This problem becomes more pronounced when an object has scattering properties. A simple approach is introduced to utilize the scattered radiation to obtain 'pure absorption' and 'pure refraction' images.
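The title points to analysis with statistical moments; a common way such analyzer data are reduced (hedged here as a generic sketch, not necessarily this author's algorithm) is to take, for each pixel, the low-order moments of the measured rocking curve: the zeroth moment tracks apparent absorption, the first moment the refraction angle, and the second central moment the small-angle scattering width.

```python
import numpy as np

def rocking_curve_moments(angles, intensities):
    """Zeroth, first, and second central moments of a rocking curve.

    angles      : analyzer angles (e.g. microradians), shape (n,)
    intensities : measured intensity at each angle for one pixel, shape (n,)
    Returns (area, mean_angle, angular_variance).
    """
    theta = np.asarray(angles, dtype=float)
    i = np.asarray(intensities, dtype=float)
    dtheta = theta[1] - theta[0]                       # assume uniform sampling
    area = np.sum(i) * dtheta                          # ~ apparent absorption
    mean = np.sum(theta * i) / np.sum(i)               # ~ refraction shift
    var = np.sum((theta - mean) ** 2 * i) / np.sum(i)  # ~ scattering width
    return area, mean, var
```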
Statistical methods for quality improvement
National Research Council Canada - National Science Library
Ryan, Thomas P
2011-01-01
...."-TechnometricsThis new edition continues to provide the most current, proven statistical methods for quality control and quality improvementThe use of quantitative methods offers numerous benefits...
Statistical learning methods: Basics, control and performance
Energy Technology Data Exchange (ETDEWEB)
Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de
2006-04-01
The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.
Why and how to normalize the factorial moments of intermittency
International Nuclear Information System (INIS)
Peschanski, R.
1990-01-01
The normalization of factorial moments of intermittency, which is often a subject of controversy, is justified and (re-)derived from the general assumption of multi-Poissonian statistical noise in the production of particles at high energy. Correction factors for the horizontal vs. vertical analyses are derived in general cases, including the factorial multi-bin correlation moments.
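For orientation, a sketch of the two normalizations that such correction factors relate, for a multiplicity count matrix of shape (events, bins); these are the textbook definitions of normalized factorial moments, and the paper's specific correction factors are not reproduced.

```python
import numpy as np

def falling_factorial_mean(n, q):
    """Event-averaged <n(n-1)...(n-q+1)> for each bin."""
    prod = np.ones_like(n, dtype=float)
    for k in range(q):
        prod *= (n - k)
    return prod.mean(axis=0)

def normalized_factorial_moment(counts, q, scheme="vertical"):
    """Normalized factorial moment F_q of bin multiplicities.

    counts : integer array of shape (n_events, n_bins)
    'vertical' normalizes bin by bin with <n_m>**q and then averages over bins;
    'horizontal' normalizes with the all-bin average multiplicity.
    """
    n = np.asarray(counts, dtype=float)
    fq_bin = falling_factorial_mean(n, q)
    if scheme == "vertical":
        return np.mean(fq_bin / n.mean(axis=0) ** q)
    return np.mean(fq_bin) / n.mean() ** q

# Purely Poissonian (statistical) noise gives F_q close to 1 in both schemes.
rng = np.random.default_rng(3)
counts = rng.poisson(2.0, size=(20000, 16))
print(normalized_factorial_moment(counts, 2),
      normalized_factorial_moment(counts, 2, scheme="horizontal"))
```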
Karian, Zaven A
2000-01-01
Throughout the physical and social sciences, researchers face the challenge of fitting statistical distributions to their data. Although the study of statistical modelling has made great strides in recent years, the number and variety of distributions to choose from (all with their own formulas, tables, diagrams, and general properties) continue to create problems. For a specific application, which of the dozens of distributions should one use? What if none of them fit well? Fitting Statistical Distributions helps answer those questions. Focusing on techniques used successfully across many fields, the authors present all of the relevant results related to the Generalized Lambda Distribution (GLD), the Generalized Bootstrap (GB), and Monte Carlo simulation (MC). They provide the tables, algorithms, and computer programs needed for fitting continuous probability distributions to data in a wide variety of circumstances, covering bivariate as well as univariate distributions, and including situations where moments do...
Hyperon magnetic moments and total cross sections
International Nuclear Information System (INIS)
Lipkin, H.J.
1982-06-01
The new data on both total cross sections and magnetic moments are simply described by beginning with the additive quark model in an SU(3) limit where all quarks behave like strange quarks and breaking both additivity and SU(3) simultaneously with an additional non-additive mechanism which affects only nonstrange quark contributions. The suggestion that strange quarks behave more simply than nonstrange may provide clues to underlying structure or dynamics. Small discrepancies in the moments are analyzed and shown to provide serious difficulties for most models if they are statistically significant. (author)
Statistical methods in nonlinear dynamics
Indian Academy of Sciences (India)
Sensitivity to initial conditions in nonlinear dynamical systems leads to exponential divergence of trajectories that are initially arbitrarily close, and hence to unpredictability. Statistical methods have been found to be helpful in extracting useful information about such systems. In this paper, we review briefly some statistical ...
An evaluation of collision models in the Method of Moments for rarefied gas problems
Emerson, David; Gu, Xiao-Jun
2014-11-01
The Method of Moments offers an attractive approach for solving gaseous transport problems that are beyond the limit of validity of the Navier-Stokes-Fourier equations. Recent work has demonstrated the capability of the regularized 13 and 26 moment equations for solving problems when the Knudsen number, Kn (the ratio of the mean free path of the gas to a typical length scale of interest), is in the range 0.1 to 1.0, the so-called transition regime. In comparison to numerical solutions of the Boltzmann equation, the Method of Moments has captured, both qualitatively and quantitatively, the results of classical test problems in kinetic theory, e.g. velocity slip in Kramers' problem, temperature jump in Knudsen layers, the Knudsen minimum, etc. However, most of these results have been obtained for Maxwell molecules, in which molecules repel each other according to an inverse fifth-power law. Recent work has incorporated more traditional collision models such as BGK, the S-model, and ES-BGK, the latter being important for thermal problems where the Prandtl number can vary. We are currently investigating the impact of these collision models on fundamental low-speed problems of particular interest to micro-scale flows; these will be discussed and evaluated in the presentation. Supported by the Engineering and Physical Sciences Research Council under Grant EP/I011927/1 and CCP12.
Directory of Open Access Journals (Sweden)
Koivistoinen Teemu
2007-01-01
As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array whose elements represent samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
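A rough sketch of how such a fixed-structure moment matrix might be assembled and decomposed, assuming the features are the first four statistical moments of the signal and of its magnitude spectrum arranged in two rows; the exact matrix layout used by the authors is not specified in the abstract, so this layout is an assumption.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def tfm_svd_features(signal):
    """Time-frequency moment features followed by SVD (illustrative layout).

    Row 0: moments of the time series; row 1: moments of the magnitude
    spectrum.  The singular values of this 2x4 matrix form the feature
    vector handed to a classifier.
    """
    x = np.asarray(signal, dtype=float)
    spec = np.abs(np.fft.rfft(x))
    rows = []
    for series in (x, spec):
        rows.append([series.mean(), series.std(), skew(series), kurtosis(series)])
    m = np.array(rows)                          # fixed-structure 2x4 moment matrix
    return np.linalg.svd(m, compute_uv=False)   # two singular values

print(tfm_svd_features(np.sin(np.linspace(0, 20 * np.pi, 1000))))
```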
Moment-to-Moment Optimal Branding in TV Commercials: Preventing Avoidance by Pulsing
Thales S. Teixeira; Michel Wedel; Rik Pieters
2010-01-01
We develop a conceptual framework about the impact that branding activity (the audiovisual representation of brands) and consumers' focused versus dispersed attention have on consumer moment-to-moment avoidance decisions during television advertising. We formalize this framework in a dynamic probit model and estimate it with Markov chain Monte Carlo methods. Data on avoidance through zapping, along with eye tracking on 31 commercials for nearly 2,000 participants, are used to calibrate the mo...
Hajabdollahi, Farzaneh; Premnath, Kannan N.
2018-05-01
Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, and thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. Several
Statistical data analysis using SAS intermediate statistical methods
Marasinghe, Mervyn G
2018-01-01
The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...
International Nuclear Information System (INIS)
Matsuta, K.; Arimura, K.; Nagatomo, T.; Akutsu, K.; Iwakoshi, T.; Kudo, S.; Ogura, M.; Takechi, M.; Tanaka, K.; Sumikama, T.; Minamisono, K.; Miyake, T.; Minamisono, T.; Fukuda, M.; Mihara, M.; Kitagawa, A.; Sasaki, M.; Kanazawa, M.; Torikoshi, M.; Suda, M.; Hirai, M.; Momota, S.; Nojiri, Y.; Sakamoto, A.; Saihara, M.; Ohtsubo, T.; Alonso, J.R.; Krebs, G.F.; Symons, T.J.M.
2004-01-01
The magnetic moment of 33Cl (Iπ = 3/2+, T1/2 = 2.51 s) has been re-measured precisely by the β-NMR method. The obtained magnetic moment |μ| = 0.7549(3) μN is consistent with the old value 0.7523(16) μN, but is 5 times more accurate. The value is well reproduced by the shell model calculation, μSM = 0.70 μN. Combined with the magnetic moment of the mirror partner 33S, the nuclear matrix elements were derived.
Advanced statistical methods in data science
Chen, Jiahua; Lu, Xuewen; Yi, Grace; Yu, Hao
2016-01-01
This book gathers invited presentations from the 2nd Symposium of the ICSA-CANADA Chapter held at the University of Calgary from August 4-6, 2015. The aim of this Symposium was to promote advanced statistical methods in big-data sciences and to allow researchers to exchange ideas on statistics and data science and to embrace the challenges and opportunities of statistics and data science in the modern world. It addresses diverse themes in advanced statistical analysis in big-data sciences, including methods for administrative data analysis, survival data analysis, missing data analysis, high-dimensional and genetic data analysis, longitudinal and functional data analysis, the design and analysis of studies with response-dependent and multi-phase designs, time series and robust statistics, and statistical inference based on likelihood, empirical likelihood and estimating functions. The editorial group selected 14 high-quality presentations from this successful symposium and invited the presenters to prepare a fu...
Handbook of tables for order statistics from lognormal distributions with applications
Balakrishnan, N
1999-01-01
Lognormal distributions are one of the most commonly studied models in the statistical literature while being most frequently used in the applied literature. The lognormal distributions have been used in problems arising from such diverse fields as hydrology, biology, communication engineering, environmental science, reliability, agriculture, medical science, mechanical engineering, material science, and pharmacology. Though the lognormal distributions have been around from the beginning of this century (see Chapter 1), much of the work concerning inferential methods for the parameters of lognormal distributions has been done in the recent past. Most of these methods of inference, particularly those based on censored samples, involve extensive use of numerical methods to solve some nonlinear equations. Order statistics and their moments have been discussed quite extensively in the literature for many distributions. It is very well known that the moments of order statistics can be derived explicitly only...
Troive, L.
2017-09-01
Friction-free three-point bending has become a common test method since the VDA 238-100 plate-bending test [1] was introduced. According to this test, the criterion for failure is a sudden drop in the force. The author found that the evolution of the cross-section moment is a preferable measure of the real material response compared with the force. Beneficially, the cross-section moment reaches a more or less constant maximum steady-state level when the cross-section becomes fully plastified. An expression for the moment M is presented that fulfils the criterion of conservation of energy during bending. An expression for the unit-free moment M/Me, i.e. the ratio of the current moment to the elastic moment, is also demonstrated and proposed specifically for the detection of failures. The mathematical expressions are simple, making it easy to transpose the measured force F and stroke position S to the corresponding cross-section moment M. From that point of view it is even possible to implement the method, e.g. in conventional measurement system software, and study the cross-section moment in real time during a test. It is also possible to calculate other parameters, such as the flow stress and the shape of the curvature, at every stage. The method has been tested on different thicknesses and grades within the range 1.0 to 10 mm with very good results. In this paper the present model is applied to a 6.1 mm hot-rolled high-strength steel from the same batch in three different conditions: directly quenched; quenched and tempered; and quenched, tempered, and levelled. It will be shown that very small differences in material response can be predicted by this method.
Statistical Methods for Fuzzy Data
Viertl, Reinhard
2011-01-01
Statistical data are not always precise numbers, or vectors, or categories. Real data are frequently what is called fuzzy. Examples where this fuzziness is obvious are quality of life data, environmental, biological, medical, sociological and economics data. Also the results of measurements can be best described by using fuzzy numbers and fuzzy vectors respectively. Statistical analysis methods have to be adapted for the analysis of fuzzy data. In this book, the foundations of the description of fuzzy data are explained, including methods on how to obtain the characterizing function of fuzzy m
Directory of Open Access Journals (Sweden)
Iman Mansouri
2017-01-01
Design engineers always face a serious challenge in choosing the type of structure to use in areas with significant seismic activity. The development of fragility curves provides an opportunity for designers to select the structure that will be least fragile. This paper presents an investigation of the seismic vulnerability of both steel and reinforced concrete (RC) moment frames using fragility curves obtained by the HAZUS and statistical methodologies. Fragility curves are used to assess several probability parameters, and it is examined whether the probability of exceedance of the damage limit state is reduced as expected. Nonlinear dynamic analyses of five-, eight-, and twelve-story frames are carried out using Perform 3D. The definition of damage states is based on the descriptions provided by HAZUS, which gives the limit states and the associated interstory drift limits for structures. The fragility curves show that the HAZUS procedure reduces the probability of damage, and this reduction is higher for RC frames. Generally, the RC frames have higher fragility compared to steel frames.
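Fragility curves of the HAZUS type are commonly expressed as lognormal distribution functions of the demand measure; a minimal sketch of that standard form follows (the median and dispersion values are illustrative, not taken from the paper).

```python
import numpy as np
from scipy.stats import norm

def fragility(demand, median, beta):
    """P(damage state is reached or exceeded | demand), lognormal form.

    demand : intensity or drift measure (same units as `median`)
    median : demand at which the exceedance probability is 50%
    beta   : lognormal standard deviation (dispersion)
    """
    return norm.cdf(np.log(np.asarray(demand, dtype=float) / median) / beta)

# Illustrative: probability of exceeding a 'moderate' damage state vs. drift.
drifts = np.linspace(0.001, 0.05, 5)
print(fragility(drifts, median=0.015, beta=0.6))
```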
Ginzburg, Irina
2017-01-01
The effect of the heterogeneity in the soil structure or the nonuniformity of the velocity field on the modeled residence time distribution (RTD) and breakthrough curves is quantified by their moments. While the first moment provides the effective velocity, the second moment is related to the longitudinal dispersion coefficient (kT) in the developed Taylor regime; the third and fourth moments are characterized by their normalized values, skewness (Sk) and kurtosis (Ku), respectively. The purpose of this investigation is to examine the role of the truncation corrections of the numerical scheme in kT, Sk, and Ku because of their interference with the second moment, in the form of the numerical dispersion, and with the higher-order moments, by their definition. Our symbolic procedure is based on the recently proposed extended method of moments (EMM). Originally, the EMM restores any-order physical moments of the RTD or averaged distributions assuming that the solute concentration obeys the advection-diffusion equation in a multidimensional steady-state velocity field, in a streamwise-periodic heterogeneous structure. In our work, the EMM is generalized to the fourth-order-accurate apparent mass-conservation equation in two- and three-dimensional duct flows. The method looks for the solution of the transport equation as the product of a long harmonic wave and a spatially periodic oscillating component; the moments of the given numerical scheme are derived from a chain of the steady-state fourth-order equations at a single cell. This mathematical technique is exemplified for the truncation terms of the two-relaxation-time lattice Boltzmann scheme, using plug and parabolic flow in a straight channel and a cylindrical capillary with the d2Q9 and d3Q15 discrete velocity sets as simple but illustrative examples. The derived symbolic dependencies can be readily extended for advection by another, Newtonian or non-Newtonian, flow profile in open-tubular conduits of any shape. It is
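As a reminder of how those four moments are read off a computed or measured breakthrough curve, here is a generic sketch (not the EMM symbolic procedure itself); the column length symbol L, the uniform time grid, and the Taylor-regime dispersion estimate are assumptions of this illustration.

```python
import numpy as np

def btc_moments(t, c, column_length):
    """Temporal moments of a breakthrough/residence-time curve c(t).

    Returns the effective velocity, a moment-based dispersion estimate,
    and the normalized skewness and kurtosis (kurtosis is 3 for a Gaussian).
    """
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    w = c / np.sum(c)                          # treat the curve as a pdf in t
    m1 = np.sum(w * t)                         # mean arrival time
    mu2 = np.sum(w * (t - m1) ** 2)            # variance
    mu3 = np.sum(w * (t - m1) ** 3)
    mu4 = np.sum(w * (t - m1) ** 4)
    velocity = column_length / m1
    dispersion = 0.5 * mu2 * column_length ** 2 / m1 ** 3   # common Taylor-regime estimate
    return velocity, dispersion, mu3 / mu2 ** 1.5, mu4 / mu2 ** 2
```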
Poppe, L.J.; Eliason, A.H.; Hastings, M.E.
2004-01-01
Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next-to-last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper-water marine environments (e.g., continental rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft
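A compact sketch of the moment-method statistics (after Krumbein and Pettijohn) computed from weight percentages binned at phi midpoints; GSSTAT's extrapolation of the fine tail and its inclusive-graphics branch are not reproduced here, and the example bins are hypothetical.

```python
import numpy as np

def moment_statistics(phi_midpoints, weight_percent):
    """Mean, sorting, skewness, and kurtosis of a grain-size distribution
    by the method of moments, with sizes expressed in phi units."""
    phi = np.asarray(phi_midpoints, dtype=float)
    f = np.asarray(weight_percent, dtype=float) / 100.0
    mean = np.sum(f * phi)
    sorting = np.sqrt(np.sum(f * (phi - mean) ** 2))          # standard deviation
    skewness = np.sum(f * (phi - mean) ** 3) / sorting ** 3
    kurtosis = np.sum(f * (phi - mean) ** 4) / sorting ** 4
    return mean, sorting, skewness, kurtosis

# Hypothetical sample binned at whole-phi classes from -1 to 4 phi.
print(moment_statistics([-1, 0, 1, 2, 3, 4], [5, 10, 25, 30, 20, 10]))
```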
Variational approach to magnetic moments
Energy Technology Data Exchange (ETDEWEB)
Lipparini, E; Stringari, S; Traini, M [Dipartimento di Matematica e Fisica, Libera Universita di Trento, Italy]
1977-11-07
Magnetic moments in nuclei with a spin unsaturated core plus or minus an extra nucleon have been studied using a restricted Hartree-Fock approach. The method yields simple explicit expressions for the deformed ground state and for magnetic moments. Different projection techniques of the HF scheme have been discussed and compared with perturbation theory.
Source-Type Identification Analysis Using Regional Seismic Moment Tensors
Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.
2012-12-01
Waveform inversion to determine the seismic moment tensor is a standard approach to determining the source mechanism of natural and manmade seismicity, and may be used to identify, or discriminate, different types of seismic sources. The successful applications of the regional moment tensor method at the Nevada Test Site (NTS) and to the 2006 and 2009 North Korean nuclear tests (Ford et al., 2009a, 2009b, 2010) show that the method is robust and capable of source-type discrimination at regional distances. The well-separated populations of explosions, earthquakes, and collapses on a Hudson et al. (1989) source-type diagram enable source-type discrimination; however, the question remains whether or not the separation of events is universal in other regions, where we have limited station coverage and knowledge of Earth structure. Ford et al. (2012) have shown that combining regional waveform data and P-wave first motions removes the CLVD-isotropic tradeoff and uniquely discriminates the 2009 North Korean test as an explosion. Therefore, including additional constraints from regional and teleseismic P-wave first motions enables source-type discrimination in regions with limited station coverage. We present moment tensor analysis of earthquakes and explosions (M6) from the Lop Nor and Semipalatinsk test sites for station paths crossing Kazakhstan and Western China. We also present analyses of smaller events from industrial sites. In these sparse coverage situations we combine regional long-period waveforms and high-frequency P-wave polarity from the same stations, as well as from teleseismic arrays, to constrain the source type. Discrimination capability with respect to velocity model and station coverage is examined, and additionally we investigate the velocity-model dependence of vanishing free-surface traction effects on seismic moment tensor inversion of shallow sources and recovery of the explosive scalar moment. Our synthetic data tests indicate that biases in scalar
A Study of Moment Based Features on Handwritten Digit Recognition
Directory of Open Access Journals (Sweden)
Pawan Kumar Singh
2016-01-01
Handwritten digit recognition plays a significant role in many user authentication applications in the modern world. Because handwritten digits are not of the same size, thickness, style, and orientation, these challenges must be faced to resolve the problem. A lot of work has been done for various non-Indic scripts, particularly Roman, but for Indic scripts the research is limited. This paper presents a script-invariant handwritten digit recognition system for identifying digits written in five popular scripts of the Indian subcontinent, namely Indo-Arabic, Bangla, Devanagari, Roman, and Telugu. A 130-element feature set, which is basically a combination of six different types of moments, namely geometric moments, moment invariants, affine moment invariants, Legendre moments, Zernike moments, and complex moments, has been estimated for each digit sample. Finally, the technique is evaluated on the CMATER and MNIST databases using multiple classifiers and, after performing statistical significance tests, it is observed that the Multilayer Perceptron (MLP) classifier outperforms the others. Satisfactory recognition accuracies are attained for all five scripts.
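As an illustration of the simplest of the listed moment families, here is a sketch computing raw geometric moments and the first two Hu moment invariants of a binary digit image with plain NumPy; the paper's full 130-element feature set is of course much richer, and the synthetic image is hypothetical.

```python
import numpy as np

def hu_first_two(image):
    """First two Hu moment invariants of a 2-D grayscale or binary image."""
    img = np.asarray(image, dtype=float)
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalized central moment
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# Tiny synthetic 'digit': the invariants are unchanged under translation.
img = np.zeros((28, 28)); img[8:20, 12:16] = 1.0
print(hu_first_two(img), hu_first_two(np.roll(img, 5, axis=1)))
```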
Analysis on the moment method for determining the moisture transport properties in porous media
International Nuclear Information System (INIS)
Wang, B.X.; Fang, Z.H.
1987-01-01
The authors discuss a new unsteady-state method proposed for determining the moisture transport properties in wet porous media. It is based on measurement of the change in the moment of gravity caused by moisture migration. In addition to its high-speed performance, this method avoids the difficulty of determining a changing moisture content or moisture distribution. On this basis, two particular procedures are devised: a constant heat source method for determining the thermal mass diffusivity and an instantaneous moisture source method for determining the moisture diffusivity.
2002-01-01
Experiment IS358 uses the intense and pure beams of copper isotopes provided by the ISOLDE RILIS (resonance ionization laser ion source). The isotopes are implanted and oriented in the low-temperature nuclear orientation set-up NICOLE. Magnetic moments are measured by β-NMR. Copper (Z=29), with a single proton above the proton-magic nickel isotopes, provides an ideal test ground for precise shell model calculations of magnetic moments and their experimental verification. In the course of our experiments we have already determined the magnetic moments of 67Ni, 67Cu, 68gCu, 69Cu and 71Cu, which provide important information on the magicity of the N=40 subshell closure. In 2001 we plan to conclude our systematic investigations by measuring the magnetic moment of the neutron-deficient isotope 59Cu. This will pave the way for a subsequent study of the magnetic moment of 57Cu with a complementary method.
Directory of Open Access Journals (Sweden)
Alpo Värri
2007-01-01
Full Text Available As we know, singular value decomposition (SVD is designed for computing singular values (SVs of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD.’’ In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal. This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs for ballistocardiogram (BCG data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD)''. In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
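A minimal sketch of the TFM-SVD idea described in the two records above: statistical moments of the time series and of its Fourier magnitude spectrum are arranged in a small fixed-size matrix whose singular values serve as features. The 2-by-4 layout below is illustrative only; the paper's exact matrix structure is not reproduced.

    import numpy as np

    def tfm_svd_features(signal):
        """Singular values of a small matrix of time- and frequency-domain moments."""
        s = np.asarray(signal, dtype=float)
        spec = np.abs(np.fft.rfft(s))           # frequency-domain series (magnitude spectrum)

        def stats(v):                            # simple statistical moments of a series
            m, sd = v.mean(), v.std()
            skew = np.mean(((v - m) / (sd + 1e-12)) ** 3)
            kurt = np.mean(((v - m) / (sd + 1e-12)) ** 4)
            return [m, sd, skew, kurt]

        tf_matrix = np.array([stats(s), stats(spec)])    # 2 x 4: time row, frequency row
        return np.linalg.svd(tf_matrix, compute_uv=False)

    # Example: features of a noisy 1 Hz tone sampled at 100 Hz (stand-in for a BCG trace).
    t = np.arange(0, 10, 0.01)
    x = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    print(tfm_svd_features(x))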
Kopferman, H; Massey, H S W
1958-01-01
Nuclear Moments focuses on the processes, methodologies, reactions, and transformations of molecules and atoms, including magnetic resonance and nuclear moments. The book first offers information on nuclear moments in free atoms and molecules, including theoretical foundations of hyperfine structure, isotope shift, spectra of diatomic molecules, and vector model of molecules. The manuscript then takes a look at nuclear moments in liquids and crystals. Discussions focus on nuclear paramagnetic and magnetic resonance and nuclear quadrupole resonance. The text discusses nuclear moments and nucl
Statistical methods in physical mapping
International Nuclear Information System (INIS)
Nelson, D.O.
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and in past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.
Statistical methods in physical mapping
Energy Technology Data Exchange (ETDEWEB)
Nelson, David O. [Univ. of California, Berkeley, CA (United States)
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and in past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.
Electric dipole moment of diatomic molecules
International Nuclear Information System (INIS)
Rosato, A.
1983-01-01
The electric dipole moment of some diatomic molecules is calculated using the Variational Cellular Method. The results obtained for the CO, HB, HF and LiH molecules are compared with other calculations and with experimental data. It is shown that there is a strong dependence of the electric dipole moment on the geometry of the cells. The possibility of fixing the geometry of the problem by giving the experimental value of the dipole moment is discussed. (Author) [pt]
Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method
Directory of Open Access Journals (Sweden)
W Ghie
2016-04-01
One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method's capability in handling three-dimensional geometry.
Table of Nuclear Electric Quadrupole Moments
International Nuclear Information System (INIS)
Stone, N.J.
2013-12-01
This Table is a compilation of experimental measurements of static electric quadrupole moments of ground states and excited states of atomic nuclei throughout the periodic table. To aid identification of the states, their excitation energy, half-life, spin and parity are given, along with a brief indication of the method and any reference standard used in the particular measurement. Experimental data from all quadrupole moment measurements actually provide a value of the product of the moment and the electric field gradient [EFG] acting at the nucleus. Knowledge of the EFG is thus necessary to extract the quadrupole moment. A single recommended value of the moment is given for each state, based, for each element, wherever possible, upon a standard reference moment for a nuclear state of that element studied in a situation in which the electric field gradient has been well calculated. For several elements one or more subsidiary reference EFG/moment references are required and their use is specified. The literature search covers the period to mid-2013. (author)
Semiclassical shell structure of moments of inertia in deformed Fermi systems
International Nuclear Information System (INIS)
Magner, A.G.; Gzhebinsky, A.M.; Sitdikov, A.S.; Khamzin, A.A.; Bartel, J.
2010-01-01
The collective moment of inertia is derived analytically within the cranking model in the adiabatic mean-field approximation at finite temperature. Using the nonperturbative periodic-orbit theory, the semiclassical shell-structure components of the collective moment of inertia are obtained for any potential well. Their relation to the free-energy shell corrections is found semiclassically, being given through the shell-structure components of the rigid-body moment of inertia of the statistical-equilibrium rotation in terms of short periodic orbits. Shell effects in the moment of inertia disappear exponentially with increasing temperature. For the case of the harmonic-oscillator potential one observes a perfect agreement between semiclassical and quantum shell-structure components of the free energy and the moment of inertia for several critical bifurcation deformations and several temperatures. (author)
International Nuclear Information System (INIS)
Furukawa, Takeshi; Wakui, Takashi; Yang, Xiaofei; Fujita, Tomomi; Imamura, Kei; Yamaguchi, Yasuhiro; Tetsuka, Hiroki; Tsutsui, Yoshiki; Mitsuya, Yosuke; Ichikawa, Yuichi; Ishibashi, Yoko; Yoshida, Naoki; Shirai, Hazuki; Ebara, Yuta; Hayasaka, Miki; Arai, Shino; Muramoto, Sosuke
2013-01-01
Highlights: • Development of a novel nuclear laser spectroscopy method using superfluid helium. • Observation of the Zeeman resonance with the 85Rb beam introduced into helium. • Demonstration of deducing the nuclear spins from the observed resonance spectrum. -- Abstract: We have been developing a novel nuclear laser spectroscopy method "OROCHI" for determining spins and moments of exotic radioisotopes. In this method, we use superfluid helium as a stopping material for energetic radioisotope beams, and the stopped radioisotope atoms are then subjected to in situ laser spectroscopy in superfluid helium. To confirm the feasibility of this method for rare radioisotopes, we carried out a test experiment using a 85Rb beam. In this experiment, we have successfully measured the Zeeman resonance signals from the 85Rb atoms stopped in superfluid helium by laser-RF double resonance spectroscopy. This method is efficient for the measurement of spins and moments of more exotic nuclei.
Directory of Open Access Journals (Sweden)
Marco Antonio Meggiolaro
2015-07-01
A critical issue in multiaxial damage calculation for non-proportional (NP) histories is to find the equivalent stress or strain ranges and mean components associated with each rainflow-counted cycle of the stress (or strain) path. A traditional way to find such ranges is to use enclosing surface methods, which search for convex enclosures, such as balls or prisms, of the entire history path in stress or strain diagrams. These methods only work for relatively simple load histories, since the enclosing surfaces lose information about the original history. This work presents an approach to evaluate equivalent stress and strain ranges in NP histories, called the moment of inertia (MOI) method. It is an integral approach that assumes the path contour in the stress diagram is a homogeneous wire with unit mass. The center of mass of such a wire then gives the mean component of the path, while the moments of inertia of the wire can be used to obtain the equivalent stress or strain ranges. Experimental results obtained from the literature for 13 different multiaxial histories prove the effectiveness of the MOI method in predicting fatigue lives.
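The core of the MOI method described above treats the load path as a homogeneous unit-mass wire. The sketch below computes that wire's center of mass (the mean component) and its polar moment of inertia about the center for a closed path given as points in a 2-D stress diagram; the conversion of the inertia components into an equivalent stress range follows the authors' formulas and is not reproduced here, and the example path is a placeholder.

    import numpy as np

    def path_mass_properties(points):
        """Centroid and polar inertia of a closed polyline treated as a unit-mass wire."""
        p = np.asarray(points, dtype=float)
        p = np.vstack([p, p[0]])                  # close the path
        seg = p[1:] - p[:-1]
        length = np.linalg.norm(seg, axis=1)      # segment lengths
        mass = length / length.sum()              # unit total mass, uniform per unit length
        mid = 0.5 * (p[1:] + p[:-1])              # segment midpoints (segment centroids)

        center = (mass[:, None] * mid).sum(axis=0)        # mean component of the path
        # Polar inertia: rod term about each segment's own center plus parallel-axis term.
        i_polar = np.sum(mass * length ** 2 / 12.0
                         + mass * np.sum((mid - center) ** 2, axis=1))
        return center, i_polar

    # Example: a rectangular cycle in a (normal, shear) stress diagram, values in MPa.
    path = [(0, 0), (200, 0), (200, 100), (0, 100)]
    print(path_mass_properties(path))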
A method of moments to estimate bivariate survival functions: the copula approach
Directory of Open Access Journals (Sweden)
Silvia Angela Osmetti
2013-05-01
In this paper we discuss the problem of parametric and non-parametric estimation of the distributions generated by the Marshall-Olkin copula. This copula comes from the Marshall-Olkin bivariate exponential distribution used in reliability analysis. We generalize this model through the copula and different marginal distributions to construct several bivariate survival functions. The cumulative distribution functions are not absolutely continuous and their unknown parameters often cannot be obtained in explicit form. In order to estimate the parameters we propose an easy procedure based on the moments. This method consists of two steps: in the first step we estimate only the parameters of the marginal distributions, and in the second step we estimate only the copula parameter. This procedure can be used to estimate the parameters of complex survival functions for which it is difficult to find an explicit expression of the mixed moments. Moreover, it is preferred to the maximum likelihood method for its simpler mathematical form, in particular for distributions whose maximum likelihood parameter estimators cannot be obtained in explicit form.
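A schematic version of the two-step moment procedure described above, written for illustration with exponential marginals and a Clayton copula rather than the Marshall-Olkin model of the paper: the marginal rates are recovered from sample means, and the copula parameter is recovered by inverting the Kendall's-tau relation theta = 2*tau/(1 - tau). All distributional choices here are assumptions, not the authors' model.

    import numpy as np

    def kendall_tau(x, y):
        """Sample Kendall's tau (no-ties version) by direct pair counting."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = x.size
        s = 0.0
        for i in range(n):
            s += np.sum(np.sign((x[i] - x[i + 1:]) * (y[i] - y[i + 1:])))
        return s / (n * (n - 1) / 2.0)

    def two_step_moment_fit(x, y):
        """Step 1: marginal parameters from sample moments; step 2: copula parameter."""
        rate_x, rate_y = 1.0 / np.mean(x), 1.0 / np.mean(y)   # assumed exponential marginals
        tau = kendall_tau(x, y)
        theta = 2.0 * tau / (1.0 - tau)                       # assumed Clayton copula
        return rate_x, rate_y, theta

    # Example with synthetic, positively dependent lifetimes.
    rng = np.random.default_rng(1)
    shared = rng.exponential(1.0, 300)
    x = shared + rng.exponential(2.0, 300)
    y = shared + rng.exponential(3.0, 300)
    print(two_step_moment_fit(x, y))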
International Nuclear Information System (INIS)
Ehnder, A.Ya.; Ehnder, I.A.
1999-01-01
A new approach to developing a nonlinear moment method for solving the Boltzmann equation is presented. This approach is based on the invariance of the collision integral with respect to the selection of the basis functions. The Sonin polynomials with the Maxwell weighting function are selected to serve as the basis functions. It is shown that for arbitrary interaction cross sections the matrix elements corresponding to the moments of the nonlinear collision integral are connected by simple recurrence relations, enabling all nonlinear matrix elements to be expressed in terms of the linear ones. As a result, a highly efficient numerical scheme for calculating the nonlinear matrix elements is obtained. The presented approach makes it possible both to calculate relaxation processes in the high-velocity range and to address more complex kinetic problems [ru]
Electric dipole moment of diatomic molecules
International Nuclear Information System (INIS)
Rosato, A.
1983-01-01
The electric dipole moment of some diatomic molecules is calculated using the Variational Cellular Method. The results obtained for the molecules CO, HB, HF and LiH are compared with other calculations and with experimental data. It is shown that there is a strong dependence of the electric dipole moment on the geometry of the cells. The possibility of fixing the geometry of the problem by giving the experimental value of the dipole moment is discussed. (Author) [pt]
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on the assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
Statistical properties of chaotic dynamical systems which exhibit strange attractors
International Nuclear Information System (INIS)
Jensen, R.V.; Oberman, C.R.
1981-07-01
A path integral method is developed for the calculation of the statistical properties of turbulent dynamical systems. The method is applicable to conservative systems which exhibit a transition to stochasticity as well as dissipative systems which exhibit strange attractors. A specific dissipative mapping is considered in detail which models the dynamics of a Brownian particle in a wave field with a broad frequency spectrum. Results are presented for the low order statistical moments for three turbulent regimes which exhibit strange attractors corresponding to strong, intermediate, and weak collisional damping
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
Geometrically non-linear multi-degree-of-freedom (MDOF) systems subject to random excitation are considered. New semi-analytical approximate forward difference equations for the lower-order non-stationary statistical moments of the response are derived from the stochastic differential equations of motion, and the accuracy of these equations is numerically investigated. For stationary excitations, the proposed method computes the stationary statistical moments of the response from the solution of non-linear algebraic equations.
Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A
2011-01-01
Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.
A Special Variant of the Moment Method for Fredholm Integral Equations of the Second Kind
Directory of Open Access Journals (Sweden)
S. A. Solov’eva
2015-01-01
We consider the linear Fredholm integral equation of the second kind, where the kernel and the free term are smooth functions; the unknown function is sought in the same class. Exact and approximate methods for the solution of linear Fredholm integral equations of the second kind are well developed. However, classical methods do not take into account the structural properties of the kernel and the free term of the equation. In this paper we develop and justify a special variant of the moment method to solve this equation, which takes into account the differential properties of the initial data. The paper furthers the studies of N.S. Gabbasov, I.P. Kasakina, and S.A. Solov'eva. To prove the theorems we use approximation theory, a version of the general theory of approximate methods of analysis suggested by B.G. Gabdulkhayev, and methods of functional analysis. In addition, we use N.S. Gabbasov's ideas and methods from papers devoted to Fredholm equations of the first kind, as well as N.S. Gabbasov and S.A. Solov'eva's investigations of Fredholm equations of the third kind in the space of distributions. The first part of the paper provides a description of the basic function space and elements of the theory of approximation in it. In the second part we propose and theoretically justify a generalized moment method. We demonstrate that improving the differential properties of the initial data improves the approximation accuracy. Since, in practice, the approximate equations are themselves solved, as a rule, only approximately, we prove the stability and causality of the proposed method. The resulting estimate is in good agreement with the estimate for the ordinary moment method for equations of the second kind in the space of continuous functions. In the final section we show that the developed method is optimal in order of accuracy among all polynomial projection methods for solving Fredholm integral equations of the second kind.
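To make the moment-method idea above concrete, the sketch below solves a second-kind Fredholm equation u(x) = f(x) + lambda * integral_0^1 K(x,t) u(t) dt by expanding u in monomials and imposing moment (Galerkin) conditions against the same basis, with integrals evaluated by Gauss-Legendre quadrature. This generic polynomial projection is only an illustration and is not the special variant of the paper, which exploits the smoothness structure of the data.

    import numpy as np

    def fredholm_moment_solve(kernel, f, lam=1.0, degree=4, nquad=32):
        """Solve u(x) = f(x) + lam * int_0^1 K(x,t) u(t) dt by a polynomial moment method."""
        z, w = np.polynomial.legendre.leggauss(nquad)
        xq = 0.5 * (z + 1.0)          # quadrature nodes mapped to [0, 1]
        wq = 0.5 * w
        phi = np.vander(xq, degree + 1, increasing=True)    # phi_j(x) = x**j at the nodes

        K = kernel(xq[:, None], xq[None, :])                 # K(x_i, t_j)
        # Moment conditions: <phi_i, u - lam*(K u) - f> = 0 for i = 0..degree.
        gram = phi.T @ (wq[:, None] * phi)                        # <phi_i, phi_j>
        kmat = phi.T @ (wq[:, None] * (K * wq[None, :]) @ phi)    # <phi_i, int K phi_j dt>
        rhs = phi.T @ (wq * f(xq))                                # <phi_i, f>
        coeff = np.linalg.solve(gram - lam * kmat, rhs)
        return lambda x: np.polynomial.polynomial.polyval(x, coeff)

    # Check against a case with known solution: K(x,t) = x*t, f(x) = 2x/3, lam = 1 -> u(x) = x.
    u = fredholm_moment_solve(lambda x, t: x * t, lambda x: 2.0 * x / 3.0)
    print(u(np.array([0.0, 0.5, 1.0])))   # should be close to [0, 0.5, 1]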
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
Directory of Open Access Journals (Sweden)
Thomas Acher
2014-12-01
A simulation model for 3D polydisperse bubble column flows in an Eulerian/Eulerian framework is presented. A computationally efficient and numerically stable algorithm is created by making use of quadrature method of moments (QMOM) functionalities, in conjunction with appropriate breakup and coalescence models. To account for size-dependent bubble motion, the constituent moments of the bubble size distribution function are transported with individual velocities. Validation of the simulation results against the experimental and numerical data of Hansen [1] shows the capability of the present model to accurately predict complex gas-liquid flows.
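The heart of the QMOM approach used above is the inversion of a few low-order moments of the bubble size distribution into quadrature weights and abscissas. The sketch below shows the classical two-node inversion from the first four moments via the monic orthogonal polynomial of degree two; production codes use more nodes and more robust algorithms (product-difference, Wheeler), and the transport of the moments themselves is not shown.

    import numpy as np

    def two_node_qmom(m):
        """Invert moments m0..m3 into two quadrature weights/abscissas."""
        m0, m1, m2, m3 = m
        # Monic orthogonal polynomial P2(x) = x^2 - c1*x - c0 satisfies
        # <P2, 1> = 0 and <P2, x> = 0 with respect to the unknown distribution:
        #   m2 = c1*m1 + c0*m0,   m3 = c1*m2 + c0*m1.
        c1, c0 = np.linalg.solve([[m1, m0], [m2, m1]], [m2, m3])
        x = np.roots([1.0, -c1, -c0])             # abscissas = roots of P2
        # Weights from matching m0 and m1: w1 + w2 = m0, w1*x1 + w2*x2 = m1.
        w = np.linalg.solve(np.vander(x, 2, increasing=True).T, [m0, m1])
        return w, x

    # Example: moments of a distribution with equal mass at sizes 1 and 3.
    moments = [1.0, 2.0, 5.0, 14.0]
    print(two_node_qmom(moments))   # expected weights ~[0.5, 0.5], abscissas ~[1, 3]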
Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John
2017-01-01
Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation bet...
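For reference, the seismic moments obtained from such spectral fits are conventionally converted to moment magnitude with the Hanks-Kanamori relation; the one-liner below uses the IASPEI form with the seismic moment M0 in N*m. The spectral fitting itself and the ML-MW scaling regression of the paper are not reproduced.

    import math

    def moment_magnitude(m0_newton_meter):
        """Hanks-Kanamori moment magnitude from seismic moment M0 in N*m (IASPEI form)."""
        return (2.0 / 3.0) * (math.log10(m0_newton_meter) - 9.1)

    # Example: M0 = 1e16 N*m corresponds to Mw ~ 4.6.
    print(round(moment_magnitude(1e16), 2))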
The dipole moments of the linear polycarbon monosulfides
International Nuclear Information System (INIS)
Murakami, Akinori
1989-01-01
The dipole moments of the linear polycarbon monosulfides, the CS, C2S and C3S molecules (radicals), were calculated by the ab initio SCF-CI method. The equilibrium geometries of the CnS molecules were obtained by the MP3 method using the 6-31G** basis set. Basis sets from the split-valence type (MIDI-4) to Huzinaga's well-tempered extended type (WT) were used to evaluate the dipole moments. Final results were obtained using the WT+2d basis set and a CI calculation. The calculated dipole moment of the CS molecule, 1.96 debye, is in good agreement with the experimental one. The dipole moment of the C2S radical is calculated to be 2.81 debye, and that of the C3S molecule 3.66 debye. The calculated dipole moments of the CnS molecules are expected to be accurate to within 0.1 debye (5%).
Statistical Methods for Stochastic Differential Equations
Kessler, Mathieu; Sorensen, Michael
2012-01-01
The seventh volume in the SemStat series, Statistical Methods for Stochastic Differential Equations presents current research trends and recent developments in statistical methods for stochastic differential equations. Written to be accessible to both new students and seasoned researchers, each self-contained chapter starts with introductions to the topic at hand and builds gradually towards discussing recent research. The book covers Wiener-driven equations as well as stochastic differential equations with jumps, including continuous-time ARMA processes and COGARCH processes. It presents a sp
Pieters, Jurgen
2001-01-01
'Moments of Negotiation' offers the first book-length and in-depth analysis of the New Historicist reading method, which the American Shakespeare scholar Stephen Greenblatt introduced at the beginning of the 1980s. Ever since, Greenblatt has been hailed as the prime representative of this movement,
Invariant moments based convolutional neural networks for image analysis
Directory of Open Access Journals (Sweden)
Vijayalakshmi G.V. Mahesh
2017-01-01
The paper proposes a method using a convolutional neural network to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in the utilization of initial trainable convolution kernel coefficients derived from Zernike moments by varying the moment order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting the shape information suitable for hierarchical feature learning to carry out image analysis and classification. Furthermore, the results showed an outstanding performance of Zernike-moment-based kernels in terms of computation time and classification accuracy.
Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood
Directory of Open Access Journals (Sweden)
Yunquan Song
2014-01-01
Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.
Simple statistical methods for software engineering data and patterns
Pandian, C Ravindranath
2015-01-01
Although there are countless books on statistics, few are dedicated to the application of statistical methods to software engineering. Simple Statistical Methods for Software Engineering: Data and Patterns fills that void. Instead of delving into overly complex statistics, the book details simpler solutions that are just as effective and connect with the intuition of problem solvers.Sharing valuable insights into software engineering problems and solutions, the book not only explains the required statistical methods, but also provides many examples, review questions, and case studies that prov
Application of blended learning in teaching statistical methods
Directory of Open Access Journals (Sweden)
Barbara Dębska
2012-12-01
The paper presents the application of a hybrid method (blended learning, linking traditional education with on-line education) to teach selected problems of mathematical statistics. This includes teaching the application of mathematical statistics to evaluate laboratory experimental results. An on-line statistics course was developed to form an integral part of the module 'methods of statistical evaluation of experimental results'. The course complies with the principles outlined in the Polish National Framework of Qualifications with respect to the scope of knowledge, skills and competencies that students should have acquired at course completion. The paper presents the structure of the course and the educational content provided through multimedia lessons made accessible on the Moodle platform. Test results of students who took courses taught with the traditional method and of students who took courses taught with the hybrid method were compared and discussed to evaluate the effectiveness of the hybrid method of teaching relative to the traditional method.
Bollinger, Sarah; Kreuter, Matthew W.
2012-01-01
In a randomized experiment using moment-to-moment audience analysis methods, we compared women's emotional responses with a narrative versus informational breast cancer video. Both videos communicated three key messages about breast cancer: (i) understand your breast cancer risk, (ii) talk openly about breast cancer and (iii) get regular…
Statistical analysis of partial reduced width distributions
International Nuclear Information System (INIS)
Tran Quoc Thuong.
1973-01-01
The aim of this study was to develop rigorous methods for analysing experimental event distributions according to a chi-squared law and to check whether the number of degrees of freedom ν is compatible with the value 1 for the reduced neutron width distribution. Two statistical methods were used (the maximum-likelihood method and the method of moments); it was shown, in a few particular cases, that ν is compatible with 1. The difference between ν and 1, if it exists, should not exceed 3%. These results confirm the validity of the compound nucleus model [fr]
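A minimal sketch of the method-of-moments step mentioned above: if the reduced widths follow a chi-squared law with ν degrees of freedom (up to an overall scale), then mean^2/variance = ν/2, so ν can be estimated from the first two sample moments. The maximum-likelihood estimator used alongside it in the paper requires solving a digamma equation and is not shown; the example data are synthetic.

    import numpy as np

    def nu_from_moments(widths):
        """Method-of-moments estimate of the chi-squared degrees of freedom of scaled widths."""
        g = np.asarray(widths, dtype=float)
        # For G = c * X with X ~ chi2(nu): E[G] = c*nu, Var[G] = 2*c^2*nu, so nu = 2*E^2/Var.
        return 2.0 * g.mean() ** 2 / g.var(ddof=1)

    # Example: synthetic Porter-Thomas-like widths (nu = 1); the estimate should be near 1.
    rng = np.random.default_rng(0)
    sample = rng.chisquare(1.0, size=2000)
    print(nu_from_moments(sample))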
Development of a Research Methods and Statistics Concept Inventory
Veilleux, Jennifer C.; Chapman, Kate M.
2017-01-01
Research methods and statistics are core courses in the undergraduate psychology major. To assess learning outcomes, it would be useful to have a measure that assesses research methods and statistical literacy beyond course grades. In two studies, we developed and provided initial validation results for a research methods and statistical knowledge…
On multipole moments in general relativity
International Nuclear Information System (INIS)
Hoenselaers, C.
1986-01-01
In general situations involving gravitational waves, the question of multipole moments in general relativity restricts the author to stationary axisymmetric situations. Here it has been shown that multipole moments, a set of numbers defined at spatial infinity, as far away from the source as possible, determine a solution of Einstein's equations uniquely. With the rather powerful methods for generating solutions one might hope to obtain solutions with predefined multipole moments. Before doing so, however, one needs an efficient algorithm for calculating the moments of a given solution. Chapter 2 deals with a conjecture pertaining to such a calculational procedure and shows it to be untrue. There is another context in which multipole moments are important. Consider a system composed of several objects. To separate, if possible, the various parts of their interaction, one needs a definition of multipole moments for individual members of a many-body system. In spite of the fact that there is no definition for individual moments, with the exception of mass and angular momentum, Chapter 3 shows what can be done for the double Kerr solution. The authors identify various terms in the interaction of two aligned Kerr objects and show that the gravitational spin-spin interaction is indeed proportional to the product of the angular momenta.
Statistical error estimation of the Feynman-α method using the bootstrap method
International Nuclear Information System (INIS)
Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho
2016-01-01
Applicability of the bootstrap method is investigated for estimating the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time to repeat the measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
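A generic non-parametric bootstrap of the kind investigated above: the statistical error (standard deviation and a percentile confidence interval) of a statistic is estimated by resampling a single set of measured values with replacement. The actual resampling of reactor-noise count data in the Feynman-α analysis follows the paper and is not reproduced; the statistic and data below are placeholders.

    import numpy as np

    def bootstrap_error(values, statistic=np.mean, n_boot=2000, seed=0):
        """Bootstrap standard deviation and 95% percentile interval of a statistic."""
        rng = np.random.default_rng(seed)
        values = np.asarray(values, dtype=float)
        boot = np.array([statistic(rng.choice(values, size=values.size, replace=True))
                         for _ in range(n_boot)])
        return boot.std(ddof=1), np.percentile(boot, [2.5, 97.5])

    # Example: placeholder "Y values" from a single noise measurement split into sub-blocks.
    y_values = np.array([0.82, 0.79, 0.85, 0.81, 0.78, 0.84, 0.80, 0.83])
    print(bootstrap_error(y_values))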
Effects of particle-number projection on the nuclear moment of inertia
International Nuclear Information System (INIS)
Rozmej, P.
1976-01-01
The formalism of the moment of inertia in the cranking model and BCS theory has been extended to partially particle-number-projected BCS wave functions. The ground-state moments of inertia obtained by this method are slightly greater than those calculated by the BCS method. A smooth growth of the moments of inertia with diminishing pairing strength constant has been obtained. (author)
Statistical Models and Methods for Lifetime Data
Lawless, Jerald F
2011-01-01
Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,
Statistical Approaches for Spatiotemporal Prediction of Low Flows
Fangmann, A.; Haberlandt, U.
2017-12-01
An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion into model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables, averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three involves a spatiotemporal prediction of an index value, method four the estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows a high performance in the near future period, but since it relies on a stationary distribution, its application for prediction of far future changes may be
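Method four above adapts a regional frequency distribution through at-site L-moments. The sketch below computes the first three sample L-moments from probability-weighted moments using Hosking's unbiased estimators; fitting and rescaling a regional distribution with them is not shown, and the example flow values are placeholders.

    import numpy as np

    def sample_l_moments(data):
        """First three sample L-moments (l1, l2, l3) via probability-weighted moments."""
        x = np.sort(np.asarray(data, dtype=float))
        n = x.size
        j = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((j - 1) / (n - 1) * x) / n
        b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
        l1 = b0                      # location (mean)
        l2 = 2 * b1 - b0             # L-scale
        l3 = 6 * b2 - 6 * b1 + b0    # enters the L-skewness ratio l3/l2
        return l1, l2, l3

    # Example: annual low-flow index values (placeholder numbers, in m^3/s).
    flows = [4.1, 3.2, 5.0, 2.8, 3.9, 4.4, 3.1, 2.5, 4.8, 3.6]
    print(sample_l_moments(flows))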
Statistical methods in spatial genetics
DEFF Research Database (Denmark)
Guillot, Gilles; Leblois, Raphael; Coulon, Aurelie
2009-01-01
The joint analysis of spatial and genetic data is rapidly becoming the norm in population genetics. More and more studies explicitly describe and quantify the spatial organization of genetic variation and try to relate it to underlying ecological processes. As it has become increasingly difficult to keep abreast of the latest methodological developments, we review the statistical toolbox available to analyse population genetic data in a spatially explicit framework. We mostly focus on statistical concepts but also discuss practical aspects of the analytical methods, highlighting not only
Dynamical moments of inertia for superdeformed nuclei
International Nuclear Information System (INIS)
Obikhod, T.V.
1995-01-01
The method of quantum groups has been applied to the calculation of the dynamical moments of inertia for the yrast superdeformed bands in 194Hg and 192Hg, as well as to the calculation of the dynamical moments of inertia of the superdeformed bands in 150Gd and 148Gd.
Statistical distribution of the local purity in a large quantum system
International Nuclear Information System (INIS)
De Pasquale, A; Pascazio, S; Facchi, P; Giovannetti, V; Parisi, G; Scardicchio, A
2012-01-01
The local purity of large many-body quantum systems can be studied by following a statistical mechanical approach based on a random matrix model. Restricting the analysis to the case of global pure states, this method proved to be successful, and a full characterization of the statistical properties of the local purity was obtained by computing the partition function of the problem. Here we generalize these techniques to the case of global mixed states. In this context, by uniformly sampling the phase space of states with assigned global mixedness, we determine the exact expression of the first two moments of the local purity and a general expression for the moments of higher order. This generalizes previous results obtained for globally pure configurations. Furthermore, through the introduction of a partition function for a suitable canonical ensemble, we compute the approximate expression of the first moment of the marginal purity in the high-temperature regime. In the process, we establish a formal connection with the theory of quantum twirling maps that provides an alternative, possibly fruitful, way of performing the calculation. (paper)
Effective moments of inertia and spin cut off parameters in Hf isotopes
International Nuclear Information System (INIS)
Razavi, R.; Sharifzadeh, N.; Farahmand, M.R.
2011-01-01
In all statistical theories the nuclear level density is the most characteristic quantity and plays a major role in the study of nuclear structure. Most experimental data on nuclear level density have been analyzed with analytical functions of the level density. On the basis of statistical models, the effective moments of inertia and spin cut-off parameters have been determined for the 176Hf, 178Hf and 180Hf nuclei from extensive and complete level schemes and neutron resonance densities at low excitation energies. Then, the moments of inertia of these nuclei have been determined using the nuclear rotational model. The results have been compared with their corresponding rigid-body values.
Energy Technology Data Exchange (ETDEWEB)
Alwan, Aravind; Aluru, N.R.
2013-12-15
This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
International Nuclear Information System (INIS)
Alwan, Aravind; Aluru, N.R.
2013-01-01
This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems
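A one-dimensional toy version of the moment-matching idea in the two records above: kernels are centered at the data points and their weights are chosen, by least squares, so that the mixture reproduces the empirical zeroth, first, and second moments. This only illustrates the contrast with plain KDE, which fixes all weights to 1/n; it is not the authors' KMM algorithm, and a real implementation would also enforce non-negative weights.

    import numpy as np

    def moment_matched_kernel_weights(data, bandwidth):
        """Kernel weights matching normalization, mean, and second moment of the sample."""
        x = np.asarray(data, dtype=float)
        # Moments of a Gaussian-kernel mixture: sum w_i, sum w_i*x_i, sum w_i*(x_i^2 + h^2).
        constraints = np.vstack([np.ones_like(x), x, x ** 2 + bandwidth ** 2])
        targets = np.array([1.0, x.mean(), np.mean(x ** 2)])
        # Minimum-norm solution of the underdetermined moment constraints.
        w, *_ = np.linalg.lstsq(constraints, targets, rcond=None)
        return w

    def mixture_pdf(grid, data, weights, bandwidth):
        """Evaluate the weighted Gaussian-kernel mixture on a grid."""
        g = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / bandwidth) ** 2)
        g /= bandwidth * np.sqrt(2.0 * np.pi)
        return g @ weights

    rng = np.random.default_rng(2)
    sample = rng.normal(0.0, 1.0, 40)
    w = moment_matched_kernel_weights(sample, bandwidth=0.4)
    grid = np.linspace(-4, 4, 9)
    print(mixture_pdf(grid, sample, w, 0.4))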
Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.
Abbasi, Mahdi
2014-01-01
The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure that can be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
Exact collisional moments for plasma fluid theories
Pfefferle, David; Hirvijoki, Eero; Lingam, Manasvi
2017-10-01
The velocity-space moments of the often troublesome nonlinear Landau collision operator are expressed exactly in terms of multi-index Hermite-polynomial moments of the distribution functions. The collisional moments are shown to be generated by derivatives of two well-known functions, namely the Rosenbluth-MacDonald-Judd-Trubnikov potentials for a Gaussian distribution. The resulting formula has a nonlinear dependency on the relative mean flow of the colliding species normalised to the root-mean-square of the corresponding thermal velocities, and a bilinear dependency on densities and higher-order velocity moments of the distribution functions, with no restriction on temperature, flow or mass ratio of the species. The result can be applied to both the classic transport theory of plasmas, that relies on the Chapman-Enskog method, as well as to deriving collisional fluid equations that follow Grad's moment approach. As an illustrative example, we provide the collisional ten-moment equations with exact conservation laws for momentum- and energy-transfer rate.
Statistical learning methods in high-energy and astrophysics analysis
Energy Technology Data Exchange (ETDEWEB)
Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)
2004-11-21
We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.
Statistical learning methods in high-energy and astrophysics analysis
International Nuclear Information System (INIS)
Zimmermann, J.; Kiesling, C.
2004-01-01
We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application
Assembling Transgender Moments
Greteman, Adam J.
2017-01-01
In this article, the author seeks to assemble moments--scholarly, popular, and aesthetic--in order to explore the possibilities that emerge as moments collect in education's encounters with the needs, struggles, and possibilities of transgender lives and practices. Assembling moments, the author argues, illustrates the value of "moments"…
LEMS: application of the method to study the static quadrupole moment of the K=35/2 isomer in 179W
International Nuclear Information System (INIS)
Neyens, G.; Vyvey, K.; Byrne, A.P.; Dracoulis, G.D.; Blaha, P.
1997-01-01
The level mixing spectroscopy (LEMS) method was applied for the first time to the study of the static quadrupole moments of high-K isomers in the A∼180 mass region. Results from a preliminary experiment on the static quadrupole moment of the 35/2- (750 ns) isomer in 179W give an upper limit for its value of Q2 < 0.343. (orig.). With 1 fig.
Moments of the Wigner delay times
International Nuclear Information System (INIS)
Berkolaiko, Gregory; Kuipers, Jack
2010-01-01
The Wigner time delay is a measure of the time spent by a particle inside the scattering region of an open system. For chaotic systems, the statistics of the individual delay times (whose average is the Wigner time delay) are thought to be well described by random matrix theory. Here we present a semiclassical derivation showing the validity of random matrix results. In order to simplify the semiclassical treatment, we express the moments of the delay times in terms of correlation functions of scattering matrices at different energies. In the semiclassical approximation, the elements of the scattering matrix are given in terms of the classical scattering trajectories, requiring one to study correlations between sets of such trajectories. We describe the structure of correlated sets of trajectories and formulate the rules for their evaluation to the leading order in inverse channel number. This allows us to derive a polynomial equation satisfied by the generating function of the moments. Along with showing the agreement of our semiclassical results with the moments predicted by random matrix theory, we infer that the scattering matrix is unitary to all orders in the semiclassical approximation.
Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.
2018-01-01
We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics in Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.
Statistical methods and their applications in constructional engineering
International Nuclear Information System (INIS)
1977-01-01
An introduction to the basic terms of statistics is followed by a discussion of elements of probability theory, common discrete and continuous distributions, simulation methods, statistical structural dynamics, and a cost-benefit analysis of the methods introduced. (RW) [de]
He, Ping
2012-01-01
The long-standing puzzle surrounding the statistical mechanics of self-gravitating systems has not yet been solved successfully. We formulate a systematic theoretical framework of entropy-based statistical mechanics for spherically symmetric collisionless self-gravitating systems. We use an approach that is very different from that of the conventional statistical mechanics of short-range interaction systems. We demonstrate that the equilibrium states of self-gravitating systems consist of both mechanical and statistical equilibria, with the former characterized by a series of velocity-moment equations and the latter by statistical equilibrium equations, which should be derived from the entropy principle. The velocity-moment equations of all orders are derived from the steady-state collisionless Boltzmann equation. We point out that the ergodicity is invalid for the whole self-gravitating system, but it can be re-established locally. Based on the local ergodicity, using Fermi-Dirac-like statistics, with the non-degenerate condition and the spatial independence of the local microstates, we rederive the Boltzmann-Gibbs entropy. This is consistent with the validity of the collisionless Boltzmann equation, and should be the correct entropy form for collisionless self-gravitating systems. Apart from the usual constraints of mass and energy conservation, we demonstrate that the series of moment or virialization equations must be included as additional constraints on the entropy functional when performing the variational calculus; this is an extension to the original prescription by White & Narayan. Any possible velocity distribution can be produced by the statistical-mechanical approach that we have developed with the extended Boltzmann-Gibbs/White-Narayan statistics. Finally, we discuss the questions of negative specific heat and ensemble inequivalence for self-gravitating systems.
Evolution of truncated moments of singlet parton distributions
International Nuclear Information System (INIS)
Forte, S.; Magnea, L.; Piccione, A.; Ridolfi, G.
2001-01-01
We define truncated Mellin moments of parton distributions by restricting the integration range over the Bjorken variable to the experimentally accessible subset x0 ≤ x ≤ 1 of the allowed kinematic range 0 ≤ x ≤ 1. We derive the evolution equations satisfied by truncated moments in the general (singlet) case in terms of an infinite triangular matrix of anomalous dimensions which couple each truncated moment to all higher moments with orders differing by integers. We show that the evolution of any moment can be determined to arbitrarily good accuracy by truncating the system of coupled moments to a sufficiently large but finite size, and show how the equations can be solved in a way suitable for numerical applications. We discuss in detail the accuracy of the method in view of applications to precision phenomenology.
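In a notation assumed here for concreteness (the abstract does not fix symbols), the truncated moments and the schematic form of their coupled evolution read

    q_N(x_0, Q^2) \equiv \int_{x_0}^{1} x^{N-1}\, q(x, Q^2)\, dx ,
    \qquad
    \frac{d\, q_N(x_0, Q^2)}{d \ln Q^2} = \sum_{k \ge 0} c_k(N)\, q_{N+k}(x_0, Q^2) ,

where the coefficients c_k(N) stand for the entries of the infinite triangular anomalous-dimension matrix described above (with the strong coupling absorbed), and in the singlet case each q_N is a vector in flavour space.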
International Nuclear Information System (INIS)
Zhu Zhenghe; Luo Deli; Feng Kaiming
2013-01-01
The present work calculates the magnetic thermodynamic functions, i.e. the energy, intensity of magnetization, enthalpy, entropy and Gibbs function, for the nuclear magnetic moments of T, D and the neutron n at 2 T and at 1, 50, 100 and 150 K from partition functions. It is shown that magnetic saturation of a thermonuclear plasma does not occur easily because the nuclear magneton is only about 10^-3 of the Bohr magneton. The work done by the magnetic field is considerable. (authors)
Statistical probability tables CALENDF program
International Nuclear Information System (INIS)
Ribon, P.
1989-01-01
The purpose of the probability tables is to obtain a dense data representation and to calculate integrals by quadratures. They are mainly used in the USA for calculations by Monte Carlo and in the USSR and Europe for self-shielding calculations by the sub-group method. The moment probability tables, in addition to providing a more substantial mathematical basis and calculation methods, are adapted to condensation and mixture calculations, which are the crucial operations for reactor physics specialists. However, their extension is limited by the statistical hypothesis they imply. Efforts are being made to remove this obstacle, at the cost, it must be said, of greater complexity.
Craven, Galen T.; Nitzan, Abraham
2018-01-01
Statistical properties of Brownian motion that arise by analyzing, separately, trajectories over which the system energy increases (upside) or decreases (downside) with respect to a threshold energy level are derived. This selective analysis is applied to examine transport properties of a nonequilibrium Brownian process that is coupled to multiple thermal sources characterized by different temperatures. Distributions, moments, and correlation functions of a free particle that occur during upside and downside events are investigated for energy activation and energy relaxation processes and also for positive and negative energy fluctuations from the average energy. The presented results are sufficiently general and can be applied without modification to the standard Brownian motion. This article focuses on the mathematical basis of this selective analysis. In subsequent articles in this series, we apply this general formalism to processes in which heat transfer between thermal reservoirs is mediated by activated rate processes that take place in a system bridging them.
Online Statistics Labs in MSW Research Methods Courses: Reducing Reluctance toward Statistics
Elliott, William; Choi, Eunhee; Friedline, Terri
2013-01-01
This article presents results from an evaluation of an online statistics lab as part of a foundations research methods course for master's-level social work students. The article discusses factors that contribute to an environment in social work that fosters attitudes of reluctance toward learning and teaching statistics in research methods…
Spatial analysis statistics, visualization, and computational methods
Oyana, Tonny J
2015-01-01
An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...
Exchange currents for hypernuclear magnetic moments
International Nuclear Information System (INIS)
Saito, K.; Oka, M.; Suzuki, T.
1997-01-01
The meson (K and π) exchange currents for the hypernuclear magnetic moments are calculated using the effective Lagrangian method. The seagull diagram, the mesonic diagram and the Σ^0-excitation diagram are considered. The Λ-N exchange magnetic moments for ^5_ΛHe and A=6 hypernuclei are calculated employing the harmonic oscillator shell model. It is found that the two-body correction is about -9% of the single particle value for ^5_ΛHe. The π exchange current, induced only in the Σ^0-excitation diagram, is found to give the dominant contribution to the isovector magnetic moments of hypernuclei with A=6. (orig.)
Searches for permanent electric dipole moments in Radium isotopes
Willmann, L.; Jungmann, K.; Wilschut, H.W.
2010-01-01
Permanent electric dipole moments are uniquely sensitive to sources of T and P violation in fundamental interactions. In particular, radium isotopes offer the largest intrinsic sensitivity. We want to explore the prospects for utilizing the high-intensity beams from HIE-ISOLDE to boost the statistical
Cluster Statistics of BTW Automata
International Nuclear Information System (INIS)
Ajanta Bhowal Acharyya
2011-01-01
The cluster statistics of BTW automata in the SOC states are obtained by extensive computer simulation. Various moments of the clusters are calculated, and a few results are compared with earlier available numerical estimates and exact results. Reasonably good agreement is observed. An extended statistical analysis has been made. (author)
Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.
Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin
2017-09-01
In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPNs of different TDI-CISs while maintaining image details without any auxiliary equipment.
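A minimal sketch of column-wise moment matching, the core operation behind such FPN correction, is given below; it matches each column's first two moments to the global image moments and omits the SCF/BF pre-filtering and moving-window refinement described in the paper (the function name is illustrative).

```python
import numpy as np

def column_moment_matching(img, eps=1e-12):
    """Match each column's mean/std to the global image moments.

    Illustrative FPN correction step only: the SCF/BF filters and the
    moving-window variant of the paper are not reproduced here.
    """
    img = img.astype(np.float64)
    g_mean, g_std = img.mean(), img.std()
    col_mean = img.mean(axis=0)          # per-column first moment
    col_std = img.std(axis=0)            # per-column second moment
    return (img - col_mean) / (col_std + eps) * g_std + g_mean
```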
Statistical-mechanical entropy by the thin-layer method
International Nuclear Information System (INIS)
Feng, He; Kim, Sung Won
2003-01-01
't Hooft first studied the statistical-mechanical entropy of a scalar field in a Schwarzschild black hole background by the brick-wall method and hinted that the statistical-mechanical entropy is the statistical origin of the Bekenstein-Hawking entropy of the black hole. However, according to our viewpoint, the statistical-mechanical entropy is only a quantum correction to the Bekenstein-Hawking entropy of the black hole. The brick-wall method, based on thermal equilibrium at a large scale, cannot be applied to cases out of equilibrium such as a nonstationary black hole. The statistical-mechanical entropy of a scalar field in a nonstationary black hole background is calculated by the thin-layer method. The condition of local equilibrium near the horizon of the black hole is used as a working postulate and is maintained for a black hole which evaporates slowly enough and whose mass is far greater than the Planck mass. The statistical-mechanical entropy is also proportional to the area of the black hole horizon. The difference from the stationary black hole is that the result relies on a time-dependent cutoff.
Method for statistical data analysis of multivariate observations
Gnanadesikan, R
1997-01-01
A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of inte
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
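As a small illustration of descriptive statistics on interval data, the sample mean of interval-valued measurements is itself an interval obtained from the endpoint means; harder statistics (variance, percentiles) generally require the optimization-based algorithms discussed in the report and are not shown here.

```python
import numpy as np

def interval_mean(intervals):
    """Bounds on the sample mean when each measurement is an interval (lo, hi)."""
    lo = np.array([a for a, b in intervals], dtype=float)
    hi = np.array([b for a, b in intervals], dtype=float)
    return lo.mean(), hi.mean()

# three interval-valued measurements (hypothetical data)
print(interval_mean([(1.0, 1.2), (0.9, 1.1), (1.05, 1.3)]))  # (0.983..., 1.2)
```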
Analysis of Statistical Methods Currently used in Toxicology Journals.
Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min
2014-09-01
Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either a normality or an equal variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
Induced Magnetic Moment in Defected Single-Walled Carbon Nanotubes
International Nuclear Information System (INIS)
Liu Hong
2006-01-01
The existence of a large induced magnetic moment in a defective single-walled carbon nanotube (SWNT) is predicted using the Green's function method. Specific to this magnetic moment of the defective SWNT is its magnitude, which is several orders of magnitude larger than that of a perfect SWNT. The induced magnetic moment also shows certain remarkable features. Therefore, we suggest that two pair-defect orientations in a SWNT can be distinguished in experiment through the direction of the induced magnetic moment at some specific energy points
Ginzburg, Irina; Vikhansky, Alexander
2018-05-01
The extended method of moments (EMM) is elaborated in recursive algorithmic form for the prediction of the effective diffusivity, the Taylor dispersion dyadic and the associated longitudinal high-order coefficients in mean-concentration profiles and residence-time distributions. The method applies in any streamwise-periodic stationary d-dimensional velocity field resolved in a piecewise continuous heterogeneous porosity field. It is demonstrated that the EMM reduces to the method of moments and to the volume-averaging formulation in a microscopic velocity field and in homogeneous soil, respectively. The EMM simultaneously constructs two systems of moments, the spatial and the temporal, without resorting to solving the high-order upscaled PDE. At the same time, the EMM is supported with a reconstruction of the distribution from its moments, allowing one to visualize the deviation from the classical ADE solution. The EMM can be handled by any linear advection-diffusion solver with an explicit mass source and a diffusive-flux jump condition on the solid boundary and permeable interface. The prediction of the first four moments is decisive in the optimization of the dispersion, asymmetry, peakedness and heavy tails of the solute distributions, through an adequate design of composite materials, wetlands, chemical devices or oil recovery. The symbolic solutions for dispersion, skewness and kurtosis are constructed in basic configurations: a diffusion process and Darcy flow through two porous blocks in "series", straight and radial Poiseuille flow, porous flow governed by the Stokes-Brinkman-Darcy channel equation, and a fracture surrounded by a penetrable diffusive matrix or embedded in porous flow. We examine the dependency of the moments upon porosity contrast, aspect ratio, Péclet and Darcy numbers, and also their response to the effective Brinkman viscosity applied in flow modeling. Two numerical Lattice Boltzmann algorithms, a direct solver of the microscopic ADE in heterogeneous
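The low-order moments referred to above map onto the usual shape measures of a residence-time distribution; the sketch below is generic post-processing of the first four moments (not the EMM algorithm itself), assuming the profile is available on a time grid.

```python
import numpy as np

def rtd_shape_measures(t, c):
    """Mean, variance, skewness and excess kurtosis of a residence-time
    distribution c(t) sampled on a time grid t (illustrative only)."""
    t = np.asarray(t, float)
    c = np.asarray(c, float)
    m0 = np.trapz(c, t)                                   # zeroth moment
    mean = np.trapz(t * c, t) / m0                        # first moment
    mu2, mu3, mu4 = (np.trapz((t - mean) ** k * c, t) / m0 for k in (2, 3, 4))
    return mean, mu2, mu3 / mu2 ** 1.5, mu4 / mu2 ** 2 - 3.0
```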
Application of nonparametric statistic method for DNBR limit calculation
International Nuclear Information System (INIS)
Dong Bo; Kuang Bo; Zhu Xuenong
2013-01-01
Background: Nonparametric statistics is a kind of statistical inference that does not depend on a particular distribution; it calculates tolerance limits at a given probability level and confidence through sampling methods. The DNBR margin is an important parameter of NPP (Nuclear Power Plant) design, which represents the safety level of the NPP. Purpose and Methods: This paper uses a nonparametric statistical method based on the Wilks formula and the VIPER-01 subchannel analysis code to calculate the DNBR design limits (DL) of a 300 MW NPP during the complete loss of flow accident, and compares them with the DNBR DL obtained by means of ITDP to quantify the DNBR margin. Results: The results indicate that this method can gain 2.96% more DNBR margin than that obtained by the ITDP methodology. Conclusions: Because of the reduction of conservatism in the analysis process, the nonparametric statistical method can provide a greater DNBR margin, and the increased DNBR margin is beneficial for upgrading the core refuelling scheme. (authors)
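For orientation, the first-order, one-sided Wilks criterion that underlies such tolerance-limit calculations fixes the minimum number of code runs for a given coverage/confidence pair; a minimal sketch (illustrative, not the VIPER-01 workflow) is shown below.

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the largest of n independent samples bounds the
    'coverage' quantile with the requested confidence (first-order, one-sided
    Wilks formula): 1 - coverage**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_sample_size())  # 95%/95% -> 59 runs
```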
Statistical theory and inference
Olive, David J
2014-01-01
This text is for a one semester graduate course in statistical theory and covers minimal and complete sufficient statistics, maximum likelihood estimators, method of moments, bias and mean square error, uniform minimum variance estimators and the Cramer-Rao lower bound, an introduction to large sample theory, likelihood ratio tests and uniformly most powerful tests and the Neyman Pearson Lemma. A major goal of this text is to make these topics much more accessible to students by using the theory of exponential families. Exponential families, indicator functions and the support of the distribution are used throughout the text to simplify the theory. More than 50 "brand name" distributions are used to illustrate the theory with many examples of exponential families, maximum likelihood estimators and uniformly minimum variance unbiased estimators. There are many homework problems with over 30 pages of solutions.
Evaluation for moments of a ratio with application to regression estimation
Doukhan, Paul; Lang, Gabriel
2008-01-01
Ratios of random variables often appear in probability and statistical applications. We aim to approximate the moments of such ratios under several dependence assumptions. Extending the ideas in Collomb [C. R. Acad. Sci. Paris 285 (1977) 289–292], we propose sharper bounds for the moments of randomly weighted sums and for the Lp-deviations from the asymptotic normal law when the central limit theorem holds. We indicate suitable applications in finance and censored data analysis and focus on t...
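For context, the classical second-order Taylor (delta-method) approximations that such sharper bounds refine read

```latex
\mathrm{E}\!\left[\frac{X}{Y}\right] \approx \frac{\mu_X}{\mu_Y}
  - \frac{\operatorname{Cov}(X,Y)}{\mu_Y^{2}}
  + \frac{\mu_X \operatorname{Var}(Y)}{\mu_Y^{3}},
\qquad
\operatorname{Var}\!\left(\frac{X}{Y}\right) \approx \frac{\mu_X^{2}}{\mu_Y^{2}}
  \left[\frac{\operatorname{Var}(X)}{\mu_X^{2}}
  - \frac{2\operatorname{Cov}(X,Y)}{\mu_X\mu_Y}
  + \frac{\operatorname{Var}(Y)}{\mu_Y^{2}}\right],
```

which are only reliable when Y stays well away from zero; the dependence settings of the paper are precisely where such naive expansions need the proposed corrections.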
Extended moment series and the parameters of the negative binomial distribution
International Nuclear Information System (INIS)
Bowman, K.O.
1984-01-01
Recent studies indicate that, for finite sample sizes, moment estimators may be superior to maximum likelihood estimators in some regions of parameter space. In this paper a statistic based on the central moment of the sample is expanded in a Taylor series using 24 derivatives and many more terms than previous expansions. A summary algorithm is required to find meaningful approximants using the higher-order coefficients. An example is presented and a comparison between theoretical assessment and simulation results is made
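For reference, the elementary moment estimators of the negative binomial parameters (with the parameterization E[X] = r(1-p)/p and Var[X] = r(1-p)/p^2), which expansions of the kind described above refine, are

```latex
\hat{p} = \frac{\bar{x}}{s^{2}},
\qquad
\hat{r} = \frac{\bar{x}^{2}}{s^{2} - \bar{x}},
```

and they are meaningful only for overdispersed samples, i.e. when s^2 > \bar{x}.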
Energy Technology Data Exchange (ETDEWEB)
Gaillard, J.P.; Lalleman, S.; Bertrand, M. [CEA, Centre de Marcoule, Nuclear Energy Division, RadioChemistry and Process Department, F-30207 Bagnols sur Ceze (France); Plasari, E. [Ecole Nationale Superieure des Industries Chimiques, Laboratoire Reactions et Genie des Procedes, Universite de Lorraine - CNRS,1 rue Grandville, BP 20451, 54001, Nancy Cedex (France)
2016-07-01
Oxalic precipitation is generally used in the nuclear industry to deal with radioactive waste and to recover the actinides from a multicomponent solution. To facilitate the development of experimental methods and data acquisition, actinides are often simulated using lanthanides, so that experience can be gained more easily. The purpose of this article is to compare the results achieved by two methods for solving the population balance during neodymium oxalate precipitation in a continuous MSMPR (Mixed Suspension Mixed Product Removal). The method of classes, also called the discretized population balance, used in this study is based on the method of Litster. The Quadrature Method of Moments (QMOM), by contrast, is written in terms of the transport equations of the moments of the number density function. All the integrals are solved through a quadrature approximation thanks to the product-difference algorithm or the Chebyshev algorithm. Primary nucleation, crystal growth and agglomeration are taken into account. Agglomeration phenomena have been found to be well represented by a loose-agglomerates model. Thermodynamic effects are modeled by activity coefficients which are calculated using the Bromley model. The particle sizes predicted by the two methods are in good agreement with experimental measurements. (authors)
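A compact sketch of the product-difference construction used in QMOM is given below: it recovers an N-node quadrature from the first 2N raw moments via a symmetric tridiagonal (Jacobi) matrix. It is illustrative only, assumes a realizable moment set, and omits the nucleation/growth/agglomeration source terms of the population balance.

```python
import numpy as np

def pd_quadrature(moments):
    """Gordon's product-difference algorithm: N nodes and weights from the
    first 2N raw moments m_0..m_{2N-1} (illustrative sketch)."""
    m = np.asarray(moments, dtype=float)
    n2 = m.size                       # 2N moments
    n = n2 // 2                       # N quadrature nodes
    P = np.zeros((n2 + 1, n2 + 1))
    P[0, 0] = 1.0
    P[:n2, 1] = (-1.0) ** np.arange(n2) * m
    for j in range(2, n2 + 1):        # build the P matrix column by column
        for i in range(n2 + 1 - j):
            P[i, j] = P[0, j - 1] * P[i + 1, j - 2] - P[0, j - 2] * P[i + 1, j - 1]
    zeta = np.zeros(n2)
    for k in range(1, n2):
        zeta[k] = P[0, k + 1] / (P[0, k] * P[0, k - 1])
    a = np.array([zeta[2 * i] + zeta[2 * i + 1] for i in range(n)])
    b = np.array([np.sqrt(zeta[2 * i + 1] * zeta[2 * i + 2]) for i in range(n - 1)])
    J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2  # weights from the first eigenvector row
    return nodes, weights

# two equal-weight atoms at sizes 1 and 3: raw moments 1, 2, 5, 14
print(pd_quadrature([1.0, 2.0, 5.0, 14.0]))  # nodes ~ (1, 3), weights ~ (0.5, 0.5)
```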
Extended second moment algebra as an efficient tool in structural reliability
International Nuclear Information System (INIS)
Ditlevsen, O.
1982-01-01
During the seventies, second moment structural reliability analysis was extensively discussed with respect to philosophy and method. One recent clarification into a consistent formalism is represented by the extended second moment reliability theory, with the generalized reliability index as its measure of safety. Its methods of formal failure probability calculation are useful independently of the opinion that one may adopt about the philosophy of the second moment reliability formalism. After an introduction to the historical development of the philosophy, the paper gives a short introductory review of the extended second moment structural reliability theory. (orig.)
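As a reminder of the formalism under discussion, for a linear safety margin M = R - S with uncorrelated resistance R and load S, the elementary second-moment reliability index and the generalized reliability index based on the failure probability P_f are

```latex
\beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^{2} + \sigma_S^{2}}},
\qquad
\beta_g = -\Phi^{-1}(P_f),
```

where Φ is the standard normal distribution function; the two measures coincide for a Gaussian linear safety margin.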
Askerov, Bahram M
2010-01-01
This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.
Steepest descent moment method for three-dimensional magnetohydrodynamic equilibria
International Nuclear Information System (INIS)
Hirshman, S.P.; Whitson, J.C.
1983-11-01
An energy principle is used to obtain the solution of the magnetohydrodynamic (MHD) equilibrium equation J × B - ∇p = 0 for nested magnetic flux surfaces that are expressed in the inverse coordinate representation x = x(ρ, θ, ζ). Here, θ and ζ are poloidal and toroidal flux coordinate angles, respectively, and p = p(ρ) labels a magnetic surface. Ordinary differential equations in ρ are obtained for the Fourier amplitudes (moments) in the doubly periodic spectral decomposition of x. A steepest descent iteration is developed for efficiently solving these nonlinear, coupled moment equations. The existence of a positive-definite energy functional guarantees the monotonic convergence of this iteration toward an equilibrium solution (in the absence of magnetic island formation). A renormalization parameter λ is introduced to ensure the rapid convergence of the Fourier series for x, while simultaneously satisfying the MHD requirement that magnetic field lines are straight in flux coordinates. A descent iteration is also developed for determining the self-consistent value for λ.
Methods library of embedded R functions at Statistics Norway
Directory of Open Access Journals (Sweden)
Øyvind Langsrud
2017-11-01
Full Text Available Statistics Norway is modernising its production processes. An important element in this work is a library of functions for statistical computations. In principle, the functions in such a methods library can be programmed in several languages. A modernised production environment demands that these functions can be reused for different statistics products, and that they are embedded within a common IT system. The embedding should be done in such a way that the users of the methods do not need to know the underlying programming language. As a proof of concept, Statistics Norway has established a methods library offering a limited number of methods for macro-editing, imputation and confidentiality. This is done within an area of municipal statistics with R as the only programming language. This paper presents the details and experiences from this work. The problem of fitting real-world applications to simple and strict standards is discussed and exemplified by the development of solutions to regression imputation and table suppression.
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici
Bending Moment Decrease of Reinforced Concrete Beam Supported by Additional CFRP
Directory of Open Access Journals (Sweden)
Mykolas Daugevičius
2011-04-01
Full Text Available The calculation method of a reinforced concrete beam with additional CFRP composite is proposed in this article. This method estimates tangential angular concrete deformations in the tensioned beam layers between the steel and the bonded carbon fiber reinforced polymer. The horizontal slip of the CFRP composite reduces the beam's bending moment capacity. An additional coefficient to reduce the CFRP resultant force is necessary for better precision of the bending moment capacity. Also, various calculation methods of bending moment capacity are considered. Article in Lithuanian
Trunk muscle cocontraction: the effects of moment direction and moment magnitude.
Lavender, S A; Tsuang, Y H; Andersson, G B; Hafezi, A; Shin, C C
1992-09-01
This study investigated the cocontraction of eight trunk muscles during the application of asymmetric loads to the torso. External moments of 10, 20, 30, 40, and 50 Nm were applied to the torso via a harness system. The direction of the applied moment was varied in 30-degree increments to the subjects' right side between the sagittally symmetric orientations front and rear. Electromyographic (EMG) data from the left and right latissimus dorsi, erector spinae, external oblique, and rectus abdominus were collected from 10 subjects. The normalized EMG data were tested using multivariate and univariate analyses of variance procedures. These analyses showed significant interactions between the moment magnitude and the moment direction for seven of the eight muscles. Most of the interactions could be characterized as due to changes in muscle recruitment with changes in the direction of the external moment. Analysis of the relative activation levels, which were computed for each combination of moment magnitude and direction, indicated large changes in muscle recruitment due to asymmetry, but only small adjustments in the relative activation levels due to increased moment magnitude.
Descriptive and inferential statistical methods used in burns research.
Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars
2010-05-01
Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11(22%) were randomised controlled trials, 18(35%) were cohort studies, 11(22%) were case control studies and 11(22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49(96%) articles. Data dispersion was calculated by standard deviation in 30(59%). Standard error of the mean was quoted in 19(37%). The statistical software product was named in 33(65%). Of the 49 articles that used inferential statistics, the tests were named in 47(96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), chi-squared test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43(88%) and the exact significance levels were reported in 28(57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals
Extension of the direct statistical approach to a volume parameter model (non-integer splitting)
International Nuclear Information System (INIS)
Burn, K.W.
1990-01-01
The Direct Statistical Approach is a rigorous mathematical derivation of the second moment for surface splitting and Russian Roulette games attached to the Monte Carlo modelling of fixed source particle transport. It has been extended to a volume parameter model (involving non-integer ''expected value'' splitting), and then to a cell model. The cell model gives second moment and time functions that have a closed form. This suggests the possibility of two different methods of solution of the optimum splitting/Russian Roulette parameters. (author)
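A minimal sketch of the non-integer ("expected value") splitting and Russian roulette game analysed by the approach is shown below; it only illustrates how the statistical weights are kept unbiased and does not reproduce the second-moment derivation (function and parameter names are illustrative).

```python
import random

def split_or_roulette(weight, nu):
    """Return the list of statistical weights of the copies to track further.

    nu > 1: expected-value splitting into floor(nu) or ceil(nu) copies so that
            the expected number of copies equals nu, each with weight/nu.
    nu < 1: Russian roulette, surviving with probability nu and weight/nu.
    Either way the expected total weight stays equal to 'weight'.
    """
    if nu >= 1.0:
        n = int(nu)
        if random.random() < nu - n:   # play the fractional part as one extra copy
            n += 1
        return [weight / nu] * n
    return [weight / nu] if random.random() < nu else []
```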
Strange Quark Magnetic Moment of the Nucleon at the Physical Point.
Sufian, Raza Sabbir; Yang, Yi-Bo; Alexandru, Andrei; Draper, Terrence; Liang, Jian; Liu, Keh-Fei
2017-01-27
We report a lattice QCD calculation of the strange quark contribution to the nucleon's magnetic moment and charge radius. This analysis presents the first direct determination of strange electromagnetic form factors including at the physical pion mass. We perform a model-independent extraction of the strange magnetic moment and the strange charge radius from the electromagnetic form factors in the momentum transfer range of 0.051 GeV^{2}≲Q^{2}≲1.31 GeV^{2}. The finite lattice spacing and finite volume corrections are included in a global fit with 24 valence quark masses on four lattices with different lattice spacings, different volumes, and four sea quark masses including one at the physical pion mass. We obtain the strange magnetic moment G_{M}^{s}(0)=-0.064(14)(09)μ_{N}. The four-sigma precision in statistics is achieved partly due to low-mode averaging of the quark loop and low-mode substitution to improve the statistics of the nucleon propagator. We also obtain the strange charge radius ⟨r_{s}^{2}⟩_{E}=-0.0043(16)(14) fm^{2}.
Directory of Open Access Journals (Sweden)
Ahmadi Majid
2003-01-01
Full Text Available This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines the global and local information in frontal views of facial images. A radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from the shape information. An efficient distance measure as a facial candidate threshold (FCT) is defined to distinguish between face and nonface images. The Pseudo-Zernike moment invariant (PZMI) with an efficient method for selecting the moment order has been used. A newly defined parameter named the axis correction ratio (ACR) of images, for disregarding irrelevant information of face images, is introduced. In this paper, the effect of these parameters on disregarding irrelevant information and improving the recognition rate is studied. Also we evaluate the effect of the orders of PZMI on the recognition rate of the proposed technique as well as on the RBF neural network learning speed. Simulation results on the face database of the Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yields a recognition rate of 99.3%.
An advanced probabilistic structural analysis method for implicit performance functions
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
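For orientation, the mean-based, first-order step that such AMV-type procedures build on can be sketched as below; a cheap analytic function stands in for the implicit finite-element response, and the corrective re-evaluation step of the AMV method is omitted.

```python
import numpy as np

def mean_value_moments(g, mu, sigma, h=1e-6):
    """First-order estimates of the mean and variance of g(X) for independent
    inputs with means mu and standard deviations sigma (illustrative only)."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    g0 = g(mu)
    grad = np.array([(g(mu + h * np.eye(mu.size)[i]) - g0) / h
                     for i in range(mu.size)])       # finite-difference gradient
    return g0, float(np.sum((grad * sigma) ** 2))    # mean, variance

# hypothetical response function standing in for an implicit FE model
print(mean_value_moments(lambda x: x[0] ** 2 + 3.0 * x[1], [1.0, 2.0], [0.1, 0.2]))
```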
Statistical Methods for Particle Physics (4/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Particle Physics (1/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Particle Physics (2/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Particle Physics (3/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical methods for spatio-temporal systems
Finkenstadt, Barbel
2006-01-01
Statistical Methods for Spatio-Temporal Systems presents current statistical research issues on spatio-temporal data modeling and will promote advances in research and a greater understanding between the mechanistic and the statistical modeling communities.Contributed by leading researchers in the field, each self-contained chapter starts with an introduction of the topic and progresses to recent research results. Presenting specific examples of epidemic data of bovine tuberculosis, gastroenteric disease, and the U.K. foot-and-mouth outbreak, the first chapter uses stochastic models, such as point process models, to provide the probabilistic backbone that facilitates statistical inference from data. The next chapter discusses the critical issue of modeling random growth objects in diverse biological systems, such as bacteria colonies, tumors, and plant populations. The subsequent chapter examines data transformation tools using examples from ecology and air quality data, followed by a chapter on space-time co...
Statistical methods for forecasting
Abraham, Bovas
2009-01-01
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists."This book, it must be said, lives up to the words on its advertising cover: ''Bridging the gap between introductory, descriptive approaches and highly advanced theoretical treatises, it provides a practical, intermediate level discussion of a variety of forecasting tools, and explains how they relate to one another, both in theory and practice.'' It does just that!"-Journal of the Royal Statistical Society"A well-written work that deals with statistical methods and models that can be used to produce short-term forecasts, this book has wide-ranging applications. It could be used in the context of a study of regression, forecasting, and time series ...
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
Moments, positive polynomials and their applications
Lasserre, Jean Bernard
2009-01-01
Many important applications in global optimization, algebra, probability and statistics, applied mathematics, control theory, financial mathematics, inverse problems, etc. can be modeled as a particular instance of the Generalized Moment Problem (GMP). This book introduces a new general methodology to solve the GMP when its data are polynomials and basic semi-algebraic sets. This methodology combines semidefinite programming with recent results from real algebraic geometry to provide a hierarchy of semidefinite relaxations converging to the desired optimal value. Applied on appropriate cones,
Energy Technology Data Exchange (ETDEWEB)
Michael Ramsey-Musolf; Wick Haxton; Ching-Pang Liu
2002-03-29
Nuclear anapole moments are parity-odd, time-reversal-even E1 moments of the electromagnetic current operator. Although the existence of this moment was recognized theoretically soon after the discovery of parity nonconservation (PNC), its experimental isolation was achieved only recently, when a new level of precision was reached in a measurement of the hyperfine dependence of atomic PNC in ^{133}Cs. An important anapole moment bound in ^{205}Tl also exists. In this paper, we present the details of the first calculation of these anapole moments in the framework commonly used in other studies of hadronic PNC, a meson exchange potential that includes long-range pion exchange and enough degrees of freedom to describe the five independent S-P amplitudes induced by short-range interactions. The resulting contributions of π-, ρ-, and ω-exchange to the single-nucleon anapole moment, to parity admixtures in the nuclear ground state, and to PNC exchange currents are evaluated, using configuration-mixed shell-model wave functions. The experimental anapole moment constraints on the PNC meson-nucleon coupling constants are derived and compared with those from other tests of the hadronic weak interaction. While the bounds obtained from the anapole moment results are consistent with the broad "reasonable ranges" defined by theory, they are not in good agreement with the constraints from the other experiments. We explore possible explanations for the discrepancy and comment on the potential importance of new experiments.
International Nuclear Information System (INIS)
Michael Ramsey-Musolf; Wick Haxton; Ching-Pang Liu
2002-01-01
Nuclear anapole moments are parity-odd, time-reversal-even E1 moments of the electromagnetic current operator. Although the existence of this moment was recognized theoretically soon after the discovery of parity nonconservation (PNC), its experimental isolation was achieved only recently, when a new level of precision was reached in a measurement of the hyperfine dependence of atomic PNC in ^{133}Cs. An important anapole moment bound in ^{205}Tl also exists. In this paper, we present the details of the first calculation of these anapole moments in the framework commonly used in other studies of hadronic PNC, a meson exchange potential that includes long-range pion exchange and enough degrees of freedom to describe the five independent S-P amplitudes induced by short-range interactions. The resulting contributions of π-, ρ-, and ω-exchange to the single-nucleon anapole moment, to parity admixtures in the nuclear ground state, and to PNC exchange currents are evaluated, using configuration-mixed shell-model wave functions. The experimental anapole moment constraints on the PNC meson-nucleon coupling constants are derived and compared with those from other tests of the hadronic weak interaction. While the bounds obtained from the anapole moment results are consistent with the broad "reasonable ranges" defined by theory, they are not in good agreement with the constraints from the other experiments. We explore possible explanations for the discrepancy and comment on the potential importance of new experiments.
Non-Equilibrium Liouville and Wigner Equations: Moment Methods and Long-Time Approximations
Directory of Open Access Journals (Sweden)
Ramon F. Álvarez-Estrada
2014-03-01
Full Text Available We treat the non-equilibrium evolution of an open one-particle statistical system, subject to a potential and to an external “heat bath” (hb) with negligible dissipation. For the classical equilibrium Boltzmann distribution, W_{c,eq}, a non-equilibrium three-term hierarchy for moments fulfills Hermiticity, which allows one to justify an approximate long-time thermalization. That gives partial dynamical support to Boltzmann’s W_{c,eq}, out of the set of classical stationary distributions, W_{c,st}, also investigated here, for which neither Hermiticity nor that thermalization hold, in general. For closed classical many-particle systems without hb (by using W_{c,eq}), the long-time approximate thermalization for three-term hierarchies is justified and yields an approximate Lyapunov function and an arrow of time. The largest part of the work treats an open quantum one-particle system through the non-equilibrium Wigner function, W. W_{eq} for a repulsive finite square well is reported. W’s (< 0 in various cases) are assumed to be quasi-definite functionals regarding their dependences on momentum (q). That yields orthogonal polynomials, H_{Q,n}(q), for W_{eq} (and for stationary W_{st}), non-equilibrium moments, W_n, of W and hierarchies. For the first excited state of the harmonic oscillator, its stationary W_{st} is a quasi-definite functional, and the orthogonal polynomials and three-term hierarchy are studied. In general, the non-equilibrium quantum hierarchies (associated with W_{eq}) for the W_n’s are not three-term ones. As an illustration, we outline a non-equilibrium four-term hierarchy and its solution in terms of generalized operator continued fractions. Such structures also allow one to formulate long-time approximations, but make it more difficult to justify thermalization. For large thermal and de Broglie wavelengths, the dominant W_{eq} and a non-equilibrium equation for W are reported: the non-equilibrium hierarchy could plausibly be a three-term one and possibly not
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
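A minimal example of the method of statistical trials applied to a definite integral, including its statistical error estimate, might look like this (illustrative only):

```python
import math
import random

def mc_integral(f, a, b, n=100_000):
    """Monte Carlo estimate of the integral of f over [a, b] and its 1-sigma error."""
    total = total_sq = 0.0
    for _ in range(n):
        y = f(a + (b - a) * random.random())
        total += y
        total_sq += y * y
    mean = total / n
    var = total_sq / n - mean * mean
    return (b - a) * mean, (b - a) * math.sqrt(var / n)

print(mc_integral(math.sin, 0.0, math.pi))  # close to 2.0
```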
Advanced statistics to improve the physical interpretation of atomization processes
International Nuclear Information System (INIS)
Panão, Miguel R.O.; Radu, Lucian
2013-01-01
Highlights: ► Finite pdf mixtures improve the physical interpretation of sprays. ► A Bayesian approach using an MCMC algorithm is used to find the best finite mixture. ► The statistical method identifies multiple droplet clusters in a spray. ► Multiple drop clusters are eventually associated with multiple atomization mechanisms. ► The spray is described by the drop size distribution and not only by its moments. -- Abstract: This paper reports an analysis of the physics of atomization processes using advanced statistical tools, namely finite mixtures of probability density functions, whose best fit is found using a Bayesian approach based on a Markov chain Monte Carlo (MCMC) algorithm. This approach takes into account eventual multimodality and heterogeneities in drop size distributions. Therefore, it provides information about the complete probability density function of multimodal drop size distributions and allows the identification of subgroups in the heterogeneous data. This improves the physical interpretation of atomization processes. Moreover, it also overcomes the limitations induced by analyzing the spray droplet characteristics through moments alone, particularly the obscuring of droplet formation mechanisms of different natures. Finally, the method is applied to physically interpret a case study based on multijet atomization processes
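As a simplified stand-in for the Bayesian/MCMC mixture selection described above, one can fit finite Gaussian mixtures to log drop sizes and choose the number of components by an information criterion; the sketch below uses scikit-learn and BIC rather than the Markov chain Monte Carlo approach of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_drop_size_mixture(diameters, max_components=4):
    """Fit 1..max_components Gaussian mixtures to log drop sizes, keep the best BIC."""
    x = np.log(np.asarray(diameters, float)).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=0).fit(x)
            for k in range(1, max_components + 1)]
    best = min(fits, key=lambda gm: gm.bic(x))
    return best.n_components, best
```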
Temperature-dependent particle-number projected moment of inertia
International Nuclear Information System (INIS)
Allal, N. H.; Fellah, M.; Benhamouda, N.; Oudih, M. R.
2008-01-01
Expressions for the parallel and perpendicular temperature-dependent particle-number projected nuclear moments of inertia have been established by means of a discrete projection method. They generalize those of the FTBCS method and are well adapted to numerical computation. The effects of particle-number fluctuations have been numerically studied for some even-even actinide nuclei by using the single-particle energies and eigenstates of a deformed Woods-Saxon mean field. It has been shown that the parallel moment of inertia is practically not modified by the use of the projection method. In contrast, the discrepancy between the projected and FTBCS perpendicular moment of inertia values may reach 5%. Moreover, the particle-number fluctuation effects vary not only as a function of the temperature but also as a function of the deformation at a given temperature. This is not the case for the system energy
Academic Training Lecture: Statistical Methods for Particle Physics
PH Department
2012-01-01
2, 3, 4 and 5 April 2012 Academic Training Lecture Regular Programme from 11:00 to 12:00 - Bldg. 222-R-001 - Filtration Plant Statistical Methods for Particle Physics by Glen Cowan (Royal Holloway) The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Unusual Count Data
DEFF Research Database (Denmark)
Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads
2016-01-01
microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative...... microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per...
International Nuclear Information System (INIS)
Fox, R.O.; Laurent, F.; Massot, M.
2008-01-01
The scope of the present study is Eulerian modeling and simulation of polydisperse liquid sprays undergoing droplet coalescence and evaporation. The fundamental mathematical description is the Williams spray equation governing the joint number density function f(v,u;x,t) of droplet volume and velocity. Eulerian multi-fluid models have already been rigorously derived from this equation in Laurent et al. [F. Laurent, M. Massot, P. Villedieu, Eulerian multi-fluid modeling for the numerical simulation of coalescence in polydisperse dense liquid sprays, J. Comput. Phys. 194 (2004) 505-543]. The first key feature of the paper is the application of direct quadrature method of moments (DQMOM) introduced by Marchisio and Fox [D.L. Marchisio, R.O. Fox, Solution of population balance equations using the direct quadrature method of moments, J. Aerosol Sci. 36 (2005) 43-73] to the Williams spray equation. Both the multi-fluid method and DQMOM yield systems of Eulerian conservation equations with complicated interaction terms representing coalescence. In order to focus on the difficulties associated with treating size-dependent coalescence and to avoid numerical uncertainty issues associated with two-way coupling, only one-way coupling between the droplets and a given gas velocity field is considered. In order to validate and compare these approaches, the chosen configuration is a self-similar 2D axisymmetrical decelerating nozzle with sprays having various size distributions, ranging from smooth ones up to Dirac delta functions. The second key feature of the paper is a thorough comparison of the two approaches for various test-cases to a reference solution obtained through a classical stochastic Lagrangian solver. Both Eulerian models prove to describe adequately spray coalescence and yield a very interesting alternative to the Lagrangian solver. The third key point of the study is a detailed description of the limitations associated with each method, thus giving criteria for
Syahputra, M. F.; Chairani, R.; Seniman; Rahmat, R. F.; Abdullah, D.; Napitupulu, D.; Setiawan, M. I.; Albra, W.; Erliana, C. I.; Andayani, U.
2018-03-01
Sperm morphology is still a standard laboratory analysis in diagnosing infertility in men. Manual identification of sperm form is still not accurate; the difficulty of discerning the faint shapes of sperm in digital microscope images is often a weakness of the identification process and makes it time-consuming. Therefore, an application system is needed that identifies male fertility through sperm abnormalities based on sperm morphology (teratospermia). The method used is the invariant moment method. This study uses 15 testing and 20 training sperm images. The results show that the process of identifying male fertility through sperm abnormalities based on sperm morphology (teratospermia) has an accuracy rate of 80.77%, and the identification process takes 0.4369 seconds.
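The specific invariant moments used in the paper are not reproduced here; as a generic stand-in, the sketch below computes Hu's seven moment invariants of a thresholded (hypothetical) sperm-head image with OpenCV, log-scaled so the features are comparable in magnitude.

```python
import cv2
import numpy as np

def hu_features(image_path, threshold=128):
    """Hu moment invariants of a binarized grayscale image (illustrative features)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # log-scale the invariants
```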
Marciano, William J
2010-01-01
This book provides a self-contained description of the measurements of the magnetic dipole moments of the electron and muon, along with a discussion of the measurements of the fine structure constant, and the theory associated with magnetic and electric dipole moments. Also included are the searches for a permanent electric dipole moment of the electron, muon, neutron and atomic nuclei. The related topic of the transition moment for lepton flavor violating processes, such as neutrinoless muon or tauon decays, and the search for such processes are included as well. The papers, written by many o
Observations of Cluster Substructure using Weakly Lensed Sextupole Moments
Energy Technology Data Exchange (ETDEWEB)
Irwin, John
2003-08-01
Since dark matter clusters and groups may have substructure, we have examined the sextupole content of Hubble images looking for a curvature signature in background galaxies that would arise from galaxy-galaxy lensing. We describe techniques for extracting and analyzing sextupole and higher weakly lensed moments. Indications of substructure, via spatial clumping of curved background galaxies, were observed in the image of CL0024 and then surprisingly in both Hubble deep fields. We estimate the dark cluster masses in the deep field. Alternatives to a lensing hypothesis appear improbable, but better statistics will be required to exclude them conclusively. Observation of sextupole moments would then provide a means to measure dark matter structure on smaller length scales than heretofore.
International Nuclear Information System (INIS)
Li Liyong; Tchelepi, Hamdi A.; Zhang Dongxiao
2003-01-01
We present detailed comparisons between high-resolution Monte Carlo simulation (MCS) and low-order numerical solutions of stochastic moment equations (SMEs) for the first and second statistical moments of pressure. The objective is to quantify the difference between the predictions obtained from MCS and SME. Natural formations with high permeability variability and large spatial correlation scales are of special interest for underground resources (e.g. oil and water). Consequently, we focus on such formations. We investigated fields with variance of log-permeability, σ_Y^2, from 0.1 to 3.0 and correlation scales (normalized by domain length) of 0.05 to 0.5. In order to avoid issues related to statistical convergence and resolution level, we used 9000 highly resolved realizations of permeability for MCS. We derive exact discrete forms of the statistical moment equations. Formulations based on equations written explicitly in terms of permeability (K-based) and log-transformed permeability (Y-based) are considered. The discrete forms are applicable to systems of arbitrary variance and correlation scales. However, equations governing a particular statistical moment depend on higher moments. Thus, while the moment equations are exact, they are not closed. In particular, the discrete form of the second moment of pressure includes two triplet terms that involve log-permeability (or permeability) and pressure. We combined MCS computations with full discrete SME equations to quantify the importance of the various terms that make up the moment equations. We show that second-moment solutions obtained using a low-order Y-based SME formulation are significantly better than those from K-based formulations, especially when σ_Y^2 > 1. As a result, Y-based formulations are preferred. The two triplet terms are complex functions of the variance level and correlation length. The importance (contribution) of these triplet terms increases dramatically as σ_Y^2 increases above one. We
Nonequilibrium statistical mechanics ensemble method
Eu, Byung Chan
1998-01-01
In this monograph, nonequilibrium statistical mechanics is developed by means of ensemble methods on the basis of the Boltzmann equation, the generic Boltzmann equations for classical and quantum dilute gases, and a generalised Boltzmann equation for dense simple fluids. The theories are developed in forms parallel with the equilibrium Gibbs ensemble theory in a way fully consistent with the laws of thermodynamics. The generalised hydrodynamics equations are an integral part of the theory and describe the evolution of macroscopic processes in accordance with the laws of thermodynamics of systems far removed from equilibrium. Audience: This book will be of interest to researchers in the fields of statistical mechanics, condensed matter physics, gas dynamics, fluid dynamics, rheology, irreversible thermodynamics and nonequilibrium phenomena.
Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests
Directory of Open Access Journals (Sweden)
O. Forni
2005-09-01
Full Text Available Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed “best possible” choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character with infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.
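A small sketch of the kurtosis-based detection idea described above: compute the excess kurtosis of transform-domain coefficients and compare a Gaussian map with a heavy-tailed contaminated one. A one-level Haar detail transform stands in for the wavelet/curvelet transforms used in CMB work; the maps and contamination below are synthetic illustrations, not simulated cosmic strings.

```python
# Sketch of a kurtosis-based non-Gaussianity detector on transform-domain coefficients.
# A one-level Haar detail transform stands in for the wavelet transform; maps are synthetic.
import numpy as np

def excess_kurtosis(c):
    c = np.asarray(c, dtype=float).ravel()
    c = (c - c.mean()) / c.std()
    return np.mean(c**4) - 3.0          # ~0 for Gaussian data, > 0 for heavy tails

def haar_detail(img):
    # horizontal detail coefficients of a single Haar decomposition level
    return (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)

rng = np.random.default_rng(1)
gauss_map = rng.standard_normal((256, 256))
contaminated = gauss_map + 0.2 * rng.standard_t(df=3, size=(256, 256))  # heavy-tailed component

print("Gaussian map :", excess_kurtosis(haar_detail(gauss_map)))
print("contaminated :", excess_kurtosis(haar_detail(contaminated)))
```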
DEFF Research Database (Denmark)
Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose
2003-01-01
A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...
International Nuclear Information System (INIS)
Oylumoglu, G.
2005-01-01
In this study, the variation of the additional enthalpy with pH has been investigated by statistical mechanical methods. To bring out the additional effect, the partition functions of the proteins are calculated in a single-protein-molecule approximation. From the partition function, the free energies of the proteins are obtained, and in this way the additional free energy is used in the calculation of the terms of the thermodynamic quantity. The additional enthalpy H D has been obtained by taking the effective electric field E and the constant dipole moment M as thermodynamic variables and using the Maxwell equations. In the semi-phenomenological theory presented, the necessary data are taken from the experimental study of P. L. Privalov. The variation of the additional enthalpy H D has been investigated in the pH interval 1-5, and the results of the calculations are discussed for lysozyme.
Moment based model predictive control for systems with additive uncertainty
Saltik, M.B.; Ozkan, L.; Weiland, S.; Ludlage, J.H.A.
2017-01-01
In this paper, we present a model predictive control (MPC) strategy based on the moments of the state variables and the cost functional. The statistical properties of the state predictions are calculated through the open-loop iteration of the dynamics and used in the formulation of the MPC cost function. We
Lower limb joint moment during walking in water.
Miyoshi, Tasuku; Shirota, Takashi; Yamamoto, Shin-Ichiro; Nakazawa, Kimitaka; Akai, Masami
2003-11-04
Walking in water is a widely used rehabilitation method for patients with orthopedic disorders or arthritis, based on the belief that the reduction of weight in water makes it a safer medium and prevents secondary injuries of the lower-limb joints. To our knowledge, however, no experimental data on lower-limb joint moment during walking in water is available. The aim of this study was to quantify the joint moments of the ankle, knee, and hip during walking in water in comparison with those on land. Eight healthy volunteers walked on land and in water at a speed comfortable for them. A video-motion analysis system and waterproof force platform were used to obtain kinematic data and to calculate the joint moments. The hip joint moment was shown to be an extension moment almost throughout the stance phase during walking in water, while it changed from an extension- to flexion-direction during walking on land. The knee joint moment had two extension peaks during walking on land, whereas it had only one extension peak, a late one, during walking in water. The ankle joint moment during walking in water was considerably reduced but in the same direction, plantarflexion, as that during walking on land. The joint moments of the hip, knee, and ankle were not merely reduced during walking in water; rather, inter-joint coordination was totally changed.
International Nuclear Information System (INIS)
Witte, N.S.; Shankar, R.
1999-01-01
We examine the Ising chain in a transverse field at zero temperature from the point of view of a family of moment formalisms based upon the cumulant generating function, where we find exact solutions for the generating functions and cumulants at arbitrary couplings and hence for both the ordered and disordered phases of the model. In a t-expansion analysis, the exact Horn-Weinstein function E(t) has cuts along an infinite set of curves in the complex Jt-plane which are confined to the left-hand half-plane Im Jt < -1/4 for the phase containing the trial state (disordered), but are not so for the other phase (ordered). For finite couplings the expansion has a finite radius of convergence. Asymptotic forms for this function exhibit a crossover at the critical point, giving the excited state gap in the ground state sector for the disordered phase, and the first excited state gap in the ordered phase. Convergence of the t-expansion with respect to truncation order is found in the disordered phase right up to the critical point, for both the ground state energy and the excited state gap. However, convergence is found in only one of the connected moments expansions (CMX), the CMX-LT, and the ground state energy shows convergence right to the critical point again, although to a limited accuracy.
Statistical method for resolving the photon-photoelectron-counting inversion problem
International Nuclear Information System (INIS)
Wu Jinlong; Li Tiejun; Peng, Xiang; Guo Hong
2011-01-01
A statistical inversion method is proposed for the photon-photoelectron-counting statistics in a quantum key distribution experiment. From the statistical viewpoint, this problem is equivalent to parameter estimation for an infinite binomial mixture model. The coarse-graining idea and Bayesian methods are applied to deal with this ill-posed problem, which provides a good, simple example of the successful application of statistical methods to an inverse problem. Numerical results show the applicability of the proposed strategy. The coarse-graining idea for infinite mixture models should be general enough to be used in the future.
Directory of Open Access Journals (Sweden)
A.V. Getman
2013-12-01
Full Text Available Theoretical aspects of an experimental method for determining the residual and induced magnetic moments of a technical object are considered. The input data are the magnetic induction signatures of the technical object, obtained during its linear movement near a pair of three-component sensors. A technique for integrating the magnetic signatures, based on spatial harmonic analysis of the magnetic field represented by twenty-four multipole coefficients, is introduced.
DEFF Research Database (Denmark)
Manohara, S.R.; Kumar, V. Udaya; Shivakumaraiah
2013-01-01
chemical calculations using the DFT method by adopting the B3LYP/6-31G* level of theory (Gaussian 03) and using the AM1 method (Chem3D Ultra 8.0). It was observed that the dipole moments of diazines in the excited state (μe) were greater than the corresponding ground-state values (μg), indicating a substantial...
International Nuclear Information System (INIS)
Lipkin, H.J.
1983-06-01
The new experimental values of hyperon magnetic moments are compared with sum rules predicted from general quark models. Three difficulties are encountered which are not easily explained by simple models. The isovector contributions of nonstrange quarks to hyperon moments are smaller than the corresponding contribution to nucleon moments, indicating either appreciable configuration mixing present in hyperon wave functions and absent in nucleons or an additional isovector contribution beyond that of valence quarks; e.g. from a pion cloud. The large magnitude of the ω - moment may indicate that the strange quark contribution to the ω moments is considerably larger than the value μ(#betta#) predicted by simple models which have otherwise been very successful. The set of controversial values from different experiments of the μ - moment include a value very close to -(1/2)μ(μ + ) which would indicate that strange quarks do not contribute at all to the μ moments. (author)
A new method to determine the number of experimental data using statistical modeling methods
Energy Technology Data Exchange (ETDEWEB)
Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)
2017-06-15
For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.
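One hedged reading of the area-metric convergence idea is sketched below: collect data in batches and stop when the area between the empirical distribution functions before and after a new batch falls below a tolerance. The paper's exact metric (for instance, model CDF versus data CDF) and its tolerance may differ; the data-generating distribution here is a hypothetical stand-in.

```python
# Sketch of an area-metric convergence check on the number of experimental data:
# stop when the area between the empirical CDFs of n and n + batch samples is small.
# The tolerance and the synthetic "fatigue-strength-like" population are illustrative.
import numpy as np

def area_between_ecdfs(a, b):
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.trapz(np.abs(Fa - Fb), grid)

rng = np.random.default_rng(2)
def draw(n):                      # hypothetical experiment generating new data points
    return rng.normal(1000.0, 80.0, n)

data, batch, tol = draw(5), 5, 5.0
while True:
    new_data = np.concatenate([data, draw(batch)])
    if area_between_ecdfs(data, new_data) < tol:
        break
    data = new_data
print("number of data points deemed sufficient:", len(new_data))
```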
Model Reduction using Vorobyev Moment Problem
Czech Academy of Sciences Publication Activity Database
Strakoš, Zdeněk
2009-01-01
Roč. 51, č. 3 (2009), s. 363-379 ISSN 1017-1398 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : matching moments * model reduction * Krylov subspace methods * conjugate gradient method * Lanczos method * Arnoldi method * Gauss-Christoffel quadrature * scattering amplitude Subject RIV: BA - General Mathematics Impact factor: 0.716, year: 2009
Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.
1992-01-01
The radar cross section and Q factors of electrically small dipole and loop antennas made with a YBCO high-Tc superconductor are predicted using a two-fluid, moment-method model, in order to determine the effects of finite conductivity on the performance of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.
The application of statistical methods to assess economic assets
Directory of Open Access Journals (Sweden)
D. V. Dianov
2017-01-01
Full Text Available The article is devoted to the consideration and valuation of machinery, equipment and special equipment, to methodological aspects of the use of standards for the assessment of buildings and structures in current prices, to the valuation of residential and specialized houses and office premises, to the assessment and reassessment of active and inactive military assets, and to the application of statistical methods to obtain the relevant cost estimates. The objective of the article is to consider the possible application of statistical tools in the valuation of the assets composing the core group of elements of national wealth – the fixed assets. Capital tangible assets constitute the basis of the material base for the creation of new value, products and non-financial services. The gain accumulated from tangible assets of a capital nature is a part of the gross domestic product, and from its volume and share in the composition of GDP one can judge the scope of reproductive processes in the country. Based on the methodological materials of the state statistics bodies of the Russian Federation and on the theory of statistics, which describes methods of statistical analysis such as indices, average values and regression, a methodical approach is structured for the application of statistical tools to obtain value estimates of property, plant and equipment with significant accumulated depreciation. Until now, the use of statistical methodology in the practice of economic assessment of assets has been only fragmentary. This applies both to Federal legislation (Federal law № 135 «On valuation activities in the Russian Federation» dated 16.07.1998, edition of 05.07.2016) and to the methodological documents and regulations of valuation activities, in particular the valuation standards. A particular problem is the use of the digital database of Rosstat (Federal State Statistics Service), as for specific fixed assets the comparison should be carried
Moment approach to tandem mirror radial transport
International Nuclear Information System (INIS)
Siebert, K.D.; Callen, J.D.
1986-02-01
A moment approach is proposed for the study of tandem mirror radial transport in the resonant plateau regime. The salient features of the method are described with reference to axisymmetric tokamak transport theory. In particular, the importance of momentum conservation to the establishment of the azimuthal variations in the electrostatic potential is demonstrated. Also, an ad hoc drift kinetic equation is solved to determine parallel viscosity coefficients which are required to close the moment system
International Nuclear Information System (INIS)
Ekspong, G.; Johansson, H.
1976-04-01
In high-energy particle reactions where many neutral pions may be produced, the information contained in the decay gamma radiation can be converted into information about the neutral pions. Two methods are described to obtain the moments of the multiplicity distribution of the neutral pions from the distribution of the number of electron-positron pairs. (Auth.)
Statistical inference of level densities from resolved resonance parameters
International Nuclear Information System (INIS)
Froehner, F.H.
1983-08-01
Level densities are most directly obtained by counting the resonances observed in the resolved resonance range. Even in the best measurements, however, weak levels are invariably missed, so that one has to estimate their number and add it to the raw count. The main categories of missing-level estimators are discussed in the present review, viz. (I) ladder methods, including those based on the theory of Hamiltonian matrix ensembles (Dyson-Mehta statistics); (II) methods based on comparison with artificial cross section curves (Monte Carlo simulation, Garrison's autocorrelation method); (III) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The language of mathematical statistics is employed to clarify the basis of, and the relationship between, the various techniques. Recent progress in the treatment of resolution effects, detection thresholds and p-wave admixture is described. (orig.) [de
Brief guidelines for methods and statistics in medical research
Ab Rahman, Jamalludin
2015-01-01
This book serves as a practical guide to methods and statistics in medical research. It includes step-by-step instructions on using SPSS software for statistical analysis, as well as relevant examples to help those readers who are new to research in health and medical fields. Simple texts and diagrams are provided to help explain the concepts covered, and print screens for the statistical steps and the SPSS outputs are provided, together with interpretations and examples of how to report on findings. Brief Guidelines for Methods and Statistics in Medical Research offers a valuable quick reference guide for healthcare students and practitioners conducting research in health related fields, written in an accessible style.
Convergence of statistical moments of particle density time series in scrape-off layer plasmas
Energy Technology Data Exchange (ETDEWEB)
Kube, R., E-mail: ralph.kube@uit.no; Garcia, O. E. [Department of Physics and Technology, UiT - The Arctic University of Norway, N-9037 Tromsø (Norway)
2015-01-15
Particle density fluctuations in the scrape-off layer of magnetically confined plasmas, as measured by gas-puff imaging or Langmuir probes, are modeled as the realization of a stochastic process in which a superposition of pulses with a fixed shape, an exponential distribution of waiting times, and amplitudes represents the radial motion of blob-like structures. With an analytic formulation of the process at hand, we derive expressions for the mean squared error on estimators of sample mean and sample variance as a function of sample length, sampling frequency, and the parameters of the stochastic process. Employing that the probability distribution function of a particularly relevant stochastic process is given by the gamma distribution, we derive estimators for sample skewness and kurtosis and expressions for the mean squared error on these estimators. Numerically generated synthetic time series are used to verify the proposed estimators, the sample length dependency of their mean squared errors, and their performance. We find that estimators for sample skewness and kurtosis based on the gamma distribution are more precise and more accurate than common estimators based on the method of moments.
Convergence of statistical moments of particle density time series in scrape-off layer plasmas
International Nuclear Information System (INIS)
Kube, R.; Garcia, O. E.
2015-01-01
Particle density fluctuations in the scrape-off layer of magnetically confined plasmas, as measured by gas-puff imaging or Langmuir probes, are modeled as the realization of a stochastic process in which a superposition of pulses with a fixed shape, an exponential distribution of waiting times, and amplitudes represents the radial motion of blob-like structures. With an analytic formulation of the process at hand, we derive expressions for the mean squared error on estimators of sample mean and sample variance as a function of sample length, sampling frequency, and the parameters of the stochastic process. Employing that the probability distribution function of a particularly relevant stochastic process is given by the gamma distribution, we derive estimators for sample skewness and kurtosis and expressions for the mean squared error on these estimators. Numerically generated synthetic time series are used to verify the proposed estimators, the sample length dependency of their mean squared errors, and their performance. We find that estimators for sample skewness and kurtosis based on the gamma distribution are more precise and more accurate than common estimators based on the method of moments
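The gamma-based estimators referred to above exploit the fact that a gamma distribution with shape parameter k has skewness 2/√k and excess kurtosis 6/k, so both can be estimated from the first two sample moments alone. A minimal sketch of that idea, compared with plain method-of-moments estimators, is given below; the exact estimators and mean-squared-error expressions of the paper are not reproduced.

```python
# Sketch of gamma-based estimators for skewness and excess kurtosis of a time series,
# compared with plain method-of-moments estimators. For a gamma distribution with
# shape k: skewness = 2/sqrt(k), excess kurtosis = 6/k.
import numpy as np

def moments_estimators(x):
    z = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return np.mean(z**3), np.mean(z**4) - 3.0

def gamma_based_estimators(x):
    x = np.asarray(x, float)
    k_hat = x.mean()**2 / x.var()      # shape parameter from sample mean and variance
    return 2.0 / np.sqrt(k_hat), 6.0 / k_hat

rng = np.random.default_rng(3)
series = rng.gamma(shape=2.5, scale=1.0, size=5000)   # synthetic density-like signal

print("method of moments:", moments_estimators(series))
print("gamma-based      :", gamma_based_estimators(series))
print("true values      :", (2 / np.sqrt(2.5), 6 / 2.5))
```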
Overdispersion in nuclear statistics
International Nuclear Information System (INIS)
Semkow, Thomas M.
1999-01-01
The modern statistical distribution theory is applied to the development of the overdispersion theory in ionizing-radiation statistics for the first time. The physical nuclear system is treated as a sequence of binomial processes, each depending on a characteristic probability, such as the probability of decay, detection, etc. The probabilities fluctuate in the course of a measurement, and the physical reasons for that are discussed. If the average values of the probabilities change from measurement to measurement, which originates from the random Lexis binomial sampling scheme, then the resulting distribution is overdispersed. The generating functions and probability distribution functions are derived, followed by a moment analysis. The Poisson and Gaussian limits are also given. The distribution functions belong to the family of generalized hypergeometric factorial moment distributions of Kemp and Kemp, and can serve as likelihood functions for statistical estimation. An application to radioactive decay with detection is described and working formulae are given, including a procedure for testing counting data for overdispersion. More complex experiments in nuclear physics (such as solar-neutrino experiments) can be handled by this model, as can the problem of distinguishing between source and background.
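A common way to screen counting data for overdispersion, sketched below, is the index-of-dispersion (variance-to-mean) test: under pure Poisson statistics D = (n-1)s²/x̄ is approximately chi-square with n-1 degrees of freedom. This is a standard check, not necessarily the working formulae derived in the paper; the synthetic data are illustrative.

```python
# Sketch of a standard variance-to-mean (index of dispersion) test for overdispersion
# in repeated counting measurements; the paper's own procedure may differ.
import numpy as np
from scipy.stats import chi2

def overdispersion_test(counts):
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    D = (n - 1) * counts.var(ddof=1) / counts.mean()
    p_value = chi2.sf(D, df=n - 1)     # small p-value -> reject pure Poisson statistics
    return D, p_value

rng = np.random.default_rng(4)
poisson_counts = rng.poisson(100.0, size=50)
# overdispersed counts: the Poisson mean itself fluctuates between measurements
overdispersed = rng.poisson(rng.gamma(shape=25.0, scale=4.0, size=50))

print("Poisson      :", overdispersion_test(poisson_counts))
print("overdispersed:", overdispersion_test(overdispersed))
```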
Directory of Open Access Journals (Sweden)
Lee Kyungkoo
2008-01-01
Full Text Available An analytical method to model failure of steel beam plastic hinges due to local buckling and low-cycle fatigue is proposed herein. This method is based on the plastic collapse mechanism approach and a yield-line plastic hinge (YLPH model whose geometry is based on buckled shapes of beam plastic hinges observed in experiments. Two limit states, strength degradation failure induced by local buckling and low-cycle fatigue fracture, are considered. The proposed YLPH model was developed for FEMA-350 WUF-W, RBS and Free Flange connections and validated in comparisons to experimental data. This model can be used to estimate the seismic rotation capacity of fully restrained beam-column connections in special steel moment-resisting frames under both monotonic and cyclic loading conditions.
Particle electric dipole-moments
Energy Technology Data Exchange (ETDEWEB)
Pendlebury, J M [Sussex Univ., Brighton (United Kingdom)
1997-04-01
The incentive to detect particle electric dipole-moments, as a window on time-reversal violation, remains undiminished. Efforts to improve the measurements for the neutron, the electron and some nuclei are still making rapid progress as more powerful experimental methods are brought to bear. A new measurement for the neutron at ILL is presented. (author). 7 refs.
Nonlinear Radon Transform Using Zernike Moment for Shape Analysis
Directory of Open Access Journals (Sweden)
Ziping Ma
2013-01-01
Full Text Available We extend the linear Radon transform to a nonlinear space and propose a method that applies the nonlinear Radon transform to Zernike moments to extract shape descriptors. These descriptors are obtained by computing Zernike moments on the radial and angular coordinates of the pattern image's nonlinear Radon matrix. Theoretical and experimental results validate the effectiveness and the robustness of the method. The experimental results show that the performance of the proposed method in the nonlinear space equals or exceeds that in the linear Radon case.
Statistics of Monte Carlo methods used in radiation transport calculation
International Nuclear Information System (INIS)
Datta, D.
2009-01-01
Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo method. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section 1 introduces basic Monte Carlo and its classification in the respective fields. Section 2 describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section 3 provides the statistical uncertainty of Monte Carlo estimates, and Section 4 briefly describes the importance of variance reduction techniques while sampling particles such as photons or neutrons in the radiation transport process.
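The statistical uncertainty mentioned for Section 3 is usually summarized by the standard error of the tally mean and the corresponding relative error, which shrinks like 1/√N with the number of histories. A minimal, code-agnostic sketch with synthetic per-history scores:

```python
# Minimal sketch of the statistical uncertainty of a Monte Carlo estimate: the tally
# mean, its standard error s/sqrt(N), and the relative error used to judge convergence.
# The per-history scores below are synthetic, not from any particular transport code.
import numpy as np

rng = np.random.default_rng(5)
scores = rng.exponential(scale=0.3, size=100_000)   # per-history tally scores (synthetic)

N = scores.size
mean = scores.mean()
std_err = scores.std(ddof=1) / np.sqrt(N)
rel_err = std_err / mean

print(f"estimate = {mean:.5f} +/- {std_err:.5f} (relative error {rel_err:.4f})")
# the relative error shrinks like 1/sqrt(N); quadrupling N roughly halves it
```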
Numerically Stable Evaluation of Moments of Random Gram Matrices With Applications
Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2017-01-01
This paper focuses on the computation of the positive moments of one-side correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but their numerical evaluation is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.
Numerically Stable Evaluation of Moments of Random Gram Matrices With Applications
Elkhalil, Khalil
2017-07-31
This paper focuses on the computation of the positive moments of one-side correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but their numerical evaluation is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.
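For reference, the quantities in question are the averaged traces of powers of the Gram matrix, which can be checked numerically by brute force from the eigenvalues. The sketch below builds a one-side correlated Gram matrix and computes its first few empirical positive moments; it is a Monte Carlo reference, not the numerically stable closed-form method of the letter, and the correlation model is an illustrative assumption.

```python
# Sketch: empirical positive moments (1/p) tr(G^k) of a one-side correlated Gram matrix
# G = (1/n) A A^H with A = R^{1/2} X, estimated from eigenvalues (brute-force reference).
import numpy as np

rng = np.random.default_rng(6)
p, n = 64, 256
R = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # illustrative one-side correlation
R_sqrt = np.linalg.cholesky(R)

X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2.0)
A = R_sqrt @ X
G = (A @ A.conj().T) / n

eigvals = np.linalg.eigvalsh(G)
moments = [np.mean(eigvals**k) for k in range(1, 5)]   # first four positive moments
print(moments)
```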
Fantastic Learning Moments and Where to Find Them
Directory of Open Access Journals (Sweden)
Alexander Y. Sheng
2017-12-01
Full Text Available Introduction: Experiential learning is crucial for the development of all learners. Literature exploring how and where experiential learning happens in the modern clinical learning environment is sparse. We created a novel, web-based educational tool called “Learning Moment” (LM) to foster experiential learning among our learners. We used data captured by LM as a research database to determine where learning experiences were occurring within our emergency department (ED). We hypothesized that these moments would occur more frequently at the physician workstations as opposed to the bedside. Methods: We implemented LM at a single ED's medical student clerkship. The platform captured demographic data, including the student's intended specialty and year of training, as well as “learning moments”, defined as logs of learner self-selected learning experiences that included the clinical “pearl”, the clinical scenario, and the location where the “learning moment” occurred. We presented data using descriptive statistics with frequencies and percentages. Locations of learning experiences were stratified by specialty and training level. Results: A total of 323 “learning moments” were logged by 42 registered medical students (29 fourth-year medical students (MS4) and 13 MS3) over a six-month period. Over half (52.4%) intended to enter the field of emergency medicine (EM). Of these “learning moments”, 266 included optional location data. The most frequently reported location was patient rooms (135 “learning moments”, 50.8%). Physician workstations hosted the second most frequent “learning moments” (67, 25.2%). EM-bound students reported 43.7% of “learning moments” happening in patient rooms, followed by workstations (32.8%). On the other hand, non-EM-bound students reported that 66.3% of “learning moments” occurred in patient rooms and only 8.4% at workstations (p<0.001). Conclusion: LM was implemented within our ED as an innovative, web
A Calculation of the Angular Moments of the Kernel for a Monatomic Gas Scatterer
Energy Technology Data Exchange (ETDEWEB)
Haakansson, Rune
1964-07-15
B. Davison has given in an unpublished paper a method of calculating the moments of the monatomic gas scattering kernel. We present here this method and apply it to calculate the first four moments. Numerical results for these moments for the masses M = 1 and 3.6 are also given.
The estimation of the measurement results with using statistical methods
International Nuclear Information System (INIS)
Velychko, O. (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv (Ukraine)); Gordiyenko, T. (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv (Ukraine))
2015-01-01
A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, with the purpose of analyzing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed
The estimation of the measurement results with using statistical methods
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe various statistical methods that are applied for the management, control and improvement of processes, with the purpose of analyzing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.
Directory of Open Access Journals (Sweden)
Zaira M Alieva
2016-01-01
Full Text Available The article analyzes the application of mathematical and statistical methods in the analysis of socio-humanistic texts. The essence of mathematical and statistical methods is described, and examples of their use in the study of humanities and social phenomena are presented. The article considers the key issues faced by the expert in applying mathematical-statistical methods in the socio-humanitarian sphere, including the persistent contrast between the socio-humanitarian sciences and mathematics, the difficulty of identifying the object that is the bearer of the problem, and the need for a probabilistic approach. Conclusions are drawn from the results of the study.
Cutting-edge statistical methods for a life-course approach.
Bub, Kristen L; Ferretti, Larissa K
2014-01-01
Advances in research methods, data collection and record keeping, and statistical software have substantially increased our ability to conduct rigorous research across the lifespan. In this article, we review a set of cutting-edge statistical methods that life-course researchers can use to rigorously address their research questions. For each technique, we describe the method, highlight the benefits and unique attributes of the strategy, offer a step-by-step guide on how to conduct the analysis, and illustrate the technique using data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development. In addition, we recommend a set of technical and empirical readings for each technique. Our goal was not to address a substantive question of interest but instead to provide life-course researchers with a useful reference guide to cutting-edge statistical methods.
International Nuclear Information System (INIS)
RodrIguez, Arezky H; Handy, Carlos R; Trallero-Giner, C
2004-01-01
The suitability of conformal transformation (CT) analysis, and the eigenvalue moment method (EMM), for determining the eigenenergies and eigenfunctions of a quantum particle confined within a lens geometry, is reviewed and compared to the recent results by Even and Loualiche (2003 J. Phys.: Condens. Matter 15 8465). It is shown that CT and EMM define two accurate and versatile analytical/computational methods relevant to lens shaped regions of varying geometrical aspect ratios. (reply)
A robust statistical method for association-based eQTL analysis.
Directory of Open Access Journals (Sweden)
Ning Jiang
Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association studies (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for this influence, either by predicting the structure parameters or by correcting the inflation in the test statistic due to stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS arising from population structure by incorporating a control marker into the test for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for a varying effect of population stratification on different regions of the genome under study. The utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers improved statistical power for detecting genuine genetic associations in subpopulations and effective control of spurious associations stemming from population structure, when compared with two other popularly implemented methods in the GWAS literature.
Optimal allocation of testing resources for statistical simulations
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
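One plausible reading of "generate realizations of the population mean and covariance given an amount of available data", sketched below, is to draw covariance realizations from a Wishart distribution and mean realizations from a multivariate t-distribution centered on the sample statistics; the exact parameterization used in the paper, and its optimization loop, are not reproduced, and the input data here are synthetic.

```python
# Sketch: realizations of the population mean and covariance of two input variables
# from limited data, via Wishart (covariance) and multivariate t (mean) sampling.
# One plausible parameterization; the paper's exact choice may differ.
import numpy as np
from scipy.stats import wishart, multivariate_t

rng = np.random.default_rng(7)
data = rng.multivariate_normal([10.0, 2.0], [[1.0, 0.3], [0.3, 0.5]], size=15)  # limited data
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

n_real = 1000
cov_real = wishart.rvs(df=n - 1, scale=S / (n - 1), size=n_real, random_state=7)
mean_real = multivariate_t.rvs(loc=xbar, shape=S / n, df=n - 1, size=n_real, random_state=8)

# the spread of these realizations reflects how poorly the input distribution is known;
# adding experiments (larger n) shrinks it, which is what the allocation optimizes
print(mean_real.std(axis=0))
print(cov_real.std(axis=0).round(3))
```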
Energy Technology Data Exchange (ETDEWEB)
Zhang, G. P., E-mail: gpzhang@indstate.edu [Department of Physics, Indiana State University, Terre Haute, Indiana 47809 (United States); Si, M. S. [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China); George, Thomas F. [Office of the Chancellor and Center for Nanoscience, Departments of Chemistry and Biochemistry and Physics and Astronomy, University of Missouri-St. Louis, St. Louis, Missouri 63121 (United States)
2015-05-07
When a laser pulse excites a ferromagnet, its spin undergoes a dramatic change. The initial demagnetization process is very fast. Experimentally, it is found that the demagnetization time is related to the spin moment in the sample. In this study, we employ the first-principles method to directly simulate such a process. We use the fixed spin moment method to change the spin moment in ferromagnetic nickel, and then we employ the Liouville equation to couple the laser pulse to the system. We find that in general the dependence of demagnetization time on the spin moment is nonlinear: It decreases with the spin moment up to a point, after which an increase with the spin moment is observed, followed by a second decrease. To understand this, we employ an extended Heisenberg model, which includes both the exchange interaction and spin-orbit coupling. The model directly links the demagnetization rate to the spin moment itself and demonstrates analytically that the spin relaxes more slowly with a small spin moment. A future experimental test of our predictions is needed.
Entropy statistics and information theory
Frenken, K.; Hanusch, H.; Pyka, A.
2007-01-01
Entropy measures provide important tools to indicate variety in distributions at particular moments in time (e.g., market shares) and to analyse evolutionary processes over time (e.g., technical change). Importantly, entropy statistics are suitable to decomposition analysis, which renders the
Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei
2016-05-01
The regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and at providing scientific background and practical assistance in formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve the main goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most of the studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on the goodness-of-fit statistic and L-moment ratio diagrams, the generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fitted distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were bigger and the 90 % error bounds were wider with inter-site dependence than those without inter-site dependence for both the regional growth curve and the quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained, which indicated that there are two regions with highest precipitation
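The sample L-moments that drive the LMIF method are usually computed from probability-weighted moments (Hosking's unbiased estimators); the L-CV and L-skewness ratios then feed the homogeneity tests and distribution selection. The sketch below shows only this first step on synthetic annual maxima; the regionalization itself (homogeneity tests, growth curves, GEV/GNO fitting) is not reproduced.

```python
# Sketch of the first three unbiased sample L-moments via probability-weighted moments,
# plus L-CV and L-skewness, on synthetic annual precipitation maxima (illustrative data).
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3, l2 / l1, l3 / l2      # mean, L-scale, lambda_3, L-CV, L-skewness

rng = np.random.default_rng(8)
annual_max_precip = rng.gumbel(loc=80.0, scale=25.0, size=60)   # synthetic annual maxima (mm)
print(sample_l_moments(annual_max_precip))
```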
An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images
Directory of Open Access Journals (Sweden)
Khalid M. Hosny
2012-01-01
Full Text Available An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in the computational complexity. A fast 1D cascade algorithm was also employed to add further complexity reduction. A comparison with existing methods was performed, where the numerical experiments and the complexity analysis confirmed the efficiency of the proposed method, especially for images and objects of large size.
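For orientation, the 3D geometric moments that the Zernike moments are assembled from are simply m_pqr = Σ_x Σ_y Σ_z x^p y^q z^r f(x,y,z) over the voxels. The sketch below computes the low-order geometric moments of a toy voxel object by direct summation; the exact-integration, symmetry and cascade accelerations of the paper are deliberately omitted.

```python
# Sketch of low-order 3D geometric moments m_pqr over the voxels of a volumetric image,
# the building blocks of 3D Zernike moments. Direct summation only; no speed-ups.
import numpy as np

def geometric_moments_3d(volume, max_order=2):
    nx, ny, nz = volume.shape
    x, y, z = (np.arange(n, dtype=float) for n in (nx, ny, nz))
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            for r in range(max_order + 1 - p - q):
                weight = x[:, None, None]**p * y[None, :, None]**q * z[None, None, :]**r
                moments[(p, q, r)] = float(np.sum(weight * volume))
    return moments

# toy binary object: a solid ball in a 32^3 volume
grid = np.indices((32, 32, 32)).astype(float)
volume = (np.sum((grid - 15.5)**2, axis=0) <= 10.0**2).astype(float)

m = geometric_moments_3d(volume)
centroid = tuple(m[k] / m[(0, 0, 0)] for k in [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
print(centroid)    # approximately (15.5, 15.5, 15.5)
```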
Search for electric dipole moments in storage rings
Directory of Open Access Journals (Sweden)
Lenisa Paolo
2016-01-01
Full Text Available The JEDI collaboration aims at making use of storage rings to provide the most precise measurement of the electric dipole moments of hadrons. The method exploits a longitudinally polarized beam. The existence of an electric dipole moment would generate a torque slowly twisting the particle spin out of the plane of the storage ring into the vertical direction. The observation of a non-zero electric dipole moment would represent a clear sign of new physics beyond the Standard Model. Feasibility tests are presently under way at the COSY storage ring at Forschungszentrum Jülich (Germany) to develop the novel techniques to be implemented in a future dedicated storage ring.
On the pth moment stability of the binary airfoil induced by bounded noise
International Nuclear Information System (INIS)
Wu, Jiancheng; Li, Xuan; Liu, Xianbin
2017-01-01
Highlights: • We obtain the finite pth moment Lyapunov exponent for a binary airfoil subject to a bounded noise. • Based on a perturbation approach and the Green's function method, the second-order differential eigenvalue equation governing the moment Lyapunov exponent is established. • The types of singular points are investigated. • The eigenvalue problem is solved analytically and numerically. • The effects of noise and system parameters on the moment Lyapunov exponent and the stochastic stability of the system are discussed. - Abstract: In this paper, the stochastic stability of a binary airfoil subject to a bounded noise is studied through the determination of moment Lyapunov exponents. The bounded noise excitation considered here is often used as a realistic model of noise in many engineering applications. The partial differential eigenvalue problem governing the moment Lyapunov exponent is established. Via the Feller boundary classification, the types of singular points are discussed; for the system considered, singular points exist only at the end points. The fundamental methods used are the perturbation approach and the Green's function method. With these methods, the second-order expansions of the moment Lyapunov exponents are obtained, which are shown to be in good agreement with those obtained using Monte Carlo simulation. The effects of noise and system parameters on the moment Lyapunov exponent and the stochastic stability of the binary airfoil system are discussed.
Stochastic analysis of complex reaction networks using binomial moment equations.
Barzel, Baruch; Biham, Ofer
2012-09-01
The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.
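The binomial moments evolved by these equations are B_k = ⟨C(n, k)⟩, linear combinations of the ordinary moments (for instance B_1 = ⟨n⟩ and B_2 = (⟨n²⟩ − ⟨n⟩)/2). The sketch below only evaluates these moments for a given copy-number distribution, to make the definition concrete; the network solver and truncation scheme of the paper are not reproduced.

```python
# Sketch of binomial moments B_k = < C(n, k) > of a copy-number distribution,
# the quantities the binomial moment equations evolve. Example: Poisson distribution,
# for which B_k = lam^k / k! exactly.
import numpy as np
from math import comb, factorial

def binomial_moments(p_n, k_max=3):
    # p_n[i] = probability that the copy number equals i
    return [sum(comb(n, k) * p for n, p in enumerate(p_n)) for k in range(k_max + 1)]

lam, n_max = 4.0, 40
p_n = [np.exp(-lam) * lam**n / factorial(n) for n in range(n_max + 1)]   # truncated Poisson

B = binomial_moments(p_n)
print(B)   # expected: [1, 4, 8, 10.67] since B_k = lam^k / k!
```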
Application of statistical method for FBR plant transient computation
International Nuclear Information System (INIS)
Kikuchi, Norihiro; Mochizuki, Hiroyasu
2014-01-01
Highlights: • A statistical method with a large trial number, up to 10,000, is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as the plant transient. • A method for reducing the number of trials is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on the upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using pseudorandom numbers and the Monte Carlo method. The dSFMT (double-precision SIMD-oriented fast Mersenne Twister), a further developed version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady-state calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions be calculated precisely. A large number of
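The Box–Muller step referred to above converts pairs of uniform pseudorandom numbers into standard normal deviates via z = sqrt(-2 ln u1) cos(2π u2). A minimal sketch follows, with NumPy's default generator standing in for dSFMT; the perturbed parameter, its nominal value and its tolerance are illustrative assumptions, not the study's input deck.

```python
# Sketch of the Box-Muller transform turning uniform pseudorandom numbers into normally
# distributed perturbations of an input parameter (NumPy generator stands in for dSFMT).
import numpy as np

def box_muller(u1, u2):
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2.0 * np.pi * u2), r * np.sin(2.0 * np.pi * u2)

rng = np.random.default_rng(9)
n_trials = 10_000
u1, u2 = rng.random(n_trials), rng.random(n_trials)
z1, _ = box_muller(u1, u2)

nominal, rel_std = 1.0, 0.05            # e.g. a correlation factor with a 5% (1 sigma) tolerance
perturbed = nominal * (1.0 + rel_std * z1)   # one perturbed value per computation deck
print(perturbed.mean(), perturbed.std())
```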
A κ-generalized statistical mechanics approach to income analysis
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
A κ-generalized statistical mechanics approach to income analysis
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2009-01-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful
International Nuclear Information System (INIS)
BURKARDT, JOHN; GUNZBURGER, MAX; PETERSON, JANET; BRANNON, REBECCA M.
2002-01-01
The theory, numerical algorithm, and user documentation are provided for a new “Centroidal Voronoi Tessellation (CVT)” method of filling a region of space (2D or 3D) with particles at any desired particle density. “Clumping” is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called “mesh-free” method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported
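One standard statistical route to a CVT, sketched below, is a probabilistic Lloyd-type iteration: random sample points are assigned to their nearest generator, and each generator is moved to the sample-estimated centroid of its Voronoi cell, so no explicit Voronoi diagram is ever built. This is illustrative of the CVT idea only, not the report's algorithm or code; all counts and the uniform density are assumptions.

```python
# Sketch of a probabilistic (Lloyd-type) CVT iteration in 2D: generators move to the
# sample-estimated centroids of their Voronoi cells using nearest-generator queries only.
import numpy as np

rng = np.random.default_rng(10)
n_particles, n_samples, n_iters = 100, 20_000, 30

generators = rng.random((n_particles, 2))          # initial particles in the unit square
for _ in range(n_iters):
    samples = rng.random((n_samples, 2))           # uniform density; any target density could be used
    d2 = ((samples[:, None, :] - generators[None, :, :])**2).sum(axis=2)
    owner = d2.argmin(axis=1)                      # nearest generator for each sample point
    for i in range(n_particles):
        cell = samples[owner == i]
        if len(cell):
            generators[i] = cell.mean(axis=0)      # move generator to its estimated centroid

# generators are now roughly evenly spread ("clumping" avoided)
print(generators.min(axis=0), generators.max(axis=0))
```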
Teachable moments for health behavior change and intermediate patient outcomes.
Flocke, Susan A; Clark, Elizabeth; Antognoli, Elizabeth; Mason, Mary Jane; Lawson, Peter J; Smith, Samantha; Cohen, Deborah J
2014-07-01
Teachable moments (TM) are opportunities created through physician-patient interaction and used to encourage patients to change unhealthy behaviors. We examine the effectiveness of TMs in increasing patients' recall of advice, motivation to modify behavior, and behavior change. A mixed-method observational study of 811 patient visits to 28 primary care clinicians used audio-recordings of visits to identify TMs and other types of advice in health behavior change talk. Patient surveys assessed smoking, exercise, fruit/vegetable consumption, height, weight, and readiness for change prior to the observed visit and 6 weeks post-visit. Compared to other identified categories of advice (i.e. missed opportunities or teachable moment attempts), recall was greatest after TMs occurred (83% vs. 49-74%). TMs had the greatest proportion of patients reporting a change in importance and confidence and an increase in readiness to change; however, the differences were small. TMs had greater positive behavior change scores than other categories of advice; however, this pattern was statistically non-significant and was not observed for BMI change. TMs have a greater positive influence on several intermediate markers of patient behavior change compared to other categories of advice. TMs show promise as an approach for clinicians to discuss behavior change with patients efficiently and effectively. Copyright © 2014. Published by Elsevier Ireland Ltd.
Directory of Open Access Journals (Sweden)
Chan Jasper FW
2011-05-01
Full Text Available Background: MedSense is an electronic hand hygiene compliance monitoring system that provides Infection Control Practitioners with continuous access to hand hygiene compliance information by monitoring Moments 1 and 4 of the WHO “My 5 Moments for Hand Hygiene” guidelines. Unlike previous electronic monitoring systems, MedSense operates in open cubicles with multiple beds and does not disrupt existing workflows. Methods: This study was conducted in a 6-bed neurosurgical intensive care unit with technical development and evaluation phases. Healthcare workers (HCWs) wore an electronic device in the style of an identity badge to detect hand hygiene opportunities and compliance. We compared the compliance determined by the system and by an infection control nurse. At the same time, the system assessed compliance by time of day, day of week, work shift, professional category of HCWs, and individual subject, while the workload of HCWs was monitored by measuring the amount of time they spent in patient zones. Results: During the three-month evaluation phase, the system identified 13,694 hand hygiene opportunities from 17 nurses, 3 physiotherapists, and 1 healthcare assistant, resulting in an overall compliance of 35.1% for the unit. The per-indication compliance for Moment 1, Moment 4, and simultaneous Moments 1 and 4 was 21.3% (95% CI: 19.0, 23.6), 39.6% (95% CI: 37.3, 41.9), and 49.2% (95% CI: 46.6, 51.8), respectively, and these were all statistically significantly different (p Conclusion: MedSense provides an unobtrusive and objective measurement of hand hygiene compliance. The information is important for staff training by the infection control team and allocation of manpower by hospital administration.
Methods for estimating low-flow statistics for Massachusetts streams
Ries, Kernell G.; Friesz, Paul J.
2000-01-01
Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
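The drainage-area ratio method described above scales the low-flow statistic at an index station by the ratio of drainage areas. A minimal sketch with illustrative numbers, including the 0.3-1.5 applicability range quoted in the report:

```python
# Sketch of the drainage-area ratio estimate of a low-flow statistic at an ungaged site.
# Numbers are illustrative; outside the 0.3-1.5 ratio range quoted in the report,
# a regression estimate may be preferable.
def drainage_area_ratio_estimate(q_index, area_index_km2, area_ungaged_km2):
    ratio = area_ungaged_km2 / area_index_km2
    if not 0.3 <= ratio <= 1.5:
        raise ValueError("area ratio outside 0.3-1.5; consider a regression estimate instead")
    return q_index * ratio

# e.g. a 7-day, 10-year low flow of 0.85 m^3/s at an index station draining 120 km^2
print(drainage_area_ratio_estimate(q_index=0.85, area_index_km2=120.0, area_ungaged_km2=90.0))
```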
Multivariate statistical methods and data mining in particle physics (4/4)
CERN. Geneva
2008-01-01
The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.
Multivariate statistical methods and data mining in particle physics (2/4)
CERN. Geneva
2008-01-01
The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.
Multivariate statistical methods and data mining in particle physics (1/4)
CERN. Geneva
2008-01-01
The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.
Moment Magnitude Determination for Marmara Region-Turkey Using Displacement Spectra
Köseoǧlu Küsmezer, Ayşegül; Meral Özel, Nurcan; Barış, Şerif; Üçer, S. Balamir; Ottemöller, Lars
2010-05-01
The main purpose of this study is to determine the moment magnitude Mw using displacement source spectra of earthquakes that occurred in the Marmara Region. The region is the most densely populated and fast-developing part of Turkey, bounded by 39.0°N to 42.0°N and 26.0°E to 32.0°E. It has experienced major earthquake disasters during the last four centuries, and probabilistic seismic hazard studies show that the region has a significant probability of producing an M>7 earthquake within the next years. Because the seismic moment is a direct measurement of earthquake size (rupture area and static displacement) and does not saturate, spectral analysis at local distances is a very useful method that allows the reliable determination of seismic moment and moment magnitude. We have used the converging grid search method developed by L. Ottemöller and J. Havskov (2008) for the automatic determination of moment magnitude at local distances. For data preparation, the time-domain signal of S waves was extracted from the vertical component seismograms. Data were transformed from the time to the frequency domain by applying the standard fast Fourier transform (FFT). Source parameters and moment magnitudes of earthquakes are determined by applying a spectral fitting procedure to the classical Brune model. The method is first manually and then automatically performed on the source spectrum of S waves within 20 s. M0 and fc (Aki, 1967; Brune, 1970) were determined using a method in which the model space is divided into a grid and the error function is evaluated for all grid points; a smaller grid with denser spacing around the best solution is generated with an iterative procedure. The moment magnitudes of the earthquakes have been calculated according to the scale of Kanamori (1977) and Hanks and Kanamori (1979). A data set of 279 events recorded on broadband velocity seismograms extracted from the KOERI (Kandilli Observatory and Earthquake Research Institute) seismic network were
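As a rough illustration of the spectral-fitting idea summarized above, the sketch below fits a Brune-type source spectrum with a coarse grid search and converts a seismic moment to moment magnitude with the Hanks-Kanamori relation. It is not the authors' converging grid search, and the conversion from the spectral plateau to M0 (which needs density, shear-wave velocity, distance and radiation-pattern factors) is deliberately omitted; the grids and the demo spectrum are hypothetical.

```python
import numpy as np

def brune_spectrum(f, omega0, fc):
    """Brune (1970) far-field displacement source spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

def grid_search_fit(freqs, observed_spec, omega0_grid, fc_grid):
    """Coarse grid search for (omega0, fc) minimising the log-spectral misfit."""
    best = None
    for omega0 in omega0_grid:
        for fc in fc_grid:
            model = brune_spectrum(freqs, omega0, fc)
            err = np.mean((np.log10(observed_spec) - np.log10(model)) ** 2)
            if best is None or err < best[0]:
                best = (err, omega0, fc)
    return best[1], best[2]

def moment_magnitude(m0_newton_metres):
    """Hanks & Kanamori (1979) moment magnitude from seismic moment in N*m."""
    return (2.0 / 3.0) * (np.log10(m0_newton_metres) - 9.1)

# Synthetic demonstration: recover plateau level and corner frequency, then Mw.
freqs = np.logspace(-1, 1, 50)
observed = brune_spectrum(freqs, 3e-5, 2.0)
print(grid_search_fit(freqs, observed, np.logspace(-6, -3, 30), np.linspace(0.5, 10, 30)))
print(moment_magnitude(1.1e17))  # about Mw 5.3
```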
Simple formulae for interpretation of the dead time α (first moment) method of reactor noise
International Nuclear Information System (INIS)
Degweker, S.B.
1999-01-01
The Markov chain approach for solving problems related to the presence of a non-extending dead time in a particle counting circuit with time-correlated pulses was developed in an earlier paper. The formalism was applied to, among others, the dead time α (first moment) method of reactor noise. For this problem, however, the solution obtained was largely numerical in character and had a tendency to break down for systems close to criticality. In the present paper, simple analytical expressions are derived for the count rate and L_ex, the quantities of interest in this method. Comparisons with Monte Carlo simulations show that these formulae are accurate in the range of system parameters of practical interest
Statistical methods for evaluating the attainment of cleanup standards
Energy Technology Data Exchange (ETDEWEB)
Gilbert, R.O.; Simpson, J.C.
1992-12-01
This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.
Callahan, Patrick Gregory
different alloying additions in each sample cause differences in lattice misfit and γ' precipitate shape morphology, varying from spherical, to cuboidal, to intermediate morphologies. 3-D datasets from each alloy were collected via automated Focused Ion Beam (FIB) serial sectioning. Digital image processing methods are used to register, clean, and segment the images in each of the datasets in order to digitally reconstruct the microstructures in 3-D. The distributions of the shape descriptors of the γ' precipitates from each microstructure are compared using the Hellinger distance. The Hellinger distance determines if there are quantitative differences in the γ' precipitate morphologies, or if they are the same. It was found that comparing distributions of the second order affine moment invariant Ω3 with the Hellinger distance is sufficient for recognizing that alloys have different compositions. The secondary γ' precipitate shapes in two Ni-based superalloys, one from a UM-F20 alloy with cuboidal precipitates, and one from a Rene-88 DT alloy with more complex dendritic precipitates, have been decomposed and reconstructed using 3-D Zernike functions, which are orthogonal over the unit ball; they can be used to decompose an arbitrary shape scaled to fit inside an embedding sphere into spherical harmonics. Relatively complex shapes can be decomposed into, and reconstructed from, 3-D Zernike functions. In this thesis we show the 3-D Zernike functions and a method to derive expressions for Zernike moments from the more familiar geometric moments. Then Zernike moment reconstructions up to order 20 of precipitates from the two Ni-base superalloys are presented. The Zernike moment reconstructions were characterized using second order moment invariants, and have yielded good reconstructions of cuboidal precipitates. More orders of Zernike moments may be needed to accurately reconstruct the dendritic precipitates. We also introduce the concept of moment invariant density maps
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Statistical methods for accurately determining criticality code bias
International Nuclear Information System (INIS)
Trumble, E.F.; Kimball, K.D.
1997-01-01
A system of statistically treating validation calculations for the purpose of determining computer code bias is provided in this paper. The following statistical treatments are described: weighted regression analysis, lower tolerance limit, lower tolerance band, and lower confidence band. These methods meet the criticality code validation requirements of ANS 8.1. 8 refs., 5 figs., 4 tabs
Jensen, Kevin L.; Finkenstadt, Daniel; Shabaev, Andrew; Lambrakos, Samuel G.; Moody, Nathan A.; Petillo, John J.; Yamaguchi, Hisato; Liu, Fangze
2018-01-01
Recent experimental measurements of a bulk material covered with a small number of graphene layers reported by Yamaguchi et al. [NPJ 2D Mater. Appl. 1, 12 (2017)] (on bialkali) and Liu et al. [Appl. Phys. Lett. 110, 041607 (2017)] (on copper) and the needs of emission models in beam optics codes have led to substantial changes in a Moments model of photoemission. The changes account for (i) a barrier profile and density of states factor based on density functional theory (DFT) evaluations, (ii) a Drude-Lorentz model of the optical constants and laser penetration depth, and (iii) a transmission probability evaluated by an Airy Transfer Matrix Approach. Importantly, the DFT results lead to a surface barrier profile of a shape similar to both resonant barriers and reflectionless wells: the associated quantum mechanical transmission probabilities are shown to be comparable to those recently required to enable the Moments (and Three Step) model to match experimental data but for reasons very different than the assumption by conventional wisdom that a barrier is responsible. The substantial modifications of the Moments model components, motivated by computational materials methods, are developed. The results prepare the Moments model for use in treating heterostructures and discrete energy level systems (e.g., quantum dots) proposed for decoupling the opposing metrics of performance that undermine the performance of advanced light sources like the x-ray Free Electron Laser. The consequences of the modified components on quantum yield, emittance, and emission models needed by beam optics codes are discussed.
Statistical assessment of numerous Monte Carlo tallies
International Nuclear Information System (INIS)
Kiedrowski, Brian C.; Solomon, Clell J.
2011-01-01
Four tests are developed to assess the statistical reliability of collections of tallies that number in thousands or greater. To this end, the relative-variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality. (author)
Statistical methods for quality assurance
International Nuclear Information System (INIS)
Rinne, H.; Mittag, H.J.
1989-01-01
This is the first German-language textbook on quality assurance and the fundamental statistical methods that is suitable for private study. The material for this book has been developed from a course of Hagen Open University and is characterized by a particularly careful didactical design which is achieved and supported by numerous illustrations and photographs, more than 100 exercises with complete problem solutions, many fully displayed calculation examples, surveys fostering a comprehensive approach, bibliography with comments. The textbook has an eye to practice and applications, and great care has been taken by the authors to avoid abstraction wherever appropriate, to explain the proper conditions of application of the testing methods described, and to give guidance for suitable interpretation of results. The testing methods explained also include latest developments and research results in order to foster their adoption in practice. (orig.) [de
De-trending of wind speed variance based on first-order and second-order statistical moments only
DEFF Research Database (Denmark)
Larsen, Gunner Chr.; Hansen, Kurt Schaldemose
2014-01-01
The lack of efficient methods for de-trending of wind speed resource data may lead to erroneous wind turbine fatigue and ultimate load predictions. The present paper presents two models, which quantify the effect of an assumed linear trend on wind speed standard deviations as based on available...... statistical data only. The first model is a pure time series analysis approach, which quantifies the effect of non-stationary characteristics of ensemble mean wind speeds on the estimated wind speed standard deviations as based on mean wind speed statistics only. This model is applicable to statistics...... of arbitrary types of time series. The second model uses the full set of information and includes thus additionally observed wind speed standard deviations to estimate the effect of ensemble mean non-stationarities on wind speed standard deviations. This model takes advantage of a simple physical relationship...
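One way to picture the effect the abstract quantifies: a linear ramp of slope a over an averaging period T contributes a²T²/12 to the measured variance. The sketch below removes that contribution using a slope estimated from the neighbouring 10-minute ensemble means; it is a simplified illustration under these assumptions, not the authors' estimators, and the input values are hypothetical.

```python
import numpy as np

def detrended_std(sigma_obs, mean_prev, mean_next, period_s=600.0):
    """Remove the variance contribution of an assumed linear trend from an
    observed 10-minute wind speed standard deviation.

    The trend slope is estimated from the surrounding ensemble means; a linear
    ramp of slope a over a period T adds a**2 * T**2 / 12 to the variance.
    Simplified illustration only, not the paper's exact estimator.
    """
    slope = (mean_next - mean_prev) / (2.0 * period_s)   # (m/s) per second
    trend_var = slope ** 2 * period_s ** 2 / 12.0
    return np.sqrt(max(sigma_obs ** 2 - trend_var, 0.0))

# 10-minute standard deviation of 1.20 m/s, with neighbouring means of 7 and 9 m/s.
print(detrended_std(1.20, 7.0, 9.0, 600.0))  # slightly below 1.20
```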
First observation of magnetic moment precession of channeled particles in bent crystals
International Nuclear Information System (INIS)
Chen, D.; Albuquerque, I.F.; Baublis, V.V.; Bondar, N.F.; Carrigan, R.A. Jr.; Cooper, P.S.; Lisheng, D.; Denisov, A.S.; Dobrovolsky, A.V.; Dubbs, T.; Endler, A.M.F.; Escobar, C.O.; Foucher, M.; Golovtsov, V.L.; Goritchev, P.A.; Gottschalk, H.; Gouffon, P.; Grachev, V.T.; Khanzadeev, A.V.; Kubantsev, M.A.; Kuropatkin, N.P.; Lach, J.; Lang Pengfei; Lebedenko, V.N.; Li Chengze; Li Yunshan; Mahon, J.R.P.; McCliment, E.; Morelos, A.; Newsom, C.; Pommot Maia, M.C.; Samsonov, V.M.; Schegelsky, V.A.; Shi Huanzhang; Smith, V.J.; Sun, C.R.; Tang Fukun; Terentyev, N.K.; Timm, S.; Tkatch, I.I.; Uvarov, L.N.; Vorobyov, A.A.; Yan Jie; Zhao Wenheng; Zheng Shuchen; Zhong Yuanyuan
1992-01-01
Spin precession of channeled particles in bent crystals has been observed for the first time. Polarized Σ+ hyperons were channeled using bent Si crystals. These crystals provided an effective magnetic field of 45 T, which resulted in a measured spin precession of 60±17 degrees. This agrees with the prediction of 62±2 degrees using the world average of Σ+ magnetic moment measurements. This new technique gives a Σ+ magnetic moment of (2.40±0.46±0.40)μN, where the quoted uncertainties are statistical and systematic, respectively. We see no evidence of depolarization in the channeling process
DEFF Research Database (Denmark)
Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter
2004-01-01
An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27...... of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained...
Understanding advanced statistical methods
Westfall, Peter
2013-01-01
Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...
On the moment of inertia of a quantum harmonic oscillator
International Nuclear Information System (INIS)
Khamzin, A. A.; Sitdikov, A. S.; Nikitin, A. S.; Roganov, D. A.
2013-01-01
An original method for calculating the moment of inertia of the collective rotation of a nucleus on the basis of the cranking model with the harmonic-oscillator Hamiltonian at arbitrary frequencies of rotation and finite temperature is proposed. In the adiabatic limit, an oscillating chemical-potential dependence of the moment of inertia is obtained by means of analytic calculations. The oscillations of the moment of inertia become more pronounced as deformations approach the spherical limit and decrease exponentially with increasing temperature.
Moment matrices, border bases and radical computation
B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)
2013-01-01
In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and
Moment matrices, border bases and radical computation
B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)
2011-01-01
In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and
Statistical trend analysis methods for temporal phenomena
Energy Technology Data Exchange (ETDEWEB)
Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)
1997-04-01
We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
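Of the non-parametric tests listed above, the Cox-Stuart test is the simplest to state: pair the first half of the series with the second half and apply a sign test to the differences. A minimal sketch follows (using SciPy's binomial test, available as binomtest in SciPy 1.7 and later); the example data are hypothetical.

```python
from scipy.stats import binomtest

def cox_stuart_test(x):
    """Cox-Stuart sign test for a monotone trend in a sequence of observations."""
    n = len(x)
    c = n // 2
    first, second = x[:c], x[-c:]                 # drop the middle value when n is odd
    diffs = [b - a for a, b in zip(first, second) if b != a]   # ties are discarded
    pos = sum(d > 0 for d in diffs)
    return binomtest(pos, n=len(diffs), p=0.5).pvalue

print(cox_stuart_test([3, 4, 4, 5, 6, 6, 7, 9]))  # p = 0.125 for this short, monotone series
```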
Statistical trend analysis methods for temporal phenomena
International Nuclear Information System (INIS)
Lehtinen, E.; Pulkkinen, U.; Poern, K.
1997-04-01
We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods
Methods and statistics for combining motif match scores.
Bailey, T L; Gribskov, M
1998-01-01
Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
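Method 3 relies on the distribution of a product of k independent uniform p-values: if the product is q, the combined p-value is q multiplied by the sum of (-ln q)^i / i! for i = 0 to k-1. The sketch below evaluates that closed form; it illustrates the combination rule only, not the MAST scoring pipeline, and the example p-values are hypothetical.

```python
import math

def combined_p_value(p_values):
    """P-value of the product of k independent p-values:
    P = q * sum_{i=0}^{k-1} (-ln q)^i / i!, where q is the product."""
    k = len(p_values)
    q = math.prod(p_values)
    log_term = -math.log(q)
    return q * sum(log_term ** i / math.factorial(i) for i in range(k))

print(combined_p_value([0.01, 0.20, 0.05]))  # ~0.0053
```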
A Stochastic Fractional Dynamics Model of Rainfall Statistics
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior that involve occasional anomalously correlated rain events present a challenge for the model.
Energy Technology Data Exchange (ETDEWEB)
Berkolaiko, G. [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J. [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
Statistical methods and challenges in connectome genetics
Pluta, Dustin; Yu, Zhaoxia; Shen, Tong; Chen, Chuansheng; Xue, Gui; Ombao, Hernando
2018-01-01
The study of genetic influences on brain connectivity, known as connectome genetics, is an exciting new direction of research in imaging genetics. We here review recent results and current statistical methods in this area, and discuss some
Wave packet methods for the direct calculation of energy-transfer moments in molecular collisions
International Nuclear Information System (INIS)
Bradley, K.S.; Schatz, G.C.; Balint-Kurti, G.G.
1999-01-01
The authors present a new wave packet based theory for the direct calculation of energy-transfer moments in molecular collision processes. This theory does not contain any explicit reference to final state information associated with the collision dynamics, thereby avoiding the need for determining vibration-rotation bound states (other than the initial state) for the molecules undergoing collision and also avoiding the calculation of state-to-state transition probabilities. The theory applies to energy-transfer moments of any order, and it generates moments for a wide range of translational energies in a single calculation. Two applications of the theory are made that demonstrate its viability; one is to collinear He + H₂ and the other to collinear He + CS₂ (with two active vibrational modes in CS₂). The results of these applications agree well with earlier results based on explicit calculation of transition probabilities
Top quark amplitudes with an anomalous magnetic moment
International Nuclear Information System (INIS)
Larkoski, Andrew J.; Peskin, Michael E.
2011-01-01
The anomalous magnetic moment of the top quark may be measured during the first run of the LHC at 7 TeV. For these measurements, it will be useful to have available tree amplitudes with tt and arbitrarily many photons and gluons, including both QED and color anomalous magnetic moments. In this paper, we present a method for computing these amplitudes using the Britto-Cachazo-Feng-Witten recursion formula. Because we deal with an effective theory with higher-dimension couplings, there are roadblocks to a direct computation with the Britto-Cachazo-Feng-Witten method. We evade these by using an auxiliary scalar theory to compute a subset of the amplitudes.
Moments of inertia in 162Yb at very high spins
International Nuclear Information System (INIS)
Simon, R.S.; Banaschik, M.V.; Colombani, P.; Soroka, D.P.; Stephens, F.S.; Diamond, R.M.
1976-01-01
Two methods have been used to obtain values of the effective moment of inertia of very-high-spin (20ℏ-50ℏ) states populated in heavy-ion compound-nucleus reactions. The 162Yb nucleus studied has effective moments of inertia smaller than, but approaching, the rigid-body estimate
Higher-order force moments of active particles
Nasouri, Babak; Elfring, Gwynn J.
2018-04-01
Active particles moving through fluids generate disturbance flows due to their activity. For simplicity, the induced flow field is often modeled by the leading terms in a far-field approximation of the Stokes equations, whose coefficients are the force, torque, and stresslet (zeroth- and first-order force moments) of the active particle. This level of approximation is quite useful, but may also fail to predict more complex behaviors that are observed experimentally. In this study, to provide a better approximation, we evaluate the contribution of the second-order force moments to the flow field and, by reciprocal theorem, present explicit formulas for the stresslet dipole, rotlet dipole, and potential dipole for an arbitrarily shaped active particle. As examples of this method, we derive modified Faxén laws for active spherical particles and resolve higher-order moments for active rod-like particles.
Directory of Open Access Journals (Sweden)
Yan Chen
2017-03-01
Full Text Available Based on the vectorised and cache-optimised kernel, a parallel lower-upper (LU) decomposition with a novel communication-avoiding pivoting scheme is developed to solve dense complex matrix equations generated by the method of moments. The fine-grain data rearrangement and assembler instructions are adopted to reduce memory accessing times and improve CPU cache utilisation, which also facilitate vectorisation of the code. Through grouping processes in a binary tree, a parallel pivoting scheme is designed to optimise the communication pattern and thus reduce the solving time of the proposed solver. Two large electromagnetic radiation problems are solved on two supercomputers, respectively, and the numerical results demonstrate that the proposed method outperforms those in open source and commercial libraries.
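The core numerical task described above is a pivoted LU factorisation of a dense complex impedance matrix followed by triangular solves. A serial stand-in using SciPy is sketched below on a toy-sized random matrix; the parallel, communication-avoiding pivoting of the paper is not reproduced, and the matrix and excitation are synthetic.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500                                                               # toy size; real MoM systems are far larger
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))    # stand-in impedance matrix
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)              # stand-in excitation vector

lu, piv = lu_factor(Z)              # LU with partial pivoting, done once
i_coeff = lu_solve((lu, piv), v)    # surface-current coefficients for this excitation
print(np.allclose(Z @ i_coeff, v))  # True: the factorisation solves the system
```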
Quantitative EEG Applying the Statistical Recognition Pattern Method
DEFF Research Database (Denmark)
Engedal, Knut; Snaedal, Jon; Hoegh, Peter
2015-01-01
BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...
On the multipole moments of charge distributions
International Nuclear Information System (INIS)
Khare, P.L.
1977-01-01
There are two different standard methods for showing the equivalence of a charge distribution in a small volume tau surrounding a point O, to the superposition of a monopole, a dipole, a quadrupole and poles of higher moments at the point O: (a) to show that the electrostatic potential due to the charge distribution at an outside point is the same as due to these superposed multipoles (including a monopole). (b) to show that the energy of interaction of an external field with the charge distribution is the same as with the superposed equivalent monopole and multipoles. Neither of these methods gives a physical picture of the equivalence of a charge distribution to the superposition of different multipoles. An attempt is made to interpret in physical terms the emergence of the multipoles of different order, that are equivalent to a charge distribution and to show that the magnitudes of the moments of these multipoles are in agreement with the results of both the approaches (a) and (b). This physical interpretation also helps to understand, in a simple manner, some of the well-known properties of the multipole moments of atoms and nuclei. (K.B.)
Moment matrices, border bases and radical computation
Lasserre, J.B.; Laurent, M.; Mourrain, B.; Rostalski, P.; Trébuchet, P.
2013-01-01
In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite
Torsional Moment Measurement on Bucket Wheel Shaft of Giant Machine
Directory of Open Access Journals (Sweden)
Jiří FRIES
2011-06-01
Full Text Available Bucket wheel loading at the present time (torsional moment on the wheel shaft, peripheral cutting force) is determined from the electromotor incoming power or from the reaction force measured on the gearbox hinge. Both methods are affected by the absorption of the driving units' steel construction and by the inertial forces of the motor's rotating parts. The article describes a direct method of torsional moment measurement, which eliminates the mentioned unfavourable impacts except for the absorption of the steel construction of the bucket wheel itself.
Finger crease pattern recognition using Legendre moments and principal component analysis
Luo, Rongfang; Lin, Tusheng
2007-03-01
The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to a moment feature matrix rather than the original image matrix to achieve the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% has been obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
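For readers unfamiliar with Legendre moments, the sketch below evaluates the (p, q) moment of an image mapped onto [-1, 1] x [-1, 1]. It is a simplified stand-in for the feature-extraction step (the paper applies the moments under a Radon transform and follows with PCA, both omitted here), and the toy image is hypothetical.

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_moment(img, p, q):
    """Discrete approximation of the (p, q) Legendre moment of a grey-level image
    whose pixel grid is mapped onto [-1, 1] x [-1, 1]."""
    ny, nx = img.shape
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    px = eval_legendre(p, x)
    qy = eval_legendre(q, y)
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    dx, dy = 2.0 / (nx - 1), 2.0 / (ny - 1)
    return norm * (qy[:, None] * px[None, :] * img).sum() * dx * dy

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0            # toy square "ROI"
print(legendre_moment(img, 2, 0))  # low-order moment of the toy image
```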
Longitudinal data analysis a handbook of modern statistical methods
Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert
2008-01-01
Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint
Statistical methods for assessing agreement between continuous measurements
DEFF Research Database (Denmark)
Sokolowski, Ineta; Hansen, Rikke Pilegaard; Vedsted, Peter
Background: Clinical research often involves study of agreement amongst observers. Agreement can be measured in different ways, and one can obtain quite different values depending on which method one uses. Objective: We review the approaches that have been discussed to assess the agreement between...... continuous measures and discuss their strengths and weaknesses. Different methods are illustrated using actual data from the `Delay in diagnosis of cancer in general practice´ project in Aarhus, Denmark. Subjects and Methods: We use weighted kappa-statistic, intraclass correlation coefficient (ICC......), concordance coefficient, Bland-Altman limits of agreement and percentage of agreement to assess the agreement between patient reported delay and doctor reported delay in diagnosis of cancer in general practice. Key messages: The correct statistical approach is not obvious. Many studies give the product...
Directory of Open Access Journals (Sweden)
A. Becker
2007-06-01
Full Text Available In this paper a hybrid method combining the Time-Domain Method of Moments (TD-MoM, the Time-Domain Uniform Theory of Diffraction (TD-UTD and the Finite-Difference Time-Domain Method (FDTD is presented. When applying this new hybrid method, thin-wire antennas are modeled with the TD-MoM, inhomogeneous bodies are modelled with the FDTD and large perfectly conducting plates are modelled with the TD-UTD. All inhomogeneous bodies are enclosed in a so-called FDTD-volume and the thin-wire antennas can be embedded into this volume or can lie outside. The latter avoids the simulation of white space between antennas and inhomogeneous bodies. If the antennas are positioned into the FDTD-volume, their discretization does not need to agree with the grid of the FDTD. By using the TD-UTD large perfectly conducting plates can be considered efficiently in the solution-procedure. Thus this hybrid method allows time-domain simulations of problems including very different classes of objects, applying the respective most appropriate numerical techniques to every object.
Moment-based metrics for global sensitivity analysis of hydrological systems
Directory of Open Access Journals (Sweden)
A. Dell'Oca
2017-12-01
Full Text Available We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE, other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
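A crude Monte Carlo analogue of a mean-oriented metric of this type can be obtained by binning: condition the output on quantile bins of one parameter, measure how far the conditional mean drifts from the unconditional mean, and average the absolute shift. The sketch below does this for a toy model; the paper instead evaluates its metrics through a gPCE surrogate, so this is only an illustration of the idea, and the model and parameter names are hypothetical.

```python
import numpy as np

def mean_sensitivity(x, y, bins=20):
    """Expected absolute shift of E[y] when parameter x is (approximately) fixed,
    normalised by |E[y]|; conditioning is approximated by quantile bins."""
    mean_y = y.mean()
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return np.mean(np.abs(mean_y - cond_means)) / abs(mean_y)

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 20000), rng.uniform(-1, 1, 20000)
y = 3.0 + 2.0 * x1 + 0.1 * x2 ** 2           # toy model: x1 dominates the mean response
print(mean_sensitivity(x1, y), mean_sensitivity(x2, y))
```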
Sensitivity of (α,α') cross sections to excited-state quadrupole moments
International Nuclear Information System (INIS)
Baker, F.T.; Scott, A.; Ronningen, R.M.; Hamilton, J.H.; Kruse, T.H.; Suchannek, R.; Savin, W.
1977-01-01
Inelastic α particle scattering at 21 and 24 MeV has been used to estimate the electric quadrupole moment of the second 2 + state in 180 Hf. Sensitivity to the assumed quadrupole moment is due almost entirely to reorientation via the nuclear force. Results suggest that the technique may be a useful method of estimating excited state quadrupole moments, particularly for states with high excitation energies or with J greater than 2
Statistical and Machine Learning forecasting methods: Concerns and ways forward
Makridakis, Spyros; Assimakopoulos, Vassilios
2018-01-01
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
Statistical and Machine Learning forecasting methods: Concerns and ways forward.
Makridakis, Spyros; Spiliotis, Evangelos; Assimakopoulos, Vassilios
2018-01-01
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions.
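For context, the sketch below shows one of the simplest statistical benchmarks (simple exponential smoothing) together with the sMAPE accuracy measure used in the M-competitions; the series is a toy example and this is not the M3 evaluation protocol.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 200.0 * np.mean(np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast)))

def ses_forecast(history, horizon, alpha=0.3):
    """Simple exponential smoothing: flat forecast at the final smoothed level."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return np.full(horizon, level)

history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]   # toy monthly series
actual_future = [104, 118, 115, 126, 141, 135]
print(smape(actual_future, ses_forecast(history, 6)))
```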
Extraction of moments of net-particle event-by-event fluctuations in the CBM experiment
Energy Technology Data Exchange (ETDEWEB)
Vovchenko, Volodymyr [Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); Goethe University, Frankfurt am Main (Germany); Taras Shevchenko University, Kyiv (Ukraine); Kisel, Ivan [Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); Goethe University, Frankfurt am Main (Germany); Collaboration: CBM-Collaboration
2016-07-01
The future CBM experiment at FAIR will employ high intensity beams and large acceptance detectors in order to study the properties of the strongly interacting matter produced in heavy-ion collisions at high baryon densities. The search for the conjectured critical point of QCD is one of the important tasks. It is predicted from statistical physics that higher moments of event-by-event fluctuations are very sensitive to the proximity of the critical point. This argument is explicitly demonstrated with the van der Waals equation of state. Thus, it was suggested that higher moments of fluctuations of conserved charges can be used as probes for the critical behavior. The statistical convergence of cumulants of different order is explored. The extraction of the scaled variance, skewness, and kurtosis of the proton distribution from simulated UrQMD events is performed, and the efficiency correction described by a binomial distribution is accounted for. The validity of this correction is tested with different modelings of the CBM detector response: from a binomial distribution with fluctuating event-by-event efficiency to a full-scale GEANT simulation. The obtained results indicate that a more elaborate efficiency correction is needed in order to accurately reconstruct moments of higher orders.
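A minimal sketch of the quantities involved: sample cumulants of an event-by-event multiplicity distribution, and the standard binomial-efficiency unfolding for the first two of them (E[n] = eps E[N], Var[n] = eps² Var[N] + eps(1-eps) E[N]). The toy data assume a constant efficiency; as the abstract notes, realistic conditions and higher orders require a more elaborate correction.

```python
import numpy as np

def cumulants(n):
    """First four cumulants of an event-by-event multiplicity sample."""
    n = np.asarray(n, float)
    mu = n.mean()
    d = n - mu
    c2 = np.mean(d ** 2)
    c3 = np.mean(d ** 3)
    c4 = np.mean(d ** 4) - 3.0 * c2 ** 2
    return mu, c2, c3, c4

def binomial_unfold_mean_var(mean_meas, var_meas, eff):
    """Undo a constant binomial detection efficiency for the mean and variance."""
    mean_true = mean_meas / eff
    var_true = (var_meas - eff * (1.0 - eff) * mean_true) / eff ** 2
    return mean_true, var_true

rng = np.random.default_rng(2)
true_n = rng.poisson(20.0, size=200000)        # toy "true" proton multiplicities
measured = rng.binomial(true_n, 0.65)          # detector with 65% efficiency
c1, c2, c3, c4 = cumulants(measured)
print("scaled variance (measured):", c2 / c1)            # ~1 for the Poisson toy model
print("unfolded mean, variance:", binomial_unfold_mean_var(c1, c2, 0.65))
```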
Computerized statistical analysis with bootstrap method in nuclear medicine
International Nuclear Information System (INIS)
Zoccarato, O.; Sardina, M.; Zatta, G.; De Agostini, A.; Barbesti, S.; Mana, O.; Tarolo, G.L.
1988-01-01
Statistical analysis of data samples involves some hypotheses about the features of the data themselves. The accuracy of these hypotheses can influence the results of statistical inference. Among the new methods of computer-aided statistical analysis, the bootstrap method appears to be one of the most powerful, thanks to its ability to reproduce many artificial samples starting from a single original sample and because it works without hypotheses about the data distribution. The authors applied the bootstrap method to two typical situations of a Nuclear Medicine Department. The determination of the normal range of serum ferritin, as assessed by radioimmunoassay and defined by the mean value ±2 standard deviations, starting from an experimental sample of small dimension, shows an unacceptable lower limit (ferritin plasmatic levels below zero). On the contrary, the results obtained by elaborating 5000 bootstrap samples give an interval of values (10.95 ng/ml - 72.87 ng/ml) corresponding to the normal ranges commonly reported. Moreover, the authors applied the bootstrap method in evaluating the possible error associated with the correlation coefficient determined between left ventricular ejection fraction (LVEF) values obtained by first-pass radionuclide angiocardiography with 99mTc and 195mAu. The results obtained indicate a high degree of statistical correlation and give the range of r² values to be considered acceptable for this type of study
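The sketch below shows one simple percentile-bootstrap variant of a normal-range calculation, which by construction cannot produce a negative lower limit; the sample is synthetic and the procedure is not necessarily the one used by the authors.

```python
import numpy as np

rng = np.random.default_rng(3)
ferritin = rng.lognormal(mean=3.4, sigma=0.5, size=30)   # hypothetical small ferritin sample (ng/ml)

# Resample with replacement 5000 times and average the resampled 2.5th/97.5th
# percentiles, rather than using mean +/- 2 SD (which can drop below zero).
boot = rng.choice(ferritin, size=(5000, ferritin.size), replace=True)
lower = np.percentile(boot, 2.5, axis=1).mean()
upper = np.percentile(boot, 97.5, axis=1).mean()
print(f"bootstrap normal range: {lower:.1f} - {upper:.1f} ng/ml")
```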
Statistical methods to monitor the West Valley off-gas system
International Nuclear Information System (INIS)
Eggett, D.L.
1990-01-01
This paper reports on the off-gas system for the ceramic melter operated at the West Valley Demonstration Project at West Valley, NY, which was monitored during melter operation. A one-at-a-time method of monitoring the parameters of the off-gas system is not statistically sound. Therefore, multivariate statistical methods appropriate for the monitoring of many correlated parameters will be used. Monitoring a large number of parameters increases the probability of a false out-of-control signal. If the parameters being monitored are statistically independent, the control limits can be easily adjusted to obtain the desired probability of a false out-of-control signal. The principal component (PC) scores have desirable statistical properties when the original variables are distributed as multivariate normals. Two statistics derived from the PC scores and used to form multivariate control charts are outlined and their distributional properties reviewed
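As an illustration of monitoring correlated parameters through principal-component scores, the sketch below computes Hotelling's T² from the scores of the retained components and compares it with an approximate chi-square control limit. The six "off-gas parameters" are simulated, and this generic statistic is a stand-in rather than the specific pair developed in the report.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2_from_scores(scores, eigvals):
    """Hotelling's T^2 from PC scores: squared scores scaled by score variances."""
    return np.sum(scores ** 2 / eigvals, axis=1)

rng = np.random.default_rng(4)
baseline = rng.normal(size=(500, 6))            # in-control history of 6 monitored parameters (toy data)
mean = baseline.mean(axis=0)
cov = np.cov(baseline, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
k = 3                                           # retain the 3 largest principal components
vals, vecs = eigvals[-k:], eigvecs[:, -k:]

new_obs = rng.normal(size=(5, 6))               # new observations to monitor
t2 = hotelling_t2_from_scores((new_obs - mean) @ vecs, vals)
limit = chi2.ppf(0.99, df=k)                    # approximate 99% control limit
print(t2, limit)                                # values above the limit would signal out-of-control
```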
Detrended fluctuation analysis based on higher-order moments of financial time series
Teng, Yue; Shang, Pengjian
2018-01-01
In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as the stock market. We extend DFA and local scaling DFA to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA), so as to investigate the volatility scaling property of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the results of volatility behaviors in three American, three Chinese and three European stock markets by using the DFA and LSDFA methods based on higher moments. They demonstrate the dynamic behaviors of time series in different aspects, which can quantify the changes of complexity for stock market data and provide us with more meaningful information than a single exponent. The results reveal some higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.
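For reference, standard (second-moment) DFA is sketched below; the SMDFA/KMDFA variants proposed in the paper replace the variance of the detrended residuals in each window by their skewness or kurtosis, which is not implemented here, and the test signal is synthetic white noise.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Standard detrended fluctuation analysis: returns the scaling exponent of
    the RMS fluctuation of the integrated, windowed, polynomial-detrended profile."""
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        ms_residuals = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            ms_residuals.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(ms_residuals)))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

x = np.random.default_rng(5).standard_normal(10000)
print(dfa(x, scales=[16, 32, 64, 128, 256]))  # ~0.5 for uncorrelated noise
```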
DEFF Research Database (Denmark)
Swann, Andrew Francis; Madsen, Thomas Bruun
2012-01-01
We introduce a notion of moment map adapted to actions of Lie groups that preserve a closed three-form. We show existence of our multi-moment maps in many circumstances, including mild topological assumptions on the underlying manifold. Such maps are also shown to exist for all groups whose second...
An Overview of Short-term Statistical Forecasting Methods
DEFF Research Database (Denmark)
Elias, Russell J.; Montgomery, Douglas C.; Kulahci, Murat
2006-01-01
An overview of statistical forecasting methodology is given, focusing on techniques appropriate to short- and medium-term forecasts. Topics include basic definitions and terminology, smoothing methods, ARIMA models, regression methods, dynamic regression models, and transfer functions. Techniques...... for evaluating and monitoring forecast performance are also summarized....
Singular-perturbation--strong-coupling field theory and the moments problem
International Nuclear Information System (INIS)
Handy, C.R.
1981-01-01
Motivated by recent work of Bender, Cooper, Guralnik, Mjolsness, Rose, and Sharp, a new technique is presented for solving field equations in terms of singular-perturbation--strong-coupling expansions. Two traditional mathematical tools are combined into one effective procedure. Firstly, high-temperature lattice expansions are obtained for the corresponding power moments of the field solution. The approximate continuum-limit power moments are subsequently obtained through the application of Pade techniques. Secondly, in order to reconstruct the corresponding approximate global field solution, one must use function-moments reconstruction techniques. The latter involves reconsidering the traditional ''moments problem'' of interest to pure and applied mathematicians. The above marriage between lattice methods and moments reconstruction procedures for functions yields good results for the φ⁴ field-theory kink, and the sine-Gordon kink solutions. It is argued that the power moments are the most efficient dynamical variables for the generation of strong-coupling expansions. Indeed, a momentum-space formulation is being advocated in which the long-range behavior of the space-dependent fields is determined by the small-momentum, infrared, domain
Computation of temperature-dependent legendre moments of a double-differential elastic cross section
International Nuclear Information System (INIS)
Arbanas, G.; Dunn, M.E.; Larson, N.M.; Leal, L.C.; Williams, M.L.; Becker, B.; Dagan, R.
2011-01-01
A general expression for temperature-dependent Legendre moments of a double-differential elastic scattering cross section was derived by Ouisloumen and Sanchez [Nucl. Sci. Eng. 107, 189-200 (1991)]. Attempts to compute this expression are hindered by the three-fold nested integral, limiting their practical application to just the zeroth Legendre moment of an isotropic scattering. It is shown that the two innermost integrals could be evaluated analytically to all orders of Legendre moments, and for anisotropic scattering, by a recursive application of the integration by parts method. For this method to work, the anisotropic angular distribution in the center of mass is expressed as an expansion in Legendre polynomials. The first several Legendre moments of elastic scattering of neutrons on 238U are computed at T=1000 K at incoming energy 6.5 eV for isotropic scattering in the center of mass frame. Legendre moments of the anisotropic angular distribution given via Blatt-Biedenharn coefficients are computed at 1 keV. The results are in agreement with those computed by the Monte Carlo method. (author)
Rapid Moment Magnitude Estimation Using Strong Motion Derived Static Displacements
Muzli, Muzli; Asch, Guenter; Saul, Joachim; Murjaya, Jaya
2015-01-01
The static surface deformation can be recovered from strong motion records. Compared to satellite-based measurements such as GPS or InSAR, the advantage of strong motion records is that they have the potential to provide real-time coseismic static displacements. The use of these valuable data was optimized for moment magnitude estimation. A centroid grid search method was introduced to calculate the moment magnitude by using 1 model. The method was applied to data sets of the 2011...
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Phase analysis of NK-bar scattering and Λ-hyperon magnetic moment
International Nuclear Information System (INIS)
Nikitiu, F.
1987-01-01
The NK-bar-scattering S matrix is suggested to have the P01-channel pole which corresponds to the Λ-hyperon. The Λ-hyperon magnetic moment is calculated. Its value ''arises'' only due to the nucleon magnetic moments and the nontrivial relativistic coupling of N and K-bar in the P01 channel. This is one more method alongside the quark model methods. The calculations are in agreement with the experimental value of μΛ.
International Nuclear Information System (INIS)
Cabral-Rosetti, L.G.; Bernabeu, J.; Vidal, J.
2000-01-01
We present a computation of the charge and the magnetic moment of the neutrino in the recently developed electro-weak background field method and in the linear R ξ L gauge. First, we deduce a formal Ward-Takahashi identity which implies the immediate cancellation of the neutrino electric charge. This Ward-Takahashi identity is as simple as that for QED. The computation of the (proper and improper) one loop vertex diagrams contributing to the neutrino electric charge is also presented in an arbitrary gauge, checking in this way the Ward-Takahashi identity previously obtained. Finally, the calculation of the magnetic moment of the neutrino, in the minimal extension of the standard model with massive Dirac neutrinos, is presented, showing its gauge parameter and gauge structure independence explicitly. (orig.)
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Patil, S K; Wari, M N; Panicker, C Yohannan; Inamdar, S R
2014-04-05
The absorption and fluorescence spectra of three medium-sized dipolar laser dyes, coumarin 478 (C478), coumarin 519 (C519) and coumarin 523 (C523), have been recorded and studied comprehensively in various solvents at room temperature. The absorption and fluorescence spectra of C478, C519 and C523 show bathochromic and hypsochromic shifts with increasing solvent polarity, indicating that the transitions involved are π→π* and n→π*. Onsager radii determined from ab initio calculations were used in the determination of dipole moments. The ground and excited state dipole moments were evaluated by using solvatochromic correlations. It is observed that the dipole moment values of the excited states (μe) are higher than the corresponding ground state values (μg) for the solvents studied. The ground and excited state dipole moments of these probes computed from ab initio calculations and those determined experimentally are compared and the results are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
masoomeh Veiskarami
2015-01-01
Full Text Available Background: Rocker shoes are the most commonly prescribed external therapeutic shoe modification and are used for treatment of ankle and midfoot problems. The aim of this study was to assess the effects of heel-to-toe rocker shoes on temporal-spatial parameters and ankle joint moments in the sagittal and frontal planes. Materials and Methods: In this quasi-experimental study, three-dimensional gait analysis was carried out on 20 healthy university female students with a normal gait pattern selected by convenience sampling. A Vicon 470 system (Oxford Metrics, UK) consisting of 6 cameras operating at 60 Hz and a Kistler force plate (A9286) was used. The paired-samples t-test was used for statistical analysis. Results: The results showed no significant change in temporal-spatial parameters while wearing this modified shoe, but the ankle moments in the sagittal plane while wearing rocker shoes were significantly lower than while wearing traditional shoes (p=0.002), whereas the frontal plane moments significantly increased (p=0.007). Conclusion: Based on the current findings, the major benefit of this modified shoe appears to be significantly reduced sagittal plane moments with maintenance of walking speed, so that the loads on the ankle joint and Achilles tendon are reduced; however, the increased frontal plane moments may lead to increased mediolateral instability of the ankle joint.
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t₁, 0), t₁ = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t₁, 0), t₁ = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments [4, 0]) of the LN3 distribution were the most appropriate in most of the stations for the annual maximum streamflow series in Johor, Malaysia.
Top Quark Amplitudes with an Anomalous Magnetic Moment
International Nuclear Information System (INIS)
Larkoski, Andrew
2011-01-01
The anomalous magnetic moment of the top quark may be measured during the first run of the LHC at 7 TeV. For these measurements, it will be useful to have available tree amplitudes with a t t̄ pair and arbitrarily many photons and gluons, including both QED and color anomalous magnetic moments. In this paper, we present a method for computing these amplitudes using the Britto-Cachazo-Feng-Witten recursion formula. Because we deal with an effective theory with higher-dimension couplings, there are roadblocks to a direct computation with the Britto-Cachazo-Feng-Witten method. We evade these by using an auxiliary scalar theory to compute a subset of the amplitudes.
Directory of Open Access Journals (Sweden)
Qing Zhang
2018-01-01
Full Text Available Electric force is the most popular technique for bioparticle transportation and manipulation in microfluidic systems. In this paper, the iterative dipole moment (IDM) method was used to calculate the dielectrophoretic (DEP) forces of particle-particle interactions in a two-dimensional DC electric field, and the Lagrangian method was used to solve the transportation of particles. It was found that the DEP properties, and whether the line connecting the initial positions of the particles is perpendicular or parallel to the electric field, greatly affect the chain patterns. In addition, the dependence of the DEP particle interaction upon the particle diameters, initial particle positions, and DEP properties has been studied in detail. The conclusions are advantageous for electrokinetic microfluidic systems where it may be desirable to control, manipulate, and assemble bioparticles.
Polarimetric Segmentation Using Wishart Test Statistic
DEFF Research Database (Denmark)
Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg
2002-01-01
A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, has been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...
Analysis of dynamical corrections to baryon magnetic moments
International Nuclear Information System (INIS)
Ha, Phuoc; Durand, Loyal
2003-01-01
We present and analyze QCD corrections to the baryon magnetic moments in terms of the one-, two-, and three-body operators which appear in the effective field theory developed in our recent papers. The main corrections are extended Thomas-type corrections associated with the confining interactions in the baryon. We investigate the contributions of low-lying angular excitations to the baryon magnetic moments quantitatively and show that they are completely negligible. When the QCD corrections are combined with the nonquark model contributions of the meson loops, we obtain a model which describes the baryon magnetic moments within a mean deviation of 0.04 μ_N. The nontrivial interplay of the two types of corrections to the quark-model magnetic moments is analyzed in detail, and explains why the quark model is so successful. In the course of these calculations, we parametrize the general spin structure of the j = (1/2)⁺ baryon wave functions in a form which clearly displays the symmetry properties and the internal angular momentum content of the wave functions, and allows us to use spin-trace methods to calculate the many spin matrix elements which appear in the expressions for the baryon magnetic moments. This representation may be useful elsewhere.
Low-order moment expansions to tight binding for interatomic potentials: Successes and failures
International Nuclear Information System (INIS)
Kress, J.D.; Voter, A.F.
1995-01-01
We discuss the use of moment-based approximations to tight binding. Using a maximum entropy form for the electronic density of states, we show that a general interatomic potential can be defined that is suitable for molecular-dynamics simulations and has several other desirable features. For covalent materials (C and Si), properties where the atoms are in equivalent environments are well converged at low-order moments. For defect environments, which offer a more critical (and relevant) test, the method is found to give less satisfactory results. For example, the vacancy formation energy for Si is too low by ∼2 eV at 10 moments relative to exact tight binding. Attempts to improve the accuracy were unsuccessful, leading to the conclusion that potentials based on this approach are inadequate for covalent materials. We speculate that this may be a deficiency of low-order moment methods in general. For metals, in contrast to the covalent systems, we find that the low-order moment approach is better behaved. This finding is consistent with the success of existing empirical fourth-moment potentials for metals
Rogowski, Isabelle; Creveaux, Thomas; Chèze, Laurence; Macé, Pierre; Dumas, Raphaël
2014-01-01
This study examined the effect of the polar moment of inertia of a tennis racket on upper limb loading in the serve. Eight amateur competition tennis players performed two sets of 10 serves using two rackets identical in mass, position of center of mass and moments of inertia other than the polar moment of inertia (0.00152 vs 0.00197 kg·m²). An eight-camera motion analysis system collected the 3D trajectories of 16 markers, located on the thorax, upper limbs and racket, from which shoulder, elbow and wrist net joint moments and powers were computed using inverse dynamics. During the cocking phase, increased racket polar moment of inertia was associated with significant increases in the peak shoulder extension and abduction moments, as well as the peak elbow extension, valgus and supination moments. During the forward swing phase, peak wrist extension and radial deviation moments significantly increased with polar moment of inertia. During the follow-through phase, the peak shoulder adduction, elbow pronation and wrist external rotation moments displayed a significant inverse relationship with polar moment of inertia. During the forward swing, the magnitudes of negative joint power at the elbow and wrist were significantly larger when players served using the racket with the higher polar moment of inertia. Although a larger polar moment of inertia allows players to better tolerate off-center impacts, it also appears to place additional loads on the upper extremity when serving and may therefore increase injury risk in tennis players.
Large Contrast Between the Moment Magnitude of Tremor and the Moment Magnitude of Slip in ETS Events
Kao, H.; Wang, K.; Dragert, H.; Rogers, G. C.; Kao, J. Y.
2009-12-01
We have developed an algorithm to estimate the moment magnitudes (Mw) of seismic tremors that are recorded during episodic tremor and slip (ETS) events beneath the northern Cascadia margin. The tremor “cloud” during an ETS episode consists of numerous individual tremor bursts. For each tremor burst, the hypocenter is first determined by the Source-Scanning Algorithm [Kao and Shan, 2004]. From the derived source location, we calculate a set of synthetic seismograms for each station based on a fixed seismic moment but different focal mechanisms. The maximum tremor amplitude observed at each station is then compared to that of the synthetics to give an estimate of the corresponding seismic moment of the tremor burst. The seismic moment averaged over all stations is used to calculate the final tremor burst Mw. We have applied this method to local earthquakes for calibration and the results are very consistent with the magnitudes listed in the catalogue. For each of the 8 northern Cascadia ETS episodes whose GPS coverage is sufficient for slip distribution inversion, the cumulative tremor Mw for the entire tremor cloud, determined from the combined moments of all individual tremor bursts in the ETS episode, is about three magnitude units smaller than the corresponding slip Mw in the same episode (e.g., 3.7 vs 6.7). This result suggests that aseismic slip is the predominant mode of deformation during ETS. The majority of individual tremor bursts in northern Cascadia have Mw ranging between 1.0 and 1.7, with a mean of 1.34. Only 5% of all tremors are larger than 2.0, with the largest being ~2.5.
Neck Muscle Moment Arms Obtained In-Vivo from MRI: Effect of Curved and Straight Modeled Paths.
Suderman, Bethany L; Vasavada, Anita N
2017-08-01
Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles (p < 0.05). Using straight lines to model muscle paths can lead to overestimating the neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.
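As a rough illustration of the tendon-excursion method named above, the following sketch estimates a moment arm as the negative derivative of musculotendon length with respect to joint angle; the length-angle relationship and all numbers are hypothetical and are not taken from the study.

```python
# Sketch of the tendon-excursion method: moment arm r(theta) = -dL/dtheta
# (sign conventions vary between studies). Data below are illustrative only.
import numpy as np

theta = np.deg2rad(np.linspace(-30, 60, 91))     # joint angle samples, rad
length = 0.25 - 0.02 * np.sin(theta)             # hypothetical muscle length, m

moment_arm = -np.gradient(length, theta)         # numerical derivative, m
print(moment_arm[:5])                            # ~0.02 m near the neutral posture
```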
Fundamentals of modern statistical methods substantially improving power and accuracy
Wilcox, Rand R
2001-01-01
Conventional statistical methods have a very serious flaw: they routinely miss differences among groups or associations among variables that are detected by more modern techniques, even under very small departures from normality. Hundreds of journal articles have described the reasons standard techniques can be unsatisfactory, but simple, intuitive explanations are generally unavailable. Improved methods have been derived, but they are far from obvious or intuitive based on the training most researchers receive. Situations arise where even highly nonsignificant results become significant when analyzed with more modern methods. Without assuming any prior training in statistics, Part I of this book describes basic statistical principles from a point of view that makes their shortcomings intuitive and easy to understand. The emphasis is on verbal and graphical descriptions of concepts. Part II describes modern methods that address the problems covered in Part I. Using data from actual studies, many examples are include...
Bolted flanged connections subjected to longitudinal bending moments
International Nuclear Information System (INIS)
Blach, A.E.
1992-01-01
Flanges in piping systems and also pressure vessel flanges on tall columns are often subjected to longitudinal bending moments of considerable magnitude, be it from thermal expansion stresses in piping systems or from wind or seismic loadings on tall vertical pressure vessels. Except for the ASME Code, Section III, Subsections NB, NC, and ND, other pressure vessel and piping codes do not contain design rules for such moments. In the ASME Nuclear Power Plant Code (Section III), an empirical formula is given, expressing a longitudinal bending moment in bolted flanged connections in terms of an equivalent internal pressure to be added to the design pressure of the flange. In this paper, an attempt is made to analyse the stresses on flanges and bolting due to external bending moments and to compare flange thicknesses thus obtained with thicknesses required using the equivalent design pressure specified in Subsections NB, NC, and ND. A design method is proposed, based on analysis and experimental work, which may be suitable for flange bending moment analysis when the rules of the Nuclear Power Plant Code are not mandatory. (orig.)
Statistical methods and challenges in connectome genetics
Pluta, Dustin
2018-03-12
The study of genetic influences on brain connectivity, known as connectome genetics, is an exciting new direction of research in imaging genetics. We here review recent results and current statistical methods in this area, and discuss some of the persistent challenges and possible directions for future work.
Literature in Focus: Statistical Methods in Experimental Physics
2007-01-01
Frederick James was a high-energy physicist who became the CERN "expert" on statistics and is now well-known around the world, in part for this famous text. The first edition of Statistical Methods in Experimental Physics was originally co-written with four other authors and was published in 1971 by North Holland (now an imprint of Elsevier). It became such an important text that demand for it has continued for more than 30 years. Fred has updated it and it was released in a second edition by World Scientific in 2006. It is still a top seller and there is no exaggeration in calling it «the» reference on the subject. A full review of the title appeared in the October CERN Courier. Come and meet the author to hear more about how this book has flourished during its 35-year lifetime. Frederick James, Statistical Methods in Experimental Physics, Monday, 26th of November, 4 p.m., Council Chamber (Bldg. 503-1-001). The author will be introduced...
Heterogeneous Rock Simulation Using DIP-Micromechanics-Statistical Methods
Directory of Open Access Journals (Sweden)
H. Molladavoodi
2018-01-01
Full Text Available Rock as a natural material is heterogeneous. Rock material consists of minerals, crystals, cement, grains, and microcracks. Each component of rock has a different mechanical behavior under the applied loading condition. Therefore, the rock component distribution has an important effect on rock mechanical behavior, especially in the post-peak region. In this paper, a rock sample was studied by digital image processing (DIP), micromechanics, and statistical methods. Using image processing, the volume fractions of the minerals composing the rock sample were evaluated precisely. The mechanical properties of the rock matrix were determined based on upscaling micromechanics. In order to consider the effect of rock heterogeneities on mechanical behavior, a heterogeneity index was calculated in the framework of a statistical method. A Weibull distribution function was fitted to the Young's modulus distribution of the minerals. Finally, the statistical and Mohr–Coulomb strain-softening models were used simultaneously as a constitutive model in a DEM code. The acoustic emission, strain energy release, and the effect of rock heterogeneities on the post-peak behavior were investigated. The numerical results are in good agreement with experimental data.
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale future hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging model based on the features of sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can affect the multi-scale hedge ratios and hedge efficiency, respectively.
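For readers unfamiliar with the risk measure used above, the following minimal sketch computes a sample lower partial moment, LPM_n(τ) = E[max(τ − R, 0)^n]; the synthetic returns and parameter values are illustrative assumptions, and the paper's wavelet decomposition and parametric estimators are not reproduced.

```python
# Sketch: sample lower partial moment of order n about a target return tau.
import numpy as np

def lower_partial_moment(returns, tau=0.0, n=2):
    """Sample estimate of E[ max(tau - R, 0)^n ]."""
    shortfall = np.maximum(tau - np.asarray(returns, dtype=float), 0.0)
    return np.mean(shortfall ** n)

rng = np.random.default_rng(0)
hedged_returns = rng.normal(0.0005, 0.01, size=1000)   # hypothetical daily returns
print(lower_partial_moment(hedged_returns, tau=0.0, n=2))
```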
[Rank distributions in community ecology from the statistical viewpoint].
Maksimov, V N
2004-01-01
Traditional statistical methods for definition of empirical functions of abundance distribution (population, biomass, production, etc.) of species in a community are applicable for processing of multivariate data contained in the above quantitative indices of the communities. In particular, evaluation of moments of distribution suffices for convolution of the data contained in a list of species and their abundance. At the same time, the species should be ranked in the list in ascending rather than descending population and the distribution models should be analyzed on the basis of the data on abundant species only.
The Playground Game: Inquiry‐Based Learning About Research Methods and Statistics
Westera, Wim; Slootmaker, Aad; Kurvers, Hub
2014-01-01
The Playground Game is a web-based game that was developed for teaching research methods and statistics to nursing and social sciences students in higher education and vocational training. The complexity and abstract nature of research methods and statistics poses many challenges for students. The
Bayes linear statistics, theory & methods
Goldstein, Michael
2007-01-01
Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers: the importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
Detection of Doppler Microembolic Signals Using High Order Statistics
Directory of Open Access Journals (Sweden)
Maroun Geryes
2016-01-01
Full Text Available Robust detection of the smallest circulating cerebral microemboli is an efficient way of preventing strokes, which are the second cause of mortality worldwide. Transcranial Doppler ultrasound is widely considered the most convenient system for the detection of microemboli. The most common standard detection is achieved through the Doppler energy signal and depends on an empirically set constant threshold. On the other hand, in the past few years, higher order statistics have been an extensive field of research as they represent descriptive statistics that can be used to detect signal outliers. In this study, we propose new types of microembolic detectors based on the windowed calculation of the third-moment skewness and fourth-moment kurtosis of the energy signal. During embolus-free periods the distribution of the energy is not altered and the skewness and kurtosis signals do not exhibit any peak values. In the presence of emboli, the energy distribution is distorted and the skewness and kurtosis signals exhibit peaks corresponding to these emboli. Applied to real signals, the detection of microemboli through the skewness and kurtosis signals outperformed detection through standard methods. The sensitivities and specificities reached 78% and 91% for the skewness detector and 80% and 90% for the kurtosis detector, respectively.
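A minimal sketch of the kind of detector described above, assuming synthetic data and an arbitrary window length and threshold: skewness and kurtosis are computed in sliding windows of the energy signal, and windows whose values exceed a threshold are flagged as candidate embolic events.

```python
# Sketch: windowed skewness/kurtosis of an energy signal as outlier detectors.
# Signal, window length and threshold are made up for the example.
import numpy as np
from scipy.stats import skew, kurtosis

def windowed_moments(energy, win=128, step=32):
    starts = range(0, len(energy) - win + 1, step)
    sk = np.array([skew(energy[s:s + win]) for s in starts])
    ku = np.array([kurtosis(energy[s:s + win]) for s in starts])
    return sk, ku

rng = np.random.default_rng(1)
energy = rng.normal(1.0, 0.1, 5000)
energy[2400:2420] += 3.0                        # synthetic "embolus" burst
sk, ku = windowed_moments(energy)
detections = np.where(ku > 5.0)[0]              # hypothetical kurtosis threshold
print(detections)                               # window indices covering the burst
```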
Improved moment scaling estimation for multifractal signals
Directory of Open Access Journals (Sweden)
D. Veneziano
2009-11-01
Full Text Available A fundamental problem in the analysis of multifractal processes is to estimate the scaling exponent K(q) of moments of different order q from data. Conventional estimators use the empirical moments μ̂_r^q = ⟨|ε_r(τ)|^q⟩ of wavelet coefficients ε_r(τ), where τ is location and r is resolution. For stationary measures one usually considers "wavelets of order 0" (averages), whereas for functions with multifractal increments one must use wavelets of order at least 1. One obtains K̂(q) as the slope of log(μ̂_r^q) against log(r) over a range of r. Negative moments are sensitive to measurement noise and quantization. For them, one typically uses only the local maxima of |ε_r(τ)| (modulus maxima methods). For the positive moments, we modify the standard estimator K̂(q) to significantly reduce its variance at the expense of a modest increase in the bias. This is done by separately estimating K(q) from sub-records and averaging the results. For the negative moments, we show that the standard modulus maxima estimator is biased and, in the case of additive noise or quantization, is not applicable with wavelets of order 1 or higher. For these cases we propose alternative estimators. We also consider the fitting of parametric models of K(q) and show how, by splitting the record into sub-records as indicated above, the accuracy of standard methods can be significantly improved.
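The variance-reduction idea above (estimating K(q) separately on sub-records and averaging) can be sketched as follows; the synthetic cascade data, the set of resolutions, and the number of sub-records are illustrative assumptions, and wavelets of order 1 or higher are not implemented.

```python
# Sketch: K(q) as the slope of log <|eps_r|^q> versus log r, estimated on
# several sub-records and averaged. The "cascade" data are a synthetic stand-in.
import numpy as np

def khat(eps, q, resolutions):
    """Slope of log mean(|eps_r|^q) vs log r, with eps_r = block averages at scale r."""
    logs_r, logs_mu = [], []
    for r in resolutions:
        n = (len(eps) // r) * r
        eps_r = eps[:n].reshape(-1, r).mean(axis=1)     # coarse-grain to resolution r
        logs_r.append(np.log(r))
        logs_mu.append(np.log(np.mean(np.abs(eps_r) ** q)))
    return np.polyfit(logs_r, logs_mu, 1)[0]

def khat_split(eps, q, resolutions, n_sub=4):
    """Average the K(q) estimates obtained from n_sub contiguous sub-records."""
    return np.mean([khat(s, q, resolutions) for s in np.array_split(eps, n_sub)])

rng = np.random.default_rng(2)
eps = rng.lognormal(mean=0.0, sigma=0.5, size=4096)
print(khat(eps, 2, [2, 4, 8, 16, 32]), khat_split(eps, 2, [2, 4, 8, 16, 32]))
```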
Thouless-Valatin rotational moment of inertia from linear response theory
Petrík, Kristian; Kortelainen, Markus
2018-03-01
Spontaneous breaking of continuous symmetries of a nuclear many-body system results in the appearance of zero-energy restoration modes. These so-called spurious Nambu-Goldstone modes represent a special case of collective motion and are sources of important information about the Thouless-Valatin inertia. The main purpose of this work is to study the Thouless-Valatin rotational moment of inertia as extracted from the Nambu-Goldstone restoration mode that results from the zero-frequency response to the total-angular-momentum operator. We examine the role and effects of the pairing correlations on the rotational characteristics of heavy deformed nuclei in order to extend our understanding of superfluidity in general. We use the finite-amplitude method of the quasiparticle random-phase approximation on top of the Skyrme energy density functional framework with the Hartree-Fock-Bogoliubov theory. We have successfully extended this formalism and established a practical method for extracting the Thouless-Valatin rotational moment of inertia from the strength function calculated in the symmetry-restoration regime. Our results reveal the relation between the pairing correlations and the moment of inertia of axially deformed nuclei of rare-earth and actinide regions of the nuclear chart. We have also demonstrated the feasibility of the method for obtaining the moment of inertia for collective Hamiltonian models. We conclude that from the numerical and theoretical perspective, the finite-amplitude method can be widely used to effectively study rotational properties of deformed nuclei within modern density functional approaches.
An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images
Hosny, Khalid M.; Hafez, Mohamed A.
2012-01-01
An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with 87% reduction in the computational complexity. A fast 1D cascade algorithm was also employed to add m...
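As context for the geometric moments mentioned above, the following sketch evaluates 3D geometric moments m_pqr = Σ x^p y^q z^r f(x, y, z) by a direct sum over voxels; it is for illustration only and does not reproduce the paper's exact-integration or symmetry-based speedups.

```python
# Sketch: direct computation of 3D geometric moments of a voxel image.
import numpy as np

def geometric_moment_3d(volume, p, q, r):
    nx, ny, nz = volume.shape
    x = np.arange(nx, dtype=float).reshape(-1, 1, 1) ** p
    y = np.arange(ny, dtype=float).reshape(1, -1, 1) ** q
    z = np.arange(nz, dtype=float).reshape(1, 1, -1) ** r
    return np.sum(volume * x * y * z)

vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 1.0                       # a small cubic "object"
m000 = geometric_moment_3d(vol, 0, 0, 0)          # object volume (number of voxels)
cx = geometric_moment_3d(vol, 1, 0, 0) / m000     # centroid x-coordinate
print(m000, cx)
```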
A corrector for spacecraft calculated electron moments
Directory of Open Access Journals (Sweden)
J. Geach
2005-03-01
Full Text Available We present the application of a numerical method to correct electron moments calculated on-board spacecraft from the effects of potential broadening and energy range truncation. Assuming a shape for the natural distribution of the ambient plasma and employing the scalar approximation, the on-board moments can be represented as non-linear integral functions of the underlying distribution. We have implemented an algorithm which inverts this system successfully over a wide range of parameters for an assumed underlying drifting Maxwellian distribution. The outputs of the solver are the corrected electron plasma temperature Te, density Ne and velocity vector Ve. We also make an estimation of the temperature anisotropy A of the distribution. We present corrected moment data from Cluster's PEACE experiment for a range of plasma environments and make comparisons with electron and ion data from other Cluster instruments, as well as the equivalent ground-based calculations using full 3-D distribution PEACE telemetry.
Energy Technology Data Exchange (ETDEWEB)
McGraw R.
2012-03-01
Moment methods are finding increasing usage for simulations of particle population balance in box models and in more complex flows including two-phase flows. These highly efficient methods have nevertheless had little impact to date for multi-moment representation of aerosols and clouds in atmospheric models. There are evidently two reasons for this: First, atmospheric models, especially if the goal is to simulate climate, tend to be extremely complex and take many man-years to develop. Thus there is considerable inertia to the implementation of novel approaches. Second, and more fundamental, the nonlinear transport algorithms designed to reduce numerical diffusion during advection of various species (tracers) from cell to cell, in the typically coarse grid arrays of these models, can and occasionally do fail to preserve correlations between the moments. Other correlated tracers such as isotopic abundances, composition of aerosol mixtures, hydrometeor phase, etc., are subject to this same fate. In the case of moments, this loss of correlation can and occasionally does give rise to unphysical moment sets. When this happens the simulation can come to a halt. Following a brief description and review of moment methods, the goal of this paper is to present two new approaches that both test moment sequences for validity and correct them when they fail. The new approaches work on individual grid cells without requiring stored information from previous time-steps or neighboring cells.
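One generic way to test a moment sequence for validity, offered here only as an illustration of the kind of check discussed above and not necessarily the specific tests of this work, is to require the Hankel matrix built from m_0, ..., m_2n to be positive semidefinite (the Hamburger moment condition).

```python
# Sketch: validity test for a moment sequence via the Hankel matrix H[i, j] = m_{i+j}.
import numpy as np

def is_valid_moment_sequence(m, tol=1e-12):
    """m = [m_0, m_1, ..., m_2n]; True if the Hankel matrix is positive semidefinite."""
    m = np.asarray(m, dtype=float)
    n = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

print(is_valid_moment_sequence([1.0, 0.0, 1.0, 0.0, 3.0]))   # standard normal moments: valid
print(is_valid_moment_sequence([1.0, 0.0, 1.0, 0.0, 0.5]))   # m4 < m2^2: invalid
```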
Analysis of scaled-factorial-moment data
International Nuclear Information System (INIS)
Seibert, D.
1990-01-01
We discuss the two standard constructions used in the search for intermittency, the exclusive and inclusive scaled factorial moments. We propose the use of a new scaled factorial moment that reduces to the exclusive moment in the appropriate limit and is free of undesirable multiplicity correlations that are contained in the inclusive moment. We show that there are some similarities among most of the models that have been proposed to explain factorial-moment data, and that these similarities can be used to increase the efficiency of testing these models. We begin by calculating factorial moments from a simple independent-cluster model that assumes only approximate boost invariance of the cluster rapidity distribution and an approximate relation among the moments of the cluster multiplicity distribution. We find two scaling laws that are essentially model independent. The first scaling law relates the moments to each other with a simple formula, indicating that the different factorial moments are not independent. The second scaling law relates samples with different rapidity densities. We find evidence for much larger clusters in heavy-ion data than in light-ion data, indicating possible spatial intermittency in the heavy-ion events
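For reference, a horizontally averaged scaled factorial moment F_q = ⟨n(n−1)...(n−q+1)⟩/⟨n⟩^q can be computed from bin multiplicities as in the sketch below; the Poisson test data are synthetic (they should give F_q ≈ 1), and the exclusive/inclusive distinction discussed above is not reproduced.

```python
# Sketch: scaled factorial moments from event/bin multiplicities.
import numpy as np

def scaled_factorial_moment(counts, q):
    """counts: array of multiplicities n (events x bins, flattened)."""
    n = np.asarray(counts, dtype=float).ravel()
    fact = np.ones_like(n)
    for k in range(q):
        fact *= (n - k)                       # n(n-1)...(n-q+1)
    return np.mean(fact) / np.mean(n) ** q

rng = np.random.default_rng(3)
counts = rng.poisson(2.0, size=(10000, 20))   # synthetic Poisson multiplicities
print([round(scaled_factorial_moment(counts, q), 3) for q in (2, 3, 4)])
```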
A Plastic Design Method for RC Moment Frame Buildings against Progressive Collapse
Directory of Open Access Journals (Sweden)
Hadi Faghihmaleki
2017-04-01
Full Text Available In this study, progressive collapse potential of generic 3-, 8- and 12-storey RC moment frame buildings designed based on IBC-2006 code was investigated by performing non-linear static and dynamic analyses. It was observed that the model structures had high potential for progressive collapse when the second floor column was suddenly removed. Then, the size of beams required to satisfy the failure criteria for progressive collapse was obtained by using the virtual work method; i.e., using the equilibrium of the external work done by gravity load due to loss of a column and the internal work done by plastic rotation of beams. According to the nonlinear dynamic analysis results, the model structures designed only for normal load turned out to have strong potential for progressive collapse whereas the structures designed by plastic design concept for progressive collapse satisfied the failure criterion recommended by the GSA code.
The statistical process control methods - SPC
Directory of Open Access Journals (Sweden)
Floreková Ľubica
1998-03-01
Full Text Available Methods of statistical evaluation of quality, SPC (item 20 of the documentation system of quality control of the ISO 9000 series of norms), of various processes, products and services belong amongst the basic qualitative methods that enable us to analyse and compare data pertaining to various quantitative parameters. They also enable us, based on the latter, to propose suitable interventions with the aim of improving these processes, products and services. The theoretical basis and applicability of the principles of: - diagnostics of cause and effect, - Pareto analysis and the Lorenz curve, - number distributions and frequency curves of random variable distributions, - Shewhart control charts, are presented in the contribution.
International Nuclear Information System (INIS)
Goutte, Dominique.
1979-10-01
A determination was made of the angular distribution of the inelastic scattering cross-sections of electrons by the first excited state (J^π = 3⁻, E* = 2.615 MeV) of ²⁰⁸Pb. The statistical accuracy of previous data was improved between 2 and 2.7 fm⁻¹ and the momentum-transfer region was extended up to q_max = 3.4 fm⁻¹. Cross-sections down to 10⁻³⁷ cm²/sr were determined, whereas the limit reached previously was 7×10⁻³⁵ cm²/sr. In order to determine the transition charge density, it was put into parametric form by a Fourier-Bessel expansion using 12 coefficients and an 11 fm cut-off radius. The model error inherent in this method is reduced to an insignificant contribution by the sufficiently high momentum transfer. The experimental transition charge density was compared with theoretical predictions.
Landslide Susceptibility Statistical Methods: A Critical and Systematic Literature Review
Mihir, Monika; Malamud, Bruce; Rossi, Mauro; Reichenbach, Paola; Ardizzone, Francesca
2014-05-01
Landslide susceptibility assessment, the subject of this systematic review, is aimed at understanding the spatial probability of slope failures under a set of geomorphological and environmental conditions. It is estimated that about 375 landslides that occur globally each year are fatal, with around 4600 people killed per year. Past studies have brought out the increasing cost of landslide damages which primarily can be attributed to human occupation and increased human activities in the vulnerable environments. Many scientists, to evaluate and reduce landslide risk, have made an effort to efficiently map landslide susceptibility using different statistical methods. In this paper, we do a critical and systematic landslide susceptibility literature review, in terms of the different statistical methods used. For each of a broad set of studies reviewed we note: (i) study geography region and areal extent, (ii) landslide types, (iii) inventory type and temporal period covered, (iv) mapping technique (v) thematic variables used (vi) statistical models, (vii) assessment of model skill, (viii) uncertainty assessment methods, (ix) validation methods. We then pulled out broad trends within our review of landslide susceptibility, particularly regarding the statistical methods. We found that the most common statistical methods used in the study of landslide susceptibility include logistic regression, artificial neural network, discriminant analysis and weight of evidence. Although most of the studies we reviewed assessed the model skill, very few assessed model uncertainty. In terms of geographic extent, the largest number of landslide susceptibility zonations were in Turkey, Korea, Spain, Italy and Malaysia. However, there are also many landslides and fatalities in other localities, particularly India, China, Philippines, Nepal and Indonesia, Guatemala, and Pakistan, where there are much fewer landslide susceptibility studies available in the peer-review literature. This
Comparison of Force and Moment Coefficients for the Same Test Article in Multiple Wind Tunnels
Deloach, Richard
2013-01-01
This paper compares the results of force and moment measurements made on the same test article and with the same balance in three transonic wind tunnels. Comparisons are made for the same combination of Reynolds number, Mach number, sideslip angle, control surface configuration, and angle of attack range. Between-tunnel force and moment differences are quantified. An analysis of variance was performed at four unique sites in the design space to assess the statistical significance of between-tunnel variation and any interaction with angle of attack. Tunnel-to-tunnel differences too large to attribute to random error were observed for all forces and moments. In some cases these differences were independent of angle of attack and in other cases they changed with angle of attack.
Statistical methods with applications to demography and life insurance
Khmaladze, Estáte V
2013-01-01
Suitable for statisticians, mathematicians, actuaries, and students interested in the problems of insurance and analysis of lifetimes, Statistical Methods with Applications to Demography and Life Insurance presents contemporary statistical techniques for analyzing life distributions and life insurance problems. It not only contains traditional material but also incorporates new problems and techniques not discussed in existing actuarial literature. The book mainly focuses on the analysis of an individual life and describes statistical methods based on empirical and related processes. Coverage ranges from analyzing the tails of distributions of lifetimes to modeling population dynamics with migrations. To help readers understand the technical points, the text covers topics such as the Stieltjes, Wiener, and Itô integrals. It also introduces other themes of interest in demography, including mixtures of distributions, analysis of longevity and extreme value theory, and the age structure of a population. In addi...
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments with an inherent invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for the training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
Limit moments for non circular cross-section (elliptical) pipe bends
International Nuclear Information System (INIS)
Spence, J.
1977-01-01
A number of experimental studies have been reported or are underway which investigate limit moments applied to pipe bends. Some theoretical work is also available. However, most of the work has been confined to nominally circular cross-section bends and little account has been taken of the practical problem of manufacturing tolerances. Many methods of manufacture result in bends which are not circular in cross-section but have an oval or elliptical shape. The present paper extends previous analyses on circular bends to cater for initially elliptical cross-sections. The loading is primarily in-plane bending, but out-of-plane bending is also considered, and several independent methods are presented. No previous information is known to the authors. Upper and lower bound limit moments are derived first of all from existing linear elastic analyses, and secondly upper bound moments are derived via a plastic analogy from existing stationary creep results. It is also shown that the creep information on design factors for bends can be used to obtain a reasonable estimate of the complete moment/strain behaviour of a bend or indeed a system. (Auth.)
Directory of Open Access Journals (Sweden)
Mahdi Golpayegani
2017-08-01
Full Text Available In recent years some studies have been carried out on moment redistribution in buildings, and new methods have been offered for calculating redistribution. Observations demonstrate that the combination of moment and shear force is important in the analysis of reinforced concrete structures, but little research has been done on the effect of redistribution using modelling in software. In order to study the effect of moment redistribution on the stability of RC moment resisting frame structures, four buildings with 4, 7, 10 and 13 storeys have been considered. In these models, the nonlinear behavior of the elements (beams and columns) is considered by the use of interaction PMM hinges. The average plastic rotation was calculated by performing pushover analysis and storing the stiffness matrix at 5 points, and the buckling coefficients were then obtained by conducting buckling analysis. The natural frequency was calculated by modal analysis, and an attempt was made to relate the average plastic rotation to the buckling coefficients and the natural frequency. It can be concluded that an increase in the plastic rotation reduces the buckling coefficients by up to about 96%, the amount of reduction being related to the average plastic rotation. Moreover, the buildings reach an instability state when the average plastic rotation reaches 0.006 radian.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Statistical distributions of extreme dry spell in Peninsular Malaysia
Zin, Wan Zawiah Wan; Jemain, Abdul Aziz
2010-11-01
Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with the recording period extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides a robust estimation of them. The goodness-of-fit (GOF) between empirical data and theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
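A minimal sketch of the L-moments estimation step used above: unbiased sample probability-weighted moments b_r are converted to the first four L-moments and L-moment ratios via Hosking's relations. Fitting GEV or GP parameters from these L-moments is a further step not shown here, and the test data are synthetic.

```python
# Sketch: first four sample L-moments and L-moment ratios from a data record.
import numpy as np

def sample_lmoments(data):
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    b = np.zeros(4)
    b[0] = x.mean()
    for r in (1, 2, 3):
        i = np.arange(1, n + 1)
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (i - k) / (n - k)            # (i-1)...(i-r) / ((n-1)...(n-r))
        b[r] = np.mean(w * x)                 # probability-weighted moment b_r
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2           # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(4)
print(sample_lmoments(rng.gumbel(loc=10.0, scale=2.0, size=500)))
```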
Study on the dipole moment of asphaltene molecules through dielectric measuring
Zhang, Long Li; Yang, Chao He; Wang, Ji Qian; Yang, Guo Hua; Li, Li; Li, Yan Vivian; Cathles, Lawrence
2015-01-01
The polarity of asphaltenes influences production, transportation, and refining of heavy oils. However, the dipole moment of asphaltene molecules is difficult to measure due to their complex composition and electromagnetic opaqueness. In this work, we present a convenient and efficient way to determine the dipole moment of asphaltene in solution by dielectric measurements alone, without measurement of the refractive index. The dipole moments of n-heptane asphaltenes of Middle East atmospheric residue (MEAR) and Ta-He atmospheric residue (THAR) are measured within the temperature range of -60°C to 20°C. There is one dielectric loss peak in the measured solutions of the two types of asphaltene at temperatures of -60°C or -40°C, indicating there is one type of dipole in the solution. Furthermore, there are two dielectric loss peaks in the measured solutions of the two kinds of asphaltene when the temperature rises above -5°C, indicating there are two types of dipoles corresponding to the two peaks. This phenomenon indicates that as the temperature increases above -5°C, the asphaltene molecules aggregate and present larger dipole moment values. The dipole moments of MEAR C7-asphaltene aggregates are up to 5 times larger than those before aggregation. On the other hand, the dipole moments of the THAR C7-asphaltene aggregates are only 3 times larger than those before aggregation. It is demonstrated that this method is capable of simultaneously measuring multiple dipoles in one solution, instead of obtaining only the mean dipole moment. In addition, this method can be used with a wide range of concentrations and temperatures.
Identification of mine waters by statistical multivariate methods
Energy Technology Data Exchange (ETDEWEB)
Mali, N [IGGG, Ljubljana (Slovenia)
1992-01-01
Three water-bearing aquifers are present in the Velenje lignite mine. The aquifer waters have differing chemical composition; a geochemical water analysis can therefore determine the source of mine water influx. Mine water samples from different locations in the mine were analyzed, and the results for chemical content and electric conductivity of mine water were statistically processed by means of the MICROGAS, SPSS-X and IN STATPAC computer programs, which apply three multivariate statistical methods (discriminant, cluster and factor analysis). The reliability of calculated values was determined with the Kolmogorov-Smirnov test. It is concluded that laboratory analysis of single water samples can produce measurement errors, but statistical processing of water sample data can identify the origin and movement of mine water. 15 refs.
Yin, Yixing; Chen, Haishan; Xu, Chongyu; Xu, Wucheng; Chen, Changchun
2014-05-01
The regionalization methods which 'trade space for time' by including several at-site data records in the frequency analysis are an efficient tool to improve the reliability of extreme quantile estimates. With the main aims of improving the understanding of the regional frequency of extreme precipitation and providing scientific and practical background and assistance in formulating the regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region, in this paper, L-moment-based index-flood (LMIF) method, one of the popular regionalization methods, is used in the regional frequency analysis of extreme precipitation; attention was paid to inter-site dependence and its influence on the accuracy of quantile estimates, which hasn't been considered for most of the studies using LMIF method. Extensive data screening of stationarity, serial dependence and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogenous analysis. Based on goodness-of-fit statistic and L-moment ratio diagrams, Generalized extreme-value (GEV) and Generalized Normal (GNO) distributions were identified as the best-fit distributions for most of the sub regions. Estimated quantiles for each region were further obtained. Monte-Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root mean square errors (RMSEs) were bigger and the 90% error bounds were wider with inter-site dependence than those with no inter-site dependence for both the regional growth curve and quantile curve. The spatial patterns of extreme precipitation with return period of 100 years were obtained which indicated that there are two regions with the highest precipitation extremes (southeastern coastal area of Zhejiang Province and the
Calculation of three-dimensional groundwater transport using second-order moments
International Nuclear Information System (INIS)
Pepper, D.W.; Stephenson, D.E.
1987-01-01
Groundwater transport of contaminants from the F-Area seepage basin at the Savannah River Plant (SRP) was calculated using a three-dimensional, second-order moment technique. The numerical method calculates the zeroth, first, and second moment distributions of concentration within a cell volume. By summing the moments over the entire solution domain, and using a Lagrangian advection scheme, concentrations are transported without numerical dispersion errors. Velocities obtained from field tests are extrapolated and interpolated to all nodal points; a variational analysis is performed over the three-dimensional velocity field to ensure mass consistency. Transport predictions are calculated out to 12,000 days. 28 refs., 9 figs
Students' Attitudes toward Statistics across the Disciplines: A Mixed-Methods Approach
Griffith, James D.; Adams, Lea T.; Gu, Lucy L.; Hart, Christian L.; Nichols-Whitehead, Penney
2012-01-01
Students' attitudes toward statistics were investigated using a mixed-methods approach including a discovery-oriented qualitative methodology among 684 undergraduate students across business, criminal justice, and psychology majors where at least one course in statistics was required. Students were asked about their attitudes toward statistics and…
W-boson electric dipole moment
International Nuclear Information System (INIS)
He, X.; McKellar, B.H.J.
1990-01-01
The W-boson electric dipole moment is calculated in the SU(3)_C × SU(2)_L × U(1)_Y model with several Higgs-boson doublets. Using the constraint on the CP-violating parameters from the experimental upper bound of the neutron electric dipole moment, we find that the W-boson electric dipole moment is constrained to be less than 10⁻⁴
Identifying User Profiles from Statistical Grouping Methods
Directory of Open Access Journals (Sweden)
Francisco Kelsen de Oliveira
2018-02-01
Full Text Available This research aimed to group users into subgroups according to their levels of knowledge about technology. Statistical hierarchical and non-hierarchical clustering methods were studied, compared and used to create the subgroups based on the similarity of the users' technology skill levels. The research sample consisted of teachers who answered online questionnaires about their skills with educational software and hardware. The statistical grouping methods were applied and showed the possible groupings of the users. The analysis of these groups made it possible to identify the common characteristics among the individuals of each subgroup. It was therefore possible to define two subgroups of users, one with greater skill with technology and another with less skill, and the partial results of the research showed two main grouping algorithms with 92% similarity in the formation of the group of users skilled with technology and the group with little skill, confirming the accuracy of the techniques for discriminating between individuals.
On the electric dipole moments of small sodium clusters from different theoretical approaches
International Nuclear Information System (INIS)
Aguado, Andrés; Largo, Antonio; Vega, Andrés; Balbás, Luis Carlos
2012-01-01
Graphical abstract: The dipole moments and polarizabilities of a few isomers of sodium clusters of selected sizes (n = 13, 14, 16) are calculated using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Highlights: ► Dipole moment and polarizability of sodium clusters from DFT and ab initio methods. ► New van der Waals self-consistent implementation of non-local dispersion interactions. ► New starting isomeric geometries from extensive search of global minimum structures. ► Good agreement with recent experiments at cryogenic temperatures. - Abstract: The dipole moments of Na_n clusters of selected sizes (n = 13, 14, 16), obtained recently through an extensive unbiased search of the global minimum structures, are calculated using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Both non-local pseudopotentials and all-electron implementations are employed and compared in order to assess the possible contribution of the core electrons to the electric dipole moments. Our new geometries possess significantly smaller electric dipole moments than previous density functional results, mostly when combined with the van der Waals exchange–correlation functional. However, although the agreement with experiment clearly improves upon previous calculations, the theoretical dipole moments are still about one order of magnitude larger than the experimental values, suggesting that the correct global minimum structures have not been
Integrated Chassis Control System with Fail Safety Using Optimum Yaw Moment Distribution
International Nuclear Information System (INIS)
Yim, Seongjin
2014-01-01
This paper presents an integrated chassis control system with fail safety using optimum yaw moment distribution for a vehicle with steer-by-wire and brake-by-wire devices. The proposed system has two-level structure: upper- and lower-level controllers. In the upper-level controller, the control yaw moment is computed with sliding mode control theory. In the lower-level controller, the control yaw moment is distributed into the tire forces of active front steering(AFS) and electronic stability control(ESC) with the weighted pseudo-inverse based control allocation(WPCA) method. By setting the variable weights in WPCA, it is possible to take the sensor/actuator failure into account. In this framework, it is necessary to optimize the variables weights in order to enhance the yaw moment distribution. For this purpose, simulation-based tuning is proposed. To show the effectiveness of the proposed method, simulations are conducted on a vehicle simulation package, CarSim
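A minimal sketch of weighted pseudo-inverse control allocation (WPCA) as named above: a commanded yaw moment v is distributed among actuator efforts u by minimizing u'Wu subject to Bu = v, giving u = W⁻¹B'(BW⁻¹B')⁻¹v; the effectiveness row B, the weight values, and the failure emulation below are illustrative assumptions, not the paper's vehicle model.

```python
# Sketch: weighted pseudo-inverse allocation of a scalar yaw moment to 3 efforts.
import numpy as np

def wpca_allocate(B, W, v):
    """Minimize u' W u subject to B u = v:  u = W^-1 B' (B W^-1 B')^-1 v."""
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.inv(B @ Winv @ B.T) @ v

B = np.array([[1.0, 0.8, 0.8]])                  # yaw-moment effectiveness of 3 efforts
v = np.array([1500.0])                           # commanded control yaw moment, N*m

healthy = wpca_allocate(B, np.diag([1.0, 1.0, 1.0]), v)
failed = wpca_allocate(B, np.diag([1e6, 1.0, 1.0]), v)   # heavy weight de-emphasizes a failed actuator
print(healthy, failed)
```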
Integrated Chassis Control System with Fail Safety Using Optimum Yaw Moment Distribution
Energy Technology Data Exchange (ETDEWEB)
Yim, Seongjin [Seoul Nat' l Univ. of Sci. and Tech., Seoul (Korea, Republic of)
2014-03-15
This paper presents an integrated chassis control system with fail safety using optimum yaw moment distribution for a vehicle with steer-by-wire and brake-by-wire devices. The proposed system has a two-level structure with upper- and lower-level controllers. In the upper-level controller, the control yaw moment is computed with sliding mode control theory. In the lower-level controller, the control yaw moment is distributed into the tire forces of active front steering (AFS) and electronic stability control (ESC) with the weighted pseudo-inverse based control allocation (WPCA) method. By setting the variable weights in the WPCA, it is possible to take sensor/actuator failures into account. In this framework, it is necessary to optimize the variable weights in order to enhance the yaw moment distribution. For this purpose, simulation-based tuning is proposed. To show the effectiveness of the proposed method, simulations are conducted on a vehicle simulation package, CarSim.
Multifractals embedded in short time series: An unbiased estimation of probability moment
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical predictions and computational results show that it provides an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant performance advantages over the other methods. The factorial-moment-based estimation correctly evaluates scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance that is the focus of the present paper, the proposed factorial moment of continuous order can find various uses, such as detecting nonextensive behaviors of a complex system and reconstructing the causality network between elements of a complex system.
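For integer orders, the idea behind a factorial-moment-based estimator can be sketched in a few lines (the continuous-order extension, which is the paper's actual contribution, requires the approximation-theory machinery described above); the box-counting setup and all variable names below are illustrative assumptions.

```python
# Illustrative sketch: unbiased estimation of the probability moment
# sum_i p_i**q (integer q) from box counts, using factorial moments.
import numpy as np

def falling_factorial(n, q):
    """n * (n-1) * ... * (n-q+1), elementwise."""
    out = np.ones_like(n, dtype=float)
    for j in range(q):
        out *= (n - j)
    return out

def probability_moment(counts, q):
    """Unbiased estimate of sum_i p_i**q from multinomial box counts."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    return falling_factorial(counts, q).sum() / falling_factorial(np.array(N), q)

# Toy example: a short series partitioned into boxes.
rng = np.random.default_rng(1)
series = rng.random(500)
counts, _ = np.histogram(series, bins=32)
print(probability_moment(counts, q=2))  # estimate of sum_i p_i^2
```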
Reconstruction of convex bodies from moments
DEFF Research Database (Denmark)
Hörrmann, Julia; Kousholt, Astrid
We investigate how much information about a convex body can be retrieved from a finite number of its geometric moments. We give a sufficient condition for a convex body to be uniquely determined by a finite number of its geometric moments, and we show that among all convex bodies, those which ... algorithm that approximates a convex body using a finite number of its Legendre moments. The consistency of the algorithm is established using the stability result for Legendre moments. When only noisy measurements of Legendre moments are available, the consistency of the algorithm is established under ...
Delgado, Carlos; Cátedra, Manuel Felipe
2018-05-01
This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.
International Nuclear Information System (INIS)
Towner, I.S.; Khanna, F.C.
1984-01-01
Consideration of core polarization, isobar currents and meson-exchange processes gives a satisfactory understanding of the ground-state magnetic moments in closed-shell-plus (or minus)-one nuclei, A = 3, 15, 17, 39 and 41. Ever since the earliest days of the nuclear shell model, the understanding of magnetic moments of nuclear states of supposedly simple configurations, such as doubly closed LS shells ±1 nucleon, has been a challenge for theorists. The experimental moments, which in most cases are known with extraordinary precision, show a small yet significant departure from the single-particle Schmidt values. The departure, however, is difficult to evaluate precisely since, as will be seen, it results from a sensitive cancellation between several competing corrections, each of which can be as large as the observed discrepancy. This, then, is the continuing fascination of magnetic moments. In this contribution, we revisit the subject principally to identify the role played by isobar currents, which are of much concern at this conference. But in so doing we warn quite strongly of the dangers of considering isobar currents in isolation; equal consideration must be given to competing processes, which in this context are the mundane nuclear structure effects, such as core polarization, and the more popular meson-exchange currents.
Vortex methods and vortex statistics
International Nuclear Information System (INIS)
Chorin, A.J.
1993-05-01
Vortex methods originated from the observation that in incompressible, inviscid, isentropic flow vorticity (or, more accurately, circulation) is a conserved quantity, as can be readily deduced from the absence of tangential stresses. Thus if the vorticity is known at time t = 0, one can deduce the flow at a later time by simply following it around. In this narrow context, a vortex method is a numerical method that makes use of this observation. More generally, the analysis of vortex methods leads to problems that are closely related to problems in quantum physics and field theory, as well as in harmonic analysis. A broad enough definition of vortex methods ends up encompassing much of science. Even the purely computational aspects of vortex methods encompass a range of ideas for which vorticity may not be the best unifying theme. The author restricts himself in these lectures to a special class of numerical vortex methods, those that are based on a Lagrangian transport of vorticity in hydrodynamics by smoothed particles ("blobs") and those whose understanding contributes to the understanding of blob methods. Vortex methods for inviscid flow lead to systems of ordinary differential equations that can be readily clothed in Hamiltonian form, both in three and two space dimensions, and they can preserve exactly a number of invariants of the Euler equations, including topological invariants. Their viscous versions resemble Langevin equations. As a result, they provide a very useful cartoon of statistical hydrodynamics, i.e., of turbulence, one that can to some extent be analyzed analytically and, more importantly, explored numerically, with important implications also for superfluids, superconductors, and even polymers. In the author's view, vortex "blob" methods provide the most promising path to the understanding of these phenomena.
Statistical inference based on latent ability estimates
Hoijtink, H.J.A.; Boomsma, A.
The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use ...
Mathematical and Statistical Methods for Actuarial Sciences and Finance
Legros, Florence; Perna, Cira; Sibillo, Marilena
2017-01-01
This volume gathers selected peer-reviewed papers presented at the international conference "MAF 2016 – Mathematical and Statistical Methods for Actuarial Sciences and Finance”, held in Paris (France) at the Université Paris-Dauphine from March 30 to April 1, 2016. The contributions highlight new ideas on mathematical and statistical methods in actuarial sciences and finance. The cooperation between mathematicians and statisticians working in insurance and finance is a very fruitful field, one that yields unique theoretical models and practical applications, as well as new insights in the discussion of problems of national and international interest. This volume is addressed to academicians, researchers, Ph.D. students and professionals.
International Nuclear Information System (INIS)
Calvin W. Johnson
2004-01-01
The general goal of the project is to develop and implement computer codes and input files to compute nuclear densities of states. Such densities are important input into calculations of statistical neutron capture, and are difficult to access experimentally. In particular, we will focus on calculating densities for nuclides in the mass range A ∼ 50-100. We use statistical spectroscopy, a moments method based upon a microscopic framework, the interacting shell model. In this report we present our progress for the past year.
On the electric dipole moments of small sodium clusters from different theoretical approaches
Energy Technology Data Exchange (ETDEWEB)
Aguado, Andres, E-mail: aguado@metodos.fam.cie.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain); Largo, Antonio, E-mail: alargo@qf.uva.es [Departamento de Quimica Fisica y Quimica Inorganica, Universidad de Valladolid (Spain); Vega, Andres, E-mail: vega@fta.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain); Balbas, Luis Carlos, E-mail: balbas@fta.uva.es [Departamento de Fisica Teorica, Atomica, y Optica, Universidad de Valladolid (Spain)
2012-05-03
Graphical abstract: The dipole moments and polarizabilities of a few isomers of sodium clusters of selected sizes (n = 13, 14, 16) are calculated using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Highlights: ► Dipole moment and polarizability of sodium clusters from DFT and ab initio methods. ► New van der Waals selfconsistent implementation of non-local dispersion interactions. ► New starting isomeric geometries from extensive search of global minimum structures. ► Good agreement with recent experiments at cryogenic temperatures. - Abstract: The dipole moments of Na{sub n} clusters in the size range 10 < n < 20, recently measured at very low temperature (20 K), are much smaller than predicted by standard density functional methods. On the other hand, the calculated static dipole polarizabilities in that range of sizes deviate non-systematically from the measured ones, depending on the employed first principles approach. In this work we calculate the dipole moments and polarizabilities of a few isomers of Na{sub n} clusters of selected sizes (n = 13, 14, 16), obtained recently through an extensive unbiased search of the global minimum structures, and using density functional theory methods as well as ab initio MP2, CASSCF, and MR-CI methods. Among the density functional approaches, we consider the usual local density and generalized gradient approximations, as well as a recent van der Waals self-consistent functional accounting for non-local dispersion interactions. Both non-local pseudopotentials and all-electron implementations are employed and compared in order to assess the possible contribution of the core electrons to the electric dipole moments.
A method for statistical steady state thermal analysis of reactor cores
International Nuclear Information System (INIS)
Whetton, P.A.
1980-01-01
This paper presents a method for performing a statistical steady state thermal analysis of a reactor core. The technique is only outlined here since detailed thermal equations are dependent on the core geometry. The method has been applied to a pressurised water reactor core and the results are presented for illustration purposes. Random hypothetical cores are generated using the Monte-Carlo method. The technique shows that by splitting the parameters into two types, denoted core-wise and in-core, the Monte Carlo method may be used inexpensively. The idea of using extremal statistics to characterise the low probability events (i.e. the tails of a distribution) is introduced together with a method of forming the final probability distribution. After establishing an acceptable probability of exceeding a thermal design criterion, the final probability distribution may be used to determine the corresponding thermal response value. If statistical and deterministic (i.e. conservative) thermal response values are compared, information on the degree of pessimism in the deterministic method of analysis may be inferred and the restrictive performance limitations imposed by this method relieved. (orig.)
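A hedged sketch of the Monte Carlo idea described above follows: generate random hypothetical cores, evaluate a thermal response for each, and read off the response value at an accepted probability of exceeding the design criterion. The toy response model, the parameter distributions, and the use of a simple empirical quantile (in place of the paper's extremal-statistics treatment of the tail) are assumptions made purely for illustration.

```python
# Hedged sketch: Monte Carlo generation of hypothetical cores and a
# statistical thermal response value at a chosen exceedance probability.
# The response model and parameter distributions are invented.
import numpy as np

rng = np.random.default_rng(42)
n_cores = 100_000

# "Core-wise" parameters: one draw per hypothetical core.
inlet_temp = rng.normal(290.0, 2.0, n_cores)                  # deg C
# "In-core" parameters: e.g., a hot-channel factor.
hot_channel_factor = rng.lognormal(mean=0.0, sigma=0.05, size=n_cores)

# Toy thermal response (stand-in for the detailed thermal equations).
response = inlet_temp + 60.0 * hot_channel_factor

# Accepted probability of exceeding the thermal design criterion.
p_exceed = 1.0e-3
design_value = np.quantile(response, 1.0 - p_exceed)
print(f"statistical thermal response value: {design_value:.1f} deg C")
```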
Moment invariants for particle beams
International Nuclear Information System (INIS)
Lysenko, W.P.; Overley, M.S.
1988-01-01
The rms emittance is a certain function of second moments in 2-D phase space. It is preserved for linear uncoupled (1-D) motion. In this paper, the authors present new functions of moments that are invariants for coupled motion. These invariants were computed symbolically using a computer algebra system. Possible applications for these invariants are discussed. Also, approximate moment invariants for nonlinear motion are presented
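As a concrete reminder of the quantities involved (not the authors' new invariants), the 1-D rms emittance is a function of the second moments, and one simple moment function that survives linear coupled transport is the determinant of the full 4-D second-moment matrix. The sketch below, with invented particle data, illustrates both.

```python
# Sketch: rms emittance from second moments, and the determinant of the
# 4-D second-moment (covariance) matrix as a simple invariant of linear
# symplectic, i.e. coupled, transport.  Particle data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
# Columns: x, x', y, y' (centered phase-space coordinates).
coords = rng.multivariate_normal(np.zeros(4), np.diag([1.0, 0.5, 2.0, 0.2]), 10_000)

sigma = np.cov(coords, rowvar=False)          # 4x4 matrix of second moments

# 1-D rms emittance in the (x, x') plane: sqrt(<x^2><x'^2> - <x x'>^2).
eps_x = np.sqrt(sigma[0, 0] * sigma[1, 1] - sigma[0, 1] ** 2)

# A moment function preserved by any linear symplectic (coupled) transport.
eps_4d = np.sqrt(np.linalg.det(sigma))

print(eps_x, eps_4d)
```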
Directory of Open Access Journals (Sweden)
Andrzej Magiera
2017-09-01
Full Text Available Measurements of the electric dipole moment (EDM) of light hadrons with the use of a storage ring have been proposed. The expected effect is very small; therefore, various subtle effects need to be considered. In particular, the interaction of a particle's magnetic dipole moment and electric quadrupole moment with electromagnetic field gradients can produce an effect of a similar order of magnitude to that expected for the EDM. This paper describes a very promising method employing an rf Wien filter, which allows that contribution to be disentangled from the genuine EDM effect. It is shown that both effects can be separated by a proper setting of the rf Wien filter frequency and phase. In an EDM measurement the magnitude of the systematic uncertainties plays a key role, and they should be under strict control. It is shown that the particles' interaction with field gradients also offers the possibility to estimate global systematic uncertainties with the precision necessary for an EDM measurement with the planned accuracy.
Magiera, Andrzej
2017-09-01
Measurements of the electric dipole moment (EDM) of light hadrons with the use of a storage ring have been proposed. The expected effect is very small; therefore, various subtle effects need to be considered. In particular, the interaction of a particle's magnetic dipole moment and electric quadrupole moment with electromagnetic field gradients can produce an effect of a similar order of magnitude to that expected for the EDM. This paper describes a very promising method employing an rf Wien filter, which allows that contribution to be disentangled from the genuine EDM effect. It is shown that both effects can be separated by a proper setting of the rf Wien filter frequency and phase. In an EDM measurement the magnitude of the systematic uncertainties plays a key role, and they should be under strict control. It is shown that the particles' interaction with field gradients also offers the possibility to estimate global systematic uncertainties with the precision necessary for an EDM measurement with the planned accuracy.
Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.
2017-06-01
An extended quadrature method of moments using the β kernel density function (β -EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β -PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β -EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
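The forward direction of the β-EQMOM representation is easy to sketch: given weights and β-kernel parameters, the raw moments of the reconstructed composition PDF follow from the known moments of the Beta distribution. The inverse step (recovering weights and parameters from the transported moments) is the nontrivial part of the method and is not shown; the names and parameter values below are illustrative assumptions.

```python
# Sketch (forward direction only): raw moments of a weighted sum of beta
# kernels on [0, 1], as used to represent a composition PDF.  The weights
# and shape parameters below are arbitrary illustrative values.
import numpy as np

def beta_raw_moment(alpha, beta, k):
    """k-th raw moment of Beta(alpha, beta): prod_{j<k} (alpha+j)/(alpha+beta+j)."""
    m = 1.0
    for j in range(k):
        m *= (alpha + j) / (alpha + beta + j)
    return m

def mixture_moments(weights, alphas, betas, max_order):
    """Raw moments 0..max_order of sum_i w_i * Beta(alpha_i, beta_i)."""
    return np.array([
        sum(w * beta_raw_moment(a, b, k)
            for w, a, b in zip(weights, alphas, betas))
        for k in range(max_order + 1)
    ])

# Two-kernel example mimicking a bimodal composition PDF in binary mixing.
print(mixture_moments(weights=[0.4, 0.6], alphas=[2.0, 8.0],
                      betas=[8.0, 2.0], max_order=4))
```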
Energy Technology Data Exchange (ETDEWEB)
Hanks, T.C.; Kanamori, H.
1979-05-10
The nearly coincident forms of the relations between the seismic moment M_0 and the magnitudes M_L, M_s, and M_w imply a moment magnitude scale M = (2/3) log M_0 - 10.7, which is uniformly valid for 3 ≲ M_L ≲ 7, 5 ≲ M_s ≲ 7.5, and M_w ≳ 7.5.
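Spelled out with explicit units, the scale reads M_w = (2/3) log10(M_0) - 10.7 with M_0 in dyne·cm; a minimal numerical check (the sample moment value is illustrative):

```python
# Moment magnitude from seismic moment on the Hanks-Kanamori scale.
# M0 must be given in dyne*cm (1 N*m = 1e7 dyne*cm).
import math

def moment_magnitude(M0_dyne_cm: float) -> float:
    return (2.0 / 3.0) * math.log10(M0_dyne_cm) - 10.7

print(moment_magnitude(1.0e27))  # illustrative moment of 1e27 dyne*cm -> Mw = 7.3
```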
Applied systems ecology: models, data, and statistical methods
Energy Technology Data Exchange (ETDEWEB)
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
Statistical and Fractal Processing of Phase Images of Human Biological Fluids
Directory of Open Access Journals (Sweden)
MARCHUK, Y. I.
2010-11-01
Full Text Available Performed in this work are complex statistical and fractal analyses of phase properties inherent to birefringence networks of liquid crystals consisting of optically thin layers prepared from human bile. Within the framework of a statistical approach, the authors have investigated the values and ranges of change of the statistical moments of the 1st to 4th orders that characterize the coordinate distributions of phase shifts between the orthogonal amplitude components of laser radiation transformed by human bile with various pathologies. Using the Gram-Charlier method, correlation criteria are ascertained for the differentiation of phase maps describing pathologically changed liquid-crystal networks. Within the framework of the fractal approach, the dimensionalities of self-similar coordinate phase distributions are determined, as well as the features of the transformation of logarithmic dependences of the power spectra of these distributions for various types of human pathologies.
Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.
2018-04-01
We consider the practical realization of a new optical probe method for random media, defined as reference-free path-length interferometry with intensity-moment analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for the ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for the random medium was reconstructed.
Nuclear structure studies by means of magnetic moments of excited states
International Nuclear Information System (INIS)
Kaeubler, L.; Prade, H.; Schneider, L.; Brinckmann, H.F.; Stary, F.
1981-09-01
Experimental arrangements installed at the cyclotron U-120 and the tandem accelerator EGP-10 for the in-beam measurement of magnetic moments of excited nuclear states are described. The perturbed angular distribution (PAD) method has been used. A new evaluation method has been developed for the unique determination of the Larmor frequency from spin-precession spectra R(t) with less than half of an oscillation period between consecutive particle pulses. Magnetic moments in transitional nuclei or in nuclei near closed shells ( 103 Pd, 105 Ag, 117 Sb, 117 Te, 121 Te, 121 I, 143 Pm and 207 Bi) were measured. The results are discussed with the aim of obtaining information about the nuclear structure of the corresponding isomeric states in connection with complex spectroscopic investigations. Therefore, the experimental values are compared to the results of model calculations (core-polarization, core-particle-coupling, Nilsson, particle-rotation-coupling or shell-model) or to estimates based on the additivity of effective magnetic moments. Single-particle aspects are discussed in connection with the magnetic moments of hsub(11/2)-, dsub(5/2)- and gsub(7/2)-neutron (ν) and proton (π) states in the nuclei 103 Pd, 117 Te, 121 Te and 143 Pm, respectively. The configurations of (π) 3 and (π)(ν) 2 three-particle states in 105 Ag, 117 Sb, 121 I and 207 Bi could be determined using the additivity rule. The experimental magnetic moments of states in 143 Pm agree very well with the results of shell-model calculations, which have been carried out for the first time also for negative-parity states in this mass region. Considering magnetic moments in 117 Te and 121 Te we could demonstrate the influence of different nuclear deformations on the magnetic moments in transitional nuclei. (author)
Moment distributions of clusters and molecules in the adiabatic rotor model
Ballentine, G. E.; Bertsch, G. F.; Onishi, N.; Yabana, K.
2008-01-01
We present a Fortran program to compute the distribution of dipole moments of free particles for use in analyzing molecular beams experiments that measure moments by deflection in an inhomogeneous field. The theory is the same for magnetic and electric dipole moments, and is based on a thermal ensemble of classical particles that are free to rotate and that have moment vectors aligned along a principal axis of rotation. The theory has two parameters, the ratio of the magnetic (or electric) dipole energy to the thermal energy, and the ratio of moments of inertia of the rotor. Program summary: Program title: AdiabaticRotor. Catalogue identifier: ADZO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 479. No. of bytes in distributed program, including test data, etc.: 4853. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Pentium-IV, Macintosh Power PC G4. Operating system: Linux, Mac OS X. RAM: 600 Kbytes. Word size: 64 bits. Classification: 2.3. Nature of problem: The system considered is a thermal ensemble of rotors having a magnetic or electric moment aligned along one of the principal axes. The ensemble is placed in an external field which is turned on adiabatically. The problem is to find the distribution of moments in the presence of the external field. Solution method: There are three adiabatic invariants. The only nontrivial one is the action associated with the polar angle of the rotor axis with respect to the external field. It is found by Newton's method. Running time: 3 min on a 3 GHz Pentium IV processor.
International Nuclear Information System (INIS)
Calvin W. Johnson
2005-01-01
The general goal of the project is to develop and implement computer codes and input files to compute nuclear densities of states. Such densities are important input into calculations of statistical neutron capture, and are difficult to access experimentally. In particular, we will focus on calculating densities for nuclides in the mass range A ∼ 50-100. We use statistical spectroscopy, a moments method based upon a microscopic framework, the interacting shell model. Second-year goals and milestones: develop two or three competing interactions (based upon surface-delta, Gogny, and NN-scattering) suitable for application to nuclei up to A = 100; begin calculations for nuclides with A = 50-70.
Directory of Open Access Journals (Sweden)
Essadki Mohamed
2016-09-01
Full Text Available Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue in order to properly predict various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high order moment method in size, with a maximization of entropy technique in order to provide a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comp. Phys. 237, 277-310]. The present contribution relies on a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016) Submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and coupling with a separated phases model describing the dynamics and atomization of the interface near the injector. The novelty is to be found in terms of modeling, numerical schemes and implementation. A new high order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively ...
Statistics of spatially integrated speckle intensity difference
DEFF Research Database (Denmark)
Hanson, Steen Grüner; Yura, Harold
2009-01-01
We consider the statistics of the spatially integrated speckle intensity difference obtained from two separated finite collecting apertures. For fully developed speckle, closed-form analytic solutions for both the probability density function and the cumulative distribution function are derived here for both arbitrary values of the mean number of speckles contained within an aperture and the degree of coherence of the optical field. Additionally, closed-form expressions are obtained for the corresponding nth statistical moments.
Mathematical methods in quantum and statistical mechanics
International Nuclear Information System (INIS)
Fishman, L.
1977-01-01
The mathematical structure and closed-form solutions pertaining to several physical problems in quantum and statistical mechanics are examined in some detail. The J-matrix method, introduced previously for s-wave scattering and based upon well-established Hilbert space theory and related generalized integral transformation techniques, is extended to treat the lth partial wave kinetic energy and Coulomb Hamiltonians within the context of square integrable (L²), Laguerre (Slater), and oscillator (Gaussian) basis sets. The theory of relaxation in statistical mechanics is examined within the context of the theory of linear integro-differential equations of the Master Equation type and their corresponding Markov processes. Several topics of a mathematical nature concerning various computational aspects of the L² approach to quantum scattering theory are discussed.
Classification of Specialized Farms Applying Multivariate Statistical Methods
Directory of Open Access Journals (Sweden)
Zuzana Hloušková
2017-01-01
Full Text Available The paper is aimed at the application of advanced multivariate statistical methods to classifying cattle breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared to the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and the structure of cultivated crops. The output of the paper is intended to be applied within farm structure research focused on the future development of Czech agriculture. As the data source, the farming enterprises database for 2014 from the FADN CZ system has been used. The proposed predictive model exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method have supported the classification of farming enterprises into the group of Small farms (98% classified correctly) and the Large and Very Large enterprises (100% classified correctly). The Medium Size farms have been correctly classified at only 58.11%. Partial shortcomings of the presented process have been found when discriminating between Medium and Small farms.
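A minimal sketch of the linear discriminant classification step on a few selected indicators follows; the indicator names, synthetic data, and train/test split are assumptions and do not correspond to the actual FADN CZ variables.

```python
# Minimal sketch of classifying farms into size classes with linear
# discriminant analysis; the indicator values below are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
# A few selected indicators per farm (e.g., herd size, utilised area, output).
small = rng.normal([30, 50, 0.2], [8, 15, 0.05], size=(100, 3))
medium = rng.normal([120, 200, 0.8], [30, 60, 0.2], size=(100, 3))
large = rng.normal([400, 700, 3.0], [90, 180, 0.7], size=(100, 3))
X = np.vstack([small, medium, large])
y = np.repeat(["Small", "Medium", "Large"], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(confusion_matrix(y_te, lda.predict(X_te), labels=["Small", "Medium", "Large"]))
```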
Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis
Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun
2014-01-01
Background The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of a convenient and practical method to estimate and track in real time the EKAM of patients with knee OA for clinical evaluation and gait training, especially outside of gait laboratories. New Method A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s) This is the first study to develop and test the feasibility of a real-time EKAM tracking method for patients with knee OA using 3-D inverse dynamics. Conclusions The study provides an accurate and practical method to evaluate in real time the critical EKAM associated with knee OA, which is expected to help diagnose and evaluate patients with knee OA and provide them with real-time EKAM feedback during rehabilitation training. PMID:24361759
Statistical Moments in Variable Density Incompressible Mixing Flows
2015-08-28
The algorithm uses an approximate projection method [16], with the interface modeled with the Immersed Boundary Method (IBM).
Lagrangian statistics in weakly forced two-dimensional turbulence.
Rivera, Michael K; Ecke, Robert E
2016-01-01
Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.
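A hedged sketch of the kind of diagnostic described above: moments of velocity increments (structure functions) and a moment ratio computed over a range of lags. The synthetic velocity record below is a stand-in for the measured particle-tracking data.

```python
# Sketch: moments (structure functions) of velocity increments along a
# trajectory, and a flatness-like moment ratio.  The velocity signal here
# is synthetic; in the experiment it comes from particle tracking.
import numpy as np

rng = np.random.default_rng(11)
v = np.cumsum(rng.normal(0.0, 1.0, 20_000)) * 1e-2   # toy velocity time series

def structure_function(v, lag, p):
    dv = v[lag:] - v[:-lag]
    return np.mean(np.abs(dv) ** p)

for tau in (1, 2, 4, 8, 16, 32, 64):
    s2 = structure_function(v, tau, 2)
    s4 = structure_function(v, tau, 4)
    print(tau, s2, s4 / s2 ** 2)   # second moment and a flatness-like ratio
```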
An Efficient Graph-based Method for Long-term Land-use Change Statistics
Directory of Open Access Journals (Sweden)
Yipeng Zhang
2015-12-01
Full Text Available Statistical analysis of land-use change plays an important role in sustainable land management and has received increasing attention from scholars and administrative departments. However, the statistical process involving spatial overlay analysis remains difficult and needs improvement to deal with massive land-use data. In this paper, we introduce a spatio-temporal flow network model to reveal the hidden relational information among spatio-temporal entities. Based on graph theory, the constant condition of saturated multi-commodity flow is derived. A new method based on a network partition technique for the spatio-temporal flow network is proposed to optimize the transition statistics process. The effectiveness and efficiency of the proposed method are verified through experiments using land-use data for Hunan from 2009 to 2014. In a comparison among three different land-use change statistical methods, the proposed method exhibits remarkable superiority in efficiency.
Energy transfer moments in thermalization; Les moments de transfert d'energie en thermalisation
Energy Technology Data Exchange (ETDEWEB)
Soule, J L; Pillard, D [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1964-07-01
For all moderators of the 'incoherent Gaussian' type, it is possible to calculate, at any temperature, the energy transfer moments as a function of the incident energy without having to use the differential cross-sections. Integral formulae are derived for the integral cross-section and for the first and second moments, which make it possible to tabulate these three functions directly in a few minutes of computation on an IBM 7094, for most of the models proposed in the literature for the common moderators. (authors)
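For orientation, the quantities in question are the moments M_n(E) = ∫ (E' - E)^n σ_s(E→E') dE'. The sketch below evaluates them by brute-force quadrature from a tabulated scattering kernel, which is exactly the route the paper's direct integral formulae avoid; the toy Gaussian kernel and all numerical values are assumptions.

```python
# Brute-force evaluation of energy-transfer moments
#   M_n(E) = integral (E' - E)^n * sigma_s(E -> E') dE'
# from a tabulated scattering kernel.  The kernel used here is a toy
# Gaussian in E' - E, purely for illustration; the paper's point is that
# such moments can be obtained directly, without the differential kernel.
import numpy as np

def transfer_moments(E, E_prime, kernel, orders=(0, 1, 2)):
    """kernel[j] ~ sigma_s(E -> E_prime[j]); returns {n: M_n(E)}."""
    dE = E_prime - E
    return {n: np.trapz(dE ** n * kernel, E_prime) for n in orders}

E = 0.5                                   # incident energy (arbitrary units)
E_prime = np.linspace(0.0, 2.0, 2001)
kernel = np.exp(-((E_prime - E) ** 2) / (2 * 0.05 ** 2))   # toy kernel
print(transfer_moments(E, E_prime, kernel))
```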
Han, Kyunghwa; Jung, Inkyung
2018-05-01
This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.
Multifractal moments in heavy ion Pb-Pb collisions at 158 A GeV
Energy Technology Data Exchange (ETDEWEB)
Dutt, Sunil [Department of Physics, Govt. College for Women GandhiNagar, Jammu - J& K (India)
2016-05-06
In the present work, we use the method of scaled factorial moments to search for intermittent behavior in Pb-Pb interactions at 158 A GeV. The analysis is performed on photon distributions obtained using a preshower photon multiplicity detector. Scaled factorial moments are used to study short-range fluctuations in the pseudorapidity distributions of photons, and are calculated using both the corrected horizontal analysis and the vertical analysis. The results are compared with a simulation analysis using the VENUS event generator.
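A hedged sketch of horizontally averaged scaled factorial moments F_q(M) over M pseudorapidity bins follows; the toy event generation, bin range, and averaging convention are illustrative assumptions, not the detector analysis.

```python
# Sketch of scaled factorial moments F_q(M) averaged over events and over M
# pseudorapidity bins ("horizontal" averaging); the toy events below stand
# in for the measured photon pseudorapidities.
import numpy as np

def scaled_factorial_moment(events, M, q, eta_range=(-1.0, 1.0)):
    num, den = 0.0, 0.0
    for eta in events:                       # eta: photon pseudorapidities per event
        n, _ = np.histogram(eta, bins=M, range=eta_range)
        fact = np.ones_like(n, dtype=float)
        for j in range(q):
            fact *= (n - j)                  # n(n-1)...(n-q+1)
        num += fact.mean()
        den += n.mean()
    num /= len(events)
    den /= len(events)
    return num / den ** q

rng = np.random.default_rng(5)
events = [rng.uniform(-1.0, 1.0, rng.poisson(200)) for _ in range(500)]
for M in (2, 4, 8, 16, 32):
    print(M, scaled_factorial_moment(events, M, q=2))
```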
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
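The flavour of the approach can be conveyed by a minimal regularized deconvolution: a smoothness prior turns the ill-posed unfolding K m = f into a well-posed least-squares problem. Turchin's method additionally treats the strength of the prior in a Bayesian way; the fixed regularization weight and the toy Gaussian apparatus function below are simplifying assumptions.

```python
# Minimal sketch of regularized deconvolution: recover a signal m from
# f = K m + noise using a second-derivative smoothness prior.  Turchin's
# statistical regularization chooses the prior weight alpha in a Bayesian
# way; here alpha is simply fixed for illustration.
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
true_signal = np.exp(-((x - 0.4) / 0.07) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.05) ** 2)

# Apparatus function: Gaussian smearing kernel (toy model).
K = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
f = K @ true_signal + rng.normal(0.0, 0.01, n)      # measured, smeared spectrum

# Second-derivative operator encoding the smoothness prior.
D2 = np.diff(np.eye(n), n=2, axis=0)
alpha = 1.0e-3
m = np.linalg.solve(K.T @ K + alpha * D2.T @ D2, K.T @ f)

print("residual norm:", np.linalg.norm(K @ m - f))
```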
Statistic methods for searching inundated radioactive entities
International Nuclear Information System (INIS)
Dubasov, Yu.V.; Krivokhatskij, A.S.; Khramov, N.N.
1993-01-01
The problem of searching for a flooded radioactive object in a given area is considered. Various models for plotting the search route are discussed. It is shown that a spiral route through random points from the centre of the examined area is the most efficient one. The conclusion is made that, when searching for flooded radioactive objects, it is advisable to use multidimensional statistical methods of classification.
Heeling Moment Acting on a River Cruiser in Manoeuvring Motion
Directory of Open Access Journals (Sweden)
Tabaczek Tomasz
2016-01-01
Full Text Available By using a fully theoretical method, the heeling moment due to centrifugal forces has been determined for a small river cruiser in a turning manoeuvre. The authors applied CFD software to determine the hull hydrodynamic forces, and used the open-water characteristics of a ducted propeller to estimate the thrust of the rudder-propellers. Numerical integration of the equations of 3DOF motion was used to predict the ship trajectory and the time histories of velocities, forces and heeling moment.
Nonequilibrium Statistical Operator Method and Generalized Kinetic Equations
Kuzemsky, A. L.
2018-01-01
We consider some principal problems of nonequilibrium statistical thermodynamics in the framework of the Zubarev nonequilibrium statistical operator approach. We present a brief comparative analysis of some approaches to describing irreversible processes based on the concept of nonequilibrium Gibbs ensembles and their applicability to describing nonequilibrium processes. We discuss the derivation of generalized kinetic equations for a system in a heat bath. We obtain and analyze a damped Schrödinger-type equation for a dynamical system in a heat bath. We study the dynamical behavior of a particle in a medium taking the dissipation effects into account. We consider the scattering problem for neutrons in a nonequilibrium medium and derive a generalized Van Hove formula. We show that the nonequilibrium statistical operator method is an effective, convenient tool for describing irreversible processes in condensed matter.
A behavioral asset pricing model with a time-varying second moment
International Nuclear Information System (INIS)
Chiarella, Carl; He Xuezhong; Wang, Duo
2006-01-01
We develop a simple behavioral asset pricing model with fundamentalists and chartists in order to study price behavior in financial markets when chartists estimate both conditional mean and variance by using a weighted averaging process. Through a stability, bifurcation, and normal form analysis, the market impact of the weighting process and time-varying second moment are examined. It is found that the fundamental price becomes stable (unstable) when the activities from both types of traders are balanced (unbalanced). When the fundamental price becomes unstable, the weighting process leads to different price dynamics, depending on whether the chartists act as either trend followers or contrarians. It is also found that a time-varying second moment of the chartists does not change the stability of the fundamental price, but it does influence the stability of the bifurcations. The bifurcation becomes stable (unstable) when the chartists are more (less) concerned about the market risk characterized by the time-varying second moment. Different routes to complicated price dynamics are also observed. The analysis provides an analytical foundation for the statistical analysis of the corresponding stochastic version of this type of behavioral model
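The weighted averaging process used by the chartists can be illustrated with a geometric-decay recursion for the conditional mean and second moment of the price. The decay parameter, the price series, and this particular recursion are illustrative assumptions; the paper's exact weighting scheme may differ in detail.

```python
# Illustrative geometric-decay estimates of the conditional mean and
# variance of the price, as a chartist in this type of model might form
# them; delta and the price series are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(8)
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 250))

delta = 0.85                      # memory parameter of the weighting process
m, v = prices[0], 1.0             # initial mean and variance estimates
for p in prices[1:]:
    v = delta * v + delta * (1.0 - delta) * (p - m) ** 2
    m = delta * m + (1.0 - delta) * p

print("conditional mean:", m, "conditional variance:", v)
```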
A statistical method for 2D facial landmarking
Dibeklioğlu, H.; Salah, A.A.; Gevers, T.
2012-01-01
Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to correctly function. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in
(AJST) Statistical mechanics model for orientational motion of two-dimensional rigid rotator
African Journals Online (AJOL)
Science and Engineering Series Vol. 6, No. 2, pp. 94-101. STATISTICAL MECHANICS MODEL FOR ORIENTATIONAL MOTION OF TWO-DIMENSIONAL RIGID ROTATOR. Malo, J.O. ... there is no translational motion and that they are well separated so ... constant and I is the moment of inertia of a linear rotator. Thus, the ...
Evaluating statistical cloud schemes: What can we gain from ground-based remote sensing?
Grützun, V.; Quaas, J.; Morcrette, C. J.; Ament, F.
2013-09-01
Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the "perfect model approach." This means that we employ a high-resolution weather model as virtual reality and retrieve full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observation to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy occurring is due to the method. Focusing on total water mixing ratio, we find that the mean ratio can be evaluated decently but that it strongly depends on the meteorological conditions as to whether the variance and skewness are reliable. Using some simple schematic description of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation is necessary to judge whether we can use the data for an evaluation of higher moments of the humidity distribution used by a statistical cloud scheme.
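The core of the perfect-model comparison can be sketched in a few lines: compute the mean, variance, and skewness of the full 3-D total-water field over a region and compare with the same moments computed along a single vertical column, i.e. the virtual ground-based profile. The synthetic field below is an assumption standing in for the high-resolution model output.

```python
# Sketch of the perfect-model comparison: moments of a (synthetic) 3-D
# total-water field over a region versus the same moments sampled along a
# single column, mimicking a ground-based profiling instrument.
import numpy as np

def moments(a):
    a = a.ravel()
    mu, sig = a.mean(), a.std()
    skew = np.mean((a - mu) ** 3) / sig ** 3
    return mu, sig ** 2, skew

rng = np.random.default_rng(9)
nx = ny = 64
nz = 40
field = rng.gamma(shape=4.0, scale=1.0e-3, size=(nx, ny, nz))   # toy q_t field

region_moments = moments(field)             # "truth": full 3-D statistics
column_moments = moments(field[32, 32, :])  # virtual ground-based column

print("3-D    mean/var/skew:", region_moments)
print("column mean/var/skew:", column_moments)
```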
Multivariate methods and forecasting with IBM SPSS statistics
Aljandali, Abdulkader
2017-01-01
This is the second of a two-part guide to quantitative analysis using the IBM SPSS Statistics software package; this volume focuses on multivariate statistical methods and advanced forecasting techniques. More often than not, regression models involve more than one independent variable. For example, forecasting methods are commonly applied to aggregates such as inflation rates, unemployment, exchange rates, etc., that have complex relationships with determining variables. This book introduces multivariate regression models and provides examples to help understand theory underpinning the model. The book presents the fundamentals of multivariate regression and then moves on to examine several related techniques that have application in business-orientated fields such as logistic and multinomial regression. Forecasting tools such as the Box-Jenkins approach to time series modeling are introduced, as well as exponential smoothing and naïve techniques. This part also covers hot topics such as Factor Analysis, Dis...
Statistical sampling method for releasing decontaminated vehicles
International Nuclear Information System (INIS)
Lively, J.W.; Ware, J.A.
1996-01-01
Earth moving vehicles (e.g., dump trucks, belly dumps) commonly haul radiologically contaminated materials from a site being remediated to a disposal site. Traditionally, each vehicle must be surveyed before being released. The logistical difficulties of implementing the traditional approach on a large scale demand that an alternative be devised. A statistical method (MIL-STD-105E, "Sampling Procedures and Tables for Inspection by Attributes") for assessing product quality from a continuous process was adapted to the vehicle decontamination process. This method produced a sampling scheme that automatically compensates for and accommodates fluctuating batch sizes and changing conditions without the need to modify or rectify the sampling scheme in the field. Vehicles are randomly selected (sampled) upon completion of the decontamination process to be surveyed for residual radioactive surface contamination. The frequency of sampling is based on the expected number of vehicles passing through the decontamination process in a given period and the confidence level desired. This process has been used successfully for one year at the former uranium mill site in Monticello, Utah (a CERCLA-regulated clean-up site). The method forces improvement in the quality of the decontamination process and results in a lower likelihood that vehicles exceeding the surface contamination standards are offered for survey. Implementation of this statistical sampling method on the Monticello Projects has resulted in more efficient processing of vehicles through decontamination and radiological release, has saved hundreds of hours of processing time, has provided a high level of confidence that release limits are met, and has improved the radiological cleanliness of vehicles leaving the controlled site.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images
Energy Technology Data Exchange (ETDEWEB)
De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2017-06-15
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.
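The principle of folding image simulations into the counting step can be caricatured as matching each atomic column's measured scattering cross-section to the nearest entry in a simulated cross-section-versus-thickness library. The actual hybrid method embeds this prior knowledge in a statistical framework rather than a nearest-neighbour assignment, and every number in the sketch below is invented for illustration.

```python
# Caricature of simulation-informed atom-counting: assign each column's
# measured scattering cross-section to the nearest value in a simulated
# library (cross-section vs. number of atoms).  The real hybrid method uses
# these simulated values inside a statistical model; all numbers here are
# invented for illustration.
import numpy as np

# Simulated library: expected cross-section for columns of 1..20 atoms.
thickness = np.arange(1, 21)
library = 0.10 * thickness ** 0.85          # toy monotonic relation

rng = np.random.default_rng(4)
true_counts = rng.integers(1, 21, size=50)
measured = library[true_counts - 1] + rng.normal(0.0, 0.02, size=50)  # noisy data

estimated = thickness[np.argmin(np.abs(measured[:, None] - library[None, :]), axis=1)]
print("fraction counted correctly:", np.mean(estimated == true_counts))
```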