Variational collocation on finite intervals
Energy Technology Data Exchange (ETDEWEB)
Amore, Paolo [Facultad de Ciencias, Universidad de Colima, Bernal Díaz del Castillo 340, Colima, Colima (Mexico)]; Cervantes, Mayra [Facultad de Ciencias, Universidad de Colima, Bernal Díaz del Castillo 340, Colima, Colima (Mexico)]; Fernandez, Francisco M [INIFTA (Conicet, UNLP), Diag. 113 y 64 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata (Argentina)]
2007-10-26
In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schroedinger equation, and we have compared the performance of the present approach with others.
INTERVAL ARITHMETIC AND STATIC INTERVAL FINITE ELEMENT METHOD
Institute of Scientific and Technical Information of China (English)
郭书祥; 吕震宙
2001-01-01
When the uncertainties of structures can be bounded in intervals, an interval finite element method can be constructed through suitable discretization by combining interval analysis with the traditional finite element method (FEM). The two parameters, median and deviation, were used to represent the uncertainties of interval variables. Based on the arithmetic rules of intervals, some properties and arithmetic rules of interval variables were demonstrated. Combining the procedure of interval analysis with FEM, a static linear interval finite element method was presented to solve non-random uncertain structures. Solving for the characteristic parameters of the n-degree-of-freedom uncertain displacement field of the static governing equation was transformed into solving 2n-order linear equations. It is shown by a numerical example that the proposed method is practical and effective.
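The median/deviation representation and the interval arithmetic rules the abstract refers to can be sketched as follows (a generic illustration, not the authors' code; the class name and the stiffness/displacement example values are ours):

```python
class Interval:
    """An interval [lo, hi], also viewable via median m=(lo+hi)/2 and deviation d=(hi-lo)/2."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    @classmethod
    def from_median_deviation(cls, m, d):
        return cls(m - d, m + d)

    @property
    def median(self):
        return (self.lo + self.hi) / 2.0

    @property
    def deviation(self):
        return (self.hi - self.lo) / 2.0

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # [a,b] * [c,d] = [min(products), max(products)]
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))


# Toy example: interval stiffness k = [9, 11] times interval displacement u = [1.9, 2.1]
k = Interval.from_median_deviation(10.0, 1.0)
u = Interval(1.9, 2.1)
f = k * u  # force bounds [9*1.9, 11*2.1] = [17.1, 23.1]
```

In an interval FEM assembly, every entry of the stiffness matrix and load vector would carry such bounds, which is why naive interval arithmetic tends to overestimate the true response ranges.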
Interval Finite Element Analysis of Wing Flutter
Institute of Scientific and Technical Information of China (English)
Wang Xiaojun; Qiu Zhiping
2008-01-01
The influences of uncertainties in structural parameters on the flutter speed of a wing are studied. On the basis of the deterministic flutter analysis model of the wing, the uncertainties in structural parameters are considered and described by interval numbers. By virtue of a first-order Taylor series expansion, the lower and upper bound curves of the transient decay rate coefficient versus wind velocity are given, so an interval estimate of the critical flutter wind speed of the wing can be obtained, which is more reasonable than the point estimate obtained by deterministic flutter analysis and provides the basis for further non-probabilistic interval reliability analysis of wing flutter. The flow chart of the interval finite element model for flutter analysis of a wing is given. The proposed interval finite element model and the stochastic finite element model for wing flutter analysis are compared using the examples of a three-degrees-of-freedom airfoil and fuselage and a 15° sweptback wing, and the results show the effectiveness and feasibility of the presented model. The prominent advantage of the proposed interval finite element model is that only the bounds of the uncertain parameters are required; probabilistic distribution densities or other statistical characteristics are not needed.
The Partial Averaging of Fuzzy Differential Inclusions on Finite Interval
Directory of Open Access Journals (Sweden)
Andrej V. Plotnikov
2014-01-01
The application of the partial averaging method on a finite interval to differential inclusions with a fuzzy right-hand side and a small parameter is substantiated.
Stability problem for singular Dirac equation system on finite interval
Ercan, Ahu; Panakhov, Etibar
2017-01-01
In this study, we consider the stability problem for the singular Dirac equation system with respect to two spectra on a finite interval. The stability problem for differential operators is to estimate the difference of the spectral functions of the considered problems when a finite number of their eigenvalues coincide. The method is based on work by Ryabushko [12], who studied to what extent only finitely many eigenvalues in one or both spectra determine the potential. We obtain a bound on the variation of the difference of the spectral functions for the singular Dirac equation system.
Better Confidence Intervals for Importance Sampling
HALIS SAK; WOLFGANG HÖRMANN; JOSEF LEYDOLD
2010-01-01
It is well known that for highly skewed distributions the standard method of using the t statistic for the confidence interval of the mean does not give robust results. This is an important problem for importance sampling (IS) as its final distribution is often skewed due to a heavy tailed weight distribution. In this paper, we first explain Hall's transformation and its variants to correct the confidence interval of the mean and then evaluate the performance of these methods for two numerica...
Fuzzy and interval finite element method for heat conduction problem
Majumdar, Sarangam; Chakraverty, S
2012-01-01
The traditional finite element method is a well-established method for solving various problems of science and engineering. Different authors have used various methods to solve the governing differential equation of the heat conduction problem. In this study, heat conduction in a circular rod made up of two different materials, aluminum and copper, has been considered. In earlier studies, parameters in the differential equation have been taken as fixed (crisp) numbers, which may not actually be the case. These parameters are found in general by measurements or experiments, so the material properties are actually uncertain and may be considered to vary in an interval or as fuzzy numbers; in that case complex interval arithmetic or fuzzy arithmetic has to be considered in the analysis. As such, the problem is discretized into a finite number of elements which depend on interval/fuzzy parameters. Representation of interval/fuzzy numbers may give a clear picture of the uncertainty. Hence interval/fuzzy arithmetic is applied in the fin...
Probabilistic sampling of finite renewal processes
Antunes, Nelson; 10.3150/10-BEJ321
2012-01-01
Consider a finite renewal process in the sense that interrenewal times are positive i.i.d. variables and the total number of renewals is a random variable, independent of interrenewal times. A finite point process can be obtained by probabilistic sampling of the finite renewal process, where each renewal is sampled with a fixed probability and independently of other renewals. The problem addressed in this work concerns statistical inference of the original distributions of the total number of renewals and interrenewal times from a sample of i.i.d. finite point processes obtained by sampling finite renewal processes. This problem is motivated by traffic measurements in the Internet in order to characterize flows of packets (which can be seen as finite renewal processes) and where the use of packet sampling is becoming prevalent due to increasing link speeds and limited storage and processing capacities.
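The sampling model described above — a finite renewal process whose renewals are each kept independently with a fixed probability — can be sketched as follows (our illustrative parameters; the count distribution and rate are arbitrary choices, not from the paper):

```python
import random


def finite_renewal_process(rng, max_renewals=10, rate=1.0):
    """Renewal epochs: random total count, i.i.d. exponential inter-renewal times."""
    n = rng.randint(1, max_renewals)  # total number of renewals (independent of spacings)
    t, epochs = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)    # positive i.i.d. inter-renewal times
        epochs.append(t)
    return epochs


def thin(rng, epochs, p):
    """Probabilistic sampling: keep each renewal independently with probability p."""
    return [t for t in epochs if rng.random() < p]


rng = random.Random(42)
p = 0.5
counts = [len(thin(rng, finite_renewal_process(rng), p)) for _ in range(20000)]
# Mean sampled count should be close to p * E[N] = 0.5 * 5.5 = 2.75
mean_sampled = sum(counts) / len(counts)
```

The inference problem in the paper runs in the opposite direction: recovering the distributions of N and the inter-renewal times from many such thinned realizations.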
Wang, Chong; Qiu, Zhi-Ping
2014-04-01
A new numerical technique named the interval finite difference method is proposed for steady-state temperature field prediction with uncertainties in both physical parameters and boundary conditions. Interval variables are used to quantitatively describe the uncertain parameters with limited information. Based on different Taylor and Neumann series, two kinds of parameter perturbation methods are presented to approximately yield the ranges of the uncertain temperature field. By comparing the results with traditional Monte Carlo simulation, a numerical example is given to demonstrate the feasibility and effectiveness of the proposed method for solving the steady-state heat conduction problem with uncertain-but-bounded parameters.
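The first-order Taylor-series flavor of parameter perturbation described above can be illustrated on a toy 1D steady-state conduction problem (a sketch under our own assumptions — the grid, loading, and interval conductivity are invented for illustration, and the sensitivity is taken by numerical differencing rather than analytically):

```python
def solve_rod(k, n=21, length=1.0, q=100.0):
    """FD solution of -k T'' = q with T=0 at both ends (Thomas algorithm)."""
    h = length / (n - 1)
    m = n - 2                       # number of interior nodes
    a = [-k / h**2] * m             # sub-diagonal
    b = [2 * k / h**2] * m          # diagonal
    c = [-k / h**2] * m             # super-diagonal
    d = [q] * m
    for i in range(1, m):           # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    T = [0.0] * m                   # back substitution
    T[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T


k0, dk = 200.0, 20.0                # interval conductivity [180, 220]
T0 = solve_rod(k0)                  # midpoint solution
eps = 1e-3 * k0                     # step for numerical sensitivity dT/dk
dTdk = [(hi - lo) / (2 * eps)
        for hi, lo in zip(solve_rod(k0 + eps), solve_rod(k0 - eps))]
# First-order interval bounds: T(k0) +/- |dT/dk| * dk at every node
T_lower = [t - abs(g) * dk for t, g in zip(T0, dTdk)]
T_upper = [t + abs(g) * dk for t, g in zip(T0, dTdk)]
mid = len(T0) // 2                  # rod midpoint (exact solution q*x*(L-x)/(2k))
```

For this problem the exact midpoint temperature is q/(8k) = 0.0625 at k = 200, and the first-order bounds bracket the exact interval solution closely because T depends smoothly (as 1/k) on the parameter.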
Sampling errors of quantile estimations from finite samples of data
Roy, Philippe; Gachon, Philippe
2016-01-01
Empirical relationships are derived for the expected sampling error of quantile estimations using Monte Carlo experiments for two frequency distributions frequently encountered in climate sciences. The relationships found are expressed as a scaling factor times the standard error of the mean; these give a quick tool to estimate the uncertainty of quantiles for a given finite sample size.
Sampling-interval-dependent stability for linear sampled-data systems with non-uniform sampling
Shao, Hanyong; Lam, James; Feng, Zhiguang
2016-09-01
This paper is concerned with the sampling-interval-dependent stability of linear sampled-data systems with non-uniform sampling. A new Lyapunov-like functional is constructed to derive sampling-interval-dependent stability results. The Lyapunov-like functional has three features. First, it depends on time explicitly. Second, it may be discontinuous at the sampling instants. Third, it is not required to be positive definite between sampling instants. Moreover, the new Lyapunov-like functional can make full use of the information of the sampled-data system, including that of both ends of the sampling interval. By making a new proposition for the Lyapunov-like functional, a sampling-interval-dependent stability criterion with reduced conservatism is derived. The new criterion is further extended to linear sampled-data systems with polytopic uncertainties. Finally, examples are given to illustrate the reduced conservatism of the stability criteria.
Thompson Sampling: An Optimal Finite Time Analysis
Kaufmann, Emilie; Munos, Rémi
2012-01-01
The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem had been open since 1933. In this paper we answer it positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound for the cumulative regret. The proof is accompanied by a numerical comparison with other optimal policies, experiments that have been lacking in the literature until now for the Bernoulli case.
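The Bernoulli-reward Thompson Sampling policy analyzed in this paper can be sketched with the standard Beta-Bernoulli implementation (a generic textbook version, not the authors' experimental code; the arm means and horizon are arbitrary):

```python
import random


def thompson_sampling(rng, true_means, horizon):
    """Beta-Bernoulli Thompson Sampling: sample each posterior, pull the best draw."""
    k = len(true_means)
    succ = [0] * k                  # observed successes per arm
    fail = [0] * k                  # observed failures per arm
    total_reward = 0
    for _ in range(horizon):
        # Draw an index from each arm's Beta(successes+1, failures+1) posterior
        draws = [rng.betavariate(succ[i] + 1, fail[i] + 1) for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        if reward:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return total_reward, succ, fail


rng = random.Random(0)
reward, succ, fail = thompson_sampling(rng, [0.2, 0.8], horizon=2000)
best_arm_pulls = succ[1] + fail[1]  # pulls of the optimal arm
```

The logarithmic regret bound matched in the paper shows up empirically as the suboptimal arm receiving only a few dozen of the 2000 pulls.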
Interval Analysis of the Finite Element Method for Stochastic Structures
Institute of Scientific and Technical Information of China (English)
刘长虹; 刘筱玲; 陈虬
2004-01-01
A random parameter can be transformed into an interval number in structural analysis using the concept of the confidence interval. Hence, analyses of uncertain structural systems can be carried out in traditional FEM software. In some cases, the number of solutions for stochastic structures is nearly the same as that for traditional structural problems. In addition, a new method to evaluate the failure probability of structures is presented for the needs of modern engineering design.
THE FINITE AUTOMATA OF EVENTUALLY PERIODIC UNIMODAL MAPS ON THE INTERVAL
Institute of Scientific and Technical Information of China (English)
谢惠民
1993-01-01
For unimodal maps on the interval we prove that, if the kneading sequences (KS) are eventually periodic, then their formal languages are regular ones. The finite automata for such languages are constructed. Comparing with the languages generated by periodic KS, it is shown that the languages here are not finite complement languages.
Interval finite element method and its application on anti-slide stability analysis
Institute of Scientific and Technical Information of China (English)
SHAO Guo-jian; SU Jing-bo
2007-01-01
The problem that interval correlation results in interval extension is discussed via the relationship between interval-valued functions and real-valued functions, and methods of reducing interval extension are given. Based on these ideas, the formulas of the element-based sub-interval perturbed finite element method are given. The number of sub-intervals is discussed and an approximate computation formula is given. At the same time, the computational precision is discussed and some measures for improving computational efficiency are given. Finally, based on the sub-interval perturbed finite element method and the anti-slide stability analysis method, the formula for computing the bounds of the stability factor is given. It provides a basis for reasonably estimating and evaluating the anti-slide stability of structures.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
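The three interval sampling methods compared in the simulation — momentary time sampling, partial-interval recording, and whole-interval recording — can be re-created in miniature as follows (our own toy behavior stream and parameters, not the authors' program):

```python
import random


def simulate(rng, n_steps=6000, interval_len=60, p_switch=0.02):
    """Score a two-state behavior stream with three interval sampling methods."""
    stream, state = [], False
    for _ in range(n_steps):
        if rng.random() < p_switch:   # behavior randomly switches on/off
            state = not state
        stream.append(state)
    true_prev = sum(stream) / len(stream)   # ground-truth prevalence (time fraction)
    mts = pir = wir = 0
    n_int = n_steps // interval_len
    for i in range(n_int):
        chunk = stream[i * interval_len:(i + 1) * interval_len]
        mts += chunk[-1]              # momentary time sampling: observe last instant only
        pir += any(chunk)             # partial-interval recording: any occurrence scores
        wir += all(chunk)             # whole-interval recording: must occupy entire interval
    return true_prev, mts / n_int, pir / n_int, wir / n_int


rng = random.Random(1)
true_prev, mts_est, pir_est, wir_est = simulate(rng)
```

This reproduces the well-known bias pattern the study quantifies: partial-interval recording overestimates prevalence, whole-interval recording underestimates it, and momentary time sampling is approximately unbiased.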
Sampling Theorem in Terms of the Bandwidth and Sampling Interval
Dean, Bruce H.
2011-01-01
An approach has been developed for interpolating non-uniformly sampled data, with applications in signal and image reconstruction. This innovation generalizes the Whittaker-Shannon sampling theorem by emphasizing two assumptions explicitly (definition of a band-limited function and construction by periodic extension). The Whittaker-Shannon sampling theorem is thus expressed in terms of two fundamental length scales that are derived from these assumptions. The result is more general than what is usually reported, and contains the Whittaker-Shannon form as a special case corresponding to Nyquist-sampled data. The approach also shows that the preferred basis set for interpolation is found by varying the frequency component of the basis functions in an optimal way.
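The Whittaker-Shannon special case mentioned above (band-limited data sampled above the Nyquist rate and reconstructed by sinc interpolation) can be sketched as follows (a minimal illustration with an arbitrary test signal):

```python
import math


def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)


def reconstruct(samples, T, t):
    """Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    return sum(s * sinc((t - n * T) / T) for n, s in enumerate(samples))


# Band-limited test signal: 2 Hz sine sampled at 10 Hz (Nyquist rate would be 4 Hz)
f, T = 2.0, 0.1
samples = [math.sin(2 * math.pi * f * n * T) for n in range(400)]

t = 20.03                              # off-grid time well inside the record
approx = reconstruct(samples, T, t)
exact = math.sin(2 * math.pi * f * t)  # truncation of the sinc sum leaves a small error
```

The small residual error here comes from truncating the infinite sinc series to a finite record, which is precisely the finite-interval issue the generalized treatment addresses.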
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...
INTERVAL FINITE VOLUME METHOD FOR UNCERTAINTY SIMULATION OF TWO-DIMENSIONAL RIVER WATER QUALITY
Institute of Scientific and Technical Information of China (English)
HE Li; ZENG Guang-ming; HUANG Guo-he; LU Hong-wei
2004-01-01
Under interval uncertainties, by incorporating the discretization form of the finite volume method and interval algebra theory, an Interval Finite Volume Method (IFVM) was developed to solve water quality simulation issues for a two-dimensional river when effective data on flow velocity and flow quantity are lacking. The IFVM was applied to a segment of the Xiangjiang River because the Hunan Inland Waterway Multipurpose Project could not start until its environmental impact assessment was completed. The simulation results suggest that there exist rather apparent pollution zones of BOD5 downstream of the Dongqiaogang discharger and of COD downstream of the Xiaoxiangjie discharger, but the pollution sources have no impact on the safety of the three water plants located in this river segment. Although the developed IFVM remains to be perfected, it is still a powerful tool under interval uncertainties for water environmental impact assessment, risk analysis, and water quality planning, besides the water quality simulation studied in this paper.
Relativistic rise measurements with very fine sampling intervals
Energy Technology Data Exchange (ETDEWEB)
Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.
1980-01-01
The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements.
Truncated matricial moment problems on a finite interval: the operator approach
Zagorodnyuk, Sergey M
2009-01-01
In this paper we obtain a description of all solutions of truncated matricial moment problems on a finite interval in a general case (no conditions besides solvability are assumed). We use the basic results of M.G. Krein and I.E. Ovcharenko about generalized sc-resolvents of Hermitian contractions.
A Genealogy for Finite Kneading Sequences of Bimodal Maps on the Interval
Ringland, John; Tresser, Charles
1993-01-01
We generate all the finite kneading sequences of one of the two kinds of bimodal map on the interval, building each sequence uniquely from a pair of shorter ones. There is a single pair at generation 0, with members of length 1. Concomitant with this genealogy of kneading sequences is a unified genealogy of all the periodic orbits. (6/93)
Numerical modeling of skin tissue heating using the interval finite difference method.
Mochnacki, B; Belkhayat, Alicja Piasecka
2013-09-01
Numerical analysis of heat transfer processes in a nonhomogeneous biological tissue domain is presented. In particular, a skin tissue domain subjected to an external heat source is considered. The problem is treated as an axially symmetric one (this results from the mathematical form of the function describing the external heat source). Thermophysical parameters of the sub-domains (volumetric specific heat, thermal conductivity, perfusion coefficient, etc.) are given as interval numbers. The problem is solved using the interval finite difference method based on the rules of directed interval arithmetic; this means that at the stage of FDM algorithm construction the mathematical manipulations are carried out on interval numbers. In the final part of the paper the results of numerical computations are shown; in particular, the problem of the admissible thermal dose is analyzed.
Binomial Distribution Sample Confidence Intervals Estimation 6. Excess Risk
Directory of Open Access Journals (Sweden)
Sorana BOLBOACĂ
2004-02-01
We present the problem of confidence interval estimation for the excess risk (Y/n - X/m) fraction, a parameter which allows evaluation of the specificity of an association between predisposing or causal factors and disease in medical studies. The parameter is computed from a 2x2 contingency table and qualitative variables. The aim of this paper is to introduce new methods of computing confidence intervals for excess risk, called DAC, DAs, DAsC, DBinomial, and DBinomialC, and to compare their performance with the asymptotic method, called here DWald. In order to assess the methods, the PHP programming language was used and a PHP program was created. The performance of each method for different sample sizes and different values of the binomial variables was assessed using a set of criteria. First, the upper and lower boundaries for given X, Y and a specified sample size were computed for the chosen methods. Second, the average and standard deviation of the experimental errors were assessed, as well as the deviation relative to the imposed significance level α = 5%. The methods were assessed on random numbers for the binomial variables and for sample sizes from 4 to 1000. The experiments show that the DAC method performs well in confidence interval estimation for excess risk.
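For reference, the asymptotic baseline the paper calls DWald is, on the usual reading, the textbook Wald interval for a difference of two proportions; a minimal sketch (our interpretation, with invented counts) is:

```python
import math
from statistics import NormalDist


def wald_excess_risk_ci(y, n, x, m, alpha=0.05):
    """Asymptotic (Wald) confidence interval for the excess risk Y/n - X/m."""
    p1, p2 = y / n, x / m
    z = NormalDist().inv_cdf(1 - alpha / 2)           # standard normal quantile
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / m)
    d = p1 - p2
    return d - z * se, d + z * se


# Hypothetical 2x2 table counts: 30/100 exposed cases vs 15/100 unexposed cases
lo, hi = wald_excess_risk_ci(30, 100, 15, 100)
# point estimate 0.15, interval roughly (0.036, 0.264)
```

The paper's experiments measure how far such asymptotic intervals drift from the nominal 5% error level for small samples, which motivates the alternative constructions.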
Institute of Scientific and Technical Information of China (English)
Ravi P. AGARWAL; Donal O'REGAN; Svatoslav STANĚK
2006-01-01
A new upper and lower solution theory is presented for the second order problem (G'(y))' + f(t,y) = 0 on finite and infinite intervals. The theory on finite intervals is based on a Leray-Schauder alternative, whereas the theory on infinite intervals is based on results on the finite interval and a diagonalization process.
Effective Stiffness: Generalizing Effective Resistance Sampling to Finite Element Matrices
Avron, Haim
2011-01-01
We define the notion of effective stiffness and show that it can be used to build sparsifiers, algorithms that sparsify linear systems arising from finite-element discretizations of PDEs. In particular, we show that sampling $O(n\log n)$ elements according to probabilities derived from effective stiffnesses yields a high-quality preconditioner that can be used to solve the linear system in a small number of iterations. Effective stiffness generalizes the notion of effective resistance, a key ingredient of recent progress in developing nearly linear symmetric diagonally dominant (SDD) linear solvers. Solving finite element problems is of considerably more interest than the solution of SDD linear systems, since the finite element method is frequently used to numerically solve PDEs arising in scientific and engineering applications. Unlike SDD systems, which are relatively easy to precondition, there has been limited success in designing fast solvers for finite element systems, and previous algorithms usually tar...
Encounter distribution of two random walkers on a finite one-dimensional interval
Energy Technology Data Exchange (ETDEWEB)
Tejedor, Vincent; Schad, Michaela; Metzler, Ralf [Physics Department, Technical University of Munich, James Franck Strasse, 85747 Garching (Germany); Benichou, Olivier; Voituriez, Raphael, E-mail: metz@ph.tum.de [Laboratoire de Physique Theorique de la Matiere Condensee (UMR 7600), Universite Pierre et Marie Curie, 4 Place Jussieu, 75255 Paris Cedex (France)
2011-09-30
We analyse the first-passage properties of two random walkers confined to a finite one-dimensional domain. For the case of absorbing boundaries at the endpoints of the interval, we derive the probability that the two particles meet before either one of them becomes absorbed at one of the boundaries. For the case of reflecting boundaries, we obtain the mean first encounter time of the two particles. Our approach leads to closed-form expressions that are more easily tractable than a previously derived solution in terms of the Weierstrass' elliptic function. (paper)
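A quick Monte Carlo check of the qualitative setup — two walkers on a finite interval with absorbing endpoints — can be sketched as follows (a discrete-lattice analogue with our own parameters, not the authors' closed-form continuum calculation):

```python
import random


def meet_before_absorption(rng, L, a, b):
    """Walkers at sites a, b on 0..L take alternating +/-1 steps.

    Returns True if they occupy the same site before either reaches
    an absorbing endpoint (site 0 or site L).
    """
    while True:
        a += rng.choice((-1, 1))
        if a == b:
            return True
        if a == 0 or a == L:
            return False
        b += rng.choice((-1, 1))
        if b == a:
            return True
        if b == 0 or b == L:
            return False


def estimate(rng, L, a, b, trials=10000):
    return sum(meet_before_absorption(rng, L, a, b) for _ in range(trials)) / trials


rng = random.Random(7)
p_near = estimate(rng, L=10, a=4, b=6)  # walkers start close to each other
p_far = estimate(rng, L=10, a=2, b=8)   # walkers start near the absorbing ends
```

As expected from the first-passage analysis, the meeting probability is markedly higher when the walkers start close together and far from the absorbing boundaries.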
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems; thus an uncertain ensemble combining non-parametric with mixed fuzzy and interval parametric uncertainties comes into being. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by the first-order Taylor series, while SFIPFE/SEA improves the accuracy by considering the second-order items of the Taylor series, in which all the mixed second-order items are neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
Sample Size for the "Z" Test and Its Confidence Interval
Liu, Xiaofeng Steven
2012-01-01
The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
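The relation stated in the abstract can be made concrete: at the sample size that gives power 1 − β against effect δ, the CI length collapses to 2·z_{1−α/2}·δ/(z_{1−α/2} + z_{1−β}), independent of n. A sketch of this standard identity (our own worked example, with σ = 1):

```python
import math
from statistics import NormalDist


def ci_length_from_power(alpha, power, delta, sigma=1.0):
    """Required n for a two-sided Z test, and the CI length at that n."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value of the test
    z_b = NormalDist().inv_cdf(power)           # quantile for the target power
    n = ((z_a + z_b) * sigma / delta) ** 2      # sample size achieving that power
    half = z_a * sigma / math.sqrt(n)           # CI half-length at that n
    return n, 2 * half


# alpha = 0.05, power = 0.80, effect size delta = 0.5 standard deviations
n, length = ci_length_from_power(alpha=0.05, power=0.80, delta=0.5)
# n is about 31.4; length equals 2*z_a*delta/(z_a + z_b), about 0.70
```

Substituting n back shows the length depends only on α, the power, and δ, which is the power/precision correspondence the article exploits.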
The Analysis of Curved Beam Using B-Spline Wavelet on Interval Finite Element Method
Directory of Open Access Journals (Sweden)
Zhibo Yang
2014-01-01
A B-spline wavelet on interval (BSWI) finite element is developed for curved beams, and the static and free vibration behaviors of curved beams (arches) are investigated in this paper. Instead of the traditional polynomial interpolation, scaling functions at a certain scale have been adopted to form the shape functions and construct wavelet-based elements. Different from the direct wavelet addition used in other wavelet numerical methods, the element displacement field represented by the coefficients of the wavelet expansions is transformed from wavelet space to physical space with the aid of the corresponding transformation matrix. Furthermore, compared with the commonly used Daubechies wavelet, BSWI has explicit expressions and excellent approximation properties, which guarantee satisfactory results. Numerical examples are performed to demonstrate the accuracy and efficiency with respect to previously published formulations for curved beams.
Directory of Open Access Journals (Sweden)
Tudor DRUGAN
2003-08-01
The aim of the paper was to present the usefulness of the binomial distribution in studying contingency tables, and the problems of the approximation of the binomial distribution to normality (its limits, advantages, and disadvantages). The classification of the key medical parameters reported in the medical literature and their expression in terms of contingency table units, based on their mathematical expressions, restricts the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method, such as confidence interval boundaries, percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level, was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation and the resulting surface maps are quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
Fourier spectrum and phases for a signal in a finite interval
Belmont, Gérard; Dorville, Nicolas; Sahraoui, Fouad; Rezeau, Laurence
2015-04-01
When investigating the physics of turbulent media, such as the solar wind or magnetosheath plasmas, obtaining accurate Fourier spectra and phases is a crucial issue. For the different fields, the spectra allow one in particular to verify whether one or several power laws can be determined in different frequency ranges. Accurate phases are necessary as well for all "higher order statistics" studies in Fourier space, for coherence studies, and for polarization studies. Unfortunately, the Fourier analysis is not unique for a finite time interval of duration T: the frequencies lower than 1/T have a large influence on the result, which can hardly be controlled. This unknown "trend" has in particular the effect of introducing jumps at the edges of the interval, for the function under study itself as well as for all its derivatives. The Fourier transform obtained directly by FFT (Fast Fourier Transform) is generally much influenced by these effects and cannot be used without care for wide-band signals. The interference between the jumps and the signal itself produces, in particular, characteristic "hairs" on the spectrum, clearly visible with df ≈ 1/T. These fluctuations are usually eliminated by smoothing the spectrum or by averaging several successive spectra. Nevertheless, such treatments introduce uncertainties in the spectral laws (the phases being in any case completely lost). Windowing is also a method currently used to suppress or decrease the jumps, but it modifies the signal (the windowed trend has a spectrum which is convolved with the one sought) and the phases are generally much altered. Here, we present a new data processing technique to circumvent these difficulties. It takes advantage of the fact that the signal is generally not unknown outside the interval under study: the complete signal is tapered to this interval of interest thanks to a new window function, sharp but not square. This window function is chosen such that the spectrum obtained
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for a uniform sampling interval, optimal in a Fisherian sense, is given. The solution is investigated by a simulation study. It is shown that if the experimental length T1 is fixed it may be useful to sample the record at a high sampling rate, since more measurements from the system are then collected; no optimal sampling interval exists. But if the total number of sample points N is fixed an optimal sampling interval exists. Then it is far worse to use a too large sampling interval than a too small one, since the information losses increase rapidly when the sampling interval increases from the optimal value.
Directory of Open Access Journals (Sweden)
Baoqiang Yan
2006-07-01
Full Text Available In this paper, Krasnoselskii's theorem and the fixed point theorem of cone expansion and compression are improved. Using the results obtained, we establish the existence of multiple positive solutions for singular second-order boundary-value problems with derivative dependence on finite and infinite intervals.
Stability on Finite Time Interval and Time-Dependent Bifurcation Analysis of Duffing's Equations
Institute of Scientific and Technical Information of China (English)
Cuncai HUA; Qishao LU
1999-01-01
The concept of stability on a finite time interval is proposed and some stability theorems are established. The delayed bifurcation transition of Duffing's equations with a time-dependent parameter is analyzed. A function is used to predict the bifurcation transition value. The sensitivity of the solutions to initial values and parameters is also studied.
Dujardin, G. M.
2009-08-12
This paper deals with the asymptotic behaviour of the solutions of linear initial boundary value problems with constant coefficients on the half-line and on finite intervals. We assume that the boundary data are periodic in time and we investigate whether the solution becomes time-periodic after sufficiently long time. Using Fokas' transformation method, we show that, for the linear Schrödinger equation, the linear heat equation and the linearized KdV equation on the half-line, the solutions indeed become periodic for large time. However, for the same linear Schrödinger equation on a finite interval, we show that the solution, in general, is not asymptotically periodic; actually, the asymptotic behaviour of the solution depends on the commensurability of the time period T of the boundary data with the square of the length of the interval. © 2009 The Royal Society.
Directory of Open Access Journals (Sweden)
Matías Ernesto Barber
2016-06-01
Full Text Available The spatial sampling interval, as related to the ability to digitize a soil profile with a certain number of features per unit length, depends on the profiling technique itself. From a variety of profiling techniques, roughness parameters are estimated at different sampling intervals. Since soil profiles have continuous spectral components, it is clear that roughness parameters are influenced by the sampling interval of the measurement device employed. In this work, we contribute to answering at which sampling interval profiles need to be measured to accurately account for the microwave response of agricultural surfaces. For this purpose, a 2-D laser profiler was built and used to measure surface soil roughness at field scale over agricultural sites in Argentina. Sampling intervals ranged from large (50 mm) to small ones (1 mm), with several intermediate values. Large- and intermediate-sampling-interval profiles were synthetically derived from the nominal 1 mm ones. With these data, the effect of sampling-interval-dependent roughness parameters on the backscatter response was assessed using the theoretical backscatter model IEM2M. Simulations demonstrated that variations of roughness parameters depended on the working wavelength and were less important at L-band than at C- or X-band. In any case, an underestimation of the backscattering coefficient of about 1-4 dB was observed at larger sampling intervals. As a general rule, a sampling interval of 15 mm can be recommended for L-band and 5 mm for C-band.
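The effect of digitizing a profile at coarser sampling intervals can be reproduced on a toy profile. In this sketch (synthetic Gaussian-correlated heights with assumed parameters, not the Argentine field data), the RMS height is fairly stable under subsampling while the estimated correlation length coarsens:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_profile(n, dx=1.0, corr_mm=60.0, sigma_mm=8.0):
    # Gaussian-correlated heights: filter white noise with a Gaussian kernel.
    lags = np.arange(-3 * corr_mm, 3 * corr_mm + dx, dx)
    kern = np.exp(-(lags / corr_mm) ** 2)
    z = np.convolve(rng.standard_normal(n), kern, mode="same")
    return sigma_mm * z / z.std()

def rms_height(z):
    return float(np.sqrt(np.mean((z - z.mean()) ** 2)))

def corr_length(z, dx):
    # lag at which the empirical autocorrelation first drops below 1/e
    z0 = z - z.mean()
    ac = np.correlate(z0, z0, mode="full")[len(z0) - 1:]
    ac = ac / ac[0]
    return float(np.argmax(ac < np.exp(-1)) * dx)

z1 = correlated_profile(20000)                  # nominal 1 mm profile
results = {}
for step in (1, 5, 15, 50):                     # sampling interval in mm
    zs = z1[::step]
    results[step] = (rms_height(zs), corr_length(zs, float(step)))
for step, (s, l) in results.items():
    print(f"{step:2d} mm: s = {s:.2f} mm, l = {l:.0f} mm")
```

The coarsest interval can only resolve the correlation length to its own grid, which is one mechanism behind the sampling-interval dependence discussed above.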
Compressive Sampling of EEG Signals with Finite Rate of Innovation
Directory of Open Access Journals (Sweden)
Poh Kok-Kiong
2010-01-01
Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI) which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording were tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
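The core of FRI sampling, recovering Dirac locations and amplitudes from a few Fourier coefficients via an annihilating filter, can be sketched as follows (a generic textbook construction, not the paper's EEG-specific signal model):

```python
import numpy as np

K = 3                                   # rate of innovation: K Diracs per period
t_true = np.array([0.12, 0.47, 0.80])   # Dirac locations in [0, 1)
a_true = np.array([1.0, -0.6, 0.3])     # amplitudes

# Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k), m = 0..2K
M = 2 * K
m = np.arange(M + 1)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# Annihilating filter h (length K+1): sum_i h[i] X[m-i] = 0 for m = K..M.
# Build the Toeplitz system and take its nullspace via SVD.
A = np.array([[X[i - j] for j in range(K + 1)] for i in range(K, M + 1)])
_, _, Vh = np.linalg.svd(A)
h = Vh[-1].conj()

# Roots of h give u_k = exp(-2j*pi*t_k); amplitudes by a Vandermonde solve.
u = np.roots(h)
t_hat = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))
V = np.exp(-2j * np.pi * np.outer(m, t_hat))
a_hat = np.linalg.lstsq(V, X, rcond=None)[0].real
print(t_hat, a_hat)
```

Only 2K+1 coefficients are needed for K innovations per interval, which is the sub-Nyquist budget the abstract refers to.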
Finite sampling inequalities: an application to two-sample Kolmogorov-Smirnov statistics.
Greene, Evan; Wellner, Jon A
2016-12-01
We review a finite-sampling exponential bound due to Serfling and discuss related exponential bounds for the hypergeometric distribution. We then discuss how such bounds motivate some new results for two-sample empirical processes. Our development complements recent results by Wei and Dudley (2012) concerning exponential bounds for two-sided Kolmogorov-Smirnov statistics by giving corresponding results for one-sided statistics, with emphasis on "adjusted" inequalities of the type proved originally by Dvoretzky et al. (1956) and by Massart (1990) for one-sample versions of these statistics.
Institute of Scientific and Technical Information of China (English)
Benedetto BONGIORNO; Luisa DI PIAZZA; Kazimierz MUSIAL
2012-01-01
Let X be a Banach space with a Schauder basis {e_n}, and let Φ(I) = Σ_{n=1}^∞ e_n ∫_I f_n(t) dt be a finitely additive interval measure on the unit interval [0,1], where the integrals are taken in the sense of Henstock-Kurzweil. Necessary and sufficient conditions are given for Φ to be the indefinite integral of a Henstock-Kurzweil-Pettis (or Henstock, or variational Henstock) integrable function f: [0,1] → X.
Robust L2-L∞ Filtering of Time-Delay Jump Systems with Respect to the Finite-Time Interval
Directory of Open Access Journals (Sweden)
Shuping He
2011-01-01
Full Text Available This paper studies the problem of stochastic finite-time boundedness and disturbance attenuation for a class of linear time-delayed systems with Markov jumping parameters. Sufficient conditions are provided to solve this problem. The L2-L∞ filters are, respectively, designed for time-delayed Markov jump linear systems with/without uncertain parameters such that the resulting filtering error dynamic system is stochastically finite-time bounded and has the finite-time interval disturbance attenuation γ for all admissible uncertainties, time delays, and unknown disturbances. By using a stochastic Lyapunov-Krasovskii functional approach, it is shown that the filter design problem can be solved in terms of the solutions of a set of coupled linear matrix inequalities. Simulation examples are included to demonstrate the potential of the proposed results.
Energy Technology Data Exchange (ETDEWEB)
Yagasaki, Kazuyuki [Department of Mechanical and Systems Engineering, Gifu University, Gifu 501-1193 (Japan)], E-mail: yagasaki@gifu-u.ac.jp
2007-08-20
In experiments for single and coupled pendula, we demonstrate the effectiveness of a new control method based on dynamical systems theory for stabilizing unstable aperiodic trajectories defined on infinite- or finite-time intervals. The basic idea of the method is similar to that of the OGY method, which is a well-known chaos control method. Extended concepts of the stable and unstable manifolds of hyperbolic trajectories are used here.
AN EMPIRICAL ANALYSIS OF SAMPLING INTERVAL FOR EXCHANGE RATE FORECASTING WITH NEURAL NETWORKS
Institute of Scientific and Technical Information of China (English)
HUANG Wei; K. K. Lai; Y. Nakamori; WANG Shouyang
2003-01-01
Artificial neural networks (ANNs) have been widely used as a promising alternative approach for forecasting tasks because of several distinguishing features. In this paper, we investigate the effect of different sampling intervals on the predictive performance of ANNs in forecasting exchange rate time series. It is shown that selection of an appropriate sampling interval permits the neural network to model the financial time series adequately. Too short or too long a sampling interval does not provide good forecasting accuracy. In addition, we discuss the effect of forecasting horizons and input nodes on the prediction performance of neural networks.
Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration
Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.
2009-01-01
This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change with sampling intervals from 30 m to 600 m, for every 30 m increase, for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level off at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
Monotonicity in the Sample Size of the Length of Classical Confidence Intervals
Kagan, Abram M
2012-01-01
It is proved that the average length of standard confidence intervals for parameters of gamma and normal distributions monotonically decreases with the sample size. The proofs are based on fine properties of the classical gamma function.
The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.
Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J
2017-09-01
There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak. Whether differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg⁻¹·min⁻¹) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg⁻¹·min⁻¹) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak for each protocol and subject sample. Differences in V˙O2peak between sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V˙O2peak.
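Why moving averages report higher peaks than block averages of the same nominal length can be seen with simulated breath-by-breath data (illustrative numbers only; the protocols and values above are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# ~3 min of simulated breath-by-breath VO2 near peak exercise: a plateau at
# 55 mL/kg/min plus breath-to-breath noise, one breath every 2 s (assumed).
n_breaths = 90
vo2 = 55.0 + rng.normal(0.0, 3.0, n_breaths)
t_end = np.cumsum(np.full(n_breaths, 2.0))      # end time of each breath (s)

def block_average_peak(values, t_end, window_s):
    # highest mean over consecutive, non-overlapping time blocks
    edges = np.arange(0.0, t_end[-1] + window_s, window_s)
    means = [values[(t_end > lo) & (t_end <= hi)].mean()
             for lo, hi in zip(edges[:-1], edges[1:])
             if ((t_end > lo) & (t_end <= hi)).any()]
    return float(max(means))

def moving_average_peak(values, n):
    # highest n-breath moving average (every window position considered)
    return float(np.convolve(values, np.ones(n) / n, mode="valid").max())

peak_30s = block_average_peak(vo2, t_end, 30.0)
peak_15b = moving_average_peak(vo2, 15)
print(f"30-s blocks: {peak_30s:.2f}; 15-breath moving: {peak_15b:.2f}")
```

Because every aligned block is also one of the moving-average windows, the moving-average peak can never be lower, which is consistent with the 15-breath moving average yielding the highest V˙O2peak above.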
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
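The accuracy-in-parameter-estimation logic, searching for the smallest n whose expected confidence interval width falls below a target W, can be sketched with a simpler coefficient than composite reliability. The sketch below uses the Fisher-z interval for a correlation as a stand-in (the authors' methods target reliability coefficients and also offer an assurance variant, neither of which is reproduced here):

```python
import math

def ci_width(r, n, z_crit=1.96):
    # approximate width of the 95% Fisher-z CI for a correlation r at size n
    z = math.atanh(r)
    h = z_crit / math.sqrt(n - 3)
    return math.tanh(z + h) - math.tanh(z - h)

def plan_n(r, W, n_max=100000):
    # smallest n whose (approximate) expected CI width is at most W
    n = 5
    while ci_width(r, n) > W and n < n_max:
        n += 1
    return n

n_needed = plan_n(r=0.8, W=0.10)
print(n_needed, round(ci_width(0.8, n_needed), 4))
```

The same search structure applies whatever the coefficient, once its interval width can be predicted as a function of n.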
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
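The resampling-simulation idea, subsampling a census at increasing intervals and comparing the estimates, is easy to reproduce on synthetic data. In this hedged sketch (random impact segments, not the Great Smoky Mountains census), lineal extent survives coarse point sampling much better than frequency of occurrence:

```python
import numpy as np

rng = np.random.default_rng(2)

# Census of one impact type along a 7 km trail at 1 m resolution:
# ~120 short degraded segments marked on a boolean occurrence map.
trail = np.zeros(7000, dtype=bool)
for s in rng.choice(7000 - 30, size=120, replace=False):
    trail[s:s + rng.integers(2, 30)] = True

census_extent = float(trail.mean())             # fraction of trail length
census_freq = int(np.count_nonzero(np.diff(trail.astype(int)) == 1))

est = {}
for interval in (20, 100, 500):                 # point-sampling interval (m)
    pts = trail[::interval]
    # extent from the fraction of sample points hitting an impact;
    # "frequency" from the raw count of hits (all point sampling can see)
    est[interval] = (float(pts.mean()), int(pts.sum()))
for interval, (extent_hat, hits) in est.items():
    print(f"{interval:3d} m: extent {extent_hat:.3f} "
          f"(census {census_extent:.3f}), hits {hits} vs {census_freq} segments")
```

Short segments simply fall between coarse sample points, so occurrence counts degrade much faster than the extent fraction, mirroring the paper's finding.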
Exponentially slow traveling waves on a finite interval for Burgers' type equation
Directory of Open Access Journals (Sweden)
Pieter De Groen
1998-11-01
Full Text Available In this paper we study for small positive $\epsilon$ the slow motion of the solution for evolution equations of Burgers' type with small diffusion, $$u_t=\epsilon u_{xx}+f(u,u_x), \quad u(x,0)=u_0(x), \quad u(\pm 1,t)=\pm 1, \qquad (\star)$$ on the bounded spatial domain $[-1,1]$; $f$ is a smooth function satisfying $f(1)>0$, $f(-1)<0$ and $\int_{-1}^{1}f(t)\,dt=0$. The initial and boundary value problem $(\star)$ has a unique asymptotically stable equilibrium solution that attracts all solutions starting with continuous initial data $u_0$. On the infinite spatial domain $\mathbb{R}$ the differential equation has slow speed traveling wave solutions generated by profiles that satisfy the boundary conditions of $(\star)$. As long as its zero stays inside the interval $[-1,1]$, such a traveling wave suitably describes the slow long-term behaviour of the solution of $(\star)$ and its speed characterizes the local velocity of the slow motion with exponential precision. A solution that starts near a traveling wave moves in a small neighborhood of the traveling wave with exponentially slow velocity (measured as the speed of the unique zero) during an exponentially long time interval $(0,T)$. In this paper we give a unified treatment of the problem, using both Hilbert space and maximum principle methods, and we give rigorous proofs of convergence of the solution and of the asymptotic estimate of the velocity.
Uniqueness of the potential function for the vectorial Sturm-Liouville equation on a finite interval
Directory of Open Access Journals (Sweden)
Chang Tsorng-Hwa
2011-01-01
Full Text Available In this paper, the vectorial Sturm-Liouville operator $L_Q = -\frac{d^2}{dx^2} + Q(x)$ is considered, where $Q(x)$ is an integrable $m \times m$ matrix-valued function defined on the interval $[0,\pi]$. The authors prove that $m^2+1$ characteristic functions can determine the potential function of a vectorial Sturm-Liouville operator uniquely. In particular, if $Q(x)$ is real symmetric, then $\frac{m(m+1)}{2}+1$ characteristic functions can determine the potential function uniquely. Moreover, if only the spectral data of self-adjoint problems are considered, then $m^2+1$ spectral data can determine $Q(x)$ uniquely.
Directory of Open Access Journals (Sweden)
Aristides T Hatjimihail
Full Text Available BACKGROUND: An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system and the estimation of the risk caused by the measurement error. METHODOLOGY/PRINCIPAL FINDINGS: Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. Optimal sampling time intervals can be defined as those minimizing a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. CONCLUSIONS/SIGNIFICANCE: It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. The optimization requires the reliability analysis of the analytical system and the risk analysis of the measurement error.
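A drastically simplified version of the residual risk calculation can convey the trade-off. The sketch below assumes a single critical-failure mode with an exponential failure time and imperfect detection at each QC event; the paper's algorithm handles arbitrary numbers of modes and arbitrary densities, which this toy model does not:

```python
import numpy as np

# Assumed toy parameters: one critical failure every 200 h on average,
# and each QC event catches an existing failure with probability 0.9.
lam = 1.0 / 200.0
p_detect = 0.9

def residual_risk(dt, n_cycles=10000):
    """Expected fraction of reporting time spent in an undetected failure
    when QC is run every dt hours (Monte Carlo over failure cycles)."""
    rng = np.random.default_rng(3)
    exposed = 0.0
    t_fail = rng.exponential(1.0 / lam, n_cycles)
    for tf in t_fail:
        # time from the failure to the end of the QC interval it occurred in
        t = dt - (tf % dt)
        # geometric number of extra intervals until the failure is detected
        while rng.random() > p_detect:
            t += dt
        exposed += t
    return exposed / (t_fail.sum() + exposed)

for dt in (2.0, 8.0, 24.0):
    print(f"QC every {dt:4.1f} h -> residual risk ~ {residual_risk(dt):.4f}")
```

Longer QC intervals cost less but leave failures undetected longer, so the residual risk grows with dt; the optimization in the abstract picks the cheapest dt whose rr stays acceptable.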
Institute of Scientific and Technical Information of China (English)
Wu Fuxian; Wen Weidong
2016-01-01
Classic maximum entropy quantile function method (CMEQFM) based on the probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable on small samples, but inaccurately on very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and that with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. From comparisons of these methods on two common probability distributions and one engineering application, it is shown that CMEQFM can estimate the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy on the confidence interval of the quantile function, and that based on LSMEQFMCC is the most stable and accurate method on very small samples (10 samples).
Directory of Open Access Journals (Sweden)
Liaqat Ali
2016-09-01
Full Text Available In this research work, a new version of the Optimal Homotopy Asymptotic Method is applied to solve nonlinear boundary value problems (BVPs) in finite and infinite intervals. It comprises an initial guess, auxiliary functions (containing unknown convergence-controlling parameters) and a homotopy. The method is applied to solve nonlinear Riccati equations and a nonlinear BVP of order two for the thin film flow of a third grade fluid on a moving belt. It is also used to solve a nonlinear BVP of order three obtained by Mostafa et al. for the hydromagnetic boundary layer and micropolar fluid flow over a stretching surface embedded in a non-Darcian porous medium with radiation. The obtained results are compared with the existing results of Runge-Kutta (RK-4) and the Optimal Homotopy Asymptotic Method (OHAM-1). The outcomes achieved by this method are in excellent agreement with the exact solution, and hence it is proved that this method is easy and effective.
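As a point of reference for the comparisons mentioned above, an RK-4 solver of the kind OHAM results are checked against takes only a few lines. The Riccati equation below is an illustrative choice with a known closed form (y' = 1 - y², y(0) = 0, so y = tanh x), not one of the paper's test problems:

```python
import math

def rk4(f, y0, x_end, n):
    # classical fourth-order Runge-Kutta over n uniform steps
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: 1.0 - y * y        # Riccati equation with solution tanh(x)
approx = rk4(f, 0.0, 2.0, 200)
print(approx, math.tanh(2.0))
```

With 200 steps the RK-4 value agrees with tanh(2) to well below 1e-6, which is the kind of baseline accuracy the OHAM solutions are compared to.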
Humphreys, Michael Keith; Panacek, Edward; Green, William; Albers, Elizabeth
2013-03-01
The use of histology as a tool for estimating postmortem intervals has rarely been explored, but it has the potential for offering medical examiners an additional means of estimating the postmortem submersion interval (PMSI) during a death investigation. This study used perinatal piglets as human analogs, which were submerged in freshwater for various time intervals. Each piglet was extracted from the water and underwent a necropsy examination during which histological samples were collected. The samples revealed that the necrotic tissue decomposed relatively predictably over time and that this decompositional progression may have the potential to be used, via a scoring system, to determine or aid in determining the PMSI. This method for calculating the PMSI allows for normalization between piglets of various mass and body types. It also prevents any contamination of the remains via algae growth and animal activity that may exacerbate and possibly exaggerate the PMSI calculation.
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
A design-based approximation to the Bayes Information Criterion in finite population sampling
Directory of Open Access Journals (Sweden)
Enrico Fabrizi
2014-05-01
Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
Directory of Open Access Journals (Sweden)
Doo Yong Choi
2016-04-01
Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA systems and the establishment of district meter areas (DMAs. Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
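The self-adjusting idea, shorten the sampling interval when the normalized residual of the filtered flow grows and relax it when the flow is quiet, can be sketched with a scalar Kalman filter. The random-walk flow model and all parameter values below are assumptions for illustration, not the Jeongeup DMA data or the paper's exact filter:

```python
import numpy as np

rng = np.random.default_rng(4)

q, r = 0.5, 4.0                         # process / measurement noise variances
x_hat, p = 100.0, 10.0                  # flow estimate (L/s) and its variance
dt, dt_min, dt_max = 60.0, 5.0, 300.0   # sampling interval and its bounds (s)

def true_flow(t):
    # 100 L/s baseline; a burst adds 15 L/s after t = 3600 s (assumed scenario)
    return 100.0 + (15.0 if t > 3600.0 else 0.0)

t, log = 0.0, []
while t < 7200.0:
    z = true_flow(t) + rng.normal(0.0, np.sqrt(r))
    p_pred = p + q * (dt / 60.0)        # process noise accumulates with dt
    s = p_pred + r
    resid = (z - x_hat) / np.sqrt(s)    # normalized innovation
    k = p_pred / s
    x_hat += k * (z - x_hat)
    p = (1.0 - k) * p_pred
    # adapt: large residual -> sample twice as fast; quiet -> back off by 20%
    dt = float(np.clip(dt / 2.0 if abs(resid) > 2.0 else dt * 1.2,
                       dt_min, dt_max))
    log.append((t, dt))
    t += dt

before = [d for tt, d in log if 2400.0 < tt <= 3600.0]
after = [d for tt, d in log if 3600.0 < tt <= 4800.0]
print(f"mean interval before burst {np.mean(before):.0f} s, "
      f"after {np.mean(after):.0f} s; final estimate {x_hat:.1f} L/s")
```

The filter coasts at the maximum interval while the flow is quiet and automatically tightens sampling around the burst, which is the efficiency/accuracy balance the abstract describes.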
Bolboacă, Sorana; Jäntschi, Lorentz
2005-01-01
Likelihood ratios, key medical parameters calculated from the categorical results of diagnostic tests, are usually expressed together with their confidence intervals, computed using the normal-distribution approximation of the binomial distribution. The approximation creates known anomalies, especially for limit cases. In order to improve the quality of estimation, four new methods (called here RPAC, RPAC0, RPAC1, and RPAC2) were developed and compared with the classical method (called here RPWald), using an exact probability calculation algorithm. Computer implementations of the methods use the PHP language. We defined and implemented the functions of the four new methods and the five criteria of confidence interval assessment. The experiments were run for sample sizes varying in the ranges 14-34 and 90-100, computing confidence intervals for positive and negative likelihood ratios.
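For reference, the classical asymptotic interval that the RPAC-family methods are compared against is computed on the log scale from the 2×2 diagnostic table. The counts below are illustrative; the formula is the standard log-method for likelihood ratios:

```python
import math

# 2x2 diagnostic table: a=TP, b=FP, c=FN, d=TN (illustrative counts)
a, b, c, d = 80, 10, 20, 90

sens = a / (a + c)
spec = d / (b + d)
lr_pos = sens / (1 - spec)              # positive likelihood ratio

# standard error of ln(LR+), then back-transform the 95% bounds
se_log = math.sqrt(1 / a - 1 / (a + c) + 1 / b - 1 / (b + d))
lo = lr_pos * math.exp(-1.96 * se_log)
hi = lr_pos * math.exp(+1.96 * se_log)
print(f"LR+ = {lr_pos:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The anomalies mentioned in the abstract arise precisely when counts in this table are small or zero, where the normal approximation behind se_log breaks down.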
Finite-sample-size effects on convection in mushy layers
Zhong, Jin-Qiang; Wells, Andrew J; Wettlaufer, John S
2012-01-01
We report theoretical and experimental investigations of the flow instability responsible for the mushy-layer mode of convection and the formation of chimneys, drainage channels devoid of solid, during steady-state solidification of aqueous ammonium chloride. Under certain growth conditions a state of steady mushy-layer growth with no flow is unstable to the onset of convection, resulting in the formation of chimneys. We present regime diagrams to quantify the state of the flow as a function of the initial liquid concentration, the porous-medium Rayleigh number, and the sample width. For a given liquid concentration, increasing both the porous-medium Rayleigh number and the sample width caused the system to change from a stable state of no flow to a different state with the formation of chimneys. Decreasing the concentration ratio destabilized the system and promoted the formation of chimneys. As the initial liquid concentration increased, onset of convection and formation of chimneys occurred at larger value...
Directory of Open Access Journals (Sweden)
Andrei ACHIMAŞ CADARIU
2004-08-01
Full Text Available Assessments of a controlled clinical trial suppose the interpretation of some key parameters, such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk for an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance of computing confidence intervals for the absolute risk reduction.
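The asymptotic (Wald) interval criticized above is the baseline for the nine methods; for dichotomous outcomes it takes one line per quantity. The counts below are illustrative, not trial data:

```python
import math

e_ctrl, n_ctrl = 30, 100     # events / sample size, control group
e_trt,  n_trt  = 18, 100     # events / sample size, treatment group

cer = e_ctrl / n_ctrl        # control event rate
eer = e_trt / n_trt          # experimental event rate
arr = cer - eer              # absolute risk reduction
nnt = 1.0 / arr              # number needed to treat

# Wald 95% CI from the two independent binomial proportions
se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
ci = (arr - 1.96 * se, arr + 1.96 * se)
print(f"ARR = {arr:.2f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), NNT = {nnt:.1f}")
```

With small event counts the lower bound of this interval can approach or cross zero (as it nearly does here), which is exactly the regime where the paper's alternative methods matter.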
Learning algorithms for feedforward networks based on finite samples
Energy Technology Data Exchange (ETDEWEB)
Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.
1994-09-01
Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks, are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
Wang, Chao; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2016-02-01
Practical security of the continuous-variable quantum key distribution (CVQKD) system with finite sampling bandwidth of the analog-to-digital converter (ADC) at the receiver's side is investigated. We find that the finite sampling bandwidth effects may decrease the lower bound of the secret key rate without the awareness of the legitimate communicators. This leaves security loopholes for Eve to attack the system. In addition, this effect may restrain the linear relationship of the secret key bit rate with the repetition rate of the system; consequently, there is a saturation value for the secret key bit rate as the repetition rate increases. To resist such effects, we propose a dual sampling detection approach in which two ADCs are employed so that the finite sampling bandwidth effects are removed.
DEFF Research Database (Denmark)
Nielsen, Morten Ø.; Frederiksen, Per Houmann
2005-01-01
In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all the estimators are fairly robust to conditionally heteroscedastic errors, (3) the local polynomial Whittle and bias-reduced log-periodogram regression estimators are more robust to short-run dynamics than other semiparametric (frequency domain and wavelet) estimators, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
The impact of different sampling rates and calculation time intervals on ROTI values
Directory of Open Access Journals (Sweden)
Jacobsen Knut Stanley
2014-01-01
Full Text Available The ROTI (Rate of TEC index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented, and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, vary from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase awareness of this issue. We have investigated the effect of different parameters for the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that the ROTI values calculated using different parameter choices are strongly positively correlated. However, the ROTI values are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of the high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rates is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline "quiet" or "noisy" level for one location or choice of parameters may not be valid for another location or choice of parameters.
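The ROT/ROTI computation described above can be sketched as follows. The TECU/min unit and the non-overlapping windows are assumptions, since, as the paper stresses, conventions vary between studies; downsampling a synthetic series illustrates the lowering of ROTI at a lower sample rate:

```python
import numpy as np

def roti(tec, dt, window):
    """ROTI = sqrt(<ROT^2> - <ROT>^2) over each calculation window.

    tec    -- evenly spaced TEC samples (TECU)
    dt     -- sampling interval in seconds
    window -- ROTI calculation interval in seconds
    The TECU/min unit and non-overlapping windows are assumed conventions.
    """
    rot = np.diff(tec) / (dt / 60.0)            # rate of TEC, TECU per minute
    n = int(window // dt)                       # ROT samples per window
    m = len(rot) // n
    rot = rot[: m * n].reshape(m, n)
    return np.sqrt(np.mean(rot**2, axis=1) - np.mean(rot, axis=1) ** 2)

rng = np.random.default_rng(1)
tec = np.cumsum(rng.normal(0.0, 0.05, 3600))    # synthetic 1 Hz TEC series
r_1s = roti(tec, dt=1, window=300)              # 1 s sampling, 5 min windows
r_30s = roti(tec[::30], dt=30, window=300)      # 30 s sampling, same windows
# r_30s is systematically lower: the high-frequency part of the ROT
# spectrum is lost at the lower sample rate.
```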
A Variable Sampling Interval Synthetic Xbar Chart for the Process Mean.
Directory of Open Access Journals (Sweden)
Lei Yong Lee
Full Text Available The usual practice when using a control chart to monitor a process is to take samples from the process with a fixed sampling interval (FSI). In this paper, a synthetic X̄ control chart with the variable sampling interval (VSI) feature is proposed for monitoring changes in the process mean. The VSI synthetic X̄ chart integrates the VSI X̄ chart and the VSI conforming run length (CRL) chart. The proposed VSI synthetic X̄ chart is evaluated using the average time to signal (ATS) criterion. The optimal charting parameters of the proposed chart are obtained by minimizing the out-of-control ATS for a desired shift. Comparisons between the VSI synthetic X̄ chart and the existing X̄, synthetic X̄, VSI X̄ and EWMA X̄ charts, in terms of ATS, are made. The ATS results show that the VSI synthetic X̄ chart outperforms the other X̄-type charts in detecting moderate and large shifts. An illustrative example is also presented to explain the application of the VSI synthetic X̄ chart.
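The synthetic chart's signalling logic, which the VSI feature builds on, can be sketched for the fixed-interval case as follows; the charting constants are illustrative, not the ATS-optimised parameters of the paper:

```python
def synthetic_signals(xbars, mu0, sigma0, n, k=3.0, L=5):
    """Signalling logic of a (fixed-interval) synthetic X-bar chart.

    A sample is nonconforming if its mean falls outside
    mu0 +/- k*sigma0/sqrt(n); the CRL sub-chart signals when two
    nonconforming samples occur within L samples of each other.
    k and L are illustrative values, not the paper's optimised parameters;
    the VSI feature would additionally switch between a long and a short
    sampling interval after each sample."""
    limit = k * sigma0 / n ** 0.5
    signals, last_nc = [], None
    for t, xb in enumerate(xbars, start=1):
        if abs(xb - mu0) > limit:              # nonconforming sample
            if last_nc is not None and t - last_nc <= L:
                signals.append(t)              # conforming run length <= L
            last_nc = t
    return signals
```

A single out-of-limit sample does not signal; only a second nonconforming sample within L samples does, which is what gives synthetic charts their run-length behaviour.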
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. The asymptotic distributions of the estimators are shown to be normal variance-mixtures, where the mixing distribution generally depends on the full sample path of the diffusion.
Maris, E.
1998-01-01
The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…
A proof of the Woodward-Lawson sampling method for a finite linear array
Somers, Gary A.
1993-01-01
An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
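The discrete sampling idea can be illustrated numerically: the array factor of an N-element array is a trigonometric polynomial of degree N-1 in the variable psi, so N far-field samples determine it exactly through a periodic (Dirichlet) kernel. This is a generic sketch of the principle, not the paper's derivation or its choice of length parameter:

```python
import numpy as np

N = 8                                             # number of array elements
rng = np.random.default_rng(2)
a = rng.normal(size=N) + 1j * rng.normal(size=N)  # arbitrary excitations

def af(psi):
    """Array factor AF(psi) = sum_n a_n * exp(j*n*psi) of the linear array."""
    return np.sum(a * np.exp(1j * np.arange(N) * psi))

# Sample the array factor at N equispaced points psi_m = 2*pi*m/N.
psi_m = 2.0 * np.pi * np.arange(N) / N
samples = np.array([af(p) for p in psi_m])

def af_reconstructed(psi):
    """Closed-form reconstruction from the N samples via the Dirichlet kernel:
    AF(psi) = (1/N) sum_m AF(psi_m) * K(psi - psi_m), with
    K(x) = exp(j*(N-1)*x/2) * sin(N*x/2) / sin(x/2)."""
    out = 0.0 + 0.0j
    for s, pm in zip(samples, psi_m):
        x = psi - pm
        if abs(np.sin(x / 2.0)) < 1e-12:          # at a sample point itself
            kernel = N
        else:
            kernel = (np.exp(1j * (N - 1) * x / 2.0)
                      * np.sin(N * x / 2.0) / np.sin(x / 2.0))
        out += s * kernel / N
    return out
```

The reconstruction agrees with the original array factor at arbitrary angles to machine precision, which is the discrete analogue of the Woodward-Lawson result discussed above.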
Directory of Open Access Journals (Sweden)
Atta Ullah
2014-01-01
Full Text Available In the practical utilization of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real-life situations, a linear cost function of the sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
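For contrast with the proposed nonlinear-cost multiobjective formulation, the classical single-characteristic allocation under a linear cost function can be sketched as follows; this is only the textbook baseline the paper generalises:

```python
import math

def linear_cost_allocation(W, S, c, n_total):
    """Classical optimum (Neyman-type) allocation under a linear cost
    function: n_h proportional to W_h * S_h / sqrt(c_h).

    W -- stratum weights, S -- stratum standard deviations,
    c -- per-unit sampling costs, n_total -- total sample size.
    The paper instead uses a cost function with a travel-cost term and
    solves an integer nonlinear multiobjective program; this is only
    the textbook baseline."""
    score = [Wh * Sh / math.sqrt(ch) for Wh, Sh, ch in zip(W, S, c)]
    total = sum(score)
    return [max(1, round(n_total * s / total)) for s in score]
```

With equal costs the rule reduces to Neyman allocation: more variable strata receive proportionally more of the sample.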
Wang, Wei; Zhuge, Qunbi; Morsy-Osman, Mohamed; Gao, Yuliang; Xu, Xian; Chagnon, Mathieu; Qiu, Meng; Hoang, Minh Thang; Zhang, Fangyuan; Li, Rui; Plant, David V
2014-11-03
We propose a decision-aided algorithm to compensate the sampling frequency offset (SFO) between the transmitter and receiver for reduced-guard-interval (RGI) coherent optical (CO) OFDM systems. In this paper, we first derive the cyclic prefix (CP) requirement for preventing OFDM symbols from SFO induced inter-symbol interference (ISI). Then we propose a new decision-aided SFO compensation (DA-SFOC) algorithm, which shows a high SFO tolerance and reduces the CP requirement. The performance of DA-SFOC is numerically investigated for various situations. Finally, the proposed algorithm is verified in a single channel 28 Gbaud polarization division multiplexing (PDM) RGI CO-OFDM experiment with QPSK, 8 QAM and 16 QAM modulation formats, respectively. Both numerical and experimental results show that the proposed DA-SFOC method is highly robust against the standard SFO in optical fiber transmission.
Prediction and standard error estimation for a finite universe total when a stratum is not sampled
Energy Technology Data Exchange (ETDEWEB)
Wright, T.
1994-01-01
In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample, where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
On the Influence of the Data Sampling Interval on Computer-Derived K-Indices
Directory of Open Access Journals (Sweden)
A Bernard
2011-06-01
Full Text Available The K index was devised by Bartels et al. (1939) to provide an objective monitoring of irregular geomagnetic activity. The K index was then routinely used to monitor the magnetic activity at permanent magnetic observatories as well as at temporary stations. The increasing number of digital and sometimes unmanned observatories and the creation of INTERMAGNET put the question of computer production of K at the centre of the debate. Four algorithms were selected during the Vienna meeting (1991) and endorsed by IAGA for the computer production of K indices. We used one of them (the FMI algorithm) to investigate the impact of the geomagnetic data sampling interval on computer-produced K values through the comparison of computer-derived K values for the period 1 January 2009 to 31 May 2010 at the Port-aux-Français magnetic observatory, using magnetic data series with different sampling rates (the smallest: 1 second; the largest: 1 minute). The impact is investigated on both the 3-hour range values and the K index data series, as a function of the activity level, for low and moderate geomagnetic activity.
Finite element analysis of seal mechanism using SMA for Mars sample return
Bao, Xiaoqi; Younse, Paulo
2014-04-01
Returning Martian samples to Earth for extensive analysis is of great interest to the planetary science community. The current Mars sample return architecture would require leaving the acquired samples on Mars for several years before they are retrieved by a subsequent mission. Each sample would be sealed securely to preserve its integrity. A reliable sealing technique that does not affect the integrity of the samples and uses a simple, low-mass tool is required. The shape memory alloy (SMA) seal technique is a promising candidate. The performance of several preliminary designs of SMA seals for sample tubes was analyzed by finite element (FE) modeling. The thermal heating characteristics were reported in a previous presentation; this paper focuses on the preparation and actuation of the SMA plugs, the seal pressure, and the stress and strain induced in the sealing procedure with various designs.
Elastic fields of stationary and moving dislocations in three dimensional finite samples
1997-01-01
Integral expressions are determined for the elastic displacement and stress fields due to stationary or moving dislocation loops in three dimensional, not necessarily isotropic, finite samples. A line integral representation is found for the stress field, thus satisfying the expectation that stresses should depend on the location of the dislocation loop, but not on the location of surfaces bounded by such loops that are devoid of physical significance. In the stationary case the line integral...
Wang, Ronghao; Xing, Jianchun; Li, Juelong; Xiang, Zhengrong
2016-10-01
This paper studies the problem of stabilising a sampled-data switched linear system by quantised feedback asynchronously switched controllers. The idea of a quantised feedback asynchronously switched control strategy originates in earlier work reflecting the actual system characteristics of switching and quantising, respectively. A quantisation scheme depending on the switching time is designed using a dynamic quantiser. When the sampling times, system switching times, and controller switching times are all non-uniform, the proposed switching controllers guarantee that the system is finite-time stable, by means of a piecewise Lyapunov function and the average dwell-time method. Simulation examples are provided to show the effectiveness of the developed results.
Light propagation in tissues: effect of finite size of tissue sample
Melnik, Ivan S.; Dets, Sergiy M.; Rusina, Tatyana V.
1995-12-01
Laser beam propagation inside tissues with different lateral dimensions has been considered. The scattering and anisotropic properties of tissue critically determine the spatial fluence distribution and predict the sizes of tissue specimens for which deviations of this distribution can be neglected. Along the axis of the incident beam the fluence rate depends weakly on sample size, whereas it increases relatively (by more than 20%) towards the lateral boundaries. The finite sizes were found to be substantial only for samples with sizes comparable to the diameter of the laser beam. Interstitial irradiance patterns simulated by the Monte Carlo method were compared with direct measurements in human brain specimens.
Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.
2012-01-01
Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
• the number of different ratios,
• the number of points belonging to each ratio group,
• the ratios (i.e. slopes) of each group.
Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
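The fitting idea can be sketched as a mixture of regression lines through the origin estimated by EM. This simplified version fixes the number of groups and omits the dropping of small groups described above:

```python
import numpy as np

def em_isotope_ratios(x, y, n_groups=2, iters=100):
    """EM fit of a mixture of regression lines through the origin,
    y ~ Normal(beta_g * x, sigma^2); each slope beta_g estimates one
    isotope ratio.

    Simplified sketch: the number of groups is fixed and the dropping of
    small groups described above is omitted."""
    beta = np.quantile(y / x, np.linspace(0.25, 0.75, n_groups))  # init slopes
    pi = np.full(n_groups, 1.0 / n_groups)       # mixing proportions
    sigma = np.std(y) / 10.0 + 1e-9
    for _ in range(iters):
        # E-step: responsibility of each group for each observation
        resid = y[:, None] - x[:, None] * beta[None, :]
        dens = pi * np.exp(-0.5 * (resid / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least-squares slope, proportions, common sigma
        beta = ((r * (x * y)[:, None]).sum(axis=0)
                / (r * (x * x)[:, None]).sum(axis=0))
        pi = r.mean(axis=0)
        sigma = np.sqrt((r * resid**2).sum() / len(x)) + 1e-9
    return np.sort(beta)
```

On synthetic data with two well-separated slopes, the procedure recovers both ratios without any supervision by the analyst, which is the advantage claimed above over manual evaluation.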
Approximate sampling formulas for general finite-alleles models of mutation
Bhaskar, Anand; Song, Yun S
2011-01-01
Many applications in genetic analyses utilize sampling distributions, which describe the probability of observing a sample of DNA sequences randomly drawn from a population. In the one-locus case with special models of mutation such as the infinite-alleles model or the finite-alleles parent-independent mutation model, closed-form sampling distributions under the coalescent have been known for many decades. However, no exact formula is currently known for more general models of mutation that are of biological interest. Models with finitely-many alleles are considered in this paper, and approximate closed-form sampling formulas are derived for an arbitrary recurrent mutation model or for a reversible recurrent mutation model, depending on whether the number of distinct observed allele types is at most three or four, respectively. Two different approaches---one based on perturbation expansion and the other on an urn construction related to the coalescent---are developed here. Computation in the former approach i...
Sealey, Linda R.; Gilmore, Susan E.
2008-01-01
Informal language sampling is ubiquitous in the study of developing grammatical abilities in children with and without delayed language, including study of grammatical abilities in the area of finite verb production. Finite verbs are particularly important to assess as they appear to be the grammatical morphemes most vulnerable to error in the…
DEFF Research Database (Denmark)
Veraart, Almut
and present a new estimator for the asymptotic 'variance' of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps in the price, and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas ...
Shao, Jing; Fan, Liu-Yin; Cao, Cheng-Xi; Huang, Xian-Qing; Xu, Yu-Quan
2012-07-01
Interval free-flow zone electrophoresis (FFZE) has been used to suppress the sample band broadening that greatly hinders the development of free-flow electrophoresis (FFE). However, there has still been no quantitative study on the resolution increase of interval FFZE. Herein, we tried to make a comparison between bandwidths in interval FFZE and the continuous mode. A commercial dye containing methyl green and crystal violet was chosen to show the bandwidth. The comparative experiments were conducted under the same sample loading of the model dye (viz. 3.49, 1.75, 1.17, and 0.88 mg/h), the same running time (viz. 5, 10, 15, and 20 min), and the same flux ratio between sample and background buffer (= 10.64 × 10⁻³). Under the given conditions, the experiments demonstrated that (i) the band broadening was evidently caused by a hydrodynamic factor in the continuous mode, and (ii) the interval mode could clearly eliminate the hydrodynamic broadening existing in the continuous mode, greatly increasing the resolution of dye separation. Finally, interval FFZE was successfully used for the complete separation of two model antibiotics (herein pyoluteorin and phenazine-1-carboxylic acid, coexisting in the fermentation broth of a new strain of Pseudomonas aeruginosa, M18), demonstrating the feasibility of the interval FFZE mode for the separation of biomolecules.
Energy Technology Data Exchange (ETDEWEB)
ROMERO,VICENTE J.; SWILER,LAURA PAINTON; GIUNTA,ANTHONY A.
2000-04-25
This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
Finite sample properties of power-law cross-correlations estimators
Kristoufek, Ladislav
2014-01-01
We study finite sample properties of estimators of power-law cross-correlations -- detrended cross-correlation analysis (DCCA), height cross-correlation analysis (HXA) and detrending moving-average cross-correlation analysis (DMCA) -- with a special focus on short-term memory bias as well as power-law coherency. Presented broad Monte Carlo simulation study focuses on different time series lengths, specific methods' parameter setting, and memory strength. We find that each method is best suited for different time series dynamics so that there is no clear winner between the three. The method selection should be then made based on observed dynamic properties of the analyzed series.
Finite sample performance of the E-M algorithm for ranks data modelling
Directory of Open Access Journals (Sweden)
Angela D'Elia
2007-10-01
Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parameter space. Some operative suggestions conclude the paper.
Institute of Scientific and Technical Information of China (English)
Abdalroof M S; Zhao Zhi-wen; Wang De-hui
2015-01-01
In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Rayleigh distribution is studied. Different methods of estimation are discussed. They include the mid-point approximation estimator, the maximum likelihood estimator, the moment estimator, the Bayes estimator, the sampling adjustment moment estimator, the sampling adjustment maximum likelihood estimator, and an estimator based on percentiles. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.
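The simplest of the estimators listed, the mid-point approximation, can be sketched as follows. The interval grid and counts are hypothetical, and the complete-sample Rayleigh MLE sigma_hat = sqrt(sum(x_i^2) / (2n)) is assumed:

```python
import math

def rayleigh_midpoint_sigma(intervals, counts):
    """Mid-point approximation estimator for the Rayleigh scale parameter
    from type-I interval censored data.

    Each unit observed to fail in (l, u] is treated as if it failed at the
    interval midpoint, and the complete-sample MLE
    sigma_hat = sqrt(sum(x_i^2) / (2n)) is applied. The interval grid and
    counts are hypothetical illustrations."""
    ss = sum(c * ((l + u) / 2.0) ** 2 for (l, u), c in zip(intervals, counts))
    n = sum(counts)
    return math.sqrt(ss / (2.0 * n))
```

The other estimators compared in the paper (maximum likelihood, moment, Bayes, and so on) replace the midpoint shortcut with the exact interval-censored likelihood or moments.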
Kelley, Ken
2008-01-01
Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size so that the expected width of the confidence interval will be sufficiently narrow. Modifications of these methods are then developed so that necessary sample size will lead to sufficiently narrow confidence intervals with no less than some desired degree of assurance. Computer routines have been developed and are included within the MBESS R package so that the methods discussed in the article can be implemented. The methods and computer routines are demonstrated using an empirical example linking innovation in the health services industry with previous innovation, personality factors, and group climate characteristics.
A simple method for estimating genetic diversity in large populations from finite sample sizes
Directory of Open Access Journals (Sweden)
Rajora Om P
2009-12-01
Full Text Available Abstract Background Sample size is one of the critical factors affecting the accuracy of the estimation of population genetic diversity parameters. Small sample sizes often lead to significant errors in determining the allelic richness, which is one of the most important and commonly used estimators of genetic diversity in populations. Correct estimation of allelic richness in natural populations is challenging since they often do not conform to model assumptions. Here, we introduce a simple and robust approach to estimate the genetic diversity in large natural populations based on the empirical data for finite sample sizes. Results We developed a non-linear regression model to infer genetic diversity estimates in large natural populations from finite sample sizes. The allelic richness values predicted by our model were in good agreement with those observed in the simulated data sets and the true allelic richness observed in the source populations. The model has been validated using simulated population genetic data sets with different evolutionary scenarios implied in the simulated populations, as well as large microsatellite and allozyme experimental data sets for four conifer species with contrasting patterns of inherent genetic diversity and mating systems. Our model was a better predictor for allelic richness in natural populations than the widely-used Ewens sampling formula, coalescent approach, and rarefaction algorithm. Conclusions Our regression model was capable of accurately estimating allelic richness in natural populations regardless of the species and marker system. This regression modeling approach is free from assumptions and can be widely used for population genetic and conservation applications.
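The rarefaction baseline that the regression model above is compared against can be stated compactly: the expected number of distinct alleles in a subsample of g gene copies drawn without replacement. A minimal sketch, assuming the standard hypergeometric rarefaction formula:

```python
from math import comb

def rarefied_allelic_richness(allele_counts, g):
    """Expected number of distinct alleles in a random subsample of g gene
    copies drawn without replacement (hypergeometric rarefaction).

    allele_counts -- observed copies of each allele in the full sample.
    This is the rarefaction baseline discussed above, not the paper's
    regression model."""
    n = sum(allele_counts)
    # comb(n - c, g) is 0 when g > n - c, i.e. the allele is always sampled
    return sum(1.0 - comb(n - c, g) / comb(n, g) for c in allele_counts)
```

For example, an allele present in 9 of 10 copies is almost always seen in a subsample of one copy, while a singleton allele rarely is; the formula weighs both contributions.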
Linkage effects and analysis of finite sample errors in the HapMap.
Zaitlen, Noah; Kang, Hyun Min; Eskin, Eleazar
2009-01-01
The HapMap provides a valuable resource to help uncover genetic variants of important complex phenotypes such as disease risk and outcome. Using the HapMap we can infer the patterns of LD within different human populations. This is a critical step for determining which SNPs to genotype as part of a study, estimating study power, designing a follow-up study to identify the causal variants, 'imputing' untyped SNPs, and estimating recombination rates along the genome. Despite its tremendous importance, the HapMap suffers from the fundamental limitation that at most 60 unrelated individuals are available per population. We present an analytical framework for analyzing the implications of a finite sample HapMap. We present and justify simple approximations for deriving analytical estimates of important statistics such as the square of the correlation coefficient, r², between two SNPs. Finally, we use this framework to show that current HapMap-based estimates of r² and power have significant errors, and that tag sets highly overestimate their coverage. We show that a reasonable increase in the number of individuals, such as that proposed by the 1000 genomes project, greatly reduces the errors due to finite sample size for a large proportion of SNPs.
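The statistic at the centre of the analysis can be computed directly from haplotype and allele frequencies. A minimal sketch of the population-level r² follows; the finite-sample errors discussed above arise when these frequencies are estimated from at most 60 individuals:

```python
def ld_r2(p_ab, p_a, p_b):
    """Squared correlation r^2 between two biallelic SNPs, from the AB
    haplotype frequency p_ab and the allele frequencies p_a and p_b.

    D = p_ab - p_a * p_b is the classical LD coefficient; r^2 is D^2
    normalised by the allele-frequency variances."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1.0 - p_a) * p_b * (1.0 - p_b))
```

Perfect LD (the two SNPs always co-occur) gives r² = 1, and independence (p_ab = p_a * p_b) gives r² = 0; sampling noise in the three input frequencies propagates directly into the estimate.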
Ruelas-Mayorga, A; Trujillo-Lara, M; Nigoche-Netro, A; Echevarría, J; García, A M; Ramírez-Vélez, J
2016-01-01
In this paper we carry out a preliminary study of the dependence of the Tully-Fisher Relation (TFR) on the width and intensity level of the absolute magnitude interval of a limited sample of 2411 galaxies taken from Mathewson \& Ford (1996). The galaxies in this sample do not differ significantly in morphological type, and are distributed over an $\sim11$-magnitude interval ($-24.4 < I < -13.0$). We follow the papers by Nigoche-Netro et al. (2008, 2009, 2010), in which they study the dependence of the Kormendy (KR), Fundamental Plane (FPR) and Faber-Jackson (FJR) relations on the magnitude interval within which the observed galaxies used to derive these relations are contained. We were able to characterise the behaviour of the TFR coefficients $(\alpha, \beta)$ with respect to the width of the magnitude interval as well as to the brightness of the galaxies within this magnitude interval. We concluded that the TFR for this specific sample of galaxies depends on observational ...
Study on the Sampling Interval in the Spectral Domain Prony's Method for Microstrip Circuits
Institute of Scientific and Technical Information of China (English)
高立新; 龚主前; 李元新
2014-01-01
The improved spectral domain Prony's method for calculating S-parameters of microstrip circuits is investigated. The wave-port and mode-port excitation techniques in the finite-difference time-domain (FDTD) method are analyzed and compared, and a sampling interval selection criterion is proposed. Several practical engineering examples are then used to examine the performance of the improved spectral domain Prony's method. Numerical results show that the phase constant and S-parameters can still be calculated accurately even under very small sampling intervals.
Ramyachitra, D; Sofia, M; Manikandan, P
2015-09-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), as well as the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
Radley, Keith C.; O'Handley, Roderick D.; Labrot, Zachary C.
2015-01-01
Assessment in social skills training often utilizes procedures such as partial-interval recording (PIR) and momentary time sampling (MTS) to estimate changes in duration in social engagements due to intervention. Although previous research suggests PIR to be more inaccurate than MTS in estimating levels of behavior, treatment analysis decisions…
Institute of Scientific and Technical Information of China (English)
Abdalroof M.S.; Zhao Zhi-wen; Wang De-hui
2014-01-01
In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Pareto distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator and the moment estimator. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.
Quantifying uncertainty in mean earthquake interevent times for a finite sample
Naylor, M.; Main, I. G.; Touati, S.
2009-01-01
Seismic activity is routinely quantified using means in event rate or interevent time. Standard estimates of the error on such mean values implicitly assume that the events used to calculate the mean are independent. However, earthquakes can be triggered by other events and are thus not necessarily independent. As a result, the errors on mean earthquake interevent times do not exhibit Gaussian convergence with increasing sample size according to the central limit theorem. In this paper we investigate how the errors decay with sample size in real earthquake catalogues and how the nature of this convergence varies with the spatial extent of the region under investigation. We demonstrate that the errors in mean interevent times, as a function of sample size, are well estimated by defining an effective sample size, using the autocorrelation function to estimate the number of pieces of independent data that exist in samples of different length. This allows us to accurately project error estimates from finite natural earthquake catalogues into the future and promotes a definition of stability wherein the autocorrelation function is not varying in time. The technique is easy to apply, and we suggest that it is routinely applied to define errors on mean interevent times as part of seismic hazard assessment studies. This is particularly important for studies that utilize small catalogue subsets (fewer than ˜1000 events) in time-dependent or high spatial resolution (e.g., for catastrophe modeling) hazard assessment.
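The effective-sample-size idea described above can be sketched in a few lines: estimate the autocorrelation function of the series and deflate N by the summed positive-lag autocorrelation. The AR(1) surrogate series and the truncation-at-first-negative-lag rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) series as a stand-in for correlated interevent times (illustrative).
phi, n = 0.7, 5000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def n_effective(x, max_lag=100):
    """N_eff = N / (1 + 2 * sum of the ACF over positive lags)."""
    xc = x - x.mean()
    acf = np.array([np.dot(xc[:-k], xc[k:]) for k in range(1, max_lag + 1)]) / np.dot(xc, xc)
    if np.any(acf < 0):
        acf = acf[: np.argmax(acf < 0)]   # simple truncation rule (an assumption)
    return len(x) / (1 + 2 * acf.sum())

neff = n_effective(x)
# The standard error of the mean then uses N_eff instead of N.
sem = x.std(ddof=1) / np.sqrt(neff)
print(f"N = {n}, N_eff = {neff:.0f}, corrected SEM = {sem:.3f}")
```

For triggered (clustered) data, N_eff is much smaller than N, so naive Gaussian error bars on mean interevent times are overconfident, as the abstract argues.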
Optimum sampling interval for evaluating ferromanganese nodule resources in the central Indian Ocean
Digital Repository Service at National Institute of Oceanography (India)
Jauhari, P.; Kodagali, V.N.; Sankar, S.J.
by progressively reducing the grid spacing. Sampling the corners of the 1 degree survey block (approximately 110-km spacing), i.e., four stations with 5-7 free-fall operations (sampling locations) in each case, indicated a nodule abundance of 3.50 kg/m²...
DEFF Research Database (Denmark)
Rumessen, J J; Hamberg, O; Gudmand-Høyer, E
1990-01-01
Lactulose H2 breath tests are widely used for quantifying carbohydrate malabsorption, but the validity of the commonly used technique (interval sampling of H2 concentrations) has not been systematically investigated. In eight healthy adults we studied the reproducibility of the technique and the accuracy with which 5 g and 20 g doses of lactulose could be calculated from the H2 excretion after their ingestion by means of a 10 g lactulose standard. The influence of different lengths of the test period, different definitions of the baseline and the significance of standard meals and peak H2 ... (-60%, interquartile range). This corresponded to the deviation in reproducibility of the standard dose. We suggest that individual estimates of carbohydrate malabsorption by means of H2 breath tests should be interpreted with caution if tests of reproducibility are not incorporated. Both areas under curves and peak H...
Westreich, Daniel; Cole, Stephen R; Schisterman, Enrique F; Platt, Robert W
2012-08-30
Motivated by a previously published study of HIV treatment, we simulated data subject to time-varying confounding affected by prior treatment to examine some finite-sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression-adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability-of-treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base-case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT-weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT-weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT-weighted analyses for the base-case scenario. In situations where time-varying confounding affected by prior treatment was absent, IPT-weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT-weighted but poor in crude, adjusted, and unstabilized IPT-weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large-sample theory and provided accurate estimates of the hazard ratio.
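The stabilized inverse-probability-of-treatment weighting compared above can be sketched in a deliberately simplified point-treatment setting (one confounder, one treatment decision, known propensities) rather than the time-varying Cox setting the paper simulates; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Simulated point-treatment data: L confounds treatment A and outcome Y.
L = rng.binomial(1, 0.5, n)
pA = 0.2 + 0.5 * L                               # treatment probability given L
A = rng.binomial(1, pA, n)
Y = 1.0 * A + 2.0 * L + rng.standard_normal(n)   # true marginal effect = 1.0

# Stabilized IPT weights: sw = P(A=a) / P(A=a | L). True propensities are
# known here; in practice they would be estimated.
p_marg = A.mean()
sw = np.where(A == 1, p_marg, 1 - p_marg) / np.where(A == 1, pA, 1 - pA)

def wmean(y, w):
    return np.sum(w * y) / np.sum(w)

effect = wmean(Y[A == 1], sw[A == 1]) - wmean(Y[A == 0], sw[A == 0])
crude = Y[A == 1].mean() - Y[A == 0].mean()
print(f"crude = {crude:.2f}, IPT-weighted = {effect:.2f} (truth 1.0)")
```

The weighted contrast recovers the marginal effect while the crude contrast is biased, the same qualitative pattern the simulation study reports for its base-case scenario.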
Ledgerwood, D N; Winckler, C; Tucker, C B
2010-11-01
Lying behavior in dairy cattle can provide insight into how cows interact with their environment. Although lying behavior is a useful indicator of cow comfort, it can be time consuming to measure. In response to these time constraints, using data loggers to automate behavioral recording has become increasingly common. We tested the accuracy of the Onset Pendant G data logger (Onset Computer Corporation, Bourne, MA) for measuring lying behavior in dairy cattle (n=24 cows; 12 in each of 2 experiments). Cows wore the logger on the lateral (experiment 1) or medial (experiment 2) side of the hind leg above the metatarsophalangeal joint. Loggers recorded behavior at 4 sampling intervals (6, 30, 60, and 300 s) for at least 1.5 d. Data were smoothed using 3 editing methods to examine the effects of short, potentially erroneous readings. For this purpose, Microsoft Excel macros (Microsoft Corp., Redmond, WA) converted readings (i.e., lying events bordered by standing or vice versa) occurring singly or in consecutive runs of ≤2 or ≤6. Behavior was simultaneously recorded with digital video equipment. The logger accurately measured lying and standing. For example, predictability, sensitivity, and specificity were >99% using 30-s sampling and the single-event filter compared with continuously scored video recordings. The 6- and 30-s sampling intervals were comparable for all aspects of lying behavior when short events were filtered from the data set. Estimates of lying time generated from the 300-s interval unfiltered regimen were positively related (R(2) ≥ 0.99) to estimates of lying time from video, but this sampling regimen overestimated the number of lying bouts. This is likely because short standing and lying bouts were missed (12 and 34% of lying and standing bouts were <300 s in experiment 1 and 2, respectively). In summary, the data logger accurately measured all aspects of lying behavior when the sampling interval was ≤30 s and when short readings of lying and
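The editing step described above (flipping short, potentially erroneous runs of readings to the surrounding state) can be sketched as a run-length filter. The reading sequence below is hypothetical, not logger data from the study.

```python
import numpy as np

def filter_short_runs(states, max_len):
    """Flip runs of length <= max_len to the preceding state, mimicking the
    macro-style smoothing of short, potentially erroneous readings."""
    states = np.asarray(states).copy()
    change = np.flatnonzero(np.diff(states)) + 1     # run boundaries
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(states)]))
    for s, e in zip(starts, ends):
        if e - s <= max_len and s > 0:               # leading run is kept as-is
            states[s:e] = states[s - 1]
    return states

# 1 = lying, 0 = standing, one reading per 30 s (hypothetical sequence).
raw = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
smoothed = filter_short_runs(raw, max_len=1)         # drop single-reading events
print(smoothed.tolist())
```

Filtering the two single-reading blips merges them into the surrounding bouts, so bout counts are no longer inflated by one-sample noise.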
Song, Hairong; Ferrer, Emilio
2009-01-01
This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother via the expectation-maximization algorithm to obtain maximum likelihood parameter estimates is used. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
Ouyang, Kesai; Lu, Siliang; Zhang, Shangbin; Zhang, Haibin; He, Qingbo; Kong, Fanrang
2015-08-27
The railway occupies a fairly important position in transportation due to its high speed and strong transportation capability. As a consequence, it is a key issue to guarantee continuous running and transportation safety of trains. Meanwhile, time consumption of the diagnosis procedure is of extreme importance for the detecting system. However, most of the current adopted techniques in the wayside acoustic defective bearing detector system (ADBD) are offline strategies, which means that the signal is analyzed after the sampling process. This would result in unavoidable time latency. Besides, the acquired acoustic signal would be corrupted by the Doppler effect because of high relative speed between the train and the data acquisition system (DAS). Thus, it is difficult to effectively diagnose the bearing defects immediately. In this paper, a new strategy called online Doppler effect elimination (ODEE) is proposed to remove the Doppler distortion online by the introduced unequal interval sampling scheme. The steps of proposed strategy are as follows: The essential parameters are acquired in advance. Then, the introduced unequal time interval sampling strategy is used to restore the Doppler distortion signal, and the amplitude of the signal is demodulated as well. Thus, the restored Doppler-free signal is obtained online. The proposed ODEE method has been employed in simulation analysis. Ultimately, the ODEE method is implemented in the embedded system for fault diagnosis of the train bearing. The results are in good accordance with the bearing defects, which verifies the good performance of the proposed strategy.
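The unequal-interval sampling idea behind ODEE can be sketched for a straight pass-by: build the emission-to-reception time map, then read the received record at the unequal instants corresponding to equally spaced emission times. The geometry, speeds and pure-tone source below are illustrative assumptions, not the paper's algorithm details.

```python
import numpy as np

fs, c, v, h = 50000, 340.0, 30.0, 2.0    # Hz; sound speed, train speed (m/s); offset (m)
t = np.arange(0, 1.0, 1 / fs)            # uniform reception-time grid
t0, f0 = 0.5, 500.0                      # closest-approach time (s), source tone (Hz)

def reception_time(te):
    """Emission time -> reception time for an assumed straight pass-by."""
    d = np.sqrt(h**2 + (v * (te - t0))**2)
    return te + d / c

# Synthesize the Doppler-distorted record on the uniform reception grid.
received = np.interp(t, reception_time(t), np.sin(2 * np.pi * f0 * t))

# ODEE-style correction: sample the record at the unequal instants that
# correspond to equally spaced emission times.
restored = np.interp(reception_time(t), t, received)

def peak_freq(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)]

print(f"dominant frequency after correction: {peak_freq(restored):.0f} Hz")
```

The restored record is again a constant-frequency tone, whereas the raw record sweeps through a band of Doppler-shifted frequencies during the pass-by.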
Institute of Scientific and Technical Information of China (English)
Yaqin Li; Guoshu Jian; Shifa Wu
2006-01-01
The rational design of the sample cell may improve the sensitivity of surface-enhanced Raman scattering (SERS) detection to a high degree. Finite difference time domain (FDTD) simulations of the configuration of Ag film-Ag particles illuminated by plane wave and evanescent wave are performed to provide physical insight for the design of the sample cell. Numerical solutions indicate that the sample cell can provide more "hot spots", and massive field intensity enhancement occurs in these "hot spots". More information on the nanometer character of the sample can be obtained because of the gradient-field Raman (GFR) of the evanescent wave.
Willruth, A M; Steinhard, J; Enzensberger, C; Axt-Fliedner, R; Gembruch, U; Doelle, A; Dimitriou, I; Fimmers, R; Bahlmann, F
2016-02-04
Purpose: To assess the time intervals of the cardiac cycle in healthy fetuses in the second and third trimester using color tissue Doppler imaging (cTDI) and to evaluate the influence of different sizes of sample gates on time interval values. Materials and Methods: Time intervals were measured from the cTDI-derived Doppler waveform using a small and large region of interest (ROI) in healthy fetuses. Results: 40 fetuses were included. The median gestational age at examination was 26 + 1 (range: 20 + 5 - 34 + 5) weeks. The median frame rate was 116/s (100 - 161/s) and the median heart rate 143 (range: 125 - 158) beats per minute (bpm). Using small and large ROIs, the second trimester right ventricular (RV) mean isovolumetric contraction times (ICTs) were 39.8 and 41.4 ms (p = 0.17), the mean ejection times (ETs) were 170.2 and 164.6 ms (p < 0.001), the mean isovolumetric relaxation times (IRTs) were 52.8 and 55.3 ms (p = 0.08), respectively. The left ventricular (LV) mean ICTs were 36.2 and 39.4 ms (p = 0.05), the mean ETs were 167.4 and 164.5 ms (p = 0.013), the mean IRTs were 53.9 and 57.1 ms (p = 0.05), respectively. The third trimester RV mean ICTs were 50.7 and 50.4 ms (p = 0.75), the mean ETs were 172.3 and 181.4 ms (p = 0.49), the mean IRTs were 50.2 and 54.6 ms (p = 0.03); the LV mean ICTs were 45.1 and 46.2 ms (p = 0.35), the mean ETs were 175.2 vs. 172.9 ms (p = 0.29), the mean IRTs were 47.1 and 50.0 ms (p = 0.01), respectively. Conclusion: Isovolumetric time intervals can be analyzed precisely and relatively independent of ROI size. In the near future, automatic time interval measurement using ultrasound systems will be feasible and the analysis of fetal myocardial function can become part of the clinical routine.
Directory of Open Access Journals (Sweden)
Andreas Steimer
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing ...
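A minimal simulation of the EIF neuron described above, collecting the ISI "samples" under noisy drive near threshold; the Euler-Maruyama scheme and all parameter values are generic textbook choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential integrate-and-fire neuron; generic parameters (not the study's).
dt, T = 0.05, 10000.0                    # time step and duration (ms)
tau, EL, DT, VT = 20.0, -65.0, 2.0, -50.0
Vre, Vpeak = -60.0, -20.0                # reset and spike-cutoff voltages (mV)
mu, sig = -51.0, 1.0                     # mean drive (mV), noise (mV/sqrt(ms))

V, tprev, isis = EL, 0.0, []
for k in range(int(T / dt)):
    drift = (-(V - mu) + DT * np.exp((V - VT) / DT)) / tau
    V += dt * drift + sig * np.sqrt(dt) * rng.standard_normal()
    if V >= Vpeak:                       # spike: store the ISI "sample", reset
        isis.append(k * dt - tprev)
        tprev, V = k * dt, Vre

isis = np.array(isis)
print(f"{isis.size} ISIs, mean = {isis.mean():.0f} ms, CV = {isis.std() / isis.mean():.2f}")
```

The exponential term produces the sharp spike upstroke characteristic of the EIF; in the sampling interpretation above, the collected ISI values constitute the random samples whose distribution encodes the input current.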
Energy Technology Data Exchange (ETDEWEB)
Fields, M.W.; Schryver, J.C.; Brandt, C.C.; Yan, T.; Zhou, J.Z.; Palumbo, A.V.
2007-04-02
The goal of this research was to investigate the influence of the error rate of sequence determination on the differentiation of cloned SSU rRNA gene sequences for assessment of community structure. SSU rRNA cloned sequences from groundwater samples that represent different bacterial divisions were sequenced multiple times with the same sequencing primer. From comparison of sequence alignments with unedited data, confidence intervals were obtained both from a double binomial model of sequence comparison and by non-parametric methods. The results indicated that similarity values below 0.9946 are likely derived from dissimilar sequences at a confidence level of 0.95, and not from sequencing errors. The results confirmed that screening by direct sequence determination could be reliably used to differentiate at the species level. However, given sequencing errors comparable to those seen in this study, sequences with similarities above 0.9946 should be treated as the same sequence if a 95 percent confidence is desired.
Directory of Open Access Journals (Sweden)
Lee Tae-Hoon
2016-12-01
In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using surrogate variables. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on a performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (variable sample size) X̄ control chart, EWMA and CUSUM charts, and so on.
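The VSI rule referred to above can be sketched as follows: a sample mean falling in the central zone earns a long wait until the next sample, while one in the warning zone earns a short wait. The zone width and interval lengths below are arbitrary illustrative choices, not the paper's economically optimal design.

```python
import numpy as np

rng = np.random.default_rng(4)

mu0, sigma, n = 10.0, 1.0, 4            # target, process sigma, subgroup size
se = sigma / np.sqrt(n)
UCL, LCL = mu0 + 3 * se, mu0 - 3 * se   # action limits
w = 1.0 * se                            # warning-zone boundary (arbitrary choice)
h_long, h_short = 2.0, 0.25             # hours until the next sample (arbitrary)

def next_interval(xbar):
    if not (LCL <= xbar <= UCL):
        return 0.0                      # out of control: react immediately
    return h_long if abs(xbar - mu0) <= w else h_short

# An in-control process mostly earns the long interval; a shifted process
# lands in the warning zone more often and is therefore sampled more densely.
in_ctrl = [next_interval(rng.normal(mu0, se)) for _ in range(1000)]
shifted = [next_interval(rng.normal(mu0 + 1.5 * se, se)) for _ in range(1000)]
print(f"mean interval in control: {np.mean(in_ctrl):.2f} h, after shift: {np.mean(shifted):.2f} h")
```

This adaptive sampling is what lets a VSI chart detect shifts faster than a fixed-interval chart at comparable sampling cost; the paper's contribution is to drive the same rule from a surrogate variable.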
Lugo, Jorge; Sosa, Victor
1999-10-01
The repulsion force between a cylindrical superconductor in the Meissner state and a small permanent magnet was calculated under the assumption that the superconductor was formed by a continuous array of dipoles distributed in the finite volume of the sample. After summing up the dipole-dipole interactions with the magnet, we obtained analytical expressions for the levitation force as a function of the superconductor-magnet distance, radius and thickness of the sample. We analyzed two configurations, with the magnet in a horizontal or vertical orientation.
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2015-01-01
... shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi-random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2012-01-01
... undrained shear strength. The random field is applied to represent the spatial variation of the soil properties. Monte Carlo simulation associated with an improved asymptotic sampling using normal and Sobol sampling is utilized to generate the probability distribution of the foundation stiffness. It is shown that asymptotic sampling associated with Sobol quasi-random sampling is an efficient method for estimating the probability distribution in the presented problems.
Directory of Open Access Journals (Sweden)
Manzoor Khan
2014-01-01
This paper presents new classes of estimators for estimating the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of a proposed class of estimators.
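For context, the classical double-sampling ratio estimator (a simpler relative of the proposed classes, not the paper's new estimators) can be sketched on synthetic data: a cheap auxiliary variable x is measured on a large first-phase sample, and y only on a second-phase subsample.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic finite population with y strongly correlated with auxiliary x.
N = 100000
x = rng.gamma(4.0, 2.0, N)
y = 3.0 * x + rng.normal(0, 2.0, N)

n1, n2 = 5000, 500
s1 = rng.choice(N, n1, replace=False)    # first phase: x only
s2 = rng.choice(s1, n2, replace=False)   # second phase: x and y

xbar1 = x[s1].mean()
xbar2, ybar2 = x[s2].mean(), y[s2].mean()

# Double-sampling ratio estimator: rescale the subsample mean of y by the
# better-estimated auxiliary mean from the large first-phase sample.
y_ratio = ybar2 * xbar1 / xbar2
print(f"plain mean = {ybar2:.2f}, ratio estimate = {y_ratio:.2f}, true = {y.mean():.2f}")
```

Because x is cheap to observe on many units, the ratio adjustment borrows its precision; the paper's classes generalize this idea using fractional raw moments of the auxiliary variable.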
Yan, Hongyong; Yang, Lei; Li, Xiang-Yang
2016-12-01
High-order staggered-grid finite-difference (SFD) schemes have been universally used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients on spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which leads to high accuracy only at small wavenumbers for wave equation modeling. Some conventional optimization methods can achieve high accuracy at large wavenumbers, but they hardly guarantee small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying a combination of the TE and a sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and the optimal method, respectively. When the appropriate number and interval for the sampling points are chosen, these optimal schemes have extremely high accuracy at small wavenumbers, and can also guarantee small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate that the optimal ESFD and ISFD schemes can efficiently suppress numerical dispersion and significantly improve the modeling accuracy compared to the TE-based ESFD and ISFD schemes.
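The TE-based SFD coefficients mentioned above follow from solving a small linear system that cancels the low-order truncation terms. A sketch for the first-derivative staggered stencil (the paper's optimized sampling-based coefficients are not reproduced here):

```python
import numpy as np

def sfd_coeffs(M):
    """Taylor-expansion coefficients c_m of the order-2M staggered stencil
    f'(x) ~ (1/h) * sum_m c_m [f(x + (2m-1)h/2) - f(x - (2m-1)h/2)]."""
    b = np.arange(1, 2 * M, 2, dtype=float)            # odd offsets 1, 3, ..., 2M-1
    A = np.vstack([b ** (2 * j - 1) for j in range(1, M + 1)])
    rhs = np.zeros(M)
    rhs[0] = 1.0                                       # match f'; cancel f''', f''''' ...
    return np.linalg.solve(A, rhs)

c = sfd_coeffs(2)                                      # classic 4th-order: 9/8, -1/24

# Numerical wavenumber of the stencil: accurate at small kh, dispersive near Nyquist.
h = 1.0
k = np.linspace(0.01, np.pi, 200)
k_num = (2 / h) * sum(cm * np.sin((2 * m - 1) * k * h / 2) for m, cm in enumerate(c, 1))
print(c, f"relative error at kh=0.01: {abs(k_num[0] / k[0] - 1):.1e}")
```

The gap between k_num and k at large kh is exactly the numerical dispersion the optimized ESFD/ISFD coefficients are designed to shrink.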
Institute of Scientific and Technical Information of China (English)
叶世伟; 史忠植
1995-01-01
This paper presents another necessary condition for the optimum partition of a finite set of samples. From this condition, a corresponding generalized sequential hard k-means (GSHKM) clustering algorithm is built, and many well-known clustering algorithms are found to be included in it. Under some assumptions, the well-known MacQueen's SHKM (Sequential Hard K-Means) algorithm, FSCL (Frequency Sensitive Competitive Learning) algorithm and RPCL (Rival Penalized Competitive Learning) algorithm are derived. It is shown that FSCL in fact still belongs to the class of GSHKM clustering algorithms and is more suitable for producing means of a K-partition of sample data, which is illustrated by numerical experiment. Meanwhile, some improvements on these algorithms are also given.
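MacQueen's SHKM update, the special case named above, can be sketched as a one-pass running-mean rule: each sample moves only its winning centroid. The two-cluster data and the deterministic seeding below are illustrative choices for the sketch, not the paper's experiments.

```python
import numpy as np

def shkm(X, init):
    """Sequential hard k-means: the winning centroid takes a 1/n running-mean step."""
    centers = init.astype(float).copy()
    counts = np.ones(len(centers))
    for x in X:
        j = np.argmin(((centers - x) ** 2).sum(axis=1))  # winner-take-all assignment
        counts[j] += 1
        centers[j] += (x - centers[j]) / counts[j]       # running mean of assigned samples
    return centers

rng = np.random.default_rng(6)
A = rng.normal(0, 0.3, (200, 2))         # cluster around (0, 0)
B = rng.normal(3, 0.3, (200, 2))         # cluster around (3, 3)
X = np.vstack([A, B])
centers = shkm(X, init=X[[0, 200]])      # one seed per cluster, for determinism
print(centers.round(2))
```

FSCL and RPCL modify exactly this winner-selection step (frequency-weighted distances, rival penalization), which is how they fall out of the GSHKM framework as special cases.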
Directory of Open Access Journals (Sweden)
Carlos A. L. Pires
2013-02-01
The Minimum Mutual Information (MinMI) principle provides the least committed, maximum-joint-entropy (ME) inferential law that is compatible with prescribed marginal distributions and empirical cross constraints. Here, we estimate MI bounds (the MinMI values) generated by constraining sets Tcr comprising mcr linear and/or nonlinear joint expectations, computed from samples of N iid outcomes. Marginals (and their entropy) are imposed by single morphisms of the original random variables. N-asymptotic formulas are given for the distribution of cross expectation's estimation errors, the MinMI estimation bias, its variance and distribution. A growing Tcr leads to an increasing MinMI, converging eventually to the total MI. Under N-sized samples, the MinMI increment relative to two encapsulated sets Tcr1 ⊂ Tcr2 (with numbers of constraints mcr1 ...
Markel, Vadim A
2013-01-01
Reflection and refraction of electromagnetic waves by artificial periodic composites (metamaterials) can be accurately modeled by an effective medium theory only if the boundary of the medium is explicitly taken into account and the two effective parameters of the medium -- the index of refraction and the impedance -- are correctly determined. Theories that consider infinite periodic composites do not satisfy the above condition. As a result, they cannot model reflection and transmission by finite samples with the desired accuracy and are not useful for design of metamaterial-based devices. As an instructive case in point, we consider the "current-driven" homogenization theory, which has recently gained popularity. We apply this theory to the case of one-dimensional periodic medium wherein both exact and homogenization results can be obtained analytically in closed form. We show that, beyond the well-understood zero-cell limit, the current-driven homogenization result is inconsistent with the exact reflection...
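The one-dimensional periodic medium mentioned above is exactly solvable with characteristic (transfer) matrices; the sketch below computes the reflectance of a finite quarter-wave stack and shows how it grows with the number of periods. It follows the standard Born-and-Wolf matrix formalism at normal incidence, not the current-driven homogenization procedure being critiqued, and the layer indices are illustrative.

```python
import numpy as np

def stack_reflectance(layers, n_in, n_out, lam):
    """Reflectance of a layered stack at normal incidence via characteristic
    matrices; layers = [(n, d), ...] listed from the incidence side."""
    k0 = 2 * np.pi / lam
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = n * k0 * d
        L = np.array([[np.cos(delta), -1j * np.sin(delta) / n],
                      [-1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ L
    m11, m12 = M[0]
    m21, m22 = M[1]
    r = ((m11 + m12 * n_out) * n_in - (m21 + m22 * n_out)) / \
        ((m11 + m12 * n_out) * n_in + (m21 + m22 * n_out))
    return abs(r) ** 2

lam = 1.0
pair = [(1.5, lam / (4 * 1.5)), (2.5, lam / (4 * 2.5))]   # quarter-wave bilayer
R2 = stack_reflectance(pair * 2, 1.0, 1.0, lam)
R10 = stack_reflectance(pair * 10, 1.0, 1.0, lam)
print(f"R(2 periods) = {R2:.3f}, R(10 periods) = {R10:.3f}")
```

Because the exact finite-sample response depends so strongly on the number of periods and on where the boundary cuts the unit cell, an effective-medium description that ignores the boundary cannot reproduce these reflection coefficients, which is the abstract's central point.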
Energy Technology Data Exchange (ETDEWEB)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China)
2015-07-07
Recent research has shown that the yield strength of metals increases steeply as sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation sources and the dislocation pile-up length in the single-crystal micro-pillars. A Hall–Petch-type relation holds even in a microscale single crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups is a significant factor in the size effect. They also indicate that starvation of dislocation sources is another cause of the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation pile-up effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts sample strengths that agree well with experimental data, and it gives a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.
Wen, Xin-Xin; Xu, Chao; Zong, Chun-Lin; Feng, Ya-Fei; Ma, Xiang-Yu; Wang, Fa-Qi; Yan, Ya-Bo; Lei, Wei
2016-07-01
Micro-finite element (μFE) models have been widely used to assess the biomechanical properties of trabecular bone. How to choose a proper sample volume of trabecular bone, one that predicts the real biomechanical properties while keeping calculation time low, is an open question. The purpose of this study was therefore to investigate the relationship between different sample volumes and the apparent elastic modulus (E) calculated from μFE models. Five human lumbar vertebral bodies (L1-L5) were scanned by micro-CT. Cubic concentric samples of different lengths were constructed as the experimental groups, and the largest possible volumes of interest (VOI) were constructed as the control group. A direct voxel-to-element approach was used to generate the μFE models, and steel layers were added to the superior and inferior surfaces to mimic axial compression tests. A 1% axial strain was prescribed at the top surface of the model to obtain the E values. ANOVA tests were performed to compare the E values from the different VOIs against those of the control group. Nonlinear curve fitting was performed to study the relationship between volume and E values. Larger cubic VOIs included more nodes and elements, and more CPU time was needed for calculation. E values showed a descending tendency as the length of the cubic VOI decreased. When the volume of the VOI was smaller than 7.34 mm³, E values were significantly different from the control group. The fitted function showed that E values approached an asymptotic value with increasing VOI length. Our study demonstrated that the apparent elastic modulus calculated from μFE models is affected by sample volume: E values descend as the length of the cubic VOI decreases. A sample volume no smaller than 7.34 mm³ is sufficient, and time-saving, for the calculation of E.
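The saturating relationship between VOI size and apparent modulus described above can be sketched with a simple least-squares fit. The exponential-saturation form E(L) = E_inf·(1 − exp(−L/τ)), the grid-search fitter, and the synthetic data are all assumptions for illustration, not the study's published model:

```python
import math

def fit_asymptote(lengths, e_values):
    """Grid-search least-squares fit of E(L) = E_inf * (1 - exp(-L / tau)).

    A minimal sketch of the kind of asymptotic curve fitting the study
    describes; the functional form and parameter grids are assumptions.
    """
    best = None
    for e_inf in [x * 0.05 for x in range(1, 201)]:      # 0.05 .. 10.0
        for tau in [x * 0.1 for x in range(1, 101)]:     # 0.1 .. 10.0
            sse = sum((e_inf * (1 - math.exp(-l / tau)) - e) ** 2
                      for l, e in zip(lengths, e_values))
            if best is None or sse < best[0]:
                best = (sse, e_inf, tau)
    return best[1], best[2]

# Synthetic data: E rises toward an asymptote of 3.0 as VOI length grows.
lengths = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
e_values = [3.0 * (1 - math.exp(-l / 1.5)) for l in lengths]
e_inf, tau = fit_asymptote(lengths, e_values)
```

With noisy experimental E values, the fitted E_inf estimates the plateau modulus and τ indicates how large a VOI must be before E stops changing appreciably.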
Covo, Shai
2010-01-01
Original paper: We revisit the probability that any two consecutive events in a Poisson process N on [0,t] are separated by a time interval which is greater than s(
Trynin, Alexandr Yu
2009-12-01
Classes of functions are described in the space of continuous functions f defined on the interval [0,π] and vanishing at its end-points for which there is pointwise and approximate uniform convergence of the Lagrange-type operators S_λ(f,x) = Σ_{k=0}^{n} [y(x,λ) / (y'(x_{k,λ})(x − x_{k,λ}))] f(x_{k,λ}). These operators involve the solutions y(x,λ) of the Cauchy problem for the equation y'' + (λ − q_λ(x))y = 0, where q_λ ∈ V_{ρ_λ}[0,π] (here V_{ρ_λ}[0,π] is the ball of radius ρ_λ = o(√λ / ln λ) in the space of functions of bounded variation vanishing at the origin) and y(x_{k,λ}) = 0. Several modifications of this operator are proposed which allow an arbitrary continuous function on [0,π] to be approximated uniformly. Bibliography: 40 titles.
Institute of Scientific and Technical Information of China (English)
韩志杰; 王璋奇
2012-01-01
Considering the uncertainty of turbine blade design parameters, the static response of a blade was studied based on non-probabilistic reliability theory and the interval finite element method. To avoid the correlation possibly existing among the coefficients of the interval finite element equations determined by uncertain parameters, the nonlinear interval stiffness matrix was linearized using a Taylor expansion, while the interval finite element equations were solved by an improved iterative algorithm so as to obtain the fluctuation range of the blade's static structural response (stress and deformation). Combined with examples, a comparison was made between the static responses obtained by the analytical method and by the improved iterative algorithm. Results show that the improved iterative algorithm can satisfy the requirements of actual engineering projects and may help to quantitatively determine the reliability of the blade.
Directory of Open Access Journals (Sweden)
Md. Maksudul Hasan
2013-02-01
Full Text Available Premature ventricular contractions (PVCs) are premature heartbeats originating from the ventricles of the heart; they occur before the regular heartbeat. Most fractal-analysis mathematical models produce intractable solutions, but some studies have applied the fractal dimension (FD) to quantify cardiac abnormality: based on FD change, different abnormalities present in the electrocardiogram (ECG) can be identified. This paper presents the use of Poincaré plot indexes and sample entropy (SE) analyses of heart rate variability (HRV) from short-term ECG recordings as a screening tool for PVC. Poincaré plot indexes and the SE measure are used to analyze the variability and complexity of HRV. A clear reduction of standard deviation (SD) projections in the Poincaré plot pattern was observed, with a significant difference in SD between healthy subjects and PVC subjects. Finally, a comparison is shown for FD, SE and Poincaré plot parameters.
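The SD projections of a Poincaré plot mentioned above have standard definitions: SD1 measures spread across the identity line (short-term variability), SD2 spread along it (long-term variability). A minimal sketch, with an invented RR-interval series:

```python
import math

def poincare_sd(rr):
    """SD1/SD2 descriptors of the Poincaré plot of successive RR intervals.

    Each pair (rr[n], rr[n+1]) is rotated 45 degrees; SD1 is the standard
    deviation perpendicular to the identity line, SD2 along it. The RR
    series used below is made up for illustration.
    """
    diffs = [(b - a) / math.sqrt(2) for a, b in zip(rr, rr[1:])]
    sums = [(b + a) / math.sqrt(2) for a, b in zip(rr, rr[1:])]

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

    return sd(diffs), sd(sums)

rr_ms = [812, 790, 804, 795, 820, 788, 801, 815, 793, 808]  # RR intervals, ms
sd1, sd2 = poincare_sd(rr_ms)
```

A reduction of SD1/SD2, as reported for PVC subjects, shows up directly as a tighter cloud of points around the identity line.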
Institute of Scientific and Technical Information of China (English)
赵庆乐; 吴怀春; 李海燕; 张世红
2011-01-01
In recent years, cyclostratigraphy has been successfully applied to dating strata and to recognizing possible astronomical forcing of major geological events. Sampling is one of the most important steps in cyclostratigraphic analysis, which commonly relies on geophysical or geochemical paleoclimate proxies. If the sampling frequency is too high, the workload of measurement and computation increases significantly and random or other non-climatic noise is introduced; if it is too low, the Milankovitch components contained in the record may not be recognized. To identify an optimal sampling interval, we computed power spectra for theoretical daily insolation data of the 80-100 Ma interval and for two measured sections at three sampling intervals (a dense interval, and intervals of about one quarter and one half of the sediment thickness deposited during one precession cycle) and compared the results. Provided the sampling theorem is satisfied, a sampling interval of about half the sediment thickness of one precession cycle recovers all the Milankovitch signals with the least workload, and is therefore the optimal sampling interval for cyclostratigraphic analysis. In practice, this optimal interval should be determined from the mean sedimentation rate.
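The trade-off discussed above can be illustrated with a pure-Python DFT: a record sampled at 2.5 points per cycle of an assumed 20 kyr precession signal (much coarser than dense sampling, yet still above the Nyquist limit) recovers the cycle's period. Signal, period, and record length are invented for the sketch:

```python
import math

def dominant_period(samples, dt):
    """Return the period of the strongest DFT component (brute-force DFT).

    A toy check of the abstract's point: a coarse sampling interval that
    still satisfies the sampling theorem recovers the orbital cycle.
    """
    n = len(samples)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return n * dt / best_k

period = 20.0                       # assumed precession period, kyr
dt = period / 2.5                   # coarse sampling: 2.5 points per cycle
t = [i * dt for i in range(50)]     # 400 kyr synthetic record
series = [math.sin(2 * math.pi * ti / period) for ti in t]
recovered = dominant_period(series, dt)
```

Halving dt again would only multiply the workload; the dominant period is already resolved at the coarse spacing.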
Directory of Open Access Journals (Sweden)
Javid Shabbir
2012-01-01
Full Text Available In this paper we propose a combined exponential ratio-type estimator of the finite population mean utilizing information on the auxiliary attribute(s) under non-response. Expressions for the bias and MSE of the proposed estimator are derived up to the first order of approximation. An empirical study is carried out to observe the performance of the estimators.
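The basic building block of such estimators can be sketched as the classical ratio-type estimator using an auxiliary attribute; the paper's combined *exponential* form under non-response is more elaborate and is not reproduced here. Data and the population proportion P are invented:

```python
def ratio_estimator(y, phi, P):
    """Classical ratio-type estimator of a finite-population mean using an
    auxiliary 0/1 attribute: ybar_r = ybar * (P / p).

    p is the sample proportion possessing the attribute, P the known
    population proportion. This is a hedged sketch of the general idea,
    not the estimator proposed in the paper.
    """
    n = len(y)
    ybar = sum(y) / n
    p = sum(phi) / n
    return ybar * (P / p)

y = [12.0, 15.0, 11.0, 14.0, 13.0]   # study variable
phi = [1, 1, 0, 1, 1]                # auxiliary attribute (0/1)
est = ratio_estimator(y, phi, P=0.7)
```

Because p (here 0.8) exceeds the population proportion P = 0.7, the estimate is shrunk below the plain sample mean of 13.0.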
Small Sample Interval Estimation of a Non-normal Population
Institute of Scientific and Technical Information of China (English)
杭国明; 祝国强
2013-01-01
When the population is non-normal and the sample is small, methods such as exact probability calculation, Fisher's normal approximation, and the Chebyshev inequality can be used to determine confidence intervals for unknown population parameters, according to the particular conditions of the population.
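The Chebyshev route mentioned above can be sketched as follows. Chebyshev's inequality gives P(|X − μ| ≥ kσ) ≤ 1/k², so k = 1/√α yields at least 1 − α coverage without any normality assumption; applying it to the sample mean with an estimated variance, as done here, is itself an approximation, and the data are invented:

```python
import math

def chebyshev_ci(sample, alpha=0.05):
    """Distribution-free (conservative) confidence interval for the mean
    via Chebyshev's inequality, k = 1/sqrt(alpha).

    A sketch of the small-sample, non-normal use case the abstract
    describes; plugging in the sample variance is an approximation.
    """
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    k = 1 / math.sqrt(alpha)
    half = k * math.sqrt(var / n)
    return mean - half, mean + half

lo, hi = chebyshev_ci([4.1, 5.0, 3.8, 4.6, 4.4, 4.9, 4.2])
```

The interval is wider than a normal-theory one at the same α, which is the price of making no distributional assumption.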
Directory of Open Access Journals (Sweden)
Angelo Montanari
2014-08-01
Full Text Available We introduce the synthesis problem for Halpern and Shoham's modal logic of intervals extended with an equivalence relation over time points, abbreviated HSeq. In analogy to the case of monadic second-order logic of one successor, the considered synthesis problem receives as input an HSeq formula phi and a finite set Sigma of propositional variables and temporal requests, and it establishes whether or not, for all possible evaluations of elements in Sigma in every interval structure, there exists an evaluation of the remaining propositional variables and temporal requests such that the resulting structure is a model for phi. We focus our attention on decidability of the synthesis problem for some meaningful fragments of HSeq, whose modalities are drawn from the set A (meets), Abar (met by), B (begins), Bbar (begun by), interpreted over finite linear orders and natural numbers. We prove that the fragment ABBbareq is decidable (non-primitive recursive hard), while the fragment AAbarBBbar turns out to be undecidable. In addition, we show that even the synthesis problem for ABBbar becomes undecidable if we replace finite linear orders by natural numbers.
Directory of Open Access Journals (Sweden)
Housila P. Singh
2013-05-01
Full Text Available In this paper a double (or two-phase) sampling version of the (Singh and Tailor, 2005) estimator is suggested, along with its properties under large-sample approximation. It is shown that the estimator due to (Kawathekar and Ajgaonkar, 1984) is a member of the proposed class of estimators. Realistic conditions are obtained under which the proposed estimator is better than the usual unbiased estimator, the usual double-sampling ratio (tRd) and product (tPd) estimators, and the (Kawathekar and Ajgaonkar, 1984) estimator. This is also shown through an empirical study.
Wu, Zeng-Qiang; Du, Wen-Bin; Li, Jin-Yi; Xia, Xing-Hua; Fang, Qun
2015-08-01
Numerical simulation can provide valuable insights into complex microfluidic phenomena coupling mixing and diffusion processes. Herein, a novel finite element model (FEM) has been established to extract chemical reaction kinetics in a microfluidic flow injection analysis (micro-FIA) system using high-throughput sample introduction. To reduce the computational burden, finite element mesh generation is performed at different scales based on the different geometric sizes of the micro-FIA. To study the contribution of chemical reaction kinetics under non-equilibrium conditions, a pseudo-first-order chemical kinetics equation is adopted in the numerical simulations. The effect of reactant diffusion on the reaction products is evaluated, and the results demonstrate that Taylor dispersion plays a determining role in the micro-FIA system. In addition, the effects of flow velocity and injection volume on the reaction product are also simulated. The simulated results agree well with those from experiments. Although gravity-driven flow is used in the numerical model in the present study, the FEM model can also be applied to systems with other driving forces, such as pressure. The established FEM model will therefore facilitate the understanding of reaction mechanisms in micro-FIA systems and help to optimize the manifold of micro-FIA systems.
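The pseudo-first-order kinetic term such a model couples to transport can be sketched in isolation: product formation dP/dt = k(C₀ − P) integrated with explicit Euler and checked against the analytic solution P(t) = C₀(1 − e^(−kt)). Rate constant and concentration are made-up numbers, not values from the paper:

```python
import math

def euler_first_order(k, c0, t_end, dt):
    """Explicit-Euler integration of pseudo-first-order product formation,
    dP/dt = k * (c0 - P), with analytic solution P(t) = c0 * (1 - e^{-k t}).

    A minimal stand-in for the kinetic source term of the FEM model;
    k and c0 below are illustrative, not the paper's fitted values.
    """
    p, t = 0.0, 0.0
    while t < t_end - 1e-12:
        p += dt * k * (c0 - p)   # forward Euler step
        t += dt
    return p

k, c0 = 0.8, 1.0
numeric = euler_first_order(k, c0, t_end=5.0, dt=0.001)
analytic = c0 * (1 - math.exp(-k * 5.0))
```

In the full FEM this source term is evaluated per element alongside convection and diffusion, which is where Taylor dispersion enters.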
Directory of Open Access Journals (Sweden)
J. A. Chatfield
1978-01-01
Full Text Available Suppose N is a Banach space with norm |·| and R is the set of real numbers. All integrals used are of the subdivision-refinement type. The main theorem [Theorem 3] gives a representation of TH, where H is a function from R×R to N such that H(p+, p+), H(p, p+), H(p−, p−), and H(p−, p) each exist for each p, and T is a bounded linear operator on the space of all such functions H. In particular we show that TH = (I)∫_a^b f_H dα + Σ_{i=1}^∞ [H(x_{i−1}, x_{i−1}+) − H(x_{i−1}+, x_{i−1}+)] β(x_{i−1}) + Σ_{i=1}^∞ [H(x_i−, x_i) − H(x_i−, x_i−)] Θ(x_{i−1}, x_i), where each of α, β, and Θ depends only on T, α is of bounded variation, β and Θ are 0 except at countably many points, f_H is a function from R to N depending on H, and {x_i}_{i=1}^∞ denotes the points p in [a,b] for which [H(p, p+) − H(p+, p+)] ≠ 0 or [H(p−, p) − H(p−, p−)] ≠ 0. We also define an interior interval function integral and give a relationship between it and the standard interval function integral.
DEFF Research Database (Denmark)
Polyzos, Nikolaos P; Nelson, Scott M; Stoop, Dominic
2013-01-01
To investigate whether the time interval between serum antimüllerian hormone (AMH) sampling and initiation of ovarian stimulation for in vitro fertilization-intracytoplasmic sperm injection (IVF-ICSI) may affect the predictive ability of the marker for low and excessive ovarian response.
Directory of Open Access Journals (Sweden)
Arlete Maria dos Santos Fernandes
2006-10-01
...was cesarean. No difference was detected between the groups in rates of satisfaction and regret after the procedure. BACKGROUND: Brazil is a country with a high prevalence of tubal ligation, which is frequently performed at the time of delivery. In recent years, an increase in tubal reversal has been noticed, primarily among young women. OBJECTIVES: To study characteristics correlated with the procedure, determine the frequency of intrapartum tubal ligation, and measure patient satisfaction and tubal sterilization regret rates in a sample of post-tubal-ligation patients. METHODS: Three hundred and thirty-five women underwent tubal ligation. The variables studied were related to the procedure: age at tubal ligation, whether ligation was performed intrapartum (vaginal or cesarean section) or after an interval (other than the intrapartum and puerperal period), health service performing the sterilization, medical expenses paid for the procedure, reason stated for choosing the method, and factors related to satisfaction/regret: desire to become pregnant after sterilization, search for treatment, and performance of tubal ligation reversal. The women were divided into two groups, one undergoing ligation in the intrapartum period and a second ligated after an interval, to evaluate the association between variables by using Fisher's exact test and chi-squared calculation with Yates' correction. The study was approved by the Ethics Committee of the institution. RESULTS: There was a predominance of Caucasian women over 35 years of age, married, and with a low level of education, of whom 43.5% had undergone sterilization before 30 years of age. Two hundred and forty-five women underwent intrapartum tubal ligation, 91.2% of them by cesarean delivery and 44.6% by vaginal delivery. In the groups undergoing intrapartum tubal ligation and ligation after an interval, 82.0% and 80.8%, respectively, reported satisfaction with the method. Although 14.6% expressed a desire to become pregnant at some time after
Confidence Intervals from One Observation
Rodriguez, Carlos C
2008-01-01
Robert Machol's surprising result, that from a single observation it is possible to have finite-length confidence intervals for the parameters of location-scale models, is reproduced and extended. Two previously unpublished modifications are included. First, Herbert Robbins' nonparametric confidence interval is obtained. Second, I introduce a technique for obtaining confidence intervals of finite length for the scale parameter in the logarithmic metric. Keywords: Theory/Foundations, Estimation, Prior Distributions, Non-parametrics & Semi-parametrics, Geometry of Inference, Confidence Intervals, Location-Scale Models
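The flavor of the result can be checked by simulation with a toy one-observation interval [X − |X|, X + |X|] for the location of N(θ, σ²): a short calculation shows its coverage is Φ(|θ|/(2σ)) ≥ 1/2 whatever σ is. This is an illustrative construction in the same spirit, not Machol's or Robbins's actual interval:

```python
import math
import random

def coverage_single_obs(theta, sigma, trials=20000, seed=1):
    """Monte Carlo coverage of the one-observation interval [X-|X|, X+|X|]
    for the location theta of a N(theta, sigma^2) observation.

    Coverage equals Phi(|theta|/(2*sigma)), hence is at least 1/2 for any
    sigma -- a toy version of the 'finite CI from one observation' idea.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.gauss(theta, sigma)
        if x - abs(x) <= theta <= x + abs(x):
            hits += 1
    return hits / trials

cov = coverage_single_obs(theta=1.0, sigma=1.0)  # expected ~ Phi(0.5) ~ 0.69
```

The guaranteed-coverage property holds uniformly in σ, which is what makes single-observation intervals surprising.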
Borges, I C; Andrade, D C; Vilas-Boas, A-L; Fontoura, M-S H; Laitinen, H; Ekström, N; Adrian, P V; Meinke, A; Cardoso, M-R A; Barral, A; Ruuskanen, O; Käyhty, H; Nascimento-Carvalho, C M
2015-08-01
We evaluated the effects of combining different numbers of pneumococcal antigens, pre-existing antibody levels, sampling interval, age, and duration of illness on the detection of IgG responses against eight Streptococcus pneumoniae proteins, three Haemophilus influenzae proteins, and five Moraxella catarrhalis proteins in 690 children with pneumonia. Serological tests were performed on acute and convalescent serum samples with a multiplexed bead-based immunoassay. The median sampling interval was 19 days, the median age was 26.7 months, and the median duration of illness was 5 days. The rate of antibody responses was 15.4 % for at least one pneumococcal antigen, 5.8 % for H. influenzae, and 2.3 % for M. catarrhalis. The rate of antibody responses against each pneumococcal antigen varied from 3.5 to 7.1 %. By multivariate analysis, pre-existing antibody levels showed a negative association with the detection of antibody responses against pneumococcal and H. influenzae antigens; the sampling interval was positively associated with the detection of antibody responses against pneumococcal and H. influenzae antigens. A sampling interval of 3 weeks was the optimal cut-off for the detection of antibody responses against pneumococcal and H. influenzae proteins. Duration of illness was negatively associated with antibody responses against PspA. Age did not influence antibody responses against the investigated antigens. In conclusion, serological assays using combinations of different pneumococcal proteins detect a higher rate of antibody responses against S. pneumoniae compared to assays using a single pneumococcal protein. Pre-existing antibody levels and sampling interval influence the detection of antibody responses against pneumococcal and H. influenzae proteins. These factors should be considered when determining pneumonia etiology by serological methods in children.
2005-01-01
This self-paced narrated tutorial covers the following about Finite Automata: Uses, Examples, Alphabet, strings, concatenation, powers of an alphabet, Languages (automata and formal languages), Deterministic finite automata (DFA) SW4600 Automata, Formal Specification and Run-time Verification
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
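The moment-matching dynamics described above can be sketched on the smallest possible maximum entropy model: one spin x ∈ {−1, +1} with p(x) = exp(h·x)/Z, whose mean is tanh(h). The gradient of the average log-likelihood is exactly data_mean − tanh(h). This one-parameter toy omits everything the paper is about at scale (pairwise couplings, Gibbs sampling, space rectification):

```python
import math

def fit_maxent_spin(data_mean, lr=0.5, steps=200):
    """Gradient ascent on the log-likelihood of the one-spin maximum
    entropy model p(x) = exp(h*x)/Z with x in {-1, +1}, so <x> = tanh(h).

    The update h += lr * (data_mean - <x>_model) is the moment-matching
    condition of maxent learning; a toy, not the paper's algorithm.
    """
    h = 0.0
    for _ in range(steps):
        h += lr * (data_mean - math.tanh(h))
    return h

h = fit_maxent_spin(data_mean=0.4)  # converges to h = atanh(0.4)
```

In larger models the model moment ⟨x⟩ is no longer available in closed form, which is where Gibbs sampling and its stochasticity, the subject of the abstract, come in.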
Statistical intervals a guide for practitioners
Hahn, Gerald J
2011-01-01
Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
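One of the interval types the book distinguishes, the prediction interval for a single future observation, can be sketched as x̄ ± t·s·√(1 + 1/n) under approximate normality. The t critical value is supplied by hand here (a table lookup standing in for a quantile function), and the data are invented:

```python
import math

def prediction_interval(sample, t_crit):
    """Two-sided prediction interval for one future observation:
    xbar +/- t * s * sqrt(1 + 1/n), assuming approximate normality.

    t_crit must be supplied by the caller (e.g. ~2.262 for 95% coverage
    with n = 10, i.e. 9 degrees of freedom).
    """
    n = len(sample)
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    half = t_crit * s * math.sqrt(1 + 1 / n)
    return xbar - half, xbar + half

data = [9.8, 10.2, 10.0, 9.9, 10.1, 10.3, 9.7, 10.0, 10.1, 9.9]
lo, hi = prediction_interval(data, t_crit=2.262)   # 95%, df = 9
```

Note the √(1 + 1/n) factor: a prediction interval must cover a new observation, not just the mean, so it stays wide even as n grows.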
Cao, Youfang; Liang, Jie
2013-07-14
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively
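The core idea of biased sampling for rare events can be shown in its simplest form: importance sampling of P(Z > 4) for a standard normal Z, with draws taken from a proposal shifted into the rare region and reweighted by the density ratio. This illustrates only the reweighting principle, none of ABSIS's barrier detection or adaptivity:

```python
import math
import random

def rare_event_is(threshold=4.0, n=100000, seed=7):
    """Importance-sampling estimate of P(Z > threshold), Z ~ N(0, 1),
    using the shifted proposal N(threshold, 1).

    Each accepted draw carries the weight phi(x)/q(x)
    = exp(-threshold*x + threshold^2/2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += math.exp(-threshold * x + threshold ** 2 / 2)
    return total / n

est = rare_event_is()
true_p = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # exact tail probability
```

Naive Monte Carlo at this sample size would see roughly three hits; the shifted proposal makes about half the draws informative, which is the same variance-reduction effect the weighted stochastic simulation algorithms exploit.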
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow
2013-09-01
The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
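The generic calculation underlying such databases is the nonparametric reference interval: the central 95% of a healthy reference sample. A minimal sketch with linear percentile interpolation and synthetic data; age/sex partitioning and the CLSI transfer and validation criteria the study applies are out of scope:

```python
def reference_interval(values, lower=0.025, upper=0.975):
    """Nonparametric reference interval: 2.5th and 97.5th percentiles of a
    healthy reference sample, with linear interpolation between order
    statistics. The data below are synthetic."""
    xs = sorted(values)
    n = len(xs)

    def pct(q):
        pos = q * (n - 1)
        i = int(pos)
        frac = pos - i
        if i + 1 >= n:
            return xs[i]
        return xs[i] * (1 - frac) + xs[i + 1] * frac

    return pct(lower), pct(upper)

healthy = [float(v) for v in range(100, 140)]   # 40 synthetic analyte results
lo, hi = reference_interval(healthy)
```

A "transfer" in the CLSI sense then checks whether results from a different analyzer fall acceptably within such limits rather than re-deriving them from scratch.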
Wood, David B.
2007-11-01
Between 1951 and 1992, 828 underground tests were conducted on the Nevada National Security Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada National Security Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples can not be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.
Energy Technology Data Exchange (ETDEWEB)
David B. Wood
2009-10-08
Between 1951 and 1992, underground nuclear weapons testing was conducted at 828 sites on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.
Energy Technology Data Exchange (ETDEWEB)
David B. Wood
2007-10-24
Between 1951 and 1992, 828 underground tests were conducted on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.
Harris, Janna L; Reeves, Thomas M; Phillips, Linda L
2009-10-01
In the present study we examined expression of four real-time quantitative RT-PCR reference genes commonly applied to rodent models of brain injury. Transcripts for beta-actin, cyclophilin A, GAPDH, and 18S rRNA were assessed at 2-15 days post-injury, focusing on the period of synaptic recovery. Diffuse moderate central fluid percussion injury (FPI) was contrasted with unilateral entorhinal cortex lesion (UEC), a model of targeted deafferentation. Expression in UEC hippocampus, as well as in FPI hippocampus and parietotemporal cortex was analyzed by qRT-PCR. Within-group variability of gene expression was assessed and change in expression relative to paired controls was determined. None of the four common reference genes tested was invariant across brain region, survival time, and type of injury. Cyclophilin A appeared appropriate as a reference gene in UEC hippocampus, while beta-actin was most stable for the hippocampus subjected to FPI. However, each gene may fail as a suitable reference with certain test genes whose RNA expression is targeted for measurement. In FPI cortex, all reference genes were significantly altered over time, compromising their utility for time-course studies. Despite such temporal variability, certain genes may be appropriate references if limited to single survival times. These data provide an extended baseline for identification of appropriate reference genes in rodent studies of recovery from brain injury. In this context, we outline additional considerations for selecting a qRT-PCR normalization strategy in such studies. As previously concluded for acute post-injury intervals, we stress the importance of reference gene validation for each brain injury paradigm and each set of experimental conditions.
Reference Intervals in Neonatal Hematology.
Henry, Erick; Christensen, Robert D
2015-09-01
The various blood cell counts of neonates must be interpreted in accordance with high-quality reference intervals based on gestational and postnatal age. Using very large sample sizes, we generated neonatal reference intervals for each element of the complete blood count (CBC). Knowledge of whether a patient has CBC values that are too high (above the upper reference interval) or too low (below the lower reference interval) provides important insights into the specific disorder involved and in many instances suggests a treatment plan. Copyright © 2015 Elsevier Inc. All rights reserved.
Varieties of Confidence Intervals.
Cousineau, Denis
2017-01-01
Error bars are useful for understanding data and their interrelations. Here, it is shown that confidence intervals of the mean (CIMs) can be adjusted based on whether the objective is to highlight differences between measures and on the experimental design (within- or between-group designs). Confidence intervals (CIs) can also be adjusted to take into account the sampling mechanism and the population size (if not infinite). Names are proposed to distinguish the various types of CIs and the assumptions underlying them, and how to assess their validity is explained. The various CIs presented here are easily obtained from a succession of multiplicative adjustments to the basic (unadjusted) CI width. All summary results should present a measure of precision, such as CIs, as this information is complementary to effect sizes.
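One of the multiplicative width adjustments the abstract mentions, the finite-population adjustment, can be sketched as follows. This is a hedged illustration, not the paper's full taxonomy; the sample values and population size are hypothetical.

```python
import math

def ci_mean(sample, z=1.96, pop_size=None):
    """Basic CI of the mean; optionally apply a finite-population
    correction factor (one multiplicative adjustment of the CI width)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    half = z * math.sqrt(var / n)
    if pop_size is not None:  # finite population: shrink the width
        half *= math.sqrt((pop_size - n) / (pop_size - 1))
    return mean - half, mean + half

# hypothetical sample of 6 drawn from a finite population of 60
lo, hi = ci_mean([4, 5, 6, 5, 4, 6], pop_size=60)
```

The adjusted interval is strictly narrower than the unadjusted one, reflecting the reduced sampling uncertainty when a non-negligible fraction of the population is observed.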
Conductance and Absolutely Continuous Spectrum of 1D Samples
Bruneau, L.; Jakšić, V.; Last, Y.; Pillet, C.-A.
2016-06-01
We characterize the absolutely continuous spectrum of the one-dimensional Schrödinger operators $h = -\Delta + v$ acting on $\ell^2(\mathbb{Z}_+)$ in terms of the limiting behaviour of the Landauer-Büttiker and Thouless conductances of the associated finite samples. The finite sample is defined by restricting $h$ to a finite interval $[1, L] \cap \mathbb{Z}_+$, and the conductance refers to the charge current across the sample in the open quantum system obtained by attaching independent electronic reservoirs to the sample ends. Our main result is that the conductances associated to an energy interval $I$ are non-vanishing in the limit $L \to \infty$ iff $\mathrm{sp}_{\mathrm{ac}}(h) \cap I \neq \emptyset$. We also discuss the relationship between this result and the Schrödinger Conjecture (Avila, J Am Math Soc 28:579-616, 2015; Bruneau et al., Commun Math Phys 319:501-513, 2013).
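The restriction of $h$ to the finite interval $[1, L]$ is a finite tridiagonal (Jacobi) matrix. As a hedged illustration (using one common sign convention for the discrete Laplacian, $-\Delta\psi(n) = 2\psi(n) - \psi(n-1) - \psi(n+1)$, with Dirichlet boundaries; conventions vary), the sketch below applies the finite-sample operator and checks the textbook $v = 0$ eigenpairs:

```python
import math

def apply_h(psi, v):
    """Apply h = -Delta + v restricted to [1, L] with Dirichlet
    boundary conditions (psi(0) = psi(L+1) = 0)."""
    L = len(psi)
    out = []
    for n in range(L):
        left = psi[n - 1] if n > 0 else 0.0
        right = psi[n + 1] if n < L - 1 else 0.0
        out.append(2 * psi[n] - left - right + v[n] * psi[n])
    return out

# For v = 0 the eigenvectors are sines and the eigenvalues
# 2 - 2*cos(k*pi/(L+1)), so the finite-sample spectrum fills [0, 4]
# as L -> infinity.
L, k = 8, 3
psi = [math.sin(k * math.pi * n / (L + 1)) for n in range(1, L + 1)]
E = 2 - 2 * math.cos(k * math.pi / (L + 1))
hpsi = apply_h(psi, [0.0] * L)
```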
The Gauss map on a class of interval translation mappings
Bruin, H; Troubetzkoy, S
2003-01-01
We study the dynamics of a class of interval translation maps on three intervals. We show that in this class the typical ITM is of finite type (reduces to an interval exchange transformation) and that the complement contains a Cantor set. We relate our maps to substitution subshifts. Results on
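An interval translation map partitions $[0,1)$ and translates each piece; unlike an interval exchange, the images may overlap, so the map need not be invertible. A minimal sketch with a hypothetical 3-interval example (breakpoints and shifts chosen only so the image stays inside $[0,1)$):

```python
def make_itm(breaks, shifts):
    """Interval translation map on [0,1): points in
    [breaks[i], breaks[i+1]) are translated by shifts[i]."""
    def f(x):
        for i in range(len(shifts)):
            if breaks[i] <= x < breaks[i + 1]:
                return x + shifts[i]
        raise ValueError("x outside [0,1)")
    return f

# hypothetical 3-interval ITM; iterate a sample orbit
f = make_itm([0.0, 0.3, 0.7, 1.0], [0.5, -0.3, -0.7])
orbit = [0.1]
for _ in range(5):
    orbit.append(f(orbit[-1]))
```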
Restuccia, A; Taylor, J G
1992-01-01
This is the first complete account of the construction and finiteness analysis of multi-loop scattering amplitudes for superstrings, and of the guarantee that for certain superstrings (in particular the heterotic one), the symmetries of the theory in the embedding space-time are those of the super-Poincaré group SP10 and that the multi-loop amplitudes are each finite. The book attempts to be self-contained in its analysis, although it draws on the works of many researchers. It also presents the first complete field theory for such superstrings. As such it demonstrates that gravity can be quant
Interval arithmetic in calculations
Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima
2016-10-01
Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given mathematical model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. As a whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we consider the definition of interval mathematics, investigate its properties, prove a theorem, and show the efficiency of the new interval arithmetic. Besides, we briefly review the works devoted to interval analysis and observe the basic tendencies in the development of interval analysis and interval calculations.
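The basic operations are easy to sketch: each operation returns the set of all possible results when the operands range over their intervals. The `median`/`deviation` accessors mirror the two parameters used to represent interval variables in the interval FEM work cited above. A minimal sketch, not a full interval library (no outward rounding, no division):

```python
class Interval:
    """Closed interval [lo, hi] with set-valued arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # the product interval is spanned by the endpoint products
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    @property
    def median(self):
        return (self.lo + self.hi) / 2

    @property
    def deviation(self):
        return (self.hi - self.lo) / 2

x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
s = x * y  # [-2, 6]: all products of a point in x with a point in y
```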
Hadachi, Hirotaka; Saito, Takashi
2013-04-20
Digital holographic microscopy (DHM) has been used to determine the morphology and shape of transparent objects. However, the obtained shape is often inaccurate depending on the object properties and the setup of the optical imaging system. To understand these effects, we developed a new DHM model on the basis of a hybrid pupil imaging and finite-difference time-domain method. To demonstrate this model, we compared the results of an experiment with those of a simulation using borosilicate glass microspheres and a mold with a linear step structure. The simulation and experimental results showed good agreement. We also showed how the curvature and refractive index of objects affect the accuracy of thickness measurements.
Finite Population Correction for Two-Level Hierarchical Linear Models.
Lai, Mark H C; Kwok, Oi-Man; Hsiao, Yu-Yu; Cao, Qian
2017-03-16
The research literature has paid little attention to the issue of finite population at a higher level in hierarchical linear modeling. In this article, we propose a method to obtain finite-population-adjusted standard errors of Level-1 and Level-2 fixed effects in 2-level hierarchical linear models. When the finite population at Level-2 is incorrectly assumed as being infinite, the standard errors of the fixed effects are overestimated, resulting in lower statistical power and wider confidence intervals. The impact of ignoring finite population correction is illustrated by using both a real data example and a simulation study with a random intercept model and a random slope model. Simulation results indicated that the bias in the unadjusted fixed-effect standard errors was substantial when the Level-2 sample size exceeded 10% of the Level-2 population size; the bias increased with a larger intraclass correlation, a larger number of clusters, and a larger average cluster size. We also found that the proposed adjustment produced unbiased standard errors, particularly when the number of clusters was at least 30 and the average cluster size was at least 10. We encourage researchers to consider the characteristics of the target population for their studies and adjust for finite population when appropriate. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
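The paper derives its adjustment for fixed-effect standard errors in two-level models, but the underlying shrinkage idea is the classical survey-sampling finite population correction, which can be sketched as follows (the standard-error value is hypothetical):

```python
import math

def fpc_factor(n, N):
    """Finite population correction sqrt((N - n) / (N - 1)),
    applied to a standard error when n of N population units
    are sampled; it shrinks the SE toward 0 as n approaches N."""
    return math.sqrt((N - n) / (N - 1))

se_infinite = 0.40  # hypothetical unadjusted SE of a fixed effect
se_adjusted = se_infinite * fpc_factor(30, 100)
```

With 30% of the Level-2 population sampled, the correction is already far from 1, which matches the abstract's observation that bias becomes substantial once the sample exceeds 10% of the population.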
Haemostatic reference intervals in pregnancy
DEFF Research Database (Denmark)
Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna;
2010-01-01
Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...
Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.
2008-01-01
In this paper big boss interval games are introduced and various characterizations are given. The structure of the core of a big boss interval game is explicitly described and plays an important role relative to interval-type bi-monotonic allocation schemes for such games. Specifically, each element
Cultural Consensus Theory: Aggregating Continuous Responses in a Finite Interval
Batchelder, William H.; Strashny, Alex; Romney, A. Kimball
Cultural consensus theory (CCT) consists of cognitive models for aggregating responses of "informants" to test items about some domain of their shared cultural knowledge. This paper develops a CCT model for items requiring bounded numerical responses, e.g. probability estimates, confidence judgments, or similarity judgments. The model assumes that each item generates a latent random representation in each informant, with mean equal to the consensus answer and variance depending jointly on the informant and the location of the consensus answer. The manifest responses may reflect biases of the informants. Markov Chain Monte Carlo (MCMC) methods were used to estimate the model, and simulation studies validated the approach. The model was applied to an existing cross-cultural dataset involving native Japanese and English speakers judging the similarity of emotion terms. The results sharpened earlier studies that showed that both cultures appear to have very similar cognitive representations of emotion terms.
Directory of Open Access Journals (Sweden)
Karla D. S. Ribeiro
2007-08-01
OBJECTIVE: To evaluate retinol concentration in colostrum samples collected with a 24 hour interval. METHODS: Colostrum was collected from 24 recently-delivered mothers at two points in time, 0 hours (T0) and 24 hours later (T24), and a pooled sample of colostrum from T0 and T24 was also analyzed. Fat content was determined by creamatocrit, and retinol assayed by high performance liquid chromatography. RESULTS: When expressed in terms of volume of milk (µg/dL), retinol levels varied across T0, T24 and the pooled sample: 94.9±58.9, 129±78.6 and 111.9±60.4 µg/dL, respectively. However, when expressed with relation to fat content (µg/g), no significant difference was observed. CONCLUSIONS: Retinol assayed in colostrum from a single sample should not be used as an indicator of vitamin A nutritional status, due to the great variation between samples collected at different times. It is suggested that results be expressed per gram of fat, in order to minimize variations resulting from the volume of milk.
Brassey, Charlotte A; Margetts, Lee; Kitchener, Andrew C; Withers, Philip J; Manning, Phillip L; Sellers, William I
2013-02-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σ(min)) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r(2) = 0.56). Under bending, FEA values of maximum principal stress (σ(max)) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r(2) = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τ(torsion) were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations.
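The classic beam theory baseline the study compares against is a one-line formula: the maximum bending stress at a cross-section is $\sigma = Mc/I$, with $M$ the bending moment, $c$ the distance from the neutral axis to the surface, and $I$ the second moment of area. A hedged sketch, idealizing a long-bone midshaft as a solid circular beam (all input values hypothetical; real bones are exactly the irregular structures for which the abstract recommends FEA):

```python
import math

def bending_stress(M, c, I):
    """Classic beam theory: maximum bending stress sigma = M*c/I."""
    return M * c / I

def circular_section_I(r):
    """Second moment of area of a solid circular cross-section."""
    return math.pi * r ** 4 / 4

M = 50.0  # hypothetical bending moment, N*m
r = 0.01  # hypothetical midshaft radius, m
sigma = bending_stress(M, r, circular_section_I(r))  # stress in Pa
```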
Dürr, Christoph; Spieksma, Frits C R; Nobibon, Fabrice Talla; Woeginger, Gerhard J
2011-01-01
For a given set of intervals on the real line, we consider the problem of ordering the intervals with the goal of minimizing an objective function that depends on the exposed interval pieces (that is, the pieces that are not covered by earlier intervals in the ordering). This problem is motivated by an application in molecular biology that concerns the determination of the structure of the backbone of a protein. We present polynomial-time algorithms for several natural special cases of the problem that cover the situation where the interval boundaries are agreeably ordered and the situation where the interval set is laminar. Also the bottleneck variant of the problem is shown to be solvable in polynomial time. Finally we prove that the general problem is NP-hard, and that the existence of a constant-factor-approximation algorithm is unlikely.
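The objective can be made concrete by computing, for a given ordering, how much of each interval remains exposed. A direct sketch (not one of the paper's polynomial-time algorithms for the ordering problem itself, just the evaluation of one ordering), which keeps the already-placed pieces as a disjoint union so overlaps are not double-counted:

```python
def exposed_lengths(order):
    """For intervals processed in the given order, return the total
    length of each interval's exposed pieces, i.e. the parts not
    covered by earlier intervals in the ordering."""
    covered = []  # sorted, pairwise-disjoint covered pieces
    result = []
    for lo, hi in order:
        # exposed part = |[lo,hi]| minus its overlap with the union
        overlap = sum(max(0, min(hi, c_hi) - max(lo, c_lo))
                      for c_lo, c_hi in covered)
        result.append((hi - lo) - overlap)
        # merge [lo,hi] into the disjoint covered union
        apart = [p for p in covered if p[1] < lo or p[0] > hi]
        touching = [p for p in covered if not (p[1] < lo or p[0] > hi)]
        merged = (min([lo] + [p[0] for p in touching]),
                  max([hi] + [p[1] for p in touching]))
        covered = sorted(apart + [merged])
    return result

lengths = exposed_lengths([(0, 2), (1, 3), (0, 4)])
```

The exposed lengths always sum to the length of the union, whatever the ordering; only their distribution across intervals changes, which is what the objective function acts on.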
Kolen, A.W.J.; Lenstra, J.K.; Papadimitriou, C.H.; Spieksma, F.C.R.
2007-01-01
In interval scheduling, not only the processing times of the jobs but also their starting times are given. This article surveys the area of interval scheduling and presents proofs of results that have been known within the community for some time. We first review the complexity and approximability o
Ph.H.B.F. Franses (Philip Hans); B.L.K. Vroomen (Björn)
2003-01-01
Duration intervals measure the dynamic impact of advertising on sales. More precisely, the p per cent duration interval measures the time lag between the advertising impulse and the moment that p per cent of its effect has decayed. In this paper, we derive an expression for the duration
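For the simplest textbook case, a geometrically decaying (Koyck-type) advertising effect, the p per cent duration interval has a closed form: the remaining fraction of the effect after lag $t$ is $\lambda^t$, so $p$ has decayed when $t = \ln(1-p)/\ln\lambda$. This is only an illustration under that assumption; the paper derives expressions for more general models.

```python
import math

def duration_interval(p, decay):
    """p-percent duration interval for a geometrically decaying
    effect: smallest t with 1 - decay**t >= p."""
    return math.log(1 - p) / math.log(decay)

# with a hypothetical decay rate of 0.5 per period, 90% of the
# advertising effect has decayed after about 3.3 periods
t90 = duration_interval(0.9, 0.5)
```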
Jampani, Krishnam Raju
2010-01-01
In a recent paper, we introduced the simultaneous representation problem (defined for any graph class C) and studied the problem for chordal, comparability and permutation graphs. For interval graphs, the problem is defined as follows. Two interval graphs G_1 and G_2, sharing some vertices I (and the corresponding induced edges), are said to be `simultaneous interval graphs' if there exist interval representations R_1 and R_2 of G_1 and G_2, such that any vertex of I is mapped to the same interval in both R_1 and R_2. Equivalently, G_1 and G_2 are simultaneous interval graphs if there exist edges E' between G_1-I and G_2-I such that G_1 \\cup G_2 \\cup E' is an interval graph. Simultaneous representation problems are related to simultaneous planar embeddings, and have applications in any situation where it is desirable to consistently represent two related graphs, for example: interval graphs capturing overlaps of DNA fragments of two similar organisms; or graphs connected in time, where one is an updated versi...
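The object under discussion, an interval graph, is simply the intersection graph of a set of intervals. A minimal sketch that builds one from named intervals (constructing it in this direction is easy; the paper's problem, deciding whether two such graphs admit consistent representations, is the hard direction):

```python
def interval_graph_edges(intervals):
    """Interval graph of named intervals: vertices are the names,
    with an edge whenever the two intervals intersect."""
    names = list(intervals)
    edges = set()
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            a, b = intervals[u], intervals[v]
            if a[0] <= b[1] and b[0] <= a[1]:  # intervals overlap
                edges.add(frozenset((u, v)))
    return edges

edges = interval_graph_edges({"a": (0, 2), "b": (1, 3), "c": (4, 5)})
```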
An Optimization-Based Approach to Calculate Confidence Interval on Mean Value with Interval Data
Directory of Open Access Journals (Sweden)
Kais Zaman
2014-01-01
In this paper, we propose a methodology for the construction of confidence intervals on mean values with interval data for input variables in uncertainty analysis and design optimization problems. The construction of a confidence interval with interval data is known as a combinatorial optimization problem. Finding confidence bounds on the mean with interval data has generally been considered an NP-hard problem, because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the confidence interval on mean values with interval data. With numerical experimentation, we show that the proposed confidence bound algorithms scale polynomially with respect to an increasing number of intervals. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. As against the current practice for design optimization with interval data, which typically implements the constraints on interval variables through the computation of bounds on mean values from the sampled data, the proposed construction of confidence intervals enables a more complete implementation of design optimization under interval uncertainty.
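The hard part the paper addresses is the confidence interval; the point-estimate bounds on the mean itself are the easy monotone special case and make a useful baseline. Since the mean is increasing in each observation, its extremes over all values consistent with the interval data are attained at the endpoints:

```python
def mean_bounds(intervals):
    """Exact bounds on the sample mean when each observation is only
    known to lie in an interval [a, b]: the mean is monotone in each
    observation, so the extremes are attained at the endpoints."""
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

lo, hi = mean_bounds([(1, 2), (1.5, 3), (2, 2.5)])
```

No such shortcut exists for confidence bounds, because the sample variance is not monotone in the observations; that is where the combinatorial search, and the paper's continuous-optimization replacement for it, comes in.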
INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL
Directory of Open Access Journals (Sweden)
T. A. Kharkovskaia
2014-05-01
The method of interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters consists in the following: if there is an uncertainty restraint for the state values of the system, limiting the initial conditions of the system and the set of admissible values for the vector of unknown parameters and inputs, then the interval existence condition for the estimates of the system state variables, containing the actual state at a given time, needs to hold over the whole considered time segment as well. Conditions for the design of interval observers for the considered class of systems are shown. They are: boundedness of the input and state, the existence of a majorizing function defining the uncertainty vector for the system, Lipschitz continuity or finiteness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main condition for the design of such a device is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is considered. In order to ensure the property of cooperativity for the interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of a biological reactor. Possible applications of these interval estimation systems are the spheres of robust control, where the presence of various types of uncertainties in the system dynamics is assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.
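The cooperativity requirement can be illustrated on the simplest possible case: a stable scalar system $x^+ = ax + u$ with $0 < a < 1$ (scalar systems are trivially cooperative) and an unknown input known only to lie in $[u_{lo}, u_{hi}]$. This toy sketch is not the paper's method for nonlinear parametric systems; it only shows the bracketing property, that the interval estimate always contains the true state:

```python
def interval_observer_step(x_lo, x_hi, a, u_lo, u_hi):
    """One step of an elementary interval observer for x+ = a*x + u,
    0 < a < 1, with unknown input u in [u_lo, u_hi]: propagate the
    bracket so it keeps containing the true state."""
    return a * x_lo + u_lo, a * x_hi + u_hi

# true state starts inside the initial bracket and stays inside it
x, (x_lo, x_hi) = 0.5, (0.0, 1.0)
for _ in range(20):
    u = 0.1                 # true (unknown to the observer) input
    x = 0.8 * x + u
    x_lo, x_hi = interval_observer_step(x_lo, x_hi, 0.8, 0.05, 0.15)
```

The bracket width obeys $w^+ = a\,w + (u_{hi} - u_{lo})$, so it converges to a finite value rather than exploding, which is the practical point of the existence conditions listed in the abstract.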
Institute of Scientific and Technical Information of China (English)
王建平; 赵高丽; 胡孟杰; 陈伟
2014-01-01
Multi-target tracking is a hot topic of current research on wireless sensor networks (WSN). Based on an adaptive sampling interval, we propose a multi-target tracking algorithm that saves energy and prevents tracking loss in WSN. We construct the targets' motion model using the position metadata, and predict the targets' motion state based on an extended Kalman filter (EKF). We adopt the probability density function (PDF) of the estimated targets to establish the tracking cluster. By defining the tracking center, we use the Mahalanobis distance to quantify the election process of the main node (MN). We compute the targets' impact strength from the targets' importance and their distance to the MN node, and then use it to build the tracking algorithm. Simulation experiments in MATLAB show that the proposed algorithm can accurately predict the trajectory of the targets and adjust the sampling interval while the targets are moving. Analysis of the experimental data shows that the proposed algorithm improves the tracking precision and markedly reduces the energy consumption of the WSN.
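The EKF prediction step mentioned above reduces, in the scalar linear case, to the classic Kalman predict/update recursion. A minimal sketch of that special case (constant-position model; all noise values hypothetical, and none of the paper's clustering or node-election logic is included):

```python
def kalman_step(x, P, z, q, r):
    """One predict/update cycle of a scalar Kalman filter:
    x is the state estimate, P its variance, z the measurement,
    q the process noise, r the measurement noise."""
    P = P + q                  # predict: uncertainty grows
    K = P / (P + r)            # Kalman gain
    x = x + K * (z - x)        # update toward the measurement
    P = (1 - K) * P            # uncertainty shrinks after update
    return x, P

x, P = 0.0, 1.0
for z in [1.0, 1.2, 0.9, 1.1]:  # hypothetical position measurements
    x, P = kalman_step(x, P, z, q=0.01, r=0.5)
```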
Finite element model updating for large span spatial steel structure considering uncertainties
Institute of Scientific and Technical Information of China (English)
TENG Jun; ZHU Yan-huang; ZHOU Feng; LI Hui; OU Jin-ping
2010-01-01
In order to establish a baseline finite element model for structural health monitoring, a new method of model updating was proposed after analyzing the uncertainties of the measured data and the error of the finite element model. In the new method, the finite element model was replaced by a multi-output support vector regression machine (MSVR). The interval variables of the measured frequency were sampled by the Latin hypercube sampling method. The frequency samples were used as inputs to the trained MSVR, whose outputs were the target values of the design parameters. The steel structure of the National Aquatic Center for the Beijing Olympic Games was introduced as a case study for finite element model updating. The results show that the proposed method avoids complicated calculation; both the estimated values and the associated uncertainties of the structural parameters can be obtained. The static and dynamic characteristics of the updated finite element model are in good agreement with the measured data.
The fallacy of placing confidence in confidence intervals
Morey, Richard D.; Hoekstra, Rink; Rouder, Jeffrey N.; Lee, Michael D.; Wagenmakers, Eric-Jan
2016-01-01
Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true
BIRTH INTERVAL AMONG NOMAD WOMEN
Directory of Open Access Journals (Sweden)
E.Keyvan
1976-06-01
This study was carried out to get an idea of the relation between the length of the birth interval, lactation, and birth control programs. The material for the analysis was the fertility histories of nomad women in their reproductive period (15-44), gathered through a health survey. The main sample comprised 2,165 qualified women, of whom 49 were excluded for previous or current use of contraceptive methods and 10 for lack of sufficient data. The purpose of the analysis was to relate the number of live births and pregnancies to the total duration of married life (in other words, the total months during which the women were at risk of pregnancy). The 2,106 women whose fertility histories were analyzed had a total of 272,502 months of married life. During this time 8,520 live births occurred, giving a birth interval of 32 months. As a pregnancy may end in a live birth, a stillbirth, or an abortion (induced or spontaneous), adding these together gives the number of pregnancies during this period (8,520 + 124 + 328 = 8,972), with an average interpregnancy interval of 30.3 months. The components of the birth interval are: postpartum amenorrhea, which depends on lactation; anovulatory cycles (2 months); ovulatory exposure in the absence of contraceptive methods (5 months); and pregnancy (9 months). Subtracting the sum of the last three components (2 + 5 + 9 = 16) from the birth interval gives the duration of postpartum amenorrhea (32 - 16 = 16), in other words the duration of breast feeding among nomad women. This study found that, in order to reduce births by 50%, a contraceptive method with 87% effectiveness is needed.
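The abstract's arithmetic can be reproduced directly from the figures it reports (8,520 live births, 124 stillbirths, 328 abortions, 272,502 woman-months at risk):

```python
months_at_risk = 272_502
live_births = 8_520
pregnancies = 8_520 + 124 + 328  # live births + stillbirths + abortions

birth_interval = months_at_risk / live_births            # ~32 months
interpregnancy_interval = months_at_risk / pregnancies   # ~30.3 months

# birth interval minus anovulatory (2) + ovulatory exposure (5)
# + pregnancy (9) months leaves the postpartum amenorrhea estimate
amenorrhea = round(birth_interval) - (2 + 5 + 9)         # 16 months
```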
Computing discrete logarithm by interval-valued paradigm
Directory of Open Access Journals (Sweden)
Benedek Nagy
2014-03-01
Interval-valued computing is a relatively new computing paradigm. It uses finitely many interval segments over the unit interval as its data structure in a computation. The satisfiability of quantified Boolean formulae and other hard problems, such as integer factorization, can be solved effectively by its massive parallelism. The discrete logarithm problem plays an important role in practice; there are cryptographic methods based on its computational hardness. In this paper we show that the discrete logarithm problem is computable by interval-valued computing in a polynomial number of steps (within this paradigm).
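For reference, the problem itself: given $g$, $h$, and a modulus $p$, find $x$ with $g^x \equiv h \pmod p$. The naive sequential search below is the exponential-time baseline whose hardness the abstract refers to; the paper's point is that the interval-valued paradigm replaces this search with polynomially many massively parallel steps.

```python
def discrete_log(g, h, p):
    """Brute-force discrete logarithm: smallest x with
    g**x == h (mod p), or None if no such x exists."""
    value = 1
    for x in range(p):
        if value == h:
            return x
        value = (value * g) % p
    return None

x = discrete_log(3, 13, 17)  # 3**4 = 81 = 13 (mod 17)
```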
Indian Academy of Sciences (India)
Deepak D’Souza; P S Thiagarajan
2002-04-01
We identify a subclass of timed automata called product interval automata and develop its theory. These automata consist of a network of timed agents with the key restriction being that there is just one clock for each agent and the way the clocks are read and reset is determined by the distribution of shared actions across the agents. We show that the resulting automata admit a clean theory in both logical and language theoretic terms. We also show that product interval automata are expressive enough to model the timed behaviour of asynchronous digital circuits.
Compositional Finite-Time Stability analysis of nonlinear systems
DEFF Research Database (Denmark)
Tabatabaeipour, Mojtaba; Blanke, Mogens
2014-01-01
This paper investigates finite-time stability and finite-time boundedness for nonlinear systems with polynomial vector fields. Finite-time stability requires the states of the system to remain in a given bounded set over a finite time interval; finite-time boundedness considers the same problem for the system, but with bounded disturbance. Sufficient conditions for finite-time stability and finite-time boundedness of nonlinear systems, as well as a computational method based on sum-of-squares programming to check the conditions, are given. The problem of finite-time stability for a system that consists of an interconnection of subsystems is also considered, and we show how to decompose the problem into subproblems for each subsystem with coupling constraints. A solution to the problem using sum-of-squares programming and dual decomposition is presented. The method is demonstrated through some examples.
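The property being certified is easy to state operationally: do the states stay inside a given bound over the whole finite time interval? The paper certifies this for all trajectories with sum-of-squares programming; the sketch below only checks it empirically for a single trajectory of a hypothetical discrete-time polynomial system, as a way of making the definition concrete.

```python
def finite_time_bounded(f, x0, steps, bound):
    """Empirical finite-time check: simulate x+ = f(x) from x0 and
    report whether |x| stays within `bound` for all of `steps` steps."""
    x = x0
    for _ in range(steps):
        if abs(x) > bound:
            return False
        x = f(x)
    return abs(x) <= bound

# hypothetical polynomial vector field x+ = 0.5*x + 0.1*x**3
f = lambda x: 0.5 * x + 0.1 * x ** 3
ok = finite_time_bounded(f, 0.9, 50, 1.0)   # stays bounded
bad = finite_time_bounded(f, 3.0, 50, 1.0)  # starts outside the set
```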
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially resolved
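At the core of any MRF reconstruction, stationary-fingerprint or otherwise, is dictionary matching: pick the dictionary entry whose simulated signal evolution best correlates with the measured one. A toy sketch of that matching step; real MRF dictionaries are simulated from the Bloch equations over many (T1, T2) pairs, whereas the signatures below are hypothetical single-parameter exponentials.

```python
import math

def best_match(signal, dictionary):
    """Return the parameter key whose dictionary atom has the highest
    normalized inner product (cosine similarity) with the signal."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    best, best_score = None, -1.0
    for params, atom in dictionary.items():
        score = (sum(s * a for s, a in zip(signal, atom))
                 / (norm(signal) * norm(atom)))
        if score > best_score:
            best, best_score = params, score
    return best

times = [i * 0.1 for i in range(20)]
# hypothetical dictionary: one exponential signature per candidate T1
dictionary = {t1: [math.exp(-t / t1) for t in times]
              for t1 in (0.5, 1.0, 2.0)}
measured = [math.exp(-t / 1.0) for t in times]
match = best_match(measured, dictionary)
```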
Adolph, Karen E.; Robinson, Scott R.
2011-01-01
Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…
Interval methods: An introduction
DEFF Research Database (Denmark)
Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj
2006-01-01
The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different measurement, estimation, and/or roundoff errors; we only know estimates of the upper bounds on the corresponding measurement errors, i.e., we only know an interval of possible values of each such error. The papers from the following chapter contain descriptions of the corresponding 'interval computation' techniques.
Applications of interval computations
Kreinovich, Vladik
1996-01-01
Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...
Finite Discrete Gabor Analysis
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2007-01-01
… on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exist for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite...
Interval methods: An introduction
DEFF Research Database (Denmark)
Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj
2006-01-01
This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides … An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different … "interval computation" techniques, and the applications of these techniques to various problems of scientific computing.
Interval probabilistic neural network.
Kowalski, Piotr A; Kulczycki, Piotr
2017-01-01
Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems increase the independence of human intervention in obtaining the analysis results, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach for classifying interval information. The presented methodology is a generalization of the probabilistic neural network to interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal expected losses due to classification errors. In this article, the topological structure of the network and the learning process are described in detail. Of note, the correctness of the procedure proposed here has been verified by way of numerical tests. These tests include examples of both synthetic data as well as benchmark instances. The results of numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.
Ruette, Sylvie
2017-01-01
The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...
Interval arithmetic operations for uncertainty analysis with correlated interval variables
Institute of Scientific and Technical Information of China (English)
Chao Jiang; Chun-Ming Fu; Bing-Yu Ni; Xu Han
2016-01-01
A new interval arithmetic method is proposed to solve interval functions with correlated intervals, through which the overestimation problem existing in interval analysis can be significantly alleviated. The correlation between interval parameters is defined by the multidimensional parallelepiped model, which is convenient for describing correlative and independent interval variables in a unified framework. The original interval variables with correlation are transformed into the standard space without correlation, and then the relationship between the original variables and the standard interval variables is obtained. The expressions of four basic interval arithmetic operations, namely addition, subtraction, multiplication, and division, are given in the standard space. Finally, several numerical examples and a two-step bar are used to demonstrate the effectiveness of the proposed method.
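To make the overestimation problem concrete, here is a minimal sketch (ours, not from the paper) using naive endpoint interval arithmetic; the function name and the (lo, hi) tuple representation are illustrative assumptions:

```python
# Naive interval subtraction over (lo, hi) pairs: [a0, a1] - [b0, b1]
# = [a0 - b1, a1 - b0]. This rule ignores any correlation between operands.
def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (1.0, 2.0)
# Mathematically x - x = 0, but naive interval arithmetic treats the two
# occurrences of x as independent, so the result is wider than necessary:
print(isub(x, x))  # (-1.0, 1.0) instead of the exact (0.0, 0.0)
```

Correlation-aware methods such as the one proposed in the abstract aim to shrink exactly this kind of spurious width.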
Approximating the Finite-Time Ruin Probability under Interest Force
Brekelmans, R.C.M.; De Waegenaere, A.M.B.
2000-01-01
We present an algorithm to determine both a lower and an upper bound for the finite-time probability of ruin for a risk process with constant interest force. We split the time horizon into smaller intervals of equal length and consider the probability of ruin in case premium income for a time interval…
Simple Finite Jordan Pseudoalgebras
Directory of Open Access Journals (Sweden)
Pavel Kolesnikov
2009-01-01
Full Text Available We consider the structure of Jordan H-pseudoalgebras which are linearly finitely generated over a Hopf algebra H. There are two cases under consideration: H = U(h) and H = U(h) # C[Γ], where h is a finite-dimensional Lie algebra over C, Γ is an arbitrary group acting on U(h) by automorphisms. We construct an analogue of the Tits-Kantor-Koecher construction for finite Jordan pseudoalgebras and describe all simple ones.
Simple Finite Jordan Pseudoalgebras
Kolesnikov, Pavel
2009-01-01
We consider the structure of Jordan H-pseudoalgebras which are linearly finitely generated over a Hopf algebra H. There are two cases under consideration: H = U(h) and H = U(h) # C[Γ], where h is a finite-dimensional Lie algebra over C, Γ is an arbitrary group acting on U(h) by automorphisms. We construct an analogue of the Tits-Kantor-Koecher construction for finite Jordan pseudoalgebras and describe all simple ones.
Finite Unification: phenomenology
Energy Technology Data Exchange (ETDEWEB)
Heinemeyer, S; Ma, E; Mondragon, M; Zoupanos, G, E-mail: sven.heinemeyer@cern.ch, E-mail: ma@phyun8.ucr.edu, E-mail: myriarn@fisica.unam.mx, E-mail: george.zoupanos@cern.ch
2010-11-01
We study the phenomenological implications of Finite Unified Theories (FUTs). In particular we look at the predictions for the lightest Higgs mass and the s-spectra of two all-loop finite models with SU(5) as gauge group. We also consider a two-loop finite model with gauge group SU(3){sup 3}, which is finite if and only if there are exactly three generations. In this latter model we concentrate only on the predictions for the third generation of quark masses.
Bathe, Klaus-Jürgen
2015-01-01
Finite element procedures are now an important and frequently indispensable part of engineering analyses and scientific investigations. This book focuses on finite element procedures that are very useful and are widely employed. Formulations for the linear and nonlinear analyses of solids and structures, fluids, and multiphysics problems are presented, appropriate finite elements are discussed, and solution techniques for the governing finite element equations are given. The book presents general, reliable, and effective procedures that are fundamental and can be expected to be in use for a long time. The given procedures also form the foundations of recent developments in the field.
Mullen, Gary L
2013-01-01
Poised to become the leading reference in the field, the Handbook of Finite Fields is exclusively devoted to the theory and applications of finite fields. More than 80 international contributors compile state-of-the-art research in this definitive handbook. Edited by two renowned researchers, the book uses a uniform style and format throughout and each chapter is self contained and peer reviewed. The first part of the book traces the history of finite fields through the eighteenth and nineteenth centuries. The second part presents theoretical properties of finite fields, covering polynomials,
Finite Symplectic Matrix Groups
2011-01-01
The finite subgroups of GL(m, Q) are those subgroups that fix a full lattice in Q^m together with some positive definite symmetric form. A subgroup of GL(m, Q) is called symplectic if it fixes a nondegenerate skew-symmetric form. Such groups only exist if m is even. A symplectic subgroup of GL(2n, Q) is called maximal finite symplectic if it is not properly contained in some finite symplectic subgroup of GL(2n, Q). This thesis classifies all conjugacy classes of maximal finite symplectic subg...
FE Thermomechanics and Material Sampling Points
Giessen, Erik van der
1987-01-01
The thermomechanics of finite elements of continuous media is discussed. The novel key concept introduced is that of material sampling points attributed to each finite element. Similar to representing the spatial interactions by a finite number of nodal quantities, the state of a finite element is r
Interval Size and Affect: An Ethnomusicological Perspective
Directory of Open Access Journals (Sweden)
Sarha Moore
2013-08-01
Full Text Available This commentary addresses Huron and Davis's question of whether "The Harmonic Minor Provides an Optimum Way of Reducing Average Melodic Interval Size, Consistent with Sad Affect Cues" within any non-Western musical cultures. The harmonic minor scale and other semitone-heavy scales, such as Bhairav raga and Hicaz makam, are featured widely in the musical cultures of North India and the Middle East. Do melodies from these genres also have a preponderance of semitone intervals and low incidence of the augmented second interval, as in Huron and Davis's sample? Does the presence of more semitone intervals in a melody affect its emotional connotations in different cultural settings? Are all semitone intervals equal in their effect? My own ethnographic research within these cultures reveals comparable connotations in melodies that linger on semitone intervals, centered on concepts of tension and metaphors of falling. However, across different musical cultures there may also be neutral or lively interpretations of these same pitch sets, dependent on context, manner of performance, and tradition. Small pitch movement may also be associated with social functions such as prayer or lullabies, and may not be described as "sad." "Sad," moreover may not connote the same affect cross-culturally.
Sman, van der R.G.M.
2006-01-01
In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
1996-01-01
Designs and Finite Geometries brings together in one place important contributions and up-to-date research results in this important area of mathematics. Designs and Finite Geometries serves as an excellent reference, providing insight into some of the most important research issues in the field.
On finitely recursive programs
Baselice, Sabrina; Criscuolo, Giovanni
2009-01-01
Disjunctive finitary programs are a class of logic programs admitting function symbols and hence infinite domains. They have very good computational properties, for example ground queries are decidable while in the general case the stable model semantics is highly undecidable. In this paper we prove that a larger class of programs, called finitely recursive programs, preserves most of the good properties of finitary programs under the stable model semantics, namely: (i) finitely recursive programs enjoy a compactness property; (ii) inconsistency checking and skeptical reasoning are semidecidable; (iii) skeptical resolution is complete for normal finitely recursive programs. Moreover, we show how to check inconsistency and answer skeptical queries using finite subsets of the ground program instantiation. We achieve this by extending the splitting sequence theorem by Lifschitz and Turner: We prove that if the input program P is finitely recursive, then the partial stable models determined by any smooth splittin...
Directory of Open Access Journals (Sweden)
Camila Marinelli Martins
2012-06-01
Full Text Available Domestic animals in urban areas may serve as reservoirs for parasitic zoonoses. The aim of this study was to monitor the parasitic status of household dogs in an urban area of Pinhais, in the metropolitan region of Curitiba, Paraná State, Brazil, after a one-year period. In May 2009, fecal samples, skin scrapings, and ticks were collected from 171 dogs, and questionnaires on sex, age, environment, and anthelmintic use were administered to the owners. In May 2010, 26.3% (45/171) of the dogs had their fecal samples reanalyzed. Of the fecal samples, 33.3% (57/171) in 2009 and 64.4% (29/45) in 2010 were positive. The parasite species most observed were, in 2009 and 2010 respectively, Ancylostoma sp. (66.7 and 44.8%) and Strongyloides stercoralis (26.3 and 3.4%). All the skin scrapings were negative, and no ticks or protozoa were found. There was no statistical association (p > 0.05) between positive fecal tests and age, sex, or environment. In 2009 alone, dogs with a history of antiparasitic drug administration were 2.3 times more likely to be negative. A great number of replacement dogs was noticed one year later. Therefore, isolated antiparasitic treatment strategies may have no impact on parasite control, given the risk of introduction of new agents, thereby limiting prevention strategies.
Measurable Maximal Energy and Minimal Time Interval
Dahab, Eiman Abou El
2014-01-01
The possibility of finding the measurable maximal energy and the minimal time interval is discussed in different quantum aspects. It is found that the linear generalized uncertainty principle (GUP) approach gives a non-physical result. Based on the large-scale Schwarzschild solution, the quadratic GUP approach is utilized. The calculations are performed at the shortest distance, at which general relativity is assumed to be a good approximation for quantum gravity, and at larger distances as well. It is found that both maximal energy and minimal time have the order of the Planck time. Then, the uncertainties in both quantities are accordingly bounded. Some physical insights are addressed. Also, the implications for the physics of the early Universe and for quantized mass are outlined. The results are related to the existence of a finite cosmological constant and a minimum mass (mass quanta).
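For orientation, the quadratic GUP invoked in the abstract is commonly written as follows; the exact form and the normalization of the deformation parameter β are our assumption, not taken from the paper:

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right),
\qquad
\Delta x_{\min} = \hbar\sqrt{\beta}
\quad\text{attained at}\quad
\Delta p = \frac{1}{\sqrt{\beta}} .
```

Minimizing the right-hand side of Δx ≥ (ħ/2)(1/Δp + β Δp) over Δp gives the stated minimum; with β of the order of (Planck momentum)⁻², the minimal uncertainties come out at the Planck scale, consistent with the abstract's conclusion.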
Interval Semantics for Standard Floating-Point Arithmetic
Edmonson, W W
2008-01-01
If the non-zero finite floating-point numbers are interpreted as point intervals, then the effect of rounding can be interpreted as computing one of the bounds of the result according to interval arithmetic. We give an interval interpretation for the signed zeros and infinities, so that the undefined operations 0*inf, inf - inf, inf/inf, and 0/0 become defined. In this way no operation remains that gives rise to an error condition. Mathematically questionable features of the floating-point standard become well-defined sets of reals. Interval semantics provides a basis for the verification of numerical algorithms. We derive the results of the newly defined operations and consider the implications for hardware implementation.
Minimax confidence intervals in geomagnetism
Stark, Philip B.
1992-01-01
The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Introduction to finite geometries
Kárteszi, F
1976-01-01
North-Holland Texts in Advanced Mathematics: Introduction to Finite Geometries focuses on the advancements in finite geometries, including mapping and combinatorics. The manuscript first offers information on the basic concepts on finite geometries and Galois geometries. Discussions focus on linear mapping of a given quadrangle onto another given quadrangle; point configurations of order 2 on a Galois plane of even order; canonical equation of curves of the second order on the Galois planes of even order; and set of collineations mapping a Galois plane onto itself. The text then ponders on geo
Finite-time stability and control
Amato, Francesco; Ariola, Marco; Cosentino, Carlo; De Tommasi, Gianmaria
2014-01-01
Finite-time stability (FTS) is a more practical concept than classical Lyapunov stability, useful for checking whether the state trajectories of a system remain within pre-specified bounds over a finite time interval. In a linear systems framework, FTS problems can be cast as convex optimization problems and solved by the use of effective off-the-shelf computational tools such as LMI solvers. Finite-time Stability and Control exploits this benefit to present the practical applications of FTS and finite-time control-theoretical results to various engineering fields. The text is divided into two parts: · linear systems; and · hybrid systems. The building of practical motivating examples helps the reader to understand the methods presented. Finite-time Stability and Control is addressed to academic researchers and to engineers working in the field of robust process control. Instructors teaching graduate courses in advanced control will also find parts of this book useful for the...
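As a toy illustration of the FTS definition quoted above (our sketch; the matrix, the bounds, and the forward-Euler discretization are illustrative assumptions, not from the book):

```python
import math

def finite_time_stable(A, c1, c2, T, samples=16, steps=2000):
    """Empirically test FTS of x' = A x (2D): for initial states with
    ||x0|| = c1, check that ||x(t)|| <= c2 for all t in [0, T]."""
    dt = T / steps
    for k in range(samples):
        th = 2 * math.pi * k / samples
        x = [c1 * math.cos(th), c1 * math.sin(th)]
        for _ in range(steps):  # forward-Euler integration of x' = A x
            x = [x[0] + dt * (A[0][0] * x[0] + A[0][1] * x[1]),
                 x[1] + dt * (A[1][0] * x[0] + A[1][1] * x[1])]
            if math.hypot(x[0], x[1]) > c2:
                return False
    return True

# A spiral source (eigenvalues 0.1 +/- i) is Lyapunov-unstable, yet its
# trajectories grow only like e^(0.1 t), so it is finite-time stable
# over the horizon T = 1 for the bounds below:
A = [[0.1, 1.0], [-1.0, 0.1]]
print(finite_time_stable(A, c1=1.0, c2=2.0, T=1.0))  # True
```

This brute-force check is only a mental model; the book's point is that for linear systems the same question can be answered exactly via convex (LMI) optimization.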
Energy Technology Data Exchange (ETDEWEB)
Barnich, Glenn [Physique Théorique et Mathématique,Université Libre de Bruxelles and International Solvay Institutes,Campus Plaine C.P. 231, B-1050 Bruxelles (Belgium); Troessaert, Cédric [Centro de Estudios Científicos (CECs),Arturo Prat 514, Valdivia (Chile)
2016-03-24
The action of finite BMS and Weyl transformations on the gravitational data at null infinity is worked out in three and four dimensions in the case of an arbitrary conformal factor for the boundary metric induced on Scri.
Guichon, P A M; Thomas, A W
1996-01-01
We describe the development of a theoretical description of the structure of finite nuclei based on a relativistic quark model of the structure of the bound nucleons which interact through the (self-consistent) exchange of scalar and vector mesons.
Advanced finite element technologies
Wriggers, Peter
2016-01-01
The book presents an overview of the state of research of advanced finite element technologies. Besides the mathematical analysis, the finite element development and their engineering applications are shown to the reader. The authors give a survey of the methods and technologies concerning efficiency, robustness and performance aspects. The book covers the topics of mathematical foundations for variational approaches and the mathematical understanding of the analytical requirements of modern finite element methods. Special attention is paid to finite deformations, adaptive strategies, incompressible, isotropic or anisotropic material behavior and the mathematical and numerical treatment of the well-known locking phenomenon. Beyond that new results for the introduced approaches are presented especially for challenging nonlinear problems.
Interval analysis for Certified Numerical Solution of Problems in Robotics
Merlet, Jean-Pierre
2009-01-01
Interval analysis is a relatively new mathematical tool that allows one to deal with problems that may have to be solved numerically with a computer. Examples of such problems are system solving and global optimization, but numerous other problems may be addressed as well. This approach has the following general advantages: (1) it allows one to find solutions of a problem only within some finite domain which make sense as soon as the unknowns in the problem are physical param...
Automatic Error Analysis Using Intervals
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
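As a hedged sketch of the comparison described above (our example, not from the article; the formula and error values are illustrative assumptions):

```python
# Compare first-order (linear) error propagation with an interval bound
# for z = x * y, given absolute measurement errors dx and dy.

def imul(a, b):
    """Naive interval multiplication: min/max over the endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

x, dx = 3.0, 0.1
y, dy = 5.0, 0.2

# Standard propagation: dz ~= |y| dx + |x| dy
dz_lin = abs(y) * dx + abs(x) * dy

# Interval bound: half the width of [x-dx, x+dx] * [y-dy, y+dy]
lo, hi = imul((x - dx, x + dx), (y - dy, y + dy))
dz_int = (hi - lo) / 2

# Both give ~1.1 here; the interval version needs no derivatives and
# remains a rigorous enclosure for arbitrarily complicated formulas.
```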
Explorations in Statistics: Confidence Intervals
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
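In the exploratory spirit of the article, a small simulation (ours; the z-interval with known sigma is a simplifying assumption) shows what the confidence level means:

```python
import math
import random

def z_interval(sample, sigma, z=1.96):
    """95% confidence interval for a population mean, sigma known."""
    n = len(sample)
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)
    return (xbar - half, xbar + half)

# Repeat the experiment many times: the interval should contain the
# true mean in roughly 95% of the repetitions.
random.seed(1)
mu, sigma, n, reps = 10.0, 2.0, 50, 2000
hits = 0
for _ in range(reps):
    lo, hi = z_interval([random.gauss(mu, sigma) for _ in range(n)], sigma)
    if lo <= mu <= hi:
        hits += 1
coverage = hits / reps  # close to 0.95
```

The confidence level is thus a property of the procedure over repeated sampling, not of any single computed interval.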
2009-08-01
[Garbled indexing excerpt concerning nonautonomous finite-time dynamics, hyperbolicity, and invariant manifolds on finite time intervals. Recoverable citations: A. Berger, D. T. Son, and S. Siegmund, "Nonautonomous finite-time dynamics," Discrete…; L. H. Duc and S. Siegmund, "Hyperbolicity and invariant manifolds for planar nonautonomous systems on finite time intervals," Int. J. Bif. Chaos.]
The Relation of Finite Element and Finite Difference Methods
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of independent variable, while finite element methods emphasize the discretization of dependent variable (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
Circular Interval Arithmetic Applied on LDMT for Linear Interval System
Directory of Open Access Journals (Sweden)
Stephen Ehidiamhen Uwamusi
2014-07-01
Full Text Available The paper considers the LDMT factorization of a general n × n matrix arising from a system of interval linear equations. We pay special attention to interval Cholesky factorization. The basic computational tool used is the square root method of circular interval arithmetic, in a sense analogous to Gargantini and Henrici, as well as the generalized square root method due to Petkovic, which enables the construction of the square root of the resulting diagonal matrix. We also made use of Rump's method for multiplying two intervals expressed in midpoint-radius form. A numerical example of matrix factorization in this regard is given, which forms the basis of discussion. It is shown that, even though LDMT is a numerically stable method for any diagonally dominant matrix, it can also lead to excess width of the solution set. It is also pointed out that, in spite of the above-mentioned objection to interval LDMT, it has in addition the advantage that, in the presence of several solution sets sharing the same interval matrix, the LDMT factorization needs to be computed only once, which helps in saving substantial computational time. This may be found applicable in the development of military hardware which requires shooting at a single point but produces multiple broadcasts at all other points.
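Rump's midpoint-radius multiplication mentioned in the abstract can be sketched as follows; the directed-rounding control that makes it rigorous in floating point is omitted, so this is an assumption-laden illustration, not the paper's implementation:

```python
def mr_mul(a, b):
    """Midpoint-radius product: (ma, ra) * (mb, rb) encloses the exact
    interval product, with radius |ma|*rb + ra*|mb| + ra*rb (Rump's rule)."""
    ma, ra = a
    mb, rb = b
    return (ma * mb, abs(ma) * rb + ra * abs(mb) + ra * rb)

# [2, 4] * [10, 20]: the exact product is [20, 80] (midpoint 50, radius 30).
# Midpoint-radius arithmetic returns (45, 35), i.e. [10, 80] -- a slightly
# wider but valid enclosure, computed without any endpoint comparisons.
print(mr_mul((3.0, 1.0), (15.0, 5.0)))  # (45.0, 35.0)
```

The attraction of this representation is that it maps onto plain floating-point operations, which is why it pairs naturally with factorization algorithms such as the interval LDMT discussed here.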
Interval Estimation of Seismic Hazard Parameters
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2016-11-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of mean activity rate significantly affects the interval estimates of hazard functions only when the product of activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
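For reference, the two hazard functions named in the abstract take a simple closed form under the Poisson/Gutenberg-Richter assumptions; the parameter values below are illustrative assumptions, not from the paper:

```python
import math

def exceedance_probability(rate, b, m_min, m, T):
    """P(at least one event with magnitude >= m within T years), for a
    Poisson process with activity rate `rate` above m_min and an unbounded
    Gutenberg-Richter magnitude distribution with b-value `b`."""
    beta = b * math.log(10.0)
    tail = math.exp(-beta * (m - m_min))  # survival function of magnitude
    return 1.0 - math.exp(-rate * T * tail)

def mean_return_period(rate, b, m_min, m):
    """Mean waiting time (years) between events with magnitude >= m."""
    beta = b * math.log(10.0)
    return 1.0 / (rate * math.exp(-beta * (m - m_min)))

# 10 events/year above magnitude 3, b = 1: magnitude-5 events recur
# every ~10 years on average.
print(round(mean_return_period(10.0, 1.0, 3.0, 5.0), 3))  # 10.0
```

The paper's contribution is the interval estimation of these quantities once the inputs (rate, magnitude distribution) are themselves uncertain, rather than the point formulas sketched here.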
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore relationships in nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
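A minimal EM sketch of the maximum likelihood fit for a two-component normal mixture as described above (ours; the synthetic data, initialization, and iteration count are illustrative assumptions):

```python
import math
import random

def em_two_normal(data, iters=200):
    """Fit a two-component normal mixture by maximum likelihood via EM."""
    xs = sorted(data)
    n = len(xs)
    n2 = n // 2
    # Crude initialization from the lower and upper halves of the data.
    mu = [sum(xs[:n2]) / n2, sum(xs[n2:]) / (n - n2)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: reestimate mixing weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
    return pi, mu, var

random.seed(2)
data = ([random.gauss(-2.0, 1.0) for _ in range(300)]
        + [random.gauss(3.0, 1.0) for _ in range(300)])
pi, mu, var = em_two_normal(data)  # mu recovers roughly (-2, 3)
```

Each EM iteration is guaranteed not to decrease the likelihood, which is why it is the standard workhorse for the maximum likelihood estimation the abstract refers to.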
Begin, After, and Later: a Maximal Decidable Interval Temporal Logic
Directory of Open Access Journals (Sweden)
Davide Bresolin
2010-06-01
Full Text Available Interval temporal logics (ITLs) are logics for reasoning about temporal statements expressed over intervals, i.e., periods of time. The most famous ITL studied so far is Halpern and Shoham's HS, which is the logic of the thirteen Allen's interval relations. Unfortunately, HS and most of its fragments have an undecidable satisfiability problem. This discouraged research in this area until recently, when a number of non-trivial decidable ITLs were discovered. This paper is a contribution towards the complete classification of all different fragments of HS. We consider different combinations of the interval relations Begins, After, Later and their inverses Abar, Bbar, and Lbar. We know from previous works that the combination ABBbarAbar is decidable only when finite domains are considered (and undecidable elsewhere), and that ABBbar is decidable over the natural numbers. We extend these results by showing that decidability of ABBbar can be further extended to capture the language ABBbarLbar, which lies between ABBbar and ABBbarAbar, and which turns out to be maximal w.r.t. decidability over strongly discrete linear orders (e.g., finite orders, the naturals, the integers). We also prove that the proposed decision procedure is optimal with respect to the complexity class.
2010-01-01
Finite element analysis is an engineering method for the numerical analysis of complex structures. This book provides a bird's eye view on this very broad matter through 27 original and innovative research studies exhibiting various investigation directions. Through its chapters the reader will have access to works related to Biomedical Engineering, Materials Engineering, Process Analysis and Civil Engineering. The text is addressed not only to researchers, but also to professional engineers, engineering lecturers and students seeking to gain a better understanding of where Finite Element Analysis stands today.
Baumeister, Barbara
2009-01-01
We continue the work of Aschbacher, Kinyon and Phillips [AKP], as well as that of Glauberman [Glaub1,2], by describing the structure of finite Bruck loops. We show essentially that a finite Bruck loop $X$ is the direct product of a Bruck loop of odd order with either a soluble Bruck loop of 2-power order or a product of loops related to the groups $PSL_2(q)$, $q= 9$ or $q \geq 5$ a Fermat prime. The latter possibility does occur, as is shown in [Nag1, BS]. As corollaries we obtain versions of Sylow's, Lagrange's and Hall's theorems for loops.
Finite element mesh generation
Lo, Daniel SH
2014-01-01
Highlights the Progression of Meshing Technologies and Their Applications. Finite Element Mesh Generation provides a concise and comprehensive guide to the application of finite element mesh generation over 2D domains, curved surfaces, and 3D space. Organised according to the geometry and dimension of the problem domains, it develops from the basic meshing algorithms to the most advanced schemes to deal with problems with specific requirements such as boundary conformity, adaptive and anisotropic elements, shape qualities, and mesh optimization. It sets out the fundamentals of popular techniques
Hydrologic studies in wells open through large intervals
Energy Technology Data Exchange (ETDEWEB)
1992-01-01
This report describes and summarizes activities, data, and preliminary data interpretation from the INEL Oversight Program R D-1 project titled "Hydrologic Studies In Wells Open Through Large Intervals." The project is designed to use a straddle-packer system to isolate, hydraulically test, and sample specific intervals of monitoring wells that are open (uncased, unscreened) over large intervals of the Snake River Plain aquifer. The objectives of the project are to determine and compare vertical variations in water quality and aquifer properties that have previously been determined only in an integrated fashion over the entire thickness of the open interval of the observation wells.
Directory of Open Access Journals (Sweden)
João Batista Kochenborger Fernandes
2012-04-01
Full Text Available This study evaluated the apparent protein and energy digestibility of common feed ingredients (soybean meal, fish meal, wheat meal and corn) by juvenile oscars (Astronotus ocellatus) using two different sampling intervals (30 min and 12 h). The 160 juvenile oscars tested (22.37 ± 3.06 g BW) were divided into four cylindrical plastic net cages, each one placed in a 1000 L feeding tank. The experiment was completely randomized in a 2 x 4 factorial design (2 feces collection intervals and 4 feed ingredients) with four replications. The statistical tests did not detect an interaction effect of sampling interval and type of ingredient on digestibility coefficients. Sampling interval did not affect protein and energy digestibility. The physical characteristics of juvenile oscar feces likely make them less susceptible to nutrient loss by leaching, so feces can be collected at longer intervals. Protein digestibility of the different ingredients was similar, showing that apparent digestibility of both animal and plant ingredients by juvenile oscars was efficient. Energy digestibility coefficients of fish meal and soybean meal were higher than those of wheat meal and corn. Carbohydrate-rich ingredients (wheat meal and corn) had the worst energy digestibility coefficients and are therefore not used efficiently by juvenile oscars.
Kravchuk functions for the finite oscillator approximation
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
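The orthogonality such expansions rely on can be checked numerically. The sketch below uses the binary (p = 1/2) Krawtchouk polynomials familiar from coding theory, which is an assumption on normalization; the paper's Kravchuk functions additionally absorb the square root of the binomial weight.

```python
from math import comb

def krawtchouk(n, x, N):
    """Binary (p = 1/2) Krawtchouk polynomial:
    K_n(x; N) = sum_j (-1)^j C(x, j) C(N - x, n - j)."""
    return sum((-1) ** j * comb(x, j) * comb(N - x, n - j) for j in range(n + 1))

# Orthogonality with respect to the binomial weight C(N, x):
#   sum_x C(N, x) K_r(x) K_s(x) = 2^N C(N, r) delta_{rs}
N = 6
for r in range(N + 1):
    for s in range(N + 1):
        inner = sum(comb(N, x) * krawtchouk(r, x, N) * krawtchouk(s, x, N)
                    for x in range(N + 1))
        expected = 2 ** N * comb(N, r) if r == s else 0
        assert inner == expected
print("orthogonality verified for N =", N)
```

Multiplying each K_n(x) by sqrt(C(N, x) / (2^N C(N, r))) turns this weighted orthogonality into the orthonormality of the Kravchuk functions described in the abstract.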
Reference intervals data mining: no longer a probability paper method.
Katayev, Alexander; Fleming, James K; Luo, Dajie; Fisher, Arren H; Sharp, Thomas M
2015-01-01
To describe the application of a data-mining statistical algorithm for the calculation of clinical laboratory test reference intervals. Reference intervals for eight different analytes and different age and sex groups (a total of 11 separate reference intervals), for tests that are unlikely to be ordered during routine screening of disease-free populations, were calculated using the modified algorithm for data mining of test results stored in the laboratory database, and compared with published peer-reviewed studies that used direct sampling. The selection of analytes was based on predefined criteria that include comparability of analytical methods and a statistically significant number of observations. Of the 22 calculated reference interval limits (an upper and a lower limit for each of the 11 intervals), 21 were not statistically different from the reference studies. The presented statistical algorithm is shown to be an accurate and practical tool for reference interval calculations. Copyright © by the American Society for Clinical Pathology.
Energy Technology Data Exchange (ETDEWEB)
Atakishiyev, Natig M [Centro de Ciencias FIsicas, UNAM, Apartado Postal 48-3, 62251 Cuernavaca, Morelos (Mexico); Klimyk, Anatoliy U [Centro de Ciencias FIsicas, UNAM, Apartado Postal 48-3, 62251 Cuernavaca, Morelos (Mexico); Wolf, Kurt Bernardo [Centro de Ciencias FIsicas, UNAM, Apartado Postal 48-3, 62251 Cuernavaca, Morelos (Mexico)
2004-05-28
The finite q-oscillator is a model that obeys the dynamics of the harmonic oscillator, with the operators of position, momentum and Hamiltonian being functions of elements of the q-algebra su_q(2). The spectrum of position in this discrete system, in a fixed representation j, consists of 2j + 1 'sensor' points x_s = (1/2)[2s]_q, s ∈ {-j, -j+1, ..., j}, and similarly for the momentum observable. The spectrum of energies is finite and equally spaced, so the system supports coherent states. The wavefunctions involve dual q-Kravchuk polynomials, which are solutions to a finite-difference Schrödinger equation. Time evolution (times a phase) defines the fractional Fourier-q-Kravchuk transform. In the classical limit as q → 1 we recover the finite oscillator Lie algebra, the N = 2j → ∞ limit returns the Macfarlane-Biedenharn q-oscillator, and both limits contract the generators to the standard quantum-mechanical harmonic oscillator.
Silva, P J; Dudal, D; Bicudo, P; Cardoso, N
2016-01-01
The gluon propagator is investigated at finite temperature via lattice simulations. In particular, we discuss its interpretation as a massive-type bosonic propagator. Moreover, we compute the corresponding spectral density and study the violation of spectral positivity. Finally, we explore the dependence of the gluon propagator on the phase of the Polyakov loop.
Energy Technology Data Exchange (ETDEWEB)
Kapetanakis, D. (Technische Univ. Muenchen, Garching (Germany). Physik Dept.); Mondragon, M. (Technische Univ. Muenchen, Garching (Germany). Physik Dept.); Zoupanos, G. (National Technical Univ., Athens (Greece). Physics Dept.)
1993-09-01
We present phenomenologically viable SU(5) unified models which are finite to all orders before the spontaneous symmetry breaking. In the case of two models with three families the top quark mass is predicted to be 178.8 GeV. (orig.)
Ciocanea Teodorescu I.,
2016-01-01
In this thesis we are interested in describing algorithms that answer questions arising in ring and module theory. Our focus is on deterministic polynomial-time algorithms and rings and modules that are finite. The first main result of this thesis is a solution to the module isomorphism problem in
Institute of Scientific and Technical Information of China (English)
Ronald W. Langacker
2008-01-01
This paper explores the conceptual basis of finite complementation in English. It first considers the distinguishing property of a finite clause, namely grounding, effected by tense and the modals. Notions crucial for clausal grounding (including a reality conception and the striving for control at the effective and epistemic levels) also figure in the semantic import of complementation. An essential feature of complement constructions is the involvement of multiple conceptualizers, each with their own conception of reality. The different types of complement and their grammatical markings can be characterized on this basis. Finite complements differ from other types by virtue of expressing an autonomous proposition capable of being apprehended by multiple conceptualizers, each from their own vantage point. A cognitive model representing phases in the striving for epistemic control provides a partial basis for the semantic description of predicates taking finite complements. The same model supports the description of both personal and impersonal complement constructions.
Weiser, Martin
2016-01-01
All relevant implementation aspects of finite element methods are discussed in this book. The focus is on algorithms and data structures as well as on their concrete implementation. Theory is covered as far as it gives insight into the construction of algorithms. Throughout the exercises a complete FE-solver for scalar 2D problems will be implemented in Matlab/Octave.
A novel algorithm for spectral interval combination optimization.
Song, Xiangzhong; Huang, Yue; Yan, Hong; Xiong, Yanmei; Min, Shungeng
2016-12-15
In this study, a new wavelength interval selection algorithm named interval combination optimization (ICO) was proposed under the framework of model population analysis (MPA). In this method, the full spectra are first divided into a fixed number of equal-width intervals. The optimal interval combination is then searched iteratively under the guidance of MPA in a soft-shrinkage manner, with weighted bootstrap sampling (WBS) employed as the random sampling method. Finally, a local search is conducted to optimize the widths of the selected intervals. Three NIR datasets were used to validate the performance of the ICO algorithm. Results show that ICO selects fewer wavelengths with better prediction performance when compared with four other wavelength selection methods: VISSA, VISSA-iPLS, iVISSA and GA-iPLS. In addition, the computational cost of ICO is modest, benefiting from few tuning parameters and fast convergence. Copyright © 2016 Elsevier B.V. All rights reserved.
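A deliberately simplified sketch of the ICO idea follows: equal-width intervals, a population of random interval combinations, and soft shrinkage of per-interval inclusion weights toward combinations that predict well. The synthetic data, plain Bernoulli sampling in place of WBS, the omission of the final width-optimization step, and all constants are our own choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 200 samples x 100 wavelengths; only intervals 2 and 7
# (out of 10 equal-width intervals) carry signal for the response y.
X = rng.normal(size=(200, 100))
beta = np.zeros(100)
beta[20:30] = 1.0    # interval 2
beta[70:80] = -0.5   # interval 7
y = X @ beta + 0.05 * rng.normal(size=200)

n_int, width = 10, 10
intervals = [np.arange(k * width, (k + 1) * width) for k in range(n_int)]

def rmse(subset):
    """Fit least squares on the chosen intervals, score on a hold-out set."""
    cols = np.concatenate([intervals[k] for k in subset])
    coef, *_ = np.linalg.lstsq(X[:150][:, cols], y[:150], rcond=None)
    pred = X[150:][:, cols] @ coef
    return np.sqrt(np.mean((pred - y[150:]) ** 2))

weights = np.full(n_int, 0.5)   # per-interval inclusion probability
for _ in range(30):
    # Population of random interval combinations (Bernoulli sampling here).
    pop = [np.flatnonzero(rng.random(n_int) < weights) for _ in range(100)]
    pop = [s for s in pop if s.size > 0]
    scores = np.array([rmse(s) for s in pop])
    best = np.argsort(scores)[:20]          # keep the best-performing 20%
    freq = np.zeros(n_int)
    for b in best:
        freq[pop[b]] += 1.0
    weights = 0.7 * weights + 0.3 * freq / len(best)   # soft shrinkage

print("inclusion weights:", weights.round(2))
```

The weights of the two informative intervals are driven toward 1, while uninformative intervals hover near their initial value; a real implementation would follow this with the local width optimization the abstract describes.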
On the numerical calculation of Hadamard finite-part integrals
Directory of Open Access Journals (Sweden)
Ezio Venturino
1998-10-01
Full Text Available In this paper we consider a simple method for calculating integrals possessing strong singularities, to be interpreted in the Hadamard finite-part sense. We partition the original interval of integration and then integrate over the subintervals by using suitably modified low-order Gaussian-type quadratures. Convergence is shown under suitable assumptions and numerical evidence supports the theoretical findings.
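For intuition on finite-part integrals, a textbook singularity-subtraction route (not the modified Gaussian rules of the paper) already illustrates the concept: the finite part of the integral of 1/x over [0, 1] is zero, so subtracting f(0)/x leaves an ordinary integral that plain Gauss-Legendre quadrature handles.

```python
import numpy as np

# Finite part FP ∫_0^1 e^x / x dx: since FP ∫_0^1 dx/x = 0 (Hadamard's
# regularization cancels the ln ε from the lower limit), subtracting the
# singular part f(0)/x leaves the ordinary integral
#   ∫_0^1 (e^x - 1)/x dx = sum_{k>=1} 1/(k·k!) ≈ 1.31790,
# whose integrand is entire, so Gauss-Legendre converges rapidly.
nodes, wts = np.polynomial.legendre.leggauss(20)
x = 0.5 * (nodes + 1.0)   # map [-1, 1] to [0, 1]
w = 0.5 * wts
fp = np.sum(w * (np.exp(x) - 1.0) / x)
print(f"finite part ≈ {fp:.5f}")   # ≈ 1.31790
```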
Directory of Open Access Journals (Sweden)
Érica Luciana de Paula Furlan
2012-10-01
Full Text Available PURPOSE: To develop models for fetal weight prediction and longitudinal percentiles of estimated fetal weight (EFW) with a sample of the Brazilian population. METHODS: Prospective observational study. Two groups of pregnant women were recruited: Group EFW (fetal weight estimation): patients for the development (EFW-Dev) and validation (EFW-Val) of a fetal weight prediction model; Group LRI (longitudinal reference intervals): pregnant women for the development (LRI-Dev) and validation (LRI-Val) of longitudinal reference intervals of EFW. Polynomial regression was applied to the EFW-Dev data to generate the fetal weight prediction model, whose performance was compared with that of other models available in the literature. Linear mixed models were used to build longitudinal EFW intervals from the LRI-Dev data; the LRI-Val data were used to validate these intervals. RESULTS: 458 patients composed Group EFW (EFW-Dev: 367; EFW-Val: 91) and 315 composed Group LRI (LRI-Dev: 265; LRI-Val: 50). The formula for calculating EFW was: EFW = -8.277 + 2.146 x BPD x AC x FL - 2.449 x FL x BPD² (BPD: biparietal diameter; AC: abdominal circumference; FL: femur length). The performance of other fetal weight estimation formulas on our sample was significantly worse than that of the model generated in this study. Equations for predicting conditional EFW percentiles were derived from the longitudinal evaluations of the LRI-Dev subgroup and validated with the LRI-Val data. CONCLUSIONS: We describe a method for fitting longitudinal reference intervals of EFW, obtained with formulas generated from a sample of the Brazilian population.
On sampling social networking services
Wang, Baiyang
2012-01-01
This article aims at summarizing the existing methods for sampling social networking services and proposing a faster confidence interval for related sampling methods. It also includes comparisons of common network sampling techniques.
Interval analysis of transient temperature field with uncertain-but-bounded parameters
Wang, Chong; Qiu, ZhiPing
2014-10-01
Based on the traditional finite volume method, a new numerical technique is presented for predicting the transient temperature field with interval uncertainties in both the physical parameters and the initial/boundary conditions. A new stability theory applicable to interval discrete schemes is developed. Interval ranges of the uncertain temperature field can be approximated by two kinds of parameter perturbation methods, with Neumann series of different orders adopted to approximate the interval matrix inverse. A numerical example, checked against traditional Monte Carlo simulation, demonstrates the feasibility and effectiveness of the proposed model and methods.
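A hypothetical 3-node toy problem sketches the first-order version of such a perturbation bound; the matrix, source vector, and ±5% radii below are invented for illustration and stand in for a real finite-volume discretization.

```python
import numpy as np

# First-order interval perturbation for a steady system A·T = q with
# A = A0 + ΔA, |ΔA| ≤ R elementwise (interval-valued coefficients).
# Keeping the first term of the Neumann series,
#   (A0 + ΔA)^(-1) ≈ A0^(-1) - A0^(-1) ΔA A0^(-1),
# the temperature deviation is bounded by |ΔT| ≤ |A0^(-1)| R |T0|.
A0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])   # nominal coefficient matrix
q = np.array([1.0, 0.0, 1.0])         # nominal source/boundary vector
R = 0.05 * np.abs(A0)                 # ±5% interval radius on coefficients

A0inv = np.linalg.inv(A0)
T0 = A0inv @ q                        # midpoint temperature field
dT = np.abs(A0inv) @ (R @ np.abs(T0)) # first-order interval radius
T_lower, T_upper = T0 - dT, T0 + dT
print("T intervals:", list(zip(T_lower.round(4), T_upper.round(4))))
```

For this toy system T0 = (1, 1, 1) and the radii are (0.25, 0.35, 0.25); a Monte Carlo sweep over matrices within the radii would land inside these bounds up to first-order error.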
Finite-part singular integral approximations in Hilbert spaces
Directory of Open Access Journals (Sweden)
E. G. Ladopoulos
2004-01-01
Full Text Available Some new approximation methods are proposed for the numerical evaluation of the finite-part singular integral equations defined on Hilbert spaces when their singularity consists of a homeomorphism of the integration interval, which is a unit circle, on itself. Therefore, some existence theorems are proved for the solutions of the finite-part singular integral equations, approximated by several systems of linear algebraic equations. The method is further extended for the proof of the existence of solutions for systems of finite-part singular integral equations defined on Hilbert spaces, when their singularity consists of a system of diffeomorphisms of the integration interval, which is a unit circle, on itself.
Complete blood count reference intervals for healthy Han Chinese adults.
Directory of Open Access Journals (Sweden)
Xinzhong Wu
Full Text Available Complete blood count (CBC) reference intervals are important to diagnose diseases, screen blood donors, and assess overall health. However, current reference intervals established by older instruments and technologies, and those from American and European populations, are not suitable for Chinese samples due to ethnic, dietary, and lifestyle differences. The aim of this multicenter collaborative study was to establish CBC reference intervals for healthy Han Chinese adults. A total of 4,642 healthy individuals (2,136 males and 2,506 females) were recruited from six clinical centers in China (Shenyang, Beijing, Shanghai, Guangzhou, Chengdu, and Xi'an). Blood samples collected in K2EDTA anticoagulant tubes were analyzed. Analysis of variance was performed to determine differences in consensus intervals according to the use of data from the combined sample and selected samples. Median and mean platelet counts from the Chengdu center were significantly lower than those from other centers. Red blood cell count (RBC), hemoglobin (HGB), and hematocrit (HCT) values were higher in males than in females at all ages. Other CBC parameters showed no significant instrument-, region-, age-, or sex-dependent difference. Thalassemia carriers were found to affect the lower or upper limits of different RBC profiles. We were able to establish consensus intervals for CBC parameters in healthy Han Chinese adults. RBC, HGB, and HCT intervals were established for each sex. The reference interval for platelets for the Chengdu center should be established independently.
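The standard nonparametric recipe behind such studies takes the central 95% of results from the healthy reference sample. The sketch below applies it to simulated hemoglobin-like values, not the study's data.

```python
import numpy as np

# Nonparametric reference interval: central 95% (2.5th to 97.5th percentile)
# of results from a healthy reference sample. The values are simulated
# hemoglobin-like numbers (g/L), purely illustrative.
rng = np.random.default_rng(42)
values = rng.normal(loc=140.0, scale=10.0, size=2136)
lower, upper = np.percentile(values, [2.5, 97.5])
print(f"reference interval: {lower:.1f} to {upper:.1f} g/L")
```

Partitioned (e.g. per-sex, per-center) intervals, as in the study, repeat this calculation within each subgroup once a significant subgroup difference is established.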
Phase transitions in finite systems
Energy Technology Data Exchange (ETDEWEB)
Chomaz, Ph. [Grand Accelerateur National d' Ions Lourds (GANIL), DSM-CEA / IN2P3-CNRS, 14 - Caen (France); Gulminelli, F. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire
2002-07-01
In this series of lectures we will first review the general theory of phase transitions in the framework of information theory and briefly address some of the well-known mean-field solutions of three-dimensional problems. The theory of phase transitions in finite systems will then be discussed, with special emphasis on the conceptual problems linked to a thermodynamical description of small, short-lived, open systems such as metal clusters and data samples from nuclear collisions. The concept of negative heat capacity, developed in the early seventies in the context of self-gravitating systems, will be reinterpreted in the general framework of convexity anomalies of thermo-statistical potentials. The connection with the distribution of the order parameter will lead us to a definition of first-order phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. Finally, a careful study of the thermodynamical limit will provide a bridge with the standard theory of phase transitions and show that in a wide class of physical situations the different statistical ensembles are irreducibly inequivalent. (authors)
Almost primes in short intervals
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
In this paper, we prove that the short interval (x - x^{101/232}, x] contains at least one almost-prime P_2 for sufficiently large x, where P_2 denotes an integer having at most two prime factors, counted with multiplicity.
Robust misinterpretation of confidence intervals
Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan
2014-01-01
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more
Differential calculi on finite groups
Castellani, L
1999-01-01
A brief review of bicovariant differential calculi on finite groups is given, with some new developments on diffeomorphisms and integration. We illustrate the general theory with the example of the nonabelian finite group S_3.
Distributions of order patterns of interval maps
Abrams, Aaron; Landau, Henry; Landau, Zeph; Pommersheim, James
2010-01-01
A permutation $\\sigma$ describing the relative orders of the first $n$ iterates of a point $x$ under a self-map $f$ of the interval $I=[0,1]$ is called an \\emph{order pattern}. For fixed $f$ and $n$, measuring the points $x\\in I$ (according to Lebesgue measure) that generate the order pattern $\\sigma$ gives a probability distribution $\\mu_n(f)$ on the set of length $n$ permutations. We study the distributions that arise this way for various classes of functions $f$. Our main results treat the class of measure preserving functions. We obtain an exact description of the set of realizable distributions in this case: for each $n$ this set is a union of open faces of the polytope of flows on a certain digraph, and a simple combinatorial criterion determines which faces are included. We also show that for general $f$, apart from an obvious compatibility condition, there is no restriction on the sequence $\\{\\mu_n(f)\\}$ for $n=1,2,...$. In addition, we give a necessary condition for $f$ to have \\emph{finite exclusion...
Energy Technology Data Exchange (ETDEWEB)
Mondragon, M [Inst. de Fisica, Universidad Nacional Autonoma de Mexico, Apdo. Postal 20-364, Mexico 01000 D.F. (Mexico); Zoupanos, G, E-mail: myriam@fisica.unam.m, E-mail: zoupanos@mail.cern.c [Physics Department, National Technical University of Athens, Zografou Campus: Heroon Polytechniou 9, 15780 Zografou, Athens (Greece)
2009-06-01
All-loop Finite Unified Theories (FUTs) are very interesting N=1 GUTs in which a complete reduction of couplings has been achieved. FUTs realize an old field-theoretical dream and have remarkable predictive power. Reduction of dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensionless couplings that guarantee the vanishing of the beta functions in certain N=1 supersymmetric GUTs even to all orders. Furthermore, developments in the soft supersymmetry breaking sector of N=1 GUTs and FUTs lead to exact RGI relations also in this dimensionful sector of the theories. Of particular interest for the construction of realistic theories is an RGI sum rule for the soft scalar masses holding to all orders.
Modesto, Leonardo
2013-01-01
We hereby present a class of multidimensional higher-derivative theories of gravity that realizes an ultraviolet completion of Einstein general relativity. This class is marked by a "non-polynomial" entire function (form factor), which averts extra degrees of freedom (including ghosts) and improves the high-energy behavior of the loop amplitudes. By power-counting arguments, it is proved that the theory is super-renormalizable in any dimension, i.e. only one-loop divergences survive. Furthermore, in odd dimensions there are no counterterms for pure gravity and the theory turns out to be "finite." Finally, considering the infinite tower of massive states coming from dimensional reduction, quantum gravity is finite in even dimensions as well.
Confinement at Finite Temperature
Cardoso, Nuno; Bicudo, Pedro; Cardoso, Marco
2017-05-01
We show the flux tubes produced by static quark-antiquark, quark-quark and quark-gluon charges at finite temperature. The sources are placed on the lattice with fundamental and adjoint Polyakov loops. We compute the squared strengths of the chromomagnetic and chromoelectric fields above and below the critical temperature. Our results are for pure SU(3) gauge theory; they are gauge invariant, and all computations are done on GPUs using CUDA.
Dimensionality and the sample unit
Francis A. Roesch
2009-01-01
The sample unit and its implications for the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis program are discussed in light of a generalized three-dimensional concept of continuous forest inventories. The concept views the sampled population as a spatial-temporal cube and the sample as a finite partitioning of the cube. The sample...
Interval Valued Neutrosophic Soft Topological Spaces
Directory of Open Access Journals (Sweden)
Anjan Mukherjee
2014-12-01
Full Text Available In this paper we introduce the concept of an interval valued neutrosophic soft topological space, together with interval valued neutrosophic soft finer and coarser topologies. We also define the interval valued neutrosophic interior and closure of an interval valued neutrosophic soft set. Some theorems and examples are cited. Interval valued neutrosophic soft subspace topologies are studied. Some examples and theorems regarding this concept are presented.
SURFACE FINITE ELEMENTS FOR PARABOLIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
G. Dziuk; C.M. Elliott
2007-01-01
In this article we define a surface finite element method (SFEM) for the numerical solution of parabolic partial differential equations on hypersurfaces Γ in R^{n+1}. The key idea is based on the approximation of Γ by a polyhedral surface Γ_h consisting of a union of simplices (triangles for n = 2, intervals for n = 1) with vertices on Γ. A finite element space of functions is then defined by taking the continuous functions on Γ_h which are linear affine on each simplex of the polygonal surface. We use surface gradients to define weak forms of elliptic operators and naturally generate weak formulations of elliptic and parabolic equations on Γ. Our finite element method is applied to weak forms of the equations. The computation of the mass and element stiffness matrices is simple and straightforward. We give an example of error bounds in the case of semi-discretization in space for a fourth-order linear problem. Numerical experiments are described for several linear and nonlinear partial differential equations. In particular the power of the method is demonstrated by employing it to solve highly nonlinear second- and fourth-order problems such as surface Allen-Cahn and Cahn-Hilliard equations and surface level set equations for geodesic mean curvature flow.
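The n = 1 case is small enough to sketch directly: Γ is the unit circle, Γ_h an inscribed polygon with vertices on Γ, and the P1 mass matrix is assembled edge by edge. This is our own minimal illustration, not code from the article.

```python
import numpy as np

# SFEM building block in the n = 1 case: Γ is the unit circle, Γ_h an
# inscribed polygon, elements are the polygon's edges, and the P1
# (hat-function) mass matrix on an edge of length h is (h/6)·[[2, 1], [1, 2]].
N = 64
theta = 2 * np.pi * np.arange(N) / N
verts = np.column_stack([np.cos(theta), np.sin(theta)])   # vertices on Γ

M = np.zeros((N, N))
for e in range(N):                      # assemble edge by edge
    i, j = e, (e + 1) % N
    h = np.linalg.norm(verts[j] - verts[i])
    M[np.ix_([i, j], [i, j])] += (h / 6.0) * np.array([[2.0, 1.0],
                                                       [1.0, 2.0]])

# Summing all entries of M integrates the constant function 1 over Γ_h,
# i.e. it returns the polygon length, which tends to 2π as N grows.
print(f"|Γ_h| = {M.sum():.5f},  2π = {2 * np.pi:.5f}")
```

The same per-element pattern, with triangles in place of edges and surface gradients for the stiffness matrix, gives the n = 2 assembly the article describes.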
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
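Two of the simplest cases are easy to sketch: bounds on the sample mean are exact and cheap, while variance bounds already show the computational tension the report describes (the maximum of the variance over a box of intervals sits at a vertex, but the minimum may be interior). The data and the clip-to-mean heuristic for the minimum are illustrative choices of ours.

```python
from itertools import product
from statistics import mean, pvariance

# Interval-valued measurements [lo_i, hi_i]: statistics become intervals too.
data = [(2.0, 2.4), (3.1, 3.3), (2.8, 3.6), (1.9, 2.1)]  # illustrative

# The sample mean ranges exactly over [mean of lower ends, mean of upper ends].
mean_lo = mean(lo for lo, hi in data)
mean_hi = mean(hi for lo, hi in data)

# Variance is convex, so its maximum over the box is attained at a vertex:
# enumerate the 2^n endpoint patterns (feasible only for small n).
var_hi = max(pvariance(pt) for pt in product(*data))

# The minimum may be interior; a clip-to-mean fixed-point heuristic pulls
# each value as close to the common mean as its interval allows.
m = mean((lo + hi) / 2 for lo, hi in data)
for _ in range(100):
    xs = [min(max(m, lo), hi) for lo, hi in data]
    m = mean(xs)
var_lo = pvariance(xs)

print(f"mean in [{mean_lo:.3f}, {mean_hi:.3f}]")
print(f"variance in [{var_lo:.4f}, {var_hi:.4f}]")
```

The exponential cost of the vertex enumeration is exactly why the report discusses which interval statistics stay computable as sample size and interval overlap grow.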
Recommended Nordic paediatric reference intervals for 21 common biochemical properties
DEFF Research Database (Denmark)
Hilsted, Linda; Rustad, Pål; Aksglæde, Lise;
2013-01-01
Abstract Paediatric reference intervals based on samples from healthy children are difficult to establish and consequently data are often from hospitalized children. Furthermore, biases may be present in published data due to differences in the analytical methods employed. Blood samples from 1429 healthy Danish children were collected for establishing reference intervals for 21 common biochemical properties (Alanine transaminase, Albumin, Alkaline phosphatase, Aspartate transaminase, Bilirubin, Calcium, Cholesterol, Creatinine, Creatine kinase, HDL-Cholesterol, Iron, Lactate dehydrogenase, LDL-... values of X for the properties and statistical calculations carried out as performed in the NORIP study. Thus commutable (regarding analytical method) reference intervals for 20 properties were established, and for LDL-Cholesterol reference intervals were reported for the specific analytical method...
FUZZY ARITHMETIC AND SOLVING OF THE STATIC GOVERNING EQUATIONS OF FUZZY FINITE ELEMENT METHOD
Institute of Scientific and Technical Information of China (English)
郭书祥; 吕震宙; 冯立富
2002-01-01
The key component of finite element analysis of structures with fuzzy parameters, which is associated with the handling of fuzzy information and the arithmetic of fuzzy variables, is the solution of the governing equations of the fuzzy finite element method. Based on a given interval representation of fuzzy numbers, some arithmetic rules for fuzzy numbers and fuzzy variables were developed in terms of the properties of interval arithmetic. According to these rules and the theory of the interval finite element method, procedures for solving the static governing equations of the fuzzy finite element method of structures were presented. With the proposed procedure, the possibility distributions of the responses of fuzzy structures can be generated in terms of the membership functions of the input fuzzy numbers. It is shown by a numerical example that the computational burden of the presented procedures is low and they are easy to implement. The effectiveness and usefulness of the presented procedures are also illustrated.
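The interval arithmetic underlying such interval representations can be stated in a few lines; via α-cuts, the same rules extend to fuzzy numbers level by level. A minimal sketch (class name and API are ours):

```python
# Elementary interval arithmetic: each operation on [a, b] is defined so the
# result encloses every pointwise combination of values from the operands.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        # Sign changes make any endpoint pair a candidate extreme.
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x, y = Interval(1, 2), Interval(-3, 4)
print(x + y, x - y, x * y)   # [-2, 6] [-3, 5] [-6, 8]
```

Applying these rules at each membership level (α-cut) of a fuzzy number propagates the whole membership function through a computation, which is the mechanism the abstract's fuzzy arithmetic rests on.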
Anderson, Ian
2011-01-01
Coherent treatment provides comprehensive view of basic methods and results of the combinatorial study of finite set systems. The Clements-Lindstrom extension of the Kruskal-Katona theorem to multisets is explored, as is the Greene-Kleitman result concerning k-saturated chain partitions of general partially ordered sets. Connections with Dilworth's theorem, the marriage problem, and probability are also discussed. Each chapter ends with a helpful series of exercises and outline solutions appear at the end. "An excellent text for a topics course in discrete mathematics." - Bulletin of the Ame
Aloisio, R; Di Carlo, G; Galante, A; Grillo, A F
2000-01-01
The lattice formulation of Finite Baryon Density QCD is problematic from the computer simulation point of view; it is well known that for light quark masses the reconstructed partition function fails to be positive in a wide region of parameter space. For large bare quark masses, instead, it is possible to obtain more sensible results; problems are still present but restricted to a small region. We present evidence for a saturation transition independent of the gauge coupling $\beta$ and for a transition line that, starting from the temperature critical point at $\mu=0$, moves towards smaller $\beta$ with increasing $\mu$, as expected from simplified phenomenological arguments.
Finite Size Corrections to the Excitation Energy Transfer in a Massless Scalar Interaction Model
Maeda, N; Tobita, Y; Ishikawa, K
2016-01-01
We study the excitation energy transfer (EET) for a simple model in which a virtual massless scalar particle is exchanged between two molecules. If the time interval is finite, then the finite size effect generally appears in a transition amplitude through the regions where the wave nature of quanta remains. We calculated the transition amplitude for EET and obtained finite size corrections to the standard formula derived by using Fermi's golden rule. These corrections for the transition amplitude appear outside the resonance energy region. The estimation in a photosynthesis system indicates that the finite size correction could reduce the EET time considerably.
Finite-Difference Algorithms For Computing Sound Waves
Davis, Sanford
1993-01-01
Governing equations considered as matrix system. Method variant of method described in "Scheme for Finite-Difference Computations of Waves" (ARC-12970). Present method begins with matrix-vector formulation of fundamental equations, involving first-order partial derivatives of primitive variables with respect to space and time. Particular matrix formulation places time and spatial coordinates on equal footing, so governing equations considered as matrix system and treated as unit. Spatial and temporal discretizations not treated separately as in other finite-difference methods, instead treated together by linking spatial-grid interval and time step via common scale factor related to speed of sound.
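The idea of linking the spatial-grid interval and the time step through a common scale factor related to the speed of sound can be illustrated with a minimal sketch (not the ARC-12970 scheme itself; the 1-D advection model and all parameter values below are assumptions for illustration):

```python
import numpy as np

# 1-D advection u_t + c u_x = 0 as a simple stand-in for a sound-wave system.
# The grid interval dx and time step dt are tied together by a common scale
# factor (the CFL number) involving the speed of sound c.
c = 340.0                      # speed of sound, m/s
nx = 200
dx = 1.0 / nx
cfl = 0.9                      # common scale factor linking dt and dx
dt = cfl * dx / c              # time step derived from grid interval via c

x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse
u0_max = u.max()

for _ in range(100):           # first-order upwind update (valid for c > 0)
    u = u - c * dt / dx * (u - np.roll(u, 1))

print(u.max())                 # amplitude stays bounded for cfl <= 1
```

For cfl <= 1 the update is stable, so the pulse advects with bounded amplitude; choosing dt independently of dx would break this.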
Dijets at large rapidity intervals
Pope, B G
2001-01-01
Inclusive dijet production at large pseudorapidity intervals (Δη) between the two jets has been suggested as a regime for observing BFKL dynamics. We have measured the dijet cross section for large Δη in pp̄ collisions at √s = 1800 and 630 GeV using the DØ detector. The partonic cross section increases strongly with the size of Δη. The observed growth is even stronger than expected on the basis of BFKL resummation in the leading logarithmic approximation. The growth of the partonic cross section can be accommodated with an effective BFKL intercept of α_BFKL(20 GeV) = 1.65 ± 0.07.
Modesto, Leonardo; Piva, Marco; Rachwał, Lesław
2016-07-01
We explicitly compute the one-loop exact beta function for a nonlocal extension of the standard gauge theory, in particular, Yang-Mills and QED. The theory, made of a weakly nonlocal kinetic term and a local potential of the gauge field, is unitary (ghost-free) and perturbatively super-renormalizable. Moreover, in the action we can always choose the potential (consisting of one "killer operator") to make zero the beta function of the running gauge coupling constant. The outcome is a UV finite theory for any gauge interaction. Our calculations are done in D =4 , but the results can be generalized to even or odd spacetime dimensions. We compute the contribution to the beta function from two different killer operators by using two independent techniques, namely, the Feynman diagrams and the Barvinsky-Vilkovisky traces. By making the theories finite, we are able to solve also the Landau pole problems, in particular, in QED. Without any potential, the beta function of the one-loop super-renormalizable theory shows a universal Landau pole in the running coupling constant in the ultraviolet regime (UV), regardless of the specific higher-derivative structure. However, the dressed propagator shows neither the Landau pole in the UV nor the singularities in the infrared regime (IR).
DEFF Research Database (Denmark)
Ottink, Marco; Brunskog, Jonas; Jeong, Cheol-Ho
2016-01-01
absorbers at oblique incidence in situ. Due to the edge diffraction effect, oblique incidence methods considering an infinite sample fail to measure the absorption coefficient at large incidence angles of finite samples. This paper aims for the development of a measurement method that accounts...... for the finiteness of the absorber. A sound field model, which accounts for scattering from the finite absorber edges, assuming plane wave incidence is derived. A significant influence of the finiteness on the radiation impedance and the corresponding absorption coefficient is found. A finite surface method, which...
Some Characterizations of Convex Interval Games
Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.
2008-01-01
This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.
Implicit finite difference methods on composite grids
Mastin, C. Wayne
1987-01-01
Techniques for eliminating time lags in the implicit finite-difference solution of partial differential equations are investigated analytically, with a focus on transient fluid dynamics problems on overlapping multicomponent grids. The fundamental principles of the approach are explained, and the method is shown to be applicable to both rectangular and curvilinear grids. Numerical results for sample problems are compared with exact solutions in graphs, and good agreement is demonstrated.
Finite, primitive and euclidean spaces
Directory of Open Access Journals (Sweden)
Efim Khalimsky
1988-01-01
Full Text Available Integer and digital spaces are playing a significant role in digital image processing, computer graphics, computer tomography, robot vision, and many other fields dealing with finitely or countably many objects. It is proven here that every finite T0-space is a quotient space of a subspace of some simplex, i.e. of some subspace of a Euclidean space. Thus finite and digital spaces can be considered as abstract simplicial structures of subspaces of Euclidean spaces. Primitive subspaces of finite, digital, and integer spaces are introduced. They prove to be useful in the investigation of connectedness structure, which can be represented as a poset, and also in consideration of the dimension of finite spaces. Essentially T0-spaces and finitely connected and primitively path connected spaces are discussed.
A unified approach to finite-time hyperbolicity which extends finite-time Lyapunov exponents
Doan, T. S.; Karrasch, D.; Nguyen, T. Y.; Siegmund, S.
A hyperbolicity notion for linear differential equations x˙=A(t)x, t∈[t-,t+], is defined which unifies different existing notions like finite-time Lyapunov exponents (Haller, 2001, [13], Shadden et al., 2005, [24]), uniform or M-hyperbolicity (Haller, 2001, [13], Berger et al., 2009, [6]) and (t-,(t+-t-))-dichotomy (Rasmussen, 2010, [21]). Its relation to the dichotomy spectrum (Sacker and Sell, 1978, [23], Siegmund, 2002, [26]), D-hyperbolicity (Berger et al., 2009, [6]) and real parts of the eigenvalues (in case A is constant) is described. We prove a spectral theorem and provide an approximation result for the spectral intervals.
STABILITY FOR SEVERAL TYPES OF INTERVAL MATRICES
Institute of Scientific and Technical Information of China (English)
Nian Xiaohong; Gao Jintai
1999-01-01
The robust stability of some types of time-varying interval matrices and nonlinear time-varying interval matrices is considered, and some sufficient conditions for robust stability of such interval matrices are given. The main results of this paper depend only on the vertex set of the interval matrices and can therefore be easily applied to test the robust stability of interval matrices. Finally, some examples are given to illustrate the results.
Interval Arithmetic for Nonlinear Problem Solving
2013-01-01
Implementation of interval arithmetic in complex problems has been hampered by the tedious programming exercise needed to develop a particular implementation. In order to improve productivity, the use of interval mathematics is demonstrated using the computing platform INTLAB that allows for the development of interval-arithmetic-based programs more efficiently than with previous interval-arithmetic libraries. An interval-Newton Generalized-Bisection (IN/GB) method is developed in this platfo...
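The interval-Newton idea behind IN/GB can be sketched outside INTLAB with a toy interval class (the class, its operators, and the example function are assumptions for illustration, not INTLAB's API):

```python
# Toy interval type illustrating the arithmetic that INTLAB provides natively.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __truediv__(self, other):      # assumes 0 is not in the divisor
        cands = [self.lo / other.lo, self.lo / other.hi,
                 self.hi / other.lo, self.hi / other.hi]
        return Interval(min(cands), max(cands))

def newton_step(X, f, df):
    """One interval-Newton contraction: N(X) = m - f(m)/F'(X), intersected with X."""
    m = 0.5 * (X.lo + X.hi)
    N = Interval(m, m) - Interval(f(m), f(m)) / df(X)
    return Interval(max(X.lo, N.lo), min(X.hi, N.hi))

# Enclose sqrt(2): root of f(x) = x^2 - 2 on [1, 2]; f'([1, 2]) = [2, 4].
X = Interval(1.0, 2.0)
for _ in range(6):
    X = newton_step(X, lambda x: x * x - 2.0,
                    lambda I: Interval(2.0 * I.lo, 2.0 * I.hi))
print(X.lo, X.hi)   # narrow interval around 1.41421356...
```

Because the derivative enclosure covers all slopes on X, the contracted interval still contains the root; bisection (the "GB" part) handles the case where the division interval contains zero.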
Boundary implications for frequency response of interval FIR and IIR filters
Bose, N. K.; Kim, K. D.
1991-01-01
It is shown that vertex implication results in parameter space apply to interval trigonometric polynomials. Subsequently, it is shown that the frequency responses of both interval FIR and IIR filters are bounded by the frequency responses of certain extreme filters. The results apply directly in the evaluation of properties of designed filters, especially because it is more realistic to bound the filter coefficients from above and below instead of determining those with infinite precision because of finite arithmetic effects. Illustrative examples are provided to show how the extreme filters might be easily derived in any specific interval FIR or IIR filter design problem.
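The extreme-filter bound for interval FIR filters can be illustrated with a short sketch (the coefficient intervals and frequency grid below are invented for illustration): since |H(ω)| is convex in the tap vector, its maximum over the coefficient box is attained at a vertex, so every interior filter's magnitude response lies below the pointwise envelope of the vertex ("extreme") filters.

```python
import itertools
import numpy as np

# Interval FIR filter: each tap h[k] lies in [lo[k], hi[k]].
lo = np.array([0.9, -0.55, 0.28])
hi = np.array([1.1, -0.45, 0.32])

w = np.linspace(0.0, np.pi, 256)                   # frequency grid
E = np.exp(-1j * np.outer(w, np.arange(len(lo))))  # matrix of e^{-jkw}

# Vertex ("extreme") filters: every combination of interval endpoints.
vertex_mags = [np.abs(E @ np.array(taps))
               for taps in itertools.product(*zip(lo, hi))]
upper = np.max(vertex_mags, axis=0)                # pointwise upper envelope

# |H(w)| is convex in the taps, so any interior filter stays below it.
rng = np.random.default_rng(0)
taps = lo + (hi - lo) * rng.random(len(lo))
assert np.all(np.abs(E @ taps) <= upper + 1e-12)
print(float(upper.max()))
```

The lower envelope is subtler (the minimum of a convex function over a box need not sit at a vertex), which is why the paper's vertex-implication results matter.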
Finite Random Domino Automaton
Bialecki, Mariusz
2012-01-01
The finite version of the Random Domino Automaton (FRDA) - a recently proposed toy model of earthquakes - is investigated. The respective set of equations describing the stationary state of the FRDA is derived and compared with the infinite case. It is shown that for a system of big size these equations coincide with the RDA equations. We demonstrate the non-existence of exact equations for size N bigger than 4 and propose appropriate approximations, whose quality is studied in examples obtained within the Markov chain framework. We derive several exact formulas describing properties of the automaton, including time aspects. In particular, a way to achieve quasi-periodic-like behaviour of the RDA is presented. Thus, based on the same microscopic rule - which produces exponential and inverse-power-like distributions - we extend the applicability of the model to quasi-periodic phenomena.
Finite energy electroweak dyon
Energy Technology Data Exchange (ETDEWEB)
Kimm, Kyoungtae [Seoul National University, Faculty of Liberal Education, Seoul (Korea, Republic of); Yoon, J.H. [Konkuk University, Department of Physics, College of Natural Sciences, Seoul (Korea, Republic of); Cho, Y.M. [Konkuk University, Administration Building 310-4, Seoul (Korea, Republic of); Seoul National University, School of Physics and Astronomy, Seoul (Korea, Republic of)
2015-02-01
The latest MoEDAL experiment at LHC to detect the electroweak monopole makes the theoretical prediction of the monopole mass an urgent issue. We discuss three different ways to estimate the mass of the electroweak monopole. We first present the dimensional and scaling arguments which indicate the monopole mass to be around 4 to 10 TeV. To justify this we construct finite energy analytic dyon solutions which could be viewed as the regularized Cho-Maison dyon, modifying the coupling strength at short distance. Our result demonstrates that a genuine electroweak monopole whose mass scale is much smaller than the grand unification scale can exist, which can actually be detected at the present LHC. (orig.)
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Finite elements and finite differences for transonic flow calculations
Hafez, M. M.; Murman, E. M.; Wellford, L. C.
1978-01-01
The paper reviews the chief finite difference and finite element techniques used for numerical solution of nonlinear mixed elliptic-hyperbolic equations governing transonic flow. The forms of the governing equations for unsteady two-dimensional transonic flow considered are the Euler equation, the full potential equation in both conservative and nonconservative form, the transonic small-disturbance equation in both conservative and nonconservative form, and the hodograph equations for the small-disturbance case and the full-potential case. Finite difference methods considered include time-dependent methods, relaxation methods, semidirect methods, and hybrid methods. Finite element methods include finite element Lax-Wendroff schemes, implicit Galerkin method, mixed variational principles, dual iterative procedures, optimal control methods and least squares.
Adaptive boundaryless finite-difference method.
Lopez-Mago, Dorilian; Gutiérrez-Vega, Julio C
2013-02-01
The boundaryless beam propagation method uses a mapping function to transform the infinite real space into a finite-size computational domain [Opt. Lett. 21, 4 (1996)]. This leads to a bounded field that avoids the artificial reflections produced by the computational window. However, the method suffers from frequency aliasing problems, limiting the physical region to be sampled. We propose an adaptive boundaryless method that concentrates the higher density of sampling points in the region of interest. The method is implemented in Cartesian and cylindrical coordinate systems. It keeps the same advantages of the original method but increases accuracy and is not affected by frequency aliasing.
A New Finite Interval Lifetime Distribution Model for Fitting Bathtub-Shaped Failure Rate Curve
Directory of Open Access Journals (Sweden)
Xiaohong Wang
2015-01-01
Full Text Available This paper raises a new four-parameter fitting model to describe the bathtub curve, which is widely used in research on components' life analysis; it then explains the model parameters and provides a parameter estimation method as well as application examples using some well-known lifetime data. Comparative analysis between the new model and some existing bathtub curve fitting models shows that the new fitting model is very convenient and its parameters are clear; moreover, the model is universally applicable: it is suitable not only for bathtub-shaped failure rate curves but also for constant, increasing, and decreasing failure rate curves.
Finite groups with transitive semipermutability
Institute of Scientific and Technical Information of China (English)
Lifang WANG; Yanming WANG
2008-01-01
A group G is said to be a T-group (resp. PT-group, PST-group), if normality (resp. permutability, S-permutability) is a transitive relation. In this paper, we get the characterization of finite solvable PST-groups. We also give a new characterization of finite solvable PT-groups.
Directory of Open Access Journals (Sweden)
Michael Hammond
2008-06-01
Full Text Available Finite-state methods are finding ever increasing use among linguists as a way of modeling phonology and morphology and as a method for manipulating and modeling text. This paper describes a suite of very simple finite-state tools written by the author that can be used to investigate this area and to perform simple analyses.
Solution of Finite Element Equations
DEFF Research Database (Denmark)
Krenk, Steen
An important step in solving any problem by the finite element method is the solution of the global equations. Numerical solution of linear equations is a subject covered in most courses in numerical analysis. However, the equations encountered in most finite element applications have some special...
Control System Design Using Finite Laplace Transform Theory
Das, Subhendu
2011-01-01
The Laplace transform theory violates a very fundamental requirement of all engineering systems. We show that this theory assumes that all signals must exist over an infinite time interval. Since in engineering this infinite-time assumption is neither meaningful nor feasible, this paper presents a design for linear control systems using the well-known theory of the Finite Laplace transform (FLT). The major contributions of this paper can be listed as: (a) A design principle for linear control systems us...
Deng, Bai-Chuan; Yun, Yong-Huan; Ma, Pan; Lin, Chen-Chen; Ren, Da-Bing; Liang, Yi-Zeng
2015-03-21
In this study, a new algorithm for wavelength interval selection, known as interval variable iterative space shrinkage approach (iVISSA), is proposed based on the VISSA algorithm. It combines global and local searches to iteratively and intelligently optimize the locations, widths and combinations of the spectral intervals. In the global search procedure, it inherits the merit of soft shrinkage from VISSA to search the locations and combinations of informative wavelengths, whereas in the local search procedure, it utilizes the information of continuity in spectroscopic data to determine the widths of wavelength intervals. The global and local search procedures are carried out alternatively to realize wavelength interval selection. This method was tested using three near infrared (NIR) datasets. Some high-performing wavelength selection methods, such as synergy interval partial least squares (siPLS), moving window partial least squares (MW-PLS), competitive adaptive reweighted sampling (CARS), genetic algorithm PLS (GA-PLS) and interval random frog (iRF), were used for comparison. The results show that the proposed method is very promising with good results both on prediction capability and stability. The MATLAB codes for implementing iVISSA are freely available on the website: .
Massively Parallel Finite Element Programming
Heister, Timo
2010-01-01
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
Direct Interval Forecasting of Wind Power
DEFF Research Database (Denmark)
Wan, Can; Xu, Zhao; Pinson, Pierre
2013-01-01
This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...
Robust misinterpretation of confidence intervals.
Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan
2014-10-01
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students-all in the field of psychology-were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
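The correct frequentist reading of a 95% CI (the reading the six false statements miss) can be demonstrated with a small coverage simulation; all parameter values below are arbitrary illustrations:

```python
import numpy as np

# The correct reading of a 95% CI: under repeated sampling, about 95% of the
# intervals constructed this way cover the fixed true parameter. No single
# interval has a "95% probability of containing" the parameter.
rng = np.random.default_rng(1)
true_mu, sigma, n, z = 10.0, 2.0, 50, 1.96

trials = 20000
covered = 0
for _ in range(trials):
    sample = rng.normal(true_mu, sigma, n)
    half = z * sample.std(ddof=1) / np.sqrt(n)
    m = sample.mean()
    covered += (m - half <= true_mu <= m + half)

print(covered / trials)   # close to 0.95 (slightly below, since sigma is estimated and z is used instead of t)
```

Each interval either does or does not contain true_mu; the 95% describes the procedure, not any single realized interval.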
Sampling errors in the measurement of rain and hail parameters
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain, or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity), which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of the particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
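For an exponential size distribution, the compound-Poisson moment formulas give a closed form for the FSD that can be checked numerically. The sketch below is an illustration of the statistical setup, not the authors' code; under the assumptions of Poisson counts with mean λ and exponential diameters, FSD = sqrt(binom(2n, n)/λ), independent of c and of the mean diameter:

```python
import math
import numpy as np

# FSD of X = sum_i c * D_i**n when the particle count is Poisson(lam) and the
# diameters D are exponential with mean D0. Compound-Poisson moments give
# FSD = sqrt( E[D^(2n)] / (lam * E[D^n]^2) ), and E[D^k] = k! * D0^k for the
# exponential distribution, so FSD = sqrt( binom(2n, n) / lam ).
def fsd_analytic(lam, n):
    return math.sqrt(math.comb(2 * n, n) / lam)

def fsd_monte_carlo(lam, n, D0=1.0, c=1.0, trials=20000, seed=2):
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, trials)
    X = np.array([c * np.sum(rng.exponential(D0, k) ** n) for k in counts])
    return X.std() / X.mean()

lam, n = 50.0, 3            # e.g. n = 3 for a mass-like integrated property
print(fsd_analytic(lam, n))       # sqrt(20/50) ~ 0.632
print(fsd_monte_carlo(lam, n))    # close to the analytic value
```

Note how the FSD grows with n: higher-moment properties such as reflectivity (n = 6) are far noisier than concentration (n = 0) for the same sample size.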
Scaling of light and dark time intervals.
Marinova, J
1978-01-01
Scaling of light and dark time intervals of 0.1 to 1.1 s is performed by the method of magnitude estimation with respect to a given standard. The standards differ in duration and type (light and dark). The light intervals are subjectively estimated as longer than the dark ones. The relation between the mean interval estimates and their magnitudes is linear for both light and dark intervals.
Generalised Interval-Valued Fuzzy Soft Set
Shawkat Alkhazaleh; Abdul Razak Salleh
2012-01-01
We introduce the concept of generalised interval-valued fuzzy soft set and its operations and study some of their properties. We give applications of this theory in solving a decision making problem. We also introduce a similarity measure of two generalised interval-valued fuzzy soft sets and discuss its application in a medical diagnosis problem. Keywords: fuzzy set; soft set; fuzzy soft set; generalised fuzzy soft set; generalised interval-valued fuzzy soft set; interval-valued fuzz...
Finite element and finite difference methods in electromagnetic scattering
Morgan, MA
2013-01-01
This second volume in the Progress in Electromagnetic Research series examines recent advances in computational electromagnetics, with emphasis on scattering, as brought about by new formulations and algorithms which use finite element or finite difference techniques. Containing contributions by some of the world's leading experts, the papers thoroughly review and analyze this rapidly evolving area of computational electromagnetics. Covering topics ranging from the new finite-element based formulation for representing time-harmonic vector fields in 3-D inhomogeneous media using two coupled sca
A finiteness result for post-critically finite polynomials
Ingram, Patrick
2010-01-01
We show that the set of complex points in the moduli space of polynomials of degree d corresponding to post-critically finite polynomials is a set of algebraic points of bounded height. It follows that for any B, the set of conjugacy classes of post-critically finite polynomials of degree d with coefficients of algebraic degree at most B is a finite and effectively computable set. In the case d=3 and B=1 we perform this computation. The proof of the main result comes down to finding a relation between the "naive" height on the moduli space, and Silverman's critical height.
The Fuzzy Set by Fuzzy Interval
Dr.Pranita Goswami
2011-01-01
The fuzzy set by fuzzy interval is a triangular fuzzy number lying between two specified limits. The limits, taken by the fuzzy interval to be not greater than 2 and not less than -2, are discussed in this paper. Through the fuzzy interval we arrive at exactness, which is a fuzzy measure and fuzzy integral.
The Fundamental Theorems of Interval Analysis
van Emden, M. H.; Moa, B.
2007-01-01
Expressions are not functions. Confusing the two concepts or failing to define the function that is computed by an expression weakens the rigour of interval arithmetic. We give such a definition and continue with the required re-statements and proofs of the fundamental theorems of interval arithmetic and interval analysis. Revision Feb. 10, 2009: added reference to and acknowledgement of P. Taylor.
An Adequate First Order Logic of Intervals
DEFF Research Database (Denmark)
Chaochen, Zhou; Hansen, Michael Reichhardt
1998-01-01
This paper introduces left and right neighbourhoods as primitive interval modalities to define other unary and binary modalities of intervals in a first order logic with interval length. A complete first order logic for the neighbourhood modalities is presented. It is demonstrated how the logic c...
Consistency and Refinement for Interval Markov Chains
DEFF Research Database (Denmark)
Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel;
2012-01-01
Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...
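The standard semantics can be illustrated with a small membership check (the matrices below are invented examples): a concrete Markov chain implements an IMC specification when each row of its transition matrix is a probability distribution and every transition probability lies within its interval.

```python
import numpy as np

# An IMC specifies, for each transition i -> j, an interval [L[i][j], U[i][j]].
# A concrete Markov chain P satisfies the IMC if every row of P is a
# probability distribution and each entry lies in its interval.
def satisfies(P, L, U, tol=1e-9):
    P, L, U = map(np.asarray, (P, L, U))
    rows_ok = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    bounds_ok = np.all((P >= L - tol) & (P <= U + tol))
    return bool(rows_ok and bounds_ok)

L = [[0.0, 0.5], [0.2, 0.0]]
U = [[0.5, 1.0], [0.8, 0.8]]
P_good = [[0.3, 0.7], [0.5, 0.5]]
P_bad = [[0.6, 0.4], [0.5, 0.5]]   # 0.6 > U[0][0], so the IMC is violated
print(satisfies(P_good, L, U))  # True
print(satisfies(P_bad, L, U))   # False
```

Consistency of an IMC then amounts to asking whether at least one such P exists; refinement compares the sets of implementations of two specifications.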
Directory of Open Access Journals (Sweden)
Sandro Manuel Mueller
Full Text Available Aerobic high-intensity interval training (HIT) improves cardiovascular capacity but may reduce the finite work capacity above critical power (W') and lead to atrophy of myosin heavy chain (MyHC-2) fibers. Since whole-body vibration may enhance indices of anaerobic performance, we examined whether side-alternating whole-body vibration as a replacement for the active rest intervals during a 4 x 4 min HIT prevents decreases in anaerobic performance and capacity without compromising gains in aerobic function. Thirty-three young recreationally active men were randomly assigned to conduct either conventional 4 x 4 min HIT, HIT with 3 min of WBV at 18 Hz (HIT+VIB18) or 30 Hz (HIT+VIB30) in lieu of conventional rest intervals, or WBV at 30 Hz (VIB30). Pre and post training, critical power (CP), W', cellular muscle characteristics, as well as cardiovascular and neuromuscular variables were determined. W' (-14.3%, P = 0.013), maximal voluntary torque (-8.6%, P = 0.001), rate of force development (-10.5%, P = 0.018), maximal jumping power (-6.3%, P = 0.007) and cross-sectional areas of MyHC-2A fibers (-6.4%, P = 0.044) were reduced only after conventional HIT. CP, V̇O2peak, peak cardiac output, and overall capillary-to-fiber ratio were increased after HIT, HIT+VIB18, and HIT+VIB30 without differences between groups. HIT-specific reductions in anaerobic performance and capacity were prevented by replacing active rest intervals with side-alternating whole-body vibration, notably without compromising aerobic adaptations. Therefore, competitive cyclists (and potentially other endurance-oriented athletes) may benefit from replacing the active rest intervals during aerobic HIT with side-alternating whole-body vibration. ClinicalTrials.gov Identifier: NCT01875146.
Composing Cardinal Direction Relations Basing on Interval Algebra
Chen, Juan; Jia, Haiyang; Liu, Dayou; Zhang, Changhai
Direction relations between extended spatial objects are important commonsense knowledge. Skiadopoulos proposed a formal model for representing direction relations between compound regions (the finite union of simple regions), known as the SK-model. It is perhaps one of the most cognitively plausible models for qualitative direction information and has attracted interest from artificial intelligence and geographic information systems. Since Allen first used composition tables to process time-interval constraints, composition has become the key technique in qualitative spatial reasoning for checking consistency. Due to the massive number of basic directions in the SK-model, its composition becomes extraordinarily complex. This paper proposes a novel algorithm for the composition. Based on the concepts of smallest rectangular directions and their original directions, it transforms the composition of basic cardinal direction relations into the composition of interval relations corresponding to Allen's interval algebra. Compared with existing methods, this algorithm has quite good dimensional extensibility; that is, it can be easily transferred to three-dimensional space with a few modifications.
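Allen-style composition, the key operation referred to above, can be sketched by brute-force enumeration over small integer intervals (a didactic stand-in for a composition table, not the SK-model algorithm itself):

```python
from itertools import product

# Allen's 13 interval relations, decided by endpoint comparison.
def allen(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return '<'          # before
    if b2 < a1:  return '>'          # after
    if a2 == b1: return 'm'          # meets
    if b2 == a1: return 'mi'         # met-by
    if (a1, a2) == (b1, b2): return '='
    if a1 == b1: return 's' if a2 < b2 else 'si'   # starts / started-by
    if a2 == b2: return 'f' if a1 > b1 else 'fi'   # finishes / finished-by
    if b1 < a1 and a2 < b2: return 'd'             # during
    if a1 < b1 and b2 < a2: return 'di'            # contains
    return 'o' if a1 < b1 else 'oi'                # overlaps / overlapped-by

def compose(r1, r2, n=8):
    """Brute-force composition: all relations allen(a, c) consistent with
    allen(a, b) = r1 and allen(b, c) = r2, over small integer intervals."""
    ivs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return {allen(a, c) for a, b, c in product(ivs, repeat=3)
            if allen(a, b) == r1 and allen(b, c) == r2}

print(compose('<', '<'))   # {'<'}: before composed with before is before
```

Composition tables precompute exactly these sets; consistency checking then propagates them over constraint networks instead of enumerating intervals.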
Comparing confidence intervals for Goodman and Kruskal's gamma coefficient
van der Ark, L.A.; van Aert, R.C.M.
2015-01-01
This study was motivated by the question of which type of confidence interval (CI) one should use to summarize the sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the
Quantum state discrimination bounds for finite sample size
Audenaert, Koenraad M R; Verstraete, Frank
2012-01-01
In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, rho or sigma. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking rho for sigma, or the other way around) are treated as of equal importance or not. Recent results on the quantum Chernoff and Hoeffding bounds show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between rho and sigma (the Chernoff distance and the Hoeffding distances, respectively). While these results provide a complete solution for the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios ...
quadratic spline finite element method
Directory of Open Access Journals (Sweden)
A. R. Bahadir
2002-01-01
Full Text Available The problem of heat transfer in a Positive Temperature Coefficient (PTC) thermistor, which may form one element of an electric circuit, is solved numerically by a finite element method. The approach used is based on a Galerkin finite element method using quadratic splines as shape functions. The resulting system of ordinary differential equations is solved by the finite difference method. Comparison is made with numerical and analytical solutions, and the accuracy of the computed solutions indicates that the method is well suited for the solution of the PTC thermistor problem.
Automatic Construction of Finite Algebras
Institute of Scientific and Technical Information of China (English)
张健
1995-01-01
This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.
Finite element computational fluid mechanics
Baker, A. J.
1983-01-01
Finite element analysis as applied to the broad spectrum of computational fluid mechanics is analyzed. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher order accuracy and convergence. The algorithm is extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.
Reinforcing value of interval and continuous physical activity in children.
Barkley, Jacob E; Epstein, Leonard H; Roemmich, James N
2009-08-04
During play children engage in short bouts of intense activity, much like interval training. This natural preference for interval-type activity may have important implications for prescribing the most motivating type of physical activity, but the motivation of children to be physically active in an interval or continuous fashion has not yet been examined. In the present study, ventilatory threshold (VT) and VO2 peak were determined in boys (n=16) and girls (n=16) aged 10±1.3 years. On another day, children sampled interval and continuous constant-load physical activity protocols on a cycle ergometer at intensities above and below VT. The physical activity protocols were matched for energy expenditure. Children then completed an operant button-pressing task using a progressive fixed-ratio schedule to assess the relative reinforcing value (RRV) of interval versus continuous physical activity. The number of button presses performed to gain access to interval or continuous physical activity and the output maximum (O(max)) were the primary outcome variables. Children performed more button presses to gain access to interval than to continuous physical activity when exercising both above and below VT. Interval physical activity was thus more reinforcing than continuous constant-load physical activity, suggesting that short bouts of activity are more motivating for children than longer, continuous activity.
Fusing photovoltaic data for improved confidence intervals
Directory of Open Access Journals (Sweden)
Ansgar Steland
2017-01-01
Full Text Available Characterizing and testing photovoltaic modules requires carefully made measurements on important variables such as the power output under standard conditions. When additional data is available, which has been collected using a different measurement system and therefore may be of different accuracy, the question arises how one can combine the information present in both data sets. In some cases one even has prior knowledge about the ordering of the variances of the measurement errors, which is not fully taken into account by commonly known estimators. We discuss several statistical estimators to combine the sample means of independent series of measurements, both under the assumption of heterogeneous variances and ordered variances. The critical issue is then to assess the estimator’s variance and to construct confidence intervals. We propose and discuss the application of a new jackknife variance estimator devised by [1] to such photovoltaic data, in order to assess the variability of common mean estimation under heterogeneous and ordered variances in a reliable and nonparametric way. When serial correlations are present, which usually affect the marginal variances, it is proposed to construct a thinned data set by downsampling the series in such a way that autocorrelations are removed or dampened. We propose a data adaptive procedure which downsamples a series at irregularly spaced time points in such a way that the autocorrelations are minimized. The procedures are illustrated by applying them to real photovoltaic power output measurements from two different sun light flashers. In addition, focusing on simulations governed by real photovoltaic data, we investigate the accuracy of the jackknife approach and compare it with other approaches. Among those is a variance estimator based on Nair’s formula for Gaussian data and, as a parametric alternative, two Bayesian models. We investigate the statistical accuracy of the resulting confidence
Finite volume form factors and correlation functions at finite temperature
Pozsgay, Balázs
2009-01-01
In this thesis we investigate finite size effects in 1+1 dimensional integrable QFT. In particular we consider matrix elements of local operators (finite volume form factors) and vacuum expectation values and correlation functions at finite temperature. In the first part of the thesis we give a complete description of the finite volume form factors in terms of the infinite volume form factors (solutions of the bootstrap program) and the S-matrix of the theory. The calculations are correct to all orders in the inverse of the volume, only exponentially decaying (residual) finite size effects are neglected. We also consider matrix elements with disconnected pieces and determine the general rule for evaluating such contributions in a finite volume. The analytic results are tested against numerical data obtained by the truncated conformal space approach in the Lee-Yang model and the Ising model in a magnetic field. In a separate section we also evaluate the leading exponential correction (the $\\mu$-term) associate...
Finite-Time Attractivity for Diagonally Dominant Systems with Off-Diagonal Delays
Directory of Open Access Journals (Sweden)
T. S. Doan
2012-01-01
Full Text Available We introduce a notion of attractivity for delay equations which are defined on bounded time intervals. Our main result shows that linear delay equations are finite-time attractive, provided that the delay is only in the coupling terms between different components, and the system is diagonally dominant. We apply this result to a nonlinear Lotka-Volterra system and show that the delay is harmless and does not destroy finite-time attractivity.
Event- and interval-based measurement of stuttering: a review.
Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret
2015-01-01
Event- and interval-based measurements are two different ways of computing frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors in interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). To provide a review related to reproducibility of event-based and time-interval measurement, and to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement; in addition, to determine if it is possible to quantify the agreement between the two methodologies The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed values of inter- and intra-judge greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values were beyond the references for high reproducibility values for both methodologies. Accuracy (regarding the closeness of raters' judgements with an established criterion), intra- and inter-judge agreement were higher for trained groups when compared with non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. A duration of 5 s for an interval appears to be
Branicki, Michal
2009-01-01
We consider issues associated with the Lagrangian characterisation of flow structures arising in aperiodically time-dependent vector fields that are only known on a finite time interval. A major motivation for the consideration of this problem arises from the desire to study transport and mixing problems in geophysical flows where the flow is obtained from a numerical solution, on a finite space-time grid, of an appropriate partial differential equation model for the velocity field. Of particular interest is the characterisation, location, and evolution of "transport barriers" in the flow, i.e. material curves and surfaces. We argue that a general theory of Lagrangian transport has to account for the effects of transient flow phenomena which are not captured by the infinite-time notions of hyperbolicity even for flows defined for all time. Notions of finite-time hyperbolic trajectories, their finite time stable and unstable manifolds, as well as finite-time Lyapunov exponent (FTLE) fields and associated Lagra...
Rajabpour, M. A.
2016-12-01
We calculate formation probabilities of the ground state of finite-size quantum critical chains using conformal field theory (CFT) techniques. In particular, we calculate the formation probability of one interval in a finite open chain and also the formation probability of two disjoint intervals in a finite periodic system. The presented formulas can also be interpreted as the Casimir energy of needles in particular geometries. We numerically check the validity of the exact CFT results in the case of the transverse field Ising chain.
Language dynamics in finite populations.
Komarova, Natalia L; Nowak, Martin A
2003-04-01
Any mechanism of language acquisition can only learn a restricted set of grammars. The human brain contains a mechanism for language acquisition which can learn a restricted set of grammars. The theory of this restricted set is universal grammar (UG). UG has to be sufficiently specific to induce linguistic coherence in a population. This phenomenon is known as "coherence threshold". Previously, we have calculated the coherence threshold for deterministic dynamics and infinitely large populations. Here, we extend the framework to stochastic processes and finite populations. If there is selection for communicative function (selective language dynamics), then the analytic results for infinite populations are excellent approximations for finite populations; as expected, finite populations need a slightly higher accuracy of language acquisition to maintain coherence. If there is no selection for communicative function (neutral language dynamics), then linguistic coherence is only possible for finite populations.
Combinatorial Properties of Finite Models
Hubicka, Jan
2010-01-01
We study countable embedding-universal and homomorphism-universal structures and unify results related to both of these notions. We show that many universal and ultrahomogeneous structures allow a concise description (called here a finite presentation). Extending classical work of Rado (for the random graph), we find a finite presentation for each of the following classes: homogeneous undirected graphs, homogeneous tournaments and homogeneous partially ordered sets. We also give a finite presentation of the rational Urysohn metric space and some homogeneous directed graphs. We survey well known structures that are finitely presented. We focus on structures endowed with natural partial orders and prove their universality. These partial orders include partial orders on sets of words, partial orders formed by geometric objects, grammars, polynomials and homomorphism orders for various combinatorial objects. We give a new combinatorial proof of the existence of embedding-universal objects for homomorphism-defined...
Directory of Open Access Journals (Sweden)
Wararit Panichkitkosolkul
2010-01-01
Full Text Available A new asymptotic confidence interval, constructed by using a confidence interval for the Poisson mean, is proposed for the coefficient of variation of the Poisson distribution. The following confidence intervals are considered: McKay’s confidence interval, Vangel’s confidence interval and the proposed confidence interval. Using Monte Carlo simulations, the coverage probabilities and expected lengths of these confidence intervals are compared. Simulation results show that all scenarios of the new asymptotic confidence interval have desired minimum coverage probabilities of 0.95 and 0.90. In addition, the newly proposed confidence interval is better than the existing ones in terms of coverage probability and expected length for all sample sizes and parameter values considered in this paper.
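One simple way to build such an interval, sketched here purely for illustration (it is not necessarily the paper's proposed construction), is to transform a Wald confidence interval for the Poisson mean λ into one for the coefficient of variation CV = 1/√λ:

```python
import math

def poisson_cv_ci(xs, z=1.96):
    """Asymptotic CI for the coefficient of variation 1/sqrt(lambda) of a
    Poisson distribution, obtained by transforming a Wald CI for the mean.
    An illustrative sketch, not the paper's exact interval."""
    n = len(xs)
    xbar = sum(xs) / n
    half = z * math.sqrt(xbar / n)      # Wald CI for lambda: xbar +/- half
    lo_lam, hi_lam = xbar - half, xbar + half
    # CV = 1/sqrt(lambda) is decreasing in lambda, so the endpoints swap.
    return 1 / math.sqrt(hi_lam), 1 / math.sqrt(lo_lam)
```

For a sample with mean 4 and n = 100, this yields an interval around the true CV of 0.5.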
Programming the finite element method
Smith, I M; Margetts, L
2013-01-01
Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method. This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, convection boundary c
Directory of Open Access Journals (Sweden)
M. Branicki
2010-01-01
Full Text Available We consider issues associated with the Lagrangian characterisation of flow structures arising in aperiodically time-dependent vector fields that are only known on a finite time interval. A major motivation for the consideration of this problem arises from the desire to study transport and mixing problems in geophysical flows where the flow is obtained from a numerical solution, on a finite space-time grid, of an appropriate partial differential equation model for the velocity field. Of particular interest is the characterisation, location, and evolution of transport barriers in the flow, i.e. material curves and surfaces. We argue that a general theory of Lagrangian transport has to account for the effects of transient flow phenomena which are not captured by the infinite-time notions of hyperbolicity even for flows defined for all time. Notions of finite-time hyperbolic trajectories, their finite time stable and unstable manifolds, as well as finite-time Lyapunov exponent (FTLE) fields and associated Lagrangian coherent structures have been the main tools for characterising transport barriers in the time-aperiodic situation. In this paper we consider a variety of examples, some with explicit solutions, that illustrate in a concrete manner the issues and phenomena that arise in the setting of finite-time dynamical systems. Of particular significance for geophysical applications is the notion of flow transition which occurs when finite-time hyperbolicity is lost or gained. The phenomena discovered and analysed in our examples point the way to a variety of directions for rigorous mathematical research in this rapidly developing and important area of dynamical systems theory.
Intervals in evolutionary algorithms for global optimization
Energy Technology Data Exchange (ETDEWEB)
Patil, R.B.
1995-05-01
Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with guaranteed, verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An efficient approach is proposed that combines the efficient strategy of interval global optimization methods with the robustness of evolutionary algorithms. In the proposed approach, search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the evolutionary process begins, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in search of the global optimum.
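The guaranteed-bounds ingredient mentioned above can be sketched with naive interval arithmetic: an interval extension of the objective yields verified lower and upper bounds over each box, and any box whose lower bound exceeds the best known upper bound cannot contain the global minimum. The objective f(t) = t² − 2t and the box layout below are our own illustrative choices, not the paper's test functions.

```python
# Naive interval arithmetic: intervals are (lo, hi) pairs.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def f_bounds(x):
    # Interval extension of f(t) = t*t - 2t over the box x.
    return iadd(imul(x, x), (-2 * x[1], -2 * x[0]))

def prune(boxes):
    """Discard boxes whose guaranteed lower bound exceeds the best
    guaranteed upper bound (the elimination step of the method)."""
    best_ub = min(f_bounds(b)[1] for b in boxes)
    return [b for b in boxes if f_bounds(b)[0] <= best_ub]
```

With boxes (0, 0.5), (0.9, 1.1) and (3, 4), the last box is provably suboptimal (the minimum of f lies at t = 1) and is eliminated.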
DEFF Research Database (Denmark)
Pinson, Pierre; Tastu, Julija
2014-01-01
A new score for the evaluation of interval forecasts, the so-called coverage width-based criterion (CWC), was proposed and utilized. This score has been used for the tuning (in-sample) and genuine evaluation (out-of-sample) of prediction intervals for various applications, e.g., electric load [1]...
Directory of Open Access Journals (Sweden)
Anjan Mukherjee
2016-08-01
Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS, in short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topology. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.
[The QT interval: standardization, limits and interpretation].
Ouali, S; Ben Salem, H; Gribaa, R; Kacem, S; Hammas, S; Fradi, S; Neffeti, E; Remedi, F; Boughzela, E
2012-02-01
Despite the clinical importance of ventricular repolarisation, it remains difficult to analyse. Conventionally, quantification of electrocardiographic ventricular repolarisation is performed with reference to the axis of the T wave and the QT interval duration. A variety of factors can prolong the QT interval, such as drug effects, electrolyte imbalances, and myocardial ischemia. The biggest risk with prolongation of the QT interval is the development of torsades de pointes. Commonly accepted reference ranges for the electrocardiogram (ECG) have been in use, with little change, for many years. Populations throughout the world differ in age, ethnic composition, and exposure to environmental factors. Recent studies have reported reference data for the QT interval in healthy populations and have evaluated the influence of age, gender, QRS duration and heart rate on this interval. In this review, we address several issues relating to the measurement and interpretation of the QT interval and its adjustment for rate, age, gender and QRS duration.
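The rate adjustment discussed in the review is commonly performed with Bazett's or Fridericia's correction; a minimal sketch, with units assumed to be QT in milliseconds and the RR interval in seconds:

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Heart-rate-corrected QT by Bazett's formula: QTc = QT / sqrt(RR)."""
    return qt_ms / math.sqrt(rr_s)

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia's cube-root correction: QTc = QT / RR**(1/3)."""
    return qt_ms / rr_s ** (1 / 3)
```

At 60 bpm (RR = 1 s) both corrections leave QT unchanged; at faster rates they scale the measured QT upward.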
Normal range of the atlanto-dental interval
Energy Technology Data Exchange (ETDEWEB)
Kim, Y. L.; Lee, S. C.; Lee, K. H.; Sung, J. H.; Joo, K. B.; Lee, S. R.; Hahm, C. K. [Hanyang University School of Medicine, Seoul (Korea, Republic of)
1986-08-15
The roentgenologic diagnosis of lateral subluxation of the atlanto-axial joint is very difficult because the only presentation is an increase of the atlanto-dental interval. This study was carried out with 70 volunteers to establish normal values of the atlanto-dental interval. We measured these intervals on lateral roentgenograms of the cervical spine in the neutral, flexion, and extension positions of the neck. The results were as follows: 1. The mean value of the atlanto-dental interval in all subjects was 1.54±0.52 mm in the neutral, 1.59±0.62 mm in the flexion, and 1.46±0.48 mm in the extension position. 2. After thirty years of age the atlanto-dental interval became narrower with aging. 3. Between the neutral and flexion positions there was no difference in atlanto-dental intervals, but in the extension position the interval was significantly narrowed.
Design of optimized Interval Arithmetic Multiplier
Directory of Open Access Journals (Sweden)
Rajashekar B.Shettar
2011-07-01
Full Text Available Many DSP and control applications require the user to know how various numerical errors (uncertainty) affect the result. This uncertainty is eliminated by replacing non-interval values with intervals. Since most DSPs operate in real-time environments, fast processors are required to implement interval arithmetic. The goal is to develop a platform in which interval arithmetic operations are performed at the same computational speed as in present-day signal processors. We have therefore proposed the design and implementation of an interval arithmetic multiplier, which operates on IEEE 754 numbers. The proposed unit consists of a floating-point CSD multiplier and an interval operation selector. This architecture implements an algorithm which is faster than the conventional algorithm for interval multiplication. The cost overhead of the proposed unit is 30% with respect to a conventional floating-point multiplier. The performance of the proposed architecture is better than that of a conventional CSD floating-point multiplier, as it can perform both interval multiplication and floating-point multiplication as well as interval comparisons.
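The speed-up in an optimized interval multiplier comes from a sign-case analysis: when neither operand straddles zero, only two of the four endpoint products are needed. A behavioural software sketch (illustrative only, not the paper's hardware design):

```python
def fast_interval_mul(a, b):
    """Interval multiplication [a1,a2]*[b1,b2] via sign cases.
    Only two multiplications are needed unless an operand contains 0."""
    (a1, a2), (b1, b2) = a, b
    if a1 >= 0 and b1 >= 0:            # both non-negative
        return (a1 * b1, a2 * b2)
    if a2 <= 0 and b2 <= 0:            # both non-positive
        return (a2 * b2, a1 * b1)
    if a1 >= 0 and b2 <= 0:            # a >= 0, b <= 0
        return (a2 * b1, a1 * b2)
    if a2 <= 0 and b1 >= 0:            # a <= 0, b >= 0
        return (a1 * b2, a2 * b1)
    # At least one operand straddles zero: fall back to all four products.
    p = (a1 * b1, a1 * b2, a2 * b1, a2 * b2)
    return (min(p), max(p))
```

Each case returns the same result as the naive min/max of the four endpoint products; the case split is what a hardware selector can exploit.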
Zhou, Douglas; Zhang, Yaoyu; Xiao, Yanyang; Cai, David
2014-01-01
Granger causality (GC) is a powerful method for causal inference for time series. In general, the GC value is computed using discrete time series sampled from continuous-time processes with a certain sampling interval length τ, i.e., the GC value is a function of τ. Using the GC analysis for the topology extraction of the simplest integrate-and-fire neuronal network of two neurons, we discuss behaviors of the GC value as a function of τ, which exhibits (i) oscillations, often vanishing at certain finite sampling interval lengths, (ii) the GC vanishes linearly as one uses finer and finer sampling. We show that these sampling effects can occur in both linear and non-linear dynamics: the GC value may vanish in the presence of true causal influence or become non-zero in the absence of causal influence. Without properly taking this issue into account, GC analysis may produce unreliable conclusions about causal influence when applied to empirical data. These sampling artifacts on the GC value greatly complicate the reliability of causal inference using the GC analysis, in general, and the validity of topology reconstruction for networks, in particular. We use idealized linear models to illustrate possible mechanisms underlying these phenomena and to gain insight into the general spectral structures that give rise to these sampling effects. Finally, we present an approach to circumvent these sampling artifacts to obtain reliable GC values.
Directory of Open Access Journals (Sweden)
Douglas eZhou
2014-07-01
Full Text Available Granger causality (GC) is a powerful method for causal inference for time series. In general, the GC value is computed using discrete time series sampled from continuous-time processes with a certain sampling interval length $\tau$, i.e., the GC value is a function of $\tau$. Using the GC analysis for the topology extraction of the simplest integrate-and-fire neuronal network of two neurons, we discuss behaviors of the GC value as a function of $\tau$, which exhibits (i) oscillations, often vanishing at certain finite sampling interval lengths, and (ii) linear vanishing of the GC as one uses finer and finer sampling. We show that these sampling effects can occur in both linear and nonlinear dynamics: the GC value may vanish in the presence of true causal influence or become nonzero in the absence of causal influence. Without properly taking this issue into account, GC analysis may produce unreliable conclusions about causal influence when applied to empirical data. These sampling artifacts on the GC value greatly complicate the reliability of causal inference using the GC analysis, in general, and the validity of topology reconstruction for networks, in particular. We use idealized linear models to illustrate possible mechanisms underlying these phenomena and to gain insight into the general spectral structures that give rise to these sampling effects. Finally, we present an approach to circumvent these sampling artifacts to obtain reliable GC values.
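The GC value discussed in both records above is the log ratio of residual variances between a restricted autoregressive model (the target's own past) and a full model (both series' pasts). A minimal one-lag least-squares sketch; the synthetic coupling x → y and all parameters are our own illustrative choices, not the paper's estimator or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def granger_xy(x, y):
    """One-lag Granger causality from x to y: log(var_restricted / var_full)."""
    yp, y0, x0 = y[1:], y[:-1], x[:-1]
    ones = np.ones_like(y0)
    # Restricted model: y_t ~ y_{t-1} + const
    R = np.c_[y0, ones]
    vr = np.var(yp - R @ np.linalg.lstsq(R, yp, rcond=None)[0])
    # Full model: y_t ~ y_{t-1} + x_{t-1} + const
    F = np.c_[y0, x0, ones]
    vf = np.var(yp - F @ np.linalg.lstsq(F, yp, rcond=None)[0])
    return np.log(vr / vf)

# Synthetic data: x drives y with one step of delay; y does not drive x.
n = 5000
x = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

Here `granger_xy(x, y)` is large while `granger_xy(y, x)` is near zero, recovering the true directed coupling; the sampling-interval dependence studied in the paper arises when such series are subsampled before this computation.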
Capacitated max-Batching with Interval Graph Compatibilities
Nonner, Tim
We consider the problem of partitioning interval graphs into cliques of bounded size. Each interval has a weight, and the weight of a clique is the maximum weight of any interval in the clique. This natural graph problem can be interpreted as a batch scheduling problem. Solving a long-standing open problem, we show NP-hardness, even if the bound on the clique sizes is constant. Moreover, we give a PTAS based on a novel dynamic programming technique for this case.
Increasing the Confidence in Student's $t$ Interval
Goutis, Constantinos; Casella, George
1992-01-01
The usual confidence interval, based on Student's $t$ distribution, has conditional confidence that is larger than the nominal confidence level. Although this fact is known, along with the fact that increased conditional confidence can be used to improve a confidence assertion, the confidence assertion of Student's $t$ interval has never been critically examined. We do so here, and construct a confidence estimator that allows uniformly higher confidence in the interval and is closer (than $1 ...
Complex human activities recognition using interval temporal syntactic model
Institute of Scientific and Technical Information of China (English)
夏利民; 韩芬; 王军
2016-01-01
A novel method based on an interval temporal syntactic model was proposed to recognize human activities in video streams. The method is composed of two parts: feature extraction and activity recognition. A trajectory shape descriptor, speeded-up robust features (SURF) and histograms of optical flow (HOF) were proposed to represent human activities, providing more exhaustive information about their shape, structure and motion. In the recognition process, a probabilistic latent semantic analysis (PLSA) model was first used to recognize sample activities. Then an interval temporal syntactic model, which combines the syntactic model with interval algebra to model the temporal dependencies of activities explicitly, was introduced to recognize complex activities with temporal relationships. Experimental results show the effectiveness of the proposed method in comparison with other state-of-the-art methods on public databases for the recognition of complex activities.
INTERVALS OF ACTIVE PLAY AND BREAK IN BASKETBALL GAMES
Directory of Open Access Journals (Sweden)
Pavle Rubin
2010-09-01
Full Text Available The problem of the research comes from the need for decomposition of a basketball game. The aim was to determine the intervals of active play (“live ball”, a term defined by the rules) and of breaks (“dead ball”, also defined by the rules) by analyzing basketball games. In order to obtain the relevant information, basketball games from five different competitions (top level of quality) were analyzed. The sample consists of seven games played in the 2006/2007 season: the NCAA Play-Off final game, the Adriatic League finals, the ULEB Cup final game, the Euroleague (2 games) and the NBA league (2 games). The most important finding of this research is that the average interval of active play lasts approximately 47 seconds, while the average break interval lasts approximately 57 seconds. This information is significant for coaches and should be used in planning the training process.
Confidence intervals for the MMPI-2.
Munley, P H
1991-08-01
The confidence intervals for the Minnesota Multiphasic Personality Inventory (MMPI-2) clinical scales were investigated. Based on the clinical scale reliabilities published in the MMPI-2 manual, estimated true scores, standard errors of measurement for estimated true scores, and 95% confidence intervals centered around estimated true scores were calculated at 5-point MMPI-2 T-score intervals. The relationships between obtained T-scores, estimated true T-scores, scale reliabilities, and confidence intervals are discussed. The possible role of error measurement in defining scale high point and code types is noted.
A note on the path interval distance.
Coons, Jane Ivy; Rusinko, Joseph
2016-06-01
The path interval distance accounts for global congruence between locally incongruent trees. We show that the path interval distance provides a lower bound for the nearest neighbor interchange distance. In contrast to the Robinson-Foulds distance, random pairs of trees are unlikely to be maximally distant from one another under the path interval distance. These features indicate that the path interval distance should play a role in phylogenomics where the comparison of trees on a fixed set of taxa is becoming increasingly important.
Spectral Statistics of RR Intervals in ECG
Martinis, M; Knezevic, A; Crnugelj, J
2003-01-01
The statistical properties (fluctuations) of heartbeat intervals (RR intervals) in ECG are studied and compared with the predictions of Random Matrix Theory (RMT). It is found that heartbeat intervals only locally exhibit the fluctuation patterns (universality) predicted by the RMT. This finding shows that heartbeat dynamics is of the mixed type where regular and irregular (chaotic) regimes coexist and the Berry-Robnik theory can be applied. It is also observed that the distribution of heartbeat intervals is well described by the one-parameter Brody distribution. The parameter $\beta$ of the Brody distribution is seen to be connected with the dynamical state of the heart.
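The one-parameter Brody distribution referred to above interpolates between Poisson spacing statistics at β = 0 and Wigner-like (GOE) statistics at β = 1. A sketch of its density, normalised to unit mean spacing (a standard form; we state it here as background, not as the paper's notation):

```python
import numpy as np
from math import gamma

def brody_pdf(s, beta):
    """Brody spacing density P(s) = c*(beta+1)*s**beta * exp(-c*s**(beta+1)),
    with c = Gamma((beta+2)/(beta+1))**(beta+1) so the mean spacing is 1.
    beta=0 recovers the exponential (Poisson) law, beta=1 the Wigner surmise."""
    c = gamma((beta + 2) / (beta + 1)) ** (beta + 1)
    return c * (beta + 1) * s ** beta * np.exp(-c * s ** (beta + 1))
```

Fitting β to the empirical RR-interval spacing histogram is then a one-parameter estimation problem.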
Infinite to finite: An overview of finite element analysis
Directory of Open Access Journals (Sweden)
Srirekha A
2010-01-01
Full Text Available The finite element method was developed at an opportune time: growing computer capacity, growing human skill, and industry demand for ever faster and more cost-effective product development opened broad possibilities for the research community. This paper reviews the basic concept, current status, advances, advantages, limitations and applications of the finite element method (FEM) in restorative dentistry and endodontics. The finite element method is able to reveal the otherwise inaccessible stress distribution within the tooth-restoration complex, and it has proven to be a useful tool in the thinking process for the understanding of tooth biomechanics and the biomimetic approach in restorative dentistry. Further improvement of the non-linear FEM solutions should be encouraged to widen the range of applications in dental and oral health science.
Directory of Open Access Journals (Sweden)
Dominic Beaulieu-Prévost
2006-03-01
Full Text Available For the last 50 years of research in the quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on rejecting or failing to reject the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is neither probable nor plausible. Consequently, alternatives to NHT, such as confidence intervals (CI) and measures of effect size, are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to one based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to an NHT approach and can often improve the scientific and contextual relevance of statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power with an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic indeterminacy). The demonstration includes a complete example.
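The three-value logic described can be sketched as follows; the large-sample normal CI and the user-chosen minimal substantial effect are illustrative assumptions, not details taken from the article:

```python
import math

def mean_ci(xs, z=1.96):
    """Large-sample 95% CI for a mean effect (normal approximation)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    half = z * math.sqrt(var / n)
    return (m - half, m + half)

def three_value_decision(ci, minimal_effect):
    """Compare a CI for an effect against the smallest substantial effect size:
    present if the CI lies wholly beyond it, absent if wholly inside the
    negligible band, undetermined otherwise."""
    lo, hi = ci
    if lo > minimal_effect or hi < -minimal_effect:
        return "substantial effect probably present"
    if -minimal_effect < lo and hi < minimal_effect:
        return "substantial effect probably absent"
    return "undetermined"
```

The third outcome makes explicit what a non-significant NHT result conflates: the data may support a negligible effect, or may simply be too imprecise to tell.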
A Finite Speed Curzon-Ahlborn Engine
Agrawal, D. C.
2009-01-01
Curzon and Ahlborn achieved finite power output by introducing the concept of finite rate of heat transfer in a Carnot engine. The finite power can also be achieved through a finite speed of the piston on the four branches of the Carnot cycle. The present paper combines these two approaches to study the behaviour of output power in terms of…
Geometrical Underpinning of Finite Dimensional Hilbert space
Revzen, M
2011-01-01
Finite geometry is employed to underpin operators in finite, d-dimensional Hilbert space. The central role of Hilbert-space operators that form projectors onto mutually unbiased bases (MUB) states is exhibited. The interrelations among them, revealed through their (finite) dual affine plane geometry (DAPG) underpinning, are studied. Transcription to (finite) affine plane geometry (APG) is given and utilized for their interpretation.
Geometrical Underpinning of Finite Dimensional Hilbert space
Revzen, M.
2011-01-01
Finite geometry is employed to underpin operators in finite, d-dimensional Hilbert space. The central role of mutually unbiased bases (MUB) states projectors is exhibited. The interrelations among operators in Hilbert space, revealed through their (finite) dual affine plane geometry (DAPG) underpinning, are studied. Transcription to (finite) affine plane geometry (APG) is given and utilized for their interpretation.
QT-Interval Duration and Mortality Rate
Zhang, Yiyi; Post, Wendy S.; Dalal, Darshan; Blasco-Colmenares, Elena; Tomaselli, Gordon F.; Guallar, Eliseo
2012-01-01
Background Extreme prolongation or reduction of the QT interval predisposes patients to malignant ventricular arrhythmias and sudden cardiac death, but the association of variations in the QT interval within a reference range with mortality end points in the general population is unclear. Methods We included 7828 men and women from the Third National Health and Nutrition Examination Survey. Baseline QT interval was measured via standard 12-lead electrocardiographic readings. Mortality end points were assessed through December 31, 2006 (2291 deaths). Results After an average follow-up of 13.7 years, the association between QT interval and mortality end points was U-shaped. The multivariate-adjusted hazard ratios comparing participants at or above the 95th percentile of age-, sex-, race-, and R-R interval–corrected QT interval (≥439 milliseconds) with participants in the middle quintile (401 to <410 milliseconds) were 2.03 (95% confidence interval, 1.46-2.81) for total mortality, 2.55 (1.59-4.09) for mortality due to cardiovascular disease (CVD), 1.63 (0.96-2.75) for mortality due to coronary heart disease, and 1.65 (1.16-2.35) for non-CVD mortality. The corresponding hazard ratios comparing participants with a corrected QT interval below the fifth percentile (<377 milliseconds) with those in the middle quintile were 1.39 (95% confidence interval, 1.02-1.88) for total mortality, 1.35 (0.77-2.36) for CVD mortality, 1.02 (0.44-2.38) for coronary heart disease mortality, and 1.42 (0.97-2.08) for non-CVD mortality. Increased mortality also was observed with less extreme deviations of QT-interval duration. Similar, albeit weaker, associations also were observed with Bazett-corrected QT intervals. Conclusion Shortened and prolonged QT-interval durations, even within a reference range, are associated with increased mortality risk in the general population. PMID:22025428
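The Bazett correction reported as a secondary analysis is a simple formula (the study's primary correction was a regression on age, sex, race, and R-R interval, which is not reproduced here); a sketch, reusing the study's percentile cutoffs purely as illustrative thresholds:

```python
import math

def bazett_qtc(qt_ms, rr_s):
    """Bazett heart-rate corrected QT interval: QT in milliseconds divided by
    the square root of the R-R interval in seconds."""
    return qt_ms / math.sqrt(rr_s)

def qtc_category(qtc_ms, low=377.0, high=439.0):
    """Flag QTc values outside the approximate 5th-95th percentile band
    reported in the abstract (cutoffs used here for illustration only)."""
    if qtc_ms < low:
        return "shortened (<5th percentile)"
    if qtc_ms >= high:
        return "prolonged (>=95th percentile)"
    return "within band"
```

For example, a raw QT of 400 ms at an R-R interval of 0.64 s (heart rate about 94 bpm) corrects upward to 500 ms under Bazett, illustrating why rate correction matters before applying any cutoff.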
Normality and Finite-state Dimension of Liouville numbers
Nandakumar, Satyadev
2012-01-01
Liouville numbers were the first class of real numbers which were proven to be transcendental. It is easy to construct non-normal Liouville numbers. Kano and Bugeaud have proved, using analytic techniques, that there are normal Liouville numbers. Here, for a given base k >= 2, we give two simple constructions of a Liouville number which is normal to the base k. The first construction is combinatorial, and is based on de Bruijn sequences. A real number in the unit interval is normal if and only if its finite-state dimension is 1. We generalize our construction to prove that for any rational r in the closed unit interval, there is a Liouville number with finite state dimension r. This refines Staiger's result that the set of Liouville numbers has constructive Hausdorff dimension zero, showing a new quantitative classification of Liouville numbers can be attained using finite-state dimension. In the second number-theoretic construction, we use an arithmetic property of numbers - the existence of primitive roots ...
A new wavelet-based thin plate element using B-spline wavelet on the interval
Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang
2008-01-01
By combining wavelet theory with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the generalized potential-energy functional for thin-plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis, and thus to construct the BSWI plate element via the variational principle. The BSWI plate element combines the approximation accuracy of B-spline functions with the versatility of wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.
Combinatorial Properties of Finite Models
Hubicka, Jan
2010-09-01
We study countable embedding-universal and homomorphism-universal structures and unify results related to both of these notions. We show that many universal and ultrahomogeneous structures allow a concise description (called here a finite presentation). Extending classical work of Rado (for the random graph), we find a finite presentation for each of the following classes: homogeneous undirected graphs, homogeneous tournaments and homogeneous partially ordered sets. We also give a finite presentation of the rational Urysohn metric space and some homogeneous directed graphs. We survey well known structures that are finitely presented. We focus on structures endowed with natural partial orders and prove their universality. These partial orders include partial orders on sets of words, partial orders formed by geometric objects, grammars, polynomials and homomorphism orders for various combinatorial objects. We give a new combinatorial proof of the existence of embedding-universal objects for homomorphism-defined classes of structures. This relates countable embedding-universal structures to homomorphism dualities (finite homomorphism-universal structures) and Urysohn metric spaces. Our explicit construction also allows us to show several properties of these structures.
Nonlinear Finite Strain Consolidation Analysis with Secondary Consolidation Behavior
Directory of Open Access Journals (Sweden)
Jieqing Huang
2014-01-01
Full Text Available This paper aims to analyze nonlinear finite strain consolidation with secondary consolidation behavior. On the basis of some assumptions about the secondary consolidation behavior, the continuity equation of pore water in Gibson's consolidation theory is modified. Taking the nonlinear compressibility and nonlinear permeability of soils into consideration, the governing equation for finite strain consolidation analysis is derived. Based on the experimental data of Hangzhou soft clay samples, the new governing equation is solved with the finite element method. Afterwards, the calculation results of this new method and of two other methods are compared. It is found that Gibson's method may underestimate the excess pore water pressure during primary consolidation. The new method, which takes the secondary consolidation behavior, the nonlinear compressibility, and the nonlinear permeability of soils into consideration, can precisely estimate the settlement rate and the final settlement of the Hangzhou soft clay sample.
Clique-width of unit interval graphs
Lozin, Vadim V.
2007-01-01
The clique-width is known to be unbounded in the class of unit interval graphs. In this paper, we show that this is a minimal hereditary class of unbounded clique-width, i.e., in every hereditary subclass of unit interval graphs the clique-width is bounded by a constant.
Breastfeeding, birth intervals and child survival:
African Journals Online (AJOL)
short birth intervals are associated with increased mortality rates in the ages 1-12 months, and to ... and early childhood mortality in Ethiopia is ... factors linking birth intervals and child survival ... and women in their reproductive ages. ... and 2,550 women of reproductive age. ..... to Ecological Degradation and Food Insecurity:.
Bayesian credible interval construction for Poisson statistics
Institute of Scientific and Technical Information of China (English)
ZHU Yong-Sheng
2008-01-01
The construction of the Bayesian credible (confidence) interval for a Poisson observable including both signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background not be larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
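Not the BPOCI routine itself, but the basic flat-prior construction it implements can be sketched numerically; the grid extent, step count, and credibility level below are illustrative choices:

```python
import math

def poisson_posterior_ci(n, b, cl=0.90, s_max=50.0, steps=20000):
    """Central credible interval for a Poisson signal s with known background b
    and a flat prior on s >= 0; the posterior is p(s|n) proportional to
    (s + b)^n * exp(-(s + b)). Quantiles are read off a numeric grid."""
    ds = s_max / steps
    grid = [i * ds for i in range(steps + 1)]
    weights = [(s + b) ** n * math.exp(-(s + b)) for s in grid]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    lo_q, hi_q = (1.0 - cl) / 2.0, 1.0 - (1.0 - cl) / 2.0
    lo = next(s for s, c in zip(grid, cdf) if c >= lo_q)
    hi = next(s for s, c in zip(grid, cdf) if c >= hi_q)
    return (lo, hi)
```

For n = 0 observed events and zero background the posterior reduces to an exponential, so the 90% central interval should reproduce the exponential quantiles, a convenient sanity check on the grid.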
Nonparametric confidence intervals for monotone functions
Groeneboom, P.; Jongbloed, G.
2015-01-01
We study nonparametric isotonic confidence intervals for monotone functions. In [Ann. Statist. 29 (2001) 1699–1731], pointwise confidence intervals, based on likelihood ratio tests using the restricted and unrestricted MLE in the current status model, are introduced. We extend the method to the trea
Optimal Approximation of Quadratic Interval Functions
Koshelev, Misha; Taillibert, Patrick
1997-01-01
Measurements are never absolutely accurate; as a result, after each measurement we do not get the exact value of the measured quantity. At best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t(sub 1), t(sub 2), ... If we know that the value x(t(sub j)) at the moment t(sub j) of the last measurement was in the interval [x-(t(sub j)), x+(t(sub j))], and if we know the upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x-(t(sub j)) - D (t - t(sub j)), x+(t(sub j)) + D (t - t(sub j))]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get an expression that is quadratic and of higher order with respect to time t. Such "quadratic" intervals are difficult to process, and it is therefore necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
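The linear interval function defined above is direct to implement:

```python
def linear_interval(x_lo, x_hi, t_j, D, t):
    """Enclosure [x-(t), x+(t)] for the quantity at time t >= t_j, given the
    measured interval [x_lo, x_hi] at the last measurement time t_j and the
    bound D on the rate of change (the linear interval function of the text)."""
    dt = t - t_j
    return (x_lo - D * dt, x_hi + D * dt)
```

For instance, a measured interval [1, 2] at t_j = 0 with rate bound D = 0.5 widens to [0, 3] by t = 2; propagating such enclosures through nonlinear computations is what produces the quadratic intervals that the paper's algorithm approximates back by linear ones.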
Finiteness conditions for unions of semigroups
Abu-Ghazalh, Nabilah Hani
2013-01-01
In this thesis we prove the following: The semigroup which is a disjoint union of two or three copies of a group is a Clifford semigroup, Rees matrix semigroup or a combination between a Rees matrix semigroup and a group. Furthermore, the semigroup which is a disjoint union of finitely many copies of a finitely presented (residually finite) group is finitely presented (residually finite) semigroup. The constructions of the semigroup which is a disjoint union of two copies of the f...
Superrosy dependent groups having finitely satisfiable generics
Ealy, Clifton; Pillay, Anand
2007-01-01
We study a model theoretic context (finite thorn rank, NIP, with finitely satisfiable generics) which is a common generalization of groups of finite Morley rank and definably compact groups in o-minimal structures. We show that assuming thorn rank 1, the group is abelian-by-finite, and assuming thorn rank 2 the group is solvable by finite. Also a field is algebraically closed.
Radon Transform in Finite Dimensional Hilbert Space
Revzen, M.
2012-01-01
A novel analysis of finite dimensional Hilbert space is outlined. The approach bypasses general, inherent difficulties present in handling angular variables in finite dimensional problems: the finite dimensional, d, Hilbert space operators are underpinned with finite geometry, which provides an intuitive perspective on the physical operators. The analysis emphasizes a central role for projectors of mutually unbiased bases (MUB) states, extending thereby their use in finite dimensional quantum mechani...
Diagnostic interval and mortality in colorectal cancer
DEFF Research Database (Denmark)
Tørring, Marie Louise; Frydenberg, Morten; Hamilton, William;
2012-01-01
Objective To test the theory of a U-shaped association between time from the first presentation of symptoms in primary care to the diagnosis (the diagnostic interval) and mortality after diagnosis of colorectal cancer (CRC). Study Design and Setting Three population-based studies in Denmark...... presentation, the association between the length of the diagnostic interval and 5-year mortality rate after the diagnosis of CRC was the same for all three types of data: displaying a U-shaped association with decreasing and subsequently increasing mortality with longer diagnostic intervals. Conclusion Unknown...... confounding and in particular confounding by indication is likely to explain the counterintuitive findings of higher mortality among patients with very short diagnostic intervals, but cannot explain the increasing mortality with longer diagnostic intervals. The results support the theory that longer...
Generation interval contraction and epidemic data analysis
Kenah, Eben; Robins, James M
2008-01-01
The generation interval is the time between the infection time of an infected person and the infection time of his or her infector. Probability density functions for generation intervals have been an important input for epidemic models and epidemic data analysis. In this paper, we specify a general stochastic SIR epidemic model and prove that the mean generation interval decreases when susceptible persons are at risk of infectious contact from multiple sources. The intuition behind this is that when a susceptible person has multiple potential infectors, there is a ``race'' to infect him or her in which only the first infectious contact leads to infection. In an epidemic, the mean generation interval contracts as the prevalence of infection increases. We call this global competition among potential infectors. When there is rapid transmission within clusters of contacts, generation interval contraction can be caused by a high local prevalence of infection even when the global prevalence is low. We call this loc...
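The "race" among potential infectors can be illustrated with a small Monte-Carlo sketch; exponential contact times with unit rate are an illustrative assumption for the demonstration, not the paper's general model:

```python
import random

def mean_winning_contact_time(k, rate=1.0, trials=20000, seed=7):
    """Monte-Carlo mean time of the first infectious contact when a susceptible
    person has k independent potential infectors, each with an exponential
    contact time. Only the earliest contact causes infection, so with more
    competitors the winning time, and hence the realised generation interval,
    contracts toward 1 / (k * rate)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.expovariate(rate) for _ in range(k))
    return total / trials
```

With one potential infector the mean is 1/rate; with four it drops to roughly a quarter of that, mirroring how rising prevalence (more simultaneous potential infectors) shortens observed generation intervals.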
Sound radiation from finite surfaces
DEFF Research Database (Denmark)
Brunskog, Jonas
2013-01-01
A method to account for the effect of finite size in the acoustic power radiation problem of planar surfaces using spatial windowing is developed. Cremer and Heckl present a very useful formula for the power radiated from a structure using the spatially Fourier transformed velocity, which combined...... with spatial windowing of plane waves can be used to take the finite size into account. In the present paper, this is developed by means of a radiation impedance for finite surfaces, which is used instead of the radiation impedance for infinite surfaces. In this way, the spatial windowing is included...... in the radiation formula directly, and no pre-windowing is needed. Examples are given for the radiation efficiency, and the results are compared with results found in the literature....
Second order tensor finite element
Oden, J. Tinsley; Fly, J.; Berry, C.; Tworzydlo, W.; Vadaketh, S.; Bass, J.
1990-01-01
The results of a research and software development effort are presented for the finite element modeling of the static and dynamic behavior of anisotropic materials, with emphasis on single crystal alloys. Various versions of two dimensional and three dimensional hybrid finite elements were implemented and compared with displacement-based elements. Both static and dynamic cases are considered. The hybrid elements developed in the project were incorporated into the SPAR finite element code. In an extension of the first phase of the project, optimization of experimental tests for anisotropic materials was addressed. In particular, the problem of calculating material properties from tensile tests and of calculating stresses from strain measurements were considered. For both cases, numerical procedures and software for the optimization of strain gauge and material axes orientation were developed.
Finite element methods for engineers
Fenner, Roger T
2013-01-01
This book is intended as a textbook providing a deliberately simple introduction to finite element methods in a way that should be readily understandable to engineers, both students and practising professionals. Only the very simplest elements are considered, mainly two dimensional three-noded “constant strain triangles”, with simple linear variation of the relevant variables. Chapters of the book deal with structural problems (beams), classification of a broad range of engineering into harmonic and biharmonic types, finite element analysis of harmonic problems, and finite element analysis of biharmonic problems (plane stress and plane strain). Full Fortran programs are listed and explained in detail, and a range of practical problems solved in the text. Despite being somewhat unfashionable for general programming purposes, the Fortran language remains very widely used in engineering. The programs listed, which were originally developed for use on mainframe computers, have been thoroughly updated for use ...
Finite and profinite quantum systems
Vourdas, Apostolos
2017-01-01
This monograph provides an introduction to finite quantum systems, a field at the interface between quantum information and number theory, with applications in quantum computation and condensed matter physics. The first major part of this monograph studies the so-called `qubits' and `qudits', systems with periodic finite lattice as position space. It also discusses the so-called mutually unbiased bases, which have applications in quantum information and quantum cryptography. Quantum logic and its applications to quantum gates is also studied. The second part studies finite quantum systems, where the position takes values in a Galois field. This combines quantum mechanics with Galois theory. The third part extends the discussion to quantum systems with variables in profinite groups, considering the limit where the dimension of the system becomes very large. It uses the concepts of inverse and direct limit and studies quantum mechanics on p-adic numbers. Applications of the formalism include quantum optics and ...
Numerical computation of transonic flows by finite-element and finite-difference methods
Hafez, M. M.; Wellford, L. C.; Merkle, C. L.; Murman, E. M.
1978-01-01
Studies on applications of the finite element approach to transonic flow calculations are reported. Different discretization techniques of the differential equations and boundary conditions are compared. Finite element analogs of Murman's mixed type finite difference operators for small disturbance formulations were constructed and the time dependent approach (using finite differences in time and finite elements in space) was examined.
Ottink, Marco; Brunskog, Jonas; Jeong, Cheol-Ho; Fernandez-Grande, Efren; Trojgaard, Per; Tiana-Roig, Elisabet
2016-01-01
Absorption coefficients are mostly measured in reverberation rooms or with impedance tubes. Since these methods are only suitable for measuring the random incidence and the normal incidence absorption coefficient, there exists an increasing need for absorption coefficient measurement of finite absorbers at oblique incidence in situ. Due to the edge diffraction effect, oblique incidence methods considering an infinite sample fail to measure the absorption coefficient at large incidence angles of finite samples. This paper aims for the development of a measurement method that accounts for the finiteness of the absorber. A sound field model, which accounts for scattering from the finite absorber edges, assuming plane wave incidence is derived. A significant influence of the finiteness on the radiation impedance and the corresponding absorption coefficient is found. A finite surface method, which combines microphone array measurements over a finite sample with the sound field model in an inverse manner, is proposed. Besides, a temporal subtraction method, a microphone array method, impedance tube measurements, and an equivalent fluid model are used for validation. The finite surface method gives promising agreement with theory, especially at near grazing incidence. Thus, the finite surface method is proposed for further measurements at large incidence angles.
Character theory of finite groups
Isaacs, I Martin
2006-01-01
Character theory is a powerful tool for understanding finite groups. In particular, the theory has been a key ingredient in the classification of finite simple groups. Characters are also of interest in their own right, and their properties are closely related to properties of the structure of the underlying group. The book begins by developing the module theory of complex group algebras. After the module-theoretic foundations are laid in the first chapter, the focus is primarily on characters. This enhances the accessibility of the material for students, which was a major consideration in the
Finite elements of nonlinear continua
Oden, J T
2000-01-01
Geared toward undergraduate and graduate students, this text extends applications of the finite element method from linear problems in elastic structures to a broad class of practical, nonlinear problems in continuum mechanics. It treats both theory and applications from a general and unifying point of view.The text reviews the thermomechanical principles of continuous media and the properties of the finite element method, and then brings them together to produce discrete physical models of nonlinear continua. The mathematical properties of these models are analyzed, along with the numerical s
Existentially closed locally finite groups
Shelah, Saharon
2011-01-01
We investigate this class of groups, originally called ulf (universal locally finite groups), of cardinality lambda. We prove that for every locally finite group G there is a canonical existentially closed extension of the same cardinality, unique up to isomorphism and increasing with G. We also obtain, e.g., the existence of complete members (i.e., with no non-inner automorphisms) in many cardinals (provably in ZFC). We also get a parallel to stability theory in the sense of investigating definable types.
FINITE ELEMENT ANALYSIS OF STRUCTURES
Directory of Open Access Journals (Sweden)
PECINGINA OLIMPIA-MIOARA
2015-05-01
Full Text Available The finite element method is applied when analytical solutions cannot be used for deeper static, dynamic, or other analyses at different points of a structure. In practice it is necessary to know the behaviour of the structure, or of certain component parts of a machine, under the influence of static and dynamic factors. Applying the finite element method to the optimization of components leads to economic gains and increases the reliability and durability of the parts studied, and thus of the machine itself.
Marmi, Stefano; Yoccoz, Jean-Christophe
2016-05-01
We prove that the solutions of the cohomological equation for Roth-type interval exchange maps are Hölder continuous provided that the datum is of class $C^r$ with $r > 1$ and belongs to a finite-codimension linear subspace.
Note---New Confidence Interval Estimators Using Standardized Time Series
David Goldsman; Lee Schruben
1990-01-01
We develop new asymptotically valid confidence interval estimators (CIE's) for the underlying mean of a stationary simulation process. The new estimators are weighted generalizations of Schruben's standardized time series area CIE. We show that the weighted CIE's have the same asymptotic expected length and variance of the length as the area CIE; but in the small sample environment, the new CIE's exhibit performance characteristics which are different from those of the area CIE.
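For orientation, a plain batch-means CIE, the standard baseline against which standardized-time-series estimators such as the area CIE are compared, can be sketched as follows; this is not the weighted area CIE of the note itself, and the batch count and t quantile are illustrative:

```python
import math

def batch_means_ci(xs, n_batches=10, t_mult=2.262):
    """Batch-means confidence interval for the mean of a stationary simulation
    output series: split the run into n_batches contiguous batches, treat the
    batch means as approximately i.i.d. normal, and form a t interval.
    t_mult = 2.262 is the 97.5% t quantile for n_batches - 1 = 9 df."""
    m = len(xs) // n_batches
    means = [sum(xs[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((b - grand) ** 2 for b in means) / (n_batches - 1)
    half = t_mult * math.sqrt(var / n_batches)
    return (grand - half, grand + half)
```

Like the area CIE, this "cancels" the unknown variance constant by comparing the point estimate against an internally derived spread; the note's contribution is a family of weighted area estimators with better small-sample behaviour than either of these baselines.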
Circadian modulation of interval timing in mice.
Agostino, Patricia V; do Nascimento, Micaela; Bussi, Ivana L; Eguía, Manuel C; Golombek, Diego A
2011-01-25
Temporal perception is fundamental to environmental adaptation in humans and other animals. To deal with timing and time perception, organisms have developed multiple systems that are active over a broad range of order of magnitude, the most important being circadian timing, interval timing and millisecond timing. The circadian pacemaker is located in the suprachiasmatic nuclei (SCN) of the hypothalamus, and is driven by a self-sustaining oscillator with a period close to 24h. Time estimation in the second-to-minutes range--known as interval timing--involves the interaction of the basal ganglia and the prefrontal cortex. In this work we tested the hypothesis that interval timing in mice is sensitive to circadian modulations. Animals were trained following the peak-interval (PI) procedure. Results show significant differences in the estimation of 24-second intervals at different times of day, with a higher accuracy in the group trained at night, which were maintained under constant dark (DD) conditions. Interval timing was also studied in animals under constant light (LL) conditions, which abolish circadian rhythmicity. Mice under LL conditions were unable to acquire temporal control in the peak interval procedure. Moreover, short time estimation in animals subjected to circadian desynchronizations (modeling jet lag-like situations) was also affected. Taken together, our results indicate that short-time estimation is modulated by the circadian clock. Copyright © 2010 Elsevier B.V. All rights reserved.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Advanced Interval Management: A Benefit Analysis
Timer, Sebastian; Peters, Mark
2016-01-01
This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.
Dynamics of Non-Classical Interval Exchanges
Gadre, Vaibhav S
2009-01-01
Train tracks with a single vertex are a generalization of interval exchange maps. Here, we consider non-classical interval exchanges: complete train tracks with a single vertex. These can be studied as a dynamical system by considering Rauzy induction in this context. This gives a refinement process on the parameter space similar to Kerckhoff's simplicial systems. We show that the refinement process gives an expansion that has a key dynamical property called uniform distortion. We use uniform distortion to prove normality of the expansion. Consequently we prove an analog of Keane's conjecture: almost every non-classical interval exchange is uniquely ergodic.
Linear chord diagrams on two intervals
DEFF Research Database (Denmark)
Andersen, Jørgen Ellegaard; Penner, Robert; Reidys, Christian
Consider all possible ways of attaching disjoint chords to two ordered and oriented disjoint intervals so as to produce a connected graph. Taking the intervals to lie in the real axis with the induced orientation and the chords to lie in the upper half plane canonically determines a corresponding...... generating function ${\bf C}_g(z)=z^{2g}R_g(z)/(1-4z)^{3g-\frac{1}{2}}$ for chords attached to a single interval is algebraic, for $g\geq 1$, where the polynomial $R_g(z)$ with degree at most $g-1$ has integer coefficients and satisfies $R_g(1/4)$...
Essays on Finite Mixture Models
A. van Dijk (Bram)
2009-01-01
Finite mixture distributions are a weighted average of a finite number of distributions. The latter are usually called the mixture components. The weights are usually described by a multinomial distribution and are sometimes called mixing proportions. The mixture components may be the...
Finite-dimensional (*)-serial algebras
Institute of Scientific and Technical Information of China (English)
[No author listed]
2010-01-01
Let A be a finite-dimensional associative algebra with identity over a field k. In this paper we introduce the concept of (*)-serial algebras, which is a generalization of serial algebras. We investigate the properties of (*)-serial algebras, and we obtain necessary and sufficient conditions for an associative algebra to be (*)-serial.
Symmetric relations of finite negativity
Kaltenbaeck, M.; Winkler, H.; Woracek, H.; Forster, KH; Jonas, P; Langer, H
2006-01-01
We construct and investigate a space which is related to a symmetric linear relation S of finite negativity on an almost Pontryagin space. This space is the indefinite generalization of the completion of dom S with respect to (S·,·) for a strictly positive S on a Hilbert space.
Finite length Taylor Couette flow
Streett, C. L.; Hussaini, M. Y.
1987-01-01
Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to the two-cell/one-cell exchange process, and are compared with recent experiments.
Critical Phenomena in Finite Systems
Bonasera, A; Chiba, S
2001-01-01
We discuss the dynamics of finite systems within molecular dynamics models. Signatures of a critical behavior are analyzed and compared to experimental data both in nucleus-nucleus and metallic cluster collisions. We suggest the possibility to explore the instability region via tunneling. In this way we can obtain fragments at very low temperatures and densities. We call these fragments quantum drops.
The ideal dimensions of a Halbach cylinder of finite length
Bjørk, R
2014-01-01
In this paper the smallest or optimal dimensions of a Halbach cylinder of a finite length for a given sample volume and desired flux density are determined using numerical modeling and parameter variation. A sample volume that is centered in and shaped as the Halbach cylinder bore but with a possible shorter length is considered. The external radius and the length of the Halbach cylinder with the smallest possible dimensions are found as a function of a desired internal radius, length of the sample volume and mean flux density. It is shown that the optimal ratio between the outer and inner radius of the Halbach cylinder does not depend on the length of the sample volume. Finally, the efficiency of a finite length Halbach cylinder is considered and compared with the case of a cylinder of infinite length. The most efficient dimensions for a Halbach cylinder are found and it is shown that the efficiency increases slowly with the length of the cylinder.
Efficient Low-Sensitivity Sampling of Multiband Signals with Bounded Components
Selva, J
2010-01-01
This paper presents an efficient method to sample multiband signals with bounded components, at a rate below the Nyquist limit, while keeping at the same time the numerical sensitivity at a low level. The method is based on band-limited windowing, followed by trigonometric approximation in consecutive time intervals. The key point is that the trigonometric approximation "inherits" the multiband property, that is, its coefficients are formed by bursts of non-zero elements corresponding to the multiband components. It is shown that this method can be well combined with the recently proposed synchronous multi-rate sampling (SMRS) scheme, given that the resulting linear system is sparse and formed by ones and zeroes. The proposed method allows one to trade sampling efficiency for noise sensitivity, and is especially well suited for bounded signals with unbounded energy like those in communications, navigation, audio systems, etc. Besides, it is also applicable to finite energy signals and periodic band-limited sig...
Population-Based Pediatric Reference Intervals in General Clinical Chemistry: A Swedish Survey.
Ridefelt, Peter
2015-01-01
Very few high quality studies on pediatric reference intervals for general clinical chemistry and hematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The Swedish survey included 701 healthy children. Reference intervals for general clinical chemistry and hematology were defined.
Combination of structural reliability and interval analysis
Institute of Scientific and Technical Information of China (English)
Zhiping Qiu; Di Yang; Isaac Elishakoff
2008-01-01
In engineering applications, probabilistic reliability theory appears to be presently the most important method; however, in many cases precise probabilistic reliability theory cannot be considered an adequate and credible model of the real state of actual affairs. In this paper, we developed a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are found insufficient. By using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced in this paper, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory, but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.
Application of Interval Analysis to Error Control.
1976-09-01
We give simple examples of ways in which interval arithmetic can be used to detect instabilities in computer algorithms, roundoff error accumulation, and even the effects of hardware inadequacies. This paper is primarily tutorial. (Author)
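As a rough illustration of the idea in this abstract (not code from the report), a naive interval type can flag cancellation: the relative width of a result interval balloons when nearly equal quantities are subtracted. Real interval libraries additionally round each bound outward, which this sketch omits.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Lower bound subtracts the other's upper bound, and vice versa.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # All four endpoint products bound the true range.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# A quantity known only to within +/-1e-6 of 1.0:
x = Interval(0.999999, 1.000001)
one = Interval(1.0, 1.0)

# Subtracting nearly equal values: the absolute width stays ~2e-6, but the
# relative width of the result is enormous -- catastrophic cancellation.
d = x - one
print(d.lo, d.hi, d.width())
```

Monitoring interval widths through a computation in this way is the tutorial's basic mechanism for alerting the user to roundoff accumulation.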
Classifying and ranking DMUs in interval DEA
Institute of Scientific and Technical Information of China (English)
GUO Jun-peng; WU Yu-hua; LI Wen-hua
2005-01-01
During efficiency evaluation by DEA, the inputs and outputs of DMUs may be intervals because of insufficient information or measurement error. For this reason, interval DEA is proposed. To make the efficiency scores more discriminative, this paper builds an Interval Modified DEA (IMDEA) model based on MDEA. Furthermore, models for obtaining upper and lower bounds of the efficiency scores for each DMU are set up. Based on this, the DMUs are classified into three types. Next, a new order relation between intervals which can express the DM's preference among the three types is proposed. As a result, a full and more convincing ranking is made on all the DMUs. Finally, an example is given.
Conditional prediction intervals of wind power generation
DEFF Research Database (Denmark)
Pinson, Pierre; Kariniotakis, Georges
2010-01-01
A generic method for the providing of prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform...... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm......, for which three point forecasting methods are considered as input. The probabilistic forecasts generated are evaluated based on their reliability and sharpness, while compared to forecasts based on quantile regression and the climatology benchmark. The operational application of adapted resampling...
Different radiation impedance models for finite porous materials
DEFF Research Database (Denmark)
Nolan, Melanie; Jeong, Cheol-Ho; Brunskog, Jonas;
2015-01-01
coupled to the transfer matrix method (TMM). These methods are found to yield comparable results when predicting the Sabine absorption coefficients of finite porous materials. Discrepancies with measurement results can essentially be explained by the unbalance between grazing and non-grazing sound field...... the infinite case. Thus, in order to predict the Sabine absorption coefficients of finite porous samples, one can incorporate models of the radiation impedance. In this study, different radiation impedance models are compared with two experimental examples. Thomasson’s model is compared to Rhazi’s method when...
Two-dimensional finite-element temperature variance analysis
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
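The variance analysis described above is, in essence, first-order (delta-method) uncertainty propagation: output variance is the gradient-weighted sum of input variances. A minimal sketch, with hypothetical element weights and boundary-temperature variances (not values from the paper):

```python
import numpy as np

def first_order_variance(grad, var_inputs):
    """Delta-method variance of an output given the gradient of the output
    with respect to each input and the input variances, assuming the
    inputs are independent."""
    grad = np.asarray(grad, dtype=float)
    var_inputs = np.asarray(var_inputs, dtype=float)
    return float(np.sum(grad**2 * var_inputs))

# Toy example: a nodal temperature interpolated as T = 0.7*T_left + 0.3*T_right
# (weights from a hypothetical linear element). The boundary temperatures are
# uncertain with variances 4.0 and 1.0 (deg^2).
var_T = first_order_variance([0.7, 0.3], [4.0, 1.0])
print(var_T)  # ~2.05: dominated by the boundary with the larger weight
```

This mirrors the paper's qualitative finding: the output variance is dominated by the boundary values that enter with the largest sensitivities.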
A finite Zitterbewegung model for relativistic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Noyes, H.P.
1990-02-19
Starting from steps of length h/mc and time intervals h/mc², which imply a quasi-local Zitterbewegung with velocity steps ±c, we employ discrimination between bit-strings of finite length to construct a necessary 3+1 dimensional event-space for relativistic quantum mechanics. By using the combinatorial hierarchy to label the strings, we provide a successful start on constructing the coupling constants and mass ratios implied by the scheme. Agreement with experiments is surprisingly accurate. 22 refs., 1 fig.
Combining one-sample confidence procedures for inference in the two-sample case.
Fay, Michael P; Proschan, Michael A; Brittain, Erica
2015-03-01
We present a simple general method for combining two one-sample confidence procedures to obtain inferences in the two-sample problem. Some applications give striking connections to established methods; for example, combining exact binomial confidence procedures gives new confidence intervals on the difference or ratio of proportions that match inferences using Fisher's exact test, and numeric studies show the associated confidence intervals bound the type I error rate. Combining exact one-sample Poisson confidence procedures recreates standard confidence intervals on the ratio, and introduces new ones for the difference. Combining confidence procedures associated with one-sample t-tests recreates the Behrens-Fisher intervals. Other applications provide new confidence intervals with fewer assumptions than previously needed. For example, the method creates new confidence intervals on the difference in medians that do not require shift and continuity assumptions. We create a new confidence interval for the difference between two survival distributions at a fixed time point when there is independent censoring by combining the recently developed beta product confidence procedure for each single sample. The resulting interval is designed to guarantee coverage regardless of sample size or censoring distribution, and produces equivalent inferences to Fisher's exact test when there is no censoring. We show theoretically that when combining intervals asymptotically equivalent to normal intervals, our method has asymptotically accurate coverage. Importantly, all situations studied suggest guaranteed nominal coverage for our new interval whenever the original confidence procedures themselves guarantee coverage.
Haematological Reference Intervals in a Multiethnic Population
Angeli Ambayya; Anselm Ting Su; Nadila Haryani Osman; Nik Rosnita Nik-Samsudin; Khadijah Khalid; Kian Meng Chang; Jameela Sathar; Jay Suriar Rajasuriar; Subramanian Yegappan
2014-01-01
INTRODUCTION: Similar to other populations, full blood count reference (FBC) intervals in Malaysia are generally derived from non-Malaysian subjects. However, numerous studies have shown significant differences between and within populations supporting the need for population specific intervals. METHODS: Two thousand seven hundred twenty five apparently healthy adults comprising all ages, both genders and three principal races were recruited through voluntary participation. FBC was performed ...
Conditional prediction intervals of wind power generation
Pinson, Pierre; Kariniotakis, Georges
2010-01-01
A generic method for the providing of prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform of the situation-specific uncertainty of point forecasts. In order to avoid a restrictive assumption on the shape of forecast error distributions, focus is given to an empirical and nonparametric app...
Optimal prediction intervals of wind power generation
Wan, Can; Wu, Zhao; Pinson, Pierre; Dong, Zhao Yang; Wong, Kit Po
2014-01-01
Accurate and reliable wind power forecasting is essential to power system operation. Given significant uncertainties involved in wind generation, probabilistic interval forecasting provides a unique solution to estimate and quantify the potential impacts and risks facing system operation with wind penetration beforehand. This paper proposes a novel hybrid intelligent algorithm approach to directly formulate optimal prediction intervals of wind power generation based on extreme learning machin...
Surveillance intervals for small abdominal aortic aneurysms
DEFF Research Database (Denmark)
Bown, Matthew J; Sweeting, Michael J; Brown, Louise C
2013-01-01
Small abdominal aortic aneurysms (AAAs [3.0 cm-5.4 cm in diameter]) are monitored by ultrasound surveillance. The intervals between surveillance scans should be chosen to detect an expanding aneurysm prior to rupture.
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
Recurrence interval analysis of trading volumes.
Ren, Fei; Zhou, Wei-Xing
2010-06-01
We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
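The recurrence intervals studied here are simply the gaps between successive threshold exceedances. A minimal sketch of that extraction step (toy data, not the paper's stock series):

```python
import numpy as np

def recurrence_intervals(series, q):
    """Return the intervals (in samples) between successive values of
    `series` that exceed the threshold `q`."""
    series = np.asarray(series, dtype=float)
    exceed = np.flatnonzero(series > q)  # indices of exceedances
    return np.diff(exceed)               # gaps between them

# Toy volume series; the threshold 4.0 is exceeded at indices 1, 4 and 6.
volumes = np.array([1.0, 5.0, 2.0, 1.5, 6.0, 1.2, 7.0])
print(recurrence_intervals(volumes, 4.0))  # [3 2]
```

The paper then fits the tail of the distribution of these gaps (power-law scaling) and tests memory effects in the resulting interval sequence.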
Finite Dimensional KP τ-functions I. Finite Grassmannians
Balogh, F; Harnad, J
2014-01-01
We study τ-functions of the KP hierarchy in terms of abelian group actions on finite dimensional Grassmannians, viewed as subquotients of the Hilbert space Grassmannians of Sato, Segal and Wilson. A determinantal formula of Gekhtman and Kasman involving exponentials of finite dimensional matrices is shown to follow naturally from such reductions. All reduced flows of exponential type generated by matrices with arbitrary nondegenerate Jordan forms are derived, both in the Grassmannian setting and within the fermionic operator formalism. A slightly more general determinantal formula involving resolvents of the matrices generating the flow, valid on the big cell of the Grassmannian, is also derived. An explicit expression is deduced for the Plücker coordinates appearing as coefficients in the Schur function expansion of the τ-function.
Robust stability test for 2-D continuous-discrete systems with interval parameters
Institute of Scientific and Technical Information of China (English)
肖扬
2004-01-01
It is revealed that the dynamic stability of 2-D recursive continuous-discrete systems with interval parameters involves the problem of robust Hurwitz-Schur stability of a bivariate polynomial family. It is proved that the Hurwitz-Schur stability of the denominator polynomials of the systems is necessary and sufficient for the asymptotic stability of the 2-D hybrid systems. The 2-D hybrid transformation, i.e. the 2-D Laplace-Z transformation, has been proposed to solve the stability analysis of the 2-D continuous-discrete systems and to obtain the 2-D hybrid transfer functions of the systems. The edge test for the Hurwitz-Schur stability of interval bivariate polynomials is introduced. The Hurwitz-Schur stability of the interval family of 2-D polynomials can be guaranteed by the stability of the finite set of edge polynomials of the family. An algorithm for the stability test of edge polynomials is given.
Hydrologic studies in wells open through large intervals. Annual report, 1992
Energy Technology Data Exchange (ETDEWEB)
1992-12-31
This report describes and summarizes activities, data, and preliminary data interpretation from the INEL Oversight Program R&D-1 project titled "Hydrologic Studies In Wells Open Through Large Intervals." The project is designed to use a straddle-packer system to isolate, hydraulically test, and sample specific intervals of monitoring wells that are open (uncased, unscreened) over large intervals of the Snake River Plain aquifer. The objectives of the project are to determine and compare vertical variations in water quality and aquifer properties that have previously only been determined in an integrated fashion over the entire thickness of the open interval of the observation wells.
Confidence intervals in Flow Forecasting by using artificial neural networks
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated, and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval method, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term, etc. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls based on correlation analysis between the flow under prediction and each implicit input
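The re-sampling scheme described in this abstract amounts to taking empirical quantiles of the sorted prediction errors and attaching them to the point forecast. A minimal sketch with synthetic errors (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def empirical_interval(point_forecast, past_errors, coverage=0.90):
    """Prediction interval, symmetrical in probability, built from the
    empirical distribution of past (observed - predicted) errors."""
    errs = np.sort(np.asarray(past_errors, dtype=float))  # ascending sort
    alpha = (1.0 - coverage) / 2.0
    # Keep the central mass, rejecting the extreme tails on each side.
    lo, hi = np.quantile(errs, [alpha, 1.0 - alpha])
    return point_forecast + lo, point_forecast + hi

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 5.0, size=1000)  # hypothetical past forecast errors
print(empirical_interval(100.0, errors, 0.90))
```

Note the interval is symmetric in probability (equal tail mass rejected on each side), not necessarily symmetric around the point forecast when the error distribution is skewed.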
DEFF Research Database (Denmark)
Elming, H; Holm, E; Jun, L
1998-01-01
AIMS: To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. METHODS AND RESULTS: The QT interval was measured in all leads from a standard 12-lead ECG in a random sample... of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All-cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects...... of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
Elements with Square Roots in Finite Groups
Institute of Scientific and Technical Information of China (English)
M.S. Lucido; M.R. Pournaki
2005-01-01
In this paper, we study the probability that a randomly chosen element in a finite group has a square root, in particular the simple groups of Lie type of rank 1, the sporadic finite simple groups and the alternating groups.
Infinite Possibilities for the Finite Element.
Finlayson, Bruce A.
1981-01-01
Describes the uses of finite element methods in solving problems of heat transfer, fluid flow, etc. Suggests that engineers should know the general concepts and be able to apply the principles of finite element methods. (Author/WB)
Conforming finite elements with embedded strong discontinuities
Dias-da-Costa, D.; Alfaiate, J.; Sluys, L.J.; Areias, P.; Fernandes, C.; Julio, E.
2012-01-01
The possibility of embedding strong discontinuities into finite elements allowed the simulation of different problems, namely, brickwork masonry fracture, dynamic fracture, failure in finite strain problems and simulation of reinforcement concrete members. However, despite the significant contributi...
Vibration analysis of composite pipes using the finite element method with B-spline wavelets
Energy Technology Data Exchange (ETDEWEB)
Oke, Wasiu A.; Khulief, Yehia A. [King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia)
2016-02-15
A finite element formulation using the B-spline wavelets on the interval is developed for modeling the free vibrations of composite pipes. The composite FRP pipe element is treated as a beam element. The finite pipe element is constructed in the wavelet space and then transformed to the physical space. Detailed expressions of the mass and stiffness matrices are derived for the composite pipe using the B-spline scaling and wavelet functions. Both Euler-Bernoulli and Timoshenko beam theories are considered. The generalized eigenvalue problem is formulated and solved to obtain the modal characteristics of the composite pipe. The developed wavelet-based finite element discretization scheme utilizes significantly fewer elements compared to the conventional finite element method for modeling composite pipes. Numerical solutions are obtained to demonstrate the accuracy of the developed element, which is verified by comparisons with some available results in the literature.
Sampling designs dependent on sample parameters of auxiliary variables
Wywiał, Janusz L
2015-01-01
The book offers a valuable resource for students and statisticians whose work involves survey sampling. An estimation of the population parameters in finite and fixed populations assisted by auxiliary variables is considered. New sampling designs dependent on moments or quantiles of auxiliary variables are presented against the background of the classical methods. Accuracies of the estimators based on the original sampling designs are compared with classical estimation procedures. Specific conditional sampling designs are applied to problems of small area estimation as well as to estimation of quantiles of variables under study.
A novel nonparametric confidence interval for differences of proportions for correlated binary data.
Duan, Chongyang; Cao, Yingshu; Zhou, Lizhi; Tan, Ming T; Chen, Pingyan
2016-11-16
Various confidence interval estimators have been developed for differences in proportions resulting from correlated binary data. However, the width of the most commonly recommended Tango's score confidence interval tends to be wide, and the computing burden of the exact methods recommended for small-sample data is intensive. The recently proposed rank-based nonparametric method, which treats the proportion as a special area under the receiver operating characteristic curve, provided a new way to construct the confidence interval for the proportion difference on paired data, while its complex computation limits its application in practice. In this article, we develop a new nonparametric method utilizing the U-statistics approach for comparing two or more correlated areas under receiver operating characteristic curves. The new confidence interval has a simple analytic form with a new estimate of the degrees of freedom of n - 1. It demonstrates good coverage properties and has shorter confidence interval widths than that of Tango. This new confidence interval with the new estimate of degrees of freedom also leads to coverage probabilities that are an improvement on the rank-based nonparametric confidence interval. Compared with the approximate exact unconditional method, the nonparametric confidence interval demonstrates good coverage properties even in small samples, and yet it is very easy to implement computationally. This nonparametric procedure is evaluated using simulation studies and illustrated with three real examples. The simplified nonparametric confidence interval is an appealing choice in practice for its ease of use and good performance. © The Author(s) 2016.
Effect size, confidence intervals and statistical power in psychological research.
Directory of Open Access Journals (Sweden)
Téllez A.
2015-07-01
Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.
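A sketch of the practice this paper recommends: reporting an effect size together with its confidence interval. This uses Cohen's d with a common large-sample approximation for its standard error (the illustrative means, SDs and sample sizes are not from the paper):

```python
import math

def cohens_d_ci(mean1, mean2, sd1, sd2, n1, n2, z=1.96):
    """Cohen's d between two groups with an approximate 95% confidence
    interval, using the widely used large-sample variance formula for d."""
    # Pooled standard deviation of the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Approximate standard error of d (normal-theory approximation).
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

d, (lo, hi) = cohens_d_ci(10.0, 8.0, 4.0, 4.0, 50, 50)
print(d, lo, hi)  # d = 0.5; 95% CI roughly (0.10, 0.90)
```

A CI that excludes zero but spans trivial-to-large effects, as here, is exactly the kind of nuance the authors argue a bare p-value hides.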
DOLFIN: Automated Finite Element Computing
Logg, Anders; 10.1145/1731022.1731030
2011-01-01
We describe here a library aimed at automating the solution of partial differential equations using the finite element method. By employing novel techniques for automated code generation, the library combines a high level of expressiveness with efficient computation. Finite element variational forms may be expressed in near mathematical notation, from which low-level code is automatically generated, compiled and seamlessly integrated with efficient implementations of computational meshes and high-performance linear algebra. Easy-to-use object-oriented interfaces to the library are provided in the form of a C++ library and a Python module. This paper discusses the mathematical abstractions and methods used in the design of the library and its implementation. A number of examples are presented to demonstrate the use of the library in application code.
Finite elements methods in mechanics
Eslami, M Reza
2014-01-01
This book covers all basic areas of mechanical engineering, such as fluid mechanics, heat conduction, beams, and elasticity, with detailed derivations of the mass, stiffness, and force matrices. It is especially designed to give the reader a physical feel for finite element approximation by introducing finite elements through the elevation of an elastic membrane. A detailed treatment of computer methods with numerical examples is provided. In the fluid mechanics chapter, the conventional and vorticity-transport formulations for viscous incompressible fluid flow are presented, with a discussion of the method of solution. The variational and Galerkin formulations of the heat conduction, beam, and elasticity problems are also discussed in detail. Three computer codes are provided to solve the elastic membrane problem. One of them solves Poisson's equation. The second computer program handles two-dimensional elasticity problems, and the third one presents the three-dimensional transient heat conducti...
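The variational and Galerkin machinery these texts describe can be illustrated with a minimal one-dimensional example. The sketch below (hypothetical helper name, stdlib Python only) assembles and solves the standard linear-element system for -u'' = f on (0,1) with homogeneous Dirichlet boundary conditions:

```python
def poisson_fem_1d(n, f=1.0):
    """Solve -u'' = f on (0,1), u(0) = u(1) = 0, with n linear elements.

    Classic stiffness-matrix assembly as in introductory FEM texts; the
    resulting symmetric tridiagonal system is solved with the Thomas
    algorithm. Returns u at the interior nodes x_i = i/n, i = 1..n-1.
    """
    h = 1.0 / n
    m = n - 1                      # number of interior nodes
    diag = [2.0 / h] * m           # K_ii for linear hat functions
    off = [-1.0 / h] * (m - 1)     # K_{i,i+1} = K_{i+1,i}
    rhs = [f * h] * m              # load vector for constant f
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, m):
        w = off[i - 1] / diag[i - 1]
        diag[i] -= w * off[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]
    return u
```

For constant f, linear elements reproduce the exact solution at the nodes, so `poisson_fem_1d(10)[4]` returns u(0.5) = 0.125 for f = 1.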
Automation of finite element methods
Korelc, Jože
2016-01-01
New finite elements are needed in both research and industry environments for the development of virtual prediction techniques. The design and implementation of novel finite elements for specific purposes is a tedious and time-consuming task, especially for nonlinear formulations. Automating this process can speed it up considerably, since generation of the final computer code can be accelerated by several orders of magnitude. This book provides the reader with the knowledge required to employ modern automatic tools such as AceGen successfully within solid mechanics. It covers the range from theoretical background and algorithmic treatment to many different applications. The book is written for advanced students in engineering and for researchers in educational and industrial environments.
Representation theory of finite monoids
Steinberg, Benjamin
2016-01-01
This first text on the subject provides a comprehensive introduction to the representation theory of finite monoids. Carefully worked examples and exercises provide the bells and whistles for graduate accessibility, bringing a broad range of advanced readers to the forefront of research in the area. Highlights of the text include applications to probability theory, symbolic dynamics, and automata theory. Comfort with module theory, a familiarity with ordinary group representation theory, and the basics of Wedderburn theory, are prerequisites for advanced graduate level study. Researchers in algebra, algebraic combinatorics, automata theory, and probability theory, will find this text enriching with its thorough presentation of applications of the theory to these fields. Prior knowledge of semigroup theory is not expected for the diverse readership that may benefit from this exposition. The approach taken in this book is highly module-theoretic and follows the modern flavor of the theory of finite dimensional ...
Selective Smoothed Finite Element Method
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
The paper examines three selective schemes for the smoothed finite element method (SFEM) which was formulated by incorporating a cell-wise strain smoothing operation into the standard compatible finite element method (FEM). These selective SFEM schemes were formulated based on three selective integration FEM schemes with similar properties found between the number of smoothing cells in the SFEM and the number of Gaussian integration points in the FEM. Both scheme 1 and scheme 2 are free of nearly incompressible locking, but scheme 2 is more general and gives better results than scheme 1. In addition, scheme 2 can be applied to anisotropic and nonlinear situations, while scheme 1 can only be applied to isotropic and linear situations. Scheme 3 is free of shear locking. This scheme can be applied to plate and shell problems. Results of the numerical study show that the selective SFEM schemes give more accurate results than the FEM schemes.
Quantum Computing over Finite Fields
James, Roshan P; Sabry, Amr
2011-01-01
In recent work, Benjamin Schumacher and Michael D. Westmoreland investigate a version of quantum mechanics which they call "modal quantum theory" but which we prefer to call "discrete quantum theory". This theory is obtained by instantiating the mathematical framework of Hilbert spaces with a finite field instead of the field of complex numbers. This instantiation collapses much of the structure of actual quantum mechanics but retains several of its distinguishing characteristics, including the notions of superposition, interference, and entanglement. Furthermore, discrete quantum theory excludes local hidden variable models, has a no-cloning theorem, and can express natural counterparts of quantum information protocols such as superdense coding and teleportation. Our first result is to distill a model of discrete quantum computing from this quantum theory. The model is expressed using a monadic metalanguage built on top of a universal reversible language for finite computations, and hence is directly implementab...
Factorization Properties of Finite Spaces
Simkhovich, B; Zak, J; 10.1088/1751-8113/43/4/045301
2010-01-01
In 1960 Schwinger [J. Schwinger, Proc. Natl. Acad. Sci. 46 (1960) 570-579] proposed an algorithm for the factorization of unitary operators in a finite M-dimensional Hilbert space according to a coprime decomposition of M. Using a special permutation operator A, we generalize the Schwinger factorization to every decomposition of M. We obtain the factorized pairs of unitary operators and show that they obey the same commutation relations as Schwinger's. We apply the new factorization to two problems. First, we show how to generate two kq-like mutually unbiased bases for any composite dimension. Then, using a Harper-like Hamiltonian model in finite dimension M = M1M2, we show how to design a physical system with M1 energy levels, each having degeneracy M2.
Finite mathematics models and applications
Morris, Carla C
2015-01-01
Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.
Maximal subgroups of finite groups
Directory of Open Access Journals (Sweden)
S. Srinivasan
1990-01-01
Full Text Available In finite groups, maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has very small index in the whole group, then it influences the structure of the group itself. In this paper we study the case in which the indices of the maximal subgroups of a group have a special type of relation with the Fitting subgroup of the group.
Commutators with Finite Spectrum II
Institute of Scientific and Technical Information of China (English)
Nadia BOUDI
2009-01-01
The purpose of this paper is to study derivations d, d' defined on a Banach algebra A such that the spectrum σ([dx, d'x]) is finite for all x ∈ A. In particular we show that if the algebra is semisimple, then there exists an element a in the socle of A such that [d, d'] is the inner derivation implemented by a.
Flux tubes at Finite Temperature
Bicudo, Pedro; Cardoso, Marco
2016-01-01
We show the flux tubes produced by static quark-antiquark, quark-quark and quark-gluon charges at finite temperature. The sources are placed in the lattice with fundamental and adjoint Polyakov loops. We compute the square densities of the chromomagnetic and chromoelectric fields above and below the phase transition. Our results are gauge invariant and produced in pure gauge SU(3). The codes are written in CUDA and the computations are performed with GPUs.
Meng, Bin
2010-01-01
Operator-valued frames are natural generalization of frames that have been used in quantum computing, packets encoding, etc. In this paper, we focus on developing the theory about operator-valued frames for finite Hilbert spaces. Some results concerning dilation, alternate dual, and existence of operator-valued frames are given. Then we characterize the optimal operator-valued frames under the case which one packet of data is lost in transmission. At last we construct the operator-valued fram...
Genetic analyses of a seasonal interval timer.
Prendergast, Brian J; Renstrom, Randall A; Nelson, Randy J
2004-08-01
Seasonal clocks (e.g., circannual clocks, seasonal interval timers) permit anticipation of regularly occurring environmental events by timing the onset of seasonal transitions in reproduction, metabolism, and behavior. Implicit in the concept that seasonal clocks reflect adaptations to the local environment is the unexamined assumption that heritable genetic variance exists in the critical features of such clocks, namely, their temporal properties. These experiments quantified the intraspecific variance in, and heritability of, the photorefractoriness interval timer in Siberian hamsters (Phodopus sungorus), a seasonal clock that provides temporal information to mechanisms that regulate seasonal transitions in body weight. Twenty-seven families consisting of 54 parents and 109 offspring were raised in a long-day photoperiod and transferred as adults to an inhibitory photoperiod (continuous darkness; DD). Weekly body weight measurements permitted specification of the interval of responsiveness to DD, a reflection of the duration of the interval timer, in each individual. Body weights of males and females decreased after exposure to DD, but 3 to 5 months later, somatic recrudescence occurred, indicative of photorefractoriness to DD. The interval timer was approximately 5 weeks longer and twice as variable in females relative to males. Analyses of variance of full siblings revealed an overall intraclass correlation of 0.71 ± 0.04 (0.51 ± 0.10 for male offspring and 0.80 ± 0.06 for female offspring), suggesting a significant family resemblance in the duration of interval timers. Parent-offspring regression analyses yielded an overall heritability estimate of 0.61 ± 0.2; h^2 estimates from parent-offspring regression analyses were significant for female offspring (0.91 ± 0.4) but not for male offspring (0.35 ± 0.2), indicating strong additive genetic components for this trait, primarily in females. In nature, individual differences, both within and between
Strong reality of finite simple groups
Vdovin, E P
2010-01-01
The classification of finite simple strongly real groups is complete. It is easy to see that strong reality for a nonabelian finite simple group is equivalent to the fact that each element can be written as a product of two involutions. We thus obtain a solution to Problem 14.82 of the Kourovka Notebook as a consequence of the classification of finite simple strongly real groups.
FINITE RIORDAN MATRIX AND RIORDAN GROUP
Institute of Scientific and Technical Information of China (English)
2000-01-01
A Riordan matrix is a lower triangular matrix of infinite order satisfying certain restriction conditions. In this paper, the author defines two kinds of finite Riordan matrices which are not limited to lower triangular form. Group-theoretic properties of the two kinds of matrices are considered, and applications of the finite Riordan matrices are investigated.
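A conventional (infinite) Riordan matrix truncates naturally to finite order. As a sketch (hypothetical function names), the n x n truncation of the Riordan array (d(t), h(t)) can be built column by column from truncated power-series coefficient lists, since column k holds the coefficients of d(t)h(t)^k:

```python
def series_mul(a, b, n):
    """First n coefficients of the product of two power series."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def riordan(d, h, n):
    """n x n truncation of the Riordan matrix (d(t), h(t)).

    d and h are coefficient lists of the generating functions;
    h[0] must be 0 for the array to be lower triangular.
    """
    col = d[:n] + [0] * (n - len(d))
    h = h[:n] + [0] * (n - len(h))
    cols = []
    for _ in range(n):
        cols.append(col[:])
        col = series_mul(col, h, n)   # next column: multiply by h(t)
    # transpose columns into rows: entry (row r, column k)
    return [[cols[k][r] for k in range(n)] for r in range(n)]
```

With d(t) = 1/(1-t) and h(t) = t/(1-t) this reproduces Pascal's triangle, the classic example of a Riordan array.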
Finite Metric Spaces of Strictly Negative Type
DEFF Research Database (Denmark)
Hjorth, Poul; Lisonek, P.; Markvorsen, Steen
1998-01-01
We prove that, if a finite metric space is of strictly negative type, then its transfinite diameter is uniquely realized by the infinite extender (load vector). Finite metric spaces that have this property include all spaces on two, three, or four points, all trees, and all finite subspaces of Eu...
Selforthogonal modules with finite injective dimension
Institute of Scientific and Technical Information of China (English)
黄兆泳
2000-01-01
The category consisting of finitely generated modules which are left orthogonal with a cotilting bimodule is shown to be functorially finite. The notion of left orthogonal dimension is introduced, and then a necessary and sufficient condition for selforthogonal modules to have finite injective dimension and a characterization of cotilting modules are given.
Heart rate dependency of JT interval sections.
Hnatkova, Katerina; Johannesen, Lars; Vicente, Jose; Malik, Marek
2017-08-09
Little experience exists with heart rate correction of the J-Tpeak and Tpeak-Tend intervals. In a population of 176 female and 176 male healthy subjects aged 32.3±9.8 and 33.1±8.4 years, respectively, curvilinear and linear relationships to heart rate were investigated for different sections of the JT interval, defined by proportions of the area under the vector magnitude of the reconstructed 3D vectorcardiographic loop. The duration of the JT sub-section between approximately just before the T peak and almost the T end was found to be heart rate independent. Most of the JT heart rate dependency relates to the beginning of the interval. The duration of the terminal T-wave tail is only weakly heart rate dependent. The Tpeak-Tend interval is only minimally heart rate dependent, and in studies without substantial heart rate changes it does not need to be heart rate corrected. For any correction formula with linear additive properties, heart rate correction of the JT and JTpeak intervals is practically the same as that of the QT interval. However, this does not apply to formulas of the form Int/RR^a, since they do not have linear additive properties. Copyright © 2017 Elsevier Inc. All rights reserved.
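The Int/RR^a correction formulas mentioned in the abstract, and their lack of linear additivity, can be illustrated directly (hypothetical helper name; Bazett's exponent a = 0.5 is used in the example):

```python
def corrected_interval(interval_ms, rr_s, alpha=0.5):
    """Power-law heart rate correction Int_c = Int / RR**alpha.

    interval_ms: measured interval in milliseconds; rr_s: RR in seconds.
    alpha = 0.5 is Bazett's exponent, 1/3 is Fridericia's. As the paper
    notes, this form is not linearly additive: correcting QT and then
    subtracting QRS is not the same as correcting JT directly.
    """
    return interval_ms / rr_s ** alpha
```

With QT = 400 ms, QRS = 100 ms and RR = 0.64 s, Bazett gives QTc = 500 ms; then QTc - QRS = 400 ms, while correcting JT = 300 ms directly gives JTc = 375 ms, the non-additivity the authors point out.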
Transmission line sag calculations using interval mathematics
Energy Technology Data Exchange (ETDEWEB)
Shaalan, H. [Institute of Electrical and Electronics Engineers, Washington, DC (United States)]|[US Merchant Marine Academy, Kings Point, NY (United States)
2007-07-01
Electric utilities are facing the need for additional generating capacity, new transmission systems and more efficient use of existing resources. As such, there are several uncertainties associated with utility decisions. These uncertainties include future load growth, construction times and costs, and performance of new resources. Regulatory and economic environments also present uncertainties. Uncertainty can be modeled based on a probabilistic approach where probability distributions for all of the uncertainties are assumed. Another approach to modeling uncertainty is referred to as unknown but bounded. In this approach, the upper and lower bounds on the uncertainties are assumed without probability distributions. Interval mathematics is a tool for the practical use and extension of the unknown but bounded concept. In this study, the calculation of transmission line sag was used as an example to demonstrate the use of interval mathematics. The objective was to determine the change in cable length, based on a fixed span and an interval of cable sag values for a range of temperatures. The resulting change in cable length was an interval corresponding to the interval of cable sag values. It was shown that there is a small change in conductor length due to variation in sag based on the temperature ranges used in this study. 8 refs.
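The unknown-but-bounded calculation described in this abstract can be sketched with a few lines of interval arithmetic. The following is a minimal illustration: the class and function names are hypothetical, and the parabolic length formula L = S + 8D²/(3S) is the standard approximation, not necessarily the exact formula used in the study:

```python
class Interval:
    """Minimal interval arithmetic: just enough for the sag-length formula."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        ps = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def cable_length(span, sag):
    """Approximate conductor length L = S + 8 D^2 / (3 S) for a fixed
    span S and sag D given as an interval (parabolic approximation)."""
    return Interval(span, span) + sag * sag * (8.0 / (3.0 * span))
```

For a 300 m span and a sag interval of [6.0, 6.5] m (covering a temperature range), the resulting length interval is narrow, mirroring the study's observation that conductor length changes little with sag.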
Confidence interval based parameter estimation--a new SOCR applet and activity.
Directory of Open Access Journals (Sweden)
Christou, Nicolas; Dinov, Ivo D
2011-01-01
Full Text Available Many scientific investigations depend on obtaining data-driven, accurate, robust and computationally-tractable parameter estimates. In the face of unavoidable intrinsic variability, there are different algorithmic approaches, prior assumptions and fundamental principles for computing point and interval estimates. Efficient and reliable parameter estimation is critical in making inference about observable experiments, summarizing process characteristics and predicting experimental behaviors. In this manuscript, we demonstrate the simulation, construction, validation and interpretation of confidence intervals, under various assumptions, using the interactive web-based tools provided by the Statistics Online Computational Resource (http://www.SOCR.ucla.edu). Specifically, we present confidence interval examples for population means, with known or unknown population standard deviation; population variance; population proportion (exact and approximate), as well as confidence intervals based on bootstrapping or the asymptotic properties of the maximum likelihood estimates. Like all SOCR resources, these confidence interval resources may be openly accessed via an Internet-connected Java-enabled browser. The SOCR confidence interval applet enables the user to empirically explore and investigate the effects of the confidence level, the sample size and the parameter of interest on the corresponding confidence interval. Two applications of the new interval estimation computational library are presented. The first is a simulation of a confidence interval estimating the US unemployment rate, and the second demonstrates the computation of point and interval estimates of hippocampal surface complexity for Alzheimer's disease patients, mild cognitive impairment subjects and asymptomatic controls.
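Two of the interval families the applet exposes, the mean with known standard deviation and the approximate (Wald) proportion interval, are straightforward to reproduce. A hedged stdlib-Python sketch (hypothetical function names, not the SOCR library's API):

```python
import math
from statistics import NormalDist

def mean_ci_known_sigma(xs, sigma, conf=0.95):
    """z-interval for a population mean when sigma is known."""
    n = len(xs)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    m = sum(xs) / n
    half = z * sigma / math.sqrt(n)
    return m - half, m + half

def proportion_ci(successes, n, conf=0.95):
    """Approximate (Wald) interval for a population proportion."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half
```

Varying `conf` and the sample size in these two functions reproduces exactly the exploration the applet is designed for: wider intervals at higher confidence levels, narrower ones as n grows.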
Optimal prediction intervals of wind power generation
DEFF Research Database (Denmark)
Wan, Can; Wu, Zhao; Pinson, Pierre
2014-01-01
Accurate and reliable wind power forecasting is essential to power system operation. Given the significant uncertainties involved in wind generation, probabilistic interval forecasting provides a unique solution for estimating and quantifying, beforehand, the potential impacts and risks facing system operation with wind penetration. This paper proposes a novel hybrid intelligent algorithm to directly formulate optimal prediction intervals of wind power generation, based on extreme learning machine and particle swarm optimization. Prediction intervals with associated confidence levels are generated through ... conducted. Compared with the benchmarks applied, experimental results demonstrate the high efficiency and reliability of the developed approach. It is therefore concluded that the proposed method provides a new generalized framework for probabilistic wind power forecasting with high reliability and flexibility...
Confidence intervals with a priori parameter bounds
Lokhov, A V
2014-01-01
We review methods of constructing confidence intervals that account for a priori information about one-sided constraints on the parameter being estimated. We show that the so-called sensitivity-limit method yields a correct solution of the problem. Solutions are derived for the cases of a continuous distribution with a non-negative estimated parameter and of a discrete distribution, specifically a Poisson process with background. For both cases, the best upper limit that accounts for the a priori information is constructed. A table of confidence intervals for the parameter of a Poisson distribution that correctly accounts for information on the known value of the background is provided, along with software for calculating the confidence intervals for any confidence level and magnitude of the background (the software is freely available for download via the Internet).
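For the Poisson-with-background case, the classical (Neyman) construction of an upper limit can be sketched as follows. This illustrates only the textbook construction, not the sensitivity-limit method the paper advocates, and the function names are hypothetical:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), via the running-term recurrence."""
    term = s = math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        s += term
    return s

def upper_limit(n_obs, background, cl=0.90):
    """Classical upper limit on a Poisson signal mu with known background b:
    the smallest mu such that P(X <= n_obs; mu + b) <= 1 - cl.
    Found by bisection, since the CDF is monotone decreasing in mu."""
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + background) > 1 - cl:
            lo = mid
        else:
            hi = mid
    return hi
```

With zero observed events and zero background, the 90% limit is the familiar -ln(0.1) ≈ 2.30; note that for large known background this classical limit can become unphysically small, which is exactly the situation the a-priori-information methods address.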
Existence test for asynchronous interval iterations
DEFF Research Database (Denmark)
Madsen, Kaj; Caprani, O.; Stauning, Ole
1997-01-01
In the search for regions that contain fixed points of a real function of several variables, tests based on interval calculations can be used to establish the existence or non-existence of fixed points in regions that are examined in the course of the search. The search can, e.g., be performed as a synchronous (sequential) interval iteration: in each iteration step all components of the iterate are calculated based on the previous iterate. In this case it is straightforward to base simple interval existence and non-existence tests on the calculations done in each step of the iteration. The search can also be performed as an asynchronous (parallel) iteration: only a few components are changed in each step, and this calculation is in general based on components from different previous iterates. For the asynchronous iteration it turns out that simple tests of existence and non-existence can be based...
Jendzjowsky, Nicholas G; DeLorey, Darren S
2011-10-01
Non-interval and interval training progressions were used to determine (i) the mean rate at which treadmill speed could be incremented daily using a non-interval training progression to train rats to run continuously at different intensities and (ii) the number of training days required for rats to run continuously at different exercise intensities with non-interval- and interval-based training progressions, to establish methods of progressive overload for rodent exercise-training studies. Rats were randomly assigned to mild-intensity (n = 5, 20 m·min(-1), 5% grade), moderate-intensity (n = 5, 30 m·min(-1), 5% grade), and heavy-intensity non-interval groups (n = 5, 40 m·min(-1), 5% grade) or a heavy-intensity interval group (n = 5, 40 m·min(-1), 5% grade) and ran 5 days·week(-1) for 6 weeks. Non-interval training involved a daily increase of treadmill speed, whereas interval training involved a daily increase of interval time, until the animal could run continuously at the prescribed intensity. In mild-, moderate-, and heavy-intensity non-interval-trained rats, treadmill speed was increased by 0.6 ± 0.7 m·min(-1)·day(-1), 0.6 ± 0.2 m·min(-1)·day(-1), and 0.8 ± 0.1 m·min(-1)·day(-1), respectively. Target training intensity and duration were obtained following 0.4 ± 0.5 days, 17 ± 3 days, and 23 ± 3 training days (p ...). The progression enables rats to run continuously at moderate and heavy intensities with 3-4 weeks of progressive overload. Interval training significantly reduces the number of training days required to attain a target intensity.
Intervality and coherence in complex networks
Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.
2016-06-01
Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
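The intervality property itself is easy to state operationally: under some ordering of the species, each predator's prey set occupies one contiguous block. A small sketch (hypothetical names) that checks this for a given ordering:

```python
def is_interval(adjacency, order):
    """True if, under the given species ordering, every predator's prey
    set forms one contiguous block of positions (the intervality property).

    adjacency: dict mapping each predator to a list of its prey species;
    order: list giving the candidate ordering of all species.
    """
    pos = {species: i for i, species in enumerate(order)}
    for prey in adjacency.values():
        idx = sorted(pos[p] for p in prey)
        # a contiguous block spans exactly len(idx) consecutive positions
        if idx and idx[-1] - idx[0] + 1 != len(idx):
            return False
    return True
```

Measuring "how interval" an empirical web is then amounts to searching over orderings (or counting gaps), which is the quantity the paper compares across food webs and non-trophic networks.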
Radial basis function networks with linear interval regression weights for symbolic interval data.
Su, Shun-Feng; Chuang, Chen-Chia; Tao, C W; Jeng, Jin-Tsong; Hsiao, Chih-Ching
2012-02-01
This paper introduces a new structure of radial basis function networks (RBFNs) that can successfully model symbolic interval-valued data. In the proposed structure, to handle symbolic interval data, the Gaussian functions required in the RBFNs are modified to consider interval distance measure, and the synaptic weights of the RBFNs are replaced by linear interval regression weights. In the linear interval regression weights, the lower and upper bounds of the interval-valued data as well as the center and range of the interval-valued data are considered. In addition, in the proposed approach, two stages of learning mechanisms are proposed. In stage 1, an initial structure (i.e., the number of hidden nodes and the adjustable parameters of radial basis functions) of the proposed structure is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, a gradient-descent kind of learning algorithm is applied to fine-tune the parameters of the radial basis function and the coefficients of the linear interval regression weights. Various experiments are conducted, and the average behavior of the root mean square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment are considered as the performance index. The results clearly show the effectiveness of the proposed structure.
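The key modification, evaluating a Gaussian basis function with a distance defined on intervals, can be sketched as follows. The center/range distance used here is one common choice for symbolic interval data and is an assumption for illustration, not necessarily the paper's exact measure:

```python
import math

def interval_distance(a, b):
    """Distance between intervals a = (lo, hi) and b = (lo, hi) that
    combines the distance of their centers and of their half-ranges."""
    ca, ra = (a[0] + a[1]) / 2, (a[1] - a[0]) / 2
    cb, rb = (b[0] + b[1]) / 2, (b[1] - b[0]) / 2
    return math.sqrt((ca - cb) ** 2 + (ra - rb) ** 2)

def rbf_activation(x, center, sigma):
    """Gaussian basis function evaluated with the interval distance,
    as the hidden layer of an interval-valued RBFN would do."""
    d = interval_distance(x, center)
    return math.exp(-d * d / (2 * sigma ** 2))
```

The output layer then combines such activations with linear interval regression weights, so both the center and the range of the prediction are modeled.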
Interval Continuous Plant Identification from Value Sets
Directory of Open Access Journals (Sweden)
R. Hernández
2012-01-01
Full Text Available This paper shows how to obtain the values of the numerator and denominator Kharitonov polynomials of an interval plant from its value set at a given frequency. Moreover, it is proven that given a value set, all the assigned polynomials of the vertices can be determined if and only if there is a complete edge or a complete arc lying on a quadrant. This algorithm is nonconservative in the sense that if the value-set boundary of an interval plant is exactly known, and particularly its vertices, then the Kharitonov rectangles are exactly those used to obtain these value sets.
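The four Kharitonov polynomials referred to above follow fixed lower/upper-bound patterns on the interval coefficients; a minimal sketch (hypothetical function name):

```python
def kharitonov(bounds):
    """The four Kharitonov polynomials of an interval polynomial.

    bounds[i] = (lo, hi) for the coefficient of s**i. Returns four
    coefficient lists following the classic repeating patterns of
    period four (lo/lo/hi/hi and its three companions).
    """
    patterns = [           # bound choice per coefficient index i, keyed by i % 4
        (0, 0, 1, 1),      # K1: lo lo hi hi ...
        (1, 1, 0, 0),      # K2: hi hi lo lo ...
        (0, 1, 1, 0),      # K3: lo hi hi lo ...
        (1, 0, 0, 1),      # K4: hi lo lo hi ...
    ]
    return [[bounds[i][p[i % 4]] for i in range(len(bounds))]
            for p in patterns]
```

By Kharitonov's theorem, robust stability of the whole interval family reduces to checking these four vertex polynomials, which is why recovering them from value sets, as the paper does, identifies the plant.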
A sequent calculus for signed interval logic
DEFF Research Database (Denmark)
Rasmussen, Thomas Marthedal
2001-01-01
We propose and discuss a complete sequent calculus formulation for Signed Interval Logic (SIL) with the chief purpose of improving proof support for SIL in practice. The main theoretical result is a simple characterization of the limit between decidability and undecidability of quantifier-free SIL. We present a mechanization of SIL in the generic proof assistant Isabelle and consider techniques for automated reasoning. Many of the results and ideas of this report are also applicable to traditional (non-signed) interval logic and, hence, to Duration Calculus.
Finite Metric Spaces of Strictly negative Type
DEFF Research Database (Denmark)
Hjorth, Poul G.
If a finite metric space is of strictly negative type then its transfinite diameter is uniquely realized by an infinite extent ("load vector"). Finite metric spaces that have this property include all trees, and all finite subspaces of Euclidean and hyperbolic spaces. We prove that if the distance matrix of a finite metric space is both hypermetric and regular, then it is of strictly negative type. We show that the strictly negative type finite subspaces of spheres are precisely those which do not contain two pairs of antipodal points.
Tillé, Yves
2006-01-01
Important progresses in the methods of sampling have been achieved. This book draws up an inventory of methods that can be useful for selecting samples. Forty-six sampling methods are described in the framework of general theory. This book is suitable for experienced statisticians who are familiar with the theory of survey sampling.
Brus, D.J.
2015-01-01
In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
Ross, Kenneth N.
1987-01-01
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)
Finite Element Model of Cardiac Electrical Conduction.
Yin, John Zhihao
1994-01-01
In this thesis, we develop mathematical models to study electrical conduction of the heart. One important pattern of wave propagation of electrical excitation in the heart is reentry, which is believed to be the underlying mechanism of some dangerous cardiac arrhythmias such as ventricular tachycardia and ventricular fibrillation. We present in this thesis a new ionic channel model of the ventricular cardiac cell membrane to study the microscopic electrical properties of myocardium. We base our model on recent single-channel experimental data and a simple physical diffusion model of the calcium channel. Our ionic channel model of myocardium has simpler differential equations and fewer parameters than previous models. Furthermore, our ionic channel model achieves better results in simulating the strength-interval curve when we connect the membrane patch model to form a one-dimensional cardiac muscle strand. We go on to study a finite element model which uses multiple states and non-nearest neighbor interactions to include curvature and dispersion effects. We create a generalized lattice randomization to overcome the artifacts generated by the interaction between the local dynamics and the regularities of the square lattice. We show that the homogeneous model does not display spontaneous wavefront breakup in a reentrant wave propagation once the lattice artifacts have been smoothed out by lattice randomization with a randomization scale larger than the characteristic length of the interaction. We further develop a finite 3-D 3-state heart model which employs a probability interaction rule. This model is applied to the simulation of Body Surface Laplacian Mapping (BSLM) using a cylindrical volume conductor as the torso model. We show that BSLM has a higher spatial resolution than conventional mapping methods in revealing the underlying electrical activities of the heart. The results of these studies demonstrate that mathematical modeling and computer simulation are very
Neal, R M
2000-01-01
Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such `slice sampling' methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive f...
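The alternation described above, a vertical draw under the density followed by a uniform draw from the horizontal slice, is short to write down. A minimal univariate sketch in the spirit of Neal's stepping-out and shrinkage procedures (the width `w`, the seed, and the demo target are illustrative choices, not taken from the paper):

```python
import math
import random

def slice_sample(logf, x0, w=1.0, n=1000, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage.

    logf: log of an (unnormalized) density; x0: starting point;
    w: initial bracket width (a tuning assumption).
    """
    rng = rng or random.Random(0)
    samples, x = [], x0
    for _ in range(n):
        # Vertical step: draw an auxiliary level uniformly under the density.
        logy = logf(x) + math.log(rng.random())
        # Stepping out: grow a bracket [lo, hi] until it covers the slice.
        lo = x - w * rng.random()
        hi = lo + w
        while logf(lo) > logy:
            lo -= w
        while logf(hi) > logy:
            hi += w
        # Horizontal step: sample uniformly from the bracket,
        # shrinking the bracket toward the current point on rejection.
        while True:
            x1 = lo + (hi - lo) * rng.random()
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        samples.append(x)
    return samples
```

Because rejected horizontal draws only shrink the bracket, the effective step size adapts to the local scale of the density, which is the efficiency advantage over fixed-scale Metropolis updates that the abstract highlights.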
An 8-node tetrahedral finite element suitable for explicit transient dynamic simulations
Energy Technology Data Exchange (ETDEWEB)
Key, S.W.; Heinstein, M.W.; Stone, C.M. [Sandia National Labs., Albuquerque, NM (United States)
1997-12-31
Considerable effort has been expended in perfecting the algorithmic properties of 8-node hexahedral finite elements. Today the element is well understood and performs exceptionally well when used in modeling three-dimensional explicit transient dynamic events. However, the automatic generation of all-hexahedral meshes remains an elusive achievement. The alternative, the automatically meshable 4-node tetrahedral finite element, is a notoriously poor performer, and the 10-node quadratic tetrahedral finite element, while a better performer numerically, is computationally expensive. To use the all-tetrahedral mesh generation extant today, the authors have explored the creation of a quality 8-node tetrahedral finite element (a four-node tetrahedral finite element enriched with four midface nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular, they examine the 8-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element only samples constant strain states and, therefore, has 12 hourglass modes. In this regard, it bears similarities to the 8-node, mean-quadrature hexahedral finite element. Given automatic all-tetrahedral meshing, the 8-node, constant-strain tetrahedral finite element is a suitable replacement for the 8-node hexahedral finite element and hand-built meshes.
Symmetric finite volume schemes for eigenvalue problems in arbitrary dimensions
Institute of Scientific and Technical Information of China (English)
DAI Xiaoying; YANG Zhang; ZHOU Aihui
2008-01-01
Based on a linear finite element space, two symmetric finite volume schemes for eigenvalue problems in arbitrary dimensions are constructed and analyzed. Some relationships between the finite element method and the finite difference method are addressed, too.
Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions
Padilla, Miguel A.; Divers, Jasmin
2013-01-01
The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items. …
[Short pregnancy interval and reproductive disorders
Jongbloet, P.H.; Zielhuis, G.A.; Pasker-de Jong, P.C.M.
2002-01-01
The cause of the 'borderline personality disorder' of Vincent van Gogh has been discussed in social-psychiatric terms related to so-called 'substitute children', born after the loss of a previous child. A biological-organic genesis, i.e. the very short birth interval of precisely one year between Va
Happiness Scale Interval Study. Methodological Considerations
Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.
2011-01-01
The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are…
Interval-based Specification of Concurrent Objects
DEFF Research Database (Denmark)
Løvengreen, Hans Henrik; Sørensen, Morten U.
1998-01-01
We propose a logic for specifying the behaviour of concurrent objects, i.e. concurrent entities that invoke operations of each other. The logic is an interval logic with operation invocations as primitive formulas. The strengths and deficiencies of the logic are illustrated by specifying a variety...
Interval logic. Proof theory and theorem proving
DEFF Research Database (Denmark)
Rasmussen, Thomas Marthedal
2002-01-01
Real-time systems are computer systems which have to meet real-time constraints. To increase the confidence in such systems, formal methods and formal verification are utilized. The class of logics known as interval logics can be used for expressing properties and requirements of real-time system...
Mitotic activity index in interval breast cancers.
Groenendijk, R.P.R.; Bult, P.; Noppen, C.M.; Boetes, C.; Ruers, T.J.M.; Wobbes, Th.
2003-01-01
AIMS: The Mitotic Activity Index (MAI) is a strong prognostic factor for disease free survival in breast cancer. The MAI is lower in screen detected tumours, correlating with less aggressive biological behaviour in this group. In this study the MAI is compared between screen detected, interval and s
High Intensity Interval Training: New Insights
Institute of Scientific and Technical Information of China (English)
Martin J.Gibala
2008-01-01
KEY POINTS: High-intensity interval training (HIT) is characterized by repeated sessions of relatively brief, intermittent exercise, often performed with an "all out" effort or at an intensity close to that which elicits peak oxygen uptake (i.e., ≥90% of VO2 peak).
Robust stability of interval parameter matrices
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
This note is devoted to the problem of robust stability of interval parameter matrices. Based on some basic facts relating the H∞ norm of a transfer function to the Riccati matrix inequality and Hamilton matrix, several test conditions with parameter perturbation bounds are obtained.
Precise Interval Timer for Software Defined Radio
Pozhidaev, Aleksey (Inventor)
2014-01-01
A precise digital fractional interval timer for software defined radios which vary their waveform on a packet-by-packet basis. The timer allows for variable length in the preamble of the RF packet and allows the boundaries of the TDMA (Time Division Multiple Access) slots of the receiver of an SDR to be adjusted based on the reception of the RF packet of interest.
Computation of confidence intervals for Poisson processes
Aguilar-Saavedra, J. A.
2000-07-01
We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
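For orientation, the Feldman-Cousins construction that such algorithms accelerate can be sketched by brute force: for each candidate signal mean, Poisson counts are ranked by the likelihood ratio against the best-fit signal and accepted until the target coverage is reached. The grid bounds, step size, and count cutoff below are illustrative assumptions, not the paper's fast algorithm:

```python
import math

def pois_pmf(n, lam):
    """Poisson pmf computed in log space for numerical stability."""
    if lam == 0.0:
        return 1.0 if n == 0 else 0.0
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def fc_interval(n_obs, b, cl=0.9, mu_max=15.0, step=0.005, n_max=50):
    """Brute-force Feldman-Cousins interval for a Poisson signal mu
    with known background b (illustrative sketch, not the fast method)."""
    accepted_mus = []
    mu = 0.0
    while mu <= mu_max:
        # Likelihood-ratio ordering: R(n) = P(n|mu+b) / P(n|mu_best+b),
        # where mu_best = max(0, n - b) is the physically allowed best fit.
        ranks = []
        for n in range(n_max):
            mu_best = max(0.0, n - b)
            p = pois_pmf(n, mu + b)
            ranks.append((p / pois_pmf(n, mu_best + b), n, p))
        ranks.sort(reverse=True)
        # Accept counts in decreasing rank order until coverage >= cl.
        cov, accept = 0.0, set()
        for _, n, p in ranks:
            accept.add(n)
            cov += p
            if cov >= cl:
                break
        if n_obs in accept:
            accepted_mus.append(mu)
        mu += step
    return min(accepted_mus), max(accepted_mus)
```

For `n_obs = 0` with background `b = 3.0`, this reproduces the well-known 90% CL interval of roughly [0, 1.08]; the cost of scanning the full mu grid for every observation is exactly the expense that a fast numerical algorithm aims to avoid.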
[Determination and verification of reference intervals].
Henny, J; Arnaud, J; Giroud, C; Vassault, A
2010-12-01
Based on the original recommendations of the Expert Panel on the Theory of Reference Values of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC-LM), updated guidelines were recently published under the auspices of the IFCC and the Clinical and Laboratory Standards Institute. These updated guidelines add valuable improvements (transference, validation, and verification of reference intervals).
Interval Mapping of Multiple Quantitative Trait Loci
Jansen, Ritsert C.
1993-01-01
The interval mapping method is widely used for the mapping of quantitative trait loci (QTLs) in segregating generations derived from crosses between inbred lines. The efficiency of detecting and the accuracy of mapping multiple QTLs by using genetic markers are much increased by employing multiple Q
Linear chord diagrams on two intervals
DEFF Research Database (Denmark)
Andersen, Jørgen Ellegaard; Penner, Robert; Reidys, Christian
generating function ${\\bf C}_g(z)=z^{2g}R_g(z)/(1-4z)^{3g-{1\\over 2}}$ for chords attached to a single interval is algebraic, for $g\\geq 1$, where the polynomial $R_g(z)$ with degree at most $g-1$ has integer coefficients and satisfies $R_g(1/4)\
John Buridan's Sophismata and interval temporal semantics
Uckelman, S.L.; Johnston, S.
2010-01-01
In this paper we look at the suitability of modern interval-based temporal logic for modeling John Buridan’s treatment of tensed sentences in his Sophismata. Building on the paper [Øhrstrøm 1984], we develop Buridan’s analysis of temporal logic, paying particular attention to his notions of negation
Modal Transition Systems with Weight Intervals
DEFF Research Database (Denmark)
Juhl, Line; Larsen, Kim Guldstrand; Srba, Jiri
2012-01-01
We propose weighted modal transition systems, an extension to the well-studied specification formalism of modal transition systems, that allows one to express both required and optional behaviours of their intended implementations. In our extension we decorate each transition with a weight interval th...
Learned Interval Time Facilitates Associate Memory Retrieval
van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter
2017-01-01
The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue-time-target associations between cue-target pairs and specific cue-target intervals. During subsequent memory testing, participants showed increased accuracy of identifying…
Peridynamic Multiscale Finite Element Methods
Energy Technology Data Exchange (ETDEWEB)
Costa, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, Stan Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-12-01
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics there is a strong desire to couple local and nonlocal models to leverage the speed and state of the
Assessing performance and validating finite element simulations using probabilistic knowledge
Energy Technology Data Exchange (ETDEWEB)
Dolin, Ronald M.; Rodriguez, E. A. (Edward A.)
2002-01-01
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
Covariate-adjusted confidence interval for the intraclass correlation coefficient.
Shoukri, Mohamed M; Donner, Allan; El-Dali, Abdelmoneim
2013-09-01
A crucial step in designing a new study is to estimate the required sample size. For a design involving cluster sampling, the appropriate sample size depends on the so-called design effect, which is a function of the average cluster size and the intracluster correlation coefficient (ICC). It is well-known that under the framework of hierarchical and generalized linear models, a reduction in residual error may be achieved by including risk factors as covariates. In this paper we show that the covariate design, indicating whether the covariates are measured at the cluster level or at the within-cluster subject level affects the estimation of the ICC, and hence the design effect. Therefore, the distinction between these two types of covariates should be made at the design stage. In this paper we use the nested-bootstrap method to assess the accuracy of the estimated ICC for continuous and binary response variables under different covariate structures. The codes of two SAS macros are made available by the authors for interested readers to facilitate the construction of confidence intervals for the ICC. Moreover, using Monte Carlo simulations we evaluate the relative efficiency of the estimators and evaluate the accuracy of the coverage probabilities of a 95% confidence interval on the population ICC. The methodology is illustrated using a published data set of blood pressure measurements taken on family members.
Haematological reference intervals in a multiethnic population.
Directory of Open Access Journals (Sweden)
Angeli Ambayya
Full Text Available INTRODUCTION: Similar to other populations, full blood count (FBC) reference intervals in Malaysia are generally derived from non-Malaysian subjects. However, numerous studies have shown significant differences between and within populations, supporting the need for population-specific intervals. METHODS: Two thousand seven hundred and twenty-five apparently healthy adults of all ages, both genders and three principal races were recruited through voluntary participation. FBC was performed on two analysers, Sysmex XE-5000 and Unicel DxH 800, in addition to blood smears and haemoglobin analysis. Serum ferritin, soluble transferrin receptor and C-reactive protein assays were performed in selected subjects. All parameters of qualified subjects were tested for normality, followed by determination of reference intervals, measures of central tendency and dispersion along with point estimates for each subgroup. RESULTS: Complete data were available in 2440 subjects, of whom 56% (907 women and 469 men) were included in the reference interval calculation. Compared to other populations there were significant differences for haemoglobin, red blood cell count, platelet count and haematocrit in Malaysians. There were differences between men and women, and between younger and older men; unlike in other populations, haemoglobin was similar in younger and older women. However, ethnicity and smoking had little impact. 70% of anaemia in premenopausal women, 24% in postmenopausal women and 20% in males is attributable to iron deficiency. There was excellent correlation between the Sysmex XE-5000 and Unicel DxH 800. CONCLUSION: Our data confirm the importance of population-specific haematological parameters and support the need for local guidelines rather than adoption of generalised reference intervals and cut-offs.
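Reference intervals of this kind are conventionally taken as the central 95% of a healthy reference sample. A minimal nonparametric sketch (the study does not state its exact estimator, so the percentile method below is an assumption, equivalent to linear interpolation between closest ranks):

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval: central 95% of a healthy
    reference sample (CLSI-style percentile method; a sketch only)."""
    xs = sorted(values)

    def pct(p):
        # Linear interpolation between the two closest order statistics.
        k = (len(xs) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (xs[c] - xs[f]) * (k - f)

    return pct(lower_pct), pct(upper_pct)
```

A common rule of thumb in CLSI guidance is at least 120 reference subjects per partition (sex, age group) for nonparametric estimation, which is one reason such studies recruit thousands of volunteers.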
A Few Finite Trigonometric Sums
Directory of Open Access Journals (Sweden)
Chandan Datta
2017-02-01
Full Text Available Finite trigonometric sums occur in various branches of physics, mathematics, and their applications. These sums may contain various powers of one or more trigonometric functions. Sums with one trigonometric function are known; however, sums with products of trigonometric functions can become complicated, and may not have a simple expression in a number of cases. Some of these sums have interesting properties, and can have amazingly simple values. However, only some of them are available in the literature. We obtain a number of such sums using the method of residues.
The Finiteness of Moffatt vortices
Kalita, Jiten C; Panda, Swapnendu; Unal, Aynur
2016-01-01
To date, the sequence of vortices present in the solid corners of internal viscous incompressible flows, widely known as Moffatt vortices, was thought to be infinite. In this paper, we propose two topological equivalence classes of Moffatt vortices in terms of orientation-preserving homeomorphisms as well as critical point theory. We further quantify the centers of vortices as fixed points through the Brouwer fixed-point theorem and define the boundary of a vortex as a circle cell. With the aid of these new developments and some existing theorems in topology, we provide six proofs establishing that the sequence of Moffatt vortices cannot be infinite; in fact, it is at most finite.
Functionals of finite Riemann surfaces
Schiffer, Menahem
2014-01-01
This advanced monograph on finite Riemann surfaces, based on the authors' 1949-50 lectures at Princeton University, remains a fundamental book for graduate students. The Bulletin of the American Mathematical Society hailed the self-contained treatment as the source of "a plethora of ideas, each interesting in its own right," noting that "the patient reader will be richly rewarded." Suitable for graduate-level courses, the text begins with three chapters that offer a development of the classical theory along historical lines, examining geometrical and physical considerations, existence theo
Discrete and finite General Relativity
De Souza, M M; Souza, Manoelito M. de; Silveira, Robson N.
1999-01-01
We develop the General Theory of Relativity in a formalism with extended causality that describes physical interaction through discrete, transversal and localized pointlike fields. The homogeneous field equations are then solved for a finite, singularity-free, point-like field that we associate to a "classical graviton". The standard Einstein continuous formalism is retrieved by means of an averaging process, and its continuous solutions are determined by the chosen imposed symmetry. The Schwarzschild metric is obtained by the imposition of spherical symmetry on the averaged field.
Meng, Bin
2010-01-01
Operator-valued frames are a natural generalization of frames that have been used in quantum computing, packet encoding, etc. In this paper, we focus on developing the theory of operator-valued frames for finite Hilbert spaces. Some results concerning dilation, alternate duals, and existence of operator-valued frames are given. We then characterize the optimal operator-valued frames in the case in which one packet of data is lost in transmission. Finally, we construct the operator-valued frames $\{V_j\}_{j=1}^m$ with given frame operator $S$ satisfying $V_jV_j^*=\alpha_jI$, where the $\alpha_j$'s are positive numbers.
Simulating QCD at finite density
de Forcrand, Philippe
2009-01-01
In this review, I recall the nature and the inevitability of the "sign problem" which plagues attempts to simulate lattice QCD at finite baryon density. I present the main approaches used to circumvent the sign problem at small chemical potential. I sketch how one can predict analytically the severity of the sign problem, as well as the numerically accessible range of baryon densities. I review progress towards the determination of the pseudo-critical temperature T_c(mu), and towards the identification of a possible QCD critical point. Some promising advances with non-standard approaches are reviewed.
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
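The interval AND, OR and NOT operators built on the classical Fréchet inequalities have a compact closed form. A minimal sketch for interval-valued probabilities with no dependence assumption (function names are illustrative, not from the report):

```python
def p_and(a, b):
    """Frechet bounds for P(A and B), no dependence assumption.
    a and b are (lo, hi) probability intervals."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def p_or(a, b):
    """Frechet bounds for P(A or B), no dependence assumption."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

def p_not(a):
    """Complement of an interval probability."""
    return (1.0 - a[1], 1.0 - a[0])
```

For example, with P(A) in [0.6, 0.8] and P(B) in [0.7, 0.9], the conjunction is bounded by [0.3, 0.8]; these bounds are best possible when nothing is known about the dependence between A and B, which is the sense of "best possible" used in the abstract.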
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Circadian profile of QT interval and QT interval variability in 172 healthy volunteers
DEFF Research Database (Denmark)
Bonnemeier, Hendrik; Wiegand, Uwe K H; Braasch, Wiebke
2003-01-01
… of sleep. QT and R-R intervals revealed a characteristic day-night pattern. Diurnal profiles of QT interval variability exhibited a significant increase in the morning hours (6-9 AM; P …). … Beat-to-beat QT interval duration (QT, QTapex [QTa], Tend [Te]), variability (QTSD, QTaSD), and the mean R-R interval were determined from 24-hour ambulatory electrocardiograms after exclusion of artifacts and premature beats. All volunteers were fully active, awoke at approximately 7:00 AM, and had 6-8 hours … alterations mainly at daytime with normal aging. Furthermore, the diurnal course of the QT interval variability strongly suggests that it is related to cardiac sympathetic activity and to the reported diurnal pattern of malignant ventricular arrhythmias. …
Application of Interval Algorithm in Rural Power Network Planning
Institute of Scientific and Technical Information of China (English)
GU Zhuomu; ZHAO Yulin
2009-01-01
Rural power network planning is a complicated nonlinear combinatorial optimization problem based on load forecasting results, and the actual load is affected by many uncertain factors, which influence the optimization results of rural power network planning. To address these problems, the interval algorithm was used to modify the initial search method of the uncertain-load mathematical model in rural network planning. Meanwhile, a genetic/tabu search combination algorithm was adopted to optimize the initialized network. The sample analysis results showed that, compared with deterministic planning, the improved method was suitable for urban medium-voltage distribution network planning with consideration of uncertain load, and the planning results conformed to reality.
Asynchronous Preparation of Tonally Fused Intervals in Polyphonic Music
Directory of Open Access Journals (Sweden)
David Huron
2008-02-01
Full Text Available An analysis of a sample of polyphonic keyboard works by J.S. Bach shows that synchronous note onsets are avoided for those harmonic intervals that most promote tonal fusion (such as unisons, fifths and octaves). This pattern is consistent with perceptual research showing an interaction between onset synchrony and tonal fusion in the formation of auditory streams (e.g., Vos, 1995). The results provide further support for the notion that polyphonic music is organized so as to facilitate the perceptual independence of the concurrent parts.
Rules for confidence intervals of permeability coefficients for water flow in over-broken rock mass
Institute of Scientific and Technical Information of China (English)
Liu Weiqun; Fei Xiaodong; Fang Jingnian
2012-01-01
Based on the steady-state seepage method, we used the Mechanical Testing and Simulation 815.02 System and a self-designed seepage instrument for over-broken stone to measure the seepage properties of water flows in three types of crushed rock samples. Three methods of describing confidence intervals for permeability coefficients are presented: the secure interval, the calculated interval and the systemic interval. The lower bound of the secure interval can be applied to water inrush and the upper bound can solve the problem of connectivity. For the calculated interval, as the axial pressure increases, the length of the confidence interval is shortened and the upper and lower bounds are reduced. For the systemic interval, the length of its confidence interval, as well as the upper and lower bounds, clearly vary under low axial pressure but are fairly similar under high axial pressure. These three methods provide useful information and references for analyzing the permeability coefficient of over-broken rock.
Reference intervals for acetylated fetal hemoglobin in healthy newborns
Directory of Open Access Journals (Sweden)
Renata Paleari
2014-09-01
The acetylated fetal hemoglobin (AcHbF) derives from an enzyme-mediated post-translational modification occurring on the N-terminal glycine residues of γ-chains. At present, no established data are available on reference intervals for AcHbF in newborns. A total of 92 healthy infants with gestational ages between 37 and 41 weeks were selected for the establishment of AcHbF reference intervals. Blood samples were collected by heel prick during routine neonatal screening, and the hemoglobin pattern was analyzed by high-performance liquid chromatography. AcHbF results were then normalized for HbF content in order to account for differences in the hemoglobin switch. No difference was found in AcHbF values between genders (P=0.858). AcHbF results were as follows: 12.8±0.8% (mean±standard deviation), reference interval: 11.3-14.3%. This finding could facilitate further studies aimed at assessing the possible use of AcHbF, for instance as a fetal metabolic biomarker during pregnancy.
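The reported interval is close to what the conventional parametric formula, mean ± 1.96·SD, gives for the quoted statistics. A minimal sketch (the Gaussian z = 1.96 convention is an assumption; the abstract does not state which method was used):

```python
def reference_interval(mean, sd, z=1.96):
    """Parametric (Gaussian) 95% reference interval: mean ± z·SD."""
    return (mean - z * sd, mean + z * sd)

# Statistics quoted in the abstract: 12.8 ± 0.8 %
lo, hi = reference_interval(12.8, 0.8)
```

This yields roughly 11.2-14.4%, slightly wider than the reported 11.3-14.3%, so the authors may have used a nonparametric or trimmed estimate.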
Phase transitions in a one-dimensional multibarrier potential of finite range
Bar, D
2002-01-01
We have previously studied properties of a one-dimensional potential with $N$ equally spaced identical barriers in a (fixed) finite interval for both finite and infinite $N$. It was observed that scattering and spectral properties depend sensitively on the ratio $c$ of spacing to width of the barriers (even in the limit $N \to \infty$). We compute here the specific heat of an ensemble of such systems and show that there is critical dependence on this parameter, as well as on the temperature, strongly suggestive of phase transitions.
Transient and Stationary Losses in a Finite-Buffer Queue with Batch Arrivals
Directory of Open Access Journals (Sweden)
Andrzej Chydzinski
2012-01-01
We present an analysis of the number of losses, caused by buffer overflows, in a finite-buffer queue with batch arrivals and autocorrelated interarrival times. Using the batch Markovian arrival process, formulas for the average number of losses in a finite time interval and for the stationary loss ratio are derived. In addition, several numerical examples are presented, including illustrations of the dependence of the number of losses on the average batch size, buffer size, system load, autocorrelation structure, and time.
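A toy discrete-time analogue (not the batch Markovian arrival process analysis of the paper; batch sizes and the service rate below are made up for illustration) shows how overflow losses accumulate in a finite-buffer queue with batch arrivals:

```python
import random

def simulate_losses(buffer_size, batches, service_per_slot=1):
    """Count customers lost to buffer overflow in a slotted
    finite-buffer queue. `batches` gives one batch size per slot."""
    queue = 0
    losses = 0
    for batch in batches:
        # admit as much of the batch as the buffer allows
        admitted = min(batch, buffer_size - queue)
        losses += batch - admitted
        queue += admitted
        # serve up to `service_per_slot` customers per slot
        queue = max(0, queue - service_per_slot)
    return losses

random.seed(1)
batches = [random.randint(0, 3) for _ in range(1000)]
total = simulate_losses(buffer_size=5, batches=batches)
```

Even this crude model reproduces the qualitative dependence in the abstract: larger batches or a smaller buffer increase the loss count over a finite interval.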
The sampling theory developed and described by Pierre Gy is compared to design-based classical finite sampling methods for estimation of a ratio of random variables. For samples of materials that can be completely enumerated, the methods are asymptotically equivalent. Gy extends t...
Finite Unification: Theory and Predictions
Directory of Open Access Journals (Sweden)
Sven Heinemeyer
2010-06-01
All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) which not only realise an old field-theoretic dream but also have remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensionless couplings that guarantee the vanishing of all beta-functions in certain N=1 GUTs even to all orders. Furthermore, developments in the soft supersymmetry-breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory too. Based on the above theoretical framework, phenomenologically consistent FUTs have been constructed. Here we present FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Higgs mass prediction of one of the models, which is expected to be tested at the LHC.
Biset functors for finite groups
Bouc, Serge
2010-01-01
This volume exposes the theory of biset functors for finite groups, which yields a unified framework for operations of induction, restriction, inflation, deflation and transport by isomorphism. The first part recalls the basics on biset categories and biset functors. The second part is concerned with the Burnside functor and the functor of complex characters, together with semisimplicity issues and an overview of Green biset functors. The last part is devoted to biset functors defined over p-groups for a fixed prime number p. This includes the structure of the functor of rational representations and rational p-biset functors. The last two chapters expose three applications of biset functors to long-standing open problems, in particular the structure of the Dade group of an arbitrary finite p-group. This book is intended for both students and researchers, as it gives a didactic exposition of the basics and a rewriting of advanced results in the area, with some new ideas and proofs.
Quasispecies theory for finite populations
Park, Jeong-Man; Muñoz, Enrique; Deem, Michael W.
2010-01-01
We present stochastic, finite-population formulations of the Crow-Kimura and Eigen models of quasispecies theory, for fitness functions that depend in an arbitrary way on the number of mutations from the wild type. We include back mutations in our description. We show that the fluctuation of the population numbers about the average values is exceedingly large in these physical models of evolution. We further show that horizontal gene transfer reduces by orders of magnitude the fluctuations in the population numbers and reduces the accumulation of deleterious mutations in the finite population due to Muller’s ratchet. Indeed, the population sizes needed to converge to the infinite population limit are often larger than those found in nature for smooth fitness functions in the absence of horizontal gene transfer. These analytical results are derived for the steady state by means of a field-theoretic representation. Numerical results are presented that indicate horizontal gene transfer speeds up the dynamics of evolution as well.
... several times a day using capillary blood sampling. Disadvantages to capillary blood sampling include: Only a limited ...
Energy Technology Data Exchange (ETDEWEB)
Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.; Hogan, Emilie A.
2013-08-01
To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen's algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of relations between intervals, and extend the results about interval graphs to intervals over posets.
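Allen's algebra on the real line can be sketched directly. The function below is an illustrative implementation (not the poset generalization of the paper): it classifies the relation between two non-degenerate closed intervals, returning one of the 13 basic Allen relations:

```python
def allen_relation(a, b):
    """Allen relation between closed real intervals a = (a1, a2) and
    b = (b1, b2), with a1 < a2 and b1 < b2. Returns one of the 13
    basic relations, named from a's point of view."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:
        return "before"
    if b2 < a1:
        return "after"
    if a2 == b1:
        return "meets"
    if b2 == a1:
        return "met-by"
    if (a1, a2) == (b1, b2):
        return "equals"
    if a1 == b1:                       # shared left endpoint
        return "starts" if a2 < b2 else "started-by"
    if a2 == b2:                       # shared right endpoint
        return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 and b2 < a2:
        return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"
```

For example, `allen_relation((0, 2), (1, 3))` classifies as "overlaps"; the paper's question is which of these 13 relations survive, and in what form, when interval endpoints come from an arbitrary poset rather than the real line.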
Prediction with measurement errors in finite populations.
Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San
2012-02-01
We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors.
Delayed Interval Delivery in a Triplet Pregnancy
Directory of Open Access Journals (Sweden)
Sheng-Po Kao
2006-02-01
Due to a surge in the availability of assisted reproductive techniques (ART), the incidence of multiple pregnancies is increasing. Preterm labor is a major complication in such pregnancies. Preterm delivery of the first fetus is often followed by delivery of the remaining fetuses. However, conservative management and delayed interval delivery of the remaining fetuses might allow time for fetal lung maturation and reduce perinatal morbidity. A 32-year-old female had a quadruplet pregnancy after receiving ART. Fetal reduction to a triplet pregnancy was performed at 11 weeks of gestation. The remaining triplet pregnancy was stable until 29 weeks of gestation, when the first triplet was delivered after spontaneous rupture of membranes. Under intensive monitoring, the remaining 2 fetuses were delivered by cesarean section at 31 weeks of gestation. Only the first fetus had retinopathy after discharge. In conclusion, delayed interval delivery of the remaining fetuses should be attempted after preterm delivery of the first fetus.
On singular interval-valued iteration groups
Directory of Open Access Journals (Sweden)
Marek Cezary Zdun
2016-09-01
Let $I=(a,b)$ and let $L$ be a nowhere dense perfect set containing the ends of the interval $I$, and let $\varphi: I \to \mathbb{R}$ be a non-increasing continuous surjection, constant on the components of $I \setminus L$, such that the closures of these components are the maximal intervals of constancy of $\varphi$. The family $\{F^t, t \in \mathbb{R}\}$ of the interval-valued functions $F^t(x) := \varphi^{-1}(t + \varphi(x))$, $x \in I$, forms a set-valued iteration group. We determine a maximal dense subgroup $T \subsetneq \mathbb{R}$ such that the set-valued subgroup $\{F^t, t \in T\}$ has some regular properties. In particular, the mappings $T \ni t \mapsto F^t(x)$ for $t \in T$ possess selections $f^t(x) \in F^t(x)$ which form a disjoint group of continuous functions.
Two-sorted Point-Interval Temporal Logics
DEFF Research Database (Denmark)
Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido
2011-01-01
There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop explicitly two-sorted point-interval temporal logical framework...... their expressiveness, comparative to interval-based logics, and the complexity of their satisfiability problems. In particular, we identify some previously not studied and potentially interesting interval logics. © 2011 Elsevier B.V....
Option Pricing Method in a Market Involving Interval Number Factors
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
A method for pricing options in a market with interval number factors is proposed. The no-arbitrage principle in the interval-number-valued market and a rule for judging the reasonability of a price interval are given. Using the method, the price interval is obtained when the riskless interest rate and the volatility in the B-S setting are interval numbers. The price interval from the binomial tree model when the key factors u, d, R are all interval numbers is also discussed.
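A one-step binomial sketch shows how interval-valued factors u, d, R propagate to a price interval. This uses naive interval arithmetic, which is an assumption on my part: the paper's construction may bound the price more tightly, since naive interval arithmetic ignores dependencies between repeated occurrences of a variable.

```python
class Interval:
    """Closed interval [lo, hi] with naive interval arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):                  # assumes 0 is not in o
        return self * Interval(1 / o.hi, 1 / o.lo)

def one_step_call(S, K, u, d, R):
    """Interval-valued one-step binomial call price.
    S, K are scalars; u, d, R are Interval factors with d < R < u."""
    one = Interval(1.0, 1.0)
    p = (R - d) / (u - d)                      # risk-neutral probability interval
    Cu = Interval(max(S * u.lo - K, 0), max(S * u.hi - K, 0))
    Cd = Interval(max(S * d.lo - K, 0), max(S * d.hi - K, 0))
    return (p * Cu + (one - p) * Cd) / R

# Hypothetical interval factors for illustration
price = one_step_call(100.0, 100.0,
                      Interval(1.2, 1.3), Interval(0.8, 0.9),
                      Interval(1.05, 1.06))
```

The result is a price interval rather than a single number; the paper's "reasonability" rule would then be applied to decide whether such an interval is acceptable in an arbitrage-free sense.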
Learning about confidence intervals with software R
Gariela Gonçalves; Luís Afonso; Marta Ferreira; Teresa Ferro; Nascimento, Maria M.
2013-01-01
This work studied the feasibility of implementing a teaching method that employs software in a Computational Mathematics course, involving students and teachers through the use of the statistical software R to carry out practical work, thereby strengthening traditional teaching. Statistical inference, namely the determination of confidence intervals, was the content selected for this experience. It was intended to show, first of all, that it is possible to promote, through t...
Understanding Confidence Intervals With Visual Representations
Navruz, Bilgin; DELEN, Erhan
2014-01-01
In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...
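The "correct interpretation" at stake in both of the preceding records is that the 95% refers to the procedure, not to any single computed interval. A quick simulation (hypothetical Gaussian data, with σ treated as known for simplicity) makes this concrete:

```python
import random

def ci_covers(true_mean, sd, n, z=1.96):
    """Draw one sample of size n and report whether the normal-theory
    95% CI for the mean covers the true mean (sd assumed known)."""
    sample = [random.gauss(true_mean, sd) for _ in range(n)]
    mean = sum(sample) / n
    half = z * sd / n ** 0.5
    return mean - half <= true_mean <= mean + half

random.seed(42)
trials = 10_000
coverage = sum(ci_covers(10.0, 2.0, 25) for _ in range(trials)) / trials
# coverage is close to 0.95: about 95% of the *intervals* capture the
# true mean; no probability statement attaches to any one of them
```

This is the visual usually drawn in textbooks as a stack of intervals, roughly one in twenty of which misses the true value.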
Periodicity In The Intervals Between Primes
2015-07-02
…simple proportionality in Eq. (32), such that the basic form of D_c over the range (0, Γ] is repeated in progressively smaller scale over subsequent… striations are an aggregate visual effect that could be produced only by finely-grained periodicity in the δ-dependence of D_p. Figure (6) is a… a randomized analogue of the first 10^4 primes… indicates a certain 'Russian dolls'-style self-similarity in the periodicity of the prime intervals.
Interval Mathematics Applied to Critical Point Transitions
Stradi, Benito A.
2012-01-01
The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear, with a multiplicity of solutions to the critical point equations. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point ...
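The exclusion step behind such verified computations can be sketched for a single equation: if an interval enclosure of f over a box does not contain zero, that box provably contains no root. A minimal one-dimensional sketch (naive interval evaluation of a polynomial, not the full critical-point system of the abstract):

```python
def ieval_poly(coeffs, lo, hi):
    """Enclosure of p(x) = sum(c_k * x**k) for x in [lo, hi], computed
    with naive interval arithmetic (may be wider than the true range)."""
    flo = fhi = 0.0
    for k, c in enumerate(coeffs):
        # enclosure of x**k over [lo, hi]
        cands = [lo ** k, hi ** k]
        if k and k % 2 == 0 and lo < 0 < hi:
            cands.append(0.0)       # even powers dip to 0 across the origin
        t = (c * min(cands), c * max(cands))
        flo += min(t)
        fhi += max(t)
    return flo, fhi

def possible_root_boxes(coeffs, lo, hi, depth=12):
    """Bisect [lo, hi], discarding sub-boxes whose enclosure excludes 0.
    Every discarded region is rigorously root-free."""
    flo, fhi = ieval_poly(coeffs, lo, hi)
    if flo > 0 or fhi < 0:          # 0 not in the enclosure: no root here
        return []
    if depth == 0:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    return (possible_root_boxes(coeffs, lo, mid, depth - 1)
            + possible_root_boxes(coeffs, mid, hi, depth - 1))

# x^2 - 2 on [0, 3]: surviving boxes cluster around sqrt(2) ~ 1.414
boxes = possible_root_boxes([-2.0, 0.0, 1.0], 0.0, 3.0)
```

The real critical-point work couples this exclusion test with interval Newton contraction on systems of equations, but the guarantee is the same: nothing outside the returned boxes can be a solution.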
Prolonged QT interval in Rett syndrome
1999-01-01
Rett syndrome is a severe neurodevelopmental disorder of unknown aetiology. A prolonged QT interval has been described previously in patients with Rett syndrome. To investigate QT prolongation and the presence of cardiac tachyarrhythmias in Rett syndrome, electrocardiography and 24-hour Holter monitoring were performed prospectively in a cohort of 34 girls with Rett syndrome. The corrected QT value was prolonged in nine patients. Compared with a group of healthy controls of a...
Systolic Time Intervals and New Measurement Methods.
Tavakolian, Kouhyar
2016-06-01
Systolic time intervals have been used to detect and quantify the directional changes of left ventricular function. New methods of recording these cardiac timings, which are less cumbersome, have been recently developed and this has created a renewed interest and novel applications for these cardiac timings. This manuscript reviews these new methods and addresses the potential for the application of these cardiac timings for the diagnosis and prognosis of different cardiac diseases.
Finite element differential forms on cubical meshes
Arnold, Douglas N
2012-01-01
We develop a family of finite element spaces of differential forms defined on cubical meshes in any number of dimensions. The family contains elements of all polynomial degrees and all form degrees. In two dimensions, these include the serendipity finite elements and the rectangular BDM elements. In three dimensions they include a recent generalization of the serendipity spaces, and new H(curl) and H(div) finite element spaces. Spaces in the family can be combined to give finite element subcomplexes of the de Rham complex which satisfy the basic hypotheses of the finite element exterior calculus, and hence can be used for stable discretization of a variety of problems. The construction and properties of the spaces are established in a uniform manner using finite element exterior calculus.
The free abelian topological group and the free locally convex space on the unit interval
Leiderman, A G; Pestov, V G
1992-01-01
We give a complete description of the topological spaces $X$ such that the free abelian topological group $A(X)$ embeds into the free abelian topological group $A(I)$ of the closed unit interval. In particular, the free abelian topological group $A(X)$ of any finite-dimensional compact metrizable space $X$ embeds into $A(I)$. The situation turns out to be somewhat different for free locally convex spaces. Some results for the spaces of continuous functions with the pointwise topology are also obtained. Proofs are based on the classical Kolmogorov's Superposition Theorem.