Czech Academy of Sciences Publication Activity Database
Krištoufek, Ladislav
4/2010, No. 3 (2010), pp. 236-250. ISSN 1802-4696. R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965. Grant - others: GA UK (CZ) 118310. Institutional research plan: CEZ:AV0Z10750506. Keywords: rescaled range analysis; detrended fluctuation analysis; Hurst exponent; long-range dependence. Subject RIV: AH - Economics. http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf
Fung, Tak; Keenan, Kevin
2014-01-01
The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.) occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies of each population. These regions are in turn used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method in informing sampling design and accounting for sampling uncertainty in studies of population genetics, which is important for scientific hypothesis testing and for risk-based natural resource management.
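The record above concerns interval estimates for an allele frequency sampled from a finite diploid population. As a rough illustration of the idea only (not the paper's rigorous construction), the sketch below computes a normal-approximation confidence interval with a finite-population correction; the inputs `k`, `n` and `N` are hypothetical.

```python
import math

def allele_freq_ci(k, n, N, z=1.96):
    """Approximate 95% CI for a population allele frequency.

    k: copies of the allele observed in the sample
    n: diploid individuals sampled (2n allele copies)
    N: diploid population size (2N allele copies)
    Normal approximation with finite-population correction;
    the paper's exact method is more rigorous than this sketch.
    """
    m, M = 2 * n, 2 * N           # sampled and total allele copies
    p = k / m                     # sample allele frequency
    fpc = (M - m) / (M - 1)       # finite-population correction
    se = math.sqrt(p * (1 - p) / m * fpc)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# hypothetical numbers: 12 copies in 30 individuals from a population of 500
lo, hi = allele_freq_ci(k=12, n=30, N=500)
```

Note that as `n` approaches `N` the correction drives the interval width to zero, reflecting a census of the population.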
Variational collocation on finite intervals
International Nuclear Information System (INIS)
Amore, Paolo; Cervantes, Mayra; Fernandez, Francisco M
2007-01-01
In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schrödinger equation, and we have compared the performance of the present approach with that of others.
Integral equations with difference kernels on finite intervals
Sakhnovich, Lev A
2015-01-01
This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener and E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous problems of M. Kac on stable processes. In this second edition these results are extensively generalized to include the case of all Lévy processes. We present the convolution expression for the well-known Itô formula of the generator operator, a convolution expression that has proven to be fruitful...
A summary of maintenance policies for a finite interval
International Nuclear Information System (INIS)
Nakagawa, T.; Mizutani, S.
2009-01-01
Maintenance policies for a finite time span are of practical importance, because the working times of most units in real applications are finite. This paper converts the usual maintenance models to finite-interval maintenance models. It is more difficult to derive theoretically optimal policies for a finite time span than for an infinite one. Three standard models, periodic replacement with minimal repair, block replacement, and simple replacement, are transformed into finite replacement models. Further, optimal periodic and sequential policies for an imperfect preventive maintenance model and an inspection model over a finite time span are considered. Optimal policies for each model are derived analytically and computed numerically.
Robust weak measurements on finite samples
International Nuclear Information System (INIS)
Tollaksen, Jeff
2007-01-01
A new weak measurement procedure is introduced for finite samples; it yields accurate weak values that lie outside the range of eigenvalues and does not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing the sample size. It can also extend the strength of the coupling between the system and the measuring device to a new regime.
Generalised model based confidence intervals in two stage cluster sampling
Directory of Open Access Journals (Sweden)
Christopher Ouma Onyango
2010-09-01
Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.
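The record describes bootstrap confidence intervals for a finite population total under cluster sampling. The sketch below illustrates only the simplest percentile-bootstrap idea with an expansion estimator over sampled cluster totals; the model-based estimator with dependent auxiliary values used in the paper is considerably more involved, and all numbers here are made up.

```python
import random

def bootstrap_total_ci(cluster_totals, n_clusters_pop, B=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for a finite-population total from a
    sample of cluster totals (illustrative only; not the paper's
    model-based estimator with auxiliary variables)."""
    rng = random.Random(seed)
    m = len(cluster_totals)
    est = n_clusters_pop * sum(cluster_totals) / m   # expansion estimator
    boots = []
    for _ in range(B):
        resample = [rng.choice(cluster_totals) for _ in range(m)]
        boots.append(n_clusters_pop * sum(resample) / m)
    boots.sort()
    lo = boots[int((alpha / 2) * B)]
    hi = boots[int((1 - alpha / 2) * B) - 1]
    return est, (lo, hi)

# five hypothetical sampled cluster totals out of 40 clusters in the population
est, (lo, hi) = bootstrap_total_ci([120, 95, 150, 110, 130], n_clusters_pop=40)
```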
Complexity of a kind of interval continuous self-map of finite type
International Nuclear Information System (INIS)
Wang Lidong; Chu Zhenyan; Liao Gongfu
2011-01-01
Highlights: → We find that the Hausdorff dimension for an interval continuous self-map f of finite type on a non-wandering set is s ∈ (0, 1). → f|Ω(f) has positive topological entropy. → f|Ω(f) is chaotic in several senses (Devaney chaos, Kato chaos, two-point distributional chaos, and so on). - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type such that the Hausdorff dimension is an arbitrary number in the interval (0, 1), discuss various chaotic properties of the map, and examine the relations between the chaotic set and the set of recurrent points.
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
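The three interval sampling methods named in this record can be simulated in a few lines. The sketch below scores a synthetic one-second-resolution event stream with momentary time sampling, partial-interval recording and whole-interval recording; the i.i.d. event model and all parameter values are illustrative assumptions, not the published simulation.

```python
import random

def simulate_interval_methods(obs_len=600, interval=10, p_on=0.3, seed=7):
    """Score one session (1-second resolution) with three interval
    methods and return each method's estimate of the time-on
    proportion. Events are i.i.d. per second here; real behavior
    streams are temporally structured, unlike this toy model."""
    rng = random.Random(seed)
    stream = [rng.random() < p_on for _ in range(obs_len)]
    true_prop = sum(stream) / obs_len
    mts = pir = wir = 0
    n = obs_len // interval
    for i in range(n):
        chunk = stream[i * interval:(i + 1) * interval]
        mts += chunk[-1]          # momentary: only the last instant counts
        pir += any(chunk)         # partial: any occurrence scores the interval
        wir += all(chunk)         # whole: event must fill the interval
    return true_prop, mts / n, pir / n, wir / n

true_p, mts, pir, wir = simulate_interval_methods()
```

Even this toy run reproduces the qualitative finding that partial-interval recording overestimates and whole-interval recording underestimates event duration.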
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
Relativistic rise measurements with very fine sampling intervals
International Nuclear Information System (INIS)
Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.
1980-01-01
The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than in any previous study. A single-layer drift chamber was used, and tracks of 1 m length were simulated by combining samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements.
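The "averaging technique" mentioned in this record is typically a truncated mean, which suppresses the long Landau tail of the ionization-loss distribution. The sketch below shows the generic estimator on a crude synthetic pulse-height sample; the keep fraction and the toy distribution are assumptions, not the study's actual data or settings.

```python
import random

def truncated_mean(samples, keep=0.6):
    """Truncated-mean dE/dx estimator: discard the largest pulse
    heights (the Landau tail) and average the rest. The keep
    fraction of 0.6 is an assumed value, not the study's choice."""
    s = sorted(samples)
    k = max(1, int(len(s) * keep))
    return sum(s[:k]) / k

rng = random.Random(0)
# crude stand-in for ionization samples: Gaussian core plus an occasional long tail
samples = [abs(rng.gauss(1.0, 0.1)) + (rng.random() < 0.1) * rng.expovariate(0.5)
           for _ in range(100)]
mean_all = sum(samples) / len(samples)
mean_trunc = truncated_mean(samples)
```

The truncated mean sits below the plain mean because the discarded tail only ever inflates the average; this is what stabilizes the dE/dx estimate per track.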
Interpolating and sampling sequences in finite Riemann surfaces
Ortega-Cerda, Joaquim
2007-01-01
We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.
An Improvement to Interval Estimation for Small Samples
Directory of Open Access Journals (Sweden)
SUN Hui-Ling
2017-02-01
Full Text Available Because it is difficult and complex to determine the probability distribution of small samples, traditional probability theory is ill-suited to parameter estimation for small samples. The Bayes Bootstrap method is commonly used in engineering practice, but it has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the method extends the number of samples by numerical simulation without changing the underlying character of the original small sample, and the new method can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to model specific small-sample problems. The effectiveness and practicability of the improved Bootstrap method are demonstrated.
Number of core samples: Mean concentrations and confidence intervals
International Nuclear Information System (INIS)
Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.
1995-01-01
This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to "characterize" the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unitless measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composite samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
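The relative half-width described in this record is straightforward to compute. The sketch below uses a fixed z = 1.96 instead of the exact small-sample t-quantile, and the sample values are invented; it only illustrates how the RHW shrinks as the number of samples grows.

```python
import math
import statistics

def relative_half_width(samples):
    """Relative half-width (RHW) of an approximate 95% CI on the
    mean: (z * s / sqrt(n)) / mean, a unitless fraction. Uses
    z = 1.96 rather than the t-quantile appropriate for small n."""
    n = len(samples)
    mean = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(n)
    return 1.96 * se / mean

rhw_4 = relative_half_width([10, 12, 9, 11])        # 4 hypothetical "core samples"
rhw_16 = relative_half_width([10, 12, 9, 11] * 4)   # same spread, 4x the samples
```

Quadrupling the number of samples roughly halves the RHW, matching the sqrt(n) behaviour behind the report's tables.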
Gap probabilities for edge intervals in finite Gaussian and Jacobi unitary matrix ensembles
International Nuclear Information System (INIS)
Witte, N.S.; Forrester, P.J.
1999-01-01
The probabilities for gaps in the eigenvalue spectrum of the finite-dimension N x N random matrix Hermite and Jacobi unitary ensembles on some single and disconnected double intervals are found. These are cases where a reflection symmetry exists and the probability factors into two other related probabilities defined on single intervals. Our investigation uses the system of partial differential equations arising from the Fredholm determinant expression for the gap probability and the differential-recurrence equations satisfied by Hermite and Jacobi orthogonal polynomials. In our study we find second- and third-order nonlinear ordinary differential equations defining the probabilities in the general N case, specific explicit solutions for N = 1 and N = 2, asymptotic expansions, scaling at the edge of the Hermite spectrum as N → ∞, and the Jacobi to Hermite limit, both of which make correspondence to other cases reported here or known previously. (authors)
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
Directory of Open Access Journals (Sweden)
Jordi Marcé-Nogué
2017-10-01
Full Text Available Background: In this paper, we propose a new method, named the intervals' method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods: The intervals' method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards, these newly generated variables can be analysed using multivariate methods. Results: Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals' method is a powerful tool to characterise biomechanical performance and how it relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion: We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
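The core of the intervals' method, expressing each stress interval as the percentage of mandible area it occupies, can be sketched as follows; the element stresses, areas and interval edges below are invented for illustration.

```python
def interval_variables(stresses, areas, breaks):
    """Percentage of total area falling in each stress interval.
    `breaks` gives the interval edges; stresses at or above the
    last edge are lumped into the final interval, a simplification
    of how the published method handles the upper tail."""
    total = sum(areas)
    vars_ = [0.0] * (len(breaks) - 1)
    for s, a in zip(stresses, areas):
        for j in range(len(breaks) - 1):
            in_interval = breaks[j] <= s < breaks[j + 1]
            in_tail = j == len(breaks) - 2 and s >= breaks[-1]
            if in_interval or in_tail:
                vars_[j] += a
                break
    return [100.0 * v / total for v in vars_]

# four hypothetical elements with per-element stress and area
row = interval_variables([0.5, 1.2, 2.8, 3.1], [2.0, 1.0, 1.0, 1.0], [0, 1, 2, 3])
```

One such row per specimen then feeds directly into an ordinary multivariate analysis (e.g. PCA), which is the point of the method.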
A new variable interval schedule with constant hazard rate and finite time range.
Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco
2018-05-27
We propose a new variable interval (VI) schedule that achieves a constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each shorter than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
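A minimal simulation of the schedule described above, assuming the linear reinforcement rule takes the form p(t) = t / (2T); the paper specifies the exact rule, so this form is an assumption for illustration.

```python
import random

def vi_trial(T, rng):
    """One trial of the bounded VI schedule: duration t ~ U(0, 2T),
    reinforced with probability t / (2T) (assumed linear rule),
    which yields a constant hazard of reinforcement in time."""
    t = rng.uniform(0, 2 * T)
    reinforced = rng.random() < t / (2 * T)
    return t, reinforced

rng = random.Random(42)
trials = [vi_trial(30, rng) for _ in range(10000)]  # T = 30 s, hypothetical
rate = sum(r for _, r in trials) / len(trials)
```

Under this rule half the trials are reinforced on average, and every trial stays within the bounded range [0, 2T).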
Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection
Directory of Open Access Journals (Sweden)
T. La-inchua
2017-01-01
Full Text Available We investigate the finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in the interconnection. The time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in the interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities, which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems; an uncertain ensemble combining non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties thus arises. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) method and a second-order fuzzy interval perturbation FE/SEA (SFIPFE/SEA) method are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining the second-order terms of the Taylor series, neglecting all mixed second-order terms. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extremum solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
Directory of Open Access Journals (Sweden)
Tudor DRUGAN
2003-08-01
Full Text Available The aim of this paper was to present the usefulness of the binomial distribution in studying contingency tables, together with the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the key medical parameters reported in the medical literature and expressing them in contingency-table units based on their mathematical expressions reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different kinds of information from the computed confidence interval for a specified method (confidence interval boundaries, percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. Expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation, so the surface maps come out quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent such triangular surface plots graphically. All the implementations described above were used to compute confidence intervals and to estimate their performance for various binomial-distribution sample sizes and variables.
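For a single binomial proportion of the kind discussed above, a standard interval that behaves better than the naive normal approximation is the Wilson score interval. The sketch below is a generic textbook construction, not the paper's PHP implementation.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n; one
    standard choice when the normal approximation is doubtful,
    as in the small-sample cases discussed in the record above."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# hypothetical contingency-table cell: 8 events out of 40 observations
lo, hi = wilson_ci(8, 40)
```

Unlike the normal-approximation interval, the Wilson interval never extends below 0 or above 1, even for extreme proportions.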
Dujardin, G. M.
2009-08-12
This paper deals with the asymptotic behaviour of the solutions of linear initial boundary value problems with constant coefficients on the half-line and on finite intervals. We assume that the boundary data are periodic in time and we investigate whether the solution becomes time-periodic after sufficiently long time. Using Fokas' transformation method, we show that, for the linear Schrödinger equation, the linear heat equation and the linearized KdV equation on the half-line, the solutions indeed become periodic for large time. However, for the same linear Schrödinger equation on a finite interval, we show that the solution, in general, is not asymptotically periodic; actually, the asymptotic behaviour of the solution depends on the commensurability of the time period T of the boundary data with the square of the length of the interval. © 2009 The Royal Society.
Compressive Sampling of EEG Signals with Finite Rate of Innovation
Directory of Open Access Journals (Sweden)
Poh Kok-Kiong
2010-01-01
Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI) which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, attaining low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework for acquiring electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
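At the heart of FRI sampling of a stream of K Diracs is the annihilating-filter (Prony) step, which recovers the Dirac locations from 2K+1 Fourier coefficients. The noise-free textbook sketch below is generic and is not the EEG-specific pipeline of the paper; the Dirac positions and amplitudes are invented.

```python
import numpy as np

def recover_dirac_locations(x_hat, K):
    """Annihilating-filter (Prony) step of FRI sampling: given
    consecutive Fourier coefficients of a stream of K Diracs on
    [0, 1), recover the Dirac locations. Noise-free sketch only."""
    M = len(x_hat)
    # Toeplitz system: sum_j h[j] * x_hat[i + K - j] = 0, with h[0] = 1
    A = np.array([[x_hat[i + K - j] for j in range(K + 1)]
                  for i in range(M - K)])
    h_rest, *_ = np.linalg.lstsq(A[:, 1:], -A[:, 0], rcond=None)
    h = np.concatenate(([1.0], h_rest))
    roots = np.roots(h)           # roots are exp(-2*pi*i*t_k)
    return np.sort(np.mod(np.angle(roots) / (-2 * np.pi), 1.0))

# two hypothetical Diracs at t = 0.2 and 0.7 with amplitudes 1.0 and 0.5
t = np.array([0.2, 0.7])
a = np.array([1.0, 0.5])
m = np.arange(-2, 3)              # 2K + 1 = 5 Fourier coefficients
x_hat = (a * np.exp(-2j * np.pi * np.outer(m, t))).sum(axis=1)
est = recover_dirac_locations(x_hat, K=2)
```

The locations come out exactly because the data are noise-free; practical FRI schemes add a denoising step (e.g. Cadzow iterations) before the filter is solved.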
Robust L2-L∞ Filtering of Time-Delay Jump Systems with Respect to the Finite-Time Interval
Directory of Open Access Journals (Sweden)
Shuping He
2011-01-01
Full Text Available This paper studies the problem of stochastic finite-time boundedness and disturbance attenuation for a class of linear time-delayed systems with Markov jumping parameters. Sufficient conditions are provided to solve this problem. L2-L∞ filters are designed for time-delayed Markov jump linear systems with and without uncertain parameters such that the resulting filtering error dynamic system is stochastically finite-time bounded and has finite-time interval disturbance attenuation γ for all admissible uncertainties, time delays, and unknown disturbances. By using a stochastic Lyapunov-Krasovskii functional approach, it is shown that the filter design problem reduces to the solution of a set of coupled linear matrix inequalities. Simulation examples are included to demonstrate the potential of the proposed results.
International Nuclear Information System (INIS)
Yagasaki, Kazuyuki
2007-01-01
In experiments for single and coupled pendula, we demonstrate the effectiveness of a new control method based on dynamical systems theory for stabilizing unstable aperiodic trajectories defined on infinite- or finite-time intervals. The basic idea of the method is similar to that of the OGY method, which is a well-known, chaos control method. Extended concepts of the stable and unstable manifolds of hyperbolic trajectories are used here
Directory of Open Access Journals (Sweden)
Matías Ernesto Barber
2016-06-01
Full Text Available The spatial sampling interval, as related to the ability to digitize a soil profile with a certain number of features per unit length, depends on the profiling technique itself. From a variety of profiling techniques, roughness parameters are estimated at different sampling intervals. Since soil profiles have continuous spectral components, it is clear that roughness parameters are influenced by the sampling interval of the measurement device employed. In this work, we address the question of which sampling interval profiles need to be measured at to accurately account for the microwave response of agricultural surfaces. For this purpose, a 2-D laser profiler was built and used to measure surface soil roughness at field scale over agricultural sites in Argentina. Sampling intervals ranged from large (50 mm) to small (1 mm), with several intermediate values. Large- and intermediate-sampling-interval profiles were synthetically derived from the nominal 1 mm ones. With these data, the effect of sampling-interval-dependent roughness parameters on backscatter response was assessed using the theoretical backscatter model IEM2M. Simulations demonstrated that variations of the roughness parameters depended on the working wavelength and were less important at L-band than at C- or X-band. In any case, an underestimation of the backscattering coefficient of about 1-4 dB was observed at larger sampling intervals. As a general rule, a sampling interval of 15 mm can be recommended for L-band and 5 mm for C-band.
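The dependence described above can be illustrated with a short sketch: compute two standard roughness parameters, the RMS height and the correlation length, from a profile decimated to coarser sampling intervals. The synthetic profile, window size, and amplitudes below are illustrative assumptions, not the field data from the study.

```python
import math
import random

def rms_height(z):
    """RMS height s: standard deviation of surface heights about their mean."""
    mean = sum(z) / len(z)
    return math.sqrt(sum((v - mean) ** 2 for v in z) / len(z))

def correlation_length(z, dx):
    """Lag at which the normalized autocorrelation first drops below 1/e."""
    mean = sum(z) / len(z)
    var = sum((v - mean) ** 2 for v in z)
    for lag in range(1, len(z)):
        acf = sum((z[i] - mean) * (z[i + lag] - mean)
                  for i in range(len(z) - lag)) / var
        if acf < 1 / math.e:
            return lag * dx
    return (len(z) - 1) * dx

# Synthetic 1 m profile at a 1 mm nominal interval: moving-average-smoothed
# noise as a stand-in for a measured soil profile (amplitudes are arbitrary).
random.seed(42)
noise = [random.gauss(0, 5) for _ in range(1100)]
k = 50  # smoothing window -> spatially correlated surface
profile = [sum(noise[i:i + k]) / k for i in range(1000)]

# Decimate the nominal profile to emulate coarser sampling intervals (mm).
for step_mm in (1, 5, 15, 50):
    sub = profile[::step_mm]
    print(step_mm, round(rms_height(sub), 3),
          round(correlation_length(sub, step_mm), 1))
```

Running this shows how the estimated parameters drift as the sampling interval coarsens, which is the mechanism behind the backscatter differences the abstract reports.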
Huh, Joonsuk; Yung, Man-Hong
2017-08-07
Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlation due to the Duschinsky rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to the various versions of Boson Sampling sharing similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among the various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...
Interval estimation methods of the mean in small sample situation and the results' comparison
International Nuclear Information System (INIS)
Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen
2009-01-01
The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method, and the spread method of the Empirical Characteristic distribution function, are described. Numerical calculation of the sample mean intervals is carried out for sample sizes of 4, 5, and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
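Of the methods listed above, the percentile bootstrap is the simplest to sketch: resample the data with replacement, compute the mean of each resample, and take percentiles of the resulting distribution. The n = 5 measurement values below are hypothetical, chosen only to illustrate the mechanics.

```python
import random
import statistics

def bootstrap_mean_ci(sample, level=0.95, n_boot=10000, seed=1):
    """Percentile-bootstrap confidence interval for the mean of a small sample."""
    rng = random.Random(seed)
    boot_means = sorted(
        statistics.fmean(rng.choices(sample, k=len(sample)))
        for _ in range(n_boot)
    )
    alpha = 1 - level
    lo = boot_means[int(alpha / 2 * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# A hypothetical n = 5 measurement set (illustrative values only).
sample = [4.8, 5.1, 5.3, 4.9, 5.6]
lo, hi = bootstrap_mean_ci(sample)
print(f"95% bootstrap CI for the mean: [{lo:.3f}, {hi:.3f}]")
```

With only a handful of observations the bootstrap distribution is coarse, which is exactly the regime the abstract evaluates against the classical and Bayesian alternatives.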
Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L
2017-11-01
When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
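The validation procedure above regresses per-animal totals from scan sampling against the continuous record. A minimal sketch of that workflow follows, using synthetic lamb records (the bout model, switching probability, and thresholds are assumptions, not the study's data; the study also applies t-tests to the slope and intercept, which this sketch only reports descriptively).

```python
import random

random.seed(7)

def lamb_record(n_min=840):
    """Synthetic 14 h record of one lamb (1 = lying in that minute); random
    posture changes create bout structure. Illustrative data only."""
    state, rec = random.random() < 0.5, []
    for _ in range(n_min):
        if random.random() < 0.02:
            state = not state
        rec.append(1 if state else 0)
    return rec

def scan_estimate(rec, step):
    """Instantaneous estimate of total minutes from scans every `step` minutes."""
    scans = rec[::step]
    return sum(scans) / len(scans) * len(rec)

def regression_summary(x, y, r2_min=0.90):
    """Least-squares slope, intercept, and R^2, plus the R^2 criterion."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2, r2 >= r2_min

records = [lamb_record() for _ in range(18)]
true_lying = [sum(r) for r in records]            # continuous totals
scan_lying = [scan_estimate(r, 5) for r in records]  # 5-min scan totals
slope, intercept, r2, passed = regression_summary(scan_lying, true_lying)
print(round(slope, 2), round(intercept, 1), round(r2, 3), passed)
```

A behavior with long bouts (like lying) tolerates coarse intervals; rare, short behaviors (drinking, locomotion) fail the R² criterion, matching the abstract's conclusion.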
International Nuclear Information System (INIS)
Csenki, A.
1995-01-01
The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The system of implementation is the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation
The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.
Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J
2017-09-01
There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak. Whether differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg⁻¹·min⁻¹) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg⁻¹·min⁻¹) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak for each protocol in both subject samples. V˙O2peak across sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V˙O2peak.
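The block-average versus moving-average distinction above is purely computational, and a small sketch makes it concrete. The breath-by-breath series below is synthetic (a noisy ramp, not subject data); the point is that the maximum over all moving windows can never be lower than the maximum over non-overlapping blocks of the same size, which is why moving averages tend to report the highest V˙O2peak.

```python
import random

random.seed(3)
# Synthetic breath-by-breath VO2 (mL/kg/min): a ramp to exhaustion plus noise.
breaths = [30 + 25 * i / 300 + random.gauss(0, 2) for i in range(300)]

def block_average_peak(values, size):
    """Peak of non-overlapping block averages (e.g. 15-breath blocks)."""
    blocks = [values[i:i + size] for i in range(0, len(values), size)]
    return max(sum(b) / len(b) for b in blocks if len(b) == size)

def moving_average_peak(values, size):
    """Peak of all overlapping moving averages (e.g. 15-breath moving average)."""
    return max(sum(values[i:i + size]) / size
               for i in range(len(values) - size + 1))

print(round(block_average_peak(breaths, 15), 1))
print(round(moving_average_peak(breaths, 15), 1))
```

Because every block is also one of the moving windows, the moving-average peak is mathematically guaranteed to be at least as high, consistent with the pattern the study reports.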
International Nuclear Information System (INIS)
Todinov, M.T.
2004-01-01
A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
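The clustering probability discussed above can be checked numerically: simulate a homogeneous Poisson process on a finite interval via exponential inter-arrival gaps and count how often all successive events keep a specified minimum separation. The rate, interval length, and gap values below are illustrative assumptions, not figures from the paper.

```python
import random

def prob_min_gap(rate, length, min_gap, trials=20000, seed=5):
    """Monte Carlo estimate of the probability that all successive events of a
    homogeneous Poisson process on [0, length] are at least min_gap apart."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # Arrival times from exponential inter-arrival gaps.
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t > length:
                break
            times.append(t)
        if all(b - a >= min_gap for a, b in zip(times, times[1:])):
            ok += 1
    return ok / trials

# Even a moderate density gives a substantial clustering probability:
p_no_cluster = prob_min_gap(rate=0.5, length=20.0, min_gap=1.0)
print(round(1 - p_no_cluster, 3))  # probability of at least one gap < 1
```

With roughly ten expected events on the interval, the probability of at least one pair closer than the minimum gap is already large, which supports the abstract's warning that clustering should not be neglected in reliability calculations.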
International Nuclear Information System (INIS)
Ikehata, Masaru; Kawashita, Mishio
2010-01-01
The enclosure method was originally introduced for inverse problems concerning non-destructive evaluation governed by elliptic equations. It was developed as one of the useful approaches in inverse problems and applied for various equations. In this paper, an application of the enclosure method to an inverse initial boundary value problem for a parabolic equation with a discontinuous coefficient is given. A simple method to extract the depth of unknown inclusions in a heat conductive body from a single set of the temperature and heat flux on the boundary observed over a finite time interval is introduced. Other related results with infinitely many data are also reported. One of them gives the minimum radius of the open ball centred at a given point that contains the inclusions. The formula for the minimum radius is newly discovered
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus
2014-01-01
In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a third-order three-point boundary value problem (BVP) with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
The effects of varying sampling intervals on the growth and survival ...
African Journals Online (AJOL)
Four different sampling intervals were investigated during a six-week outdoor nursery management of Heterobranchus longifilis (Valenciennes, 1840) fry in outdoor concrete tanks in order to determine the most suitable sampling regime for maximum productivity in terms of optimum growth and survival of hatchlings and ...
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
A design-based approximation to the Bayes Information Criterion in finite population sampling
Directory of Open Access Journals (Sweden)
Enrico Fabrizi
2014-05-01
Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
Estimating fluvial wood discharge from timelapse photography with varying sampling intervals
Anderson, N. K.
2013-12-01
There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been devoted to monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. Proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n: each trial was sampled at n = 100, 10,000 times, and averaged, and all trials were then averaged to obtain an estimate for each sample interval.
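The core estimation step, scaling a coarse systematic subsample of wood counts up to the full record, can be sketched as follows. The synthetic count series is an assumption (random pulses, not Slave River data), so the error percentages printed here illustrate the mechanism rather than reproduce the paper's 23-60% precision figures.

```python
import random

random.seed(11)
# Synthetic 1-min counts of wood pieces in transport over a 14-day window
# (noisy pulses as a stand-in for timelapse-photo counts; illustrative only).
minutes = 14 * 24 * 60
counts = [max(0, int(random.gauss(2, 2))) for _ in range(minutes)]
true_total = sum(counts)

def estimate_total(series, interval):
    """Scale a systematic subsample (every `interval` minutes) to full length."""
    sub = series[::interval]
    return sum(sub) / len(sub) * len(series)

# Relative error (%) of each coarse sampling interval vs the full record.
for interval in (1, 5, 10, 15):
    est = estimate_total(counts, interval)
    print(interval, round(100 * (est - true_total) / true_total, 2))
```

Coarser intervals keep fewer frames and so inherit more sampling variance, which is the trade-off the paper quantifies against camera and processing costs.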
Influence of sampling interval and number of projections on the quality of SR-XFMT reconstruction
International Nuclear Information System (INIS)
Deng Biao; Yu Xiaohan; Xu Hongjie
2007-01-01
Synchrotron Radiation based X-ray Fluorescent Microtomography (SR-XFMT) is a nondestructive technique for detecting elemental composition and distribution inside a specimen with high spatial resolution and sensitivity. In this paper, computer simulation of an SR-XFMT experiment is performed. The influence of the sampling interval and the number of projections on the quality of SR-XFMT image reconstruction is analyzed. It is found that the sampling interval has a greater effect on the quality of reconstruction than the number of projections. (authors)
Heemels, W.P.M.H.; Teel, A.R.; Wouw, van de N.; Nesic, D.
2010-01-01
There are many communication imperfections in networked control systems (NCS), such as varying transmission delays, varying sampling/transmission intervals, packet loss, communication constraints, and quantization effects. Most of the available literature on NCS focuses on only some of these aspects.
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
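Two of the small-sample estimators compared above, mean ± 2 SD on native values and on transformed values, are easy to sketch. The sample below is synthetic right-skewed data standing in for canine plasma creatinine (values are not from the study), and a log transform is used as a simple stand-in for the Box-Cox transformation (Box-Cox with λ = 0).

```python
import math
import random
import statistics

def reference_interval_parametric(values):
    """Mean ± 2 SD reference limits (assumes approximate normality)."""
    m, s = statistics.fmean(values), statistics.stdev(values)
    return m - 2 * s, m + 2 * s

def reference_interval_log(values):
    """Mean ± 2 SD on log-transformed values, back-transformed; a simple
    stand-in for the Box-Cox transformation used in the study."""
    logs = [math.log(v) for v in values]
    m, s = statistics.fmean(logs), statistics.stdev(logs)
    return math.exp(m - 2 * s), math.exp(m + 2 * s)

# Hypothetical plasma creatinine values for n = 27 healthy dogs (µmol/L,
# right-skewed; illustrative numbers only).
random.seed(2)
sample = [math.exp(random.gauss(4.4, 0.25)) for _ in range(27)]

print([round(x, 1) for x in reference_interval_parametric(sample)])
print([round(x, 1) for x in reference_interval_log(sample)])
print(round(min(sample), 1), round(max(sample), 1))  # naive min-max limits
```

On skewed data the native mean ± 2 SD limits can be badly placed (even negative), while limits computed after transformation stay positive and asymmetric, which mirrors why the study prefers transformed parametric and robust estimates for n = 27.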
Directory of Open Access Journals (Sweden)
M.M. Mohie El-Din
2011-10-01
Full Text Available In this paper, two sample Bayesian prediction intervals for order statistics (OS are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two samples case. The class of the inverse exponential-type distributions includes several important distributions such the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.
Directory of Open Access Journals (Sweden)
Doo Yong Choi
2016-04-01
Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA systems and the establishment of district meter areas (DMAs. Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
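The idea of adapting the sampling interval to the normalized filter residual can be sketched with a scalar Kalman filter on a simulated flow with a step burst. Everything here, the locally-constant flow model, noise variances, thresholds, and the two-level interval rule, is an illustrative assumption rather than the paper's algorithm, which operates on real DMA flow data.

```python
import random

def kalman_step(x, p, z, q=0.5, r=4.0):
    """One scalar Kalman update for a locally-constant flow model.
    Returns the new state, variance, and the normalized innovation."""
    p = p + q                      # predict: random-walk process noise
    innov = z - x
    s = p + r                      # innovation variance
    k = p / s                      # Kalman gain
    return x + k * innov, (1 - k) * p, abs(innov) / s ** 0.5

def next_interval(norm_resid, base=15, fast=1, threshold=2.0):
    """Shorten the sampling interval when the normalized residual is large
    (possible burst); otherwise fall back to the base interval."""
    return fast if norm_resid > threshold else base

random.seed(9)
x, p = 50.0, 10.0
t, intervals = 0, []
while t < 600:
    burst = 30.0 if t >= 300 else 0.0          # simulated burst after t = 300
    z = 50.0 + burst + random.gauss(0, 2)      # flow measurement (L/s)
    x, p, nr = kalman_step(x, p, z)
    dt = next_interval(nr)
    intervals.append((t, dt))
    t += dt

print(intervals[:3], intervals[-3:])
```

During quiet operation the filter stays on the long interval; the burst drives the normalized innovation above threshold, so the sampler switches to 1-minute measurements until the filter catches up, which is the sampling-cost/detection-latency trade the paper analyzes.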
Directory of Open Access Journals (Sweden)
Andrei ACHIMAŞ CADARIU
2004-08-01
Full Text Available Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence interval, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
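The asymptotic (Wald) interval the abstract criticizes is the natural baseline to show. The trial counts below are hypothetical round numbers chosen so the arithmetic is easy to follow; the paper's ADAC/ADAC1 methods are alternatives to this construction, not shown here.

```python
import math

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    """Absolute risk reduction with the asymptotic (Wald) 95% CI — the common
    textbook method, which the paper notes can be inadequate."""
    cer = events_ctrl / n_ctrl          # control event rate
    eer = events_trt / n_trt            # experimental event rate
    arr = cer - eer
    se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
    return arr, (arr - z * se, arr + z * se)

# Hypothetical trial: 20/100 events in control, 10/100 under treatment.
arr, (lo, hi) = arr_wald_ci(events_ctrl=20, n_ctrl=100, events_trt=10, n_trt=100)
nnt = 1 / arr                            # number needed to treat
print(round(arr, 3), (round(lo, 3), round(hi, 3)), round(nnt, 1))
```

Here ARR = 0.20 − 0.10 = 0.10, so NNT = 10: ten patients must be treated to prevent one event. When the event counts are small, this Wald interval's coverage degrades, which motivates the alternative methods the paper assesses.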
DEFF Research Database (Denmark)
Nielsen, Morten Ø.; Frederiksen, Per Houmann
2005-01-01
In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all … the time domain parametric methods, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
A Note on Confidence Interval for the Power of the One Sample Test
A. Wong
2010-01-01
In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...
Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.
Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua
2016-09-05
In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate results of pulse peak sampling, which in turn produce errors in parameter estimation. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Directory of Open Access Journals (Sweden)
Ying Yan
2017-01-01
Full Text Available Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and possesses good compatibility: it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. The original expert judgments are retained rather objectively throughout the processing procedure. The cumulative probability function construction and random sampling processes do not require any human intervention or judgment and can be implemented easily by computer programs, giving the method an apparent advantage in evaluation practice for fairly large index systems.
Directory of Open Access Journals (Sweden)
W.R. Azzam
2015-08-01
Full Text Available This paper reports the application of a skirted foundation system to study the behavior of foundations with structural skirts adjacent to a sand slope and subjected to earthquake loading. The effect of the adopted skirts in safeguarding the foundation and slope from collapse is studied. The skirts' effect on controlling horizontal soil movement and decreasing pore water pressure beneath foundations and beside slopes during an earthquake is investigated. This technique is investigated numerically using finite element analysis. A four-story reinforced concrete building that rests on a raft foundation is idealized as a two-dimensional model with and without skirts. A two-dimensional plane-strain program, PLAXIS (dynamic version), is adopted. A series of models for the problem under investigation were run with different skirt depths and locations from the slope crest. The effect of subgrade relative density and skirt thickness is also discussed. Nodal displacements and element strains were analyzed for the foundation with and without skirts at the different studied parameters. The results showed great effectiveness in increasing the overall stability of the slope and foundation. The soil confined by the footing-skirt system reduced the foundation acceleration; the system therefore acts as a damping element and relieves the disturbance transmitted to the adjacent slope. This technique can be considered a good method to control slope deformation and decrease slope acceleration during earthquakes.
A Note on Confidence Interval for the Power of the One Sample Test
Directory of Open Access Journals (Sweden)
A. Wong
2010-01-01
Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size-α test of the mean are given.
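The quantity being interval-estimated above, the power of the one-sample t-test with unknown variance, can itself be approximated by simulation without any t-distribution tables: calibrate the critical value on a simulated null distribution, then count rejections under the alternative. This is a generic Monte Carlo sketch, not the paper's exponential-family methodology; the sample size and effect size are illustrative.

```python
import math
import random
import statistics

def t_stat(sample, mu0):
    n = len(sample)
    return (statistics.fmean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))

def simulated_power(n, delta, sigma, alpha=0.05, trials=4000, seed=13):
    """Monte Carlo power of the two-sided one-sample t-test with unknown
    variance; the critical value comes from a simulated null distribution."""
    rng = random.Random(seed)
    null = sorted(abs(t_stat([rng.gauss(0, sigma) for _ in range(n)], 0))
                  for _ in range(trials))
    t_crit = null[int((1 - alpha) * trials)]
    hits = sum(abs(t_stat([rng.gauss(delta, sigma) for _ in range(n)], 0)) > t_crit
               for _ in range(trials))
    return hits / trials

# Power to detect a half-SD shift with n = 20 (illustrative scenario).
print(round(simulated_power(n=20, delta=0.5, sigma=1.0), 2))
```

For n = 20 and a 0.5σ shift the noncentral-t power is roughly 0.56, and the simulation lands near that value; the paper's contribution is to attach a confidence interval to such a power estimate rather than report it as a point value.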
DEFF Research Database (Denmark)
Huber, Martin; Lechner, Michael; Mellace, Giovanni
Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...
DEFF Research Database (Denmark)
Huber, Martin; Lechner, Michael; Mellace, Giovanni
2016-01-01
Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independen...... of the methods often (but not always) varies with the features of the data generating process....
Finite-sample instrumental variables inference using an asymptotically pivotal statistic
Bekker, P; Kleibergen, F
2003-01-01
We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) analyzes mainly the asymptotic behavior of the statistic, we focus on its finite-sample properties.
A proof of the Woodward-Lawson sampling method for a finite linear array
Somers, Gary A.
1993-01-01
An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
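The exact-reconstruction claim can be illustrated numerically: sampling the array factor of an N-element array at N equispaced points in ψ-space and rebuilding it with the periodic-sinc (Dirichlet) kernel recovers it everywhere. A numpy sketch under simplified assumptions (arbitrary complex excitations, element index starting at zero; not the paper's exact formulation):

```python
import numpy as np

def array_factor(a, psi):
    """AF(psi) = sum_n a[n] * exp(j*n*psi) for a finite linear array."""
    n = np.arange(len(a))
    return np.sum(a[:, None] * np.exp(1j * np.outer(n, psi)), axis=0)

def dirichlet(x, N):
    """S(x) = sum_{n=0}^{N-1} exp(j*n*x), with the x -> 2*pi*m limit handled."""
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, N, dtype=complex)
    nz = np.abs(np.sin(x / 2)) > 1e-12
    out[nz] = (np.exp(1j * (N - 1) * x[nz] / 2)
               * np.sin(N * x[nz] / 2) / np.sin(x[nz] / 2))
    return out

def reconstruct(samples, psi, N):
    """Rebuild AF everywhere from its N samples at psi_k = 2*pi*k/N."""
    psi_k = 2 * np.pi * np.arange(N) / N
    return sum(samples[k] * dirichlet(psi - psi_k[k], N) for k in range(N)) / N
```

The identity follows from inverting the DFT relation between the samples and the excitations, which is why the reconstruction is exact rather than approximate.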
Shaw, Simon C.; Goldstein, Michael
2017-01-01
We explore the effect of finite population sampling in design problems with many variables cross-classified in many ways. In particular, we investigate designs where we wish to sample individuals belonging to different groups for which the underlying covariance matrices are separable between groups and variables. We exploit the generalised conditional independence structure of the model to show how the analysis of the full model can be reduced to an interpretable series of lower dimensional p...
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2015-01-01
The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.
Directory of Open Access Journals (Sweden)
Atta Ullah
2014-01-01
Full Text Available In practical utilization of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit. In many real-life situations, a linear cost function of the sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
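For contrast with the travel-cost formulation above, the classical single-characteristic optimum allocation under a linear cost function (the baseline the paper argues breaks down when travel costs matter) can be sketched; the stratum weights, standard deviations and unit costs are made-up illustration values:

```python
def optimum_allocation(n, W, S, c):
    """Cost-weighted Neyman allocation: n_h proportional to W_h*S_h/sqrt(c_h)
    for total sample size n, under a linear cost model (illustrative only)."""
    weights = [w * s / (ch ** 0.5) for w, s, ch in zip(W, S, c)]
    total = sum(weights)
    return [round(n * wt / total) for wt in weights]
```

Strata that are large, variable, or cheap to sample receive more units; the multivariate, nonlinear-cost problem in the paper has no such closed form, which is why it is cast as a goal program.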
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
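The role of binomial (Bernoulli-sampling) statistics in finite-key estimation can be illustrated with a generic Hoeffding-type tail bound on a Bernoulli parameter. This is a standard concentration inequality used as a stand-in, not the specific bound derived in the paper:

```python
import math

def hoeffding_interval(k, n, eps=1e-10):
    """Two-sided bound on a Bernoulli parameter p from k successes in n
    trials: |k/n - p| <= delta except with probability at most eps."""
    delta = math.sqrt(math.log(2.0 / eps) / (2.0 * n))
    lo = max(0.0, k / n - delta)
    hi = min(1.0, k / n + delta)
    return lo, hi
```

The interval width shrinks as 1/sqrt(n), which is the finite-size penalty on the key rate that vanishes in the asymptotic regime.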
Precision of quantization of the hall conductivity in a finite-size sample: Power law
International Nuclear Information System (INIS)
Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.
2006-01-01
A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
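For small problems, the arbitrary frequency resolution discussed here can be obtained by evaluating the finite Fourier sum directly on any chosen frequency grid (the chirp z-transform does this efficiently; `scipy.signal.czt` is one implementation). A direct numpy sketch, with illustrative parameters:

```python
import numpy as np

def finite_fourier(x, dt, freqs):
    """Evaluate X(f) = dt * sum_n x[n] * exp(-2j*pi*f*n*dt) on an arbitrary
    frequency grid. O(N*M) cost; use a chirp z-transform for speed."""
    n = np.arange(len(x))
    return dt * (x[None, :] * np.exp(-2j * np.pi
                 * np.outer(freqs, n * dt))).sum(axis=1)
```

Choosing the FFT frequencies recovers dt times the FFT exactly; any denser or narrower grid zooms in on spectral details the coarse FFT grid would miss.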
Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography
International Nuclear Information System (INIS)
Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr
2015-01-01
This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT. (paper)
Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei
2018-05-10
The problem of sampled-data-based vibration control for structural systems with finite-time state constraints and sensor outage is investigated in this paper. The objective of the controller design is to guarantee the stability and anti-disturbance performance of the closed-loop system when some sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes and strong winds happen in a very short time, and that it is often the peak values that damage the structures, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and H-infinity stability is adopted in the controller design to ensure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro
2017-08-01
To explore the effect of the sampling interval of training-data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three-dimensional (3D) target motions and one-dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s, using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for the various prediction models were then calculated on intrafractional monitoring images taken for 30 s at a constant sampling interval of 500 ms, to evaluate the prediction accuracy fairly for the same motion pattern. In addition, the first respiratory motion was used for the training and the second respiratory motion was used for the evaluation of the intrafractional prediction errors for the changed respiratory motion, to evaluate the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling
On the Influence of the Data Sampling Interval on Computer-Derived K-Indices
Directory of Open Access Journals (Sweden)
A Bernard
2011-06-01
Full Text Available The K index was devised by Bartels et al. (1939) to provide an objective monitoring of irregular geomagnetic activity. The K index was then routinely used to monitor the magnetic activity at permanent magnetic observatories as well as at temporary stations. The increasing number of digital and sometimes unmanned observatories and the creation of INTERMAGNET put the question of computer production of K at the centre of the debate. Four algorithms were selected during the Vienna meeting (1991) and endorsed by IAGA for the computer production of K indices. We used one of them (the FMI algorithm) to investigate the impact of the geomagnetic data sampling interval on computer-produced K values through the comparison of the computer-derived K values for the period 2009, January 1st to 2010, May 31st at the Port-aux-Francais magnetic observatory using magnetic data series with different sampling rates (the smaller: 1 second; the larger: 1 minute). The impact is investigated on both 3-hour range values and K indices data series, as a function of the activity level for low and moderate geomagnetic activity.
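The core comparison, how decimating from 1 s to 1 min sampling affects a 3-hour range, can be sketched directly; the synthetic "geomagnetic" signal below is an invented stand-in for observatory data:

```python
import numpy as np

def three_hour_range(values):
    """Range (max - min) of one 3-hour block of samples."""
    return float(values.max() - values.min())

rng = np.random.default_rng(42)
t = np.arange(3 * 3600)                                  # 3 h of 1 s samples
signal = 30 * np.sin(2 * np.pi * t / 5400) + rng.normal(0, 5, t.size)
# Decimate to 1-minute resolution by block-averaging 60 consecutive samples
per_minute = signal[: t.size // 60 * 60].reshape(-1, 60).mean(axis=1)

r1s = three_hour_range(signal)
r1min = three_hour_range(per_minute)
```

Because each minute mean lies between the extremes of its 60 second-values, the coarser series can never show a larger range, which is the systematic direction of the bias the paper quantifies on real K-index data.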
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data taking the respective slope as estimation for the isotope ratio. The finite mixture models are parameterised by: • The number of different ratios. • Number of points belonging to each ratio-group. • The ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control
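The EM fit of several zero-intercept regression lines, with slopes playing the role of isotope ratios, can be sketched as follows. The two synthetic ratio groups and all numbers are invented for illustration, not the certified 235U/238U ratios from the cited work:

```python
import numpy as np

def em_mixture_of_slopes(x, y, k=2, iters=200):
    """EM for a k-component mixture of zero-intercept regressions
    y = b_g * x + noise; returns sorted slopes, weights, noise variance."""
    ratios = y / x
    # Initialise slopes at spread-out quantiles of the pointwise ratios
    b = np.quantile(ratios, (np.arange(k) + 1.0) / (k + 1.0))
    pi, s2 = np.full(k, 1.0 / k), float(np.var(y))
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resid = y[None, :] - b[:, None] * x[None, :]
        dens = pi[:, None] * np.exp(-resid**2 / (2 * s2)) + 1e-300
        r = dens / dens.sum(axis=0)
        # M-step: mixing weights, weighted through-origin slopes, variance
        pi = r.mean(axis=1)
        b = (r * x * y).sum(axis=1) / (r * x * x).sum(axis=1)
        s2 = float((r * (y - b[:, None] * x)**2).sum() / len(y))
    order = np.argsort(b)
    return b[order], pi[order], s2
```

Each M-step slope is a responsibility-weighted least-squares fit through the origin, matching the "regression lines whose slopes estimate the ratios" idea in the abstract; dropping near-empty components, as the record describes, is omitted here for brevity.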
Finite sample performance of the E-M algorithm for ranks data modelling
Directory of Open Access Journals (Sweden)
Angela D'Elia
2007-10-01
Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...
Finite element simulation of the T-shaped ECAP processing of round samples
Shaban Ghazani, Mehdi; Fardi-Ilkhchy, Ali; Binesh, Behzad
2018-05-01
Grain refinement is the only mechanism that increases the yield strength and toughness of materials simultaneously. Severe plastic deformation is one of the promising methods to refine the microstructure of materials. Among different severe plastic deformation processes, T-shaped equal channel angular pressing (T-ECAP) is a relatively new technique. In the present study, finite element analysis was conducted to evaluate the deformation behavior of metals during the T-ECAP process. The study focused mainly on flow characteristics, plastic strain distribution and its homogeneity, damage development, and pressing force, which are among the most important factors governing the sound and successful processing of nanostructured materials by severe plastic deformation techniques. The results showed that plastic strain is localized on the bottom side of the sample and uniform deformation is not possible using T-ECAP processing. The friction coefficient between the sample and the die channel wall has little effect on the strain distributions in the mirror plane and transverse plane of the deformed sample. Damage analysis showed that superficial cracks may initiate from the bottom side of the sample and that their propagation is limited by the compressive state of stress. It was demonstrated that a V-shaped deformation zone exists in the T-ECAP process and that the pressing load needed for execution of the deformation process increases with friction.
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Yan, Ying; Suo, Bin
2017-01-01
Due to the complexity of system and lack of expertise, epistemic uncertainties may present in the experts’ judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. Similarity matrix b...
Directory of Open Access Journals (Sweden)
D. Ramyachitra
2015-09-01
Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
Ramyachitra, D; Sofia, M; Manikandan, P
2015-09-01
Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
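Of the baselines listed, KNN is simple enough to sketch from scratch for the many-genes/few-samples setting; the synthetic "expression matrix" below is an illustration, not microarray data:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Euclidean k-nearest-neighbour classifier with majority vote."""
    # Pairwise distances: (n_test, n_train)
    d = np.linalg.norm(X_train[None, :, :] - X_test[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = y_train[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])
```

With thousands of genes and tens of samples, plain Euclidean KNN degrades as uninformative genes swamp the distances, which is exactly the motivation for the optimization-based feature handling the paper's IVPSO variant adds.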
DEFF Research Database (Denmark)
Veraart, Almut
This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic 'variance' of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps in the price and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas...
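The headline contrast, realised variance being jump-sensitive while its truncated version is not, can be reproduced on a toy path; the diffusion and jump parameters are invented for illustration:

```python
import numpy as np

def realised_variance(returns):
    """Sum of squared high-frequency returns."""
    return float(np.sum(returns**2))

def truncated_rv(returns, threshold):
    """Realised variance discarding increments larger than the threshold,
    which filters out jump contributions."""
    return float(np.sum(returns[np.abs(returns) <= threshold]**2))

rng = np.random.default_rng(3)
n, sigma = 1000, 0.2
dx = sigma * np.sqrt(1.0 / n) * rng.normal(size=n)   # Brownian increments
dx[500] += 1.0                                       # one large price jump
threshold = 4 * sigma / np.sqrt(n)                   # ~4 local std deviations
rv = realised_variance(dx)
trv = truncated_rv(dx, threshold)
```

`rv` picks up the squared jump on top of the integrated variance (here sigma² = 0.04), while `trv` stays near the diffusive part, mirroring the estimator behaviour studied in the Monte Carlo experiments.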
Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Energy Technology Data Exchange (ETDEWEB)
Martin, S.J.; Zielinski, R.E.
1978-07-14
A palynological investigation was performed on 55 samples of core material from four wells drilled in the Devonian Shale interval of the Appalachian and Illinois Basins. Using a combination of spores and acritarchs, it was possible to divide the Middle Devonian from the Upper Devonian and to make subdivisions within the Middle and Upper Devonian. The age of the palynomorphs encountered in this study is Upper Devonian.
Hong-Ghi Min
2011-01-01
Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2012-01-01
In this study a stochastic approach is conducted to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial...
Directory of Open Access Journals (Sweden)
Andreas Steimer
Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
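A minimal Euler simulation of the exponential integrate-and-fire neuron shows the ISI samples the theory builds on; all parameters below are generic textbook values assumed for illustration, not those of the paper:

```python
import numpy as np

def eif_isis(I=20.0, t_max=500.0, dt=0.01, tau=20.0, EL=-65.0,
             VT=-50.0, DT=2.0, V_cut=-30.0, V_reset=-65.0):
    """Euler simulation of an EIF neuron, returning its interspike
    intervals in ms.  Dynamics (voltages in mV, times in ms):
        dV/dt = (-(V - EL) + DT*exp((V - VT)/DT) + I) / tau
    with a spike registered and V reset whenever V crosses V_cut."""
    V, spikes = EL, []
    for step in range(int(t_max / dt)):
        V += dt * (-(V - EL) + DT * np.exp((V - VT) / DT) + I) / tau
        if V >= V_cut:
            spikes.append(step * dt)
            V = V_reset
    return np.diff(spikes)
```

With a constant suprathreshold drive the ISIs are regular; feeding in a fluctuating current instead produces the ISI distributions that serve as random samples in the paper's framework.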
Directory of Open Access Journals (Sweden)
Lee Tae-Hoon
2016-12-01
Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using the surrogate variable. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on the performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample-mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable-cause assumptions, such as the VSS (variable sample size) X̄ control chart, EWMA and CUSUM charts, and so on.
Directory of Open Access Journals (Sweden)
Petr Koňas
2009-01-01
Full Text Available The work summarizes the algorithms created for the formation of a finite element (FE) mesh derived from a bitmap pattern. The process of registration, segmentation and meshing is described in detail. The C++ STL library from the Insight Toolkit (ITK) project, together with the Visualization Toolkit (VTK), was used for basic image processing. Several methods for appropriate mesh output are discussed. The multiplatform application WOOD3D was assembled for the task under the GNU GPL license. Several methods of segmentation and, mainly, different ways of contouring were included. Tetrahedral and rectilinear types of mesh were programmed. Simple ways of improving mesh quality are mentioned. The final program was tested and verified on wood anatomy samples of spruce and walnut. Methods of preparing microscopic anatomy samples are described. Finally, the formed mesh was used in a simple structural analysis. The article discusses the main problems in image analysis due to incompatible colour spaces, sample preparation, thresholding, and the final conversion into a finite element mesh. The assembly of these tasks and the evaluation of the application are the main original results of the presented work. In the presented program, two ITK-based thresholding filters were used: one based on the Otsu filter and one based on a binary filter. The most problematic tasks were the production of wood anatomy samples under uniform lighting conditions with minimal or zero colour-space shift, and the subsequent definition of appropriate thresholds (the corresponding thresholding parameters) and connected methods (prefiltering + registration), which influence the continuity and, mainly, the separation of the wood anatomy structure. A solution based on staining the samples is suggested, followed by rapid image analysis. Another original result of the work is a complex, fully automated application which offers three types of finite element mesh
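The Otsu thresholding step the record mentions is easy to sketch directly on a histogram; the bimodal sample data below stands in for a grey-level wood-anatomy image (ITK's filter works on images, this numpy version works on any value array):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximises between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    w0 = np.cumsum(w)                       # weight of the lower class
    w1 = 1.0 - w0                           # weight of the upper class
    mu = np.cumsum(w * centers)             # cumulative class mean (unnorm.)
    mu_t = mu[-1]                           # grand mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0    # empty classes carry no variance
    return centers[np.argmax(between)]
```

For a clearly bimodal histogram the maximiser lands between the two modes, which is exactly the separation of lumen and cell wall the application relies on.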
Directory of Open Access Journals (Sweden)
Manzoor Khan
2014-01-01
Full Text Available This paper presents new classes of estimators for estimating the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. Expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of a proposed class of estimators.
International Nuclear Information System (INIS)
Lisichkin, Yu.V.; Dovbenko, A.G.; Efimenko, B.A.; Novikov, A.G.; Smirenkina, L.D.; Tikhonova, S.I.
1979-01-01
Described is a method of taking account of finite sample dimensions in processing measurement results of double differential cross sections (DDCS) of slow neutron scattering. The necessity of a corrective approach to accounting for finite sample dimensions is shown, in particular the need for preliminary processing of the DDCS that takes into account the attenuation coefficients of singly scattered neutrons (SSN) for measurements on the sample with a container and on the container alone. The correction for multiple scattering (MS), calculated on the basis of the dynamic model, should be obtained with resolution effects taken into account. To minimize the effect of the dynamic model used in the calculations, it is preferable to make absolute measurements of the DDCS and to use the subtraction method. The above method was realized in a set of programs for the BESM-5 computer. The FISC program computes the SSN attenuation coefficients and the correction for MS. The DDS program serves to compute a model DDCS averaged over the resolution function of an instrument. The SCATL program prepares the initial information necessary for the FISC program and can compute the scattering law for all materials. Presented are the results of using the above method in processing experimental data on DDCS measurements of water with the DIN-1M spectrometer
Directory of Open Access Journals (Sweden)
Petr Koňas
2009-01-01
Full Text Available The paper presents the new original application WOOD3D in the form of assembled program code. The work extends the previous article “Part I – Theoretical approach” with a detailed description of the implemented C++ classes from the utilized projects Visualization Toolkit (VTK), Insight Toolkit (ITK) and MIMX. The code is written in CMake style and is available as a multiplatform application; GNU Linux (32/64-bit) and MS Windows (32/64-bit) platforms are currently released. The article discusses various filter classes for image filtering: mainly the Otsu and binary threshold filters are classified for thresholding wood anatomy samples. Registration of image series is emphasized, and compensation for differences in colour spaces is included. The resulting image-analysis workflow is a new methodological approach to image processing through composition, visualization, filtering, registration and finite element mesh formation. The application generates a script in the ANSYS parametric design language (APDL) which is fully compatible with the ANSYS finite element solver and design environment. The script includes the whole definition of the unstructured finite element mesh formed by individual elements and nodes. Owing to its simple notation, the same script can be used to generate geometrical entities at element positions; volumetric entities formed in this way are prepared for further geometry approximation (e.g. by Boolean or more advanced methods). Hexahedral and tetrahedral mesh elements are formed on user request with specified mesh options. Hexahedral meshes are formed both with uniform element size and with anisotropic character; a modified octree method for anisotropic hexahedral meshes was implemented in the application. Multicore CPUs are supported for fast image analysis. Visualization of the image series and the consequent 3D image is realized in the well-known public VTK format and visualized in the GPL application ParaView. Future work based on mesh
Stability of equilibrium states in finite samples of smectic C* liquid crystals
International Nuclear Information System (INIS)
Stewart, I W
2005-01-01
Equilibrium solutions for a sample of ferroelectric smectic C (SmC*) liquid crystal in the 'bookshelf' geometry under the influence of a tilted electric field will be presented. A linear stability criterion is identified and used to confirm stability for typical materials possessing either positive or negative dielectric anisotropy. The theoretical response times for perturbations to the equilibrium solutions are calculated numerically and found to be consistent with estimates for response times in ferroelectric smectic C liquid crystals reported elsewhere in the literature for non-tilted fields
Stability of equilibrium states in finite samples of smectic C* liquid crystals
Energy Technology Data Exchange (ETDEWEB)
Stewart, I W [Department of Mathematics, University of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XH (United Kingdom)
2005-03-04
Equilibrium solutions for a sample of ferroelectric smectic C (SmC*) liquid crystal in the 'bookshelf' geometry under the influence of a tilted electric field will be presented. A linear stability criterion is identified and used to confirm stability for typical materials possessing either positive or negative dielectric anisotropy. The theoretical response times for perturbations to the equilibrium solutions are calculated numerically and found to be consistent with estimates for response times in ferroelectric smectic C liquid crystals reported elsewhere in the literature for non-tilted fields.
International Nuclear Information System (INIS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-01-01
Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)
Zhang, Xian-Ming; Han, Qing-Long; Ge, Xiaohua
2017-09-22
This paper is concerned with the problem of robust H∞ control of an uncertain discrete-time Takagi-Sugeno fuzzy system with an interval-like time-varying delay. A novel finite-sum inequality-based method is proposed to provide a tighter estimation on the forward difference of certain Lyapunov functional, leading to a less conservative result. First, an auxiliary vector function is used to establish two finite-sum inequalities, which can produce tighter bounds for the finite-sum terms appearing in the forward difference of the Lyapunov functional. Second, a matrix-based quadratic convex approach is employed to equivalently convert the original matrix inequality including a quadratic polynomial on the time-varying delay into two boundary matrix inequalities, which delivers a less conservative bounded real lemma (BRL) for the resultant closed-loop system. Third, based on the BRL, a novel sufficient condition on the existence of suitable robust H∞ fuzzy controllers is derived. Finally, two numerical examples and a computer-simulated truck-trailer system are provided to show the effectiveness of the obtained results.
Energy Technology Data Exchange (ETDEWEB)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
International Nuclear Information System (INIS)
Hirose, Akio; Ishii, Daido
1975-01-01
Estimation of the optimum cooling interval by mathematical or graphical methods for Ge(Li) γ-ray spectrometry performed in the presence of Compton interferences, together with recommended cooling intervals for the activation analysis of unknown samples, has been proposed and applied to the non-destructive activation analysis of gold in pure copper. In the presence of Compton interferences, two kinds of optimum cooling interval were discussed. One maximizes the S/N ratio of a desired photo-peak; this interval, originated by Isenhour et al. using a computer technique, is abbreviated here as tsub(S/N). The other, which minimizes the relative standard deviation (delta S/S) of the net photo-peak counting rate of interest (S), was originated by Tomov et al. and Quittner et al. and is abbreviated here as tsub(opt) or t'sub(opt). All equations derived by the above authors, however, have the practical disadvantage of including a term relating to the intensity of the desired photo-peak, making it difficult to predict the optimum cooling interval before irradiation, since in chemical analysis the concentration of the desired element, or the intensity of the photo-peak of interest, must be considered unknown. In the present work, an approach to selecting a recommended cooling interval applicable to unknown samples has been discussed, and the interval tsub(opt), which minimizes the lower limit of detection of a desired element under given irradiation and counting conditions, has been proposed. (Evans, J.)
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric methods (or a Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
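The trade-off described above is easy to reproduce in simulation. The sketch below (not the study's code; the populations, sample size, and trial count are illustrative) estimates how often a Shapiro-Wilk test wrongly rejects Gaussian samples and how often it correctly rejects lognormal ones at n = 30:

```python
# Hedged sketch: Shapiro-Wilk behaviour at small sample size (n = 30).
# Populations and trial counts are illustrative, not those of the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, alpha = 30, 200, 0.05

def rejection_rate(draw):
    # fraction of simulated samples for which normality is rejected
    return float(np.mean([stats.shapiro(draw())[1] < alpha for _ in range(trials)]))

fpr = rejection_rate(lambda: rng.normal(size=n))     # Gaussian parent: should rarely reject
tpr = rejection_rate(lambda: rng.lognormal(size=n))  # lognormal parent: should often reject
print(f"false-rejection rate {fpr:.2f}, sensitivity {tpr:.2f}")
```

With a strongly skewed alternative the test has good power even at n = 30; the harder cases in the study are mildly non-Gaussian parents, where specificity and sensitivity both degrade.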
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
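The core binning idea can be sketched as follows. This is a generic illustration of partition-based main-effect estimation, not the paper's algorithm: the main effect S_i = Var(E[Y|X_i]) / Var(Y) is approximated from a single sample set by partitioning each input's range and comparing conditional bin means; the toy model and bin count are assumptions.

```python
# Hedged sketch of partition-based first-order sensitivity estimation:
# all main effects are computed from one common set of samples.
import numpy as np

rng = np.random.default_rng(1)
N, bins = 20000, 40
X = rng.uniform(size=(N, 3))
Y = 4 * X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2]   # toy additive model

def main_effect(xi, y, bins):
    # equal-count partition of the input's range
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.bincount(idx, minlength=bins) / len(y)
    # Var(E[Y|X_i]) / Var(Y)
    return float(np.sum(weights * (cond_means - y.mean())**2) / y.var())

S = [main_effect(X[:, i], Y, bins) for i in range(3)]
```

For this additive model the exact indices are 16/20.25, 4/20.25 and 0.25/20.25, so the estimates should recover roughly 0.79, 0.20 and 0.01.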
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F--the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
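The workflow described above translates directly into code. The sketch below is not the paper's spreadsheet: it uses a one-way ANOVA followed by Bonferroni-adjusted 95% confidence intervals for pairwise mean differences, which is one of several multiple-comparison choices the paper surveys; the data are made up.

```python
# Hedged sketch: ANOVA F-test, then pairwise 95% CIs for mean differences
# using a pooled within-group variance and a Bonferroni-adjusted t critical
# value (illustrative data and method choice).
import itertools
import numpy as np
from scipy import stats

groups = [np.array([4.1, 5.0, 4.8, 5.2]),
          np.array([6.3, 6.8, 5.9, 6.5]),
          np.array([4.9, 5.1, 5.4, 4.7])]

F, p = stats.f_oneway(*groups)
k = len(groups)
N = sum(len(g) for g in groups)
mse = sum(((g - g.mean())**2).sum() for g in groups) / (N - k)  # pooled variance
m = k * (k - 1) // 2                                            # number of pairwise comparisons
tcrit = stats.t.ppf(1 - 0.05 / (2 * m), N - k)

for i, j in itertools.combinations(range(k), 2):
    diff = groups[i].mean() - groups[j].mean()
    half = tcrit * np.sqrt(mse * (1/len(groups[i]) + 1/len(groups[j])))
    print(f"mean{i} - mean{j}: {diff:+.2f} ± {half:.2f}")
```

Only if F exceeds the critical F value (equivalently, p < 0.05) does one proceed to the pairwise intervals, mirroring the convention the paper describes.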
DEFF Research Database (Denmark)
Veraart, Almut
2011-01-01
This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic "variance" of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies. Here we study the impact of the jump activity, of the jump size of the jumps in the price and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence...
Directory of Open Access Journals (Sweden)
Carmela Protano
2018-04-01
Full Text Available (1) Background: Environmental Tobacco Smoke (ETS) exposure remains a public health problem worldwide. The aims are to establish urinary (u-) cotinine reference values for healthy Italian children and to evaluate the role of sampling time and of other factors in children's u-cotinine excretion. (2) Methods: A cross-sectional study was performed on 330 children. Information on participants was gathered by questionnaire, and u-cotinine was determined in two samples for each child, collected during the evening and the next morning. (3) Results: Reference intervals (the 2.5th and 97.5th percentiles of the distribution) in evening and morning samples were respectively equal to 0.98–4.29 and 0.91–4.50 µg L−1 (ETS unexposed) and 1.39–16.34 and 1.49–20.95 µg L−1 (ETS exposed). No statistical differences were found between the median values of evening and morning samples, in either the ETS-unexposed or the ETS-exposed group. Significant predictors of u-cotinine excretion were the ponderal status of the children according to body mass index (β = 0.202, p-value = 0.041 for evening samples; β = 0.169, p-value = 0.039 for morning samples) and paternal educational level (β = −0.258, p-value = 0.010 for evening samples; β = −0.013, p-value = 0.003 for morning samples). (4) Conclusions: The results show the need for further studies assessing the role of confounding factors in ETS exposure, and the necessity of educational interventions to raise smokers' awareness of ETS.
de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino
2018-05-01
This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
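The Mann-Whitney connection mentioned above is concrete: the empirical AUC equals the U statistic scaled to [0, 1]. The sketch below pairs that estimator with a percentile-bootstrap CI, one of the simpler (and not necessarily the best small-sample) constructions among the 29 the paper compares; the data and sample sizes are illustrative.

```python
# Hedged sketch: nonparametric AUC via the Mann-Whitney U statistic, with
# a percentile-bootstrap confidence interval (illustrative data, n = 15).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, 15)   # marker values for diseased subjects
y = rng.normal(0.0, 1.0, 15)   # marker values for healthy subjects

def auc(x, y):
    # AUC = U / (n_x * n_y) = P(X > Y) + 0.5 * P(X = Y)
    u = stats.mannwhitneyu(x, y, alternative="two-sided").statistic
    return u / (len(x) * len(y))

point = auc(x, y)
boot = [auc(rng.choice(x, len(x)), rng.choice(y, len(y))) for _ in range(2000)]
lo, hi = np.quantile(boot, [0.025, 0.975])
```

With very small samples the bootstrap distribution of the AUC is discrete and skewed near 1, which is exactly the regime where the paper finds the largest disagreement between interval methods.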
Directory of Open Access Journals (Sweden)
Arlete Maria dos Santos Fernandes
2006-10-01
was cesarean. No difference in satisfaction and regret rates after the procedure was detected between the groups. BACKGROUND: Brazil is a country with a high prevalence of tubal ligation, which is frequently performed at the time of delivery. In recent years, an increase in tubal reversal has been noticed, primarily among young women. OBJECTIVES: To study characteristics correlated with the procedure, determine the frequency of intrapartum tubal ligation, and measure patient satisfaction and tubal sterilization regret in a sample of post-tubal-ligation patients. METHODS: Three hundred and thirty-five women underwent tubal ligation. The variables studied were related to the procedure: age at tubal ligation, whether ligation was performed intrapartum (vaginal or cesarean section) or after an interval (other than the intrapartum and puerperal period), the health service performing the sterilization, medical expenses paid for the procedure, the reason stated for choosing the method, and causes related to satisfaction/regret: desire to become pregnant after sterilization, search for treatment, and performance of tubal ligation reversal. The women were divided into two groups, one undergoing ligation in the intrapartum period and a second ligated after an interval, to evaluate the association between variables using Fisher's exact test and the chi-squared calculation with Yates' correction. The study was approved by the Ethics Committee of the institution. RESULTS: There was a predominance of Caucasian women over 35 years of age, married, and with a low level of education, of whom 43.5% had undergone sterilization before 30 years of age. Two hundred and forty-five women underwent intrapartum tubal ligation, 91.2% of them by cesarean delivery and 44.6% by vaginal delivery. In both groups, those undergoing intrapartum tubal ligation and those ligated after an interval, 82.0% and 80.8% respectively reported satisfaction with the method. Although 14.6% expressed a desire to become pregnant at some time after
DEFF Research Database (Denmark)
Polyzos, Nikolaos P; Nelson, Scott M; Stoop, Dominic
2013-01-01
To investigate whether the time interval between serum antimüllerian hormone (AMH) sampling and initiation of ovarian stimulation for in vitro fertilization-intracytoplasmic sperm injection (IVF-ICSI) may affect the predictive ability of the marker for low and excessive ovarian response.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
Finite Discrete Gabor Analysis
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2007-01-01
frequency bands at certain times. Gabor theory can be formulated for both functions on the real line and for discrete signals of finite length. The two theories are largely the same because many aspects come from the same underlying theory of locally compact Abelian groups. The two types of Gabor systems can also be related by sampling and periodization. This thesis extends this theory by showing new results for window construction. It also provides a discussion of the problems associated with discrete Gabor bases. The sampling and periodization connection is handy because it allows Gabor systems on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exist for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite
Statistical intervals a guide for practitioners
Hahn, Gerald J
2011-01-01
Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
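One of the interval types the book distinguishes, a prediction interval for a single future observation, is often confused with a confidence interval for the mean; the sketch below (illustrative data, standard normal-theory formula) shows the extra sqrt(1 + 1/n) factor that makes prediction intervals wider.

```python
# Hedged sketch: two-sided 95% prediction interval for one future
# observation from a normal sample (illustrative data).
import numpy as np
from scipy import stats

data = np.array([9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3])
n, mean, s = len(data), data.mean(), data.std(ddof=1)

tcrit = stats.t.ppf(0.975, n - 1)
half = tcrit * s * np.sqrt(1 + 1/n)      # prediction interval half-width
ci_half = tcrit * s * np.sqrt(1/n)       # for comparison: CI for the mean
interval = (mean - half, mean + half)
```

Tolerance intervals, the third family the book covers, are wider still because they must contain a specified fraction of the population with stated confidence.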
Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow
2013-09-01
The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Interval stability for complex systems
Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.
2018-04-01
Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable to disrupt the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
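The IBS measure described above can be illustrated on a toy system. This is a hedged sketch, not the authors' algorithm: for the bistable flow dx/dt = x - x^3 (stable fixed points at ±1), random perturbed initial states are drawn and IBS is the fraction that return near x = +1 within a finite time T; the perturbation range and tolerance are assumptions.

```python
# Hedged sketch of interval basin stability (IBS) for dx/dt = x - x^3:
# fraction of random perturbed states that return to the attractor x = +1
# within time T (forward-Euler integration, illustrative parameters).
import numpy as np

rng = np.random.default_rng(4)

def simulate(x0, T=10.0, dt=0.01):
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (x - x**3)   # Euler step of the bistable flow
    return x

trials = 500
x0s = 1.0 + rng.uniform(-2.0, 2.0, trials)   # perturbations around x = +1
ibs = float(np.mean([abs(simulate(x0) - 1.0) < 0.05 for x0 in x0s]))
```

Here the basin of x = +1 is x > 0, so with perturbations uniform on [-1, 3] the IBS should approach 0.75 for large T, while shorter intervals T yield smaller values, which is the time dependence the measure is designed to capture.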
Construction of prediction intervals for Palmer Drought Severity Index using bootstrap
Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan
2018-04-01
In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-months) and mid-term (six-months) drought observations. The effects of North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
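The residual-based bootstrap idea can be sketched on a synthetic stand-in for the drought series. This is not the paper's implementation: it fits a simple AR(1) model by least squares and builds a one-step-ahead prediction interval by resampling the centered residuals; the model order, series, and quantile levels are assumptions.

```python
# Hedged sketch: residual-based bootstrap prediction interval for a
# synthetic AR(1) series standing in for a drought index like the PDSI.
import numpy as np

rng = np.random.default_rng(3)

# synthetic "drought index" series
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()

# fit AR(1) by least squares and extract centered residuals
phi = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi * y[:-1]
resid -= resid.mean()

# bootstrap one-step-ahead predictions by resampling residuals
point = phi * y[-1]
preds = point + rng.choice(resid, size=2000, replace=True)
lo, hi = np.quantile(preds, [0.025, 0.975])
```

Because the interval is built from the empirical residual distribution rather than a normality assumption, it remains valid under the skewed innovations typical of hydrological series, which is the property the simulation study above verifies.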
Energy Technology Data Exchange (ETDEWEB)
David B. Wood
2007-10-24
Between 1951 and 1992, 828 underground tests were conducted on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.
Energy Technology Data Exchange (ETDEWEB)
David B. Wood
2009-10-08
Between 1951 and 1992, underground nuclear weapons testing was conducted at 828 sites on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treatment...
Interval selection with machine-dependent intervals
Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.
2013-01-01
We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...
Marchisio, Andrea; Minella, Marco; Maurino, Valter; Minero, Claudio; Vione, Davide
2015-04-15
Chromophoric dissolved organic matter (CDOM) in surface waters is a photochemical source of several transient species such as CDOM triplet states ((3)CDOM*), singlet oxygen ((1)O2) and the hydroxyl radical (OH). By irradiation of lake water samples, it is shown here that the quantum yields for the formation of these transients by CDOM vary depending on the irradiation wavelength range, in the order UVB > UVA > blue. A possible explanation is that radiation at longer wavelengths is preferentially absorbed by the larger CDOM fractions, which show lesser photoactivity compared to smaller CDOM moieties. The quantum yield variations in different spectral ranges were definitely more marked for (3)CDOM* and OH compared to (1)O2. The decrease of the quantum yields with increasing wavelength has important implications for the photochemistry of surface waters, because long-wavelength radiation penetrates deeper in water columns compared to short-wavelength radiation. The average steady-state concentrations of the transients ((3)CDOM*, (1)O2 and OH) were modelled in water columns of different depths, based on the experimentally determined wavelength trends of the formation quantum yields. Important differences were found between such modelling results and those obtained in a wavelength-independent quantum yield scenario. Copyright © 2015 Elsevier Ltd. All rights reserved.
Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.
2008-01-01
In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for
Zhang, L.-C.; Patone, M.
2017-01-01
We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
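The Horvitz–Thompson idea at the core of this estimation approach is simple: weight each sampled value by the inverse of its inclusion probability. Below is a minimal sketch of the classical estimator of a population total; the function name and the toy inclusion probabilities are illustrative, not the graph-sampling extension developed in the paper:

```python
def horvitz_thompson_total(values, inclusion_probs):
    """Horvitz-Thompson estimate of a population total: each sampled
    value is weighted by the inverse of its inclusion probability."""
    return sum(y / p for y, p in zip(values, inclusion_probs))

# Three sampled units with known (invented) inclusion probabilities;
# each term contributes y / p = 20, so the estimated total is 60.
print(horvitz_thompson_total([4.0, 10.0, 7.0], [0.2, 0.5, 0.35]))
```

The estimator is design-unbiased as long as every unit's inclusion probability is known and positive, which is exactly what a formal definition of graph sampling has to guarantee for T-stage snowball designs.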
Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems
DEFF Research Database (Denmark)
Georgiadis, Stylianos; Limnios, Nikolaos
2016-01-01
In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...
Cultural Consensus Theory: Aggregating Continuous Responses in a Finite Interval
Batchelder, William H.; Strashny, Alex; Romney, A. Kimball
Cultural consensus theory (CCT) consists of cognitive models for aggregating responses of "informants" to test items about some domain of their shared cultural knowledge. This paper develops a CCT model for items requiring bounded numerical responses, e.g. probability estimates, confidence judgments, or similarity judgments. The model assumes that each item generates a latent random representation in each informant, with mean equal to the consensus answer and variance depending jointly on the informant and the location of the consensus answer. The manifest responses may reflect biases of the informants. Markov Chain Monte Carlo (MCMC) methods were used to estimate the model, and simulation studies validated the approach. The model was applied to an existing cross-cultural dataset involving native Japanese and English speakers judging the similarity of emotion terms. The results sharpened earlier studies that showed that both cultures appear to have very similar cognitive representations of emotion terms.
Chosen interval methods for solving linear interval systems with special type of matrix
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur when solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. Direct algorithms were chosen for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.
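The enclosure property that such interval methods rely on can be illustrated with a few lines of basic interval arithmetic. This is a toy sketch of the arithmetic itself, not the band-matrix solver of the paper, and it ignores outward rounding, which a real floating-point interval library must add:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

x = Interval(1.5, 2.5)     # an uncertain coefficient
y = Interval(-0.25, 0.25)  # a small error term
z = x * x + y              # the result encloses every possible true value
print(z)                   # Interval(lo=2.0, hi=6.5)
```

Note that naive interval evaluation can overestimate: `x - x` yields `[-1, 1]` rather than `{0}`, the well-known dependency problem that dedicated interval solvers work to control.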
Leamer, Micah J.
2004-01-01
Let K be a field and Q a finite directed multi-graph. In this paper I classify all path algebras KQ and admissible orders with the property that all of their finitely generated ideals have finite Groebner bases.
Locally Finite Root Supersystems
Yousofzadeh, Malihe
2013-01-01
We introduce the notion of locally finite root supersystems as a generalization of both locally finite root systems and generalized root systems. We classify irreducible locally finite root supersystems.
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
Haemostatic reference intervals in pregnancy
DEFF Research Database (Denmark)
Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna
2010-01-01
Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the uncomplicated pregnancies, were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, and free protein S...
Belytschko, Ted; Wing, Kam Liu
1987-01-01
In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probability density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainty. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loading, with the yield stress as the random field.
International Nuclear Information System (INIS)
Hofmann, R.
1982-08-01
STEALTH sample and verification problems are presented to help users become familiar with STEALTH capabilities, input, and output. Problems are grouped into articles which are completely self-contained. The pagination in each article is A.n, where A is a unique alphabetic-character article identifier and n is a sequential page number which starts from 1 on the first page of text for each article. Articles concerning new capabilities will be added as they become available. STEALTH sample and verification calculations are divided into the following general categories: transient mechanical calculations dealing with solids; transient mechanical calculations dealing with fluids; transient thermal calculations dealing with solids; transient thermal calculations dealing with fluids; static and quasi-static calculations; and complex boundary interaction calculations
Haemostatic reference intervals in pregnancy
DEFF Research Database (Denmark)
Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna
2010-01-01
Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals were calculated for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, and free protein S. Several parameters were largely unchanged during pregnancy, delivery, and postpartum and were within non-pregnant reference intervals. However, levels of fibrinogen, D-dimer, and coagulation factors VII, VIII, and IX increased markedly. Protein S activity decreased substantially, while free protein S decreased slightly and total...
INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL
Directory of Open Access Journals (Sweden)
T. A. Kharkovskaia
2014-05-01
The design of interval observers for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters is the following: given an uncertainty constraint on the state values of the system, bounds on the initial conditions, and a set of admissible values for the vector of unknown parameters and inputs, the interval estimates of the system state variables must contain the actual state at every time over the whole considered time segment. Conditions for the design of interval observers for the considered class of systems are given: boundedness of the input and state, the existence of a majorizing function bounding the uncertainty vector of the system, Lipschitz continuity or boundedness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main condition for the design of such an observer is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is also considered: to ensure cooperativity of the interval estimation error dynamics, a static coordinate transformation is proposed. The proposed algorithm is demonstrated by computer modelling of a biological reactor. Possible applications of such interval estimation systems include robust control, where various types of uncertainty in the system dynamics are assumed, biotechnology and environmental systems and processes, mechatronics, and robotics.
Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures
Directory of Open Access Journals (Sweden)
Ionut Bebu
2016-06-01
For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.
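For context, the textbook baseline these methods improve upon is the classical large-sample (Woolf) interval for a single odds ratio, which can be computed in a few lines. The 2x2 counts below are invented, and this is the conventional log-normal approximation, not the generalized pivotal or fiducial construction investigated in the article:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Woolf (log) confidence interval for the odds ratio of a
    2x2 table [[a, b], [c, d]] -- the classical large-sample method."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    half = z * se_log
    return (or_hat,
            math.exp(math.log(or_hat) - half),
            math.exp(math.log(or_hat) + half))

# Invented counts: 20/80 exposed cases/controls vs 10/90 unexposed.
or_hat, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Because this interval relies on a normal approximation on the log scale, its coverage degrades with small cell counts, which is precisely the regime where pivotal and fiducial constructions are attractive.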
International Nuclear Information System (INIS)
Acharya, B.S.; Douglas, M.R.
2006-06-01
We present evidence that the number of string/M theory vacua consistent with experiments is finite. We do this both by explicit analysis of infinite sequences of vacua and by applying various mathematical finiteness theorems. (author)
Nilpotent p-local finite groups
Cantarero, José; Scherer, Jérôme; Viruel, Antonio
2014-10-01
We provide characterizations of p-nilpotency for fusion systems and p-local finite groups that are inspired by known results for finite groups. In particular, we generalize criteria by Atiyah, Brunetti, Frobenius, Quillen, Stammbach and Tate.
Overconfidence in Interval Estimates
Soll, Jack B.; Klayman, Joshua
2004-01-01
Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…
International Nuclear Information System (INIS)
Lee, Byeong Hae
1992-02-01
This book describes the basic finite element method. It covers basic concepts and data handling; black boxes; preparation of data; the definition of vectors and matrices; matrix multiplication, matrix addition, and the unit matrix; the concept of the stiffness matrix in terms of spring force and displacement; the governing equation of an elastic body; the finite element method itself; and Fortran programming, including the organization of the computer, the order of programming, data cards and Fortran cards, a finite element program, and an application to a nonelastic problem.
Alabdulmohsin, Ibrahim M.
2018-01-01
In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.
Alabdulmohsin, Ibrahim M.
2018-03-07
In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We apply these techniques, next, on numerical integration and on some identities of Ramanujan.
Applications of interval computations
Kreinovich, Vladik
1996-01-01
Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...
Using the confidence interval confidently.
Hazra, Avijit
2017-10-01
Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI), which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size, and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is, how large the actual change or difference is. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of the actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
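The general form CI = point estimate ± margin of error can be sketched for a sample mean as follows; the measurements are invented, and the normal critical values (z = 1.96 for 95%, 2.576 for 99%) are used rather than the t values that a small sample would strictly call for:

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    # CI = point estimate ± z * standard error of the mean
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

data = [4.1, 5.0, 4.6, 4.8, 5.3, 4.4, 4.9, 5.1, 4.7, 4.5]  # invented measurements
lo, hi = mean_ci(data)               # 95% CI
lo99, hi99 = mean_ci(data, z=2.576)  # 99% CI: wider for the same sample
print(f"95% CI: ({lo:.2f}, {hi:.2f}); 99% CI: ({lo99:.2f}, {hi99:.2f})")
```

Running this shows the trade-off described in the abstract directly: raising the confidence level from 95% to 99% widens the interval, while a larger sample or lower variability would narrow it.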
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially
Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.
Bartley, David; Slaven, James; Harper, Martin
2017-03-01
The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
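A brute-force version of the underlying idea, summing the exact negative binomial probability mass until the desired quantiles are reached, can be sketched as follows. The mean and dispersion values are invented, and this direct summation stands in for, rather than reproduces, the article's Stirling-based analytical approximation:

```python
import math

def nbinom_quantiles(k, p, probs):
    """Quantiles of a negative binomial distribution by direct CDF summation.
    Parameterization: number of events before the k-th 'success', with
    mean mu = k * (1 - p) / p. Illustrative only; the article derives a
    fast analytical approximation instead of summing terms."""
    targets = sorted(probs)
    out, cdf, x, i = {}, 0.0, 0, 0
    while i < len(targets):
        cdf += math.comb(x + k - 1, x) * p**k * (1 - p)**x  # P(X = x)
        while i < len(targets) and cdf >= targets[i]:
            out[targets[i]] = x
            i += 1
        x += 1
    return out

mu, k = 25.0, 10                 # assumed mean fiber count and dispersion
p = k / (k + mu)                 # overdispersion: variance = mu + mu**2 / k
q = nbinom_quantiles(k, p, [0.025, 0.975])
print(f"approximate 95% count interval: [{q[0.025]}, {q[0.975]}]")
```

The resulting interval is noticeably wider than the corresponding Poisson interval for the same mean, which is exactly the extra human counting variability the negative binomial model is meant to capture.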
Surveillance test interval optimization
International Nuclear Information System (INIS)
Cepin, M.; Mavko, B.
1995-01-01
Technical specifications have been developed on the bases of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimation of the optimal STI and can be calculated analytically by a differentiating equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level
Ruette, Sylvie
2017-01-01
The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...
Fractional finite Fourier transform.
Khare, Kedar; George, Nicholas
2004-07-01
We show that a fractional version of the finite Fourier transform may be defined by using prolate spheroidal wave functions of order zero. The transform is linear and additive in its index and asymptotically goes over to Namias's definition of the fractional Fourier transform. As a special case of this definition, it is shown that the finite Fourier transform may be inverted by using information over a finite range of frequencies in Fourier space, the inversion being sensitive to noise. Numerical illustrations for both forward (fractional) and inverse finite transforms are provided.
International Nuclear Information System (INIS)
Lucha, W.; Neufeld, H.
1986-01-01
We investigate the relation between finiteness of a four-dimensional quantum field theory and global supersymmetry. To this end we consider the most general quantum field theory and analyse the finiteness conditions resulting from the requirement of the absence of divergent contributions to the renormalizations of the parameters of the theory. In addition to the gauge bosons, both fermions and scalar bosons turn out to be a necessary ingredient in a non-trivial finite gauge theory. In all cases discussed, the supersymmetric theory restricted by two well-known constraints on the dimensionless couplings proves to be the unique solution of the finiteness conditions. (Author)
Interval methods: An introduction
DEFF Research Database (Denmark)
Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj
2006-01-01
This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop ''State-of-the-Art in Scientific Computing''. The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...
International Nuclear Information System (INIS)
Turko, B.T.
1983-10-01
A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 μs can be preset. Time can be read out in twelve 24-bit words either via CAMAC Crate Controller or an external FIFO register. LSB time calibration is 78.125 ps. An additional word reads out the operational status of the twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has excellent differential linearity, thermal stability, and crosstalk-free performance.
Experimenting with musical intervals
Lo Presto, Michael C.
2003-07-01
When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
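The repetition frequency described here is just the greatest common divisor of the two fork frequencies when they stand in an integer (just-intonation) ratio. A minimal check, with fork frequencies invented for the example:

```python
from math import gcd

def repetition_frequency(f1_hz, f2_hz):
    """Fundamental of the harmonic series containing both (integer)
    frequencies -- simply their greatest common divisor."""
    return gcd(f1_hz, f2_hz)

# A just perfect fifth (3:2 ratio) between forks at 384 Hz and 256 Hz:
print(repetition_frequency(384, 256))  # -> 128
```

Here 384 Hz and 256 Hz are the 3rd and 2nd harmonics of 128 Hz, so the complex waveform repeats 128 times per second, matching what the captured waveform's period should show.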
Design of sampling tools for Monte Carlo particle transport code JMCT
International Nuclear Information System (INIS)
Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li
2012-01-01
A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions: special sampling methods for particular distributions, and general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open-source codes are included in the general sampling method for the convenience of users. The results show that distributions common in particle transport can be sampled correctly with these tools, and that the user's convenience is assured. (authors)
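One standard "general sampling method for an arbitrary discrete distribution" is inverse-CDF table lookup: build the cumulative weight table once, then binary-search it for each uniform draw. The sketch below is a generic illustration with invented reaction channels, not JMCT's actual implementation:

```python
import bisect
import itertools
import random

def make_discrete_sampler(values, weights, seed=0):
    """Inverse-CDF sampler for an arbitrary discrete distribution."""
    cum = list(itertools.accumulate(weights))  # cumulative weights
    total = cum[-1]
    rng = random.Random(seed)
    def sample():
        # Map a uniform draw in [0, total) to its bucket in the table.
        return values[bisect.bisect_left(cum, rng.random() * total)]
    return sample

# Invented reaction channels with probabilities 0.6 / 0.3 / 0.1:
sample = make_discrete_sampler(["elastic", "capture", "fission"], [0.6, 0.3, 0.1])
counts = {"elastic": 0, "capture": 0, "fission": 0}
for _ in range(10_000):
    counts[sample()] += 1
print(counts)  # roughly proportional to 0.6 : 0.3 : 0.1
```

Each draw costs O(log n) after an O(n) setup; when per-sample speed matters more, the alias method achieves O(1) draws at the cost of a slightly more involved table construction.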
Sman, van der R.G.M.
2006-01-01
In the special case of a relaxation parameter equal to 1, lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the
1996-01-01
Designs and Finite Geometries brings together in one place important contributions and up-to-date research results in this important area of mathematics. Designs and Finite Geometries serves as an excellent reference, providing insight into some of the most important research issues in the field.
Supersymmetric theories and finiteness
International Nuclear Information System (INIS)
Helayel-Neto, J.A.
1989-01-01
We attempt here to present a short survey of the all-order finite Lagrangian field theories known at present in four- and two-dimensional space-times. The question of the possible relevance of these ultraviolet-finite models to the formulation of consistent unified frameworks for the fundamental forces is also addressed. (author)
Alabdulmohsin, Ibrahim M.
2018-03-07
We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.
Finite fields and applications
Mullen, Gary L
2007-01-01
This book provides a brief and accessible introduction to the theory of finite fields and to some of their many fascinating and practical applications. The first chapter is devoted to the theory of finite fields. After covering their construction and elementary properties, the authors discuss the trace and norm functions, bases for finite fields, and properties of polynomials over finite fields. Each of the remaining chapters details applications. Chapter 2 deals with combinatorial topics such as the construction of sets of orthogonal latin squares, affine and projective planes, block designs, and Hadamard matrices. Chapters 3 and 4 provide a number of constructions and basic properties of error-correcting codes and cryptographic systems using finite fields. Each chapter includes a set of exercises of varying levels of difficulty which help to further explain and motivate the material. Appendix A provides a brief review of the basic number theory and abstract algebra used in the text, as well as exercises rel...
Adolph, Karen E.; Robinson, Scott R.
2011-01-01
Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Indian Academy of Sciences (India)
IAS Admin
wavelength, they are called shallow water waves. In the ... Deep and intermediate water waves are dispersive, as the velocity of these depends on wavelength. This is not the ... generation processes, the finite amplitude wave theories are very ...
International Nuclear Information System (INIS)
Rittenberg, V.
1983-01-01
Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method--the transfer matrix technique and the Hamiltonian formalism--are discussed in this paper. The method is presented, with equations deriving the scaling function, critical temperature, and exponent ν. As an application of the method, a 3-state Hamiltonian with Z_3 global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and indices ν estimated by finite-size scaling are given.
Supersymmetry at finite temperature
International Nuclear Information System (INIS)
Clark, T.E.; Love, S.T.
1983-01-01
Finite-temperature supersymmetry (SUSY) is characterized by unbroken Ward identities for SUSY variations of ensemble averages of Klein-operator inserted imaginary time-ordered products of fields. Path-integral representations of these products are defined and the Feynman rules in superspace are given. The finite-temperature no-renormalization theorem is derived. Spontaneously broken SUSY at zero temperature is shown not to be restored at high temperature. (orig.)
Finite element model updating of a small steel frame using neural networks
International Nuclear Information System (INIS)
Zapico, J L; González, M P; Alonso, R; González-Buelga, A
2008-01-01
This paper presents an experimental and analytical dynamic study of a small-scale steel frame. The experimental model was physically built and dynamically tested on a shaking table in a series of different configurations obtained from the original one by changing the mass and by causing structural damage. Finite element modelling and parameterization with physical meaning is iteratively tried for the original undamaged configuration. The finite element model is updated through a neural network, the natural frequencies of the model being the net input. The updating process is made more accurate and robust by using a regressive procedure, which constitutes an original contribution of this work. A novel simplified analytical model has been developed to evaluate the reduction of bending stiffness of the elements due to damage. The experimental results of the rest of the configurations have been used to validate both the updated finite element model and the analytical one. The statistical properties of the identified modal data are evaluated. From these, the statistical properties and a confidence interval for the estimated model parameters are obtained by using the Latin Hypercube sampling technique. The results obtained are successful: the updated model accurately reproduces the low modes identified experimentally for all configurations, and the statistical study of the transmission of errors yields a narrow confidence interval for all the identified parameters
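The Latin Hypercube sampling technique mentioned above can be sketched generically (unit hypercube, pure standard library; not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Basic Latin Hypercube sample on the unit hypercube.

    Each axis is cut into n_samples equal strata; every stratum is hit
    exactly once, with uniform jitter inside it. (A generic sketch of
    the technique named in the abstract, not the study's code.)
    """
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)          # decouple the strata across dimensions
        cols.append(col)
    return [list(point) for point in zip(*cols)]

pts = latin_hypercube(10, 2)
# along each axis, exactly one point falls in each tenth of [0, 1]
```

Mapping each coordinate through a parameter's distribution then yields the stratified parameter samples used to propagate uncertainty through the updated model.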
Interval Size and Affect: An Ethnomusicological Perspective
Directory of Open Access Journals (Sweden)
Sarha Moore
2013-08-01
This commentary addresses Huron and Davis's question of whether "The Harmonic Minor Provides an Optimum Way of Reducing Average Melodic Interval Size, Consistent with Sad Affect Cues" within any non-Western musical cultures. The harmonic minor scale and other semitone-heavy scales, such as Bhairav raga and Hicaz makam, are featured widely in the musical cultures of North India and the Middle East. Do melodies from these genres also have a preponderance of semitone intervals and a low incidence of the augmented second interval, as in Huron and Davis's sample? Does the presence of more semitone intervals in a melody affect its emotional connotations in different cultural settings? Are all semitone intervals equal in their effect? My own ethnographic research within these cultures reveals comparable connotations in melodies that linger on semitone intervals, centered on concepts of tension and metaphors of falling. However, across different musical cultures there may also be neutral or lively interpretations of these same pitch sets, dependent on context, manner of performance, and tradition. Small pitch movement may also be associated with social functions such as prayer or lullabies, and may not be described as "sad." "Sad," moreover, may not connote the same affect cross-culturally.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components, providing a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention among statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
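Maximum likelihood estimation of a two-component normal mixture is typically carried out with the EM algorithm; here is a textbook sketch on synthetic data (not the paper's stock and rubber price series):

```python
import math
import random

def em_two_normal(data, iters=200):
    """EM fit of a two-component univariate normal mixture.

    A textbook sketch of the estimation idea; the paper's data and
    modelling details are not reproduced here.
    """
    mu = [min(data), max(data)]      # crude but effective initial means
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            d = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = d[0] + d[1]
            resp.append([d[0] / s, d[1] / s])
        # M-step: reweighted moment updates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return pi, mu, var

rng = random.Random(1)
data = ([rng.gauss(0, 1) for _ in range(300)]
        + [rng.gauss(5, 1) for _ in range(300)])
pi, mu, var = em_two_normal(data)
# mu recovers roughly (0, 5) and pi roughly (0.5, 0.5)
```

Each EM iteration increases the likelihood, which is why the procedure inherits the consistency properties of maximum likelihood that the abstract emphasizes.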
International Nuclear Information System (INIS)
Feinsilver, Philip; Schott, Rene
2009-01-01
We discuss topics related to finite-dimensional calculus in the context of finite-dimensional quantum mechanics. The truncated Heisenberg-Weyl algebra is called a TAA algebra after Tekin, Aydin and Arik, who formulated it in terms of orthofermions. It is shown how to use a matrix approach to implement analytic representations of the Heisenberg-Weyl algebra in univariate and multivariate settings. We provide examples for the univariate case. Krawtchouk polynomials are presented in detail, including a review that illustrates some curious properties of the Heisenberg-Weyl algebra, as well as an approach to computing Krawtchouk expansions. From a mathematical perspective, we are providing indications as to how to implement, in finite terms, Rota's 'finite operator calculus'.
Finite temperature field theory
Das, Ashok
1997-01-01
This book discusses all three formalisms used in the study of finite temperature field theory, namely the imaginary time formalism, the closed time formalism and thermofield dynamics. Applications of the formalisms are worked out in detail. Gauge field theories and symmetry restoration at finite temperature are among the practical examples discussed in depth. The question of gauge dependence of the effective potential and the Nielsen identities are explained. The nonrestoration of some symmetries at high temperature (such as supersymmetry) and theories on nonsimply connected space-times are al
International Nuclear Information System (INIS)
Wachspress, E.
2009-01-01
Triangles and rectangles are the ubiquitous elements in finite element studies. Only these elements admit polynomial basis functions. Rational functions provide a basis for elements having any number of straight and curved sides. Numerical complexities initially associated with rational bases precluded extensive use. Recent analysis has reduced these difficulties and programs have been written to illustrate effectiveness. Although incorporation in major finite element software requires considerable effort, there are advantages in some applications which warrant implementation. An outline of the basic theory and of recent innovations is presented here. (authors)
Interpregnancy interval and risk of autistic disorder.
Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla
2013-11-01
A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.
Finite-time braiding exponents
Budišić, Marko; Thiffeault, Jean-Luc
2015-08-01
Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories is increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.
International Nuclear Information System (INIS)
Meszaros, A.
1984-05-01
In case the graviton has a very small non-zero mass, the existence of six additional massive gravitons with very big masses leads to a finite quantum gravity. There is an acausal behaviour on the scales that is determined by the masses of additional gravitons. (author)
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to Vanden Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
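The full Bulirsch-Stoer recursion is too long to reproduce here, but its much simpler relative, Aitken's Δ² acceleration, illustrates what sequence extrapolation does; this example is not from the paper:

```python
def aitken(seq):
    """Aitken's Δ² acceleration of a convergent sequence.

    A far simpler relative of the Vanden Broeck-Schwartz and
    Bulirsch-Stoer schemes discussed above; it is exact for sequences
    converging geometrically, x_n = L + c * r**n.
    """
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = c - 2 * b + a
        out.append(b if denom == 0 else a - (b - a) ** 2 / denom)
    return out

seq = [1 + 0.5 ** n for n in range(8)]   # converges to 1 like 0.5**n
acc = aitken(seq)
# every accelerated term lands on the limit 1.0 (up to rounding)
```

The finite-lattice schemes compared in the paper play the same role: they estimate the infinite-lattice limit from a handful of finite-size values, only with more refined convergence assumptions.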
Energy Technology Data Exchange (ETDEWEB)
Kapetanakis, D. (Technische Univ. Muenchen, Garching (Germany). Physik Dept.); Mondragon, M. (Technische Univ. Muenchen, Garching (Germany). Physik Dept.); Zoupanos, G. (National Technical Univ., Athens (Greece). Physics Dept.)
1993-09-01
We present phenomenologically viable SU(5) unified models which are finite to all orders before the spontaneous symmetry breaking. In the case of two models with three families the top quark mass is predicted to be 178.8 GeV. (orig.)
International Nuclear Information System (INIS)
Kapetanakis, D.; Mondragon, M.
1993-01-01
It is shown how to obtain phenomenologically viable SU(5) unified models which are finite to all orders before the spontaneous symmetry breaking. A very interesting feature of the models with three families is that they predict the top quark mass to be around 178 GeV. 16 refs
Czech Academy of Sciences Publication Activity Database
Šorel, Michal; Šíma, Jiří
2004-01-01
Roč. 62, - (2004), s. 93-110 ISSN 0925-2312 R&D Projects: GA AV ČR IAB2030007; GA MŠk LN00A056 Keywords : radial basis function * neural network * finite automaton * Boolean circuit * computational power Subject RIV: BA - General Mathematics Impact factor: 0.641, year: 2004
Weiser, Martin
2016-01-01
All relevant implementation aspects of finite element methods are discussed in this book. The focus is on algorithms and data structures as well as on their concrete implementation. Theory is covered as far as it gives insight into the construction of algorithms. Throughout the exercises a complete FE-solver for scalar 2D problems will be implemented in Matlab/Octave.
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
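A hedged sketch of the quantity under study, sensitivity at a fixed specificity, using a plain plug-in estimate with a percentile bootstrap rather than the paper's empirical likelihood intervals; all data here are synthetic:

```python
import random

def sensitivity_at_specificity(healthy, diseased, spec=0.90):
    """Sensitivity at the cut-off giving the desired specificity.

    Cut-off = the `spec` quantile of the healthy scores; sensitivity =
    fraction of diseased scores above it (higher score = more diseased).
    A plug-in estimate, not the paper's empirical-likelihood method.
    """
    cut = sorted(healthy)[int(spec * len(healthy)) - 1]
    return sum(s > cut for s in diseased) / len(diseased)

def bootstrap_ci(healthy, diseased, spec=0.90, b=1000, alpha=0.05,
                 rng=random.Random(0)):
    """Percentile-bootstrap interval for the sensitivity estimate."""
    stats = sorted(
        sensitivity_at_specificity(
            [rng.choice(healthy) for _ in healthy],
            [rng.choice(diseased) for _ in diseased], spec)
        for _ in range(b))
    return stats[int(alpha / 2 * b)], stats[int((1 - alpha / 2) * b) - 1]

# illustrative synthetic test scores, not data from the article
rng = random.Random(1)
healthy = [rng.gauss(0.0, 1.0) for _ in range(200)]
diseased = [rng.gauss(1.5, 1.0) for _ in range(200)]
lo, hi = bootstrap_ci(healthy, diseased)
```

The paper's point is that intervals built from the scaled chi-square limiting distribution of the profile empirical likelihood ratio can improve on such bootstrap intervals in coverage.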
Finite mode analysis through harmonic waveguides
Alieva, T.; Wolf, K.B.
2000-01-01
The mode analysis of signals in a multimodal shallow harmonic waveguide whose eigenfrequencies are equally spaced and finite can be performed by an optoelectronic device, of which the optical part uses the guide to sample the wave field at a number of sensors along its axis and the electronic part
Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients
Andersson, Björn; Xin, Tao
2018-01-01
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Phase transitions in finite systems
Energy Technology Data Exchange (ETDEWEB)
Chomaz, Ph. [Grand Accelerateur National d' Ions Lourds (GANIL), DSM-CEA / IN2P3-CNRS, 14 - Caen (France); Gulminelli, F. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire
2002-07-01
In this series of lectures we will first review the general theory of phase transition in the framework of information theory and briefly address some of the well known mean field solutions of three dimensional problems. The theory of phase transitions in finite systems will then be discussed, with a special emphasis to the conceptual problems linked to a thermodynamical description for small, short-lived, open systems as metal clusters and data samples coming from nuclear collisions. The concept of negative heat capacity developed in the early seventies in the context of self-gravitating systems will be reinterpreted in the general framework of convexity anomalies of thermo-statistical potentials. The connection with the distribution of the order parameter will lead us to a definition of first order phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. Finally a careful study of the thermodynamical limit will provide a bridge with the standard theory of phase transitions and show that in a wide class of physical situations the different statistical ensembles are irreducibly inequivalent. (authors)
Strong interaction at finite temperature
Indian Academy of Sciences (India)
Quantum chromodynamics; finite temperature; chiral perturbation theory; QCD sum rules. PACS Nos 11.10. ... at finite temperature. The self-energy diagrams of figure 2 modify it to ... method of determination at present.
Supersymmetry at finite temperature
International Nuclear Information System (INIS)
Oliveira, M.W. de.
1986-01-01
The consequences of incorporating finite-temperature effects into field theories are investigated. In particular, we consider the supersymmetric non-linear sigma model, calculating the effective potential in the large N limit. We first present the 1/N expansion formalism and, for the O(N) model of a scalar field, show the impossibility of spontaneous symmetry breaking. Next, we study the same model at finite temperature and in the presence of conserved charges (the generators of the O(N) symmetry). We conclude that these conserved charges explicitly break the symmetry. We introduce a calculation method for the thermodynamic potential of the theory in the presence of chemical potentials. We present an introduction to supersymmetry with the aim of describing some concepts important for the treatment at T>0. We show that supersymmetry is broken for any T>0, in opposition to what one would expect from the solution of the hierarchy problem. (author) [pt
Directory of Open Access Journals (Sweden)
M.H.R. Ghoreishy
2008-02-01
This research work is devoted to the footprint analysis of a steel-belted radial tyre (185/65R14) under vertical static load using the finite element method. Two models have been developed: in the first, the tread patterns were replaced by simple ribs, while the second included the details of the tread blocks. Linear elastic and hyperelastic (Arruda-Boyce) material models were selected to describe the mechanical behavior of the reinforcing and rubbery parts, respectively. The two finite element models of the tyre were analyzed under inflation pressure and vertical static loads. The second model (with detailed tread patterns) was analyzed with and without friction between the tread and contact surfaces. At every stage of the analysis, the results were compared with the experimental data to confirm the accuracy and applicability of the model. Results showed that neglecting the tread pattern design reduces the computational cost and effort while the computed deformations do not change significantly. However, more complicated variables such as the shape and area of the footprint zone and the contact pressure are affected considerably by the finite element model selected for the tread blocks. In addition, the inclusion of friction, even in the static state, changes these variables significantly.
Inverse Interval Matrix: A Survey
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Farhadsefat, R.
2011-01-01
Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf
Dynamic Properties of QT Intervals
Czech Academy of Sciences Publication Activity Database
Halámek, Josef; Jurák, Pavel; Vondra, Vlastimil; Lipoldová, J.; Leinveber, Pavel; Plachý, M.; Fráňa, P.; Kára, T.
2009-01-01
Roč. 36, - (2009), s. 517-520 ISSN 0276-6574 R&D Projects: GA ČR GA102/08/1129; GA MŠk ME09050 Institutional research plan: CEZ:AV0Z20650511 Keywords : QT Intervals * arrhythmia diagnosis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://cinc.mit.edu/archives/2009/pdf/0517.pdf
Robust misinterpretation of confidence intervals
Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan
2014-01-01
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Chaotic dynamics from interspike intervals
DEFF Research Database (Denmark)
Pavlov, A N; Sosnovtseva, Olga; Mosekilde, Erik
2001-01-01
Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov expone...
Optical Finite Element Processor
Casasent, David; Taylor, Bradley K.
1986-01-01
A new high-accuracy optical linear algebra processor (OLAP) with many advantageous features is described. It achieves floating point accuracy, handles bipolar data by sign-magnitude representation, performs LU decomposition using only one channel, is easily partitioned, and takes data flow into consideration. A new application (finite element (FE) structural analysis) for OLAPs is introduced and the results of a case study are presented. Error sources in encoded OLAPs are addressed for the first time. Their modeling and simulation are discussed and quantitative data are presented. Dominant error sources and the effects of composite error sources are analyzed.
Anderson, Ian
2011-01-01
Coherent treatment provides comprehensive view of basic methods and results of the combinatorial study of finite set systems. The Clements-Lindstrom extension of the Kruskal-Katona theorem to multisets is explored, as is the Greene-Kleitman result concerning k-saturated chain partitions of general partially ordered sets. Connections with Dilworth's theorem, the marriage problem, and probability are also discussed. Each chapter ends with a helpful series of exercises and outline solutions appear at the end. ""An excellent text for a topics course in discrete mathematics."" - Bulletin of the Ame
Directory of Open Access Journals (Sweden)
Érica Luciana de Paula Furlan
2012-10-01
OBJECTIVES: To develop a fetal weight prediction model and longitudinal percentiles of estimated fetal weight (EFW) from a sample of the Brazilian population. METHODS: Prospective observational study. Two groups of pregnant women were recruited: Group EFW (fetal weight estimation): patients for the development (EFW-El) and validation (EFW-Val) of a fetal weight prediction model; Group LRI (longitudinal reference intervals): pregnant women for the development (LRI-El) and validation (LRI-Val) of longitudinal reference intervals of EFW. Polynomial regression was applied to the EFW-El subgroup data to generate the fetal weight prediction model, and its performance was compared with that of other models available in the literature. Linear mixed models were used to build longitudinal EFW intervals from the LRI-El subgroup data; the LRI-Val subgroup data were used to validate these intervals. RESULTS: Four hundred and fifty-eight patients made up Group EFW (EFW-El: 367; EFW-Val: 91) and 315 made up Group LRI (LRI-El: 265; LRI-Val: 50). The formula for calculating EFW was: EFW = -8.277 + 2.146 × DBP × CA × CF - 2.449 × CF × DBP², where DBP is the biparietal diameter, CA the abdominal circumference, and CF the femur length. The performance of other fetal weight estimation formulas in our sample was significantly worse than that of the model generated in this study. Equations for predicting conditional EFW percentiles were derived from the longitudinal evaluations of the LRI-El subgroup and validated with the LRI-Val subgroup data. CONCLUSIONS: We describe a method for fitting longitudinal reference intervals of EFW, obtained from formulas generated in a sample of the Brazilian population.
Complete Blood Count Reference Intervals for Healthy Han Chinese Adults
Mu, Runqing; Guo, Wei; Qiao, Rui; Chen, Wenxiang; Jiang, Hong; Ma, Yueyun; Shang, Hong
2015-01-01
Background Complete blood count (CBC) reference intervals are important to diagnose diseases, screen blood donors, and assess overall health. However, current reference intervals established by older instruments and technologies and those from American and European populations are not suitable for Chinese samples due to ethnic, dietary, and lifestyle differences. The aim of this multicenter collaborative study was to establish CBC reference intervals for healthy Han Chinese adults. Methods A total of 4,642 healthy individuals (2,136 males and 2,506 females) were recruited from six clinical centers in China (Shenyang, Beijing, Shanghai, Guangzhou, Chengdu, and Xi’an). Blood samples collected in K2EDTA anticoagulant tubes were analyzed. Analysis of variance was performed to determine differences in consensus intervals according to the use of data from the combined sample and selected samples. Results Median and mean platelet counts from the Chengdu center were significantly lower than those from other centers. Red blood cell count (RBC), hemoglobin (HGB), and hematocrit (HCT) values were higher in males than in females at all ages. Other CBC parameters showed no significant instrument-, region-, age-, or sex-dependent difference. Thalassemia carriers were found to affect the lower or upper limit of different RBC profiles. Conclusion We were able to establish consensus intervals for CBC parameters in healthy Han Chinese adults. RBC, HGB, and HCT intervals were established for each sex. The reference interval for platelets for the Chengdu center should be established independently. PMID:25769040
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
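The baseline method under discussion, the Fisher z' interval, is simple to compute. A sketch of the standard construction (not code from the paper, which supplies its own R implementation):

```python
import math
from statistics import NormalDist

def fisher_z_ci(r, n, conf=0.95):
    """Standard Fisher z' confidence interval for a sample correlation r
    computed from n pairs (assumes bivariate normality)."""
    z = math.atanh(r)                         # Fisher transformation
    se = 1.0 / math.sqrt(n - 3)               # approximate standard error of z
    crit = NormalDist().inv_cdf((1 + conf) / 2)
    # back-transform the endpoints to the correlation scale
    return math.tanh(z - crit * se), math.tanh(z + crit * se)
```

Under nonnormality, the abstract's point is precisely that this interval can be badly miscalibrated; the robust alternatives (Spearman, RIN) apply the same construction to rank-transformed data.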
Kalita, Jiten C.; Biswas, Sougata; Panda, Swapnendu
2018-04-01
To date, the sequence of vortices present in the solid corners of steady internal viscous incompressible flows has been thought to be infinite. However, existing and recent geometric theories of incompressible viscous flows that express vortical structures in terms of critical points in bounded domains strongly oppose this notion of infiniteness. In this study, we endeavor to bridge the gap between the two opposing streams of thought by examining the assumptions of the existing theorems on such vortices. We provide our own set of proofs establishing the finiteness of the sequence of corner vortices by making use of the continuum hypothesis and the Kolmogorov scale, which guarantee a nonzero scale for the smallest possible vortex structure in incompressible viscous flows. We point out that the notion of infiniteness resulting from discrete self-similarity of the vortex structures is not physically feasible. Using elementary concepts of mathematical analysis and our own construction of diametric disks, we conclude that the sequence of corner vortices is finite.
The Determining Finite Automata Process
Directory of Open Access Journals (Sweden)
M. S. Vinogradova
2017-01-01
The theory of formal languages widely uses finite state automata, both in the implementation of the automata-based approach to programming and in the synthesis of logical control algorithms. To ensure unambiguous operation of the algorithms, the synthesized finite state automata must be deterministic. Within approaches to the synthesis of mobile robot controls based on the theory of formal languages, various finite automata must be constructed, but such automata, as a rule, are not deterministic. A determinization algorithm can be applied to finite automata specified in various ways. The basic ideas of the determinization algorithm are most simply explained using the representation of a finite automaton as a weighted directed graph. The paper deals with finite automata represented as weighted directed graphs and discusses in detail the procedure for determinizing finite automata represented in this way. It gives a detailed description of the determinization algorithm, and a large number of examples illustrate its operation.
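The determinization procedure described is, at its core, the classical subset construction. A minimal sketch for an NFA without ε-transitions (the function signature is illustrative, not from the paper):

```python
def determinize(alphabet, delta, start, accepting):
    """Subset construction: build a DFA whose states are sets of NFA states.
    delta maps (state, symbol) to a set of successor NFA states."""
    start_set = frozenset([start])
    dfa_delta, seen, todo = {}, {start_set}, [start_set]
    while todo:
        current = todo.pop()
        for symbol in alphabet:
            # the DFA successor is the union of all NFA successors
            nxt = frozenset(t for q in current for t in delta.get((q, symbol), ()))
            dfa_delta[(current, symbol)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    dfa_accepting = {s for s in seen if s & accepting}
    return dfa_delta, start_set, dfa_accepting
```

Each DFA state is a set of NFA states, so the construction is exponential in the worst case, which is why graph representations and worked examples help in presenting it.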
Finite energy electroweak dyon
Energy Technology Data Exchange (ETDEWEB)
Kimm, Kyoungtae [Seoul National University, Faculty of Liberal Education, Seoul (Korea, Republic of); Yoon, J.H. [Konkuk University, Department of Physics, College of Natural Sciences, Seoul (Korea, Republic of); Cho, Y.M. [Konkuk University, Administration Building 310-4, Seoul (Korea, Republic of); Seoul National University, School of Physics and Astronomy, Seoul (Korea, Republic of)
2015-02-01
The latest MoEDAL experiment at LHC to detect the electroweak monopole makes the theoretical prediction of the monopole mass an urgent issue. We discuss three different ways to estimate the mass of the electroweak monopole. We first present the dimensional and scaling arguments which indicate the monopole mass to be around 4 to 10 TeV. To justify this we construct finite energy analytic dyon solutions which could be viewed as the regularized Cho-Maison dyon, modifying the coupling strength at short distance. Our result demonstrates that a genuine electroweak monopole whose mass scale is much smaller than the grand unification scale can exist, which can actually be detected at the present LHC. (orig.)
Probabilistic fracture finite elements
Liu, W. K.; Belytschko, T.; Lua, Y. J.
1991-05-01
Probabilistic Fracture Mechanics (PFM) is a promising method for estimating the fatigue life and inspection cycles for mechanical and structural components. The Probabilistic Finite Element Method (PFEM), which is based on second moment analysis, has proved to be a promising, practical approach to handle problems with uncertainties. As the PFEM provides a powerful computational tool to determine the first and second moments of random parameters, the second moment reliability method can easily be combined with the PFEM to obtain measures of the reliability of the structural system. The method is also being applied to fatigue crack growth. Uncertainties in the material properties of advanced materials such as polycrystalline alloys, ceramics, and composites are commonly observed in experimental tests. This is mainly attributed to intrinsic microcracks, which are randomly distributed as a result of the applied load and the residual stress.
International Nuclear Information System (INIS)
Tonks, M.R.; Williamson, R.; Masson, R.
2015-01-01
The Finite Element Method (FEM) is a numerical technique for finding approximate solutions to boundary value problems (BVPs). While FEM is commonly used to solve solid mechanics equations, it can be applied to a large range of BVPs from many different fields. FEM has been used for reactor fuels modelling for many years. It is most often used for fuel performance modelling at the pellet and pin scale; however, it has also been used to investigate properties of the fuel material, such as thermal conductivity and fission gas release. Recently, the United States Department of Energy Nuclear Energy Advanced Modelling and Simulation Program has begun using FEM as the basis of the MOOSE-BISON-MARMOT Project, which is developing a multi-dimensional, multi-physics fuel performance capability that is massively parallel and will use multi-scale material models to provide a truly predictive modelling capability. (authors)
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
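For the one-sample case, the adjusted Wald interval amounts to adding pseudo-observations before applying the usual formula. A sketch of the familiar Agresti-Coull "+2 successes, +4 trials" version (the paired-data adjustment proposed in the article itself is analogous but not reproduced here):

```python
import math
from statistics import NormalDist

def adjusted_wald_ci(successes, n, conf=0.95):
    """One-sample adjusted Wald interval: add 2 successes and 2 failures,
    then apply the ordinary Wald formula to the adjusted proportion."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    n_adj = n + 4
    p_adj = (successes + 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)
```

The appeal noted in the abstract is visible here: the adjustment is a two-line change to the textbook Wald interval, yet it markedly improves coverage for small samples.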
Indirect methods for reference interval determination - review and recommendations.
Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim
2018-04-19
Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.
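Whether the reference values are gathered directly or mined indirectly from routine results, the final step is typically a nonparametric central 95% interval. A minimal sketch (the percentile convention used here is one of several in use):

```python
def reference_interval(values, lower=0.025, upper=0.975):
    """Central 95% reference interval from a sample of reference values,
    using a simple order-statistic percentile (other conventions exist)."""
    xs = sorted(values)
    n = len(xs)
    def percentile(p):
        # nearest-lower-rank percentile on the sorted sample
        return xs[min(n - 1, int(p * (n - 1)))]
    return percentile(lower), percentile(upper)
```

The statistical challenge the working group highlights for indirect methods lies upstream of this step: separating the healthy subpopulation from diseased results before the percentiles are taken.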
Axial anomaly at finite temperature and finite density
International Nuclear Information System (INIS)
Qian Zhixin; Su Rukeng; Yu, P.K.N.
1994-01-01
The U(1) axial anomaly in a hot fermion medium is investigated by using the real time Green's function method. After calculating the lowest order triangle diagrams, we find that finite temperature as well as finite fermion density does not affect the axial anomaly. The higher order corrections for the axial anomaly are discussed. (orig.)
Single interval Rényi entropy at low temperature
Chen, Bin; Wu, Jie-qiang
2014-08-01
In this paper, we calculate the Rényi entropy of one single interval on a circle at finite temperature in 2D CFT. In the low temperature limit, we expand the thermal density matrix level by level in the vacuum Verma module, and calculate the first few leading terms in e^(−π/TL) explicitly. On the other hand, we compute the same Rényi entropy holographically. After considering the dependence of the Rényi entropy on the temperature, we manage to fix the interval-independent constant terms in the classical part of holographic Rényi entropy. We furthermore extend the analysis in [9] to higher orders and find exact agreement between the results from field theory and bulk computations in the large central charge limit. Our work provides another piece of evidence to support holographic computation of Rényi entropy in AdS3/CFT2 correspondence, even with thermal effect.
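For reference, the Rényi entropy in question is the standard replica-trace quantity (a textbook definition, not a formula taken from this paper):

```latex
S_n \;=\; \frac{1}{1-n}\,\log \operatorname{Tr} \rho_A^{\,n},
\qquad
S_{\mathrm{EE}} \;=\; \lim_{n\to 1} S_n \;=\; -\operatorname{Tr} \rho_A \log \rho_A ,
```

where ρ_A is the reduced density matrix of the interval; at finite temperature it descends from the thermal density matrix rather than the vacuum, which is what the level-by-level Verma module expansion organizes.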
Dijets at large rapidity intervals
Pope, B G
2001-01-01
Inclusive dijet production at large pseudorapidity intervals (Δη) between the two jets has been suggested as a regime for observing BFKL dynamics. We have measured the dijet cross section for large Δη in pp̄ collisions at √s = 1800 and 630 GeV using the DØ detector. The partonic cross section increases strongly with the size of Δη. The observed growth is even stronger than expected on the basis of BFKL resummation in the leading logarithmic approximation. The growth of the partonic cross section can be accommodated with an effective BFKL intercept of α_BFKL(20 GeV) = 1.65 ± 0.07.
On sampling social networking services
Wang, Baiyang
2012-01-01
This article aims at summarizing the existing methods for sampling social networking services and proposing a faster confidence interval for related sampling methods. It also includes comparisons of common network sampling techniques.
Axial anomaly at finite temperature
International Nuclear Information System (INIS)
Chaturvedi, S.; Gupte, Neelima; Srinivasan, V.
1985-01-01
The Jackiw-Bardeen-Adler anomaly for QED₄ and QED₂ is calculated at finite temperature. It is found that the anomaly is independent of temperature. Ishikawa's method [1984, Phys. Rev. Lett. vol. 53, 1615] for calculating the quantised Hall effect is extended to finite temperature. (author)
Finite flavour groups of fermions
International Nuclear Information System (INIS)
Grimus, Walter; Ludl, Patrick Otto
2012-01-01
We present an overview of the theory of finite groups, with regard to their application as flavour symmetries in particle physics. In a general part, we discuss useful theorems concerning group structure, conjugacy classes, representations and character tables. In a specialized part, we attempt to give a fairly comprehensive review of finite subgroups of SO(3) and SU(3), in which we apply and illustrate the general theory. Moreover, we also provide a concise description of the symmetric and alternating groups and comment on the relationship between finite subgroups of U(3) and finite subgroups of SU(3). Although in this review we give a detailed description of a wide range of finite groups, the main focus is on the methods which allow the exploration of their different aspects. (topical review)
On finite quantum field theories
International Nuclear Information System (INIS)
Rajpoot, S.; Taylor, J.G.
1984-01-01
The properties that make massless versions of N = 4 super Yang-Mills theory and a class of N = 2 supersymmetric theories finite are: (I) a universal coupling for the gauge and matter interactions, (II) anomaly-free representations to which the bosonic and fermionic matter belong, and (III) no charge renormalisation, i.e. β(g) = 0. It was conjectured that field theories constructed out of N = 1 matter multiplets are also finite if they too share the above properties. Explicit calculations have verified these theories to be finite up to two loops. The implications of the finiteness conditions for N = 1 finite field theories with SU(M) gauge symmetry are discussed. (orig.)
Massively Parallel Finite Element Programming
Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang
2010-01-01
Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik; Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
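Some of the interval statistics the report describes are straightforward; for instance, the tightest bounds on the sample mean of interval data come from averaging all lower and all upper endpoints. A minimal sketch (bounding the variance of interval data, by contrast, is computationally hard in general):

```python
def interval_mean_bounds(intervals):
    """Bounds on the sample mean when each measurement is only known
    to lie in an interval [lo, hi]."""
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n   # every datum at its lower endpoint
    hi = sum(b for _, b in intervals) / n   # every datum at its upper endpoint
    return lo, hi
```

This illustrates the report's tradeoff directly: wider measurement intervals propagate into a wider band of possible statistics, regardless of sample size.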
Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case
Koike, Ken-ichi
2007-01-01
For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.
On entire functions restricted to intervals, partition of unities, and dual Gabor frames
DEFF Research Database (Denmark)
Christensen, Ole; Kim, Hong Oh; Kim, Rae Young
2014-01-01
Partitions of unity appear in many places in analysis. Typically they are generated by compactly supported functions with a certain regularity. In this paper we consider partitions of unity obtained as integer translates of entire functions restricted to finite intervals. We characterize the entire functions…
Some Characterizations of Convex Interval Games
Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.
2008-01-01
This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.
International Nuclear Information System (INIS)
Souza, Manoelito M. de
1997-01-01
We discuss the physical meaning and the geometric interpretation of causality implementation in classical field theories. The origin of infinities and other inconsistencies in field theories is traced to fields defined with support on the light cone; a finite and consistent field theory requires a light-cone generator as the field support. We then introduce a classical field theory with support on the light-cone generators. This results in a description of discrete (point-like) interactions in terms of localized particle-like fields. We find the propagators of these particle-like fields and discuss their physical meaning, properties and consequences. They are conformally invariant, singularity-free, and describe a manifestly covariant (1 + 1)-dimensional dynamics in a (3 + 1) spacetime. Remarkably, this conformal symmetry remains even for the propagation of a massive field in four spacetime dimensions. We apply this formalism to classical electrodynamics and to the General Relativity Theory. The standard formalism with its distributed fields is retrieved in terms of spacetime averages of the discrete field. Singularities are the by-products of the averaging process. This new formalism clarifies the meaning and the problems of field theory, and may allow a softer transition to a quantum theory. (author)
Mimetic finite difference method
Lipnikov, Konstantin; Manzini, Gianmarco; Shashkov, Mikhail
2014-01-01
The mimetic finite difference (MFD) method mimics fundamental properties of mathematical and physical systems including conservation laws, symmetry and positivity of solutions, duality and self-adjointness of differential operators, and exact mathematical identities of the vector and tensor calculus. This article is the first comprehensive review of the 50-year long history of the mimetic methodology and describes in a systematic way the major mimetic ideas and their relevance to academic and real-life problems. The supporting applications include diffusion, electromagnetics, fluid flow, and Lagrangian hydrodynamics problems. The article provides enough details to build various discrete operators on unstructured polygonal and polyhedral meshes and summarizes the major convergence results for the mimetic approximations. Most of these theoretical results, which are presented here as lemmas, propositions and theorems, are either original or an extension of existing results to a more general formulation using polyhedral meshes. Finally, flexibility and extensibility of the mimetic methodology are shown by deriving higher-order approximations, enforcing discrete maximum principles for diffusion problems, and ensuring the numerical stability for saddle-point systems.
Estimating reliable paediatric reference intervals in clinical chemistry and haematology.
Ridefelt, Peter; Hellberg, Dan; Aldrimer, Mattias; Gustafsson, Jan
2014-01-01
Very few high-quality studies on paediatric reference intervals for general clinical chemistry and haematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The present review summarises current reference interval studies for common clinical chemistry and haematology analyses. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Differentially Private Confidence Intervals for Empirical Risk Minimization
Wang, Yue; Kifer, Daniel; Lee, Jaewoo
2018-01-01
The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...
Finite element and finite difference methods in electromagnetic scattering
Morgan, MA
2013-01-01
This second volume in the Progress in Electromagnetic Research series examines recent advances in computational electromagnetics, with emphasis on scattering, as brought about by new formulations and algorithms which use finite element or finite difference techniques. Containing contributions by some of the world's leading experts, the papers thoroughly review and analyze this rapidly evolving area of computational electromagnetics. Covering topics ranging from the new finite-element based formulation for representing time-harmonic vector fields in 3-D inhomogeneous media using two coupled sca
Application of the entropic coefficient for interval number optimization during interval assessment
Directory of Open Access Journals (Sweden)
Tynynyka A. N.
2017-06-01
In solving many statistical problems, one must choose, as precisely as possible, the distribution law of a random variable from an observed sample. This choice requires the construction of an interval series, so the problem arises of assigning an optimal number of intervals, and a number of formulas have been proposed for solving it. Which of these formulas solves the problem most accurately? In [9], this question was investigated using the Pearson criterion. This article describes the procedure and, on its basis, evaluates formulas available in the literature as well as newly proposed formulas using the entropy coefficient. A comparison is made with the previously published results of applying Pearson's goodness-of-fit criterion for this purpose. Differences in the accuracy estimates of the formulas are found, and the proposed new formulas for calculating the number of intervals showed the best results. Calculations compare the performance of the same formulas for sample data distributed according to the normal law and the Rayleigh law.
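The entropy-coefficient formulas proposed in the article are not reproduced here, but two of the classical rules that such comparisons start from are easy to state (a sketch; the names follow common usage):

```python
import math

def sturges(n):
    """Sturges' rule: k = 1 + log2(n) intervals, rounded up."""
    return math.ceil(1 + math.log2(n))

def rice(n):
    """Rice rule: k = 2 * n^(1/3), rounded up."""
    return math.ceil(2 * n ** (1.0 / 3.0))
```

The article's point is that such rules disagree (for n = 100, Sturges gives 8 intervals and Rice gives 10), so an external criterion is needed to rank them.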
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang
2017-06-01
Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
Transmission of electrons with flat passbands in finite superlattices
International Nuclear Information System (INIS)
Barajas-Aguilar, A H; Rodríguez-Magdaleno, K A; Martínez-Orozco, J C; Enciso-Muñoz, A; Contreras-Solorio, D A
2013-01-01
Using the transfer matrix method and the Ben Daniel-Duke equation for variable mass electrons propagation, we calculate the transmittance for symmetric finite superlattices where the width and the height of the potential barriers follow a linear dependence. The width and height of the barriers decreases from the center to the ends of the superlattice. The transmittance presents intervals of stopbands and quite flat passbands.
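The flavor of such calculations can be seen in the textbook single-barrier special case with constant mass (a sketch; the paper itself uses the Ben Daniel-Duke equation with position-dependent mass and a superlattice of many barriers):

```python
import math

HBAR2_OVER_2M = 3.81  # ħ²/2m in eV·Å² for a free electron (approximate)

def single_barrier_transmittance(E, V, a):
    """Transmission through one rectangular barrier of height V (eV)
    and width a (Å), for E < V; the standard constant-mass result."""
    k = math.sqrt(E / HBAR2_OVER_2M)             # wavenumber outside the barrier
    kappa = math.sqrt((V - E) / HBAR2_OVER_2M)   # decay constant inside
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + ((k * k + kappa * kappa) ** 2 * s * s)
                  / (4.0 * k * k * kappa * kappa))
```

In a superlattice, the corresponding 2×2 transfer matrices for each layer are multiplied, and passbands appear in the energy ranges where the total transmission stays near unity.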
Finite spatial volume approach to finite temperature field theory
International Nuclear Information System (INIS)
Weiss, Nathan
1981-01-01
A relativistic quantum field theory at finite temperature T = β⁻¹ is equivalent to the same field theory at zero temperature but with one spatial dimension of finite length β. This equivalence is discussed for scalars, for fermions, and for gauge theories. The relationship is checked for free field theory. The translation of correlation functions between the two formulations is described with special emphasis on the nonlocal order parameters of gauge theories. Possible applications are mentioned. (auth)
Automatic Construction of Finite Algebras
Institute of Scientific and Technical Information of China (English)
张健
1995-01-01
This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.
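Brute-force model generation for small equational theories fits in a few lines. A toy sketch (exhaustive search over operation tables, far less sophisticated than the tool described):

```python
from itertools import product

def finite_models(n, law):
    """Enumerate all binary operation tables on {0, ..., n-1}
    that satisfy the predicate law(op, n)."""
    models = []
    for flat in product(range(n), repeat=n * n):
        # unflatten into an n x n Cayley table
        table = [list(flat[i * n:(i + 1) * n]) for i in range(n)]
        op = lambda a, b: table[a][b]
        if law(op, n):
            models.append(table)
    return models

def idempotent_commutative(op, n):
    """Example equations: x*x = x and x*y = y*x."""
    return (all(op(a, a) == a for a in range(n)) and
            all(op(a, b) == op(b, a) for a in range(n) for b in range(n)))
```

Real model generators replace this n^(n²) enumeration with constraint propagation and symmetry breaking, but the specification, a set of equations interpreted over a finite carrier, is the same.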
Photon propagators at finite temperature
International Nuclear Information System (INIS)
Yee, J.H.
1982-07-01
We have used the real time formalism to compute the one-loop finite temperature corrections to the photon self energies in spinor and scalar QED. We show that, for a real photon, only the transverse components develop the temperature-dependent masses, while, for an external static electromagnetic field applied to the finite temperature system, only the static electric field is screened by thermal fluctuations. After showing how to compute systematically the imaginary parts of the finite temperature Green functions, we have attempted to give a microscopic interpretation of the imaginary parts of the self energies. (author)
Sound radiation from finite surfaces
DEFF Research Database (Denmark)
Brunskog, Jonas
2013-01-01
A method to account for the effect of finite size in the acoustic power radiation problem of planar surfaces using spatial windowing is developed. Cremer and Heckl present a very useful formula for the power radiated from a structure using the spatially Fourier-transformed velocity, which, combined with spatial windowing of plane waves, can be used to take the finite size into account. In the present paper, this is developed by means of a radiation impedance for finite surfaces, which is used instead of the radiation impedance for infinite surfaces. In this way, the spatial windowing is included.
Observations on finite quantum mechanics
International Nuclear Information System (INIS)
Balian, R.; Itzykson, C.
1986-01-01
We study the canonical transformations of quantum mechanics on a finite phase space. For simplicity we assume that the configuration variable takes an odd prime number 4K ± 1 of distinct values. We show that the canonical group is unitarily implemented. It admits a maximal abelian subgroup of order 4K, commuting with the finite Fourier transform F, a finite analogue of the harmonic oscillator group. This provides a natural construction of F^(1/K) and of an orthogonal basis of eigenstates of F.
Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J
1998-09-01
To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
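The dispersion measure used in the study is simple to compute from per-lead QT measurements. A minimal sketch with hypothetical per-lead values (the 430 ms and 80 ms cut-offs are the ones quoted above; the measurements themselves are invented for illustration):

```python
# Hypothetical QT intervals (ms) measured in the leads of one 12-lead ECG
qt_ms = {"I": 392, "II": 401, "III": 388, "aVR": 395, "aVL": 390, "aVF": 398,
         "V1": 384, "V2": 412, "V3": 418, "V4": 409, "V5": 402, "V6": 396}

qt_max = max(qt_ms.values())
qt_min = min(qt_ms.values())
qt_dispersion = qt_max - qt_min  # maximal difference between any two leads

prolonged_qt = qt_max >= 430            # cut-off used in the study
prolonged_dispersion = qt_dispersion >= 80
print(qt_dispersion, prolonged_qt, prolonged_dispersion)  # 34 False False
```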
Robust misinterpretation of confidence intervals.
Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan
2014-10-01
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students, all in the field of psychology, were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
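The one correct frequentist reading of a CI is a statement about the procedure under repeated sampling, not about any single realized interval. A short simulation illustrates it (the true mean, sample size, and the z-quantile approximation are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, reps = 10.0, 2.0, 50, 10_000
z = 1.96  # normal quantile used as an approximation for a 95% CI

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sigma, n)
    half = z * sample.std(ddof=1) / np.sqrt(n)
    m = sample.mean()
    covered += (m - half <= true_mean <= m + half)

coverage = covered / reps
# In repeated sampling, roughly 95% of the intervals contain the fixed true
# mean; no single realized interval has a 95% probability of containing it.
print(round(coverage, 3))
```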
Finite element computational fluid mechanics
International Nuclear Information System (INIS)
Baker, A.J.
1983-01-01
This book analyzes finite element theory as applied to computational fluid mechanics. It includes a chapter on using the heat conduction equation to expose the essence of finite element theory, including higher-order accuracy and convergence in a common knowledge framework. Another chapter generalizes the algorithm to extend application to the nonlinearity of the Navier-Stokes equations. Other chapters are concerned with the analysis of specific fluid mechanics problem classes, including theory and applications. Some of the topics covered include finite element theory for linear mechanics; potential flow; weighted residuals/Galerkin finite element theory; inviscid and convection dominated flows; boundary layers; parabolic three-dimensional flows; and viscous and rotational flows
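The book's pedagogical entry point, the heat conduction equation, can be illustrated with a minimal 1-D linear-element solver. This is a sketch of the method under standard assumptions (unit conductivity, homogeneous boundary conditions, midpoint-rule load), not code from the book:

```python
import numpy as np

# 1-D steady heat conduction -u'' = f on (0,1) with u(0) = u(1) = 0,
# discretized with linear (hat-function) finite elements.
n = 32                        # number of elements (arbitrary choice)
h = 1.0 / n
f = lambda x: 1.0             # uniform heat source

K = np.zeros((n - 1, n - 1))  # stiffness matrix over interior nodes
F = np.zeros(n - 1)           # load vector
for e in range(n):            # assemble 2x2 local element contributions
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    xm = (e + 0.5) * h
    fe = f(xm) * h / 2.0 * np.array([1.0, 1.0])   # midpoint-rule load
    for i_loc, i in enumerate((e - 1, e)):        # interior node indices
        if 0 <= i < n - 1:
            F[i] += fe[i_loc]
            for j_loc, j in enumerate((e - 1, e)):
                if 0 <= j < n - 1:
                    K[i, j] += ke[i_loc, j_loc]

u = np.linalg.solve(K, F)
# For f = 1 the exact solution is u(x) = x(1-x)/2, maximum 0.125 at x = 0.5;
# with linear elements the nodal values reproduce it exactly here.
x = np.linspace(h, 1 - h, n - 1)
assert np.max(np.abs(u - x * (1 - x) / 2)) < 1e-6
```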
Programming the finite element method
Smith, I M; Margetts, L
2013-01-01
Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method. This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, convection boundary c
Finite Size Scaling of Perceptron
Korutcheva, Elka; Tonchev, N.
2000-01-01
We study the first-order transition in the model of a simple perceptron with continuous weights and large, but finite, values of the inputs. Making the analogy with usual finite-size physical systems, we calculate the shift and the rounding exponents near the transition point. In the case of a general perceptron with a larger variety of inputs, the analysis only gives bounds for the exponents.
Incompleteness in the finite domain
Czech Academy of Sciences Publication Activity Database
Pudlák, Pavel
2017-01-01
Roč. 23, č. 4 (2017), s. 405-441 ISSN 1079-8986 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : finite domain Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.742, year: 2016 https://www.cambridge.org/core/journals/bulletin-of-symbolic-logic/article/incompleteness-in-the-finite-domain/D239B1761A73DCA534A4805A76D81C76
Symbolic computation with finite biquandles
Creel, Conrad; Nelson, Sam
2007-01-01
A method of computing a basis for the second Yang-Baxter cohomology of a finite biquandle with coefficients in Q and Z_p from a matrix presentation of the finite biquandle is described. We also describe a method for computing the Yang-Baxter cocycle invariants of an oriented knot or link represented as a signed Gauss code. We provide a URL for our Maple implementations of these algorithms.
Direct Interval Forecasting of Wind Power
DEFF Research Database (Denmark)
Wan, Can; Xu, Zhao; Pinson, Pierre
2013-01-01
This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...
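The two quantities the letter optimizes, coverage probability and sharpness, are easy to evaluate for any set of prediction intervals. A sketch with hypothetical normalized wind-power data (the PICP and mean-width definitions below are the standard ones, not necessarily the letter's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
actual = rng.uniform(0, 1, 200)              # hypothetical normalized wind power
lower = np.clip(actual - rng.uniform(0.05, 0.2, 200), 0, 1)
upper = np.clip(actual + rng.uniform(0.05, 0.2, 200), 0, 1)
# Corrupt a few intervals so coverage is not trivially 100%
lower[:10], upper[:10] = actual[:10] + 0.01, actual[:10] + 0.02

picp = np.mean((actual >= lower) & (actual <= upper))   # coverage probability
sharpness = np.mean(upper - lower)                      # mean interval width
print(picp)  # 0.95: 190 of the 200 intervals cover the observation
```

Good interval forecasts make picp at least the nominal level while keeping sharpness (width) small; the two pull in opposite directions, which is why the letter optimizes them jointly.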
A note on birth interval distributions
International Nuclear Information System (INIS)
Shrestha, G.
1989-08-01
A considerable amount of work has been done regarding the birth interval analysis in mathematical demography. This paper is prepared with the intention of reviewing some probability models related to inter-live-birth intervals proposed by different researchers. (author). 14 refs
International Nuclear Information System (INIS)
Gong Zhaohu; Wang Kan; Yao Dong
2011-01-01
Highlights: → We present a new Loading Pattern Optimization method - Interval Bound Algorithm (IBA). → IBA directly uses the reactivity of fuel assemblies and burnable poison. → IBA can optimize fuel assembly orientation in a coupled way. → Numerical experiment shows that IBA outperforms genetic algorithm and engineers. → We devise DDWF technique to deal with multiple objectives and constraints. - Abstract: In order to optimize the core loading pattern in Nuclear Power Plants, the paper presents a new optimization method - Interval Bound Algorithm (IBA). Similar to typical population-based algorithms, e.g. the genetic algorithm, IBA maintains a population of solutions and evolves them during the optimization process. IBA acquires the solution by statistical learning and sampling the control variable intervals of the population in each iteration. The control variables are the transforms of the reactivity of fuel assemblies or the worth of burnable poisons, which are the crucial heuristic information for loading pattern optimization problems. IBA can deal with the relationship between the dependent variables by defining the control variables. Based on the IBA algorithm, a parallel Loading Pattern Optimization code, named IBALPO, has been developed. To deal with multiple objectives and constraints, the Dynamic Discontinuous Weight Factors (DDWF) for the fitness function have been used in IBALPO. Finally, the code system has been used to solve a realistic reloading problem, and a better pattern has been obtained than those found by engineers and by a genetic algorithm, demonstrating the performance of the code.
Optimal Data Interval for Estimating Advertising Response
Gerard J. Tellis; Philip Hans Franses
2006-01-01
The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...
Embedding the Finite Sampling Process at a Rate
Shorack, Galen R.
1991-01-01
A huge body of if and only if theorems can be obtained based on certain strong embedding theorems for the partial sum process $\mathbb{S}_n$ and the uniform empirical and quantile processes $\mathbb{U}_n$ and $\mathbb{V}_n$. This embedding was accomplished in 1986 by M. Csorgo, S. Csorgo, L. Horvath and D. Mason. Their embedding is beautifully formulated so that many necessary and sufficient type results can be established using it. It is worthwhile to have an accessible proof. Indeed, these ...
Modified stochastic fragmentation of an interval as an ageing process
Fortin, Jean-Yves
2018-02-01
We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of 'quakes' and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^{-1/2} when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
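The mechanism is straightforward to simulate: each new cut removes every older cut to its right, so the surviving cuts are the running minima of the cut sequence and the fragment count follows record statistics. A sketch (the function name and parameters are illustrative, not from the paper):

```python
import random
import statistics

def fragment_count(t, seed=None):
    """Number of fragments of [0, 1] after t modified-fragmentation steps.

    Each step cuts at a uniform random point u and replaces everything to
    the right of u by a single fragment [u, 1], preserving total length.
    """
    rng = random.Random(seed)
    cuts = []          # sorted surviving cut positions; fragments = cuts + 1
    for _ in range(t):
        u = rng.random()
        cuts = [c for c in cuts if c < u]  # the right part is regenerated
        cuts.append(u)
    return len(cuts) + 1

# The mean fragment number grows slowly, roughly like log(t) + const,
# as expected for record dynamics.
means = [statistics.mean(fragment_count(t, seed=s) for s in range(500))
         for t in (10, 100, 1000)]
assert means[0] < means[1] < means[2]
```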
An Adequate First Order Logic of Intervals
DEFF Research Database (Denmark)
Chaochen, Zhou; Hansen, Michael Reichhardt
1998-01-01
This paper introduces left and right neighbourhoods as primitive interval modalities to define other unary and binary modalities of intervals in a first order logic with interval length. A complete first order logic for the neighbourhood modalities is presented. It is demonstrated how the logic can...... support formal specification and verification of liveness and fairness, and also of various notions of real analysis....
Consistency and refinement for Interval Markov Chains
DEFF Research Database (Denmark)
Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel
2012-01-01
Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...
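In the simplest reading of the semantics, a concrete Markov chain satisfies an IMC specification if each of its transition probabilities lies within the corresponding interval. A toy sketch (the two-state specification is hypothetical, and state relabelling and refinement relations are deliberately ignored):

```python
# Interval Markov Chain: each transition carries a probability interval.
# A concrete Markov chain implements the specification if every transition
# probability lies inside the corresponding interval (a simplified view
# of the semantics, ignoring state relabelling).

imc = {  # imc[s][t] = (low, high); hypothetical 2-state specification
    0: {0: (0.2, 0.6), 1: (0.4, 0.8)},
    1: {0: (0.0, 1.0), 1: (0.0, 1.0)},
}

def satisfies(chain, spec):
    return all(
        spec[s][t][0] <= p <= spec[s][t][1]
        for s, row in chain.items() for t, p in row.items()
    )

mc_ok  = {0: {0: 0.5, 1: 0.5}, 1: {0: 1.0, 1: 0.0}}
mc_bad = {0: {0: 0.1, 1: 0.9}, 1: {0: 1.0, 1: 0.0}}
print(satisfies(mc_ok, imc), satisfies(mc_bad, imc))  # True False
```

The standard semantics assigns to the IMC the (generally infinite) set of all such implementations, which is what makes consistency and refinement between specifications nontrivial.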
Multivariate interval-censored survival data
DEFF Research Database (Denmark)
Hougaard, Philip
2014-01-01
Interval censoring means that an event time is only known to lie in an interval (L,R], with L the last examination time before the event, and R the first after. In the univariate case, parametric models are easily fitted, whereas for non-parametric models, the mass is placed on some intervals, de...
FINELM: a multigroup finite element diffusion code
International Nuclear Information System (INIS)
Higgs, C.E.; Davierwalla, D.M.
1981-06-01
FINELM is a FORTRAN IV program to solve the Neutron Diffusion Equation in X-Y, R-Z, R-theta, X-Y-Z and R-theta-Z geometries using the method of Finite Elements. Lagrangian elements of linear or higher degree to approximate the spatial flux distribution have been provided. The method of dissections, coarse mesh rebalancing and Chebyshev acceleration techniques are available. Simple user defined input is achieved through extensive input subroutines. The input preparation is described followed by a program structure description. Sample test cases are provided. (Auth.)
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Finite-Time Attractivity for Diagonally Dominant Systems with Off-Diagonal Delays
Directory of Open Access Journals (Sweden)
T. S. Doan
2012-01-01
We introduce a notion of attractivity for delay equations which are defined on bounded time intervals. Our main result shows that linear delay equations are finite-time attractive, provided that the delay is only in the coupling terms between different components, and the system is diagonally dominant. We apply this result to a nonlinear Lotka-Volterra system and show that the delay is harmless and does not destroy finite-time attractivity.
Hematology reference intervals for neonatal Holstein calves.
Panousis, Nikolaos; Siachos, Nektarios; Kitkas, Georgios; Kalaitzakis, Emmanouil; Kritsepi-Konstantinou, Maria; Valergakis, Georgios E
2018-01-09
Data regarding hematologic reference intervals (RI) for neonatal calves have not been published yet. The aims of this study were: a) to establish hematology RIs for neonatal Holstein calves, b) to compare them with the RIs for lactating cows, and c) to investigate the relationship of age and gender with the hematologic profile of calves. Two-hundred and fifty-four clinically healthy Holstein calves (1-9 days old, from 30 farms) and 82 healthy Holstein cows (between 30 and 150 days in milk, from 10 farms) were blood sampled once for a complete blood count evaluation, using the ADVIA 120 hematology analyzer. An additional blood sample was collected from each calf for serum total protein concentration measurement. RIs and age-related RIs were calculated with the Reference Value Advisor freeware. Comparisons between calves and cows and between male and female calves were performed with t-test or Mann-Whitney test. Red blood cell count (RBC), white blood cell count (WBC), neutrophil, lymphocyte and platelet counts in calves were higher, while mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH) and mean corpuscular hemoglobin concentration (MCHC) were lower than in cows. Lymphocyte and platelet counts showed a notable increase with age. Finally, female calves had higher RBC, hematocrit and hemoglobin concentration than males. Age-specific RIs should be used for the interpretation of the complete blood count in Holstein calves.
Finiteness of quantum field theories and supersymmetry
International Nuclear Information System (INIS)
Lucha, W.; Neufeld, H.
1986-01-01
We study the consequences of finiteness for a general renormalizable quantum field theory by analysing the finiteness conditions resulting from the requirement of absence of divergent contributions to the renormalizations of the parameters of an arbitrary gauge theory. In all cases considered, the well-known two-loop finite supersymmetric theories prove to be the unique solution of the finiteness criterion. (Author)
Toward finite quantum field theories
International Nuclear Information System (INIS)
Rajpoot, S.; Taylor, J.G.
1986-01-01
The properties that make the N=4 super Yang-Mills theory free from ultraviolet divergences are (i) a universal coupling for gauge and matter interactions, (ii) anomaly-free representations, (iii) no charge renormalization, and (iv) if masses are explicitly introduced into the theory, then these are required to satisfy the mass-squared supertrace sum rule Σ_{s=0,1/2} (-1)^{2s+1} (2s+1) M_s^2 = 0. Finite N=2 theories are found to satisfy the above criteria. The missing members in this class of field theories are finite theories consisting of N=1 superfields. These theories are discussed in the light of the above finiteness properties. In particular, the representations of all simple classical groups satisfying the anomaly-free and no-charge renormalization conditions for finite N=1 field theories are discussed. A consequence of these restrictions on the allowed representations is that an N=1 finite SU(5)-based model of strong and electroweak interactions can contain at most five conventional families of quarks and leptons, a constraint almost compatible with the one deduced from cosmological arguments. (author)
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
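The contrast between Wald-type and profile-likelihood CIs is easiest to see in a one-parameter case. A sketch for a binomial proportion (not an IRT model; the grid inversion and the chi-square(1) cut-off 3.841 are the standard likelihood-ratio construction):

```python
import numpy as np

x, n = 3, 20                      # hypothetical data: 3 successes in 20 trials
phat = x / n

def loglik(p):
    return x * np.log(p) + (n - x) * np.log(1 - p)

# Wald CI: symmetric, based on the standard error at the MLE
se = np.sqrt(phat * (1 - phat) / n)
wald = (phat - 1.96 * se, phat + 1.96 * se)

# Profile-likelihood CI: all p whose likelihood-ratio statistic stays below
# the 95% chi-square(1) quantile (3.841), found here on a fine grid
grid = np.linspace(1e-6, 1 - 1e-6, 200_000)
inside = grid[2 * (loglik(phat) - loglik(grid)) <= 3.841]
pl = (inside[0], inside[-1])

# The likelihood-based interval is asymmetric and stays inside (0, 1),
# unlike the Wald interval, which extends below 0 for small x/n
print(wald, pl)
```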
Using finite mixture models in thermal-hydraulics system code uncertainty analysis
Energy Technology Data Exchange (ETDEWEB)
Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)
2013-09-15
Highlights: • Best estimate codes simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distribution is performed using finite mixture models. • Two methods to reconstruct output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA impose high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep computational cost as low as possible, there has been a recent shift toward developing metamodels (model of model), or surrogate models, that approximate or emulate complex computer codes. In this way, there exist different techniques to reconstruct the probability distribution using the information provided by a sample of values as, for example, the finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
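The core of the approach, fitting a finite mixture to multimodal output samples, can be sketched with a plain EM loop for a two-component 1-D Gaussian mixture (the data below are synthetic stand-ins for two operating regimes of a safety variable, not RELAP-5 results):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic bimodal sample standing in for a code output variable
data = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(6.0, 0.8, 600)])

# EM for a two-component 1-D Gaussian mixture (a minimal sketch)
w = np.array([0.5, 0.5])
mu = np.array([data.min(), data.max()])      # crude k-means-style init
var = np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibilities of each component for each point
    dens = w * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) \
             / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and variances
    nk = r.sum(axis=0)
    w, mu = nk / len(data), (r * data[:, None]).sum(axis=0) / nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.round(mu, 1))  # component means recovered near 0 and 6
```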
Comparing confidence intervals for Goodman and Kruskal's gamma coefficient
van der Ark, L.A.; van Aert, R.C.M.
2015-01-01
This study was motivated by the question of which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the
Bootstrap confidence intervals for three-way methods
Kiers, Henk A.L.
Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special
Hobolth, Asger; Stone, Eric A
2009-09-01
Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
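Of the three approaches, rejection sampling is the simplest to sketch: simulate the CTMC forward from the start state and keep only the paths that land in the required end state at time T. (The paper's "modified" variant additionally forces the first jump when the endpoints differ; the rate matrix below is hypothetical.)

```python
import random

# Plain rejection sampling of an endpoint-conditioned CTMC path.
Q = [[-1.0, 1.0], [0.5, -0.5]]   # hypothetical 2-state rate matrix

def sample_path(a, b, T, rng):
    """Sample (times, states) of a CTMC from state a at time 0 to b at T."""
    while True:  # retry until the forward simulation ends in b
        t, s = 0.0, a
        times, states = [0.0], [a]
        while True:
            t += rng.expovariate(-Q[s][s])   # exponential holding time
            if t >= T:
                break
            # choose the next state proportional to the off-diagonal rates
            weights = [Q[s][j] if j != s else 0.0 for j in range(len(Q))]
            s = rng.choices(range(len(Q)), weights=weights)[0]
            times.append(t)
            states.append(s)
        if s == b:
            return times, states

rng = random.Random(3)
times, states = sample_path(0, 1, 1.0, rng)
assert states[0] == 0 and states[-1] == 1
```

As the abstract notes, this is efficient only when the conditioned endpoint is reasonably likely under forward simulation; direct sampling and uniformization win in other regimes.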
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Reference intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP evaluator software program, which is based on the CLSI/IFCC C28-A guideline, and defines the reference interval as the 95% central range. Method specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined. Gender specific or combined gender intervals were adapted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adapt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
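The 95% central range used here reduces to two percentiles of the reference sample. A minimal sketch of the nonparametric version with synthetic analyte values (the distribution parameters are invented; dedicated tools such as the one named above also handle outlier exclusion and partitioning):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic analyte results from healthy reference individuals
results = rng.normal(5.0, 0.6, 300)

# Nonparametric reference interval in the CLSI/IFCC style: the central 95%
# range, i.e. the 2.5th and 97.5th percentiles of the reference sample
low, high = np.percentile(results, [2.5, 97.5])
print(round(low, 1), round(high, 1))
```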
On characters of finite groups
Broué, Michel
2017-01-01
This book explores the classical and beautiful character theory of finite groups. It does so by using some rudiments of the language of categories. Originally emerging from two courses offered at Peking University (PKU), primarily for third-year students, it is now better suited for graduate courses, and provides broader coverage than books that focus almost exclusively on groups. The book presents the basic tools, notions and theorems of character theory (including a new treatment of the control of fusion and isometries), and introduces readers to the categorical language at several levels. It includes and proves the major results on characteristic zero representations without any assumptions about the base field. The book includes a dedicated chapter on graded representations and applications of polynomial invariants of finite groups, and its closing chapter addresses the more recent notion of the Drinfeld double of a finite group and the corresponding representation of GL_2(Z).
Finite and profinite quantum systems
Vourdas, Apostolos
2017-01-01
This monograph provides an introduction to finite quantum systems, a field at the interface between quantum information and number theory, with applications in quantum computation and condensed matter physics. The first major part of this monograph studies the so-called `qubits' and `qudits', systems with periodic finite lattice as position space. It also discusses the so-called mutually unbiased bases, which have applications in quantum information and quantum cryptography. Quantum logic and its applications to quantum gates is also studied. The second part studies finite quantum systems, where the position takes values in a Galois field. This combines quantum mechanics with Galois theory. The third part extends the discussion to quantum systems with variables in profinite groups, considering the limit where the dimension of the system becomes very large. It uses the concepts of inverse and direct limit and studies quantum mechanics on p-adic numbers. Applications of the formalism include quantum optics and ...
Preservation theorems on finite structures
International Nuclear Information System (INIS)
Hebert, M.
1994-09-01
This paper concerns classical Preservation results applied to finite structures. We consider binary relations for which a strong form of preservation theorem (called strong interpolation) exists in the usual case. This includes most classical cases: embeddings, extensions, homomorphisms into and onto, sandwiches, etc. We establish necessary and sufficient syntactic conditions for the preservation theorems for sentences and for theories to hold in the restricted context of finite structures. We deduce that for all relations above, the restricted theorem for theories hold provided the language is finite. For the sentences the restricted version fails in most cases; in fact the "homomorphism into" case seems to be the only possible one, but the efforts to show that have failed. We hope our results may help to solve this frustrating problem; in the meantime, they are used to put a lower bound on the level of complexity of potential counterexamples. (author). 8 refs
Encoding of temporal intervals in the rat hindlimb sensorimotor cortex
Directory of Open Access Journals (Sweden)
Eric Bean Knudsen
2012-09-01
The gradual buildup of neural activity over experimentally imposed delay periods, termed climbing activity, is well documented and is a potential mechanism by which interval time is encoded by distributed cortico-thalamico-striatal networks in the brain. Additionally, when multiple delay periods are incorporated, this activity has been shown to scale its rate of climbing proportional to the delay period. However, it remains unclear whether these patterns of activity occur within areas of motor cortex dedicated to hindlimb movement. Moreover, the effects of behavioral training (e.g. motor tasks under different reward conditions but with similar behavioral output) are not well addressed. To address this, we recorded activity from the hindlimb sensorimotor cortex (HLSMC) of two groups of rats performing a skilled hindlimb press task. In one group, rats were trained only to make a valid press within a finite window after cue presentation for reward (non-interval trained, nIT; n=5), while rats in the second group were given duration-specific cues in which they had to make presses of either short or long duration to receive reward (interval trained, IT; n=6). Using PETH analyses, we show that cells recorded from both groups showed climbing activity during the task in similar proportions (35% IT and 47% nIT); however, only climbing activity from IT rats was temporally scaled to press duration. Furthermore, using single trial decoding techniques (Wiener filter), we show that press duration can be inferred using climbing activity from IT animals (R=0.61) significantly better than from nIT animals (R=0.507; p<0.01), suggesting IT animals encode press duration through temporally scaled climbing activity. Thus, if temporal intervals are behaviorally relevant, then the activity of climbing neurons is temporally scaled to encode the passage of time.
Finite element analysis of a finite-strain plasticity problem
International Nuclear Information System (INIS)
Crose, J.G.; Fong, H.H.
1984-01-01
A finite-strain plasticity analysis was performed of an engraving process in a plastic rotating band during the firing of a gun projectile. The aim was to verify a nonlinear feature of the NIFDI/RB code: plastic large deformation analysis of nearly incompressible materials using a deformation theory of plasticity approach and a total Lagrangian scheme. (orig.)
Nonlinear Finite Strain Consolidation Analysis with Secondary Consolidation Behavior
Directory of Open Access Journals (Sweden)
Jieqing Huang
2014-01-01
This paper aims to analyze nonlinear finite strain consolidation with secondary consolidation behavior. On the basis of some assumptions about the secondary consolidation behavior, the continuity equation of pore water in Gibson’s consolidation theory is modified. Taking the nonlinear compressibility and nonlinear permeability of soils into consideration, the governing equation for finite strain consolidation analysis is derived. Based on the experimental data of Hangzhou soft clay samples, the new governing equation is solved with the finite element method. Afterwards, the calculation results of this new method and other two methods are compared. It can be found that Gibson’s method may underestimate the excess pore water pressure during primary consolidation. The new method which takes the secondary consolidation behavior, the nonlinear compressibility, and nonlinear permeability of soils into consideration can precisely estimate the settlement rate and the final settlement of Hangzhou soft clay sample.
FINITE ELEMENT ANALYSIS OF STRUCTURES
Directory of Open Access Journals (Sweden)
PECINGINA OLIMPIA-MIOARA
2015-05-01
The finite element method is applied when analytical solutions cannot be used for deeper study of static, dynamic or other types of loading at different points of a structure. In practice it is necessary to know the behavior of the structure, or of certain component parts of a machine, under the influence of static and dynamic factors. Applying the finite element method to the optimization of components leads to economic gains and increases the reliability and durability of the parts studied, and thus of the machine itself.
Finite elements of nonlinear continua
Oden, John Tinsley
1972-01-01
Geared toward undergraduate and graduate students, this text extends applications of the finite element method from linear problems in elastic structures to a broad class of practical, nonlinear problems in continuum mechanics. It treats both theory and applications from a general and unifying point of view. The text reviews the thermomechanical principles of continuous media and the properties of the finite element method, and then brings them together to produce discrete physical models of nonlinear continua. The mathematical properties of these models are analyzed, along with the numerical s
Finite connectivity attractor neural networks
International Nuclear Information System (INIS)
Wemmenhove, B; Coolen, A C C
2003-01-01
We study a family of diluted attractor neural networks with a finite average number of (symmetric) connections per neuron. As in finite connectivity spin glasses, their equilibrium properties are described by order parameter functions, for which we derive an integral equation in the replica symmetric approximation. A bifurcation analysis of this equation reveals the locations of the paramagnetic to recall and paramagnetic to spin-glass transition lines in the phase diagram. The line separating the retrieval phase from the spin-glass phase is calculated at zero temperature. All phase transitions are found to be continuous.
Probability Distribution for Flowing Interval Spacing
International Nuclear Information System (INIS)
Kuzio, S.
2001-01-01
The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be
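The midpoint-to-midpoint spacing computation described above can be sketched as follows. The depth data and the lognormal fit are illustrative assumptions, not values or a distributional choice taken from the cited analysis:

```python
import numpy as np

# Hypothetical borehole flow-survey data: (top, bottom) depths in metres of
# zones identified as flowing intervals. Spacing is measured between the
# midpoints of successive flowing intervals.
flowing_intervals = [(120.0, 128.0), (161.0, 166.0), (240.0, 252.0), (301.0, 305.0)]

midpoints = np.array([(top + bottom) / 2.0 for top, bottom in flowing_intervals])
spacings = np.diff(np.sort(midpoints))  # midpoint-to-midpoint spacings

# Lognormal fit by maximum likelihood (mean/std of log-spacings), a common
# assumption for spacing distributions -- not necessarily the one developed
# in the cited analysis.
mu, sigma = np.log(spacings).mean(), np.log(spacings).std(ddof=1)
print(spacings)
print(round(mu, 3), round(sigma, 3))
```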
Event- and interval-based measurement of stuttering: a review.
Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret
2015-01-01
Event- and interval-based measurements are two different ways of computing frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors in interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). The aims were to provide a review related to the reproducibility of event-based and time-interval measurement, to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement, and, in addition, to determine if it is possible to quantify the agreement between the two methodologies. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed values of inter- and intra-judge agreement greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values were beyond the references for high reproducibility values for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups when compared with non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. A duration of 5 s for an interval appears to be
Correct Bayesian and frequentist intervals are similar
International Nuclear Information System (INIS)
Atwood, C.L.
1986-01-01
This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development. Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
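The near-agreement described above can be seen directly for sparse binomial data. The sketch below (using SciPy, with illustrative counts) compares a frequentist Clopper-Pearson confidence interval with a Bayesian equal-tailed credible interval under a Jeffreys prior; consistent with the abstract, the discrete-data frequentist interval comes out somewhat longer:

```python
from scipy import stats

# Sparse binomial data: k successes in n trials (illustrative counts).
k, n = 3, 20

# Frequentist: Clopper-Pearson "exact" 95% confidence interval,
# expressed via beta quantiles.
lo_f = stats.beta.ppf(0.025, k, n - k + 1)
hi_f = stats.beta.ppf(0.975, k + 1, n - k)

# Bayesian: 95% equal-tailed credible interval under a Jeffreys
# Beta(1/2, 1/2) prior, giving a Beta(k + 1/2, n - k + 1/2) posterior.
lo_b = stats.beta.ppf(0.025, k + 0.5, n - k + 0.5)
hi_b = stats.beta.ppf(0.975, k + 0.5, n - k + 0.5)

print(f"frequentist: ({lo_f:.3f}, {hi_f:.3f})")
print(f"Bayesian:    ({lo_b:.3f}, {hi_b:.3f})")
```

The two intervals are close, and the Clopper-Pearson interval strictly contains the Jeffreys interval here, matching the paper's observation about discrete data.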
Directory of Open Access Journals (Sweden)
Anjan Mukherjee
2016-08-01
In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS for short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topologies. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.
Adeli, Khosrow; Higgins, Victoria; Seccombe, David; Collier, Christine P; Balion, Cynthia M; Cembrowski, George; Venner, Allison A; Shaw, Julie
2017-11-01
Reference intervals are widely used decision-making tools in laboratory medicine, serving as health-associated standards to interpret laboratory test results. Numerous studies have shown wide variation in reference intervals, even between laboratories using assays from the same manufacturer. Lack of consistency in either sample measurement or reference intervals across laboratories challenges the expectation of standardized patient care regardless of testing location. Here, we present data from a national survey conducted by the Canadian Society of Clinical Chemists (CSCC) Reference Interval Harmonization (hRI) Working Group that examines variation in laboratory reference sample measurements, as well as pediatric and adult reference intervals currently used in clinical practice across Canada. Data on reference intervals currently used by 37 laboratories were collected through a national survey to examine the variation in reference intervals for seven common laboratory tests. Additionally, 40 clinical laboratories participated in a baseline assessment by measuring six analytes in a reference sample. Of the seven analytes examined, alanine aminotransferase (ALT), alkaline phosphatase (ALP), and creatinine reference intervals were most variable. As expected, reference interval variation was more substantial in the pediatric population and varied between laboratories using the same instrumentation. Reference sample results differed between laboratories, particularly for ALT and free thyroxine (FT4). Reference interval variation was greater than test result variation for the majority of analytes. It is evident that there is a critical lack of harmonization in laboratory reference intervals, particularly for the pediatric population. Furthermore, the observed variation in reference intervals across instruments cannot be explained by the bias between the results obtained on instruments by different manufacturers. Copyright © 2017 The Canadian Society of Clinical Chemists
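One likely source of the variation documented above is that each laboratory derives its own reference interval from its own reference sample. A common derivation, sketched below, is the nonparametric central 95% of a healthy reference sample; the simulated ALT-like values and their distribution parameters are illustrative, not data from the CSCC survey:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ALT results (U/L) from 240 healthy reference subjects.
alt = rng.lognormal(mean=3.0, sigma=0.35, size=240)

# Nonparametric reference interval: the central 95% of the reference sample,
# i.e. the 2.5th and 97.5th percentiles (the approach recommended by CLSI
# EP28-A3c when at least 120 reference subjects are available).
lower, upper = np.percentile(alt, [2.5, 97.5])
print(f"reference interval: {lower:.1f} - {upper:.1f} U/L")
```

Rerunning this with a different reference sample (or a different analyzer bias) shifts the limits, which is exactly the laboratory-to-laboratory variation the survey measures.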
Group-invariant finite Fourier transforms
International Nuclear Information System (INIS)
Shenefelt, M.H.
1988-01-01
The computation of the finite Fourier transform of functions is one of the most used computations in crystallography. Since the Fourier transform involved is 3-dimensional, the size of the computation becomes very large even for relatively few sample points along each edge. In this thesis, a family of algorithms is presented that reduces the computation of the Fourier transform of functions respecting the symmetries. Some properties of these algorithms are: (1) the algorithms make full use of the group of symmetries of a crystal; (2) the algorithms can be factored and combined according to the prime factorization of the number of points in the sample space; (3) the algorithms are organized into a family using the group structure of the crystallographic groups to make iterative procedures possible
Conditional prediction intervals of wind power generation
DEFF Research Database (Denmark)
Pinson, Pierre; Kariniotakis, Georges
2010-01-01
A generic method for providing prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform...... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm...... to the case of a large number of wind farms in Europe and Australia among others is finally discussed....
Differential equations and finite groups
Put, Marius van der; Ulmer, Felix
2000-01-01
The classical solution of the Riemann-Hilbert problem attaches to a given representation of the fundamental group a regular singular linear differential equation. We present a method to compute this differential equation in the case of a representation with finite image. The approach uses Galois
Symmetric relations of finite negativity
Kaltenbaeck, M.; Winkler, H.; Woracek, H.; Forster, KH; Jonas, P; Langer, H
2006-01-01
We construct and investigate a space which is related to a symmetric linear relation S of finite negativity on an almost Pontryagin space. This space is the indefinite generalization of the completion of dom S with respect to (S.,.) for a strictly positive S on a Hilbert space.
International Nuclear Information System (INIS)
Bovier, A.; Lueling, M.; Wyler, D.
1980-12-01
We present a new class of finite subgroups of SU(3) of the form Z_m ⋊ Z_n (semidirect product). We also apply the methods used to investigate semidirect products to the known SU(3) subgroups Δ(3n²) and Δ(6n²) and give analytic formulae for representations (characters) and Clebsch-Gordan coefficients. (orig.)
On symmetric pyramidal finite elements
Czech Academy of Sciences Publication Activity Database
Liu, L.; Davies, K. B.; Yuan, K.; Křížek, Michal
2004-01-01
Roč. 11, 1-2 (2004), s. 213-227 ISSN 1492-8760 R&D Projects: GA AV ČR IAA1019201 Institutional research plan: CEZ:AV0Z1019905 Keywords : mesh generation * finite element method * composite elements Subject RIV: BA - General Mathematics Impact factor: 0.108, year: 2004
Finite length Taylor Couette flow
Streett, C. L.; Hussaini, M. Y.
1987-01-01
Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to the two-cell/one-cell exchange process and are compared with recent experiments.
Finite-temperature confinement transitions
International Nuclear Information System (INIS)
Svetitsky, B.
1984-01-01
The formalism of lattice gauge theory at finite temperature is introduced. The framework of universality predictions for critical behavior is outlined, and recent analytic work in this direction is reviewed. New Monte Carlo information for the SU(4) theory is presented, and possible results of the inclusion of fermions in the SU(3) theory are listed.
Ward identities at finite temperature
International Nuclear Information System (INIS)
DOlivo, J.C.; Torres, M.; Tututi, E.
1996-01-01
The Ward identities for QED at finite temperature are derived using the functional real-time formalism. They are verified by an explicit one-loop calculation. An effective causal vertex is constructed which satisfies the Ward identity with the associated retarded self-energy. copyright 1996 American Institute of Physics
Finite-Temperature Higgs Potentials
International Nuclear Information System (INIS)
Dolgopolov, M.V.; Gurskaya, A.V.; Rykova, E.N.
2016-01-01
In the present article we give a short description of the "Finite-Temperature Higgs Potentials" program for calculating loop integrals at vanishing external momenta, with applications to the reconstruction of extended Higgs potentials. Here we collect the analytic forms of the relevant loop integrals for our work on reconstructing the effective Higgs potential parameters in extended models (MSSM, NMSSM, etc.)
Voelkle, Manuel C; Oud, Johan H L
2013-02-01
When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.
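The core point above, that the discrete-time parameters implied by a continuous-time model depend on the length of the sampling interval, can be sketched for a first-order process. The drift and diffusion values below are illustrative, and this is only the exact-discretisation identity, not the authors' structural equation estimation procedure:

```python
import numpy as np

a = -0.5  # drift (auto-effect) of a continuous-time process dx = a*x dt + dW
q = 1.0   # diffusion (noise variance rate)

def discrete_params(dt, a=a, q=q):
    """Exact discretisation of the continuous-time model over an interval dt:
    x[t+dt] = phi * x[t] + e,  with e ~ N(0, s2)."""
    phi = np.exp(a * dt)
    s2 = q * (np.exp(2 * a * dt) - 1) / (2 * a)
    return phi, s2

# The implied discrete-time AR coefficient shrinks as the sampling
# interval grows -- so unequal intervals imply unequal AR coefficients,
# which a single discrete-time model cannot represent.
for dt in (0.5, 1.0, 2.0):
    phi, s2 = discrete_params(dt)
    print(f"dt={dt}: phi={phi:.3f}, innovation var={s2:.3f}")
```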
Wild bootstrapping in finite populations with auxiliary information
R. Helmers (Roelof); M.H. Wegkamp
1995-01-01
Consider a finite population $u$, which can be viewed as a realization of a superpopulation model. A simple ratio model (linear regression, without intercept) with heteroscedastic errors is supposed to have generated $u$. A random sample is drawn without replacement from $u$. In this
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
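A minimal version of the procedure described above, a residual bootstrap around a nonparametric smoother with the interval read off the bootstrap percentiles, might look as follows. The Nadaraya-Watson smoother and the simulated data are illustrative stand-ins; the paper's construction and its asymptotic guarantees are more general:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.3, size=x.size)

def smooth(x_train, y_train, x0, h=0.5):
    """Nadaraya-Watson kernel regression estimate at x0 (one simple
    nonparametric regressor; not the only choice the method admits)."""
    w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

x0 = 5.0
fit = np.array([smooth(x, y, xi) for xi in x])
resid = y - fit

# Residual bootstrap: refit on resampled data, add a resampled residual,
# and read the prediction interval off the percentiles.
B = 500
preds = np.empty(B)
for b in range(B):
    idx = rng.integers(0, x.size, x.size)
    y_star = fit + resid[idx]                    # bootstrap sample around the fit
    preds[b] = smooth(x, y_star, x0) + rng.choice(resid)

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% prediction interval at x0={x0}: ({lo:.2f}, {hi:.2f})")
# An observed output far outside (lo, hi) would be flagged as anomalous.
```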
The dark side of Interval Temporal Logic: sharpening the undecidability border
DEFF Research Database (Denmark)
Bresolin, Davide; Monica, Dario Della; Goranko, Valentin
2011-01-01
on the class of models (in our case, the class of interval structures) in which it is interpreted. In this paper, we have identified several new minimal undecidable logics amongst the fragments of Halpern-Shoham logic HS, including the logic of the overlaps relation, over the classes of all and finite linear...... orders, as well as the logic of the meet and subinterval relations, over the class of dense linear orders. Together with previous undecidability results, this work contributes to delineating the border of the dark side of interval temporal logics quite sharply....
Directory of Open Access Journals (Sweden)
Dominic Beaulieu-Prévost
2006-03-01
For the last 50 years of research in the quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on rejection or non-rejection of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
INTERVALS OF ACTIVE PLAY AND BREAK IN BASKETBALL GAMES
Directory of Open Access Journals (Sweden)
Pavle Rubin
2010-09-01
The problem of the research arises from the need to decompose a basketball game. The aim was to determine the intervals of active play ("live ball", a term defined by the rules) and break ("dead ball", a term defined by the rules) by analyzing basketball games. In order to obtain the relevant information, basketball games from five different competitions (at the top level of quality) were analyzed. The sample consists of seven games played in the 2006/2007 season: the NCAA Play-Off final game, the Adriatic League finals, the ULEB Cup final game, the Euroleague (2 games) and the NBA league (2 games). The most important information gained by this research is that the average interval of active play lasts approximately 47 seconds, while the average break interval lasts approximately 57 seconds. This information is significant for coaches and should be used in planning the training process.
Interpretation of Confidence Interval Facing the Conflict
Andrade, Luisa; Fernández, Felipe
2016-01-01
As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…
Interval logic. Proof theory and theorem proving
DEFF Research Database (Denmark)
Rasmussen, Thomas Marthedal
2002-01-01
of a direction of an interval, and present a sound and complete Hilbert proof system for it. Because of its generality, SIL can conveniently act as a general formalism in which other interval logics can be encoded. We develop proof theory for SIL including both a sequent calculus system and a labelled natural...
Risk factors for QTc interval prolongation
Heemskerk, Charlotte P.M.; Pereboom, Marieke; van Stralen, Karlijn; Berger, Florine A.; van den Bemt, Patricia M.L.A.; Kuijper, Aaf F.M.; van der Hoeven, Ruud T M; Mantel-Teeuwisse, Aukje K.; Becker, Matthijs L
2018-01-01
Purpose: Prolongation of the QTc interval may result in Torsade de Pointes, a ventricular arrhythmia. Numerous risk factors for QTc interval prolongation have been described, including the use of certain drugs. In clinical practice, there is much debate about the management of the risks involved. In
Interval Forecast for Smooth Transition Autoregressive Model ...
African Journals Online (AJOL)
In this paper, we propose a simple method for constructing an interval forecast for the smooth transition autoregressive (STAR) model. This interval forecast is based on bootstrapping the residual error of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) values. This new ...
Confidence Interval Approximation For Treatment Variance In ...
African Journals Online (AJOL)
In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...
New interval forecast for stationary autoregressive models ...
African Journals Online (AJOL)
In this paper, we propose a new forecast interval for stationary autoregressive, AR(p), models using the Akaike information criterion (AIC) function. Ordinarily, the AIC function is used to determine the order of an AR(p) process. In this study, however, the AIC forecast interval compared favorably with the theoretical forecast ...
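The bootstrap construction described in these forecasting papers can be sketched for a simple AR(1); the AIC-based selection step, which is the papers' contribution, is omitted here, and the series is simulated:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) series: y[t] = 0.7*y[t-1] + e[t].
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()

# Fit AR(1) by least squares and centre the residuals.
phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi * y[:-1]
resid -= resid.mean()

# Residual bootstrap of the h-step-ahead forecast: simulate forward from
# the last observation, drawing innovations from the residuals.
h, B = 3, 1000
paths = np.empty(B)
for b in range(B):
    val = y[-1]
    for _ in range(h):
        val = phi * val + rng.choice(resid)
    paths[b] = val

lo, hi = np.percentile(paths, [2.5, 97.5])
print(f"{h}-step 95% forecast interval: ({lo:.2f}, {hi:.2f})")
```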
Introduction to finite temperature and finite density QCD
International Nuclear Information System (INIS)
Kitazawa, Masakiyo
2014-01-01
It has been pointed out that QCD (Quantum Chromodynamics) in a medium at finite temperature and density shows a number of phenomena similar to those of solid state physics, e.g. phase transitions. In the past ten years, very high temperature and density matter came to be observed experimentally in heavy ion collisions. At the same time, numerical QCD analysis at finite temperature and density attained a quantitative level of analysis, owing to the remarkable progress of computers. This summer school lecture sets out to give not only recent results, but also the spontaneous breaking of chiral symmetry, the fundamental theory of finite temperature, and further expositions, in the following four sections. The first section is titled 'Introduction to Finite Temperature and Density QCD', with subsections 1.1 standard model and QCD, 1.2 phase transition and phase structure of QCD, 1.3 lattice QCD and thermodynamic quantities, 1.4 heavy ion collision experiments, and 1.5 neutron stars. The second is 'Equilibrium State', with subsections 2.1 chiral symmetry, 2.2 vacuum state: BCS theory, 2.3 NJL (Nambu-Jona-Lasinio) model, and 2.4 color superconductivity. The third is 'Static Fluctuations', with subsections 3.1 fluctuations, 3.2 moment and cumulant, 3.3 increase of fluctuations at critical points, 3.4 analysis of fluctuations by lattice QCD and Taylor expansion, and 3.5 experimental exploration of the QCD phase structure. The fourth is 'Dynamical Structure', with subsections 4.1 linear response theory, 4.2 spectral functions, 4.3 Matsubara function, and 4.4 analyses of dynamical structure by lattice QCD. (S. Funahashi)
Expressing Intervals in Automated Service Negotiation
Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.
During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.
Electrocardiographic Abnormalities and QTc Interval in Patients Undergoing Hemodialysis.
Directory of Open Access Journals (Sweden)
Yuxin Nie
Sudden cardiac death is one of the primary causes of mortality in chronic hemodialysis (HD) patients. A prolonged QTc interval is associated with an increased rate of sudden cardiac death. The aim of this article is to assess the abnormalities found in electrocardiograms (ECGs) and to explore factors that can influence the QTc interval. A total of 141 conventional HD patients were enrolled in this study. ECG tests were conducted on each patient before a single dialysis session and 15 minutes before the end of the dialysis session (at peak stress). Echocardiography tests were conducted before the dialysis session began. Blood samples were drawn by phlebotomy immediately before and after the dialysis session. Before dialysis, 93.62% of the patients were in sinus rhythm, and approximately 65% of the patients showed a prolonged QTc interval (i.e., a QTc interval above 440 ms in males and above 460 ms in females). A comparison of ECG parameters before dialysis and at peak stress showed increases in heart rate (77.45±11.92 vs. 80.38±14.65 bpm, p = 0.001) and QTc interval (460.05±24.53 ms vs. 470.93±24.92 ms, p<0.001). After dividing patients into two groups according to the QTc interval, lower pre-dialysis serum concentrations of potassium (K+), calcium (Ca2+), phosphorus and the calcium-phosphorus product (Ca×P), and higher concentrations of plasma brain natriuretic peptide (BNP), were found in the group with prolonged QTc intervals. Patients in this group also had a larger left atrial diameter (LAD) and a thicker interventricular septum, and they tended to be older than patients in the other group. Patients were then divided into two groups according to ΔQTc (ΔQTc = QTc at peak stress − QTc pre-HD). When analyzing the patients whose QTc intervals were longer at peak stress than before HD, we found that they had higher concentrations of Ca2+ and P5+ and lower concentrations of K+, ferritin, UA, and BNP. They were also more likely to be female. In addition, more cardiac construction
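QTc values like those reported above are heart-rate-corrected QT intervals. The abstract does not state which correction formula the study used; the sketch below applies Bazett's formula, the most common choice:

```python
def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett-corrected QT interval in ms: QTc = QT / sqrt(RR),
    with the RR interval in seconds derived from the heart rate.
    (Illustrative; the cited study does not state its correction formula.)"""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / (rr_s ** 0.5)

# At 60 bpm, RR = 1 s and QTc equals the measured QT.
print(qtc_bazett(440.0, 60))
# The same measured QT at a faster heart rate gives a longer corrected QTc,
# which is why QTc (not raw QT) is compared before dialysis and at peak stress.
print(round(qtc_bazett(440.0, 80), 1))
```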
Reviewing interval cancers: Time well spent?
International Nuclear Information System (INIS)
Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan
2002-01-01
OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols, and it is no more accurate than other, quicker and more timely methods. Gower-Thomas, K. et al. (2002)
Different radiation impedance models for finite porous materials
DEFF Research Database (Denmark)
Nolan, Melanie; Jeong, Cheol-Ho; Brunskog, Jonas
2015-01-01
The Sabine absorption coefficients of finite absorbers are measured in a reverberation chamber according to the international standard ISO 354. They vary with the specimen size essentially due to diffraction at the specimen edges, which can be seen as the radiation impedance differing from...... the infinite case. Thus, in order to predict the Sabine absorption coefficients of finite porous samples, one can incorporate models of the radiation impedance. In this study, different radiation impedance models are compared with two experimental examples. Thomasson’s model is compared to Rhazi’s method when...
Confidence interval procedures for Monte Carlo transport simulations
International Nuclear Information System (INIS)
Pederson, S.P.
1997-01-01
The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s² are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.
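The "relative variance of the variance" diagnostic described above can be sketched in a few lines. This is a minimal illustration, not MCNP's implementation; the normal test sample and the 0.1 acceptance guideline are assumptions drawn from common MCNP practice:

```python
import random

def vov(xs):
    """Relative variance of the sample variance:
    VOV = sum((x - xbar)^4) / (sum((x - xbar)^2))^2 - 1/n.
    Large values signal a right-skewed tally whose CI may be invalid."""
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs)
    s4 = sum((x - xbar) ** 4 for x in xs)
    return s4 / s2 ** 2 - 1.0 / n

random.seed(0)
well_behaved = [random.gauss(0.0, 1.0) for _ in range(10_000)]
print(vov(well_behaved))  # for a normal sample this is roughly 2/n, far below 0.1
```

For a well-behaved (normal) tally the VOV decays like 2/n; a heavy right tail keeps it large, flagging that the sample size is not yet sufficient for the central limit theorem to apply.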
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...
African Journals Online (AJOL)
FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.
Automation of finite element methods
Korelc, Jože
2016-01-01
New finite elements are needed in research as well as in industry environments for the development of virtual prediction techniques. The design and implementation of novel finite elements for specific purposes is a tedious and time-consuming task, especially for nonlinear formulations. Automating this process can speed it up considerably, since generation of the final computer code can be accelerated by several orders of magnitude. This book provides the reader with the knowledge required to employ modern automatic tools like AceGen within solid mechanics successfully. It covers the range from the theoretical background and algorithmic treatments to many different applications. The book is written for advanced students in the engineering field and for researchers in educational and industrial environments.
Finite elements methods in mechanics
Eslami, M Reza
2014-01-01
This book covers all basic areas of mechanical engineering, such as fluid mechanics, heat conduction, beams, and elasticity, with detailed derivations for the mass, stiffness, and force matrices. It is especially designed to give the reader a physical feeling for finite element approximation by introducing finite elements through the elevation of an elastic membrane. A detailed treatment of computer methods with numerical examples is provided. In the fluid mechanics chapter, the conventional and vorticity transport formulations for viscous incompressible fluid flow are presented, with discussion of the method of solution. The variational and Galerkin formulations of the heat conduction, beams, and elasticity problems are also discussed in detail. Three computer codes are provided to solve the elastic membrane problem. One of them solves Poisson’s equation. The second computer program handles the two-dimensional elasticity problems, and the third one presents the three-dimensional transient heat conducti...
Representation theory of finite monoids
Steinberg, Benjamin
2016-01-01
This first text on the subject provides a comprehensive introduction to the representation theory of finite monoids. Carefully worked examples and exercises provide the bells and whistles for graduate accessibility, bringing a broad range of advanced readers to the forefront of research in the area. Highlights of the text include applications to probability theory, symbolic dynamics, and automata theory. Comfort with module theory, a familiarity with ordinary group representation theory, and the basics of Wedderburn theory, are prerequisites for advanced graduate level study. Researchers in algebra, algebraic combinatorics, automata theory, and probability theory, will find this text enriching with its thorough presentation of applications of the theory to these fields. Prior knowledge of semigroup theory is not expected for the diverse readership that may benefit from this exposition. The approach taken in this book is highly module-theoretic and follows the modern flavor of the theory of finite dimensional ...
Structural modeling techniques by finite element method
International Nuclear Information System (INIS)
Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong
1991-01-01
This book includes an introduction, a table of contents, and three chapters: Chapter 1, finite element idealization (introduction; summary of the finite element method; equilibrium and compatibility in the finite element solution; degrees of freedom; symmetry and antisymmetry; modeling guidelines; local analysis; example; references); Chapter 2, static analysis (structural geometry; finite element models; analysis procedure; modeling guidelines; references); and Chapter 3, dynamic analysis (models for dynamic analysis; dynamic analysis procedures; modeling guidelines).
$\delta$-Expansion at Finite Temperature
Ramos, Rudnei O.
1996-01-01
We apply the $\delta$-expansion perturbation scheme to the $\lambda\phi^4$ self-interacting scalar field theory in 3+1 D at finite temperature. In the $\delta$-expansion the interaction term is written as $\lambda(\phi^2)^{1+\delta}$ and $\delta$ is considered as the perturbation parameter. We compute within this perturbative approach the renormalized mass at finite temperature at a finite order in $\delta$. The results are compared with the usual loop expansion at finite temperature.
Finite temperature instability for compactification
International Nuclear Information System (INIS)
Accetta, F.S.; Kolb, E.W.
1986-03-01
We consider finite temperature effects upon theories with extra dimensions compactified via vacuum stress energy (Casimir) effects. For sufficiently high temperature, a static configuration for the internal space is impossible. At somewhat lower temperatures, there is an instability due to thermal fluctuations of radius of the compact dimensions. For both cases, the Universe can evolve to a de Sitter-like expansion of all dimensions. Stability to late times constrains the initial entropy of the universe. 28 refs., 1 fig., 2 tabs
Finite mathematics models and applications
Morris, Carla C
2015-01-01
Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.
Quantum Chromodynamics at finite temperature
International Nuclear Information System (INIS)
Magalhaes, N.S.
1987-01-01
A formal expression for the Gibbs free energy of topological defects of quantum chromodynamics (QCD) is determined by using the semiclassical approach in the context of field theory at finite temperature and in the high-temperature limit. This expression is used to calculate the free energy of magnetic monopoles. Applying the obtained results to a method in which the free energy of topological defects of a theory may indicate its different phases, we search for information about the phases of QCD. (author) [pt
Perturbative QCD at finite temperature
International Nuclear Information System (INIS)
Altherr, T.
1989-03-01
We discuss an application of finite temperature QCD to lepton-pair production in a quark-gluon plasma. The perturbative calculation is performed within the real-time formalism. After cancellation of infrared and mass singularities, the corrections at O(α_s) are found to be very small in the region where the mass of the Drell-Yan pair is much larger than the temperature of the plasma. Interesting effects, however, appear at the annihilation threshold of the thermalized quarks.
Spinor pregeometry at finite temperature
International Nuclear Information System (INIS)
Yoshimoto, Seiji.
1985-10-01
We derive the effective action for gravity at finite temperature in spinor pregeometry. The temperature-dependent effective potential for the vierbein, parametrized as e_{kμ} = b·diag(1, ξ, ξ, ξ), has its minimum at b = 0 for fixed ξ, and behaves as -ξ³ for fixed b. These results indicate that the system of fundamental matter in spinor pregeometry cannot be in equilibrium. (author)
Probabilistic finite element modeling of waste rollover
International Nuclear Information System (INIS)
Khaleel, M.A.; Cofer, W.F.; Al-fouqaha, A.A.
1995-09-01
Stratification of the wastes in many Hanford storage tanks has resulted in sludge layers which are capable of retaining gases formed by chemical and/or radiolytic reactions. As the gas is produced, the mechanisms of gas storage evolve until the resulting buoyancy in the sludge leads to instability, at which point the sludge ''rolls over'' and a significant volume of gas is suddenly released. Because the releases may contain flammable gases, these episodes of release are potentially hazardous. Mitigation techniques are desirable for more controlled releases at more frequent intervals. To aid the mitigation efforts, a methodology for predicting sludge rollover at specific times is desired. This methodology would then provide a rational basis for the development of a schedule for the mitigation procedures. In addition, knowledge of the sensitivity of the sludge rollovers to various physical and chemical properties within the tanks would provide direction for efforts to reduce the frequency and severity of these events. In this report, the use of probabilistic finite element analyses for computing the probability of rollover and the sensitivity of rollover probability to various parameters is described.
Performance of finite order distribution-generated universal portfolios
Pang, Sook Theng; Liew, How Hui; Chang, Yun Fah
2017-04-01
A Constant Rebalanced Portfolio (CRP) is an investment strategy which reinvests by redistributing wealth equally among a set of stocks. The empirical performance of the distribution-generated universal portfolio strategies are analysed experimentally concerning 10 higher volume stocks from different categories in Kuala Lumpur Stock Exchange. The time interval of study is from January 2000 to December 2015, which includes the credit crisis from September 2008 to March 2009. The performance of the finite-order universal portfolio strategies has been shown to be better than Constant Rebalanced Portfolio with some selected parameters of proposed universal portfolios.
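The Constant Rebalanced Portfolio described above can be sketched in a few lines. The two-stock price relatives are invented for illustration, not taken from the Kuala Lumpur data:

```python
def crp_wealth(weights, price_relatives):
    """Wealth after rebalancing back to fixed `weights` each period.
    price_relatives[t][i] = price_i(t) / price_i(t-1) for stock i."""
    wealth = 1.0
    for rel in price_relatives:
        wealth *= sum(w * r for w, r in zip(weights, rel))
    return wealth

# Two stocks, equal weights; each period one gains while the other loses.
rels = [(1.2, 0.8), (0.9, 1.1)]
print(crp_wealth([0.5, 0.5], rels))  # 1.0 * 1.0 = 1.0
```

Rebalancing forces the investor to sell the winner and buy the loser each period; universal portfolio strategies average over many such weight vectors rather than committing to one.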
A finite Zitterbewegung model for relativistic quantum mechanics
International Nuclear Information System (INIS)
Noyes, H.P.
1990-01-01
Starting from steps of length h/mc and time intervals h/mc², which imply a quasi-local Zitterbewegung with velocity steps ±c, we employ discrimination between bit-strings of finite length to construct a necessary 3+1 dimensional event-space for relativistic quantum mechanics. By using the combinatorial hierarchy to label the strings, we provide a successful start on constructing the coupling constants and mass ratios implied by the scheme. Agreement with experiments is surprisingly accurate. 22 refs., 1 fig
A finite Zitterbewegung model for relativistic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Noyes, H.P.
1990-02-19
Starting from steps of length h/mc and time intervals h/mc², which imply a quasi-local Zitterbewegung with velocity steps ±c, we employ discrimination between bit-strings of finite length to construct a necessary 3+1 dimensional event-space for relativistic quantum mechanics. By using the combinatorial hierarchy to label the strings, we provide a successful start on constructing the coupling constants and mass ratios implied by the scheme. Agreement with experiments is surprisingly accurate. 22 refs., 1 fig.
Correlator of nucleon currents in finite temperature pion gas
International Nuclear Information System (INIS)
Eletsky, V.L.
1990-01-01
A retarded correlator of two currents with nucleon quantum numbers is calculated at finite temperature T in the chiral limit. It is shown that for Euclidean momenta the leading one-loop corrections arise from direct interaction of thermal pions with the currents. A dispersive representation for the correlator shows that this interaction smears the nucleon pole over a frequency interval of width ≅ T. This interaction does not change the exponential fall-off of the correlator in Euclidean space but gives an O(T²/F_π²) contribution to the pre-exponential factor. (orig.)
Finite Metric Spaces of Strictly negative Type
DEFF Research Database (Denmark)
Hjorth, Poul G.
If a finite metric space is of strictly negative type then its transfinite diameter is uniquely realized by an infinite extent ("load vector"). Finite metric spaces that have this property include all trees, and all finite subspaces of Euclidean and hyperbolic spaces. We prove that if the distance…
Characterization of finite spaces having dispersion points
International Nuclear Information System (INIS)
Al-Bsoul, A. T
1997-01-01
In this paper we characterize the finite spaces having dispersion points. Also, we prove that the dispersion point of a finite space with a dispersion point is fixed under all non-constant continuous functions, which answers affirmatively, for finite spaces, the question raised by J. Cobb and W. Voxman in 1980. Some open problems are given. (author). 16 refs
Advanced Interval Management: A Benefit Analysis
Timer, Sebastian; Peters, Mark
2016-01-01
This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.
Generalized production planning problem under interval uncertainty
Directory of Open Access Journals (Sweden)
Samir A. Abass
2010-06-01
Full Text Available Data in many real-life engineering and economic problems suffer from inexactness. Herein we assume that we are given some intervals in which the data can simultaneously and independently perturb. We consider the generalized production planning problem with interval data. The interval data appear in both the objective function and the constraints. We build on existing results concerning the qualitative and quantitative analysis of basic notions in the parametric production planning problem. These notions are the set of feasible parameters, the solvability set and the stability set of the first kind.
Reconstruction of dynamical systems from interspike intervals
International Nuclear Information System (INIS)
Sauer, T.
1994-01-01
Attractor reconstruction from interspike interval (ISI) data is described, in rough analogy with Takens' theorem for attractor reconstruction from time series. Assuming a generic integrate-and-fire model coupling the dynamical system to the spike train, there is a one-to-one correspondence between the system states and interspike interval vectors of sufficiently large dimension. The correspondence has an important implication: interspike intervals can be forecast from past history. We show that deterministically driven ISI series can be distinguished from stochastically driven ISI series on the basis of prediction error.
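The integrate-and-fire coupling described above can be sketched as follows. The logistic-map drive and the threshold value are illustrative assumptions, not the paper's specific model:

```python
def isi_from_signal(signal, theta):
    """Integrate-and-fire: accumulate the signal until it reaches the
    threshold theta, emit a spike, subtract theta, and record the
    interspike interval (in time steps) since the previous spike."""
    s, last, intervals = 0.0, 0, []
    for t, x in enumerate(signal, start=1):
        s += x
        if s >= theta:
            s -= theta
            intervals.append(t - last)
            last = t
    return intervals

# Deterministic chaotic drive: logistic map shifted into [0.5, 1.5]
xs, x = [], 0.3
for _ in range(200):
    x = 3.9 * x * (1 - x)
    xs.append(0.5 + x)

isis = isi_from_signal(xs, theta=5.0)
print(isis[:5])
```

Because the drive is deterministic, successive ISI vectors of sufficient dimension trace out a reconstructed attractor, which is what makes the intervals forecastable from past history.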
Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
Roberto S. Flowers-Cano
2018-02-01
Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP, bias-corrected bootstrap (BC, accelerated bias-corrected bootstrap (BCA and a modified version of the standard bootstrap (MSB. Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different to the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
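The percentile bootstrap (BP) mentioned above is the simplest of the four techniques: resample with replacement, recompute the statistic, and take empirical percentiles of the replicates. This is a generic sketch, not the authors' hydrological setup; the exponential sample is invented:

```python
import random
from statistics import mean

def percentile_bootstrap_ci(data, stat=mean, level=0.95, n_boot=2000, seed=1):
    """BP interval: empirical alpha/2 and 1-alpha/2 percentiles of the
    bootstrap distribution of `stat`."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    alpha = (1 - level) / 2
    lo = reps[int(alpha * n_boot)]
    hi = reps[int((1 - alpha) * n_boot) - 1]
    return lo, hi

random.seed(2)
sample = [random.expovariate(1.0) for _ in range(50)]
lo, hi = percentile_bootstrap_ci(sample)
print(lo, hi)
```

The bias-corrected (BC) and accelerated (BCA) variants adjust these percentile indices to compensate for bias and skewness in the bootstrap distribution, which is why they can outperform BP for skewed hydrological statistics.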
Pediatric Reference Intervals for Free Thyroxine and Free Triiodothyronine
Jang, Megan; Guo, Tiedong; Soldin, Steven J.
2009-01-01
Background The clinical value of free thyroxine (FT4) and free triiodothyronine (FT3) analysis depends on the reference intervals with which they are compared. We determined age- and sex-specific reference intervals for neonates, infants, and children 0–18 years of age for FT4 and FT3 using tandem mass spectrometry. Methods Reference intervals were calculated for serum FT4 (n = 1426) and FT3 (n = 1107) obtained from healthy children between January 1, 2008, and June 30, 2008, from Children's National Medical Center and Georgetown University Medical Center Bioanalytical Core Laboratory, Washington, DC. Serum samples were analyzed using isotope dilution liquid chromatography tandem mass spectrometry (LC/MS/MS) with deuterium-labeled internal standards. Results FT4 reference intervals were very similar for males and females of all ages and ranged between 1.3 and 2.4 ng/dL for children 1 to 18 years old. FT4 reference intervals for 1- to 12-month-old infants were 1.3–2.8 ng/dL. These 2.5 to 97.5 percentile intervals were much tighter than reference intervals obtained using immunoassay platforms (0.48–2.78 ng/dL for males and 0.85–2.09 ng/dL for females). Similarly, FT3 intervals were consistent and similar for males and females and for all ages, ranging between 1.5 pg/mL and approximately 6.0 pg/mL for children 1 month of age to 18 years old. Conclusions This is the first study to provide pediatric reference intervals of FT4 and FT3 for children from birth to 18 years of age using LC/MS/MS. Analysis using LC/MS/MS provides more specific quantification of thyroid hormones. A comparison of the ultrafiltration tandem mass spectrometric method with equilibrium dialysis showed very good correlation. PMID:19583487
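The 2.5 to 97.5 percentile reference intervals used above can be computed generically from a healthy reference sample. The simulated FT4-like values below are assumptions for illustration, not the study's data:

```python
import random
from statistics import quantiles

random.seed(3)
# Simulated analyte values (ng/dL) for a healthy reference population
values = [random.gauss(1.85, 0.25) for _ in range(1000)]

# quantiles(n=40) returns 39 cut points: index 0 is the 2.5th
# percentile and index 38 (the last) is the 97.5th percentile.
cuts = quantiles(values, n=40)
lo, hi = cuts[0], cuts[-1]
print(f"reference interval: {lo:.2f}-{hi:.2f}")
```

By construction about 95% of the reference population falls inside [lo, hi], which is the defining property of a central 95% reference interval.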
Intact interval timing in circadian CLOCK mutants.
Cordes, Sara; Gallistel, C R
2008-08-28
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.
Socioeconomic position and the primary care interval
DEFF Research Database (Denmark)
Vedsted, Anders
2018-01-01
Introduction. Diagnostic delays affect cancer survival negatively. Thus, the time interval from symptomatic presentation to a GP until referral to secondary care (i.e. the primary care interval (PCI)) should be as short as possible. Lower socioeconomic position seems associated with poorer cancer… to the easiness to interpret the symptoms of the underlying cancer. Methods. We conducted a population-based cohort study using survey data on time intervals linked at an individual level to routinely collected data on demographics from Danish registries. Using logistic regression we estimated the odds… Patients younger than 45 years of age and older than 54 years of age had a longer primary care interval than patients aged ‘45-54’ years. No other associations for SEP characteristics were observed. The findings may imply that GPs are referring patients regardless of SEP, although some room for improvement prevails…
Recurrence interval analysis of trading volumes.
Ren, Fei; Zhou, Wei-Xing
2010-06-01
We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
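Recurrence intervals between threshold exceedances, as used above, reduce to a few lines. The toy volume series and threshold are invented for illustration:

```python
def recurrence_intervals(series, q):
    """Intervals tau between successive values exceeding threshold q."""
    hits = [t for t, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

volumes = [5, 1, 7, 2, 2, 6, 1, 1, 1, 8]
print(recurrence_intervals(volumes, q=4))  # [2, 3, 4]
```

The paper's analysis then fits the empirical distribution of these τ values and tests the power-law tail with Kolmogorov-Smirnov-type statistics; raising q thins the exceedances and lengthens the typical interval.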
Probability Distribution for Flowing Interval Spacing
International Nuclear Information System (INIS)
S. Kuzio
2004-01-01
Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be
Chu, Chunlei; Stoffa, Paul L.
2012-01-01
sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
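The distinction drawn above between variation among individuals (the standard deviation) and uncertainty in the mean (the standard error, which shrinks as 1/√n) can be sketched directly. The leaf-calcium numbers are invented for illustration:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical Ca concentrations (mg/g) in sugar maple leaves
ca = [5.2, 6.1, 4.8, 5.9, 5.5, 6.3, 4.9, 5.7, 5.4, 6.0]

sd = stdev(ca)           # spread of individuals (prediction-interval scale)
se = sd / sqrt(len(ca))  # uncertainty in the mean (confidence-interval scale)
print(mean(ca), sd, se)  # SE is sd/sqrt(10), about 3.2x smaller than SD
```

As n grows the SE term vanishes while the SD does not, which is why plot-level predictions of individual trees stay uncertain even when the mean (and hence a country-level carbon stock) is tightly estimated.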
Peridynamic Multiscale Finite Element Methods
Energy Technology Data Exchange (ETDEWEB)
Costa, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moore, Stan Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-12-01
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamic and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse-scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics there is a strong desire to couple local and nonlocal models to leverage the speed and state of the
Concurrent variable-interval variable-ratio schedules in a dynamic choice environment.
Bell, Matthew C; Baum, William M
2017-11-01
Most studies of operant choice have focused on presenting subjects with a fixed pair of schedules across many experimental sessions. Using these methods, studies of concurrent variable-interval variable-ratio schedules helped to evaluate theories of choice. More recently, a growing literature has focused on dynamic choice behavior. Those dynamic choice studies have analyzed behavior on a number of different time scales using concurrent variable-interval schedules. Following the dynamic choice approach, the present experiment examined performance on concurrent variable-interval variable-ratio schedules in a rapidly changing environment. Our objectives were to compare performance on concurrent variable-interval variable-ratio schedules with extant data on concurrent variable-interval variable-interval schedules using a dynamic choice procedure and to extend earlier work on concurrent variable-interval variable-ratio schedules. We analyzed performances at different time scales, finding strong similarities between concurrent variable-interval variable-interval and concurrent variable-interval variable-ratio performance within dynamic choice procedures. Time-based measures revealed almost identical performance in the two procedures compared with response-based measures, supporting the view that choice is best understood as time allocation. Performance at the smaller time scale of visits accorded with the tendency seen in earlier research toward developing a pattern of strong preference for and long visits to the richer alternative paired with brief "samples" at the leaner alternative ("fix and sample"). © 2017 Society for the Experimental Analysis of Behavior.
Joint interval reliability for Markov systems with an application in transmission line reliability
International Nuclear Information System (INIS)
Csenki, Attila
2007-01-01
We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1,...,k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed-form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.
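Csenki's closed form covers general finite-state Markov models; as a hedged illustration of the quantity itself (not the paper's derivation), the joint interval reliability of the simplest two-state system with constant failure rate `lam` and repair rate `mu`, starting up at t = 0, follows directly from the Markov property. The function name and the starting-state assumption are ours:

```python
import math

def joint_interval_reliability(lam, mu, intervals):
    """Probability that a two-state Markov system (failure rate lam,
    repair rate mu) is up throughout every interval (t, x) ~ [t, t+x]
    in `intervals`. Intervals must be disjoint and sorted in time;
    the system is assumed up at t = 0."""
    a = lam + mu

    def p_up_given_up(s):
        # Transient probability of being up after time s, starting up.
        return mu / a + (lam / a) * math.exp(-a * s)

    r, end = 1.0, 0.0
    for t, x in intervals:
        assert t >= end, "intervals must be disjoint and ordered"
        r *= p_up_given_up(t - end)   # up again at the interval's start
        r *= math.exp(-lam * x)       # no failure during the interval
        end = t + x
    return r
```

For a single interval starting at t = 0 this reduces to the ordinary reliability e^(-lam*x), which is a quick sanity check on the sketch.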
Fault detection for discrete-time LPV systems using interval observers
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-10-01
This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to the existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity and nonnegative conditions lead to less conservative LMI characterisations. Furthermore, without the need to design the residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for FD decision. Finally, simulation results are presented for showing the effectiveness and superiority of the proposed method.
Functionals of finite Riemann surfaces
Schiffer, Menahem
1954-01-01
This advanced monograph on finite Riemann surfaces, based on the authors' 1949-50 lectures at Princeton University, remains a fundamental book for graduate students. The Bulletin of the American Mathematical Society hailed the self-contained treatment as the source of "a plethora of ideas, each interesting in its own right," noting that "the patient reader will be richly rewarded." Suitable for graduate-level courses, the text begins with three chapters that offer a development of the classical theory along historical lines, examining geometrical and physical considerations, existence theo
International Nuclear Information System (INIS)
Barbarin, F.; Sorba, P.; Ragoucy, E.
1996-01-01
The property of some finite W algebras to be the commutant of a particular subalgebra of a simple Lie algebra G is used to construct realizations of G. When G ≅ so(4,2), unitary representations of the conformal and Poincaré algebras are recognized in this approach, which can be compared to the usual induced representation technique. When G ≅ sl(2,R), the anyonic parameter can be seen as the eigenvalue of a W generator in such W representations of G. The generalization of such properties to the affine case is also discussed in the conclusion, where an alternative to the Wakimoto construction for sl(2)_k is briefly presented. (authors)
Simulating QCD at finite density
de Forcrand, Philippe
2009-01-01
In this review, I recall the nature and the inevitability of the "sign problem" which plagues attempts to simulate lattice QCD at finite baryon density. I present the main approaches used to circumvent the sign problem at small chemical potential. I sketch how one can predict analytically the severity of the sign problem, as well as the numerically accessible range of baryon densities. I review progress towards the determination of the pseudo-critical temperature T_c(mu), and towards the identification of a possible QCD critical point. Some promising advances with non-standard approaches are reviewed.
Finite temperature approach to confinement
International Nuclear Information System (INIS)
Gave, E.; Jengo, R.; Omero, C.
1980-06-01
The finite temperature treatment of gauge theories, formulated in terms of a gauge invariant variable as in a Polyakov method, is used as a device for obtaining an effective theory where the confinement test takes the form of a correlation function. The formalism is discussed for the abelian CPsup(n-1) model in various dimensionalities and for the pure Yang-Mills theory in the limit of zero temperature. In the latter case a class of vortex like configurations of the effective theory which induce confinement correspond in particular to the instanton solutions. (author)
Linear and Nonlinear Finite Elements.
1983-12-01
Metzler. Conjugate gradient solution of a finite element elastic problem with high Poisson ratio, without scaling and once with the global stiffness matrix K... nonzero c that makes u(0) = 1. According to the linear, small-deflection theory of the membrane, the central displacement given to the membrane is not... a theory is possible based on approximations of the form (1 - y'^2)^(1/2) ≈ 1 - y'^2/2, which change eq. (5)... [remainder of excerpt garbled in extraction]
Covariant gauges at finite temperature
Landshoff, Peter V
1992-01-01
A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler to use than the conventional one.
Transmission line sag calculations using interval mathematics
Energy Technology Data Exchange (ETDEWEB)
Shaalan, H. [Institute of Electrical and Electronics Engineers, Washington, DC (United States)]|[US Merchant Marine Academy, Kings Point, NY (United States)
2007-07-01
Electric utilities are facing the need for additional generating capacity, new transmission systems and more efficient use of existing resources. As such, there are several uncertainties associated with utility decisions. These uncertainties include future load growth, construction times and costs, and performance of new resources. Regulatory and economic environments also present uncertainties. Uncertainty can be modeled based on a probabilistic approach, where probability distributions for all of the uncertainties are assumed. Another approach to modeling uncertainty is referred to as unknown but bounded. In this approach, upper and lower bounds on the uncertainties are assumed without probability distributions. Interval mathematics is a tool for the practical use and extension of the unknown-but-bounded concept. In this study, the calculation of transmission line sag was used as an example to demonstrate the use of interval mathematics. The objective was to determine the change in cable length, based on a fixed span and an interval of cable sag values for a range of temperatures. The resulting change in cable length was an interval corresponding to the interval of cable sag values. It was shown that there is a small change in conductor length due to variation in sag based on the temperature ranges used in this study. 8 refs.
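The abstract does not reproduce its formulas; as a minimal sketch, assuming the standard parabolic approximation L ≈ S + 8D²/(3S) for conductor length L with span S and sag D, an interval of sag values maps to an interval of cable lengths. Because L is monotone increasing in D for D ≥ 0, evaluating the endpoints of the sag interval suffices (function name ours):

```python
def cable_length_interval(span, sag_lo, sag_hi):
    """Interval of conductor length L = S + 8*D^2/(3*S) (parabolic
    approximation) when the sag D is known only to lie in
    [sag_lo, sag_hi]. L is increasing in D for D >= 0, so the
    interval image is just [L(sag_lo), L(sag_hi)]."""
    assert 0 <= sag_lo <= sag_hi

    def length(d):
        return span + 8.0 * d * d / (3.0 * span)

    return length(sag_lo), length(sag_hi)
```

For a 300 m span with sag bounded between 6 m and 8 m over the temperature range, the cable length interval is roughly [300.32 m, 300.57 m], illustrating the paper's observation that the resulting change in conductor length is small.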
Using an R Shiny to Enhance the Learning Experience of Confidence Intervals
Williams, Immanuel James; Williams, Kelley Kim
2018-01-01
Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…
Population-Based Pediatric Reference Intervals in General Clinical Chemistry: A Swedish Survey.
Ridefelt, Peter
2015-01-01
Very few high quality studies on pediatric reference intervals for general clinical chemistry and hematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The Swedish survey included 701 healthy children. Reference intervals for general clinical chemistry and hematology were defined.
Two intervals Rényi entanglement entropy of compact free boson on torus
International Nuclear Information System (INIS)
Liu, Feihu; Liu, Xiao
2016-01-01
We compute the N=2 Rényi entanglement entropy of two intervals at equal time in a circle, for the theory of a 2D compact complex free scalar at finite temperature. This is carried out by performing functional integral on a genus 3 ramified cover of the torus, wherein the quantum part of the integral is captured by the four point function of twist fields on the worldsheet torus, and the classical piece is given by summing over winding modes of the genus 3 surface onto the target space torus. The final result is given in terms of a product of theta functions and certain multi-dimensional theta functions. We demonstrate the T-duality invariance of the result. We also study its low temperature limit. In the case in which the size of the intervals and of their separation are much smaller than the whole system, our result is in exact agreement with the known result for two intervals on an infinite system at zero temperature http://dx.doi.org/10.1088/1742-5468/2009/11/P11001. In the case in which the separation between the two intervals is much smaller than the interval length, the leading thermal corrections take the same universal form as proposed in http://dx.doi.org/10.1103/PhysRevLett.112.171603, http://dx.doi.org/10.1103/PhysRevD.91.105013 for Rényi entanglement entropy of a single interval.
Biset functors for finite groups
Bouc, Serge
2010-01-01
This volume exposes the theory of biset functors for finite groups, which yields a unified framework for operations of induction, restriction, inflation, deflation and transport by isomorphism. The first part recalls the basics on biset categories and biset functors. The second part is concerned with the Burnside functor and the functor of complex characters, together with semisimplicity issues and an overview of Green biset functors. The last part is devoted to biset functors defined over p-groups for a fixed prime number p. This includes the structure of the functor of rational representations and rational p-biset functors. The last two chapters expose three applications of biset functors to long-standing open problems, in particular the structure of the Dade group of an arbitrary finite p-group.This book is intended both to students and researchers, as it gives a didactic exposition of the basics and a rewriting of advanced results in the area, with some new ideas and proofs.
Supersymmetry breaking at finite temperature
International Nuclear Information System (INIS)
Kratzert, K.
2002-11-01
The mechanism of supersymmetry breaking at finite temperature is still only partly understood. Though it has been proven that temperature always breaks supersymmetry, the spontaneous nature of this breaking remains unclear, in particular the role of the Goldstone fermion. The aim of this work is to unify two existing approaches to the subject. From a hydrodynamic point of view, it has been argued under very general assumptions that in any supersymmetric quantum field theory at finite temperature there should exist a massless fermionic collective excitation, named phonino because of the analogy to the phonon. In the framework of a self-consistent resummed perturbation theory, it is shown for the example of the Wess-Zumino model that this mode fits very well into the quantum field theoretical framework pursued by earlier works. Interpreted as a bound state of boson and fermion, it contributes to the supersymmetric Ward-Takahashi identities in a way showing that supersymmetry is indeed broken spontaneously with the phonino playing the role of the Goldstone fermion. The second part of the work addresses the case of supersymmetric quantum electrodynamics. It is shown that also here the phonino exists and must be interpreted as the Goldstone mode. This knowledge allows a generalization to a wider class of models. (orig.)
Finite groups and quantum physics
International Nuclear Information System (INIS)
Kornyak, V. V.
2013-01-01
Concepts of quantum theory are considered from the constructive “finite” point of view. The introduction of a continuum or other actual infinities in physics destroys constructiveness without any need for them in describing empirical observations. It is shown that quantum behavior is a natural consequence of symmetries of dynamical systems. The underlying reason is that it is impossible in principle to trace the identity of indistinguishable objects in their evolution—only information about invariant statements and values concerning such objects is available. General mathematical arguments indicate that any quantum dynamics is reducible to a sequence of permutations. Quantum phenomena, such as interference, arise in invariant subspaces of permutation representations of the symmetry group of a dynamical system. Observable quantities can be expressed in terms of permutation invariants. It is shown that nonconstructive number systems, such as complex numbers, are not needed for describing quantum phenomena. It is sufficient to employ cyclotomic numbers—a minimal extension of natural numbers that is appropriate for quantum mechanics. The use of finite groups in physics, which underlies the present approach, has an additional motivation. Numerous experiments and observations in the particle physics suggest the importance of finite groups of relatively small orders in some fundamental processes. The origin of these groups is unclear within the currently accepted theories—in particular, within the Standard Model.
Assessing performance and validating finite element simulations using probabilistic knowledge
Energy Technology Data Exchange (ETDEWEB)
Dolin, Ronald M.; Rodriguez, E. A. (Edward A.)
2002-01-01
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
Existence test for asynchronous interval iterations
DEFF Research Database (Denmark)
Madsen, Kaj; Caprani, O.; Stauning, Ole
1997-01-01
In the search for regions that contain fixed points of a real function of several variables, tests based on interval calculations can be used to establish existence or non-existence of fixed points in regions that are examined in the course of the search. The search can e.g. be performed … as a synchronous (sequential) interval iteration: in each iteration step all components of the iterate are calculated based on the previous iterate. In this case it is straightforward to base simple interval existence and non-existence tests on the calculations done in each step of the iteration. The search can also … on the componentwise calculations done in the course of the iteration. These componentwise tests are useful for parallel implementation of the search, since the tests can then be performed local to each processor, and only when a test is successful does a processor communicate this result to other processors.
Effect size, confidence intervals and statistical power in psychological research.
Directory of Open Access Journals (Sweden)
Téllez A.
2015-07-01
Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.
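To make the recommendation concrete, here is a hedged sketch (not from the paper; the function name and the normal-approximation standard error are our choices) of Cohen's d for two independent samples together with an approximate confidence interval:

```python
import math

def cohens_d_ci(x, y, z=1.96):
    """Cohen's d for two independent samples, with an approximate
    confidence interval based on the common normal approximation
    to the sampling variance of d (z = 1.96 gives ~95%)."""
    n1, n2 = len(x), len(y)
    m1 = sum(x) / n1
    m2 = sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    # Pooled standard deviation.
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

Reporting the interval alongside d, as the paper urges, makes it immediately visible whether an effect is estimated precisely or whether the data are compatible with a trivial effect.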
The unified method: III. Nonlinearizable problems on the interval
International Nuclear Information System (INIS)
Lenells, J; Fokas, A S
2012-01-01
Boundary value problems for integrable nonlinear evolution PDEs formulated on the finite interval can be analyzed by the unified method introduced by one of the authors and extensively used in the literature. The implementation of this general method to this particular class of problems yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem formulated in the complex k-plane (the Fourier plane), which has a jump matrix with explicit (x, t)-dependence involving six scalar functions of k, called the spectral functions. Two of these functions depend on the initial data, whereas the other four depend on all boundary values. The most difficult step of the new method is the characterization of the latter four spectral functions in terms of the given initial and boundary data, i.e. the elimination of the unknown boundary values. Here, we present an effective characterization of the spectral functions in terms of the given initial and boundary data. We present two different characterizations of this problem. One is based on the analysis of the so-called global relation, on the analysis of the equations obtained from the global relation via certain transformations leaving the dispersion relation of the associated linearized PDE invariant and on the computation of the large k asymptotics of the eigenfunctions defining the relevant spectral functions. The other is based on the analysis of the global relation and on the introduction of the so-called Gelfand–Levitan–Marchenko representations of the eigenfunctions defining the relevant spectral functions. We also show that these two different characterizations are equivalent and that in the limit when the length of the interval tends to infinity, the relevant formulas reduce to the analogous formulas obtained recently for the case of boundary value problems formulated on the half-line. (paper)
POSTMORTAL CHANGES AND ASSESSMENT OF POSTMORTEM INTERVAL
Directory of Open Access Journals (Sweden)
Edin Šatrović
2013-01-01
This paper describes in a simple way the changes that occur in the body after death. They develop in a specific order, and the speed of their development and their expression are strongly influenced by various endogenous and exogenous factors. The aim of the authors is to indicate the characteristics of the postmortem changes and their significance in establishing time since death, which can be established precisely within 72 hours. Accurate evaluation of the age of the corpse based on the common changes is not possible with longer postmortem intervals, so the entomological findings become the most significant change on the corpse for determination of the postmortem interval (PMI).
A sequent calculus for signed interval logic
DEFF Research Database (Denmark)
Rasmussen, Thomas Marthedal
2001-01-01
We propose and discuss a complete sequent calculus formulation for Signed Interval Logic (SIL) with the chief purpose of improving proof support for SIL in practice. The main theoretical result is a simple characterization of the limit between decidability and undecidability of quantifier-free SIL. … We present a mechanization of SIL in the generic proof assistant Isabelle and consider techniques for automated reasoning. Many of the results and ideas of this report are also applicable to traditional (non-signed) interval logic and, hence, to Duration Calculus.
Interval Continuous Plant Identification from Value Sets
Directory of Open Access Journals (Sweden)
R. Hernández
2012-01-01
This paper shows how to obtain the values of the numerator and denominator Kharitonov polynomials of an interval plant from its value set at a given frequency. Moreover, it is proven that given a value set, all the assigned polynomials of the vertices can be determined if and only if there is a complete edge or a complete arc lying on a quadrant. This algorithm is nonconservative in the sense that if the value-set boundary of an interval plant is exactly known, and particularly its vertices, then the Kharitonov rectangles are exactly those used to obtain these value sets.
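As background to the result, the four Kharitonov vertex polynomials of an interval polynomial a_0 + a_1 s + ... + a_n s^n follow a fixed lo/hi pattern of period four in the coefficients; a minimal sketch of their construction (function name ours, not from the paper):

```python
def kharitonov_polynomials(coeff_intervals):
    """The four Kharitonov vertex polynomials of an interval polynomial
    a_0 + a_1 s + ... + a_n s^n, where coeff_intervals[i] = (lo_i, hi_i)
    bounds a_i. Each result is a coefficient list in ascending powers
    of s. The lo/hi selection patterns repeat with period four."""
    patterns = [            # 0 selects lo_i, 1 selects hi_i
        (0, 0, 1, 1),       # K1: lo, lo, hi, hi, lo, lo, ...
        (1, 1, 0, 0),       # K2: hi, hi, lo, lo, ...
        (0, 1, 1, 0),       # K3: lo, hi, hi, lo, ...
        (1, 0, 0, 1),       # K4: hi, lo, lo, hi, ...
    ]
    return [[c[p[i % 4]] for i, c in enumerate(coeff_intervals)]
            for p in patterns]
```

By Kharitonov's theorem, robust stability of the whole interval family reduces to stability of these four vertex polynomials, which is why recovering the Kharitonov rectangles from value sets, as the paper does, is useful.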
Hybrid finite difference/finite element immersed boundary method.
E Griffith, Boyce; Luo, Xiaoyu
2017-12-01
The immersed boundary method is an approach to fluid-structure interaction that uses a Lagrangian description of the structural deformations, stresses, and forces along with an Eulerian description of the momentum, viscosity, and incompressibility of the fluid-structure system. The original immersed boundary methods described immersed elastic structures using systems of flexible fibers, and even now, most immersed boundary methods still require Lagrangian meshes that are finer than the Eulerian grid. This work introduces a coupling scheme for the immersed boundary method to link the Lagrangian and Eulerian variables that facilitates independent spatial discretizations for the structure and background grid. This approach uses a finite element discretization of the structure while retaining a finite difference scheme for the Eulerian variables. We apply this method to benchmark problems involving elastic, rigid, and actively contracting structures, including an idealized model of the left ventricle of the heart. Our tests include cases in which, for a fixed Eulerian grid spacing, coarser Lagrangian structural meshes yield discretization errors that are as much as several orders of magnitude smaller than errors obtained using finer structural meshes. The Lagrangian-Eulerian coupling approach developed in this work enables the effective use of these coarse structural meshes with the immersed boundary method. This work also contrasts two different weak forms of the equations, one of which is demonstrated to be more effective for the coarse structural discretizations facilitated by our coupling approach. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
On interval and cyclic interval edge colorings of (3,5)-biregular graphs
DEFF Research Database (Denmark)
Casselgren, Carl Johan; Petrosyan, Petros; Toft, Bjarne
2017-01-01
A proper edge coloring f of a graph G with colors 1,2,3,…,t is called an interval coloring if the colors on the edges incident to every vertex of G form an interval of integers. The coloring f is cyclic interval if for every vertex v of G, the colors on the edges incident to v either form an inte...
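The interval property itself is easy to check computationally; a minimal sketch (ours, not from the paper) that tests whether a given proper edge coloring is an interval coloring:

```python
def is_interval_coloring(edges, colors):
    """Check the interval property: for every vertex, the colors on its
    incident edges form a set of consecutive integers.
    edges: list of (u, v) pairs; colors: parallel list of integers."""
    incident = {}
    for (u, v), c in zip(edges, colors):
        incident.setdefault(u, set()).add(c)
        incident.setdefault(v, set()).add(c)
    # A set of distinct integers is consecutive iff its span equals its size.
    return all(max(s) - min(s) + 1 == len(s) for s in incident.values())
```

On a path a-b-c, coloring the edges 1 and 2 is interval (vertex b sees {1, 2}), while coloring them 1 and 3 is not (b sees {1, 3}, which has a gap).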
Synchronizing data from irregularly sampled sensors
Uluyol, Onder
2017-07-11
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
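A hedged sketch of the re-sampling step described in the claim, using plain linear interpolation onto a uniform grid (the patent's actual method may differ; the function name and grid parameters are ours):

```python
def resample(times, values, t0, t1, step):
    """Linearly interpolate an irregularly sampled series onto a
    uniform grid from t0 to t1 with spacing `step`. `times` must be
    sorted and must cover [t0, t1]."""
    assert times[0] <= t0 and t1 <= times[-1]
    out, j = [], 0
    t = t0
    while t <= t1 + 1e-12:
        # Advance to the segment [times[j], times[j+1]] containing t.
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        ta, tb = times[j], times[j + 1]
        va, vb = values[j], values[j + 1]
        w = (t - ta) / (tb - ta)
        out.append(va + w * (vb - va))
        t += step
    return out
```

Resampling every sensor onto the same uniform grid (at a rate higher than any sensor's native rate, as the claim specifies) is what makes the streams directly comparable sample-by-sample.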
International Nuclear Information System (INIS)
Bogachev, Mikhail I; Bunde, Armin; Kireenkov, Igor S; Nifontov, Eugene M
2009-01-01
We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function P_Q(r) of the return intervals. As a consequence, the probability W_Q(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
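The basic quantity studied, the return intervals between exceedances of the threshold Q, can be extracted from a record in a few lines; a minimal sketch (ours, not the authors' code):

```python
def return_intervals(series, q):
    """Return intervals (in number of samples) between successive
    values of `series` that exceed the threshold q."""
    hits = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]
```

The empirical histogram of these intervals estimates P_Q(r); for records with long-term memory the paper finds it follows a power law rather than the exponential expected for uncorrelated data.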
Circadian profile of QT interval and QT interval variability in 172 healthy volunteers
DEFF Research Database (Denmark)
Bonnemeier, Hendrik; Wiegand, Uwe K H; Braasch, Wiebke
2003-01-01
of sleep. QT and R-R intervals revealed a characteristic day-night-pattern. Diurnal profiles of QT interval variability exhibited a significant increase in the morning hours (6-9 AM; P ... lower at day- and nighttime. Aging was associated with an increase of QT interval mainly at daytime and a significant shift of the T wave apex towards the end of the T wave. The circadian profile of ventricular repolarization is strongly related to the mean R-R interval, however, there are significant...
Characterization of resonances using finite size effects
International Nuclear Information System (INIS)
Pozsgay, B.; Takacs, G.
2006-01-01
We develop methods to extract resonance widths from finite volume spectra of (1+1)-dimensional quantum field theories. Our two methods are based on Lüscher's description of finite size corrections, and are dubbed the Breit-Wigner and the improved "mini-Hamiltonian" method, respectively. We establish a consistent framework for the finite volume description of sufficiently narrow resonances that takes into account the finite size corrections and mass shifts properly. Using predictions from form factor perturbation theory, we test the two methods against finite size data from the truncated conformal space approach, and find excellent agreement which confirms both the theoretical framework and the numerical validity of the methods. Although our investigation is carried out in 1+1 dimensions, the extension to physical 3+1 space-time dimensions appears straightforward, given sufficiently accurate finite volume spectra.
Finite size scaling and lattice gauge theory
International Nuclear Information System (INIS)
Berg, B.A.
1986-01-01
Finite size (Fisher) scaling is investigated for four dimensional SU(2) and SU(3) lattice gauge theories without quarks. It allows one to disentangle violations of (asymptotic) scaling and finite volume corrections. Mass spectrum, string tension, deconfinement temperature and lattice β-function are considered. For appropriate volumes, Monte Carlo investigations seem to be able to control the finite volume continuum limit. Contact is made with Lüscher's small volume expansion and possibly also with the asymptotic large volume behavior. 41 refs., 19 figs
Finite element application to global reactor analysis
International Nuclear Information System (INIS)
Schmidt, F.A.R.
1981-01-01
The Finite Element Method is described as a Coarse Mesh Method with general basis and trial functions. Various consequences concerning programming and application of Finite Element Methods in reactor physics are drawn. One of the conclusions is that the Finite Element Method is a valuable tool in solving global reactor analysis problems. However, problems which can be described by rectangular boxes still can be solved with special coarse mesh programs more efficiently. (orig.)
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Diagnostic interval and mortality in colorectal cancer
DEFF Research Database (Denmark)
Tørring, Marie Louise; Frydenberg, Morten; Hamilton, William
2012-01-01
Objective To test the theory of a U-shaped association between time from the first presentation of symptoms in primary care to the diagnosis (the diagnostic interval) and mortality after diagnosis of colorectal cancer (CRC). Study Design and Setting Three population-based studies in Denmark...
Safety information on QT-interval prolongation
DEFF Research Database (Denmark)
Warnier, Miriam J; Holtkamp, Frank A; Rutten, Frans H
2014-01-01
Prolongation of the QT interval can predispose to fatal ventricular arrhythmias. Differences in QT-labeling language can result in miscommunication and suboptimal risk mitigation. We systematically compared the phraseology used to communicate on QT-prolonging properties of 144 drugs newly approve...
Interval scanning photomicrography of microbial cell populations.
Casida, L. E., Jr.
1972-01-01
A single reproducible area of the preparation in a fixed focal plane is photographically scanned at intervals during incubation. The procedure can be used for evaluating the aerobic or anaerobic growth of many microbial cells simultaneously within a population. In addition, the microscope is not restricted to the viewing of any one microculture preparation, since the slide cultures are incubated separately from the microscope.
Population based reference intervals for common blood ...
African Journals Online (AJOL)
Population based reference intervals for common blood haematological and biochemical parameters in the Akuapem north district. K.A Koram, M.M Addae, J.C Ocran, S Adu-amankwah, W.O Rogers, F.K Nkrumah ...
Changing reference intervals for haemoglobin in Denmark
DEFF Research Database (Denmark)
Ryberg-Nørholt, Judith; Frederiksen, Henrik; Nybo, Mads
2017-01-01
INTRODUCTION: Based on international experience and a changing demography, the reference intervals (RI) for haemoglobin (Hb) concentrations in blood were changed in Denmark in 2013 from 113 - 161 g/L to 117 - 153 g/L for women and from 129 - 177 g/L to 134 - 170 g/L for men. The aim of this study w...
The Total Interval of a Graph.
1988-01-01
about them in a mathematical context. A thorough treatment of multiple interval representations, including applications, is given by Roberts [21]. (The remainder of the scanned abstract, which references Figures 11.2.18 and 11.2.19 and closes a proof, is illegible.)
Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions
Padilla, Miguel A.; Divers, Jasmin
2013-01-01
The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items...
Quinsy tonsillectomy or interval tonsillectomy - a prospective ...
African Journals Online (AJOL)
Fifty-one patients with peritonsillar abscesses were randomised to undergo either quinsy tonsillectomy (QT) or interval tonsillectomy (IT), and the two groups were compared. The QT group lost fewer working days (10,3 v. 17,9) and less blood during the operation (158,6 ml v. 205,7 ml); haemostasis was easier and the ...
Linear chord diagrams on two intervals
DEFF Research Database (Denmark)
Andersen, Jørgen Ellegaard; Penner, Robert; Reidys, Christian
generating function ${\bf C}_g(z)=z^{2g}R_g(z)/(1-4z)^{3g-{1\over 2}}$ for chords attached to a single interval is algebraic, for $g\geq 1$, where the polynomial $R_g(z)$ with degree at most $g-1$ has integer coefficients and satisfies $R_g(1/4)$ ...
Learned Interval Time Facilitates Associate Memory Retrieval
van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter
2017-01-01
The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue-time-target associations between cue-target pairs and specific cue-target intervals. During subsequent memory testing, participants showed increased accuracy of identifying…
Interval Appendicectomy and Management of Appendix Mass ...
African Journals Online (AJOL)
A wholly conservative management without interval appendicectomy was instituted for 13 patients diagnosed as having appendix mass between 1998 and 2002 in the University of Benin Teaching Hospital, Benin City, Nigeria. Within three days of admission, one patient developed clinical features of ruptured appendix and ...
International Nuclear Information System (INIS)
Boehme, R.C.; Nicholas, B.L.
1987-01-01
This invention relates to a method of and apparatus for ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine, weighing each sample so that each is of the same weight, measuring a characteristic such as the radioactivity, magnetivity or the like of each sample, subjecting at least an equal portion of each sample to chemical analysis to determine the mineral content of the sample, and comparing the characteristic measurement with the desired mineral content of the chemically analysed portion of the sample to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector, a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector, and means for moving the deflector from a first position, in which it is clear of the particle path from the sorter, to a second position, in which it is in the particle path, at predetermined time intervals and for predetermined time periods to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles, a sample hopper, means for weighing the hopper, a detector in the hopper for measuring a characteristic such as radioactivity, magnetivity or the like of particles in the hopper, a discharge outlet from the hopper, and means for feeding the particles from the collector to the crusher and then to the hopper
A first course in finite elements
Fish, Jacob
2007-01-01
Developed from the authors' combined total of 50 years of undergraduate and graduate teaching experience, this book presents the finite element method formulated as a general-purpose numerical procedure for solving engineering problems governed by partial differential equations. Focusing on the formulation and application of the finite element method through the integration of finite element theory, code development, and software application, the book is both introductory and self-contained, as well as being a hands-on experience for any student. This authoritative text on Finite Elements: Adopts ...
Features of finite quantum field theories
International Nuclear Information System (INIS)
Boehm, M.; Denner, A.
1987-01-01
We analyse general features of finite quantum field theories. A quantum field theory is considered to be finite if the corresponding renormalization constants, evaluated in the dimensional regularization scheme, are free from divergences in all orders of perturbation theory. We conclude that every finite renormalizable quantum field theory with fields of spin one or less must contain scalar fields, fermion fields and nonabelian gauge fields. Some specific nonsupersymmetric models are found to be finite at the one- and two-loop level. (orig.)
Burnside structures of finite subgroups
International Nuclear Information System (INIS)
Lysenok, I G
2007-01-01
We establish conditions guaranteeing that a group B possesses the following property: there is a number l such that if elements $w,\ x^{-1}wx,\ \dots,\ x^{-l+1}wx^{l-1}$ of B generate a finite subgroup G, then x lies in the normalizer of G. These conditions are of a quite special form. They hold for groups with relations of the form $x^n=1$ which appear as approximating groups for the free Burnside groups B(m,n) of sufficiently large even exponent n. We extract an algebraic assertion which plays an important role in all known approaches to substantial results on the groups B(m,n) of large even exponent, in particular, to proving their infiniteness. The main theorem asserts that when n is divisible by 16, B has the above property with l=6
Learning Extended Finite State Machines
Cassel, Sofia; Howar, Falk; Jonsson, Bengt; Steffen, Bernhard
2014-01-01
We present an active learning algorithm for inferring extended finite state machines (EFSM)s, combining data flow and control behavior. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses the tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query in general must have to be usable in our learning model. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.
Phase transition in finite systems
International Nuclear Information System (INIS)
Chomaz, Ph.; Duflot, V.; Duflot, V.; Gulminelli, F.
2000-01-01
In this paper we present a review of selected aspects of phase transitions in finite systems, applied in particular to the liquid-gas phase transition in nuclei. We show that the problem of the non-existence of boundary conditions can be solved by introducing a statistical ensemble with an averaged constrained volume. In such an ensemble the microcanonical heat capacity becomes negative in the transition region. We show that the caloric curve explicitly depends on the considered transformation of the volume with the excitation energy and therefore does not bear direct information on the characteristics of the phase transition. Conversely, partial energy fluctuations are demonstrated to be a direct measure of the equation of state. Since the heat capacity has a negative branch in the phase transition region, the presence of abnormally large kinetic energy fluctuations is a signal of the liquid-gas phase transition. (author)
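The fluctuation signal invoked at the end of the abstract can be stated compactly. The relation below is the one commonly used in this literature (reconstructed here for context, not quoted from the paper): with total energy shared as $E=E_1+E_2$ (e.g., kinetic plus potential), microcanonical temperature $T$, and partial heat capacity $c_1=\partial\langle E_1\rangle/\partial T$, the total heat capacity is estimated from the partial energy fluctuations $\sigma_1^2$ as

```latex
C \;\simeq\; \frac{c_1^{2}}{\,c_1-\sigma_1^{2}/T^{2}\,},
```

so that abnormally large fluctuations, $\sigma_1^2 > c_1 T^2$, correspond to a negative total heat capacity $C$ and hence signal the phase transition.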
A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.
Kottas, Martina; Kuss, Oliver; Zapf, Antonia
2014-02-19
The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability, so we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a comparable coverage probability to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
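The abstract does not reproduce the exact form of the proposed modification, so the sketch below implements only the textbook Wald interval for a proportion, applied to an AUC estimate and total sample size, with an optional 1/(2n) continuity correction; the function name `wald_interval` and the numbers are illustrative, and the paper's specific modification may differ.

```python
import math

def wald_interval(p_hat, n, z=1.96, continuity=False):
    """Wald-type confidence interval for a proportion (here an AUC
    estimate treated as a proportion, as the abstract suggests).
    This is the textbook Wald form, optionally with a 1/(2n)
    continuity correction; it is NOT the paper's exact modification."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    cc = 0.5 / n if continuity else 0.0
    lower = max(0.0, p_hat - z * se - cc)
    upper = min(1.0, p_hat + z * se + cc)
    return lower, upper

# Example: estimated AUC of 0.85 from a study with 60 subjects in total.
lo, hi = wald_interval(0.85, 60)
```

The whole computation uses only a square root, which is what makes the interval feasible "on a pocket calculator".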
Finite-element solidification modelling of metals and binary alloys
International Nuclear Information System (INIS)
Mathew, P.M.
1986-12-01
In the Canadian Nuclear Fuel Waste Management Program, cast metals and alloys are being evaluated for their ability to support a metallic fuel waste container shell under disposal vault conditions and to determine their performance as an additional barrier to radionuclide release. These materials would be cast to fill residual free space inside the container and allowed to solidify without major voids. To model their solidification characteristics following casting, a finite-element model, FAXMOD-3, was adopted. Input parameters were modified to account for the latent heat of fusion of the metals and alloys considered. This report describes the development of the solidification model and its theoretical verification. To model the solidification of pure metals and alloys that melt at a distinct temperature, the latent heat of fusion was incorporated as a double-ramp function in the specific heat-temperature relationship, within an interval of ±1 K around the solidification temperature. Comparison of calculated results for lead, tin and lead-tin eutectic melts, unidirectionally cooled with and without superheat, showed good agreement with an alternative technique called the integral profile method. To model the solidification of alloys that melt over a temperature interval, the fraction of solid in the solid-liquid region, as calculated from the Scheil equation, was used to determine the fraction of latent heat to be liberated over a temperature interval within the solid-liquid zone. Comparison of calculated results for unidirectionally cooled aluminum-4 wt.% copper melt, with and without superheat, showed good agreement with alternative finite-difference techniques
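The double-ramp idea can be sketched directly: the latent heat is released over a ±1 K window by augmenting the specific heat with a triangular pulse whose area equals the heat of fusion. The function below is a minimal illustration of that construction (the name `effective_specific_heat` and all material values in the usage line are assumptions, loosely based on tin, not data from the report).

```python
def effective_specific_heat(T, cp, T_m, latent_heat, half_width=1.0):
    """Specific heat augmented by a 'double-ramp' release of latent
    heat over [T_m - half_width, T_m + half_width].  The two ramps
    rise linearly to a peak at T_m and the triangle integrates to the
    full latent heat, so the enthalpy of fusion is conserved."""
    if abs(T - T_m) >= half_width:
        return cp
    # Triangular pulse with area = latent_heat: peak = latent_heat / half_width.
    peak = latent_heat / half_width
    return cp + peak * (1.0 - abs(T - T_m) / half_width)

# Illustrative values only (cp in J/(kg K), latent heat in J/kg, T in K):
cp_eff_peak = effective_specific_heat(505.0, 230.0, 505.0, 59000.0)
```

Because the enthalpy integral is preserved, a heat-conduction solver can use this effective specific heat without any explicit front tracking.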
DEFF Research Database (Denmark)
Pinson, Pierre; Tastu, Julija
2014-01-01
A new score for the evaluation of interval forecasts, the so-called coverage width-based criterion (CWC), was proposed and utilized. This score has been used for the tuning (in-sample) and genuine evaluation (out-of-sample) of prediction intervals for various applications, e.g., electric load [1], electricity prices [2], general purpose prediction [3], and wind power generation [4], [5]. Indeed, two papers by the same authors appearing in the IEEE Transactions on Sustainable Energy employ that score and use it to conclude on the comparative quality of alternative approaches to interval forecasting...
Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan
2017-05-20
Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
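For context, the first-order Edgeworth expansion for a standardized sample mean (a standard result, not a formula taken from this paper) reads

```latex
P\!\left(\sqrt{n}\,\frac{\bar{X}_n-\mu}{\sigma}\le x\right)
  = \Phi(x) \;-\; \phi(x)\,\frac{\gamma\,(x^{2}-1)}{6\sqrt{n}} \;+\; O(n^{-1}),
```

where $\Phi$ and $\phi$ are the standard normal cdf and density and $\gamma=\mathrm{E}[(X-\mu)^3]/\sigma^3$ is the skewness. The $n^{-1/2}$ correction grows in relative importance far out in the tails, which is exactly why error rates computed from the normal limit can be badly off for the very small tail probabilities that multiple testing requires.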
Finite rotation shells basic equations and finite elements for Reissner kinematics
Wisniewski, K
2010-01-01
This book covers theoretical and computational aspects of non-linear shells. Several advanced topics of shell equations and finite elements - not included in standard textbooks on finite elements - are addressed, and the book includes an extensive bibliography.
Energy Technology Data Exchange (ETDEWEB)
Kim, S. [Purdue Univ., West Lafayette, IN (United States)
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
A suitable low-order, eight-node tetrahedral finite element for solids
Energy Technology Data Exchange (ETDEWEB)
Key, S.W.; Heinstein, M.S.; Stone, C.M.; Mello, F.J.; Blanford, M.L.; Budge, K.G.
1998-03-01
To use the all-tetrahedral mesh generation existing today, the authors have explored the creation of a computationally efficient eight-node tetrahedral finite element (a four-node tetrahedral finite element enriched with four mid-face nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular they examine the eight-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element samples only constant strain states and, therefore, has 12 hour-glass modes. In this regard it bears similarities to the eight-node, mean-quadrature hexahedral finite element. Comparisons with the results obtained from the mean-quadrature eight-node hexahedral finite element and the four-node tetrahedral finite element are included. Given automatic all-tetrahedral meshing, the eight-node, constant-strain tetrahedral finite element is a suitable replacement for the eight-node hexahedral finite element in those cases where mesh generation requires an inordinate amount of user intervention and direction to obtain acceptable mesh properties.
International Nuclear Information System (INIS)
Citanovic, M.; Bezlaj, H.
1994-01-01
This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is described too. 7 pictures
On a linear method in bootstrap confidence intervals
Directory of Open Access Journals (Sweden)
Andrea Pallini
2007-10-01
Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order $O_p(n^{-3/2})$ and $O_p(n^{-2})$, respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.
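The linear method replaces resampling with an analytic approximation of the bootstrap cumulants; the plain Monte Carlo percentile bootstrap that it approximates can be sketched as follows (the function name `percentile_bootstrap_ci` and the data are illustrative, not from the paper).

```python
import random
import statistics

def percentile_bootstrap_ci(sample, stat, level=0.95, n_boot=2000, seed=42):
    """Plain Monte Carlo percentile bootstrap CI for a smooth statistic
    of an i.i.d. sample: resample with replacement, recompute the
    statistic, and read off the empirical quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    stats = sorted(stat([rng.choice(sample) for _ in range(n)])
                   for _ in range(n_boot))
    alpha = (1.0 - level) / 2.0
    lo = stats[int(alpha * n_boot)]
    hi = stats[int((1.0 - alpha) * n_boot) - 1]
    return lo, hi

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4, 2.6, 2.3]
lo, hi = percentile_bootstrap_ci(data, statistics.mean)
```

The analytic approach of the abstract avoids the `n_boot` resampling passes entirely, which is its practical appeal.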
Interval Mathematics Applied to Critical Point Transitions
Directory of Open Access Journals (Sweden)
Benito A. Stradi
2012-03-01
Full Text Available The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear and admit multiple solutions to the critical point equations. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. This study uses an interval Newton/Generalized Bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The technique is illustrated using several example problems. These problems involve cubic equation of state models; however, the technique is general purpose and can be applied in connection with other nonlinear problems.
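The enclose-discard-bisect logic behind interval Newton/generalized bisection can be illustrated in one dimension (the study treats systems of critical-point equations with equation-of-state models; the scalar sketch below, with an invented function name and a toy equation, only shows the mechanism).

```python
def interval_newton_roots(f, df_interval, lo, hi, tol=1e-10, ftol=1e-8):
    """1-D interval Newton / generalized bisection sketch.
    df_interval(a, b) must return an enclosure (dlo, dhi) of f' over
    [a, b].  Returns narrow intervals enclosing every root in [lo, hi]."""
    roots, work = [], [(lo, hi)]
    while work:
        a, b = work.pop()
        m = 0.5 * (a + b)
        fm = f(m)
        if b - a < tol:
            if abs(fm) < ftol:          # keep only genuine enclosures
                roots.append((a, b))
            continue
        dlo, dhi = df_interval(a, b)
        if dlo <= 0.0 <= dhi:           # derivative enclosure straddles 0:
            work += [(a, m), (m, b)]    # fall back to bisection
            continue
        n1, n2 = m - fm / dlo, m - fm / dhi
        a2 = max(a, min(n1, n2))        # intersect Newton image with box
        b2 = min(b, max(n1, n2))
        if a2 > b2:
            continue                    # empty intersection: no root here
        if (a2, b2) == (a, b):
            work += [(a, m), (m, b)]    # no contraction: bisect
        else:
            work.append((a2, b2))
    return roots

# Both roots of x^2 - 2 = 0 in [-3, 3]; f'(x) = 2x is monotone, so
# (2a, 2b) is a valid derivative enclosure on [a, b].
roots = interval_newton_roots(lambda x: x * x - 2.0,
                              lambda a, b: (2.0 * a, 2.0 * b),
                              -3.0, 3.0)
```

The discard step is what yields the guarantee: a box whose Newton image misses it provably contains no root, so nothing is ever lost by pruning.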
Constraint-based Attribute and Interval Planning
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we de ne compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
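The extrapolation step can be sketched minimally: assuming, for illustration, that the leading finite-size error of an estimator decays like 1/N, a least-squares fit in 1/N recovers the infinite-size intercept. The function name and the synthetic data below are assumptions; the actual scaling forms used in the paper may differ.

```python
def extrapolate_infinite_size(sizes, estimates):
    """Least-squares fit of estimates phi(N) to phi_inf + c / N,
    returning the infinite-population-size intercept phi_inf."""
    xs = [1.0 / n for n in sizes]
    ys = list(estimates)
    k = len(xs)
    x_bar = sum(xs) / k
    y_bar = sum(ys) / k
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return y_bar - slope * x_bar    # intercept at 1/N = 0

# Synthetic estimates obeying phi(N) = 0.25 + 3.0 / N exactly:
phi_inf = extrapolate_infinite_size([100, 200, 400, 800],
                                    [0.28, 0.265, 0.2575, 0.25375])
```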
Understanding Confidence Intervals With Visual Representations
Navruz, Bilgin; Delen, Erhan
2014-01-01
In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...
Interpretability degrees of finitely axiomatized sequential theories
Visser, Albert
In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed
Finite Topological Spaces as a Pedagogical Tool
Helmstutler, Randall D.; Higginbottom, Ryan S.
2012-01-01
We propose the use of finite topological spaces as examples in a point-set topology class especially suited to help students transition into abstract mathematics. We describe how carefully chosen examples involving finite spaces may be used to reinforce concepts, highlight pathologies, and develop students' non-Euclidean intuition. We end with a…
Lectures on zeta functions over finite fields
Wan, Daqing
2007-01-01
These are the notes from the summer school in Göttingen sponsored by NATO Advanced Study Institute on Higher-Dimensional Geometry over Finite Fields that took place in 2007. The aim was to give a short introduction on zeta functions over finite fields, focusing on moment zeta functions and zeta functions of affine toric hypersurfaces.
Non-linear finite element modeling
DEFF Research Database (Denmark)
Mikkelsen, Lars Pilgaard
The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...
Finite p′-nilpotent groups. II
Directory of Open Access Journals (Sweden)
S. Srinivasan
1987-01-01
Full Text Available In this paper we continue the study of finite p′-nilpotent groups that was started in the first part of this paper. Here we give a complete characterization of all finite groups that are not p′-nilpotent but all of whose proper subgroups are p′-nilpotent.
Nonlinear finite element modeling of corrugated board
A. C. Gilchrist; J. C. Suhling; T. J. Urbanik
1999-01-01
In this research, an investigation on the mechanical behavior of corrugated board has been performed using finite element analysis. Numerical finite element models for corrugated board geometries have been created and executed. Both geometric (large deformation) and material nonlinearities were included in the models. The analyses were performed using the commercial...
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
∗-supplemented subgroups of finite groups
Indian Academy of Sciences (India)
A subgroup H of a group G is said to be M∗-supplemented in G if ... normal subgroups and determined the structure of finite groups by using some ...
Properties of the distributional finite Fourier transform
Carmichael, Richard D.
2016-01-01
The analytic functions in tubes which obtain the distributional finite Fourier transform as boundary value are shown to have a strong boundedness property and to be recoverable as a Fourier-Laplace transform, a distributional finite Fourier transform, and as a Cauchy integral of a distribution associated with the boundary value.
Dynamic pricing and learning with finite inventories
den Boer, A.V.; Zwart, Bert
2013-01-01
We study a dynamic pricing problem with finite inventory and parametric uncertainty on the demand distribution. Products are sold during selling seasons of finite length, and inventory that is unsold at the end of a selling season, perishes. The goal of the seller is to determine a pricing strategy
A Finite Model Property for Intersection Types
Directory of Open Access Journals (Sweden)
Rick Statman
2015-03-01
Full Text Available We show that the relational theory of intersection types known as BCD has the finite model property; that is, BCD is complete for its finite models. Our proof uses rewriting techniques which have as an immediate by-product the polynomial time decidability of the preorder <= (although this also follows from the so-called beta soundness of BCD).
Why do probabilistic finite element analysis?
Thacker, Ben H
2008-01-01
The intention of this book is to provide an introduction to performing probabilistic finite element analysis. As a short guideline, the objective is to inform the reader of the use, benefits and issues associated with performing probabilistic finite element analysis without excessive theory or mathematical detail.
Finite-Element Software for Conceptual Design
DEFF Research Database (Denmark)
Lindemann, J.; Sandberg, G.; Damkilde, Lars
2010-01-01
and research. Forcepad is an effort to provide a conceptual design and teaching tool in a finite-element software package. Forcepad is a two-dimensional finite-element application based on the same conceptual model as image editing applications such as Adobe Photoshop or Microsoft Paint. Instead of using...
The finite-dimensional Freeman thesis.
Rudolph, Lee
2008-06-01
I suggest a modification--and mathematization--of Freeman's thesis on the relations among "perception", "the finite brain", and "the world", based on my recent proposal that the theory of finite topological spaces is both an adequate and a natural mathematical foundation for human psychology.
Factoring polynomials over arbitrary finite fields
Lange, T.; Winterhof, A.
2000-01-01
We analyse an extension of Shoup's (Inform. Process. Lett. 33 (1990) 261–267) deterministic algorithm for factoring polynomials over finite prime fields to arbitrary finite fields. In particular, we prove the existence of a deterministic algorithm which completely factors all monic polynomials of
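The deterministic algorithm itself is beyond a short sketch, but the underlying task — splitting a monic polynomial over a finite prime field — can be illustrated in a few lines. The following toy example uses brute-force root search, not Shoup's method; all names are ours:

```python
# Brute-force factorization of a monic quadratic over GF(p) by root search.
# A toy illustration only -- not the deterministic algorithm of Shoup or
# its extension to arbitrary finite fields discussed above.

def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (highest-degree coefficient first) at x mod p."""
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % p
    return acc

def factor_monic_quadratic(b, c, p):
    """Factor x^2 + b*x + c over GF(p) into linear factors, if possible."""
    roots = [x for x in range(p) if poly_eval([1, b, c], x, p) == 0]
    if not roots:
        return None  # irreducible over GF(p)
    r = roots[0]
    s = (-b - r) % p  # the other root, since r + s = -b (mod p)
    return (r, s)     # x^2 + b x + c = (x - r)(x - s) mod p

# x^2 + 1 over GF(5): roots are 2 and 3, since 2^2 = 4 = -1 (mod 5)
print(factor_monic_quadratic(0, 1, 5))  # -> (2, 3)
```

Over GF(3) the same polynomial has no roots, so the function reports it irreducible.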
Early diastolic time intervals during hypertensive pregnancy.
Spinelli, L; Ferro, G; Nappi, C; Farace, M J; Talarico, G; Cinquegrana, G; Condorelli, M
1987-10-01
Early diastolic time intervals have been assessed by means of the echopolycardiographic method in 17 pregnant women who developed hypertension during pregnancy (HP) and in 14 normal pregnant women (N). Systolic time intervals (STI), stroke volume (SV), ejection fraction (EF), and mean velocity of myocardial fiber shortening (VCF) were also evaluated. Recordings were performed in the left lateral decubitus (LLD) and then in the supine decubitus (SD). In LLD, the isovolumic relaxation period (IRP) was prolonged in the hypertensive pregnant women compared with normal pregnant women (HP 51 +/- 12.5 ms, N 32.4 +/- 15 ms, p less than 0.05), whereas the time of mitral valve maximum opening (DE) did not differ between the groups. There was no difference in SV, EF, and mean VCF, whereas STI showed only a significant (p less than 0.05) lengthening of the pre-ejection period (PEP) in HP. When the subjects shifted from the left lateral to the supine decubitus position, left ventricular ejection time index (LVETi) and SV decreased significantly (p less than 0.05) in both normotensive and hypertensive pregnant women. IRP and PEP lengthened significantly (p less than 0.05) only in normals, whereas they were unchanged in HP. DE time did not vary in either group. In conclusion, hypertension superimposed on pregnancy induces lengthening of IRP, as well as of PEP, and minimizes the effects of postural changes in preload on the above-mentioned time intervals.
QT interval prolongation associated with sibutramine treatment
Harrison-Woolrych, Mira; Clark, David W J; Hill, Geraldine R; Rees, Mark I; Skinner, Jonathan R
2006-01-01
Aims: To investigate a possible association of sibutramine with QT interval prolongation. Methods: Post-marketing surveillance using prescription event monitoring in the New Zealand Intensive Medicines Monitoring Programme (IMMP) identified a case of QT prolongation and associated cardiac arrest in a patient taking sibutramine for 25 days. This patient was further investigated, including genotyping for long QT syndrome. Other IMMP case reports suggesting arrhythmias associated with sibutramine were assessed and further reports were obtained from the World Health Organisation (WHO) adverse drug reactions database. Results: The index case displayed a novel mutation in a cardiac potassium channel subunit gene, KCNQ1, which is likely to prolong cardiac membrane depolarization and increase susceptibility to long QT intervals. Assessment of further IMMP reports identified five additional patients who experienced palpitations associated with syncope or presyncopal symptoms, one of whom had a QTc at the upper limit of normal. Assessment of reports from the WHO database identified three reports of QT prolongation and one fatal case of torsade de pointes in a patient also taking cisapride. Conclusions: This case series suggests that sibutramine may be associated with QT prolongation and related dysrhythmias. Further studies are required, but in the meantime we would recommend that sibutramine should be avoided in patients with long QT syndrome and in patients taking other medicines that may prolong the QT interval. PMID:16542208
Stochastic delocalization of finite populations
International Nuclear Information System (INIS)
Geyrhofer, Lukas; Hallatschek, Oskar
2013-01-01
The localization of populations of replicating bacteria, viruses or autocatalytic chemicals arises in various contexts, such as ecology, evolution, medicine or chemistry. Several deterministic mathematical models have been used to characterize the conditions under which localized states can form, and how they break down due to convective driving forces. It has been repeatedly found that populations remain localized unless the bias exceeds a critical threshold value, and that close to the transition the population is characterized by a diverging length scale. These results, however, have been obtained upon ignoring number fluctuations (‘genetic drift’), which are inevitable given the discreteness of the replicating entities. Here, we study the localization/delocalization of a finite population in the presence of genetic drift. The population is modeled by a linear chain of subpopulations, or demes, which exchange migrants at a constant rate. Individuals in one particular deme, called ‘oasis’, receive a growth rate benefit, and the total population is regulated to have constant size N. In this ecological setting, we find that any finite population delocalizes on sufficiently long time scales. Depending on parameters, however, populations may remain localized for a very long time. The typical waiting time to delocalization increases exponentially with both population size and distance to the critical wind speed of the deterministic approximation. We augment these simulation results by a mathematical analysis that treats the reproduction and migration of individuals as branching random walks subject to global constraints. For a particular constraint, different from a fixed population size constraint, this model yields a solvable first moment equation. We find that this solvable model approximates very well the fixed population size model for large populations, but starts to deviate as population sizes are small. Nevertheless, the qualitative behavior of the
An introduction to finite tight frames
Waldron, Shayne F D
2018-01-01
This textbook is an introduction to the theory and applications of finite tight frames, an area that has developed rapidly in the last decade. Stimulating much of this growth are the applications of finite frames to diverse fields such as signal processing, quantum information theory, multivariate orthogonal polynomials, and remote sensing. Key features and topics: * First book entirely devoted to finite frames * Extensive exercises and MATLAB examples for classroom use * Important examples, such as harmonic and Heisenberg frames, are presented in preliminary chapters, encouraging readers to explore and develop an intuitive feeling for tight frames * Later chapters delve into general theory details and recent research results * Many illustrations showing the special aspects of the geometry of finite frames * Provides an overview of the field of finite tight frames * Discusses future research directions in the field Featuring exercises and MATLAB examples in each chapter, the book is well suited as a textbook ...
A programmable finite state module for use with the Fermilab Tevatron Clock
International Nuclear Information System (INIS)
Beechy, D.
1987-10-01
A VME module has been designed which implements several programmable finite state machines that use the Tevatron Clock signal as inputs. In addition to normal finite state machine type outputs, the module, called the VME Finite State Machine, or VFSM, records a history of changes of state so that the exact path through the state diagram can be determined. There is also provision for triggering and recording from an external digitizer so that samples can be taken and recorded under very precisely defined circumstances
Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.
Lee, Sunbok; Lei, Man-Kit; Brody, Gene H
2015-06-01
Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
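One of the six methods compared — the percentile bootstrap — is easy to sketch. The simulated model below (coefficients, noise level, and sample size) is invented for the example and is not taken from the study; the crossover of the two simple regression lines sits at x = -b2/b3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 1 + 0.5*x + 2*z - 1*x*z + noise, so the simple
# regression lines for z = 0 and z = 1 cross at x = -b2/b3 = 2.
n = 600
x = rng.normal(size=n)
z = rng.integers(0, 2, size=n).astype(float)
y = 1 + 0.5 * x + 2 * z - 1 * x * z + rng.normal(scale=1.0, size=n)

def crossover(x, z, y):
    """OLS fit of y ~ 1 + x + z + x*z; the crossover point is -b2/b3."""
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return -b[2] / b[3]

# Percentile bootstrap interval for the crossover point.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(crossover(x[idx], z[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"crossover ~ {crossover(x, z, y):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With several hundred observations the interval is narrow enough to locate the crossover, consistent with the sample-size recommendation above.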
Effect of a data buffer on the recorded distribution of time intervals for random events
Energy Technology Data Exchange (ETDEWEB)
Barton, J C [Polytechnic of North London (UK)
1976-03-15
The use of a data buffer enables the distribution of the time intervals between events to be studied for times less than the recording-system dead-time, but the usual negative exponential distribution for random events has to be modified. The theory for this effect is developed for an n-stage buffer followed by an asynchronous recorder. Results are evaluated for values of n from 1 to 5. In the language of queueing theory, the system studied is of type M/D/1/n+1, i.e. with constant service time and a finite number of places.
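The effect is easy to reproduce numerically. Below is a minimal Monte Carlo sketch of an M/D/1/n+1 system of the kind described above; the arrival rate, dead-time, and buffer depth are arbitrary choices, not values from the paper:

```python
import numpy as np

def recorded_intervals(lam, dead_time, n_buffer, n_arrivals=200_000, seed=1):
    """Monte Carlo model of an n-stage buffer feeding a recorder with a
    constant dead-time (queueing type M/D/1/n+1).  Poisson arrivals that
    find the buffer full are lost; returns the intervals between
    successive recording completions."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=n_arrivals))
    capacity = n_buffer + 1        # buffer stages plus the recorder itself
    completions = []               # completion time of every accepted event
    last = 0.0                     # completion time of the last accepted event
    in_system = []                 # completion times of events still in system
    for t in arrivals:
        in_system = [c for c in in_system if c > t]
        if len(in_system) >= capacity:
            continue               # arrival lost: buffer full
        last = max(t, last) + dead_time
        in_system.append(last)
        completions.append(last)
    return np.diff(completions)

iv = recorded_intervals(lam=1.0, dead_time=0.5, n_buffer=2)
print("min interval:", iv.min())          # essentially the dead-time
print("mean interval:", round(iv.mean(), 2))
```

The recorded-interval distribution is cut off at the dead-time, and its shape below the usual exponential range depends on the buffer depth n, as the theory in the paper predicts.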
Moriyama, Eduardo H.; Zangaro, Renato A.; Lobo, Paulo D. d. C.; Villaverde, Antonio G. J. B.; Watanabe-Sei, Ii; Pacheco, Marcos T. T.; Otsuka, Daniel K.
2002-06-01
Thermal damage in dental pulp during Nd:YAG laser irradiation has been studied by several researchers, but due to the inhomogeneous structure of dentin, laser interaction with dentin in hypersensitivity treatment is not fully understood. In this work, the heat distribution profile on human dentin samples irradiated with an Nd:YAG laser was simulated at the surface and in subjacent layers. Calculations were carried out using the Crank-Nicolson finite difference method. Sixteen dentin samples of 1.5 mm thickness were evenly distributed into four groups and irradiated with Nd:YAG laser pulses according to the following scheme: (I) 1 pulse of 900 mJ; (II) 2 pulses of 450 mJ; (III) 3 pulses of 300 mJ; (IV) 6 pulses of 150 mJ, corresponding to a total laser energy of 900 mJ. The pulse interval was 300 ms, the pulse duration 900 ms, and the irradiated surface area 0.005 mm². Laser-induced morphological changes in dentin were observed for all the irradiated samples. The heat distribution throughout the dentin layer, from the external dentin surface to the pulpal chamber wall, was calculated for each case, in order to obtain further information about the interaction of pulsed Nd:YAG lasers with oral hard tissue. The simulation showed significant differences in the final temperature at the pulpal chamber, depending on the exposure time and the energy contained in the laser pulse.
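The numerical core of such a study can be sketched independently of the dental geometry. Below is a minimal Crank-Nicolson solve of the 1-D heat equation on a slab; the diffusivity, length, and boundary values are illustrative placeholders, not the dentin model:

```python
import numpy as np

# Crank-Nicolson scheme for u_t = alpha * u_xx on a 1-D slab.
# All parameters here are illustrative, not the dentin model above.
alpha = 1.0                       # thermal diffusivity (arbitrary units)
L, nx = 1.0, 51
dx = L / (nx - 1)
dt = 0.001
r = alpha * dt / (2 * dx ** 2)

# Tridiagonal system A u^{n+1} = B u^n, with Dirichlet ends held at 0.
off = np.ones(nx - 1)
A = np.diag((1 + 2 * r) * np.ones(nx)) + np.diag(-r * off, 1) + np.diag(-r * off, -1)
B = np.diag((1 - 2 * r) * np.ones(nx)) + np.diag(r * off, 1) + np.diag(r * off, -1)
A[0, :] = 0; A[0, 0] = 1; B[0, :] = 0          # u(0, t) = 0
A[-1, :] = 0; A[-1, -1] = 1; B[-1, :] = 0      # u(L, t) = 0

x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)                           # initial temperature profile
for _ in range(200):                            # advance to t = 0.2
    u = np.linalg.solve(A, B @ u)

# The exact solution decays as exp(-alpha * pi^2 * t); compare at the midpoint.
print(round(u[nx // 2], 4), round(np.exp(-np.pi ** 2 * 0.2), 4))
```

The scheme is unconditionally stable and second-order accurate, which is why it is a common choice for layered heat-conduction problems like the one above.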
Finite density aspects of leptogenesis
International Nuclear Information System (INIS)
Hohenegger, Andreas
2010-01-01
Leptogenesis takes place in the early universe at high temperatures and densities, and a deviation from equilibrium in the decay of heavy Majorana neutrinos is a fundamental requirement for the generation of the asymmetry. The equations commonly used for its description are largely based on classical Boltzmann equations (BEs), while the source of CP-violation is a quantum interference phenomenon. In view of this clash, it is desirable to study such processes in terms of non-equilibrium quantum field theory. On the other hand, it is simpler to solve BEs than the corresponding quantum field theoretical ones. Therefore, we derive modified BEs from first principles in the Kadanoff-Baym (KB) formalism. The results, found for a simple toy model, can be applied to popular phenomenological scenarios by analogy. This approach uncovers structural differences in the corrected equations and leads to different results for the form of the finite density contributions to the CP-violating parameter. In the case of degenerate heavy neutrino masses, corresponding to the popular scenario of resonant leptogenesis, it allows one to distinguish explicitly between regimes where BEs are applicable or inapplicable.
Finite element coiled cochlea model
Isailovic, Velibor; Nikolic, Milica; Milosevic, Zarko; Saveljic, Igor; Nikolic, Dalibor; Radovic, Milos; Filipović, Nenad
2015-12-01
The cochlea is an important part of the hearing system; thanks to its special structure, it converts external sound waves into neural impulses which travel to the brain. The shape of the cochlea resembles a snail shell, so the geometry of the cochlea model is complex. A simplified coiled cochlea model was developed using the finite element method inside the SIFEM FP7 project. The software application is built so that the user can prescribe a set of parameters for the spiral cochlea, as well as material properties and boundary conditions for the model. Several mathematical models were tested. The acoustic wave equation is used to describe the fluid in the cochlea chambers - the scala vestibuli and scala tympani - and Newtonian dynamics to describe the vibrations of the basilar membrane. The mechanical behavior of the coiled cochlea was analyzed; the third chamber, the scala media, was not modeled because it does not have a significant impact on the mechanical vibrations of the basilar membrane. The results of the coiled model were compared with those of the initial simplified coiled model and are in good agreement with experimental measurements. Future work is needed on a more realistic geometry model.
Finite approximations in fluid mechanics
International Nuclear Information System (INIS)
Hirschel, E.H.
1986-01-01
This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program ''Finite Approximations in Fluid Mechanics'' of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptic and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, zonal solutions for viscous flow problems, etc. The contents include: multigrid methods for problems from fluid dynamics; development of a 2D transonic potential flow solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; Navier-Stokes computations of two-dimensional laminar flows in a channel with a backward-facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; zonal solutions for viscous flow problems
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Takeshi, Y.; Keisuke, K.
1983-01-01
The multigroup neutron diffusion equation for two-dimensional triangular geometry is solved by the finite Fourier transformation method. Using the zeroth-order equation of the integral equation derived by this method, simple algebraic expressions for the flux are derived and solved by the alternating direction implicit method. In sample calculations for a benchmark problem of a fast breeder reactor, it is shown that the present method gives good results with fewer mesh points than the usual finite difference method
What is finiteness? (Abhishek Banerjee) (Indian Institute of Science)
Indian Academy of Sciences (India)
Do finites get enough respect? • Finiteness is easy, no? • Just count whether 1, 2, 3, ... • But then we miss out on the true richness of the concept of finiteness. • There's more finiteness around. In fact, finiteness is what helps us really understand things.
Iterative solutions of finite difference diffusion equations
International Nuclear Information System (INIS)
Menon, S.V.G.; Khandekar, D.C.; Trasi, M.S.
1981-01-01
The heterogeneous arrangement of materials and the three-dimensional character of the reactor physics problems encountered in the design and operation of nuclear reactors make it necessary to use numerical methods to solve the neutron diffusion equations, which are based on the linear Boltzmann equation. The most commonly used numerical method for this purpose is the finite difference method, which converts the diffusion equations into a system of algebraic equations. In practice, the size of the resulting algebraic system is so large that iterative methods have to be used. The most frequently used iterative methods are discussed. They include: (1) basic iterative methods for one-group problems, (2) iterative methods for eigenvalue problems, and (3) iterative methods which use variable acceleration parameters. The application of Chebyshev's theorem to iterative methods is discussed. The extension of the above iterative methods to multigroup neutron diffusion equations is also considered. These methods are applicable to elliptic boundary value problems in reactor design studies in particular, and to elliptic partial differential equations in general. Solutions of sample problems are included to illustrate their applications. The subject matter is presented as simply as possible; however, a working knowledge of matrix theory is presupposed. (M.G.B.)
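A minimal instance of the basic iterative methods in category (1) is Jacobi iteration on the algebraic system arising from a 1-D finite-difference diffusion (Poisson) problem. The problem below is a standard textbook example with the source chosen so the exact answer is known; it is not from the report:

```python
import numpy as np

# Jacobi iteration for the finite-difference discretization of
# -u'' = f on (0,1), u(0) = u(1) = 0, with f chosen so that the
# exact solution is sin(pi x).
n = 50
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)

u = np.zeros(n + 1)
for _ in range(20000):
    u_new = u.copy()
    # u_i = (u_{i-1} + u_{i+1} + h^2 f_i) / 2  (Jacobi sweep)
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h ** 2 * f[1:-1])
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new

print("max error vs exact:", np.max(np.abs(u - np.sin(np.pi * x))))
```

Jacobi converges slowly (its spectral radius approaches 1 as the mesh is refined), which is exactly why the accelerated and Chebyshev-based variants mentioned above matter in practice.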
Brus, D.J.
2015-01-01
In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
Finite element modeling of trolling-mode AFM.
Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza
2018-06-01
Trolling mode atomic force microscopy (TR-AFM) has overcome many imaging problems in liquid environments by considerably reducing the liquid-resonator interaction forces. The finite element model of the TR-AFM resonator considering the effects of fluid and nanoneedle flexibility is presented in this research, for the first time. The model is verified by ABAQUS software. The effect of installation angle of the microbeam relative to the horizon and the effect of fluid on the system behavior are investigated. Using the finite element model, frequency response curve of the system is obtained and validated around the frequency of the operating mode by the available experimental results, in air and liquid. The changes in the natural frequencies in the presence of liquid are studied. The effects of tip-sample interaction on the excitation of higher order modes of the system are also investigated in air and liquid environments. Copyright © 2018 Elsevier B.V. All rights reserved.
Lu, Xiuyuan; Van Roy, Benjamin
2017-01-01
Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
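The "simple special case" in which exact posterior sampling is tractable is the Beta-Bernoulli bandit. The sketch below shows basic Thompson sampling in that case (not ensemble sampling); the arm probabilities and horizon are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Basic Thompson sampling for a 3-armed Bernoulli bandit with exact
# Beta posteriors -- the tractable special case, not ensemble sampling.
true_means = [0.3, 0.5, 0.7]           # hypothetical arm reward probabilities
alpha = np.ones(3)                      # Beta posterior: successes + 1
beta = np.ones(3)                       # Beta posterior: failures + 1

pulls = np.zeros(3, dtype=int)
for _ in range(5000):
    theta = rng.beta(alpha, beta)       # sample one model per arm
    arm = int(np.argmax(theta))         # act greedily w.r.t. the sample
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
    pulls[arm] += 1

print(pulls)  # the best arm (index 2) dominates
```

Ensemble sampling replaces the exact posterior draw with a draw from a maintained ensemble of models, which is what makes the idea extend to neural networks.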
Clifford algebra in finite quantum field theories
International Nuclear Information System (INIS)
Moser, M.
1997-12-01
We consider the most general power-counting renormalizable and gauge invariant Lagrangian density L invariant with respect to some non-Abelian, compact, and semisimple gauge group G. The particle content of this quantum field theory consists of gauge vector bosons, real scalar bosons, fermions, and ghost fields. We assume that the ultimate grand unified theory needs no cutoff. This yields so-called finiteness conditions, resulting from the demand for finite physical quantities calculated from the bare Lagrangian. In lower loop order, necessary conditions for finiteness are thus vanishing beta functions for dimensionless couplings. The complexity of the finiteness conditions for a general quantum field theory makes the discussion of non-supersymmetric theories rather cumbersome. Recently, the F = 1 class of finite quantum field theories has been proposed, embracing all supersymmetric theories. A special type of the proposed F = 1 theories turns out to have Yukawa couplings which are equivalent to generators of a Clifford algebra representation. These algebraic structures are all the more remarkable in the context of a well-known conjecture which states that finiteness may be related to global symmetries (such as supersymmetry) of the Lagrangian density. We can prove that supersymmetric theories can never be of this Clifford type. It turns out that these recently found Clifford algebra representations are a consequence of certain invariances of the finiteness conditions resulting from a vanishing of the renormalization group β-function for the Yukawa couplings. We are able to exclude almost all such Clifford-like theories. (author)
Family intervention: the EDUCA program
Díaz-Sibajas, Miguel Ángel
2014-01-01
The main objective of this workshop is to present a protocolized, group-based parenting-school program for the primary and secondary prevention of disruptive behaviour disorders (oppositional defiant disorder and conduct disorder) in childhood and adolescence. The main intervention strategies and dynamics will be described, with the aim of making the workshop eminently practical, focusing not only on "what to do" but on "how we can do...
Approximation of the semi-infinite interval
Directory of Open Access Journals (Sweden)
A. McD. Mercer
1980-01-01
The approximation of a function $f \in C[a,b]$ by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szász has shown that there are analogous approximations on the interval $[0,\infty)$ based on the Poisson distribution. Recently R. Mohapatra has generalized Szász's result to the case in which the approximating function is $\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right)$. The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
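The classical Szász operator itself is easy to evaluate numerically. The sketch below implements $S_u(f)(x) = e^{-ux}\sum_{k\ge 0} \frac{(ux)^k}{k!} f(k/u)$, which corresponds to the special case $\alpha=\beta=1$, $N=0$ of the generalization quoted above (as reconstructed here); the test function and values of u are our own choices:

```python
import math

def szasz(f, u, x, terms=400):
    """Classical Szasz operator S_u(f)(x) = e^{-ux} sum_k (ux)^k/k! f(k/u),
    the Poisson-weighted analogue of the Bernstein polynomial.  Truncated
    after `terms` terms, which is ample for moderate u*x."""
    s = 0.0
    w = math.exp(-u * x)        # Poisson weight for k = 0
    for k in range(terms):
        s += w * f(k / u)
        w *= u * x / (k + 1)    # next Poisson weight
    return s

# For f(t) = t^2 the operator gives exactly x^2 + x/u, so S_u(f)(1) = 1 + 1/u.
f = lambda t: t * t
for u in (5, 20, 50):
    print(u, round(szasz(f, u, 1.0), 4))   # -> 1.2, 1.05, 1.02
```

The printed values converge to f(1) = 1 as u grows, illustrating the pointwise approximation property.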
Czech Academy of Sciences Publication Activity Database
Šůcha, P.; Hanzálek, Z.; Heřmánek, Antonín; Schier, Jan
2007-01-01
Roč. 46, č. 1 (2007), s. 35-53 ISSN 0922-5773 R&D Projects: GA AV ČR(CZ) 1ET300750402; GA MŠk(CZ) 1M0567; GA MPO(CZ) FD-K3/082 Institutional research plan: CEZ:AV0Z10750506 Keywords : high-level synthesis * cyclic scheduling * iterative algorithms * imperfectly nested loops * integer linear programming * FPGA * VLSI design * blind equalization * implementation Subject RIV: BA - General Mathematics Impact factor: 0.449, year: 2007 http://www.springerlink.com/content/t217kg0822538014/fulltext.pdf
Positive Solutions of the One-Dimensional p-Laplacian with Nonlinearity Defined on a Finite Interval
Ruyun Ma; Chunjie Xie; Abubaker Ahmed
2013-01-01
We use the quadrature method to show the existence and multiplicity of positive solutions of boundary value problems involving the one-dimensional $p$-Laplacian $\left(|u'(t)|^{p-2}u'(t)\right)' + \lambda f(u(t)) = 0$, $t \in (0,1)$, $u(0) = u(1) = 0$, where $p \in (1,2]$, $\lambda \in (0,\infty)$ is a parameter, and $f \in C^{1}([0,r), \ldots$
Ordering, symbols and finite-dimensional approximations of path integrals
International Nuclear Information System (INIS)
Kashiwa, Taro; Sakoda, Seiji; Zenkin, S.V.
1994-01-01
We derive the general form of finite-dimensional approximations of path integrals for both bosonic and fermionic canonical systems in terms of symbols of operators determined by operator ordering. We argue that for a system with a given quantum Hamiltonian such approximations are independent of the type of symbols up to terms of O(ε), where ε is the infinitesimal time interval determining the accuracy of the approximations. A new class of such approximations is found for both c-number and Grassmannian dynamical variables. The actions determined by the approximations are non-local and have no classical continuum limit except in the cases of pq- and qp-ordering. As an explicit example the fermionic oscillator is considered in detail. (author)
Finite-time barriers to reaction front propagation
Locke, Rory; Mahoney, John; Mitchell, Kevin
2015-11-01
Front propagation in advection-reaction-diffusion systems gives rise to rich geometric patterns. It has been shown for time-independent and time-periodic fluid flows that invariant manifolds, termed burning invariant manifolds (BIMs), serve as one-sided dynamical barriers to the propagation of reaction fronts. More recently, theoretical work has suggested that one-sided barriers, termed burning Lagrangian coherent structures (bLCSs), exist for fluid velocity data prescribed over a finite time interval, with no assumption on the time-dependence of the flow. In this presentation, we use a time-varying fluid "wind" in a double-vortex channel flow to demonstrate that bLCSs form the (locally) most attracting or repelling fronts.
Finite-volume scheme for anisotropic diffusion
Energy Technology Data Exchange (ETDEWEB)
Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]
2016-02-01
In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.
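For contrast with the anisotropic scheme above, a standard isotropic 1-D finite-volume discretization of steady diffusion can be written in a few lines. The geometry, conductivities, and source below are invented for the sketch and have nothing to do with the fusion-plasma application:

```python
import numpy as np

# Finite-volume discretization of steady 1-D diffusion -d/dx(k du/dx) = q
# on (0,1) with u = 0 at both ends, using harmonic-mean face conductivities.
# An isotropic toy version only -- not the anisotropic scheme of the paper.
n = 40
dx = 1.0 / n
xc = (np.arange(n) + 0.5) * dx                 # cell centres
k = np.where(xc < 0.5, 1.0, 10.0)              # piecewise conductivity
q = np.ones(n)                                  # uniform volumetric source

# Face conductivity: harmonic mean of the two adjacent cells.
kf = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])

A = np.zeros((n, n))
b = q * dx
for i in range(n):
    if i > 0:                                   # flux through left face
        A[i, i - 1] -= kf[i - 1] / dx
        A[i, i] += kf[i - 1] / dx
    if i < n - 1:                               # flux through right face
        A[i, i + 1] -= kf[i] / dx
        A[i, i] += kf[i] / dx
# Dirichlet u = 0 at both ends (half-cell distance to the boundary face).
A[0, 0] += 2 * k[0] / dx
A[-1, -1] += 2 * k[-1] / dx

u = np.linalg.solve(A, b)
print(round(u.max(), 4))   # peak temperature sits on the low-conductivity side
```

The harmonic mean at material interfaces is what keeps the face flux continuous; the connectivity question studied in the paper arises when such fluxes must also follow strongly anisotropic directions on a grid.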
The theory of finitely generated commutative semigroups
Rédei, L; Stark, M; Gravett, K A H
1966-01-01
The Theory of Finitely Generated Commutative Semigroups describes a theory of finitely generated commutative semigroups which is founded essentially on a single "fundamental theorem" and exhibits resemblance in many respects to the algebraic theory of numbers. The theory primarily involves the investigation of the F-congruences (F is the free semimodule of rank n, where n is a given natural number). As applications, several important special cases are given. This volume is comprised of five chapters and begins with preliminaries on finitely generated commutative semigroups before
Finite Markov processes and their applications
Iosifescu, Marius
2007-01-01
A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models.The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic ch
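The absorbing case admits a compact worked example via the fundamental matrix N = (I - Q)^{-1}, where Q is the transient-to-transient block of the transition matrix. The chain below (symmetric gambler's ruin on {0, 1, 2, 3}, absorbing at 0 and 3) is a made-up example, not one from the book:

```python
import numpy as np

# Expected time to absorption in a finite absorbing Markov chain.
# Transient states 1 and 2 each move left or right with probability 1/2;
# states 0 and 3 are absorbing.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
N = np.linalg.inv(np.eye(2) - Q)   # expected visits to each transient state
t = N @ np.ones(2)                 # expected steps before absorption

print(t)   # -> [2. 2.], matching the classical formula i * (n - i) = 1 * 2
```

The same fundamental matrix also yields absorption probabilities (N @ R, with R the transient-to-absorbing block), which is the standard toolkit for the absorbing chains treated in the text.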
An introduction to finite projective planes
Albert, Abraham Adrian
2015-01-01
Geared toward both beginning and advanced undergraduate and graduate students, this self-contained treatment offers an elementary approach to finite projective planes. Following a review of the basics of projective geometry, the text examines finite planes, field planes, and coordinates in an arbitrary plane. Additional topics include central collineations and the little Desargues' property, the fundamental theorem, and examples of finite non-Desarguesian planes.Virtually no knowledge or sophistication on the part of the student is assumed, and every algebraic system that arises is defined and
Polyelectrolyte Bundles: Finite size at thermodynamic equilibrium?
Sayar, Mehmet
2005-03-01
Experimental observation of finite size aggregates formed by polyelectrolytes such as DNA and F-actin, as well as synthetic polymers like poly(p-phenylene), has attracted much attention in recent years. Here, bundle formation in rigid rod-like polyelectrolytes is studied via computer simulations. For the case of hydrophobically modified polyelectrolytes, finite size bundles are observed even in the presence of only monovalent counterions. Furthermore, in the absence of a hydrophobic backbone, we have also observed formation of finite size aggregates via multivalent counterion condensation. The size distribution of such aggregates and their stability are analyzed in this study.
Books and monographs on finite element technology
Noor, A. K.
1985-01-01
The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, taking into account also a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.
Modelling robot's behaviour using finite automata
Janošek, Michal; Žáček, Jaroslav
2017-07-01
This paper proposes a model of a robot's behaviour described by finite automata. We split the robot's knowledge into several knowledge bases, which are used by the inference mechanism of the robot's expert system to make logical deductions. Each knowledge base is dedicated to a particular behaviour domain, and the finite automaton switches among these knowledge bases according to the current situation. Our goal is to simplify one big knowledge base and reduce its complexity by splitting it into several pieces. The advantage of this model is that we can easily add new behaviour by adding a new knowledge base, inserting the behaviour into the finite automaton, and defining the necessary states and transitions.
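The switching scheme described above can be sketched as a small finite automaton whose states select the active knowledge base. Everything here (state names, events, rules) is an illustrative assumption, not the authors' implementation:

```python
class BehaviourAutomaton:
    """Finite automaton: states select the robot's active knowledge base."""

    def __init__(self, transitions, start):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = start

    def handle(self, event):
        # switch behaviour domain when the event matches a transition,
        # otherwise stay in the current state
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# each state maps to its own small knowledge base of rules (hypothetical)
KNOWLEDGE_BASES = {
    "explore": {"obstacle_ahead": "turn_left"},
    "charge":  {"at_dock": "start_charging"},
    "avoid":   {"obstacle_ahead": "reverse"},
}

fsm = BehaviourAutomaton(
    transitions={
        ("explore", "low_battery"): "charge",
        ("explore", "collision"):   "avoid",
        ("avoid",   "clear"):       "explore",
        ("charge",  "full"):        "explore",
    },
    start="explore",
)

fsm.handle("collision")                           # now in the "avoid" domain
action = KNOWLEDGE_BASES[fsm.state]["obstacle_ahead"]
print(fsm.state, action)                          # avoid reverse
```

Adding a behaviour then amounts to registering one more knowledge base plus its states and transitions, without touching the existing ones.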
Electrical machine analysis using finite elements
Bianchi, Nicola
2005-01-01
OUTLINE OF ELECTROMAGNETIC FIELDS: Vector Analysis; Electromagnetic Fields; Fundamental Equations Summary; References. BASIC PRINCIPLES OF FINITE ELEMENT METHODS: Introduction; Field Problems with Boundary Conditions; Classical Method for the Field Problem Solution; The Classical Residual Method (Galerkin's Method); The Classical Variational Method (Rayleigh-Ritz's Method); The Finite Element Method; References. APPLICATIONS OF THE FINITE ELEMENT METHOD TO TWO-DIMENSIONAL FIELDS: Introduction; Linear Interpolation of the Function f; Application of the Variational Method; Simple Descriptions of Electromagnetic Fields; Appendix: I
Finite element analysis of piezoelectric materials
International Nuclear Information System (INIS)
Lowrie, F.; Stewart, M.; Cain, M.; Gee, M.
1999-01-01
This guide is intended to help people wanting to do finite element analysis of piezoelectric materials by answering some of the questions that are peculiar to piezoelectric materials. The document is not intended as a complete beginner's guide to finite element analysis in general, as this is better dealt with by the individual software producers. The guide is based around the commercial package ANSYS, as this is a popular package amongst piezoelectric material users; however, much of the information will still be useful to users of other finite element codes. (author)
Implicit and fully implicit exponential finite difference methods
Indian Academy of Sciences (India)
Burgers' equation; exponential finite difference method; implicit exponential finite difference method; ... This paper describes two new techniques which give improved exponential finite difference solutions of Burgers' equation.
Confidence intervals for the lognormal probability distribution
International Nuclear Information System (INIS)
Smith, D.L.; Naberejnev, D.G.
2004-01-01
The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
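The anomaly described can be reproduced in a few lines. This sketch assumes an illustrative lognormal with log-mean mu = 0 and log-standard-deviation sigma = 2 (not values from the paper); it contrasts the "mean plus or minus standard deviation" summary with a symmetric 95% interval taken in log-space:

```python
import math

# Lognormal variable X = exp(N(mu, sigma**2)) with large uncertainty.
mu, sigma = 0.0, 2.0
mean = math.exp(mu + sigma ** 2 / 2)
std = mean * math.sqrt(math.exp(sigma ** 2) - 1)
median = math.exp(mu)

# central 95% interval: symmetric in log-space, strictly positive
z = 1.959964                     # 97.5% standard normal quantile
lo = math.exp(mu - z * sigma)
hi = math.exp(mu + z * sigma)

print(mean, std)        # mean - std is negative: an impossible "best value - error"
print(median, (lo, hi)) # the interval stays positive and informative
```

The point of the paper falls out directly: for sigma = 2, `mean - std` is negative even though the variable is inherently positive, while the confidence interval remains meaningful.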
An intervention experience
Directory of Open Access Journals (Sweden)
Cecilia Villarreal Montoya
2007-01-01
This article summarizes an intervention experience with a family from the Villa Esperanza school in Pavas. The case concerns a married couple having difficulty disciplining their three sons, aged eight, six, and four. First, the theoretical and methodological principles are presented, followed by their application in the process the family went through. The structural intervention model is applied, which seeks to have the family itself carry out, step by step, the changes required in the family dynamics and structure. In this family's specific situation, the couple is observed to grow stronger, both as partners and as mother and father, as they gradually manage to share authority in disciplining their children. By sharing this experience, the author aims to encourage counselling professionals to take up the challenge of considering the families of the student community in the educational institutions where they work as an important part of their guidance work.
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F,G,H,.... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases, in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
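The interval operators described above follow directly from the Fréchet inequalities, which bound conjunctions and disjunctions when nothing is known about dependence. A minimal sketch with hypothetical input intervals:

```python
# Interval AND/OR/NOT under unknown dependence (Fréchet bounds).
# Inputs are (lower, upper) probability intervals.
def and_op(p, q):
    """Bounds on P(E and F) given P(E) in p, P(F) in q."""
    return (max(0.0, p[0] + q[0] - 1.0), min(p[1], q[1]))

def or_op(p, q):
    """Bounds on P(E or F) given P(E) in p, P(F) in q."""
    return (max(p[0], q[0]), min(1.0, p[1] + q[1]))

def not_op(p):
    return (1.0 - p[1], 1.0 - p[0])

# imprecise inputs, e.g. interval estimates of subevent probabilities
pE, pF = (0.6, 0.8), (0.5, 0.7)
print(and_op(pE, pF))
print(or_op(pE, pF))
print(not_op(pE))
```

Applying these operators level-wise over a nested stack of confidence intervals gives the iteration the abstract describes.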
Finite-volume spectra of the Lee-Yang model
Energy Technology Data Exchange (ETDEWEB)
Bajnok, Zoltan [MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics,H-1525 Budapest 114, P.O.B. 49 (Hungary); Deeb, Omar el [MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics,H-1525 Budapest 114, P.O.B. 49 (Hungary); Physics Department, Faculty of Science, Beirut Arab University (BAU),Beirut (Lebanon); Pearce, Paul A. [School of Mathematics and Statistics, University of Melbourne,Parkville, Victoria 3010 (Australia)
2015-04-15
We consider the non-unitary Lee-Yang minimal model M(2,5) in three different finite geometries: (i) on the interval with integrable boundary conditions labelled by the Kac labels (r,s)=(1,1),(1,2), (ii) on the circle with periodic boundary conditions and (iii) on the periodic circle including an integrable purely transmitting defect. We apply φ_{1,3} integrable perturbations on the boundary and on the defect and describe the flow of the spectrum. Adding a Φ_{1,3} integrable perturbation to move off-criticality in the bulk, we determine the finite size spectrum of the massive scattering theory in the three geometries via Thermodynamic Bethe Ansatz (TBA) equations. We derive these integral equations for all excitations by solving, in the continuum scaling limit, the TBA functional equations satisfied by the transfer matrices of the associated A_4 RSOS lattice model of Forrester and Baxter in Regime III. The excitations are classified in terms of (m,n) systems. The excited state TBA equations agree with the previously conjectured equations in the boundary and periodic cases. In the defect case, new TBA equations confirm previously conjectured transmission factors.
Hermitian Mindlin Plate Wavelet Finite Element Method for Load Identification
Directory of Open Access Journals (Sweden)
Xiaofeng Xue
2016-01-01
A new Hermitian Mindlin plate wavelet element is proposed. The two-dimensional Hermitian cubic spline interpolation wavelet is substituted into finite element functions to construct the frequency response function (FRF). The method uses a system's FRF and response spectrums to calculate load spectrums and then derives loads in the time domain via the inverse fast Fourier transform. By simulating different excitation cases, Hermitian cubic spline wavelets on the interval (HCSWI) finite elements are used for inverse load identification in the Mindlin plate. The singular value decomposition (SVD) method is adopted to solve the ill-posed inverse problem. Compared with ANSYS results, the HCSWI Mindlin plate element can accurately identify the applied load. Numerical results show that the algorithm of the HCSWI Mindlin plate element is effective. The accuracy of HCSWI can be verified by comparing the FRF of HCSWI and ANSYS elements with the experiment data. The experiment proves that load identification with the HCSWI Mindlin plate is effective and precise when using the FRF and response spectrums to calculate the loads.
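The SVD-regularised inverse step can be illustrated in miniature. This sketch assumes a hypothetical diagonal FRF matrix, so its singular values are simply the absolute diagonal entries and truncation is trivial; it shows the role SVD plays for the ill-posed inversion, not the HCSWI element formulation itself:

```python
# Load identification x = H f  =>  f = H^+ x, with small singular values
# truncated to regularise the ill-posed inversion. H is a hypothetical
# diagonal FRF matrix, so SVD truncation reduces to thresholding |H_ii|.
def identify_loads(H_diag, x, tol=1e-6):
    return [xi / h if abs(h) > tol else 0.0
            for h, xi in zip(H_diag, x)]

H = [2.0, 0.5, 1e-9]            # last mode is numerically unobservable
true_f = [3.0, 4.0, 100.0]      # made-up load spectrum
x = [h * f for h, f in zip(H, true_f)]   # simulated response

f_est = identify_loads(H, x)
print(f_est)                    # [3.0, 4.0, 0.0]
```

The well-conditioned loads are recovered exactly, while the component carried by a near-zero singular value is zeroed rather than amplified, which is the essence of SVD regularisation in the paper's inverse problem.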
Active earth pressure model tests versus finite element analysis
Pietrzak, Magdalena
2017-06-01
The purpose of the paper is to compare failure mechanisms observed in small scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (active earth pressure state). The simple Mohr-Coulomb model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.
Directory of Open Access Journals (Sweden)
Mathias Baumert
2014-12-01
Autonomic activity affects beat-to-beat variability of heart rate and QT interval. The aim of this study was to explore whether entropy measures are suitable to detect changes in neural outflow to the heart elicited by two different stress paradigms. We recorded short-term ECG in 11 normal subjects during an experimental protocol that involved head-up tilt and mental arithmetic stress and computed sample entropy, cross-sample entropy and causal interactions based on conditional entropy from RR and QT interval time series. Head-up tilt resulted in a significant reduction in sample entropy of RR intervals and cross-sample entropy, while mental arithmetic stress resulted in a significant reduction in coupling directed from RR to QT. In conclusion, measures of entropy are suitable to detect changes in neural outflow to the heart and decoupling of repolarisation variability from heart rate variability elicited by orthostatic or mental arithmetic stress.
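Sample entropy as used in such studies follows the standard Richman-Moorman recipe: the negative log-ratio of template matches of length m+1 to matches of length m. A pure-Python sketch with illustrative parameters (m = 2, tolerance 0.25 of the series' standard deviation; not the study's settings):

```python
import math, random

def sample_entropy(x, m=2, r_frac=0.25):
    """SampEn(m, r) = -ln(A/B): A, B = template matches of length m+1, m."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    r = r_frac * sd

    def count_matches(m):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                # Chebyshev distance between templates x[i:i+m], x[j:j+m]
                if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                    c += 1
        return c

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b)

random.seed(0)
regular = [0.0, 1.0] * 50                       # perfectly predictable
irregular = [random.random() for _ in range(100)]
print(sample_entropy(regular), sample_entropy(irregular))
```

A regular series yields entropy near zero while an irregular one scores much higher, which is the direction of change the study looks for under autonomic stress.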
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
Bootstrap confidence intervals for principal response curves
Timmerman, Marieke E.; Ter Braak, Cajo J. F.
2008-01-01
The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the
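For flavour, here is a generic percentile-bootstrap confidence interval. The PRC paper bootstraps fitted curves, which is more involved; the statistic and sample data below are made up:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap (1 - alpha) confidence interval for stat(data)."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [2.1, 2.5, 1.9, 2.8, 2.2, 2.6, 2.4, 2.0, 2.7, 2.3]
mean = lambda xs: sum(xs) / len(xs)
print(bootstrap_ci(sample, mean))
```

The same resampling idea, applied to the fitted PRC at each time point, yields pointwise confidence bands for the curves.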
International Nuclear Information System (INIS)
Gorbatenko, A A; Revina, E I
2015-01-01
The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references
Polynomials in finite geometries and combinatorics
Blokhuis, A.; Walker, K.
1993-01-01
It is illustrated how elementary properties of polynomials can be used to attack extremal problems in finite and euclidean geometry, and in combinatorics. Also a new result, related to the problem of neighbourly cylinders is presented.
Finite Volumes for Complex Applications VII
Ohlberger, Mario; Rohde, Christian
2014-01-01
The methods considered in the 7th conference on "Finite Volumes for Complex Applications" (Berlin, June 2014) have properties which offer distinct advantages for a number of applications. The second volume of the proceedings covers reviewed contributions reporting successful applications in the fields of fluid dynamics, magnetohydrodynamics, structural analysis, nuclear physics, semiconductor theory and other topics. The finite volume method in its various forms is a space discretization technique for partial differential equations based on the fundamental physical principle of conservation. Recent decades have brought significant success in the theoretical understanding of the method. Many finite volume methods preserve further qualitative or asymptotic properties, including maximum principles, dissipativity, monotone decay of free energy, and asymptotic stability. Due to these properties, finite volume methods belong to the wider class of compatible discretization methods, which preserve qualitative propert...
The finite Fourier transform of classical polynomials
Dixit, Atul; Jiu, Lin; Moll, Victor H.; Vignat, Christophe
2014-01-01
The finite Fourier transform of a family of orthogonal polynomials $A_{n}(x)$ is the usual transform of the polynomials extended by $0$ outside their natural domain. Explicit expressions are given for the Legendre, Jacobi, Gegenbauer and Chebyshev families.
Quantiles for Finite Mixtures of Normal Distributions
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
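Since a finite normal mixture has a density but no closed-form quantile function, quantiles are typically found by inverting the mixture CDF numerically. A sketch with illustrative weights and parameters:

```python
import math

def mixture_cdf(x, comps):
    """CDF of a finite normal mixture; comps = [(weight, mu, sigma), ...]."""
    return sum(w * 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
               for w, mu, sigma in comps)

def mixture_quantile(p, comps, lo=-50.0, hi=50.0, tol=1e-10):
    """Invert the mixture CDF by bisection (CDF is monotone increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mixture_cdf(mid, comps) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# symmetric two-component mixture (hypothetical parameters)
comps = [(0.5, -2.0, 1.0), (0.5, 2.0, 1.0)]
print(mixture_quantile(0.5, comps))   # ~0 by symmetry
```

Note the contrast the abstract emphasizes: the mixture of densities above is not the distribution of a linear combination of independent normal variables, which would itself be a single normal.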
Jauch-Piron logics with finiteness conditions
Rogalewicz, Vladimír
1991-04-01
We show that there are no non-Boolean block-finite orthomodular posets possessing a unital set of Jauch-Piron states. Thus, an orthomodular poset representing a quantum physical system must have infinitely many blocks.
Finite element methods a practical guide
Whiteley, Jonathan
2017-01-01
This book presents practical applications of the finite element method to general differential equations. The underlying strategy of deriving the finite element solution is introduced using linear ordinary differential equations, thus allowing the basic concepts of the finite element solution to be introduced without being obscured by the additional mathematical detail required when applying this technique to partial differential equations. The author generalizes the presented approach to partial differential equations which include nonlinearities. The book also includes variations of the finite element method such as different classes of meshes and basis functions. Practical application of the theory is emphasised, with development of all concepts leading ultimately to a description of their computational implementation illustrated using Matlab functions. The target audience primarily comprises applied researchers and practitioners in engineering, but the book may also be beneficial for graduate students.
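In the spirit of the book's approach (linear ODEs first), here is a minimal linear-element solve of -u'' = 1 on (0,1) with u(0) = u(1) = 0, whose exact solution is u(x) = x(1-x)/2. Linear finite elements with exact load integration are nodally exact for this problem; the mesh size is illustrative:

```python
# 1D finite elements: n equal linear elements for -u'' = 1, u(0)=u(1)=0.
n = 10
h = 1.0 / n

# assemble the tridiagonal stiffness matrix and load vector (interior nodes)
a = [2.0 / h] * (n - 1)         # diagonal entries
b = [-1.0 / h] * (n - 2)        # off-diagonal entries (symmetric)
f = [h] * (n - 1)               # integral of each hat function against f = 1

# Thomas algorithm: forward elimination then back substitution
for i in range(1, n - 1):
    w = b[i - 1] / a[i - 1]
    a[i] -= w * b[i - 1]
    f[i] -= w * f[i - 1]
u = [0.0] * (n - 1)
u[-1] = f[-1] / a[-1]
for i in range(n - 3, -1, -1):
    u[i] = (f[i] - b[i] * u[i + 1]) / a[i]

x = [(i + 1) * h for i in range(n - 1)]
err = max(abs(ui - xi * (1 - xi) / 2) for ui, xi in zip(u, x))
print(err)   # near machine precision: nodal values are exact
```

The same assembly-then-solve structure carries over to the book's partial differential equation chapters, with the tridiagonal system replaced by a sparse one.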
Finite boson mappings of fermion systems
International Nuclear Information System (INIS)
Johnson, C.W.; Ginocchio, J.N.
1994-01-01
We discuss a general mapping of fermion pairs to bosons that preserves Hermitian conjugation, with an eye towards producing finite and usable boson Hamiltonians that approximate well the low-energy dynamics of a fermion Hamiltonian
Advanced finite element method in structural engineering
Long, Yu-Qiu; Long, Zhi-Fei
2009-01-01
This book systematically introduces the research work on the Finite Element Method completed over the past 25 years. Original theoretical achievements and their applications in the fields of structural engineering and computational mechanics are discussed.
A note on powers in finite fields
Aabrandt, Andreas; Lundsgaard Hansen, Vagn
2016-08-01
The study of solutions to polynomial equations over finite fields has a long history in mathematics and is an interesting area of contemporary research. In recent years, the subject has found important applications in the modelling of problems from applied mathematical fields such as signal analysis, system theory, coding theory and cryptology. In this connection, it is of interest to know criteria for the existence of squares and other powers in arbitrary finite fields. Making good use of polynomial division in polynomial rings over finite fields, we have examined a classical criterion of Euler for squares in odd prime fields, giving it a formulation that is apt for generalization to arbitrary finite fields and powers. Our proof uses algebra rather than classical number theory, which makes it convenient when presenting basic methods of applied algebra in the classroom.
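Euler's classical criterion in an odd prime field can be checked directly with modular exponentiation; the article's contribution is the generalisation of this criterion to arbitrary finite fields and higher powers:

```python
# Euler's criterion in F_p (p an odd prime): a nonzero element a is a
# square iff a^((p-1)/2) == 1 (mod p).
def is_square_mod_p(a, p):
    return pow(a, (p - 1) // 2, p) == 1

p = 23
squares = {(x * x) % p for x in range(1, p)}   # the quadratic residues
assert all(is_square_mod_p(a, p) == (a in squares) for a in range(1, p))
print(sorted(squares))   # the 11 quadratic residues mod 23
```

Exactly (p-1)/2 of the nonzero elements are squares, matching the criterion's prediction.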
Finite N=1 SUSY gauge field theories
International Nuclear Information System (INIS)
Kazakov, D.I.
1986-01-01
The authors give a detailed description of the method to construct finite N=1 SUSY gauge field theories in the framework of N=1 superfields within dimensional regularization. The finiteness of all Green functions is based on supersymmetry and gauge invariance and is achieved by a proper choice of matter content of the theory and Yukawa couplings in the form Y i =f i (ε)g, where g is the gauge coupling, and the function f i (ε) is regular at ε=0 and is calculated in perturbation theory. Necessary and sufficient conditions for finiteness are determined already in the one-loop approximation. The correspondence with an earlier proposed approach to construct finite theories based on eigenvalue solutions of renormalization-group equations is established
ANSYS mechanical APDL for finite element analysis
Thompson, Mary Kathryn
2017-01-01
ANSYS Mechanical APDL for Finite Element Analysis provides a hands-on introduction to engineering analysis using one of the most powerful commercial general purposes finite element programs on the market. Students will find a practical and integrated approach that combines finite element theory with best practices for developing, verifying, validating and interpreting the results of finite element models, while engineering professionals will appreciate the deep insight presented on the program's structure and behavior. Additional topics covered include an introduction to commands, input files, batch processing, and other advanced features in ANSYS. The book is written in a lecture/lab style, and each topic is supported by examples, exercises and suggestions for additional readings in the program documentation. Exercises gradually increase in difficulty and complexity, helping readers quickly gain confidence to independently use the program. This provides a solid foundation on which to build, preparing readers...
Collaborative Systems – Finite State Machines
Directory of Open Access Journals (Sweden)
Ion IVAN
2011-01-01
In this paper finite state machines are defined and formalized. Collaborative banking systems are presented and their correspondence with finite state machines is established. The paper highlights the role of finite state machines in complexity analysis and in performing operations on very large virtual databases modelled as finite state machines. It builds the state diagram and presents the command and document transitions between the collaborative system states. The paper analyzes the data sets from the Collaborative Multicash Servicedesk application and performs a combined analysis in order to determine certain statistics. Indicators are obtained, such as the number of requests by category and the load degree of an agent in the collaborative system.
Chiral crossover transition in a finite volume
Shi, Chao; Jia, Wenbao; Sun, An; Zhang, Liping; Zong, Hongshi
2018-02-01
Finite volume effects on the chiral crossover transition of strong interactions at finite temperature are studied by solving the quark gap equation within a cubic volume of finite size L. With the anti-periodic boundary condition, our calculation shows the chiral quark condensate, which characterizes the strength of dynamical chiral symmetry breaking, decreases as L decreases below 2.5 fm. We further study the finite volume effects on the pseudo-transition temperature T_c of the crossover, showing a significant decrease in T_c as L decreases below 3 fm. Supported by National Natural Science Foundation of China (11475085, 11535005, 11690030, 51405027), the Fundamental Research Funds for the Central Universities (020414380074), China Postdoctoral Science Foundation (2016M591808) and Open Research Foundation of State Key Lab. of Digital Manufacturing Equipment & Technology in Huazhong University of Science & Technology (DMETKF2015015)
A Finite Axiomatization of G-Dependence
Paolini, Gianluca
2015-01-01
We show that a form of dependence known as G-dependence (originally introduced by Grelling) admits a very natural finite axiomatization, as well as Armstrong relations. We also give an explicit translation between functional dependence and G-dependence.
Review on Finite Element Method * ERHUNMWUN, ID ...
African Journals Online (AJOL)
ADOWIE PERE
ABSTRACT: In this work, we have discussed what Finite Element Method (FEM) is, its historical development, advantages and ... residual procedures, are examples of the direct approach ... The paper centred on the "stiffness and deflection of ...
Finite element bending behaviour of discretely delaminated ...
African Journals Online (AJOL)
due to their light weight, high specific strength and stiffness properties. ... cylindrical shell roofs respectively using finite element method with centrally located .... where ε and γ are the direct and shear strains in midplane and κ denotes ...
Serial binary interval ratios improve rhythm reproduction.
Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao
2013-01-01
Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlining musical rhythm perception.
Interval-based reconstruction for uncertainty quantification in PET
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood expectation maximization (MLEM) algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval valued reconstruction.
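For reference, here is the single-valued MLEM update that NIBEM extends to intervals, on a toy 2x2 "projection" system; the matrix and data are made up, and real PET systems are vastly larger and noisy:

```python
# Classic MLEM update: f_j <- f_j / s_j * sum_i A_ij * y_i / (A f)_i,
# where s_j = sum_i A_ij is the sensitivity of voxel j.
A = [[0.8, 0.2],
     [0.3, 0.7]]                                 # toy projection matrix
true_f = [4.0, 6.0]                              # hypothetical activity
y = [sum(A[i][j] * true_f[j] for j in range(2)) for i in range(2)]

f = [1.0, 1.0]                                   # positive initial estimate
s = [sum(A[i][j] for i in range(2)) for j in range(2)]
for _ in range(200):
    proj = [sum(A[i][j] * f[j] for j in range(2)) for i in range(2)]
    f = [f[j] / s[j] * sum(A[i][j] * y[i] / proj[i] for i in range(2))
         for j in range(2)]

print(f)   # converges toward the true activity [4.0, 6.0]
```

NIBEM replaces the single-valued projections `(A f)_i` with intervals, so the reconstructed activities themselves carry uncertainty bounds.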
Baeten; Bruggeman; Paepen; Carchon
2000-03-01
The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
Dynamic Pricing and Learning with Finite Inventories
Zwart, Bert; Boer, Arnoud
2015-01-01
We study a dynamic pricing problem with finite inventory and parametric uncertainty on the demand distribution. Products are sold during selling seasons of finite length, and inventory that is unsold at the end of a selling season, perishes. The goal of the seller is to determine a pricing strategy that maximizes the expected revenue. Inference on the unknown parameters is made by maximum likelihood estimation. We propose a pricing strategy for this problem, and show that the Regret - which i...
Bibliography for finite elements. [2200 references
Energy Technology Data Exchange (ETDEWEB)
Whiteman, J R [comp.
1975-01-01
This bibliography cites almost all of the significant papers on advances in the mathematical theory of finite elements. Reported are applications in aeronautical, civil, mechanical, nautical and nuclear engineering. Such topics as classical analysis, functional analysis, approximation theory, fluids, and diffusion are covered. Over 2200 references to publications up to the end of 1974 are included. Publications are listed alphabetically by author and also by keywords. In addition, finite element packages are listed.
Finite W-algebras and intermediate statistics
International Nuclear Information System (INIS)
Barbarin, F.; Ragoucy, E.; Sorba, P.
1995-01-01
New realizations of finite W-algebras are constructed by relaxing the usual constraint conditions. Finite W-algebras are then recognized in the Heisenberg quantization recently proposed by Leinaas and Myrheim for a system of two identical particles in d dimensions. As the anyonic parameter is directly associated with the W-algebra involved in the d=1 case, it is natural to consider the W-algebra framework as well adapted to a possible generalization of anyon statistics. ((orig.))
Finite Optimal Stopping Problems: The Seller's Perspective
Hemmati, Mehdi; Smith, J. Cole
2011-01-01
We consider a version of an optimal stopping problem, in which a customer is presented with a finite set of items, one by one. The customer is aware of the number of items in the finite set and the minimum and maximum possible value of each item, and must purchase exactly one item. When an item is presented to the customer, she or he observes its…
Directory of Open Access Journals (Sweden)
S. Srinivasan
1987-01-01
Full Text Available In this paper we consider finite p′-nilpotent groups, which are a generalization of finite p-nilpotent groups. This generalization leads us to consider various special subgroups, such as the Frattini subgroup, the Fitting subgroup, and the hypercenter, in this generalized setting. The paper also considers the conditions under which a product of p′-nilpotent groups is a p′-nilpotent group.
Entangling transformations in composite finite quantum systems
International Nuclear Information System (INIS)
Vourdas, A
2003-01-01
Phase space methods are applied in the context of finite quantum systems. 'Galois quantum systems' (with a dimension which is a power of a prime number) are considered, and symplectic Sp(2,Z(d)) transformations are studied. Composite systems comprising two finite quantum systems are also considered. Symplectic Sp(4,Z(d)) transformations are classified into local and entangling ones, and the necessary matrices which perform such transformations are calculated numerically.
The finite element method in electromagnetics
Jin, Jianming
2014-01-01
A new edition of the leading textbook on the finite element method, incorporating major advancements and further applications in the field of electromagnetics The finite element method (FEM) is a powerful simulation technique used to solve boundary-value problems in a variety of engineering circumstances. It has been widely used for analysis of electromagnetic fields in antennas, radar scattering, RF and microwave engineering, high-speed/high-frequency circuits, wireless communication, electromagnetic compatibility, photonics, remote sensing, biomedical engineering, and space exploration. The
Probabilistic finite elements for fracture mechanics
Besterfield, Glen
1988-01-01
The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near-crack-tip singular strain embedded in the element is used. Probabilistic distributions, such as the expectation, covariance, and correlation of the stress intensity factors, are calculated for random load, random material, and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.
Group foliation of finite difference equations
Thompson, Robert; Valiquette, Francis
2018-06-01
Using the theory of equivariant moving frames, a group foliation method for invariant finite difference equations is developed. This method is analogous to the group foliation of differential equations and uses the symmetry group of the equation to decompose the solution process into two steps, called resolving and reconstruction. Our constructions are performed algorithmically and symbolically by making use of discrete recurrence relations among joint invariants. Applications to invariant finite difference equations that approximate differential equations are given.
Anomalies in curved spacetime at finite temperature
International Nuclear Information System (INIS)
Boschi-Filho, H.; Natividade, C.P.
1993-01-01
We discuss the problem of the breakdown of conformal and gauge symmetries at finite temperature in a curved spacetime background, when the changes in the background are gradual. We obtain the expressions for the Seeley coefficients and the heat kernel expansion in this regime. As applications, we consider the self-interacting λφ⁴ and chiral Schwinger models in curved backgrounds at finite temperature. (Author) 9 refs
Collision Probabilities for Finite Cylinders and Cuboids
Energy Technology Data Exchange (ETDEWEB)
Carlvik, I
1967-05-15
Analytical formulae have been derived for the collision probabilities of homogeneous finite cylinders and cuboids. The formula for the finite cylinder contains double integrals, and the formula for the cuboid only single integrals. Collision probabilities have been calculated by means of the formulae and compared with values obtained by other authors. It was found that the calculations using the analytical formulae are much quicker and give higher accuracy than Monte Carlo calculations.
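The Monte Carlo baseline the abstract compares against can be sketched. The following is a minimal illustration, not the authors' code: the geometry parameters (radius R, height H) and the macroscopic total cross section sigma are assumed for illustration. It estimates the first-flight collision probability of a homogeneous finite cylinder by sampling uniform, isotropic starting points and averaging 1 - exp(-sigma * t_exit):

```python
import math
import random

def exit_distance(x, y, z, dx, dy, dz, R, H):
    """Distance from interior point (x, y, z) along unit direction (dx, dy, dz)
    to the surface of a cylinder of radius R and height H (axis along z, base at z = 0)."""
    t_axial = math.inf
    if dz > 0:
        t_axial = (H - z) / dz
    elif dz < 0:
        t_axial = -z / dz
    t_radial = math.inf
    a = dx * dx + dy * dy
    if a > 0:
        # solve (x + t*dx)^2 + (y + t*dy)^2 = R^2, take the positive root
        b = x * dx + y * dy
        c = x * x + y * y - R * R
        t_radial = (-b + math.sqrt(b * b - a * c)) / a
    return min(t_axial, t_radial)

def collision_probability(sigma, R=1.0, H=1.0, n=20000, seed=7):
    """First-flight collision probability for a uniform isotropic source in a
    homogeneous finite cylinder with total macroscopic cross section sigma."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        # uniform starting point inside the cylinder
        r = R * math.sqrt(rng.random())
        th = 2 * math.pi * rng.random()
        x, y, z = r * math.cos(th), r * math.sin(th), H * rng.random()
        # isotropic direction
        mu = 2 * rng.random() - 1
        ph = 2 * math.pi * rng.random()
        s = math.sqrt(1 - mu * mu)
        t = exit_distance(x, y, z, s * math.cos(ph), s * math.sin(ph), mu, R, H)
        acc += 1.0 - math.exp(-sigma * t)
    return acc / n

print(collision_probability(5.0))   # optically thick cylinder
print(collision_probability(0.01))  # optically thin cylinder
```

As the abstract notes, such Monte Carlo estimates converge slowly; the analytical formulae trade the statistical noise for (at most double) numerical integrals.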
Rough Finite State Automata and Rough Languages
Arulprakasam, R.; Perumal, R.; Radhakrishnan, M.; Dare, V. R.
2018-04-01
Sumita Basu [1, 2] recently introduced the concepts of a rough finite state (semi)automaton, rough grammar and rough languages. Motivated by the work of [1, 2], in this paper we investigate some closure properties of rough regular languages and establish the equivalence between the classes of rough languages generated by rough grammars and the classes of rough regular languages accepted by rough finite automata.
Yamamoto, Naoki; Kanazawa, Takuya
2009-01-01
We study the properties of QCD at high baryon density in a finite volume where color superconductivity occurs. We derive exact sum rules for complex eigenvalues of the Dirac operator at finite chemical potential, and show that the Dirac spectrum is directly related to the color superconducting gap $\\Delta$. Also, we find a characteristic signature of color superconductivity: an X-shaped spectrum of partition function zeros in the complex quark mass plane near the origin, reflecting the $Z(2)_...
Surgery simulation using fast finite elements
DEFF Research Database (Denmark)
Bro-Nielsen, Morten
1996-01-01
This paper describes our recent work on real-time surgery simulation using fast finite element models of linear elasticity. In addition, we discuss various improvements in terms of speed and realism.
Generators for finite depth subfactor planar algebras
Indian Academy of Sciences (India)
The main result of Kodiyalam and Tupurani [3] shows that a subfactor planar algebra of finite depth is singly generated with a finite presentation. If P is a subfactor planar algebra of depth k, it is shown there that a single 2k-box generates P. It is natural to ask what the smallest s is such that a single s-box generates P. While ...
Thomas Fermi model of finite nuclei
International Nuclear Information System (INIS)
Boguta, J.; Rafelski, J.
1977-01-01
A relativistic Thomas-Fermi model of finite nuclei is considered. The effective nuclear interaction is mediated by exchanges of isoscalar scalar and vector mesons. The authors also include a self-interaction of the scalar meson field and the Coulomb repulsion of the protons. The parameters of the model are constrained by the average nuclear properties. The Thomas-Fermi equations are solved numerically for finite, stable nuclei. The particular case of ²⁰⁸Pb is considered in more detail. (Auth.)
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation, the results can sometimes differ substantially, especially for small reference groups. If no reliable RI data are available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples obtained from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using untransformed parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of calculating RIs, provided the transformed data pass a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
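The two routes the abstract recommends can be illustrated with a small sketch. This is not the study's code: the lognormal toy data, the sample size, and the fixed log transform (Box-Cox with lambda = 0) are assumptions for illustration; the study fits lambda and applies a normality test first.

```python
import math
import random
import statistics

def percentile(sorted_xs, p):
    """Linear-interpolation percentile (p in [0, 100]) of a pre-sorted list."""
    k = (len(sorted_xs) - 1) * p / 100.0
    f, c = math.floor(k), math.ceil(k)
    if f == c:
        return sorted_xs[int(k)]
    return sorted_xs[f] + (sorted_xs[c] - sorted_xs[f]) * (k - f)

def bootstrap_ri(xs, n_boot=2000, lo=2.5, hi=97.5, seed=1):
    """Nonparametric bootstrap reference interval: average the 2.5th and
    97.5th percentile estimates over resamples of the original data."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = sorted(rng.choices(xs, k=len(xs)))
        lows.append(percentile(resample, lo))
        highs.append(percentile(resample, hi))
    return statistics.mean(lows), statistics.mean(highs)

def log_parametric_ri(xs):
    """Transformed parametric RI with a log transform (Box-Cox, lambda = 0):
    mean +/- 1.96 SD on the transformed scale, then back-transformed."""
    logs = [math.log(x) for x in xs]
    m, s = statistics.mean(logs), statistics.stdev(logs)
    return math.exp(m - 1.96 * s), math.exp(m + 1.96 * s)

rng = random.Random(42)
sample = [rng.lognormvariate(0.0, 0.4) for _ in range(120)]  # skewed toy marker
print(bootstrap_ri(sample))
print(log_parametric_ri(sample))
```

With 120 skewed observations the two intervals land close together, matching the abstract's finding that bootstrapping is almost as accurate as the transformed parametric method at that sample size.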
The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion
International Nuclear Information System (INIS)
Moszo, P.; Kristek, J.; Galis, M.; Pazak, P.; Balazovijech, M.
2006-01-01
Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes within the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is dominant in the modeling of earthquake motion. Moreover, it is becoming increasingly important in seismic exploration and structural modeling. At the same time, we are convinced that the best days of the finite-difference method in seismology are still to come. This monograph provides a tutorial and detailed introduction to the application of the finite-difference, finite-element, and hybrid finite-difference-finite-element methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods; we focus on those to which we have contributed. (Author)
Explicit isospectral flows associated to the AKNS operator on the unit interval. II
Amour, Laurent
2012-10-01
Explicit flows associated to any tangent vector fields on any isospectral manifold for the AKNS operator acting in L2 × L2 on the unit interval are written down. The manifolds are of infinite dimension (and infinite codimension). The flows are called isospectral and are also Hamiltonian flows. It is proven that they may be explicitly expressed in terms of regularized determinants of infinite matrix-valued functions with entries depending only on the spectral data at the starting point of the flow. The tangent vector fields are decomposed as ∑ξkTk, where ξ ∈ ℓ2 and the Tk ∈ L2 × L2 form a particular basis of the tangent spaces of the infinite-dimensional manifold. This paper is a continuation of Amour ["Explicit isospectral flows for the AKNS operator on the unit interval," Inverse Probl. 25, 095008 (2009)], 10.1088/0266-5611/25/9/095008, where all but a finite number of the components of the sequence ξ are zero in order to obtain an explicit expression for the isospectral flows. The regularized determinants induce counter-terms allowing for the consideration of finite quantities when the sequences ξ range over all of ℓ2.
Multifactorial QT Interval Prolongation and Takotsubo Cardiomyopathy
Directory of Open Access Journals (Sweden)
Michael Gysel
2014-01-01
Full Text Available A 71-year-old woman collapsed while working as a grocery store cashier. CPR was performed, and an AED revealed torsades de pointes (TdP). She was subsequently defibrillated, resulting in restoration of sinus rhythm with a QTc interval of 544 msec. Further evaluation revealed a diagnosis of Takotsubo Cardiomyopathy (TCM) contributing to the development of a multifactorial acquired long QT syndrome (LQTS). The case highlights the role of TCM as a cause of LQTS in the setting of multiple risk factors, including old age, female gender, hypokalemia, and treatment with QT-prolonging medications. It also highlights the multifactorial nature of acquired LQTS and lends support to growing evidence of an association with TCM.
Tracking gauge symmetry factorizability on intervals
International Nuclear Information System (INIS)
Ngoc-Khanh Tran
2006-01-01
We track the gauge symmetry breaking pattern set by boundary conditions on fifth- and higher-dimensional intervals. It is found that, with Dirichlet-Neumann boundary conditions, the Kaluza-Klein decomposition in five dimensions for an arbitrary gauge group can always be factorized into that for separate subsets of at most two gauge symmetries, and so is completely solvable. Accordingly, we present a simple and systematic geometric method to unambiguously identify the gauge breaking/mixing content produced by a general set of Dirichlet-Neumann boundary conditions. We then formulate a limit theorem on gauge symmetry factorizability to recapitulate this interesting feature. Despite the breaking/mixing, a particularly simple check of the orthogonality and normalization of the field modes in the effective 4-dim picture is explicitly obtained. An interesting chained mixing of gauge symmetries in higher dimensions by Dirichlet-Neumann boundary conditions is also explicitly constructed. This study has direct applications to higgsless/GUT model building.
Intervals between multiple fractions per day
International Nuclear Information System (INIS)
Fowler, J.F.
1988-01-01
Assuming the linear quadratic model for dose-response curves enables the proportion of repairable damage to be calculated for any size of dose per fraction. It is given by the beta (dose squared) term, and represents a larger proportion of the total damage for larger doses per fraction, and also for late-reacting than for early-reacting tissues. For example, at 2 Gy per fraction, repairable damage can represent nearly half of the total damage in late-reacting tissues but only one fifth in early-reacting tissues. Even if repair occurs at the same rate in both tissues, it will obviously take longer for 50% of the damage to fade to an undetectable level (3 or 5%) than for 20% to do so. This means that late reactions require longer intervals than early reactions when radiotherapy with multiple fractions per day is planned, even if the half-lives of repair are not different. (orig.)
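The arithmetic behind those proportions follows directly from the linear quadratic model: total damage per fraction of dose d is alpha*d + beta*d**2, so the repairable (beta) share is beta*d**2 / (alpha*d + beta*d**2) = d / (alpha/beta + d). A minimal sketch, assuming the textbook alpha/beta values of roughly 2 Gy (late-reacting) and 8 Gy (early-reacting) tissues, which are not stated in the abstract:

```python
def repairable_fraction(dose_gy, alpha_beta_gy):
    """Fraction of total LQ damage (alpha*d + beta*d^2) carried by the
    repairable beta*d^2 term; algebra reduces it to d / (alpha/beta + d)."""
    return dose_gy / (alpha_beta_gy + dose_gy)

# Assumed typical alpha/beta ratios: ~2 Gy late-reacting, ~8 Gy early-reacting.
late = repairable_fraction(2.0, 2.0)   # 0.5 -> about half of the total damage
early = repairable_fraction(2.0, 8.0)  # 0.2 -> about one fifth
print(late, early)
```

This reproduces the abstract's "nearly half" versus "one fifth" comparison at 2 Gy per fraction.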
Schriesheim, Chester A.; Novelli, Luke, Jr.
1989-01-01
Differences between recommended sets of equal-interval response anchors derived from scaling techniques using magnitude estimations and Thurstone Case III pair-comparison treatment of complete ranks were compared. Differences in results for 205 undergraduates reflected differences in the samples as well as in the tasks and computational…
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
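The entropy-stability idea can be illustrated on the simplest possible case. The sketch below is not the authors' high-order WENO or narrow-stencil schemes: it uses Tadmor's classical second-order entropy-conservative two-point flux for Burgers' equation, for which the semi-discrete scheme with periodic boundaries conserves the discrete entropy sum of u_i^2/2 exactly.

```python
import math

def ec_flux(ul, ur):
    """Tadmor's entropy-conservative two-point flux for Burgers' equation
    f(u) = u^2/2: F* = (ul^2 + ul*ur + ur^2) / 6."""
    return (ul * ul + ul * ur + ur * ur) / 6.0

def rhs_periodic(u, dx):
    """Semi-discrete RHS du/dt = -(F_{i+1/2} - F_{i-1/2}) / dx on a periodic grid."""
    n = len(u)
    F = [ec_flux(u[i], u[(i + 1) % n]) for i in range(n)]  # F[i] = F_{i+1/2}
    return [-(F[i] - F[i - 1]) / dx for i in range(n)]

n = 64
dx = 2 * math.pi / n
u = [math.sin(i * dx) for i in range(n)]
dudt = rhs_periodic(u, dx)

# With the entropy-conservative flux, sum_i u_i * du_i/dt telescopes to the
# boundary terms, which cancel on a periodic grid: the entropy rate is zero
# up to floating-point roundoff.
entropy_rate = sum(ui * di for ui, di in zip(u, dudt)) * dx
print(entropy_rate)
```

Entropy-stable schemes of the kind the paper develops add dissipation on top of such a conservative baseline, so the discrete entropy is guaranteed not to grow even across shocks.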
International Nuclear Information System (INIS)
Fortunati, G.U.; Banfi, C.; Pasturenzi, M.
1994-01-01
This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well-defined objectives of a sampling campaign, the aim was to highlight the most important aspect, the representativeness of samples, as a function of the available resources. Particular emphasis is given to the techniques, and especially to a description of the many types of samplers in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)
Finite element modeling of ultrasonic inspection of weldments
International Nuclear Information System (INIS)
Dewey, B.R.; Adler, L.; Oliver, B.F.; Pickard, C.A.
1983-01-01
High-performance weldments for critical service applications require 100% inspection. Balanced against the adaptability of the ultrasonic method for automated inspection are the difficulties encountered with nonhomogeneous and anisotropic materials. This research utilizes crystals and bicrystals of nickel to model austenitic weld metal, where the anisotropy produces scattering and mode conversion, making detection and measurement of actual defects difficult. Well-characterized samples of Ni are produced in a levitation zone-melting facility. Crystals in excess of 25 mm in diameter and length are large enough to permit ultrasonic measurements of attenuation, wave speed, and spectral content. At the same time, the experiments are duplicated as finite element models for comparison purposes.
Finite Element Simulation of Diametral Strength Test of Hydroxyapatite
International Nuclear Information System (INIS)
Ozturk, Fahrettin; Toros, Serkan; Evis, Zafer
2011-01-01
In this study, the diametral strength test of sintered hydroxyapatite was simulated with the finite element software ABAQUS/Standard. Stress distributions on the diametral test sample were determined, and the effect of sintering temperature on the stress distribution of hydroxyapatite was studied. It was concluded that high sintering temperatures did not reduce the stress on hydroxyapatite and had a negative effect on its stress distribution above 1300 °C. In addition to porosity, other factors (sintering temperature, the phases present, and the degree of crystallinity) affect the diametral strength of hydroxyapatite.