WorldWideScience

Sample records for finite sampling interval

  1. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, č. 3 (2010), s. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords : rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf
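
The record above concerns the finite-sample behaviour of rescaled range (R/S) analysis for estimating the Hurst exponent. A minimal sketch of the classical R/S estimator follows; the window scheme, seed, and sample size are illustrative choices, not taken from the paper:

```python
import numpy as np

def rescaled_range(series):
    """R/S statistic for one window: range of cumulative deviations / std."""
    series = np.asarray(series, dtype=float)
    dev = np.cumsum(series - series.mean())
    r = dev.max() - dev.min()
    s = series.std(ddof=0)
    return r / s if s > 0 else np.nan

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent by regressing log(R/S) on log(window size)."""
    n = len(series)
    sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        # average R/S over non-overlapping windows of this size
        chunks = [series[i:i + size] for i in range(0, n - size + 1, size)]
        rs_values.append(np.nanmean([rescaled_range(c) for c in chunks]))
        sizes.append(size)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # uncorrelated noise, true H = 0.5
print(round(hurst_rs(white), 2))
```

As the paper's title suggests, the finite-sample R/S estimate for white noise is typically biased somewhat above 0.5, which is why confidence intervals for the estimator matter.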

  2. Variational collocation on finite intervals

    International Nuclear Information System (INIS)

    Amore, Paolo; Cervantes, Mayra; Fernandez, Francisco M

    2007-01-01

    In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schrödinger equation, and we have compared the performance of the present approach with others.

  3. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
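
Sampling alleles from a finite diploid population without replacement is naturally modelled by the hypergeometric distribution. The sketch below builds a confidence set for the number of allele copies by test inversion; it illustrates the general idea only and is not the authors' exact construction, and all population/sample figures are made up:

```python
from scipy.stats import hypergeom

def allele_count_ci(N_copies, n_copies, k_obs, alpha=0.05):
    """Confidence set for the number of copies K of an allele among N_copies
    gene copies in a finite population, after observing k_obs copies in a
    sample of n_copies drawn without replacement (hypergeometric model).
    Test-inversion sketch: keep every K not rejected by a two-sided test."""
    kept = []
    for K in range(N_copies + 1):
        rv = hypergeom(N_copies, K, n_copies)
        lower_tail = rv.cdf(k_obs)        # P(X <= k_obs | K)
        upper_tail = rv.sf(k_obs - 1)     # P(X >= k_obs | K)
        if min(lower_tail, upper_tail) > alpha / 2:
            kept.append(K)
    return min(kept), max(kept)

# 50 diploid individuals -> 100 gene copies; sample 30 copies, observe 12
lo, hi = allele_count_ci(100, 30, 12)
print(lo / 100, hi / 100)   # interval for the population allele frequency
```

Note how the finite population size enters directly: for the same sample, a smaller population gives a narrower interval than the binomial (infinite-population) approximation would.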

  4. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  5. Complexity of a kind of interval continuous self-map of finite type

    International Nuclear Information System (INIS)

    Wang Lidong; Chu Zhenyan; Liao Gongfu

    2011-01-01

    Highlights: → We find that the Hausdorff dimension of an interval continuous self-map f of finite type on its non-wandering set can be any s ∈ (0,1). → f|Ω(f) has positive topological entropy. → f|Ω(f) is chaotic in several senses (Devaney chaos, Kato chaos, two-point distributional chaos, and so on). - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type such that the Hausdorff dimension is an arbitrary number in the interval (0,1), discuss various chaotic properties of the map, and examine the relations between the chaotic set and the set of recurrent points.

  6. Complexity of a kind of interval continuous self-map of finite type

    Energy Technology Data Exchange (ETDEWEB)

    Wang Lidong, E-mail: wld@dlnu.edu.cn [Institute of Mathematics, Dalian Nationalities University, Dalian 116600 (China); Institute of Mathematics, Jilin Normal University, Siping 136000 (China); Chu Zhenyan, E-mail: chuzhenyan8@163.com [Institute of Mathematics, Dalian Nationalities University, Dalian 116600 (China) and Institute of Mathematics, Jilin University, Changchun 130023 (China); Liao Gongfu, E-mail: liaogf@email.jlu.edu.cn [Institute of Mathematics, Jilin University, Changchun 130023 (China)

    2011-10-15

    Highlights: → We find that the Hausdorff dimension of an interval continuous self-map f of finite type on its non-wandering set can be any s ∈ (0,1). → f|Ω(f) has positive topological entropy. → f|Ω(f) is chaotic in several senses (Devaney chaos, Kato chaos, two-point distributional chaos, and so on). - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type such that the Hausdorff dimension is an arbitrary number in the interval (0,1), discuss various chaotic properties of the map, and examine the relations between the chaotic set and the set of recurrent points.

  7. Integral equations with difference kernels on finite intervals

    CERN Document Server

    Sakhnovich, Lev A

    2015-01-01

    This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...

  8. Finite-Time Stability of Large-Scale Systems with Interval Time-Varying Delay in Interconnection

    Directory of Open Access Journals (Sweden)

    T. La-inchua

    2017-01-01

    Full Text Available We investigate finite-time stability of a class of nonlinear large-scale systems with interval time-varying delays in interconnection. Time-delay functions are continuous but not necessarily differentiable. Based on Lyapunov stability theory and a new integral bounding technique, finite-time stability of large-scale systems with interval time-varying delays in interconnection is derived. The finite-time stability criteria are delay-dependent and are given in terms of linear matrix inequalities, which can be solved by various available algorithms. Numerical examples are given to illustrate the effectiveness of the proposed method.

  9. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    Science.gov (United States)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems; the result is an uncertain ensemble combining non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining second-order terms of the Taylor series, neglecting all mixed second-order terms. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.

  10. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  11. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
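
The three interval sampling methods compared in the simulation study above are easy to reproduce. The sketch below scores one simulated observation session with each method; the session length, interval duration, and event probability are illustrative parameters, not those of the study:

```python
import numpy as np

def score_intervals(event, interval_len):
    """Score one session with three interval sampling methods.
    `event` is a boolean array: event[t] is True if the behavior occurs at
    second t. Returns the estimated proportion of time for each method."""
    n = len(event) // interval_len
    mts = pir = wir = 0
    for i in range(n):
        chunk = event[i * interval_len:(i + 1) * interval_len]
        mts += chunk[-1]          # momentary time sampling: last instant only
        pir += chunk.any()        # partial-interval: any occurrence scores
        wir += chunk.all()        # whole-interval: behavior must fill the interval
    return {"MTS": mts / n, "PIR": pir / n, "WIR": wir / n}

rng = np.random.default_rng(1)
session = rng.random(3600) < 0.3          # 1 h session, ~30% of seconds active
print(session.mean(), score_intervals(session, 10))
```

Running this shows the inherent biases the paper quantifies: partial-interval recording overestimates and whole-interval recording underestimates the true proportion, while momentary time sampling is approximately unbiased for this kind of event stream.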

  12. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  13. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...

  14. Asymptotics of linear initial boundary value problems with periodic boundary data on the half-line and finite intervals

    KAUST Repository

    Dujardin, G. M.

    2009-08-12

    This paper deals with the asymptotic behaviour of the solutions of linear initial boundary value problems with constant coefficients on the half-line and on finite intervals. We assume that the boundary data are periodic in time and we investigate whether the solution becomes time-periodic after sufficiently long time. Using Fokas' transformation method, we show that, for the linear Schrödinger equation, the linear heat equation and the linearized KdV equation on the half-line, the solutions indeed become periodic for large time. However, for the same linear Schrödinger equation on a finite interval, we show that the solution, in general, is not asymptotically periodic; actually, the asymptotic behaviour of the solution depends on the commensurability of the time period T of the boundary data with the square of the length of the interval. © 2009 The Royal Society.

  15. Robust weak measurements on finite samples

    International Nuclear Information System (INIS)

    Tollaksen, Jeff

    2007-01-01

    A new weak measurement procedure is introduced for finite samples which yields accurate weak values that are outside the range of eigenvalues and which do not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing sample size. This procedure can also extend the strength of the coupling between the system and measuring device to a new regime

  16. Relativistic rise measurements with very fine sampling intervals

    International Nuclear Information System (INIS)

    Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.

    1980-01-01

    The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements
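
Ionization-loss measurements like those above are conventionally combined with a truncated-mean estimator, which discards the long Landau tail of the per-sample pulse heights; more samples per track (finer sampling intervals) tighten this estimator. A toy sketch, with an assumed core-plus-tail pulse-height distribution standing in for real detector data:

```python
import numpy as np

def truncated_mean(samples, keep=0.6):
    """Truncated-mean ionization estimator: average only the lowest `keep`
    fraction of the per-sample charge measurements, discarding the upper
    (Landau) tail that dominates the variance of the plain mean."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(len(s) * keep))
    return s[:k].mean()

# toy pulse heights: narrow core plus a long upper tail (assumed shape)
rng = np.random.default_rng(2)
core = rng.normal(1.0, 0.1, 900)
tail = 1.0 + rng.exponential(1.5, 100)
samples = np.concatenate([core, tail])
print(round(samples.mean(), 2), round(truncated_mean(samples), 2))
```

The truncated mean sits close to the core of the distribution while the plain mean is pulled upward by the tail, which is why resolving power improves with the number of samples available for truncation.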

  17. Robust L2-L∞ Filtering of Time-Delay Jump Systems with Respect to the Finite-Time Interval

    Directory of Open Access Journals (Sweden)

    Shuping He

    2011-01-01

    Full Text Available This paper studies the problem of stochastic finite-time boundedness and disturbance attenuation for a class of linear time-delayed systems with Markov jumping parameters. Sufficient conditions are provided to solve this problem. The L2-L∞ filters are, respectively, designed for time-delayed Markov jump linear systems with/without uncertain parameters such that the resulting filtering error dynamic system is stochastically finite-time bounded and has the finite-time interval disturbance attenuation γ for all admissible uncertainties, time delays, and unknown disturbances. By using a stochastic Lyapunov-Krasovskii functional approach, it is shown that the filter design problem reduces to solving a set of coupled linear matrix inequalities. Simulation examples are included to demonstrate the potential of the proposed results.

  18. A summary of maintenance policies for a finite interval

    International Nuclear Information System (INIS)

    Nakagawa, T.; Mizutani, S.

    2009-01-01

    It is an important practical problem to consider maintenance policies for a finite time span, because the working times of most units in actual fields are finite. This paper converts the usual maintenance models to finite maintenance models. It is more difficult to derive theoretically optimal policies for a finite time span than for an infinite one. Three usual models of periodic replacement with minimal repair, block replacement and simple replacement are transformed to finite replacement models. Further, optimal periodic and sequential policies for an imperfect preventive maintenance and an inspection model for a finite time span are considered. Optimal policies for each model are analytically derived and numerically computed.
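
A standard finite-span formulation of periodic replacement with minimal repair splits the span into N equal intervals and trades replacement cost against expected minimal repairs. The sketch below uses a Weibull cumulative hazard; the cost figures and Weibull parameters are illustrative assumptions, not values from the paper:

```python
def total_cost(N, span, c_replace, c_minimal, shape=2.0, scale=1.0):
    """Expected cost of periodic replacement with minimal repair over a
    finite span: N replacements at interval T = span/N; between replacements
    the expected number of minimal repairs is the Weibull cumulative hazard
    H(T) = (T/scale)**shape, incurred N times."""
    T = span / N
    expected_repairs = N * (T / scale) ** shape
    return N * c_replace + c_minimal * expected_repairs

# hypothetical costs: replacement 6 units, minimal repair 1 unit, span 10
span, c_r, c_m = 10.0, 6.0, 1.0
costs = {N: total_cost(N, span, c_r, c_m) for N in range(1, 21)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 2))
```

Enumerating N over the finite horizon, rather than minimizing a long-run cost rate, is exactly what distinguishes the finite-interval models discussed in the paper from their infinite-horizon counterparts.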

  19. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  20. Asymptotics of linear initial boundary value problems with periodic boundary data on the half-line and finite intervals

    KAUST Repository

    Dujardin, G. M.

    2009-01-01

    This paper deals with the asymptotic behaviour of the solutions of linear initial boundary value problems with constant coefficients on the half-line and on finite intervals. We assume that the boundary data are periodic in time and we investigate

  1. Gap probabilities for edge intervals in finite Gaussian and Jacobi unitary matrix ensembles

    International Nuclear Information System (INIS)

    Witte, N.S.; Forrester, P.J.

    1999-01-01

    The probabilities for gaps in the eigenvalue spectrum of the finite dimension N x N random matrix Hermite and Jacobi unitary ensembles on some single and disconnected double intervals are found. These are cases where a reflection symmetry exists and the probability factors into two other related probabilities, defined on single intervals. Our investigation uses the system of partial differential equations arising from the Fredholm determinant expression for the gap probability and the differential-recurrence equations satisfied by Hermite and Jacobi orthogonal polynomials. In our study we find second and third order nonlinear ordinary differential equations defining the probabilities in the general N case, specific explicit solutions for N = 1 and N = 2, asymptotic expansions, scaling at the edge of the Hermite spectrum as N → ∞, and the Jacobi to Hermite limit, both of which correspond to other cases reported here or known previously. (authors)

  2. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Full Text Available Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
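
The core of the intervals' method described above is a simple transformation: each stress interval becomes one variable, whose value is the percentage of model area occupied by elements whose stress falls in that interval. A minimal sketch with made-up element data (the bin edges and the toy "mandible" are illustrative, not from the paper):

```python
import numpy as np

def interval_variables(stress, area, bin_edges):
    """Intervals' method sketch: for each stress interval, compute the
    percentage of total model area occupied by elements whose stress falls
    in that interval. Returns one row of the multivariate data matrix."""
    stress = np.asarray(stress, dtype=float)
    area = np.asarray(area, dtype=float)
    idx = np.digitize(stress, bin_edges)    # bin index per element
    total = area.sum()
    return np.array([area[idx == b].sum() / total * 100
                     for b in range(1, len(bin_edges))])

# toy model: 5 elements with von Mises stress (MPa) and element areas (mm^2)
stress = [2.0, 3.5, 8.0, 12.0, 20.0]
area   = [4.0, 4.0, 1.0, 0.5, 0.5]
edges  = [0, 5, 10, 15, 25]                 # 4 stress intervals
row = interval_variables(stress, area, edges)
print(row)                                  # percentages summing to 100
```

Because each row is an area-weighted histogram rather than per-node stress values, the resulting variables are comparable across models with different finite element meshes, which is the property the paper emphasizes.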

  3. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  4. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    Science.gov (United States)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure caption: Comparison of proportions and variance across sample intervals, using bootstrap sampling to achieve equal n; each trial was sampled at n = 100, repeated 10,000 times and averaged, and all trials were then averaged to obtain an estimate for each sample interval. Dashed lines represent values from the one-minute dataset.]
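
The subsampling comparison described above can be mimicked by decimating a fine-resolution count series and scaling back up. The sketch below uses synthetic Poisson one-minute wood counts (an assumed data model, not the Slave River data) to show how coarser intervals remain roughly unbiased while losing precision:

```python
import numpy as np

def scaled_estimate(counts_1min, step):
    """Estimate the total count from a coarser sampling interval by taking
    every `step`-th one-minute observation and scaling the sum by `step`."""
    sub = counts_1min[::step]
    return sub.sum() * step

rng = np.random.default_rng(3)
counts = rng.poisson(2.0, 6000)           # hypothetical 1-min wood counts
total = counts.sum()
for step in (5, 10, 15):
    est = scaled_estimate(counts, step)
    print(step, est, round(100 * (est - total) / total, 1))
```

Repeating this over many random offsets would reproduce the paper's central point: the estimate stays centred on the true total, but its spread grows as the sampling interval coarsens.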

  5. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    Science.gov (United States)

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

    In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate results of pulse peak sampling, and hence to errors in parameter estimation. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data in the proposed scheme. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.
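
The statistical-power feedback idea above can be illustrated in miniature: sweep the sampling phase over a periodic pulse train and keep the offset whose samples have the largest mean square. This is a toy stand-in for the paper's scheme; the pulse shape, period, and peak position are all assumed:

```python
import numpy as np

def best_sampling_offset(signal, period, n_offsets):
    """Pick the sampling phase that maximizes the statistical power
    (mean square) of the sampled points -- a toy version of the
    statistical power feedback-control idea."""
    powers = []
    for off in range(n_offsets):
        samples = signal[off::period]
        powers.append(np.mean(samples ** 2))
    return int(np.argmax(powers))

# toy pulse train: Gaussian pulses every 20 samples, peak at offset 8 (assumed)
t = np.arange(2000)
signal = np.exp(-0.5 * ((t % 20) - 8) ** 2 / 2.0 ** 2)
print(best_sampling_offset(signal, period=20, n_offsets=20))
```

In a real system the sweep is replaced by a dynamic delay module, but the selection criterion, maximizing the statistical power of the sampled data, is the same in spirit.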

  6. An integral equation approach to the interval reliability of systems modelled by finite semi-Markov processes

    International Nuclear Information System (INIS)

    Csenki, A.

    1995-01-01

    The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The system of implementation is the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation
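
For the special case of a two-state Markov system (exponential up and down times), interval reliability has a closed form that makes a useful sanity check for integral-equation solvers like the one described above: by memorylessness, it is the point availability at t times the probability of surviving a further x. A sketch with illustrative rates, not the paper's two-unit model:

```python
import math

def interval_reliability(t, x, lam, mu):
    """Two-state Markov system (failure rate lam, repair rate mu):
    probability the system works throughout [t, t+x]. By memorylessness this
    equals the point availability A(t) times exp(-lam*x) -- a closed-form
    special case, not the paper's semi-Markov integral-equation solver."""
    A_t = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)
    return A_t * math.exp(-lam * x)

print(round(interval_reliability(t=100.0, x=5.0, lam=0.01, mu=0.5), 4))
```

General semi-Markov models lose the memoryless property, which is why the paper must solve a system of integral equations (numerically, by a two-point trapezoidal rule) instead.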

  7. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

    In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all ..., and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.

  8. Extension of a chaos control method to unstable trajectories on infinite- or finite-time intervals: Experimental verification

    International Nuclear Information System (INIS)

    Yagasaki, Kazuyuki

    2007-01-01

    In experiments for single and coupled pendula, we demonstrate the effectiveness of a new control method based on dynamical systems theory for stabilizing unstable aperiodic trajectories defined on infinite- or finite-time intervals. The basic idea of the method is similar to that of the OGY method, which is a well-known, chaos control method. Extended concepts of the stable and unstable manifolds of hyperbolic trajectories are used here

  9. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is commonly used in practice, but it has its own limitations. In this article, an improvement to the Bayes Bootstrap method is given. The method extends the sample size by numerical simulation without changing the underlying circumstances of the original small sample, and can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, and the effectiveness and practicability of the improved Bootstrap method are demonstrated.
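As a baseline for comparison, the ordinary percentile bootstrap interval for a small sample can be sketched in a few lines. This is the generic method, not the article's Improved-Bootstrap variant, whose extra numerical-simulation resampling step is not reproduced here.

```python
import numpy as np

def bootstrap_ci(sample, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of a small
    sample: resample with replacement, take the empirical quantiles of
    the resampled means."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    means = sample[idx].mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

For very small samples the interval is necessarily bounded by the observed minimum and maximum, which is one of the limitations the article's improvement targets.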

  10. Effects of Spatial Sampling Interval on Roughness Parameters and Microwave Backscatter over Agricultural Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Matías Ernesto Barber

    2016-06-01

    Full Text Available The spatial sampling interval, as related to the ability to digitize a soil profile with a certain number of features per unit length, depends on the profiling technique itself. From a variety of profiling techniques, roughness parameters are estimated at different sampling intervals. Since soil profiles have continuous spectral components, it is clear that roughness parameters are influenced by the sampling interval of the measurement device employed. In this work, we address the question of which sampling interval profiles need to be measured at to accurately account for the microwave response of agricultural surfaces. For this purpose, a 2-D laser profiler was built and used to measure surface soil roughness at field scale over agricultural sites in Argentina. Sampling intervals ranged from large (50 mm to small ones (1 mm, with several intermediate values. Large- and intermediate-sampling-interval profiles were synthetically derived from nominal, 1 mm ones. With these data, the effect of sampling-interval-dependent roughness parameters on backscatter response was assessed using the theoretical backscatter model IEM2M. Simulations demonstrated that variations of roughness parameters depended on the working wavelength and were less important at L-band than at C- or X-band. In any case, an underestimation of the backscattering coefficient of about 1-4 dB was observed at larger sampling intervals. As a general rule, a sampling interval of 15 mm can be recommended for L-band and 5 mm for C-band.
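The dependence of roughness parameters on the sampling interval is easy to reproduce by decimating a dense profile, which mirrors how the coarser profiles were synthetically derived from the nominal 1 mm ones. The sketch below computes the two standard parameters (RMS height and correlation length); the 1/e correlation-length criterion is a common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def roughness_params(z, dx):
    """RMS height and correlation length (first lag at which the
    normalized autocorrelation drops below 1/e) of a 1-D height
    profile sampled at interval dx."""
    z = z - z.mean()
    s = z.std()
    ac = np.correlate(z, z, mode="full")[len(z) - 1:]
    ac = ac / ac[0]
    below = np.nonzero(ac < np.exp(-1.0))[0]
    corr_len = below[0] * dx if below.size else len(z) * dx
    return s, corr_len
```

Decimating a 1 mm profile as `z[::5]` with `dx=5.0` emulates a 5 mm sampling interval; for smooth surfaces the RMS height is nearly unchanged while fine-scale spectral content is lost.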

  11. The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.

    Science.gov (United States)

    Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J

    2017-09-01

    There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak. Whether differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg⁻¹·min⁻¹) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg⁻¹·min⁻¹) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak for the Bruce and self-paced V˙O2max protocols in the first sample and for the Bruce and modified Astrand protocols in the second sample. V˙O2peak across sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V˙O2peak.
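Two of the five sampling strategies compared above can be sketched directly; the input data and structure are illustrative (breath-by-breath V˙O2 values with timestamps), not the study's recordings.

```python
import numpy as np

def peak_15breath_moving(vo2_breaths):
    """V̇O2peak as the highest 15-breath moving average."""
    v = np.asarray(vo2_breaths, dtype=float)
    kernel = np.ones(15) / 15.0
    return np.convolve(v, kernel, mode="valid").max()

def peak_30s_blocks(vo2_values, timestamps):
    """V̇O2peak as the highest 30-s block average, blocks aligned to t=0."""
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(vo2_values, dtype=float)
    blocks = (t // 30).astype(int)
    return max(v[blocks == b].mean() for b in np.unique(blocks))
```

Because a moving average can end on any breath while a block average is tied to fixed boundaries, the moving-average peak is generally at least as high, consistent with the pattern the study reports.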

  12. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample which is often very complex in a finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  13. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, P; Kleibergen, F

    2003-01-01

    We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on its finite-sample properties.

  14. A new variable interval schedule with constant hazard rate and finite time range.

    Science.gov (United States)

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
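The schedule's generative step can be simulated. The specific linear rule `p(u) = u / (2T)` used below is an assumption for illustration (the abstract only says the rule is linear in trial duration), and the published schedule's alternation constraint between reinforced and unreinforced trials is not reproduced.

```python
import numpy as np

def simulate_vi_schedule(T, n_trials, seed=0):
    """Simulate the bounded VI schedule: each trial's duration is drawn
    from Uniform(0, 2T), and the trial ends reinforced with a
    probability assumed linear in its duration, p(u) = u / (2T)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 2.0 * T, n_trials)    # all durations < 2T
    reinforced = rng.uniform(size=n_trials) < u / (2.0 * T)
    return u, reinforced
```

Under this rule the expected fraction of reinforced trials is E[U]/(2T) = 1/2, and the empirical hazard of reinforcement can be inspected numerically from the simulated trials.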

  15. A proof of the Woodward-Lawson sampling method for a finite linear array

    Science.gov (United States)

    Somers, Gary A.

    1993-01-01

    An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
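The discrete-sampling idea behind this result has a compact numerical analogue: an N-element array factor is a trigonometric polynomial with N coefficients, so N far-field samples at equispaced angles determine it everywhere. The sketch below uses the DFT relation at samples psi_k = 2*pi*k/N; it is a simplified analogue of the Woodward-Lawson result, not the paper's exact formulation (which addresses the length parameter and real-space sample placement).

```python
import numpy as np

def array_factor(a, psi):
    """AF(psi) = sum_n a_n exp(j n psi) for an N-element linear array
    with excitations a, evaluated at the angles in psi."""
    n = np.arange(len(a))
    return (a[None, :] * np.exp(1j * np.outer(psi, n))).sum(axis=1)

def excitations_from_samples(samples):
    """Recover the N excitations from N samples of AF taken at
    psi_k = 2*pi*k/N; AF is then known in closed form everywhere."""
    return np.fft.fft(samples) / len(samples)
```

Once the excitations are recovered, `array_factor` reconstructs the pattern exactly at any angle, which is the sampling-theorem statement in miniature.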

  16. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

    The problem of sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage is investigated in this paper. The objective of designing controllers is to guarantee the stability and anti-disturbance performance of the closed-loop systems while some sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, by considering most of those earthquakes or strong winds happen in a very short time, and it is often the peak values make the structures damaged, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and the H-infinity stability is adopted in the controller design to make sure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the forms of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Technical note: Instantaneous sampling intervals validated from continuous video observation for behavioral recording of feedlot lambs.

    Science.gov (United States)

    Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L

    2017-11-01

    When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
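The validation procedure itself — regress scan-derived time budgets on continuous ones and apply the three criteria — can be sketched as follows. Simple numeric thresholds stand in for the study's significance tests on slope and intercept.

```python
import numpy as np

def validate_scan_interval(records, step):
    """records: per-second 0/1 behavior records, one per animal.
    Compares continuous time budgets with instantaneous scans every
    `step` seconds via linear regression. Fixed thresholds stand in
    for the paper's P-value criteria on slope and intercept."""
    x = np.array([r.mean() for r in records])           # continuous
    y = np.array([r[::step].mean() for r in records])   # scan estimate
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    ok = r2 >= 0.90 and abs(slope - 1) < 0.1 and abs(intercept) < 0.05
    return r2, slope, intercept, ok
```

Behaviors occurring in long bouts (such as lying) pass validation at long intervals because scans rarely straddle a bout boundary, while brief behaviors (drinking, locomotion) fail, matching the study's findings.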

  18. The effects of varying sampling intervals on the growth and survival ...

    African Journals Online (AJOL)

    Four different sampling intervals were investigated during a six-week outdoor nursery management of Heterobranchus longifilis (Valenciennes, 1840) fry in outdoor concrete tanks in order to determine the most suitable sampling regime for maximum productivity in terms of optimum growth and survival of hatchlings and ...

  19. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages. Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table units, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting with the computed confidence interval for a specified method — information such as confidence interval boundaries, percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level — was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem because most software uses interpolation in graphical representation, producing quadratic rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All of the implementations described above were used in computing the confidence intervals and estimating their performance for binomial-distribution sample sizes and variables.
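The normal approximation to the binomial and its limitations can be made concrete with two textbook intervals for a proportion. These are standard formulas (Wald and Wilson score), shown here as generic examples rather than the paper's PHP implementation.

```python
import math

def wald_ci(x, n, z=1.96):
    """Normal-approximation (Wald) CI for a binomial proportion x/n.
    Degenerates at x = 0 or x = n, one of the approximation problems
    the paper discusses."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, z=1.96):
    """Wilson score CI; better behaved near 0/1 and for small n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

For x = 0 the Wald interval collapses to a point at zero, while the Wilson interval still has positive width — a simple illustration of why the choice of expression matters for small samples.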

  20. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval reliability.

  1. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Science.gov (United States)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
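The two ingredients named in the abstract — cubic interpolation of the sampled data and arbitrary frequency resolution — can be combined in a simple numerical stand-in: spline-refine the samples onto a fine grid and apply trapezoidal quadrature at any requested frequency. This replaces the paper's closed-form cubic evaluation and chirp-z transform with a direct computation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def finite_fourier(t, x, freqs, refine=10):
    """Approximate the finite Fourier transform
    X(f) = int x(t) exp(-j 2 pi f t) dt at arbitrary frequencies:
    cubic-spline refinement of the samples followed by trapezoidal
    quadrature (a numerical stand-in for the paper's closed-form
    cubic-interpolation / chirp-z evaluation)."""
    cs = CubicSpline(t, x)
    tf = np.linspace(t[0], t[-1], refine * (len(t) - 1) + 1)
    xf = cs(tf)
    w = np.full(tf.size, tf[1] - tf[0])   # trapezoid weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return np.array([(w * xf * np.exp(-2j * np.pi * f * tf)).sum()
                     for f in freqs])
```

Because `freqs` is an arbitrary list, frequency resolution is decoupled from the record length, which is the practical benefit the abstract attributes to the chirp-z step.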

  2. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    Science.gov (United States)

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial Bosonic correlation due to the Duschinsky Rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effect is intimately related to the various versions of Boson Sampling sharing a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among the various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  3. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    Science.gov (United States)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-months) and mid-term (six-months) drought observations. The effects of North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
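The residual-based bootstrap the authors use can be sketched for the simplest autoregressive case. The AR(1) fit below (least squares, no intercept) is a minimal stand-in for whatever forecasting model is fitted to the PDSI series; the bootstrap step — resampling centered residuals and propagating them through the fitted recursion — is the part the abstract describes.

```python
import numpy as np

def ar1_bootstrap_pi(series, h=1, n_boot=2000, alpha=0.05, seed=0):
    """Residual-based bootstrap prediction interval, h steps ahead,
    for an AR(1) model fitted by least squares (a minimal sketch of
    the approach, not the paper's exact model)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(series, float)
    x, z = y[:-1], y[1:]
    phi = (x * z).sum() / (x * x).sum()   # LS slope, no intercept
    resid = z - phi * x
    resid -= resid.mean()                 # center the residuals
    preds = np.empty(n_boot)
    for b in range(n_boot):
        val = y[-1]
        for _ in range(h):
            val = phi * val + rng.choice(resid)
        preds[b] = val
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2])
```

Because the residuals are resampled rather than assumed Gaussian, the resulting interval adapts to skewness in the drought-index innovations.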

  4. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
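Two of the compared estimation routes — the nonparametric percentile method and mean ± 2 SD after transformation — can be sketched side by side. A log transform is used below as a simple stand-in for the Box-Cox transform applied in the study.

```python
import numpy as np

def reference_interval(values, method="nonparametric"):
    """Reference limits (2.5th / 97.5th percentiles) estimated either
    nonparametrically or as mean +/- 2 SD after a log transform
    (log as a stand-in for the study's Box-Cox transform)."""
    v = np.asarray(values, float)
    if method == "nonparametric":
        return tuple(np.percentile(v, [2.5, 97.5]))
    logv = np.log(v)
    m, s = logv.mean(), logv.std(ddof=1)
    return float(np.exp(m - 2 * s)), float(np.exp(m + 2 * s))
```

For a large skewed sample the two routes roughly agree; applied to repeated 27-value subsamples, their estimates scatter widely, which is the instability the study documents.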

  5. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with special type of matrix. This kind of matrix: band matrix with a parameter, from finite difference problem is obtained. Such linear systems occur while solving one dimensional wave equation (Partial Differential Equations of hyperbolic type) by using the central difference interval method of the second order. Interval methods are constructed so as the errors of method are enclosed in obtained results, therefore presented linear interval systems contain elements that determining the errors of difference method. The chosen direct algorithms have been applied for solving linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.
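The core mechanism — direct elimination carried out in interval arithmetic so that the method error is enclosed in the result — can be sketched for a 2x2 system. The interval class below omits directed rounding, so its results are near-enclosures rather than the rigorous ones a real interval library provides.

```python
class Interval:
    """Minimal closed-interval arithmetic (directed rounding omitted,
    so these are near-enclosures, not rigorous ones)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = float(lo), float(hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0, "divisor interval contains zero"
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Interval Gaussian elimination for a 2x2 interval linear system."""
    m = a21 / a11
    x2 = (b2 - m * b1) / (a22 - m * a12)
    x1 = (b1 - a12 * x2) / a11
    return x1, x2
```

By inclusion isotonicity, widening any coefficient interval only widens the solution enclosure, so the true point solution always remains inside — the property that lets the banded systems in the paper carry the difference-method error along with them.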

  6. Impact of sampling interval in training data acquisition on intrafractional predictive accuracy of indirect dynamic tumor-tracking radiotherapy.

    Science.gov (United States)

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-08-01

    To explore the effect of sampling interval of training data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three dimensional (3D) target motions and one dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for various prediction models were then calculated on intrafractional monitoring images taken for 30 s at a constant sampling interval of 500 ms to fairly evaluate the prediction accuracy for the same motion pattern. In addition, the first respiratory motion was used for the training and the second respiratory motion was used for the evaluation of the intrafractional prediction errors for the changed respiratory motion to evaluate the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling intervals.
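The train-then-monitor workflow can be sketched with synthetic data: fit a surrogate-to-target correlation model on training data acquired at one sampling interval, then score it on monitoring samples at a fixed 500 ms interval. The quadratic polynomial and the sinusoidal motion below are illustrative assumptions; the Vero4DRT's actual correlation model also handles phase shifts between surrogate and target.

```python
import numpy as np

def train_correlation_model(surrogate, target, degree=2):
    """Fit internal target position as a polynomial of the external
    surrogate signal (a quadratic stand-in for the real correlation
    model)."""
    return np.polyfit(surrogate, target, degree)

def rmse(coeffs, surrogate, target):
    """Root-mean-square prediction error of the fitted model."""
    pred = np.polyval(coeffs, surrogate)
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

Training on the dense 80 ms / 20 s set uses many more samples than the sparse 2,000 ms / 60 s set, so with noisy data the dense fit is the more stable of the two.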

  7. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  8. A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval

    International Nuclear Information System (INIS)

    Todinov, M.T.

    2004-01-01

    A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
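The central claim — that clustering is substantial even at moderate number densities — is easy to check by Monte Carlo, as a numerical companion to the paper's derived equations (which give this probability in closed form).

```python
import numpy as np

def prob_min_gaps(rate, length, min_gap, n_sim=20000, seed=0):
    """Monte Carlo estimate of the probability that all successive gaps
    between points of a homogeneous Poisson process on [0, length]
    exceed min_gap; the complement is the 'clustering' event."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n_sim):
        n = rng.poisson(rate * length)
        if n < 2:
            ok += 1            # fewer than two points: no gap to violate
            continue
        pts = np.sort(rng.uniform(0.0, length, n))
        if np.diff(pts).min() > min_gap:
            ok += 1
    return ok / n_sim
```

With an average of ten points on an interval of 100 units and a required gap of one unit, the probability that no two points cluster is only around 0.4, illustrating why the paper argues clustering should not be neglected in reliability calculations.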

  9. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation for the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the sample-mean intervals are carried out for sample sizes of 4, 5, and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
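The Bayesian Bootstrap differs from the ordinary bootstrap in that each replicate reweights the observations with Dirichlet(1, …, 1) weights instead of resampling them, which avoids the discreteness of resampling counts — an advantage at the sample sizes of 4 to 6 considered here. A minimal sketch:

```python
import numpy as np

def bayesian_bootstrap_ci(sample, n_rep=10000, alpha=0.05, seed=0):
    """Bayesian bootstrap interval for the mean: each replicate draws
    Dirichlet(1, ..., 1) weights over the observations and records the
    weighted mean."""
    rng = np.random.default_rng(seed)
    x = np.asarray(sample, float)
    w = rng.dirichlet(np.ones(len(x)), size=n_rep)   # one weight row per replicate
    means = w @ x
    return tuple(np.quantile(means, [alpha / 2, 1 - alpha / 2]))
```

Like the ordinary bootstrap, the replicate means (and hence the interval) are confined to the range of the observed data, a limitation inherent to both methods in very small samples.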

  10. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
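The accuracy-in-parameter-estimation logic reduces, in its simplest form, to solving a CI-width equation for n. The sketch below does this for a mean with a normal-approximation interval; the paper's methods do the analogous computation for composite reliability coefficients, whose sampling variance is considerably more involved.

```python
import math

def n_for_ci_width(sd, width, z=1.96):
    """Smallest n for which the expected normal-approximation CI for a
    mean, of width 2*z*sd/sqrt(n), is no wider than `width` (the AIPE
    idea in its simplest form)."""
    return math.ceil((2 * z * sd / width) ** 2)
```

Halving the target width quadruples the required sample size, which is why planning for narrow intervals around reliability coefficients often demands far larger samples than planning for significance alone.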

  11. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.

  12. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...

  13. Modified Taylor series method for solving nonlinear differential equations with mixed boundary conditions defined on finite intervals.

    Science.gov (United States)

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus

    2014-01-01

    In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
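The role of the shooting constants can be illustrated with a plain numerical shooting method for a two-point BVP: unknown initial slopes are adjusted until the far boundary condition is met. This is a generic sketch — the MTSM embeds the shooting constants in a Taylor-series expansion rather than integrating numerically as done here.

```python
import numpy as np

def shoot(f, a, b, ya, yb, s_lo, s_hi, n=200, tol=1e-10):
    """Solve y'' = f(x, y, y'), y(a) = ya, y(b) = yb by shooting:
    RK4 integration plus bisection on the unknown initial slope,
    which plays the role of a shooting constant."""
    def endpoint(s):
        h = (b - a) / n
        u = np.array([ya, s])          # u = (y, y')
        x = a
        def F(x, u):
            return np.array([u[1], f(x, u[0], u[1])])
        for _ in range(n):
            k1 = F(x, u)
            k2 = F(x + h / 2, u + h / 2 * k1)
            k3 = F(x + h / 2, u + h / 2 * k2)
            k4 = F(x + h, u + h * k3)
            u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            x += h
        return u[0]
    lo, hi = s_lo, s_hi
    flo = endpoint(lo) - yb
    assert flo * (endpoint(hi) - yb) <= 0, "slopes do not bracket a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = endpoint(mid) - yb
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)
```

For the linear test problem y'' = y with y(0) = 0 and y(1) = sinh(1), the recovered shooting constant is the exact initial slope y'(0) = 1.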

  14. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst; isotopic compositions may therefore be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by: the number of different ratios, the number of points belonging to each ratio group, and the ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control

  15. Influence of sampling interval and number of projections on the quality of SR-XFMT reconstruction

    International Nuclear Information System (INIS)

    Deng Biao; Yu Xiaohan; Xu Hongjie

    2007-01-01

    Synchrotron Radiation based X-ray Fluorescent Microtomography (SR-XFMT) is a nondestructive technique for detecting elemental composition and distribution inside a specimen with high spatial resolution and sensitivity. In this paper, computer simulation of an SR-XFMT experiment is performed. The influence of the sampling interval and the number of projections on the quality of SR-XFMT image reconstruction is analyzed. It is found that the sampling interval has a greater effect on the quality of reconstruction than the number of projections. (authors)

  16. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA systems and the establishment of district meter areas (DMAs. Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
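A stripped-down version of the idea: run a scalar Kalman filter over the flow series, and let the next sampling interval shrink when the normalized innovation is large (a possible burst) and grow while the signal is quiet. The random-walk process model, the threshold, and the doubling rule below are illustrative guesses, not the paper's algorithm.

```python
import math

def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar random-walk Kalman filter.
    Returns the updated state, variance and the normalized innovation."""
    x_pred, P_pred = x, P + Q          # random-walk prediction
    S = P_pred + R                     # innovation variance
    nu = z - x_pred                    # innovation (residual)
    K = P_pred / S                     # Kalman gain
    return x_pred + K * nu, (1.0 - K) * P_pred, nu / math.sqrt(S)

def adaptive_monitor(flow, Q=0.5, R=1.0, thresh=3.0, dt_min=1, dt_max=16):
    """Walk through a flow series with a self-adjusting sampling interval:
    shrink it after a large normalized residual (possible burst), grow it
    while the signal is quiet.  Returns visited times and alarm times."""
    x, P = flow[0], 1.0
    t, dt = 0, dt_min
    visited, alarms = [], []
    while t < len(flow):
        # the process noise scales with the elapsed interval dt
        x, P, r = kalman_step(x, P, flow[t], Q * dt, R)
        visited.append(t)
        if abs(r) > thresh:
            alarms.append(t)
            dt = dt_min                # sample densely around the anomaly
        else:
            dt = min(dt * 2, dt_max)   # relax while quiet
        t += dt
    return visited, alarms
```

On a smooth diurnal-like curve with a step burst injected, this visits far fewer points than fixed high-rate sampling while still flagging the burst.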

  17. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications
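The RHW itself is easy to state: it is the CI half-width z·s/√n expressed as a fraction of the observed mean, so quadrupling the number of core samples roughly halves it. A minimal sketch, using the normal critical value 1.96 in place of the t quantile (slightly optimistic for very small n):

```python
import math

def rhw(cv, n, z=1.96):
    """Relative half-width of an approximate 95% CI on a mean, given the
    coefficient of variation cv = s / |mean| and n observations."""
    return z * cv / math.sqrt(n)

def relative_half_width(values, z=1.96):
    """The same quantity computed from data."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return z * s / (math.sqrt(n) * abs(mean))
```

With cv = 0.5, for instance, about 16 core samples are needed to reach an RHW near 25%, consistent with the square-root improvement pattern the tables describe.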

  18. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    2016-01-01

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence ... of the methods often (but not always) varies with the features of the data generating process....

  19. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples

  20. Networked control systems with communication constraints :tradeoffs between sampling intervals, delays and performance

    NARCIS (Netherlands)

    Heemels, W.P.M.H.; Teel, A.R.; Wouw, van de N.; Nesic, D.

    2010-01-01

    There are many communication imperfections in networked control systems (NCS) such as varying transmission delays, varying sampling/transmission intervals, packet loss, communication constraints and quantization effects. Most of the available literature on NCS focuses on only some of these aspects,

  1. A Novel Finite-Sum Inequality-Based Method for Robust H∞ Control of Uncertain Discrete-Time Takagi-Sugeno Fuzzy Systems With Interval-Like Time-Varying Delays.

    Science.gov (United States)

    Zhang, Xian-Ming; Han, Qing-Long; Ge, Xiaohua

    2017-09-22

    This paper is concerned with the problem of robust H∞ control of an uncertain discrete-time Takagi-Sugeno fuzzy system with an interval-like time-varying delay. A novel finite-sum inequality-based method is proposed to provide a tighter estimation on the forward difference of certain Lyapunov functional, leading to a less conservative result. First, an auxiliary vector function is used to establish two finite-sum inequalities, which can produce tighter bounds for the finite-sum terms appearing in the forward difference of the Lyapunov functional. Second, a matrix-based quadratic convex approach is employed to equivalently convert the original matrix inequality including a quadratic polynomial on the time-varying delay into two boundary matrix inequalities, which delivers a less conservative bounded real lemma (BRL) for the resultant closed-loop system. Third, based on the BRL, a novel sufficient condition on the existence of suitable robust H∞ fuzzy controllers is derived. Finally, two numerical examples and a computer-simulated truck-trailer system are provided to show the effectiveness of the obtained results.

  2. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial involves interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the asymptotic method is used, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods are compared using the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
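The asymptotic method criticized above is straightforward to write down, which is part of its appeal. A sketch of the Wald interval for the ARR follows; the paper's PHP implementations and the ADAC variants are not reproduced here.

```python
import math

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    """Absolute risk reduction with its asymptotic (Wald) confidence
    interval.  Returns (arr, lower, upper)."""
    p_c = events_ctrl / n_ctrl        # control event rate (CER)
    p_t = events_trt / n_trt          # experimental event rate (EER)
    arr = p_c - p_t
    se = math.sqrt(p_c * (1 - p_c) / n_ctrl + p_t * (1 - p_t) / n_trt)
    return arr, arr - z * se, arr + z * se

def number_needed_to_treat(arr):
    """NNT = 1 / ARR, meaningful when the treatment reduces risk."""
    return 1.0 / arr if arr > 0 else math.inf
```

The known weakness this paper targets is visible at the extremes: with event counts near 0 or n, the standard error collapses and the Wald interval becomes degenerate.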

  3. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures ... and present a new estimator for the asymptotic ‘variance’ of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies where we study the impact of the jump activity, the jump size of the jumps in the price and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log–transformed realised variance, is generally good, whereas...

  4. Interval stability for complex systems

    Science.gov (United States)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of different natures, such as power grids and neural networks.
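Interval basin stability lends itself to a direct Monte Carlo estimate: perturb the attractor, integrate for the chosen time interval, and count returns. The sketch below does this for the toy bistable system x' = x − x³ with attractor x* = 1; the system, the kick distribution and the return tolerance are stand-ins chosen for illustration, not examples from the paper.

```python
import random

def simulate(x0, T, dt=0.01):
    """Integrate x' = x - x^3 (attractors at +1 and -1) with explicit
    Euler; dt is small enough for this mildly stiff toy system."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (x - x ** 3)
    return x

def interval_basin_stability(T, kick=1.5, n=2000, tol=0.1, seed=0):
    """Monte Carlo estimate of the probability that a uniform random kick
    of size up to `kick` away from the attractor x* = 1 has returned to
    within `tol` of it by time T (interval basin stability)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x0 = 1.0 + rng.uniform(-kick, kick)
        if abs(simulate(x0, T) - 1.0) < tol:
            hits += 1
    return hits / n
```

Because distant-but-recoverable states need time to travel back, the estimate grows with the observation interval T; that dependence on T is exactly what separates interval basin stability from asymptotic basin stability.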

  5. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT. (paper)

  6. A Note on Confidence Interval for the Power of the One Sample Test

    OpenAIRE

    A. Wong

    2010-01-01

    In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

  7. On the reconstruction of inclusions in a heat conductive body from dynamical boundary data over a finite time interval

    International Nuclear Information System (INIS)

    Ikehata, Masaru; Kawashita, Mishio

    2010-01-01

    The enclosure method was originally introduced for inverse problems concerning non-destructive evaluation governed by elliptic equations. It was developed as one of the useful approaches in inverse problems and applied for various equations. In this paper, an application of the enclosure method to an inverse initial boundary value problem for a parabolic equation with a discontinuous coefficient is given. A simple method to extract the depth of unknown inclusions in a heat conductive body from a single set of the temperature and heat flux on the boundary observed over a finite time interval is introduced. Other related results with infinitely many data are also reported. One of them gives the minimum radius of the open ball centred at a given point that contains the inclusions. The formula for the minimum radius is newly discovered

  8. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two-sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  9. Bayesian analysis of finite population sampling in multivariate co-exchangeable structures with separable covariance matric

    OpenAIRE

    Shaw, Simon C.; Goldstein, Michael

    2017-01-01

    We explore the effect of finite population sampling in design problems with many variables cross-classified in many ways. In particular, we investigate designs where we wish to sample individuals belonging to different groups for which the underlying covariance matrices are separable between groups and variables. We exploit the generalised conditional independence structure of the model to show how the analysis of the full model can be reduced to an interpretable series of lower dimensional p...

  10. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Full Text Available Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in the experts' judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and has good compatibility; it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss that fusion can cause. Original expert judgments are retained rather objectively throughout the processing procedure. Constructing the cumulative probability function and random sampling require no human intervention or judgment and can be implemented easily by computer programs, giving the method an apparent advantage in evaluation practice for fairly large index systems.

  11. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording are tested and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.

  12. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of finite population total under model – based framework. Nonparametric regression approach as a method of estimating finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  13. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  14. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  15. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    Full Text Available In practical utilization of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. Allocation of the sample size becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.

  16. Probabilistic finite element stiffness of a laterally loaded monopile based on an improved asymptotic sampling method

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2015-01-01

    The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.

  17. Finite element model updating of a small steel frame using neural networks

    International Nuclear Information System (INIS)

    Zapico, J L; González, M P; Alonso, R; González-Buelga, A

    2008-01-01

    This paper presents an experimental and analytical dynamic study of a small-scale steel frame. The experimental model was physically built and dynamically tested on a shaking table in a series of different configurations obtained from the original one by changing the mass and by causing structural damage. Finite element modelling and parameterization with physical meaning is iteratively tried for the original undamaged configuration. The finite element model is updated through a neural network, the natural frequencies of the model being the net input. The updating process is made more accurate and robust by using a regressive procedure, which constitutes an original contribution of this work. A novel simplified analytical model has been developed to evaluate the reduction of bending stiffness of the elements due to damage. The experimental results of the rest of the configurations have been used to validate both the updated finite element model and the analytical one. The statistical properties of the identified modal data are evaluated. From these, the statistical properties and a confidence interval for the estimated model parameters are obtained by using the Latin Hypercube sampling technique. The results obtained are successful: the updated model accurately reproduces the low modes identified experimentally for all configurations, and the statistical study of the transmission of errors yields a narrow confidence interval for all the identified parameters

  18. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as the bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.

  19. Finite Discrete Gabor Analysis

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    ... frequency bands at certain times. Gabor theory can be formulated for both functions on the real line and for discrete signals of finite length. The two theories are largely the same because many aspects come from the same underlying theory of locally compact Abelian groups. The two types of Gabor systems can also be related by sampling and periodization. This thesis extends on this theory by showing new results for window construction. It also provides a discussion of the problems associated with discrete Gabor bases. The sampling and periodization connection is handy because it allows Gabor systems on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exist for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite...

  20. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions: one uses special sampling methods for special distributions; the other uses general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open source codes are included in the general sampling method for maximum user convenience. Results show that distributions popular in particle transport can be sampled correctly with these tools while keeping them convenient to use. (authors)
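The two "general sampling methods" described, inverse-CDF lookup for an arbitrary discrete distribution and sampling of a 1D continuous density on a finite interval, can be sketched generically; rejection sampling stands in here for whatever JMCT actually implements, and all names are hypothetical.

```python
import bisect
import random

def make_discrete_sampler(weights, rng):
    """Sampler for an arbitrary discrete distribution given unnormalized
    weights, via inverse-CDF lookup on the cumulative weights."""
    total = float(sum(weights))
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc / total)
    return lambda: bisect.bisect_left(cum, rng.random())

def sample_continuous(pdf, a, b, pdf_max, rng):
    """Rejection sampling from a 1D density on the finite interval [a, b],
    given an upper bound pdf_max on the density."""
    while True:
        x = rng.uniform(a, b)
        if rng.random() * pdf_max <= pdf(x):
            return x
```

The inverse-CDF lookup costs O(log k) per draw after an O(k) setup; for heavily used discrete tables, an alias-method sampler would bring that to O(1) per draw.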

  1. 3D visualization and finite element mesh formation from wood anatomy samples, Part I – Theoretical approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    Full Text Available The work summarizes the algorithms created for the formation of a finite element (FE) mesh derived from a bitmap pattern. The process of registration, segmentation and meshing is described in detail. The C++ STL library from the Insight Toolkit (ITK) project together with the Visualization Toolkit (VTK) were used for base processing of images. Several methods for appropriate mesh output are discussed. The multiplatform application WOOD3D was assembled for the task under the GNU GPL license. Several methods of segmentation and, mainly, different ways of contouring were included. Tetrahedral and rectilinear types of mesh were programmed. Some simple ways of improving mesh quality are mentioned. Testing and verification of the final program on wood anatomy samples of spruce and walnut was realized. Methods of preparing microscopic anatomy samples are depicted. Final utilization of the formed mesh in a simple structural analysis was performed. The article discusses the main problems in image analysis due to incompatible colour spaces, sample preparation, thresholding and the final conversion into a finite element mesh. Assembling these tasks together and evaluating the application are the main original results of the presented work. Two thresholding filters from ITK were used in the program: an Otsu-based filter and a binary filter. The most problematic task was the production of wood anatomy samples under unique light conditions with minimal or zero colour space shift, and the subsequent definition of appropriate thresholds (the thresholding parameters and the connected prefiltering and registration methods), which influence the continuity and mainly the separation of the wood anatomy structure. A solution based on staining the samples, followed by quick image analysis, is suggested. A further original result of the work is a complex, fully automated application which offers three types of finite element mesh

  2. A Note on Confidence Interval for the Power of the One Sample Test

    Directory of Open Access Journals (Sweden)

    A. Wong

    2010-01-01

    Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest in any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1−α)100% confidence interval for the power of the Student's t-test that detects the difference (μ−μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size α test of the mean are given.
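
The note's calculations use only the standard normal density and CDF. As an illustration (not the paper's exponential-family method), here is a minimal large-sample sketch of the power and sample-size calculations for a two-sided one-sample test; both function names are invented for this example:

```python
from math import sqrt
from statistics import NormalDist

def z_test_power(delta, sigma, n, alpha=0.05):
    """Approximate power of the two-sided one-sample z-test for a mean shift
    delta, using only the standard normal CDF (large-sample simplification)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)    # upper alpha/2 normal quantile
    nc = delta * sqrt(n) / sigma          # noncentrality parameter
    return nd.cdf(nc - z_crit) + nd.cdf(-nc - z_crit)

def z_test_sample_size(delta, sigma, power=0.8, alpha=0.05):
    """Sample size needed for a given effect size and power (same setting)."""
    nd = NormalDist()
    return ((nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)) * sigma / delta) ** 2

power = z_test_power(delta=0.5, sigma=1.0, n=30)
n_req = z_test_sample_size(delta=0.5, sigma=1.0)
```

At the null (delta = 0) the "power" reduces to the size alpha, which is a quick sanity check on the formula.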

  3. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
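
As a small illustration of one interval type the book covers, here is a hedged sketch of a two-sided prediction interval for a single future observation, using the normal quantile in place of the t quantile (a large-sample simplification; `prediction_interval` is a name invented here):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def prediction_interval(sample, conf=0.95):
    """Two-sided normal-theory prediction interval for one future observation:
    xbar +/- z * s * sqrt(1 + 1/n). Large-sample sketch (z replaces the
    exact t quantile)."""
    n = len(sample)
    xbar, s = mean(sample), stdev(sample)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * s * sqrt(1 + 1 / n)
    return xbar - half, xbar + half

lo, hi = prediction_interval(list(range(1, 101)))
```

Note the sqrt(1 + 1/n) factor: a prediction interval must cover the variability of the future observation itself, so it is wider than a confidence interval for the mean.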

  4. Finite element simulation of the T-shaped ECAP processing of round samples

    Science.gov (United States)

    Shaban Ghazani, Mehdi; Fardi-Ilkhchy, Ali; Binesh, Behzad

    2018-05-01

    Grain refinement is the only mechanism that increases the yield strength and toughness of materials simultaneously. Severe plastic deformation is one of the promising methods to refine the microstructure of materials. Among the different severe plastic deformation processes, T-shaped equal channel angular pressing (T-ECAP) is a relatively new technique. In the present study, finite element analysis was conducted to evaluate the deformation behavior of metals during the T-ECAP process. The study focused mainly on flow characteristics, plastic strain distribution and its homogeneity, damage development, and pressing force, which are among the most important factors governing the sound and successful processing of nanostructured materials by severe plastic deformation techniques. The results showed that plastic strain is localized on the bottom side of the sample and that uniform deformation is not possible with T-ECAP processing. The friction coefficient between the sample and the die channel wall has little effect on the strain distributions in the mirror and transverse planes of the deformed sample. Damage analysis showed that superficial cracks may initiate from the bottom side of the sample, and that their propagation is limited by the compressive state of stress. It was demonstrated that a V-shaped deformation zone exists in the T-ECAP process and that the pressing load needed to execute the deformation increases with friction.

  5. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    Science.gov (United States)

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
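
The empirical-likelihood construction itself is involved; a much simpler empirical point estimate of sensitivity at a fixed specificity (not the authors' interval method) can be sketched as follows, with all names and the simulated scores being assumptions of this example:

```python
import random
from statistics import quantiles

random.seed(7)
healthy  = [random.gauss(0.0, 1.0) for _ in range(5000)]
diseased = [random.gauss(2.0, 1.0) for _ in range(5000)]

def sensitivity_at_specificity(healthy, diseased, spec=0.90):
    """Empirical estimate: the cut-off is the spec-quantile of the healthy
    scores; sensitivity is the share of diseased scores above it."""
    cut = quantiles(healthy, n=100)[round(spec * 100) - 1]  # e.g. 90th percentile
    return sum(x > cut for x in diseased) / len(diseased)

sens = sensitivity_at_specificity(healthy, diseased)
```

Raising the required specificity moves the cut-off up and necessarily lowers the estimated sensitivity.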

  6. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  7. Taking account of sample finite dimensions in processing measurements of double differential cross sections of slow neutron scattering

    International Nuclear Information System (INIS)

    Lisichkin, Yu.V.; Dovbenko, A.G.; Efimenko, B.A.; Novikov, A.G.; Smirenkina, L.D.; Tikhonova, S.I.

    1979-01-01

    Described is a method of taking account of finite sample dimensions in processing measurement results for double differential cross sections (DDCS) of slow neutron scattering. The necessity of a corrective approach to accounting for the effect of finite sample dimensions is shown, and, in particular, the necessity of preliminary processing of DDCS, taking into account the attenuation coefficients of single-scattered neutrons (SSN) for measurements on the sample with a container and on the container alone. The correction for multiple scattering (MS), calculated on the basis of the dynamic model, should be obtained with resolution effects taken into account. To minimize the influence of the dynamic model used in the calculations, it is preferable to make absolute measurements of DDCS and to use the subtraction method. The above method was realized in a set of programs for the BESM-5 computer. The FISC program computes the coefficients of SSN attenuation and the correction for MS. The DDS program serves to compute a model DDCS averaged over the resolution function of an instrument. The SCATL program prepares the initial information necessary for the FISC program and permits computation of the scattering law for all materials. Results are presented of applying the above method to experimental data on the DDCS of water measured with the DIN-1M spectrometer.

  8. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    2011-01-01

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic "variance" of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, of the jump size of the jumps in the price and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence...
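
The contrast the paper studies — realised variance picks up squared jumps while jump-robust measures largely do not — can be illustrated with a toy simulation (not the paper's estimator; the sample size, jump size and seed are arbitrary choices of this sketch):

```python
import random
from math import pi, sqrt

random.seed(2)
n = 10_000                                     # intraday returns, unit integrated variance
r = [random.gauss(0.0, sqrt(1.0 / n)) for _ in range(n)]
r[n // 2] += 0.5                               # one additive price jump

rv = sum(x * x for x in r)                     # realised variance: picks up jump^2
bv = (pi / 2) * sum(abs(r[i]) * abs(r[i - 1])  # bipower variation: jump-robust
                    for i in range(1, n))
```

Here rv inflates by roughly the squared jump (about 0.25), while bv stays close to the integrated variance of 1, since the jump enters bv only through two small cross-products.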

  9. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

    Directory of Open Access Journals (Sweden)

    T. A. Kharkovskaia

    2014-05-01

    Full Text Available The method of interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters consists in the following: given an uncertainty constraint on the state values of the system, limiting the initial conditions and the set of admissible values for the vector of unknown parameters and inputs, an interval estimate of the system state variables, containing the actual state at a given time, must remain valid over the whole considered time segment. Conditions for the design of interval observers for the considered class of systems are given. They are: boundedness of the input and state; the existence of a majorizing function defining the uncertainty vector for the system, with Lipschitz continuity or finiteness of this function; and the existence of an observer gain with a suitable Lyapunov matrix. The main condition for the design of such a device is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is considered. In order to ensure cooperativity of the interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of a biological reactor. Possible applications of such interval estimation systems are robust control, where the presence of various types of uncertainty in the system dynamics is assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.

  10. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or the postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the uncomplicated pregnancies, were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...
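
A common nonparametric way to compute such reference intervals — the central 95% range between the 2.5th and 97.5th percentiles of the reference sample — can be sketched as follows (illustrative only; the study's exact calculation method is not specified in the abstract):

```python
from statistics import quantiles

def reference_interval(values):
    """Nonparametric central 95% reference interval: the 2.5th and 97.5th
    percentiles of the reference sample."""
    cuts = quantiles(values, n=40)   # 39 cut points in 2.5% steps
    return cuts[0], cuts[-1]

# Hypothetical analyte values standing in for one gestational-week group.
lo, hi = reference_interval(list(range(1, 1001)))
```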

  11. Finite-Time Attractivity for Diagonally Dominant Systems with Off-Diagonal Delays

    Directory of Open Access Journals (Sweden)

    T. S. Doan

    2012-01-01

    Full Text Available We introduce a notion of attractivity for delay equations which are defined on bounded time intervals. Our main result shows that linear delay equations are finite-time attractive, provided that the delay is only in the coupling terms between different components, and the system is diagonally dominant. We apply this result to a nonlinear Lotka-Volterra system and show that the delay is harmless and does not destroy finite-time attractivity.

  12. Delay-Dependent Finite-Time H∞ Controller Design for a Kind of Nonlinear Descriptor Systems via a T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Baoyan Zhu

    2015-01-01

    Full Text Available Delay-dependent finite-time H∞ controller design problems are investigated for a kind of nonlinear descriptor system via a T-S fuzzy model in this paper. Solvable conditions for the finite-time H∞ controller are given which guarantee that the closed-loop system is impulse-free and finite-time bounded and holds the H∞ performance to a prescribed disturbance attenuation level γ. The given method is able to eliminate the impulsive behavior caused by descriptor systems in a finite-time interval, which confirms the existence and uniqueness of solutions in the interval. By constructing a nonsingular matrix, we overcome the difficulty that results in an infeasible linear matrix inequality (LMI). Using the FEASP and GEVP solvers of the LMI toolbox, we perform simulations to validate the proposed methods for a nonlinear descriptor system via the T-S fuzzy model, which shows the application of the T-S fuzzy method in studying the finite-time control problem of a nonlinear system. The method was also applied to a biological economy system to eliminate impulsive behavior at the bifurcation value, stabilize the closed-loop system in a finite-time interval, and achieve an H∞ performance level.

  13. Fault detection for discrete-time LPV systems using interval observers

    Science.gov (United States)

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-10-01

    This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to the existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity and nonnegative conditions lead to less conservative LMI characterisations. Furthermore, without the need to design the residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for FD decision. Finally, simulation results are presented for showing the effectiveness and superiority of the proposed method.

  14. An approach to the selection of recommended cooling intervals for the activation analysis of unknown samples with Ge(Li) gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Hirose, Akio; Ishii, Daido

    1975-01-01

    Estimation of the optimum cooling interval by mathematical or graphical methods for Ge(Li) γ-ray spectrometry performed in the presence of Compton interferences, and recommended cooling intervals available for the activation analysis of unknown samples, have been proposed and applied to the non-destructive activation analysis of gold in pure copper. In the presence of Compton interferences, two kinds of optimum cooling interval were discussed. One maximizes the S/N ratio of a desired photo-peak; this interval, originated by Isenhour et al. using a computer technique, is abbreviated in this work as t(S/N). The other, which minimizes the relative standard deviation (δs/S) of the net photo-peak counting rate of interest (S), was originated by Tomov et al. and Quittner et al., and is abbreviated in this work as t(opt) or t'(opt). All equations derived by the above authors, however, have the practical disadvantage of including a term relating to the intensity of the desired photo-peak, making it difficult to predict the optimum cooling interval before irradiation, since in chemical analysis the concentration of the desired element, or the intensity of the photo-peak of interest, should be considered ''unknown''. In the present work, an approach to the selection of a recommended cooling interval applicable to unknown samples has been discussed, and the interval t(opt), which minimizes the lower limit of detection of a desired element under given irradiation and counting conditions, has been proposed. (Evans, J.)

  15. The dark side of Interval Temporal Logic: sharpening the undecidability border

    DEFF Research Database (Denmark)

    Bresolin, Davide; Monica, Dario Della; Goranko, Valentin

    2011-01-01

    on the class of models (in our case, the class of interval structures) in which it is interpreted. In this paper, we have identified several new minimal undecidable logics amongst the fragments of Halpern-Shoham logic HS, including the logic of the overlaps relation, over the classes of all and of finite linear orders, as well as the logic of the meet and subinterval relations, over the class of dense linear orders. Together with previous undecidability results, this work contributes to delineating the border of the dark side of interval temporal logics quite sharply.

  16. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
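
As a rough illustration of the underlying idea — widening a count-based interval to reflect negative-binomial overdispersion — here is a crude normal-approximation sketch (not the article's Stirling-expansion construction; `k` is an assumed dispersion parameter of this example):

```python
from math import sqrt

def approx_count_ci(count, k=float("inf"), z=1.96):
    """Crude normal-approximation 95% CI for a mean fiber count, using the
    negative binomial variance count + count**2 / k; k -> infinity recovers
    the Poisson case (variance = count)."""
    extra = 0.0 if k == float("inf") else count * count / k
    half = z * sqrt(count + extra)
    return max(0.0, count - half), count + half

poisson_ci = approx_count_ci(100)          # (80.4, 119.6)
negbin_ci = approx_count_ci(100, k=50)     # wider, reflecting counter variability
```

The widening of the negative-binomial interval relative to the Poisson one mirrors the article's motivation: fiber-counting uncertainty exceeds pure sampling error.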

  17. On the Influence of the Data Sampling Interval on Computer-Derived K-Indices

    Directory of Open Access Journals (Sweden)

    A Bernard

    2011-06-01

    Full Text Available The K index was devised by Bartels et al. (1939) to provide an objective monitoring of irregular geomagnetic activity. The K index was then routinely used to monitor the magnetic activity at permanent magnetic observatories as well as at temporary stations. The increasing number of digital and sometimes unmanned observatories and the creation of INTERMAGNET put the question of computer production of K at the centre of the debate. Four algorithms were selected during the Vienna meeting (1991) and endorsed by IAGA for the computer production of K indices. We used one of them (the FMI algorithm) to investigate the impact of the geomagnetic data sampling interval on computer-produced K values, through the comparison of the computer-derived K values for the period 2009, January 1st to 2010, May 31st at the Port-aux-Francais magnetic observatory using magnetic data series with different sampling rates (the smaller: 1 second; the larger: 1 minute). The impact is investigated on both 3-hour range values and K index data series, as a function of the activity level, for low and moderate geomagnetic activity.
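
The core of the comparison — aggregating 1-second data to 1-minute values and re-deriving 3-hour ranges — can be sketched as follows (synthetic data; this is not the FMI algorithm itself, and all names are this example's own):

```python
import math

def three_hour_range(series):
    """Range (max - min) of a magnetic component over one 3-hour window."""
    return max(series) - min(series)

def downsample_means(series, m):
    """Replace each block of m samples by its mean (1 s -> 1 min for m = 60)."""
    return [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]

# Synthetic 3-hour, 1-second series: slow variation plus one short spike.
one_sec = [10 * math.sin(2 * math.pi * t / 10_800) for t in range(10_800)]
one_sec[5_000] += 50                      # 1-s spike, largely averaged away at 1 min

r1s = three_hour_range(one_sec)                        # sees the full spike
r1m = three_hour_range(downsample_means(one_sec, 60))  # spike attenuated ~60x
```

Block-averaging can only shrink (or preserve) the range, which is one mechanism by which the sampling interval feeds through to the derived K value.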

  18. A Class of Estimators for Finite Population Mean in Double Sampling under Nonresponse Using Fractional Raw Moments

    Directory of Open Access Journals (Sweden)

    Manzoor Khan

    2014-01-01

    Full Text Available This paper presents new classes of estimators for estimating the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that a proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of a proposed class of estimators.

  19. Application of the entropic coefficient for interval number optimization during interval assessment

    Directory of Open Access Journals (Sweden)

    Tynynyka A. N.

    2017-06-01

    Full Text Available In solving many statistical problems, the most precise choice of the distribution law of a random variable, a sample of which the authors observe, is required. This choice requires the construction of an interval series; therefore, the problem arises of assigning an optimal number of intervals, and this study proposes a number of formulas for solving it. Which of these formulas solves the problem more accurately? In [9], this question is investigated using the Pearson criterion. This article describes the procedure and, on its basis, evaluates formulas available in the literature and proposes new formulas using the entropy coefficient. A comparison is made with the previously published results of applying Pearson's goodness-of-fit criterion for these purposes. Differences in the estimates of the accuracy of the formulas are found. The proposed new formulas for calculating the number of intervals showed the best results. Calculations have been made to compare the behaviour of the same formulas for sample data distributed according to the normal law and the Rayleigh law.
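
The article's entropy-coefficient formulas are not reproduced in the abstract, but the classical rules of thumb they compete with can be sketched (Sturges, square-root, and Rice rules; the function names are this example's own):

```python
from math import ceil, log2

def sturges(n):
    """Sturges' rule: k = 1 + log2(n)."""
    return ceil(1 + log2(n))

def square_root_rule(n):
    """k = sqrt(n)."""
    return ceil(n ** 0.5)

def rice(n):
    """Rice rule: k = 2 * n**(1/3)."""
    return ceil(2 * n ** (1 / 3))

bins = {rule.__name__: rule(200) for rule in (sturges, square_root_rule, rice)}
```

Even for the same sample size the rules disagree noticeably, which is exactly the ambiguity the article sets out to resolve.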

  20. On entire functions restricted to intervals, partition of unities, and dual Gabor frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2014-01-01

    Partition of unities appears in many places in analysis. Typically it is generated by compactly supported functions with a certain regularity. In this paper we consider partition of unities obtained as integer-translates of entire functions restricted to finite intervals. We characterize the enti...

  1. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
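
A two-component normal mixture fitted by maximum likelihood is typically computed with the EM algorithm; a self-contained sketch on synthetic data (a stand-in for the paper's economic series; all names and values are this example's assumptions) looks like this:

```python
import random
from math import exp, pi, sqrt

random.seed(0)
# Synthetic stand-in for two regimes: a 40/60 mixture of N(0,1) and N(5,1).
data = [random.gauss(0, 1) if random.random() < 0.4 else random.gauss(5, 1)
        for _ in range(2000)]

def npdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def em_two_normals(xs, iters=60):
    """Maximum likelihood fit of a two-component normal mixture via plain EM."""
    w, mu1, mu2, s1, s2 = 0.5, min(xs), max(xs), 1.0, 1.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        g = [w * npdf(x, mu1, s1) /
             (w * npdf(x, mu1, s1) + (1 - w) * npdf(x, mu2, s2)) for x in xs]
        # M-step: responsibility-weighted parameter updates
        n1 = sum(g); n2 = len(xs) - n1
        mu1 = sum(gi * x for gi, x in zip(g, xs)) / n1
        mu2 = sum((1 - gi) * x for gi, x in zip(g, xs)) / n2
        s1 = sqrt(sum(gi * (x - mu1) ** 2 for gi, x in zip(g, xs)) / n1)
        s2 = sqrt(sum((1 - gi) * (x - mu2) ** 2 for gi, x in zip(g, xs)) / n2)
        w = n1 / len(xs)
    return w, mu1, mu2, s1, s2

w, mu1, mu2, s1, s2 = em_two_normals(data)
```

Initialising the component means at the sample minimum and maximum is a simple way to avoid the two components collapsing onto the same cluster.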

  2. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: the largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

  3. Confidence intervals for correlations when data are not normal.

    Science.gov (United States)

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
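
The baseline method under study, the Fisher z' interval, is straightforward to compute; a minimal sketch (assuming bivariate normality, exactly the assumption the article shows can fail under skew and kurtosis):

```python
from math import atanh, tanh, sqrt
from statistics import NormalDist

def fisher_z_ci(r, n, conf=0.95):
    """Classical Fisher z' confidence interval for a correlation r computed
    from n pairs. Assumes bivariate normality."""
    z = atanh(r)                                           # Fisher transform
    half = NormalDist().inv_cdf(0.5 + conf / 2) / sqrt(n - 3)
    return tanh(z - half), tanh(z + half)

lo, hi = fisher_z_ci(0.5, 50)
```

Back-transforming through tanh keeps the interval inside (-1, 1) and makes it asymmetric around r, but it does nothing to protect the coverage when the data are nonnormal.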

  4. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

    OpenAIRE

    Koike, Ken-ichi

    2007-01-01

    For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

  5. National Survey of Adult and Pediatric Reference Intervals in Clinical Laboratories across Canada: A Report of the CSCC Working Group on Reference Interval Harmonization.

    Science.gov (United States)

    Adeli, Khosrow; Higgins, Victoria; Seccombe, David; Collier, Christine P; Balion, Cynthia M; Cembrowski, George; Venner, Allison A; Shaw, Julie

    2017-11-01

    Reference intervals are widely used decision-making tools in laboratory medicine, serving as health-associated standards to interpret laboratory test results. Numerous studies have shown wide variation in reference intervals, even between laboratories using assays from the same manufacturer. Lack of consistency in either sample measurement or reference intervals across laboratories challenges the expectation of standardized patient care regardless of testing location. Here, we present data from a national survey conducted by the Canadian Society of Clinical Chemists (CSCC) Reference Interval Harmonization (hRI) Working Group that examines variation in laboratory reference sample measurements, as well as pediatric and adult reference intervals currently used in clinical practice across Canada. Data on reference intervals currently used by 37 laboratories were collected through a national survey to examine the variation in reference intervals for seven common laboratory tests. Additionally, 40 clinical laboratories participated in a baseline assessment by measuring six analytes in a reference sample. Of the seven analytes examined, alanine aminotransferase (ALT), alkaline phosphatase (ALP), and creatinine reference intervals were most variable. As expected, reference interval variation was more substantial in the pediatric population and varied between laboratories using the same instrumentation. Reference sample results differed between laboratories, particularly for ALT and free thyroxine (FT4). Reference interval variation was greater than test result variation for the majority of analytes. It is evident that there is a critical lack of harmonization in laboratory reference intervals, particularly for the pediatric population. Furthermore, the observed variation in reference intervals across instruments cannot be explained by the bias between the results obtained on instruments by different manufacturers. 
Copyright © 2017 The Canadian Society of Clinical Chemists

  6. Homogenization-based interval analysis for structural-acoustic problem involving periodical composites and multi-scale uncertain-but-bounded parameters.

    Science.gov (United States)

    Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong

    2017-04-01

    This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
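
The first-order Taylor expansion interval idea — bounding a response over an uncertain-but-bounded parameter by linearising at the interval midpoint — can be sketched in scalar form (a toy stand-in for the paper's finite element setting; all names are this example's own):

```python
def taylor_interval(f, dfdp, p_mid, p_rad):
    """First-order Taylor interval enclosure of f over [p_mid - p_rad, p_mid + p_rad]:
    f(p) ~ f(p_mid) +/- |f'(p_mid)| * p_rad (accurate to first order only)."""
    center = f(p_mid)
    radius = abs(dfdp(p_mid)) * p_rad
    return center - radius, center + radius

# Toy response u(k) = 1/k with uncertain-but-bounded k in [1.9, 2.1].
lo, hi = taylor_interval(lambda k: 1.0 / k, lambda k: -1.0 / k ** 2, 2.0, 0.1)
```

For wide parameter intervals the first-order bound degrades, which is why the paper introduces a subinterval technique: splitting the interval and applying the expansion piecewise tightens the enclosure.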

  7. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR from several studies, the number needed to treat (NNT, and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.

  8. Probabilistic finite elements

    Science.gov (United States)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings, with the yield stress taken as a random field.

  9. Transmission of electrons with flat passbands in finite superlattices

    International Nuclear Information System (INIS)

    Barajas-Aguilar, A H; Rodríguez-Magdaleno, K A; Martínez-Orozco, J C; Enciso-Muñoz, A; Contreras-Solorio, D A

    2013-01-01

    Using the transfer matrix method and the Ben Daniel-Duke equation for variable mass electrons propagation, we calculate the transmittance for symmetric finite superlattices where the width and the height of the potential barriers follow a linear dependence. The width and height of the barriers decreases from the center to the ends of the superlattice. The transmittance presents intervals of stopbands and quite flat passbands.
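The transfer matrix method mentioned above can be illustrated for constant-mass electrons and rectangular barriers (the paper treats position-dependent mass via the Ben Daniel-Duke equation, which this sketch omits). Units ħ = 2m = 1; the barrier widths and heights are free inputs, so a graded superlattice corresponds to passing linearly varying lists:

```python
import numpy as np

def transmittance(E, widths, heights):
    """Transmission through a sequence of rectangular barriers
    (units hbar = 2m = 1, constant mass), via 2x2 transfer matrices."""
    # potential profile: free / barrier1 / free / barrier2 / ... / free
    V, d = [0.0], [0.0]
    for w, h in zip(widths, heights):
        V += [h, 0.0]
        d += [w, 0.0]
    k = [np.sqrt(complex(E - v)) for v in V]  # imaginary inside barriers
    M = np.eye(2, dtype=complex)
    for j in range(len(V) - 1):
        # propagate across region j, then match psi, psi' at the interface
        P = np.array([[np.exp(1j * k[j] * d[j]), 0.0],
                      [0.0, np.exp(-1j * k[j] * d[j])]])
        k1, k2 = k[j], k[j + 1]
        D = np.array([[k2 + k1, k2 - k1],
                      [k2 - k1, k2 + k1]]) / (2 * k2)
        M = D @ P @ M
    return abs(1.0 / M[1, 1]) ** 2   # det(M) = 1 for equal outer media
```

For a single barrier this reproduces the textbook tunnelling formula T = [1 + V² sinh²(κa) / (4E(V − E))]⁻¹ with κ = √(V − E), which gives a quick correctness check.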

  10. Joint interval reliability for Markov systems with an application in transmission line reliability

    International Nuclear Information System (INIS)

    Csenki, Attila

    2007-01-01

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1, ..., k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.
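Numerically, the joint interval reliability can be evaluated by alternating the full generator Q between the intervals with the up-restricted sub-generator Q_UU inside them. A sketch (the two-state rates are hypothetical, and `expm` is a naive truncated series, adequate only for the small, well-scaled generators used here):

```python
import numpy as np

def expm(A, terms=60):
    """Truncated Taylor-series matrix exponential (small matrices only)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def joint_interval_reliability(Q, alpha, up, intervals):
    """P(system is in the up-set U for every t in the union of
    [t_l, t_l + x_l]); alpha is the initial distribution, and the
    intervals are assumed sorted and disjoint."""
    Quu = Q[np.ix_(up, up)]
    p = np.asarray(alpha, dtype=float)
    t_prev = 0.0
    for t, x in intervals:
        p = p @ expm(Q * (t - t_prev))   # unconstrained evolution up to t
        p_up = p[up] @ expm(Quu * x)     # must remain in U on [t, t+x]
        t_prev = t + x
        p = np.zeros_like(p)
        p[up] = p_up                     # surviving up-state mass
    return p.sum()

# two-state example (hypothetical rates): state 0 up, state 1 down
lam, mu = 0.3, 0.25
Q = np.array([[-lam, lam], [mu, -mu]])
```

For a single interval [0, x] starting in the up state, this reduces to the familiar reliability e^(−λx), which serves as a sanity check.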

  11. Biostratigraphic analysis of core samples from wells drilled in the Devonian shale interval of the Appalachian and Illinois Basins

    Energy Technology Data Exchange (ETDEWEB)

    Martin, S.J.; Zielinski, R.E.

    1978-07-14

    A palynological investigation was performed on 55 samples of core material from four wells drilled in the Devonian Shale interval of the Appalachian and Illinois Basins. Using a combination of spores and acritarchs, it was possible to divide the Middle Devonian from the Upper Devonian and to make subdivisions within the Middle and Upper Devonian. The age of the palynomorphs encountered in this study is Upper Devonian.

  12. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)

    2015-07-07

    Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.

  13. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

    Science.gov (United States)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

    2017-06-01

    Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
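The interval-to-interval idea of comparing a 95% confidence interval of predictions against a 95% confidence interval of measurements, rather than point against point, can be sketched as follows (normal-approximation intervals and all data are illustrative, not the paper's SWAT output):

```python
import numpy as np

def ci95(x):
    """Normal-approximation 95% confidence interval for the mean of x."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    half = 1.96 * x.std(ddof=1) / np.sqrt(x.size)
    return m - half, m + half

def intervals_overlap(a, b):
    """True if two (lo, hi) intervals intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# hypothetical replicates of simulated vs. observed bacteria loads
sim = [9.8, 10.4, 10.1, 9.9, 10.2]
obs = [10.0, 10.6, 10.3, 10.1, 10.4]
agree = intervals_overlap(ci95(sim), ci95(obs))
```

A point-to-point comparison of the two means would register a discrepancy here, whereas the interval-to-interval criterion counts the time step as adequately simulated because the two uncertainty bands intersect.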

  14. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  15. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which—as shown on the contact process—provides a significant improvement of the large deviation function estimators compared to the standard one.

  16. Single interval Rényi entropy at low temperature

    Science.gov (United States)

    Chen, Bin; Wu, Jie-qiang

    2014-08-01

    In this paper, we calculate the Rényi entropy of one single interval on a circle at finite temperature in 2D CFT. In the low temperature limit, we expand the thermal density matrix level by level in the vacuum Verma module, and calculate the first few leading terms in e^(−π/TL) explicitly. On the other hand, we compute the same Rényi entropy holographically. After considering the dependence of the Rényi entropy on the temperature, we manage to fix the interval-independent constant terms in the classical part of the holographic Rényi entropy. We furthermore extend the analysis in [9] to higher orders and find exact agreement between the results from field theory and bulk computations in the large central charge limit. Our work provides another piece of evidence to support holographic computation of Rényi entropy in the AdS3/CFT2 correspondence, even with thermal effects.

  17. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
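The Horvitz-Thompson estimator underlying the snowball-sampling generalization weights each sampled value by its inverse inclusion probability, giving a design-unbiased estimate of the population total. A minimal sketch with toy numbers:

```python
def horvitz_thompson_total(sample_values, inclusion_probs):
    """Horvitz-Thompson estimator of a population total from a
    probability sample with known inclusion probabilities pi_i."""
    return sum(y / p for y, p in zip(sample_values, inclusion_probs))

# toy check: equal-probability sample of n = 2 from N = 4 units
# (pi_i = n/N = 0.5), so the estimator is (N/n) times the sample sum
est = horvitz_thompson_total([3.0, 5.0], [0.5, 0.5])
```

In graph sampling the same form applies, with the inclusion probabilities induced by the graph traversal design (e.g. the stages of a snowball sample) rather than by simple random selection.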

  18. Joint interval reliability for Markov systems with an application in transmission line reliability

    Energy Technology Data Exchange (ETDEWEB)

    Csenki, Attila [School of Computing and Mathematics, University of Bradford, Bradford, West Yorkshire, BD7 1DP (United Kingdom)]. E-mail: a.csenki@bradford.ac.uk

    2007-06-15

    We consider Markov reliability models whose finite state space is partitioned into the set of up states U and the set of down states D. Given a collection of k disjoint time intervals I_l = [t_l, t_l + x_l], l = 1, ..., k, the joint interval reliability is defined as the probability of the system being in U for all time instances in I_1 ∪ ... ∪ I_k. A closed form expression is derived here for the joint interval reliability for this class of models. The result is applied to power transmission lines in a two-state fluctuating environment. We use the Linux versions of the free packages Maxima and Scilab in our implementation for symbolic and numerical work, respectively.

  19. Finite-element solidification modelling of metals and binary alloys

    International Nuclear Information System (INIS)

    Mathew, P.M.

    1986-12-01

    In the Canadian Nuclear Fuel Waste Management Program, cast metals and alloys are being evaluated for their ability to support a metallic fuel waste container shell under disposal vault conditions and to determine their performance as an additional barrier to radionuclide release. These materials would be cast to fill residual free space inside the container and allowed to solidify without major voids. To model their solidification characteristics following casting, a finite-element model, FAXMOD-3, was adopted. Input parameters were modified to account for the latent heat of fusion of the metals and alloys considered. This report describes the development of the solidification model and its theoretical verification. To model the solidification of pure metals and alloys that melt at a distinct temperature, the latent heat of fusion was incorporated as a double-ramp function in the specific heat-temperature relationship, within an interval of ±1 K around the solidification temperature. Comparison of calculated results for lead, tin and lead-tin eutectic melts, unidirectionally cooled with and without superheat, showed good agreement with an alternative technique called the integral profile method. To model the solidification of alloys that melt over a temperature interval, the fraction of solid in the solid-liquid region, as calculated from the Scheil equation, was used to determine the fraction of latent heat to be liberated over a temperature interval within the solid-liquid zone. Comparison of calculated results for unidirectionally cooled aluminum-4 wt.% copper melt, with and without superheat, showed good agreement with alternative finite-difference techniques.

  20. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  1. A finite Zitterbewegung model for relativistic quantum mechanics

    International Nuclear Information System (INIS)

    Noyes, H.P.

    1990-01-01

    Starting from steps of length h/mc and time intervals h/mc², which imply a quasi-local Zitterbewegung with velocity steps ±c, we employ discrimination between bit-strings of finite length to construct a necessary 3+1 dimensional event-space for relativistic quantum mechanics. By using the combinatorial hierarchy to label the strings, we provide a successful start on constructing the coupling constants and mass ratios implied by the scheme. Agreement with experiments is surprisingly accurate. 22 refs., 1 fig.

  2. On solving wave equations on fixed bounded intervals involving Robin boundary conditions with time-dependent coefficients

    Science.gov (United States)

    van Horssen, Wim T.; Wang, Yandong; Cao, Guohua

    2018-06-01

    In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.
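The finite difference comparison mentioned in the abstract can be sketched with a ghost-point discretization of a time-dependent Robin condition. In this sketch the boundary coefficient a(t), the initial pulse, and the fixed right end are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def solve_wave_robin(L=1.0, c=1.0, T=0.5, nx=200, cfl=0.9,
                     a=lambda t: 1.0 + 0.5 * t):
    """Leapfrog scheme for u_tt = c^2 u_xx on [0, L], with a
    time-dependent Robin condition u_x(0, t) = a(t) u(0, t) at x = 0
    and a fixed end u(L, t) = 0."""
    dx = L / nx
    dt = cfl * dx / c
    r2 = (c * dt / dx) ** 2
    x = np.linspace(0.0, L, nx + 1)
    u_prev = np.exp(-50.0 * (x - 0.5) ** 2)   # initial pulse
    u_prev[-1] = 0.0
    u = u_prev.copy()                         # zero initial velocity
    t = dt
    while t < T:
        u_next = np.empty_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        # ghost value from the Robin condition:
        # (u[1] - u_ghost) / (2 dx) = a(t) u[0]
        ghost = u[1] - 2 * dx * a(t) * u[0]
        u_next[0] = 2 * u[0] - u_prev[0] + r2 * (u[1] - 2 * u[0] + ghost)
        u_next[-1] = 0.0
        u_prev, u = u, u_next
        t += dt
    return x, u
```

The time-dependent coefficient enters only through the ghost value, which is why the explicit scheme accommodates it with no extra cost; the paper's characteristic-coordinate solution provides the analytical benchmark for such a computation.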

  3. Complete Blood Count Reference Intervals for Healthy Han Chinese Adults

    Science.gov (United States)

    Mu, Runqing; Guo, Wei; Qiao, Rui; Chen, Wenxiang; Jiang, Hong; Ma, Yueyun; Shang, Hong

    2015-01-01

    Background Complete blood count (CBC) reference intervals are important to diagnose diseases, screen blood donors, and assess overall health. However, current reference intervals established by older instruments and technologies and those from American and European populations are not suitable for Chinese samples due to ethnic, dietary, and lifestyle differences. The aim of this multicenter collaborative study was to establish CBC reference intervals for healthy Han Chinese adults. Methods A total of 4,642 healthy individuals (2,136 males and 2,506 females) were recruited from six clinical centers in China (Shenyang, Beijing, Shanghai, Guangzhou, Chengdu, and Xi’an). Blood samples collected in K2EDTA anticoagulant tubes were analyzed. Analysis of variance was performed to determine differences in consensus intervals according to the use of data from the combined sample and selected samples. Results Median and mean platelet counts from the Chengdu center were significantly lower than those from other centers. Red blood cell count (RBC), hemoglobin (HGB), and hematocrit (HCT) values were higher in males than in females at all ages. Other CBC parameters showed no significant instrument-, region-, age-, or sex-dependent difference. Thalassemia carriers were found to affect the lower or upper limit of different RBC profiles. Conclusion We were able to establish consensus intervals for CBC parameters in healthy Han Chinese adults. RBC, HGB, and HCT intervals were established for each sex. The reference interval for platelets for the Chengdu center should be established independently. PMID:25769040

  4. 3D visualization and finite element mesh formation from wood anatomy samples, Part II – Algorithm approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    Full Text Available This paper presents the new original application WOOD3D in the form of assembled program code. The work extends the previous article "Part I – Theoretical approach" with a detailed description of the implemented C++ classes of the utilized projects Visualization Toolkit (VTK), Insight Toolkit (ITK) and MIMX. The code is written in CMake style and is available as a multiplatform application; GNU Linux (32/64b) and MS Windows (32/64b) platforms have currently been released. The article discusses various filter classes for image filtering; mainly the Otsu and binary threshold filters are assessed for thresholding wood anatomy samples. Registration of image series is included to compensate for differences in colour spaces. The resulting image-analysis workflow is a new methodological approach to image processing through composition, visualization, filtering, registration and finite element mesh formation. The application generates a script in the ANSYS parametric design language (APDL) which is fully compatible with the ANSYS finite element solver and designer environment. The script includes the whole definition of an unstructured finite element mesh formed by individual elements and nodes. Owing to its simple notation, the same script can be used to generate geometrical entities at element positions; such volumetric entities are prepared for further geometry approximation (e.g. by boolean or more advanced methods). Hexahedral and tetrahedral types of mesh elements are formed on user request with specified mesh options. Hexahedral meshes are formed both with uniform element size and with anisotropic character; a modified octree method for hexahedral meshes with anisotropic character is implemented in the application. Multicore CPUs are supported for fast image analysis. Visualization of the image series and the consequent 3D image is realized in the well-known public VTK format, visualized in the GPL application Paraview. Future work based on mesh

  5. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P …). Applying … (Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.

  6. A finite Zitterbewegung model for relativistic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Noyes, H.P.

    1990-02-19

    Starting from steps of length h/mc and time intervals h/mc², which imply a quasi-local Zitterbewegung with velocity steps ±c, we employ discrimination between bit-strings of finite length to construct a necessary 3+1 dimensional event-space for relativistic quantum mechanics. By using the combinatorial hierarchy to label the strings, we provide a successful start on constructing the coupling constants and mass ratios implied by the scheme. Agreement with experiments is surprisingly accurate. 22 refs., 1 fig.

  7. Nonlinear Finite Strain Consolidation Analysis with Secondary Consolidation Behavior

    Directory of Open Access Journals (Sweden)

    Jieqing Huang

    2014-01-01

    Full Text Available This paper aims to analyze nonlinear finite strain consolidation with secondary consolidation behavior. On the basis of some assumptions about the secondary consolidation behavior, the continuity equation of pore water in Gibson’s consolidation theory is modified. Taking the nonlinear compressibility and nonlinear permeability of soils into consideration, the governing equation for finite strain consolidation analysis is derived. Based on the experimental data of Hangzhou soft clay samples, the new governing equation is solved with the finite element method. Afterwards, the calculation results of this new method and other two methods are compared. It can be found that Gibson’s method may underestimate the excess pore water pressure during primary consolidation. The new method which takes the secondary consolidation behavior, the nonlinear compressibility, and nonlinear permeability of soils into consideration can precisely estimate the settlement rate and the final settlement of Hangzhou soft clay sample.

  8. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best estimate codes simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distribution is performed using finite mixture models. • Two methods to reconstruct output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA impose high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep computational cost as low as possible, there has been a recent shift toward developing metamodels (model of model), or surrogate models, that approximate or emulate complex computer codes. In this way, there exist different techniques to reconstruct the probability distribution using the information provided by a sample of values as, for example, the finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
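The Expectation-Maximization step for a one-dimensional finite Gaussian mixture can be sketched as follows. The quantile-based initialization and the bimodal toy data are illustrative assumptions; the paper fits mixtures to RELAP-5 output rather than synthetic samples:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """EM for a k-component 1-D Gaussian mixture; returns
    (weights, means, standard deviations)."""
    x = np.asarray(x, dtype=float)
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out init
    sd = np.full(k, x.std())
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        n = r.sum(axis=0)
        w = n / x.size
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd

# bimodal sample standing in for a multimodal code-output distribution
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 500),
                       rng.normal(6.0, 0.5, 500)])
w, mu, sd = em_gmm_1d(data)
```

The fitted weights, means, and standard deviations reconstruct the multimodal output distribution directly, which is the information a single Wilks-type tolerance interval cannot convey.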

  9. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals; blood samples were obtained at gestational weeks …-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the uncomplicated pregnancies, were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S… Most of these parameters remained largely unchanged during pregnancy, delivery, and postpartum and were within non-pregnant reference intervals. However, levels of fibrinogen, D-dimer, and coagulation factors VII, VIII, and IX increased markedly. Protein S activity decreased substantially, while free protein S decreased slightly and total…

  10. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    Science.gov (United States)

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they are based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to successfully adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
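The 95% central range of the C28-A guideline can be computed nonparametrically as the 2.5th and 97.5th percentiles of the reference sample. A short sketch (the analyte, units, and simulated donor values are hypothetical):

```python
import numpy as np

def reference_interval(values, central=0.95):
    """Nonparametric reference interval: the central `central` range,
    i.e. the 2.5th and 97.5th percentiles for the default 95%."""
    lo = 100.0 * (1.0 - central) / 2.0
    low, high = np.percentile(values, [lo, 100.0 - lo])
    return low, high

# hypothetical analyte results from 240 reference individuals
rng = np.random.default_rng(42)
glucose = rng.normal(5.1, 0.5, 240)   # mmol/L, illustrative only
low, high = reference_interval(glucose)
```

The guideline's recommendation of at least 120 reference individuals exists precisely so that these outer percentiles are estimated with acceptable sampling error.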

  11. Modified stochastic fragmentation of an interval as an ageing process

    Science.gov (United States)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution for the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined, and computed exactly. They satisfy scaling relations, and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(−1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
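The unsigned Stirling numbers of the first kind that enter the fragment-number distribution satisfy the recurrence c(n, k) = c(n−1, k−1) + (n−1)·c(n−1, k), which is straightforward to tabulate:

```python
def stirling1_unsigned(n_max):
    """Table of unsigned Stirling numbers of the first kind c(n, k),
    via the recurrence c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)."""
    c = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    c[0][0] = 1
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            c[n][k] = c[n - 1][k - 1] + (n - 1) * c[n - 1][k]
    return c

c = stirling1_unsigned(6)
# row sums satisfy sum_k c(n, k) = n!
```

Normalizing row n by n! turns each row into a probability distribution over k, which is the structure the abstract's exact fragment-number distribution exploits.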

  12. Explicit isospectral flows associated to the AKNS operator on the unit interval. II

    Science.gov (United States)

    Amour, Laurent

    2012-10-01

    Explicit flows associated to any tangent vector fields on any isospectral manifold for the AKNS operator acting in L² × L² on the unit interval are written down. The manifolds are of infinite dimension (and infinite codimension). The flows are called isospectral and also are Hamiltonian flows. It is proven that they may be explicitly expressed in terms of regularized determinants of infinite matrix-valued functions with entries depending only on the spectral data at the starting point of the flow. The tangent vector fields are decomposed as ∑_k ξ_k T_k, where ξ ∈ ℓ² and the T_k ∈ L² × L² form a particular basis of the tangent vector spaces of the infinite dimensional manifold. The paper here is a continuation of Amour ["Explicit isospectral flows for the AKNS operator on the unit interval," Inverse Probl. 25, 095008 (2009)], 10.1088/0266-5611/25/9/095008, where, except for a finite number, all the components of the sequence ξ are zero in order to obtain an explicit expression for the isospectral flows. The regularized determinants induce counter-terms allowing for the consideration of finite quantities when the sequences ξ run all over ℓ².

  13. Confidence interval procedures for Monte Carlo transport simulations

    International Nuclear Information System (INIS)

    Pederson, S.P.

    1997-01-01

    The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s² are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.
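
The relative variance of the variance (VOV) diagnostic mentioned in this abstract can be computed directly from a tally sample. A minimal sketch (the variable names and the two test distributions are ours; common MCNP guidance is to distrust tallies whose VOV exceeds about 0.1):

```python
import random

def vov(samples):
    """Relative variance of the sample variance:
    VOV = sum((x - xbar)**4) / (sum((x - xbar)**2))**2 - 1/N.
    Large values signal that the variance estimate of the mean is itself
    unreliable (heavy right tail, too few histories)."""
    n = len(samples)
    xbar = sum(samples) / n
    s2 = sum((x - xbar) ** 2 for x in samples)
    s4 = sum((x - xbar) ** 4 for x in samples)
    return s4 / s2 ** 2 - 1.0 / n

rng = random.Random(1)
well_behaved = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
heavy_tailed = [rng.paretovariate(2.5) for _ in range(10_000)]  # 4th moment diverges
```

For the Gaussian sample the VOV is tiny (about 2/N); for the Pareto sample, whose fourth moment does not exist, it stays large regardless of how many histories are drawn, which is exactly the failure mode the abstract targets.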

  14. A suitable low-order, eight-node tetrahedral finite element for solids

    International Nuclear Information System (INIS)

    Key, S.W.; Heinstein, M.S.; Stone, C.M.; Mello, F.J.; Blanford, M.L.; Budge, K.G.

    1998-03-01

    To use the all-tetrahedral mesh generation existing today, the authors have explored the creation of a computationally efficient eight-node tetrahedral finite element (a four-node tetrahedral finite element enriched with four mid-face nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular they examine the eight-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element samples only constant strain states and, therefore, has 12 hour-glass modes. In this regard it bears similarities to the eight-node, mean-quadrature hexahedral finite element. Comparisons with the results obtained from the mean-quadrature eight-node hexahedral finite element and the four-node tetrahedral finite element are included. Given automatic all-tetrahedral meshing, the eight-node, constant-strain tetrahedral finite element is a suitable replacement for the eight-node hexahedral finite element in those cases where mesh generation requires an inordinate amount of user intervention and direction to obtain acceptable mesh properties

  15. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)

  16. Finite mode analysis through harmonic waveguides

    NARCIS (Netherlands)

    Alieva, T.; Wolf, K.B.

    2000-01-01

    The mode analysis of signals in a multimodal shallow harmonic waveguide whose eigenfrequencies are equally spaced and finite can be performed by an optoelectronic device, of which the optical part uses the guide to sample the wave field at a number of sensors along its axis and the electronic part

  17. Two intervals Rényi entanglement entropy of compact free boson on torus

    International Nuclear Information System (INIS)

    Liu, Feihu; Liu, Xiao

    2016-01-01

    We compute the N=2 Rényi entanglement entropy of two intervals at equal time in a circle, for the theory of a 2D compact complex free scalar at finite temperature. This is carried out by performing functional integral on a genus 3 ramified cover of the torus, wherein the quantum part of the integral is captured by the four point function of twist fields on the worldsheet torus, and the classical piece is given by summing over winding modes of the genus 3 surface onto the target space torus. The final result is given in terms of a product of theta functions and certain multi-dimensional theta functions. We demonstrate the T-duality invariance of the result. We also study its low temperature limit. In the case in which the size of the intervals and of their separation are much smaller than the whole system, our result is in exact agreement with the known result for two intervals on an infinite system at zero temperature http://dx.doi.org/10.1088/1742-5468/2009/11/P11001. In the case in which the separation between the two intervals is much smaller than the interval length, the leading thermal corrections take the same universal form as proposed in http://dx.doi.org/10.1103/PhysRevLett.112.171603, http://dx.doi.org/10.1103/PhysRevD.91.105013 for Rényi entanglement entropy of a single interval.

  18. Performance of finite order distribution-generated universal portfolios

    Science.gov (United States)

    Pang, Sook Theng; Liew, How Hui; Chang, Yun Fah

    2017-04-01

    A Constant Rebalanced Portfolio (CRP) is an investment strategy which reinvests by redistributing wealth equally among a set of stocks. The empirical performance of the distribution-generated universal portfolio strategies are analysed experimentally concerning 10 higher volume stocks from different categories in Kuala Lumpur Stock Exchange. The time interval of study is from January 2000 to December 2015, which includes the credit crisis from September 2008 to March 2009. The performance of the finite-order universal portfolio strategies has been shown to be better than Constant Rebalanced Portfolio with some selected parameters of proposed universal portfolios.

  19. A suitable low-order, eight-node tetrahedral finite element for solids

    Energy Technology Data Exchange (ETDEWEB)

    Key, S.W.; Heinstein, M.S.; Stone, C.M.; Mello, F.J.; Blanford, M.L.; Budge, K.G.

    1998-03-01

    To use the all-tetrahedral mesh generation existing today, the authors have explored the creation of a computationally efficient eight-node tetrahedral finite element (a four-node tetrahedral finite element enriched with four mid-face nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular they examine the eight-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element samples only constant strain states and, therefore, has 12 hour-glass modes. In this regard it bears similarities to the eight-node, mean-quadrature hexahedral finite element. Comparisons with the results obtained from the mean-quadrature eight-node hexahedral finite element and the four-node tetrahedral finite element are included. Given automatic all-tetrahedral meshing, the eight-node, constant-strain tetrahedral finite element is a suitable replacement for the eight-node hexahedral finite element in those cases where mesh generation requires an inordinate amount of user intervention and direction to obtain acceptable mesh properties.

  20. Different radiation impedance models for finite porous materials

    DEFF Research Database (Denmark)

    Nolan, Melanie; Jeong, Cheol-Ho; Brunskog, Jonas

    2015-01-01

    The Sabine absorption coefficients of finite absorbers are measured in a reverberation chamber according to the international standard ISO 354. They vary with the specimen size essentially due to diffraction at the specimen edges, which can be seen as the radiation impedance differing from...... the infinite case. Thus, in order to predict the Sabine absorption coefficients of finite porous samples, one can incorporate models of the radiation impedance. In this study, different radiation impedance models are compared with two experimental examples. Thomasson’s model is compared to Rhazi’s method when...

  1. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    Science.gov (United States)

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), doi:10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
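
For finite N the locking threshold can also be located numerically. The sketch below is our illustration, not the paper's asymptotic analysis: it iterates the standard self-consistency relation for a locked state, r = (1/N) Σ sqrt(1 − (ω_i/(Kr))²), which is valid when |ω_i| ≤ Kr for all i, and bisects on the coupling K (finding the critical coupling for a fixed frequency width is equivalent, by rescaling, to finding the critical width for a fixed coupling). For uniformly spaced frequencies spanning unit width, the threshold lands close to the classical infinite-N value 2/π ≈ 0.64.

```python
import math

def locked_r(omegas, K, iters=500):
    """Iterate the locked-state self-consistency relation
    r = (1/N) * sum sqrt(1 - (w_i/(K*r))**2), starting from r = 1.
    Returns an approximate locked order parameter, or None when no
    phase-locked solution exists at coupling K."""
    r = 1.0
    for _ in range(iters):
        if any(abs(w) > K * r for w in omegas):
            return None                      # some oscillator cannot lock
        r = sum(math.sqrt(1.0 - (w / (K * r)) ** 2) for w in omegas) / len(omegas)
        if r <= 0.0:
            return None
    return r

def critical_coupling(omegas, lo=0.0, hi=5.0, tol=1e-4):
    """Bisect for the smallest coupling K that admits a locked state."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if locked_r(omegas, mid) is None:
            lo = mid
        else:
            hi = mid
    return hi

N = 50
# midpoint-rule frequencies on an interval of unit width
omegas = [-0.5 + (i + 0.5) / N for i in range(N)]
```

Starting the iteration at r = 1 makes it decrease monotonically onto the stable locked branch when one exists, and fall through the |ω_i| ≤ Kr check when it does not.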

  2. Finite difference method for inner-layer equations in the resistive MagnetoHydroDynamic stability analysis

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Watanabe, Tomoko.

    1996-08-01

    The matching problem in resistive MagnetoHydroDynamic stability analysis by the asymptotic matching method has been reformulated as an initial-boundary value problem for the inner-layer equations describing the plasma dynamics in the thin layer around a rational surface. The third boundary conditions at boundaries of a finite interval are imposed on the inner layer equations in the formulation instead of asymptotic conditions at infinities. The finite difference method for this problem has been applied to model equations whose solutions are known in a closed form. It has been shown that the initial value problem and the associated eigenvalue problem for the model equations can be solved by the finite difference method with numerical stability. The formulation presented here enables the asymptotic matching method to be a practical method for the resistive MHD stability analysis. (author)

  3. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…

  4. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...

  5. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    Science.gov (United States)

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
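
Since the AUC is a probability, the proposed interval is of Wald type for a single proportion. The sketch below is a simplified illustration, not the authors' exact formula: it assumes the variance AUC(1−AUC)/n with n the total sample size, whereas the modification of Kottas, Kuss and Zapf may use a different effective sample size or correction.

```python
import math

def wald_auc_ci(auc, n, continuity=False):
    """Illustrative 95% Wald-type interval treating the estimated AUC as a
    proportion. ASSUMPTION: variance auc*(1-auc)/n with n the total sample
    size; the paper's modified interval may differ in detail."""
    z = 1.959963984540054                    # 97.5% normal quantile
    se = math.sqrt(auc * (1.0 - auc) / n)
    cc = 0.5 / n if continuity else 0.0      # optional continuity correction
    return max(0.0, auc - z * se - cc), min(1.0, auc + z * se + cc)

lo, hi = wald_auc_ci(0.85, 100)
```

Clipping to [0, 1] keeps the interval inside the admissible range, one reason plain Wald intervals misbehave for large AUC values or small sample sizes.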

  6. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals that identify the location of the crossover point. (c) 2015 APA, all rights reserved.
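
The delta method, one of the six approaches compared in this abstract, has a closed form. For the regression y = b0 + b1·x + b2·m + b3·x·m, the simple regression lines at two moderator values cross at x* = −b2/b3, and a first-order variance expansion with gradient (−1/b3, −x*/b3) in (b2, b3) gives the sketch below (the coefficient variances and covariance come from the fitted model's covariance matrix):

```python
import math

def crossover_ci_delta(b2, b3, var_b2, var_b3, cov_b23, z=1.96):
    """Delta-method confidence interval for the crossover point
    x* = -b2/b3, with Var(x*) ~ (var_b2 + 2*x**cov + x*^2*var_b3)/b3^2."""
    x_star = -b2 / b3
    var_x = (var_b2 + 2.0 * x_star * cov_b23 + x_star ** 2 * var_b3) / b3 ** 2
    se = math.sqrt(var_x)
    return x_star - z * se, x_star + z * se

# hypothetical coefficient estimates and (co)variances for illustration
lo, hi = crossover_ci_delta(b2=1.0, b3=-2.0, var_b2=0.04, var_b3=0.01, cov_b23=0.0)
```

The delta method assumes approximate normality of the estimated crossover point, which fails badly when b3 is near zero; that is precisely why the simulation also considers Fieller and bootstrap intervals.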

  7. A programmable finite state module for use with the Fermilab Tevatron Clock

    International Nuclear Information System (INIS)

    Beechy, D.

    1987-10-01

    A VME module has been designed which implements several programmable finite state machines that use the Tevatron Clock signal as inputs. In addition to normal finite state machine type outputs, the module, called the VME Finite State Machine, or VFSM, records a history of changes of state so that the exact path through the state diagram can be determined. There is also provision for triggering and recording from an external digitizer so that samples can be taken and recorded under very precisely defined circumstances

  8. Continuous time modelling with individually varying time intervals for oscillating and non-oscillating processes.

    Science.gov (United States)

    Voelkle, Manuel C; Oud, Johan H L

    2013-02-01

    When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.
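
The core of the continuous time approach is that the discrete-time parameters are exact functions of a time-invariant drift and the interval length, so unequal intervals pose no conceptual problem. In the univariate case the autoregressive weight implied over an interval Δt is exp(a·Δt); a minimal sketch:

```python
import math

def ar_weight(a, dt):
    """Discrete-time autoregressive weight implied by a univariate
    continuous-time model dx = a*x*dt + noise over an interval dt.
    (Multivariate models replace exp with the matrix exponential of
    the drift matrix.)"""
    return math.exp(a * dt)
```

This yields a consistency property that fixed-interval discrete models cannot offer: the weight over 2Δt equals the squared weight over Δt, whatever interval each individual happens to have.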

  9. Finite-time barriers to reaction front propagation

    Science.gov (United States)

    Locke, Rory; Mahoney, John; Mitchell, Kevin

    2015-11-01

    Front propagation in advection-reaction-diffusion systems gives rise to rich geometric patterns. It has been shown for time-independent and time-periodic fluid flows that invariant manifolds, termed burning invariant manifolds (BIMs), serve as one-sided dynamical barriers to the propagation of reaction fronts. More recently, theoretical work has suggested that one-sided barriers, termed burning Lagrangian coherent structures (bLCSs), exist for fluid velocity data prescribed over a finite time interval, with no assumption on the time-dependence of the flow. In this presentation, we use a time-varying fluid "wind" in a double-vortex channel flow to demonstrate that bLCSs form the (locally) most attracting or repelling fronts.

  10. Finite-size scaling of clique percolation on two-dimensional Moore lattices

    Science.gov (United States)

    Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong

    2018-05-01

    Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.

  11. Finite element and finite difference methods in electromagnetic scattering

    CERN Document Server

    Morgan, MA

    2013-01-01

    This second volume in the Progress in Electromagnetic Research series examines recent advances in computational electromagnetics, with emphasis on scattering, as brought about by new formulations and algorithms which use finite element or finite difference techniques. Containing contributions by some of the world's leading experts, the papers thoroughly review and analyze this rapidly evolving area of computational electromagnetics. Covering topics ranging from the new finite-element based formulation for representing time-harmonic vector fields in 3-D inhomogeneous media using two coupled sca

  12. Discussion of “Prediction intervals for short-term wind farm generation forecasts” and “Combined nonparametric prediction intervals for wind power generation”

    DEFF Research Database (Denmark)

    Pinson, Pierre; Tastu, Julija

    2014-01-01

    A new score for the evaluation of interval forecasts, the so-called coverage width-based criterion (CWC), was proposed and utilized. This score has been used for the tuning (in-sample) and genuine evaluation (out-of-sample) of prediction intervals for various applications, e.g., electric load [1......], electricity prices [2], general purpose prediction [3], and wind power generation [4], [5]. Indeed, two papers by the same authors appearing in the IEEE Transactions on Sustainable Energy employ that score and use it to conclude on the comparative quality of alternative approaches to interval forecasting...

  13. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute the variance-based global sensitivity analysis, the law of total variance in the successive intervals without overlapping is proved at first, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently by one group of sample points. In addition, there is no need for optimizing the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables in the proposed approach, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation on the thought of partition is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
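
The partitioning idea can be illustrated on a single input: sort the sample by that input, split it into successive non-overlapping intervals, and apply the law of total variance, so that Var(E[Y|X_i]) is approximated by the variance of the per-bin means of Y. A simplified sketch (the bin count and the test function are our choices):

```python
import random
import statistics

def main_effect(xs, ys, n_bins=20):
    """First-order sensitivity estimate for one input from a single sample:
    partition the sample, sorted by X, into successive non-overlapping
    intervals and take Var(per-bin means of Y) / Var(Y)."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    means, sizes = [], []
    for b in range(n_bins):
        chunk = [y for _, y in pairs[b * n // n_bins:(b + 1) * n // n_bins]]
        means.append(sum(chunk) / len(chunk))
        sizes.append(len(chunk))
    grand = sum(m * s for m, s in zip(means, sizes)) / n
    var_between = sum(s * (m - grand) ** 2 for m, s in zip(means, sizes)) / n
    return var_between / statistics.pvariance(ys)

rng = random.Random(7)
x1 = [rng.random() for _ in range(20_000)]
x2 = [rng.random() for _ in range(20_000)]
y = [a + 0.1 * b for a, b in zip(x1, x2)]   # Y depends mostly on x1
```

Both main effects are estimated from the same sample points, merely regrouped by a different input each time; that reuse is the efficiency gain the abstract claims.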

  14. Concurrent variable-interval variable-ratio schedules in a dynamic choice environment.

    Science.gov (United States)

    Bell, Matthew C; Baum, William M

    2017-11-01

    Most studies of operant choice have focused on presenting subjects with a fixed pair of schedules across many experimental sessions. Using these methods, studies of concurrent variable-interval variable-ratio schedules helped to evaluate theories of choice. More recently, a growing literature has focused on dynamic choice behavior. Those dynamic choice studies have analyzed behavior on a number of different time scales using concurrent variable-interval schedules. Following the dynamic choice approach, the present experiment examined performance on concurrent variable-interval variable-ratio schedules in a rapidly changing environment. Our objectives were to compare performance on concurrent variable-interval variable-ratio schedules with extant data on concurrent variable-interval variable-interval schedules using a dynamic choice procedure and to extend earlier work on concurrent variable-interval variable-ratio schedules. We analyzed performances at different time scales, finding strong similarities between concurrent variable-interval variable-interval and concurrent variable-interval variable-ratio performance within dynamic choice procedures. Time-based measures revealed almost identical performance in the two procedures compared with response-based measures, supporting the view that choice is best understood as time allocation. Performance at the smaller time scale of visits accorded with the tendency seen in earlier research toward developing a pattern of strong preference for and long visits to the richer alternative paired with brief "samples" at the leaner alternative ("fix and sample"). © 2017 Society for the Experimental Analysis of Behavior.

  15. Possible Intervals for T- and M-Orders of Solutions of Linear Differential Equations in the Unit Disc

    Directory of Open Access Journals (Sweden)

    Martin Chuaqui

    2011-01-01

    f^(k) + a_{k-1}(z)f^(k-1) + ⋯ + a_1(z)f′ + a_0(z)f = 0 with polynomial coefficients. In the present paper, it is shown by an example that a unit disc counterpart of such a finite set does not contain all possible T- and M-orders of solutions, with respect to the Nevanlinna characteristic and maximum modulus, if the coefficients are analytic functions belonging either to weighted Bergman spaces or to weighted Hardy spaces. In contrast to a finite set, possible intervals for T- and M-orders are introduced to give detailed information about the growth of solutions. Finally, these findings yield sharp lower bounds for the sums of T- and M-orders of functions in the solution bases.

  16. A simple finite element method for boundary value problems with a Riemann–Liouville derivative

    KAUST Repository

    Jin, Bangti; Lazarov, Raytcho; Lu, Xiliang; Zhou, Zhi

    2016-01-01

    © 2015 Elsevier B.V. All rights reserved. We consider a boundary value problem involving a Riemann-Liouville fractional derivative of order α∈(3/2,2) on the unit interval (0,1). The standard Galerkin finite element approximation converges slowly due to the presence of the singularity term x^(α-1) in the solution representation. In this work, we develop a simple technique, by transforming it into a second-order two-point boundary value problem with nonlocal low order terms, whose solution can reconstruct directly the solution to the original problem. The stability of the variational formulation, and the optimal regularity pickup of the solution are analyzed. A novel Galerkin finite element method with piecewise linear or quadratic finite elements is developed, and L2(D) error estimates are provided. The approach is then applied to the corresponding fractional Sturm-Liouville problem, and error estimates of the eigenvalue approximations are given. Extensive numerical results fully confirm our theoretical study.

  17. A simple finite element method for boundary value problems with a Riemann–Liouville derivative

    KAUST Repository

    Jin, Bangti

    2016-02-01

    © 2015 Elsevier B.V. All rights reserved. We consider a boundary value problem involving a Riemann-Liouville fractional derivative of order α∈(3/2,2) on the unit interval (0,1). The standard Galerkin finite element approximation converges slowly due to the presence of the singularity term x^(α-1) in the solution representation. In this work, we develop a simple technique, by transforming it into a second-order two-point boundary value problem with nonlocal low order terms, whose solution can reconstruct directly the solution to the original problem. The stability of the variational formulation, and the optimal regularity pickup of the solution are analyzed. A novel Galerkin finite element method with piecewise linear or quadratic finite elements is developed, and L2(D) error estimates are provided. The approach is then applied to the corresponding fractional Sturm-Liouville problem, and error estimates of the eigenvalue approximations are given. Extensive numerical results fully confirm our theoretical study.

  18. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F,G,H,.... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permit these probability intervals to be used in calculations that address the second source of imprecision, in many cases, in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduces by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
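
The Fréchet-inequality operators referred to in this abstract have simple closed forms: with no assumption on dependence, P(A∧B) lies in [max(0, P(A)+P(B)−1), min(P(A), P(B))], P(A∨B) in [max(P(A), P(B)), min(1, P(A)+P(B))], and interval-valued inputs propagate endpoint-wise. A sketch (the interval estimates for the subevents are hypothetical):

```python
def and_bounds(a, b):
    """Best-possible bounds on P(A and B) under unknown dependence
    (Frechet inequalities), for interval probabilities a = (aL, aU)."""
    return max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1])

def or_bounds(a, b):
    """Best-possible bounds on P(A or B) under unknown dependence."""
    return max(a[0], b[0]), min(1.0, a[1] + b[1])

def not_bounds(a):
    """Complement: P(not A) = 1 - P(A), with endpoints swapped."""
    return 1.0 - a[1], 1.0 - a[0]

p_f = (0.3, 0.5)    # hypothetical interval estimate for subevent F
p_g = (0.6, 0.8)    # hypothetical interval estimate for subevent G
```

Repeating the same calculation level-wise over a nested stack of confidence intervals gives the confidence-structure propagation the abstract describes.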

  19. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis; Ginzburg, Lev; Ferson, Scott; Hajagos, Janos

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
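
Two of the simplest results in interval statistics fit in a few lines: the sample mean ranges exactly over the interval of endpoint means, and the maximum of the sample variance, being a convex function of the data vector, is attained at a vertex of the box of endpoint choices (the report discusses when such quantities are cheap versus NP-hard to compute). A brute-force sketch for illustration:

```python
from itertools import product

def mean_bounds(data):
    """Exact bounds on the sample mean of interval data [(lo, hi), ...]."""
    n = len(data)
    return sum(lo for lo, _ in data) / n, sum(hi for _, hi in data) / n

def variance_upper_bound(data):
    """Largest possible (population) sample variance over all choices
    x_i in [lo_i, hi_i]. Variance is convex in the data, so the maximum
    sits at a vertex of the box; enumerating all 2^n vertices is for
    illustration only (the lower bound needs different machinery, since
    the minimum can lie in the interior)."""
    best = 0.0
    for choice in product(*data):
        m = sum(choice) / len(choice)
        best = max(best, sum((x - m) ** 2 for x in choice) / len(choice))
    return best

intervals = [(0.0, 1.0), (0.0, 1.0), (0.2, 0.4)]
```

The widths and overlaps of the intervals drive both the answer and the computational cost, which is the tradeoff between measurement precision and sample size that the report explores.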

  20. Composite Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    In this chapter, we extend the previous results of Chap. 2 to the more general case of composite finite sums. We describe what composite finite sums are and how their analysis can be reduced to the analysis of simple finite sums using the chain rule. We then apply these techniques to numerical integration and to some identities of Ramanujan.

  2. Correlator of nucleon currents in finite temperature pion gas

    International Nuclear Information System (INIS)

    Eletsky, V.L.

    1990-01-01

    A retarded correlator of two currents with nucleon quantum numbers is calculated at finite temperature T in the chiral limit. It is shown that for Euclidean momenta the leading one-loop corrections arise from direct interaction of thermal pions with the currents. A dispersive representation for the correlator shows that this interaction smears the nucleon pole over a frequency interval of width ≅ T. This interaction does not change the exponential fall-off of the correlator in Euclidean space but gives an O(T²/F_π²) contribution to the pre-exponential factor. (orig.)

  3. Interval Size and Affect: An Ethnomusicological Perspective

    Directory of Open Access Journals (Sweden)

    Sarha Moore

    2013-08-01

    Full Text Available This commentary addresses Huron and Davis's question of whether "The Harmonic Minor Provides an Optimum Way of Reducing Average Melodic Interval Size, Consistent with Sad Affect Cues" within any non-Western musical cultures. The harmonic minor scale and other semitone-heavy scales, such as Bhairav raga and Hicaz makam, are featured widely in the musical cultures of North India and the Middle East. Do melodies from these genres also have a preponderance of semitone intervals and a low incidence of the augmented second interval, as in Huron and Davis's sample? Does the presence of more semitone intervals in a melody affect its emotional connotations in different cultural settings? Are all semitone intervals equal in their effect? My own ethnographic research within these cultures reveals comparable connotations in melodies that linger on semitone intervals, centered on concepts of tension and metaphors of falling. However, across different musical cultures there may also be neutral or lively interpretations of these same pitch sets, dependent on context, manner of performance, and tradition. Small pitch movement may also be associated with social functions such as prayer or lullabies, and may not be described as "sad." "Sad," moreover, may not connote the same affect cross-culturally.

  4. Differentially Private Confidence Intervals for Empirical Risk Minimization

    OpenAIRE

    Wang, Yue; Kifer, Daniel; Lee, Jaewoo

    2018-01-01

    The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...

  5. Axial anomaly at finite temperature and finite density

    International Nuclear Information System (INIS)

    Qian Zhixin; Su Rukeng; Yu, P.K.N.

    1994-01-01

    The U(1) axial anomaly in a hot fermion medium is investigated by using the real time Green's function method. After calculating the lowest order triangle diagrams, we find that finite temperature as well as finite fermion density does not affect the axial anomaly. The higher order corrections for the axial anomaly are discussed. (orig.)

  6. Wild bootstrapping in finite populations with auxiliary information

    NARCIS (Netherlands)

    R. Helmers (Roelof); M.H. Wegkamp

    1995-01-01

    Consider a finite population $u$, which can be viewed as a realization of a superpopulation model. A simple ratio model (linear regression, without intercept) with heteroscedastic errors is supposed to have generated $u$. A random sample is drawn without replacement from $u$. In this

  7. Locally Finite Root Supersystems

    OpenAIRE

    Yousofzadeh, Malihe

    2013-01-01

    We introduce the notion of locally finite root supersystems as a generalization of both locally finite root systems and generalized root systems. We classify irreducible locally finite root supersystems.

  8. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
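
    The profile-likelihood idea is easy to illustrate outside the IRT setting (this is a generic binomial sketch, not the article's implementation): the PL CI inverts the likelihood-ratio test, collecting every parameter value whose deviance from the MLE stays below the chi-square critical value, with a Wald CI shown for comparison. Function names are illustrative.

```python
import math
from scipy.optimize import brentq
from scipy.stats import chi2, norm

def profile_ci_binom(k, n, level=0.95):
    """Profile-likelihood CI for a binomial proportion: all p with
    2 * (loglik(phat) - loglik(p)) <= chi-square critical value (1 df).
    Assumes an interior MLE, 0 < k < n."""
    assert 0 < k < n, "interior MLE assumed for this sketch"
    phat = k / n
    loglik = lambda p: k * math.log(p) + (n - k) * math.log(1.0 - p)
    crit = chi2.ppf(level, df=1)
    g = lambda p: 2.0 * (loglik(phat) - loglik(p)) - crit
    eps = 1e-12
    return brentq(g, eps, phat), brentq(g, phat, 1.0 - eps)

def wald_ci_binom(k, n, level=0.95):
    """Large-sample Wald CI, for comparison with the PL CI."""
    phat = k / n
    se = math.sqrt(phat * (1.0 - phat) / n)
    z = norm.ppf(0.5 + level / 2.0)
    return phat - z * se, phat + z * se
```

Unlike the symmetric Wald interval, the PL interval follows the asymmetry of the likelihood and stays inside (0, 1).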

  9. Indication of multiscaling in the volatility return intervals of stock markets

    Science.gov (United States)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2008-01-01

    The distribution of the return intervals τ between price volatilities above a threshold height q for financial records has been approximated by a scaling behavior. To explore how accurate the scaling is, and therefore to understand the underlying nonlinear mechanism, we investigate intraday data sets of the 500 stocks which constitute the Standard & Poor's 500 index. We show that the cumulative distribution of return intervals has systematic deviations from scaling. We support this finding by studying the m-th moment μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^{1/m}, which shows a certain trend with the mean interval ⟨τ⟩. We generate surrogate records using the Schreiber method, and find that their cumulative distributions almost collapse to a single curve and moments are almost constant for most ranges of ⟨τ⟩. Those substantial differences suggest that nonlinear correlations in the original volatility sequence account for the deviations from a single scaling law. We also find that the original and surrogate records exhibit slight tendencies for short and long ⟨τ⟩, due to the discreteness and finite size effects of the records, respectively. To avoid as far as possible those effects when testing the multiscaling behavior, we investigate the moments in the range 10volatility.

  10. An Improved Asymptotic Sampling Approach For Stochastic Finite Element Stiffness of a Laterally Loaded Monopile

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2012-01-01

    In this study a stochastic approach is conducted to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial...

  11. A finite Hankel algorithm for intense optical beam propagation in saturable medium

    International Nuclear Information System (INIS)

    Bardin, C.; Babuel-Peyrissac, J.P.; Marinier, J.P.; Mattar, F.P.

    1985-01-01

    Many physical problems, especially light-propagation problems, that involve the Laplacian operator are naturally connected with Fourier or Hankel transforms (in the case of axial symmetry), which both remove the Laplacian term in the transformed space. Sometimes the analytical calculation can be carried through to the end, giving a series or an integral representation of the solution. Otherwise, an analytical pre-treatment of the original equation may be done, leading to numerical computation techniques as opposed to self-adaptive stretching and rezoning techniques, which do not use Fourier or Hankel transforms. The authors present here some basic mathematical properties of the infinite and finite Hankel transforms, their connection with physics and their adaptation to numerical calculation. The finite Hankel transform is well suited to numerical computation, because it deals with a finite interval, and the precision of the calculation can be easily controlled by the number of zeros of J_0(x) to be taken. Moreover, the authors use a special quadrature formula which is well connected to integral conservation laws. The inconvenience of having to sum a series is reduced by the use of vectorized computers, and in the future will be still further reduced with parallel processors. A finite-Hankel code has been run on a CRAY-XMP in order to solve the propagation of a CW optical beam in a saturable absorber. For large diffraction or when a very small radial grid is required for the description of the optical field, this FHT algorithm has been found to perform better than a direct finite-difference code.
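
    The quadrature based on zeros of J_0 can be sketched as a discrete Fourier-Bessel expansion on [0, 1]. This is the standard J_0-zeros rule with nodes at the scaled zeros, not the authors' CRAY implementation; the node/weight choice is an assumption on my part.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def fourier_bessel_coeffs(f, N):
    """Coefficients c_m of f(r) ~ sum_m c_m J0(j_m r) on [0, 1], where
    j_m are the zeros of J0.  The integral int_0^1 r f(r) J0(j_m r) dr
    is evaluated with the quadrature rule whose nodes are the scaled
    zeros r_k = j_k / j_{N+1} and whose weights involve J1(j_k)."""
    z = jn_zeros(0, N + 1)              # first N+1 zeros of J0
    jk, jN1 = z[:N], z[N]
    rk = jk / jN1                       # quadrature nodes in (0, 1)
    fk = f(rk)
    cm = np.empty(N)
    for m in range(N):
        integral = (2.0 / jN1**2) * np.sum(fk * j0(jk[m] * rk) / j1(jk)**2)
        cm[m] = 2.0 * integral / j1(jk[m])**2
    return jk, cm

def reconstruct(jk, cm, r):
    """Evaluate the truncated Fourier-Bessel series at radius r."""
    return float(np.sum(cm * j0(jk * r)))
```

For the test function f(r) = 1 - r², the coefficients have the closed form c_m = 8 / (j_m³ J₁(j_m)), which gives a direct accuracy check on the quadrature.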

  12. Simple Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-01-01

    We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.

  14. Prediction Interval: What to Expect When You're Expecting … A Replication.

    Directory of Open Access Journals (Sweden)

    Jeffrey R Spence

    Full Text Available A challenge when interpreting replications is determining whether the results of a replication "successfully" replicate the original study. Looking for consistency between two studies is challenging because individual studies are susceptible to many sources of error that can cause study results to deviate from each other, and from the population effect, in unpredictable directions and magnitudes. In the current paper, we derive methods to compute a prediction interval, a range of results that can be expected in a replication due to chance (i.e., sampling error), for means and commonly used indexes of effect size: correlations and d-values. The prediction interval is calculable from objective study characteristics (i.e., the effect size of the original study and the sample sizes of the original study and planned replication), even when sample sizes across studies are unequal. The prediction interval provides an a priori method for assessing whether the difference between an original and a replication result is consistent with what can be expected due to sampling error alone. We provide open-source software tools that allow researchers, reviewers, replicators, and editors to easily calculate prediction intervals.
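
    For correlations, the prediction interval can be sketched with the Fisher z-transformation: the variance of the difference between two independent sample correlations on the z scale is 1/(n₁-3) + 1/(n₂-3). This is a minimal illustration of the idea under that standard approximation, not the authors' released software.

```python
import math
from scipy.stats import norm

def replication_pi_r(r_orig, n_orig, n_rep, level=0.95):
    """Prediction interval for the correlation expected in a replication,
    under sampling error alone, via the Fisher z approximation."""
    z = math.atanh(r_orig)
    se = math.sqrt(1.0 / (n_orig - 3) + 1.0 / (n_rep - 3))
    zc = norm.ppf(0.5 + level / 2.0)
    # transform the z-scale interval back to the correlation scale
    return math.tanh(z - zc * se), math.tanh(z + zc * se)
```

A replication whose observed correlation falls outside this interval differs from the original by more than sampling error alone would predict.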

  15. Finite-volume spectra of the Lee-Yang model

    Energy Technology Data Exchange (ETDEWEB)

    Bajnok, Zoltan [MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics,H-1525 Budapest 114, P.O.B. 49 (Hungary); Deeb, Omar el [MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics,H-1525 Budapest 114, P.O.B. 49 (Hungary); Physics Department, Faculty of Science, Beirut Arab University (BAU),Beirut (Lebanon); Pearce, Paul A. [School of Mathematics and Statistics, University of Melbourne,Parkville, Victoria 3010 (Australia)

    2015-04-15

    We consider the non-unitary Lee-Yang minimal model M(2,5) in three different finite geometries: (i) on the interval with integrable boundary conditions labelled by the Kac labels (r,s)=(1,1),(1,2), (ii) on the circle with periodic boundary conditions and (iii) on the periodic circle including an integrable purely transmitting defect. We apply φ_{1,3} integrable perturbations on the boundary and on the defect and describe the flow of the spectrum. Adding a Φ_{1,3} integrable perturbation to move off-criticality in the bulk, we determine the finite size spectrum of the massive scattering theory in the three geometries via Thermodynamic Bethe Ansatz (TBA) equations. We derive these integral equations for all excitations by solving, in the continuum scaling limit, the TBA functional equations satisfied by the transfer matrices of the associated A_4 RSOS lattice model of Forrester and Baxter in Regime III. The excitations are classified in terms of (m,n) systems. The excited state TBA equations agree with the previously conjectured equations in the boundary and periodic cases. In the defect case, new TBA equations confirm previously conjectured transmission factors.

  16. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

  17. An isoparametric shell of revolution finite element for harmonic loadings of any order

    International Nuclear Information System (INIS)

    Johnson, J.J.; Charman, C.M.

    1981-01-01

    A general isoparametric shell of revolution finite element subjected to any order harmonic loading is presented. Derivation of the element properties, its implementation in a general purpose finite element program, and its application to a sample problem are discussed. The element is isoparametric, that is, the variation of the displacements along the meridian of the shell and the shape of the meridian itself are approximated in an identical manner. The element has been implemented in the computer program MODSAP. A sample problem of a cooling tower subjected to wind loading is presented. (orig./HP)

  18. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  19. Interpregnancy interval and risk of autistic disorder.

    Science.gov (United States)

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

  20. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
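
    The finite-population side of the book's topic can be made concrete for the common case of estimating a proportion: compute the infinite-population sample size and then apply the finite population correction. The 95% z-value and the conservative p = 0.5 default are conventional assumptions, not the book's notation.

```python
import math

def sample_size_proportion(e, N=None, p=0.5, z=1.96):
    """Sample size needed to estimate a proportion p to within margin of
    error e at ~95% confidence (z = 1.96).  If a finite population size
    N is given, the finite population correction shrinks the answer."""
    n0 = z ** 2 * p * (1.0 - p) / e ** 2            # infinite-population size
    if N is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))   # finite population correction
```

For a ±5% margin the familiar n = 385 appears, dropping to 278 when the population has only 1000 units.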

  1. Groebner Finite Path Algebras

    OpenAIRE

    Leamer, Micah J.

    2004-01-01

    Let K be a field and Q a finite directed multi-graph. In this paper I classify all path algebras KQ and admissible orders with the property that all of their finitely generated ideals have finite Groebner bases.

  2. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. © 2009 Elsevier Ireland Ltd. All rights reserved.
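
    The same computation is easy to reproduce outside a spreadsheet. The sketch below uses a Bonferroni correction, one of several multiple-comparison adjustments (the paper's template may implement different ones), with the pooled within-group variance from the one-way ANOVA:

```python
import itertools
import numpy as np
from scipy import stats

def pairwise_ci_bonferroni(groups, alpha=0.05):
    """Simultaneous (1 - alpha) CIs for all pairwise mean differences,
    Bonferroni-adjusted, using the pooled ANOVA error variance."""
    k = len(groups)
    n = [len(g) for g in groups]
    means = [float(np.mean(g)) for g in groups]
    df = sum(n) - k
    # pooled within-group (error) variance, as in one-way ANOVA
    ssw = sum(np.sum((np.asarray(g) - m) ** 2) for g, m in zip(groups, means))
    msw = ssw / df
    m = k * (k - 1) // 2                      # number of comparisons
    tcrit = stats.t.ppf(1 - alpha / (2 * m), df)
    out = {}
    for i, j in itertools.combinations(range(k), 2):
        d = means[i] - means[j]
        se = np.sqrt(msw * (1 / n[i] + 1 / n[j]))
        out[(i, j)] = (d - tcrit * se, d + tcrit * se)
    return out
```

An interval that excludes zero flags that pair of means as significantly different at the simultaneous level.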

  3. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. 
The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular.
 Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
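
    A Python equivalent of that resampling procedure (illustrative only; the paper works in Excel) draws bootstrap samples with replacement and averages the resampled 2.5th/97.5th percentiles:

```python
import numpy as np

def bootstrap_reference_interval(x, n_boot=1000, seed=0):
    """Percentile-bootstrap reference limits: resample with replacement,
    take the 2.5th and 97.5th percentile of each resample, and average
    the replicates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    lo = np.empty(n_boot)
    hi = np.empty(n_boot)
    for b in range(n_boot):
        res = rng.choice(x, size=x.size, replace=True)
        lo[b] = np.percentile(res, 2.5)
        hi[b] = np.percentile(res, 97.5)
    return float(lo.mean()), float(hi.mean())
```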

  4. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Directory of Open Access Journals (Sweden)

    Andreas Steimer

    Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing
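
    A minimal EIF simulation that collects ISIs from a noisy drive can be sketched with Euler-Maruyama integration. All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def eif_isis(i_mean, i_std, t_max=20.0, dt=1e-4, seed=0):
    """Interspike intervals of an exponential integrate-and-fire neuron
    driven by white noise (Euler-Maruyama).  Voltages in volts, times in
    seconds; i_mean/i_std are an effective voltage drive and noise level."""
    rng = np.random.default_rng(seed)
    tau, e_l, v_t, d_t = 0.02, -65e-3, -50e-3, 2e-3   # tau_m, E_L, V_T, Delta_T
    v_cut, v_reset = -30e-3, -70e-3                   # numerical cutoff, reset
    n = int(t_max / dt)
    xi = rng.standard_normal(n)
    sqdt = np.sqrt(dt)
    v, t_last, isis = e_l, 0.0, []
    for k in range(n):
        # EIF drift: leak + exponential spike-initiating current + drive
        drift = (-(v - e_l) + d_t * np.exp((v - v_t) / d_t) + i_mean) / tau
        v += drift * dt + (i_std / tau) * sqdt * xi[k]
        if v >= v_cut:                 # spike: record ISI and reset
            t = (k + 1) * dt
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return np.array(isis)
```

Each recorded ISI is one "sample" in the sense of the abstract; its distribution shifts with the spectral content and mean of the input current.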

  6. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there have been few studies of the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods for estimating the confidence intervals in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.
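
    The RBIAS/RRMSE simulation design is easy to reproduce in miniature. The sketch below uses the plain (not generalized) logistic distribution and a method-of-moments fit for simplicity, so it illustrates the experiment's structure rather than the paper's exact setup:

```python
import numpy as np
from scipy.stats import logistic

def mom_quantile_error(n, p=0.99, n_rep=2000, seed=1):
    """Monte Carlo RBIAS and RRMSE of a method-of-moments estimator of
    the p-quantile of a standard logistic distribution, sample size n."""
    rng = np.random.default_rng(seed)
    true_q = logistic.ppf(p)                          # loc=0, scale=1
    est = np.empty(n_rep)
    for i in range(n_rep):
        x = rng.logistic(0.0, 1.0, n)
        loc = x.mean()
        scale = x.std(ddof=1) * np.sqrt(3.0) / np.pi  # MOM: sd = pi*s/sqrt(3)
        est[i] = logistic.ppf(p, loc=loc, scale=scale)
    rbias = float(np.mean(est - true_q) / true_q)
    rrmse = float(np.sqrt(np.mean((est - true_q) ** 2)) / true_q)
    return rbias, rrmse
```

Consistent with the abstract, the error measure shrinks as the sample size grows.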

  7. Reference Intervals for Urinary Cotinine Levels and the Influence of Sampling Time and Other Predictors on Its Excretion Among Italian Schoolchildren

    Directory of Open Access Journals (Sweden)

    Carmela Protano

    2018-04-01

    Full Text Available (1) Background: Environmental Tobacco Smoke (ETS) exposure remains a public health problem worldwide. The aims are to establish urinary (u-)cotinine reference values for healthy Italian children, and to evaluate the role of the sampling time and of other factors on children's u-cotinine excretion. (2) Methods: A cross-sectional study was performed on 330 children. Information on participants was gathered by a questionnaire, and u-cotinine was determined in two samples for each child, collected during the evening and the next morning. (3) Results: Reference intervals (as the 2.5th and 97.5th percentiles of the distribution) in evening and morning samples were respectively equal to 0.98–4.29 and 0.91–4.50 µg L⁻¹ (ETS unexposed) and 1.39–16.34 and 1.49–20.95 µg L⁻¹ (ETS exposed). No statistical differences were found between median values in evening and morning samples, in either the ETS unexposed or the ETS exposed. Significant predictors of u-cotinine excretion were the ponderal status according to the body mass index of the children (β = 0.202, p-value = 0.041 for evening samples; β = 0.169, p-value = 0.039 for morning samples) and the paternal educational level (β = −0.258, p-value = 0.010 for evening samples; β = −0.013, p-value = 0.003 for morning samples). (4) Conclusions: The results evidenced the need for further studies assessing the role of confounding factors in ETS exposure, and the necessity of educational interventions on smokers to raise their awareness about ETS.

  8. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    Science.gov (United States)

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
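
    The first of the three approaches is simple to sketch: simulate the chain forward from the starting state with the Gillespie algorithm and accept the path only if it lands in the required end state at time T. This is the plain rejection sampler, slightly simpler than the "modified" variant the paper analyzes, and the generator below is illustrative:

```python
import numpy as np

def sample_ctmc_bridge(Q, a, b, T, seed=0, max_tries=100000):
    """Endpoint-conditioned CTMC path by rejection sampling: forward
    Gillespie simulation from state a, accepted only if X(T) == b.
    Returns the path as a list of (jump_time, state) pairs."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q, float)
    for _ in range(max_tries):
        t, state = 0.0, a
        path = [(0.0, a)]
        while True:
            rate = -Q[state, state]
            if rate <= 0:                  # absorbing state: no more jumps
                break
            t += rng.exponential(1.0 / rate)
            if t >= T:                     # next jump falls beyond T
                break
            probs = Q[state].copy()        # embedded jump-chain probabilities
            probs[state] = 0.0
            probs /= probs.sum()
            state = rng.choice(len(Q), p=probs)
            path.append((t, state))
        if state == b:
            return path
    raise RuntimeError("no accepted path; uniformization would be more efficient")
```

As the paper notes, rejection sampling degrades when the endpoint is improbable; direct sampling or uniformization then dominate.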

  9. The Determining Finite Automata Process

    Directory of Open Access Journals (Sweden)

    M. S. Vinogradova

    2017-01-01

    Full Text Available The theory of formal languages makes wide use of finite state automata, both in the implementation of the automata-based approach to programming and in the synthesis of logical control algorithms. To ensure unambiguous operation of the algorithms, the synthesized finite state automata must be deterministic. Within the approach to the synthesis of mobile robot controls based, for example, on the theory of formal languages, problems arise concerning the construction of various finite automata, but such finite automata, as a rule, will not be deterministic. The determinization algorithm can be applied to finite automata specified in various ways. The basic ideas of the determinization algorithm can be explained most simply using the representation of a finite automaton as a weighted directed graph. The paper deals with finite automata represented as weighted directed graphs and discusses in detail the procedure for determinizing finite automata represented in this way. It gives a detailed description of the determinization algorithm, and a large number of examples illustrate its capabilities.
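
    The core of any determinization procedure is the subset construction. A compact sketch (dictionary-based transition function; names are mine, and ε-transitions are omitted for brevity):

```python
def determinize(alphabet, delta, start, accept):
    """Subset construction: convert an NFA into an equivalent DFA.
    delta maps (state, symbol) -> set of states; missing keys mean no
    transition.  DFA states are frozensets of NFA states."""
    start_set = frozenset([start])
    seen = {start_set}
    todo = [start_set]
    dfa_delta = {}
    while todo:
        s = todo.pop()
        for c in alphabet:
            # union of all NFA moves from the states in this subset
            nxt = frozenset(q for p in s for q in delta.get((p, c), ()))
            dfa_delta[(s, c)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    dfa_accept = {s for s in seen if s & accept}
    return seen, dfa_delta, start_set, dfa_accept
```

Although the DFA can have exponentially many states in the worst case, only the reachable subsets are ever constructed here.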

  10. Basic Finite Element Method

    International Nuclear Information System (INIS)

    Lee, Byeong Hae

    1992-02-01

    This book describes the basic finite element method. It covers basic finite element concepts and data, the black box, writing of data, the definition of vectors, the definition of matrices, matrix multiplication, matrix addition, and the unit matrix; the concept of the stiffness matrix in terms of spring force and displacement; the governing equation of an elastic body; the finite element method itself; and Fortran programming, including computer organization, order of programming, data cards and Fortran cards, a finite element program, and its application to a nonelastic problem.

  11. Does the time interval between antimüllerian hormone serum sampling and initiation of ovarian stimulation affect its predictive ability in in vitro fertilization-intracytoplasmic sperm injection cycles with a gonadotropin-releasing hormone antagonist?

    DEFF Research Database (Denmark)

    Polyzos, Nikolaos P; Nelson, Scott M; Stoop, Dominic

    2013-01-01

    To investigate whether the time interval between serum antimüllerian hormone (AMH) sampling and initiation of ovarian stimulation for in vitro fertilization-intracytoplasmic sperm injection (IVF-ICSI) may affect the predictive ability of the marker for low and excessive ovarian response.

  12. Fermi-edge exciton-polaritons in doped semiconductor microcavities with finite hole mass

    Science.gov (United States)

    Pimenov, Dimitri; von Delft, Jan; Glazman, Leonid; Goldstein, Moshe

    2017-10-01

    The coupling between a 2D semiconductor quantum well and an optical cavity gives rise to combined light-matter excitations, the exciton-polaritons. These were usually measured when the conduction band is empty, making the single polariton physics a simple single-body problem. The situation is dramatically different in the presence of a finite conduction-band population, where the creation or annihilation of a single exciton involves a many-body shakeup of the Fermi sea. Recent experiments in this regime revealed a strong modification of the exciton-polariton spectrum. Previous theoretical studies concerned with nonzero Fermi energy mostly relied on the approximation of an immobile valence-band hole with infinite mass, which is appropriate for low-mobility samples only; for high-mobility samples, one needs to consider a mobile hole with large but finite mass. To bridge this gap, we present an analytical diagrammatic approach and tackle a model with short-ranged (screened) electron-hole interaction, studying it in two complementary regimes. We find that the finite hole mass has opposite effects on the exciton-polariton spectra in the two regimes: in the first, where the Fermi energy is much smaller than the exciton binding energy, excitonic features are enhanced by the finite mass. In the second regime, where the Fermi energy is much larger than the exciton binding energy, finite mass effects cut off the excitonic features in the polariton spectra, in qualitative agreement with recent experiments.

  13. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games, and their relations with the interval core for convex interval games are established.

  14. X Control Chart with Variable Sample Size and Sampling Interval

    Directory of Open Access Journals (Sweden)

    Tanti Octavia

    2000-01-01

    Full Text Available The Shewhart X chart is widely used in statistical process control for monitoring variable data and has shown good performance in detecting large mean shifts, but it is less sensitive to moderate and small process shifts. An X chart with variable sample size and sampling interval (VSSI X chart) is proposed to enhance the ability to detect moderate and small process shifts. The performance of the VSSI X chart is compared with those of the Shewhart X chart, the VSS X chart (variable sample size X chart) and the VSI X chart (variable sampling interval X chart). Performance of these control charts is presented in the form of the ATS (average time to signal), obtained both from computer simulation and from a Markov chain approach. The VSSI X chart shows better performance in detecting moderate mean shifts. The simulation is then extended to the VSSI X chart and the VSS X chart with minimum sample sizes n1=1 and n1=2.
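The adaptive rule at the heart of a VSSI chart can be sketched as follows; the warning limit w, control limit k, and both (sample size, sampling interval) pairs are hypothetical illustration values, not the settings studied in the paper:

```python
def vssi_next(z, w=1.0, k=3.0, relaxed=(5, 1.0), tightened=(25, 0.25)):
    """VSSI rule: pick the next (sample size, sampling interval) pair from
    the last standardized sample mean z. Inside the central region
    (|z| <= w) use a small sample after a long interval; in the warning
    region (w < |z| < k) use a large sample after a short interval; at or
    beyond the control limit k the chart signals an out-of-control state."""
    if abs(z) >= k:
        return "signal"
    return tightened if abs(z) > w else relaxed
```

Because sampling effort is concentrated right after suspicious observations, the average time to signal for moderate shifts drops relative to a fixed-parameter Shewhart chart, which is the effect the ATS comparison above quantifies.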

  15. Hermitian Mindlin Plate Wavelet Finite Element Method for Load Identification

    Directory of Open Access Journals (Sweden)

    Xiaofeng Xue

    2016-01-01

    Full Text Available A new Hermitian Mindlin plate wavelet element is proposed. The two-dimensional Hermitian cubic spline interpolation wavelet is substituted into the finite element functions to construct the frequency response function (FRF). The method uses the system's FRF and response spectra to calculate load spectra, and then derives the loads in the time domain via the inverse fast Fourier transform. By simulating different excitation cases, Hermitian cubic spline wavelets on the interval (HCSWI) finite elements are used for inverse load identification in the Mindlin plate. The singular value decomposition (SVD) method is adopted to solve the ill-posed inverse problem. Compared with ANSYS results, the HCSWI Mindlin plate element can accurately identify the applied load. Numerical results show that the algorithm of the HCSWI Mindlin plate element is effective. The accuracy of HCSWI is verified by comparing the FRFs of the HCSWI and ANSYS elements with experimental data. The experiment proves that load identification with the HCSWI Mindlin plate element is effective and precise when the FRF and response spectra are used to calculate the loads.

  16. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
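A minimal sketch of the general idea described above, residual-bootstrap prediction intervals around a nonparametric regressor; the k-nearest-neighbour smoother, the data, and all parameter values are illustrative assumptions, not the authors' procedure:

```python
import random

def knn_predict(xs, ys, x0, k=5):
    """Simple nonparametric regressor: mean response of the k nearest
    neighbours of x0."""
    idx = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in idx) / k

def bootstrap_pi(xs, ys, x0, alpha=0.1, B=500, k=5, rng=random):
    """Residual-bootstrap prediction interval at x0: resample residuals to
    build bootstrap responses, refit, add a resampled noise term, and take
    empirical quantiles of the resulting predictions."""
    fit = [knn_predict(xs, ys, x, k) for x in xs]
    residuals = [y - f for y, f in zip(ys, fit)]
    preds = []
    for _ in range(B):
        ys_b = [f + rng.choice(residuals) for f in fit]   # bootstrap responses
        preds.append(knn_predict(xs, ys_b, x0, k) + rng.choice(residuals))
    preds.sort()
    return preds[int(alpha / 2 * B)], preds[int((1 - alpha / 2) * B) - 1]

# Hypothetical data: a noisy line y = 2x; interval at x0 = 0.5 (true mean 1.0).
random.seed(1)
xs = [i / 40 for i in range(80)]
ys = [2 * x + random.uniform(-0.1, 0.1) for x in xs]
lo, hi = bootstrap_pi(xs, ys, 0.5)
```

An observed response falling outside (lo, hi) would then be flagged as anomalous conditioned on the input, which is the detection rule the abstract describes.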

  17. Group-invariant finite Fourier transforms

    International Nuclear Information System (INIS)

    Shenefelt, M.H.

    1988-01-01

    The computation of the finite Fourier transform of functions is one of the most used computations in crystallography. Since the Fourier transform involved is 3-dimensional, the size of the computation becomes very large even for relatively few sample points along each edge. This thesis presents a family of algorithms that reduce the computation of the Fourier transform for functions respecting the symmetries. Some properties of these algorithms are: (1) the algorithms make full use of the group of symmetries of a crystal; (2) the algorithms can be factored and combined according to the prime factorization of the number of points in the sample space; (3) the algorithms are organized into a family using the group structure of the crystallographic groups, making iterative procedures possible.

  18. Finite-time braiding exponents

    Science.gov (United States)

    Budišić, Marko; Thiffeault, Jean-Luc

    2015-08-01

    Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories are increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.

  19. A finite landscape?

    International Nuclear Information System (INIS)

    Acharya, B.S.; Douglas, M.R.

    2006-06-01

    We present evidence that the number of string/M theory vacua consistent with experiments is finite. We do this both by explicit analysis of infinite sequences of vacua and by applying various mathematical finiteness theorems. (author)

  20. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  1. Fractional finite Fourier transform.

    Science.gov (United States)

    Khare, Kedar; George, Nicholas

    2004-07-01

    We show that a fractional version of the finite Fourier transform may be defined by using prolate spheroidal wave functions of order zero. The transform is linear and additive in its index and asymptotically goes over to Namias's definition of the fractional Fourier transform. As a special case of this definition, it is shown that the finite Fourier transform may be inverted by using information over a finite range of frequencies in Fourier space, the inversion being sensitive to noise. Numerical illustrations for both forward (fractional) and inverse finite transforms are provided.

  2. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    Science.gov (United States)

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
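The dispersion measure defined above is straightforward to compute: the maximal difference between QT intervals in any two leads is simply the maximum minus the minimum across the measured leads. A sketch (the lead values are illustrative; the 80 ms cutoff is the threshold quoted in the abstract):

```python
def qt_dispersion(qt_ms):
    """QT interval dispersion: the maximal difference between the QT
    intervals measured in any two ECG leads, i.e. max minus min across
    leads (None marks leads where the QT interval could not be read)."""
    vals = [v for v in qt_ms.values() if v is not None]
    return max(vals) - min(vals)

# Hypothetical 12-lead QT measurements in milliseconds.
qt = {"I": 400, "II": 410, "III": 395, "aVR": 405, "aVL": 400, "aVF": 415,
      "V1": 390, "V2": 420, "V3": 430, "V4": 425, "V5": 410, "V6": 405}
dispersion = qt_dispersion(qt)       # 430 - 390 = 40 ms
prolonged = dispersion >= 80         # cutoff used in the study above
```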

  3. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2016-08-01

    Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS in short. Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topologies. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

  4. The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion

    International Nuclear Information System (INIS)

    Moszo, P.; Kristek, J.; Galis, M.; Pazak, P.; Balazovijech, M.

    2006-01-01

    Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes in the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is the dominant method in the modeling of earthquake motion. Moreover, it is becoming increasingly important in seismic exploration and structural modeling. At the same time we are convinced that the best time of the finite-difference method in seismology is yet to come. This monograph provides a tutorial and detailed introduction to the application of the finite-difference, finite-element, and hybrid finite-difference-finite-element methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods; we focus on those to which we have contributed. (Author)

  5. Phase transitions in finite systems

    Energy Technology Data Exchange (ETDEWEB)

    Chomaz, Ph. [Grand Accelerateur National d' Ions Lourds (GANIL), DSM-CEA / IN2P3-CNRS, 14 - Caen (France); Gulminelli, F. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire

    2002-07-01

    In this series of lectures we will first review the general theory of phase transitions in the framework of information theory and briefly address some of the well known mean field solutions of three dimensional problems. The theory of phase transitions in finite systems will then be discussed, with special emphasis on the conceptual problems linked to a thermodynamical description for small, short-lived, open systems such as metal clusters and data samples coming from nuclear collisions. The concept of negative heat capacity developed in the early seventies in the context of self-gravitating systems will be reinterpreted in the general framework of convexity anomalies of thermo-statistical potentials. The connection with the distribution of the order parameter will lead us to a definition of first order phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. Finally a careful study of the thermodynamical limit will provide a bridge with the standard theory of phase transitions and show that in a wide class of physical situations the different statistical ensembles are irreducibly inequivalent. (authors)

  6. Phase transitions in finite systems

    International Nuclear Information System (INIS)

    Chomaz, Ph.; Gulminelli, F.

    2002-01-01

    In this series of lectures we will first review the general theory of phase transitions in the framework of information theory and briefly address some of the well known mean field solutions of three dimensional problems. The theory of phase transitions in finite systems will then be discussed, with special emphasis on the conceptual problems linked to a thermodynamical description for small, short-lived, open systems such as metal clusters and data samples coming from nuclear collisions. The concept of negative heat capacity developed in the early seventies in the context of self-gravitating systems will be reinterpreted in the general framework of convexity anomalies of thermo-statistical potentials. The connection with the distribution of the order parameter will lead us to a definition of first order phase transitions in finite systems based on topology anomalies of the event distribution in the space of observations. Finally a careful study of the thermodynamical limit will provide a bridge with the standard theory of phase transitions and show that in a wide class of physical situations the different statistical ensembles are irreducibly inequivalent. (authors)

  7. High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains

    Science.gov (United States)

    Fisher, Travis C.; Carpenter, Mark H.

    2013-01-01

    Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.

  8. Event- and interval-based measurement of stuttering: a review.

    Science.gov (United States)

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

    Event- and interval-based measurements are two different ways of computing the frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors on interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge and intra-judge reliability and accuracy (i.e., correspondence between raters' scores and an established criterion). The aims were to provide a review of the reproducibility of event-based and time-interval measurement, to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on the agreement of time-interval measurement, and to determine whether it is possible to quantify the agreement between the two methodologies. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed inter- and intra-judge values greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values exceeded the reference thresholds for high reproducibility for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups than for non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. A duration of 5 s for an interval appears to be appropriate.

  9. Finite rotation shells basic equations and finite elements for Reissner kinematics

    CERN Document Server

    Wisniewski, K

    2010-01-01

    This book covers theoretical and computational aspects of non-linear shells. Several advanced topics of shell equations and finite elements - not included in standard textbooks on finite elements - are addressed, and the book includes an extensive bibliography.

  10. Estimating reliable paediatric reference intervals in clinical chemistry and haematology.

    Science.gov (United States)

    Ridefelt, Peter; Hellberg, Dan; Aldrimer, Mattias; Gustafsson, Jan

    2014-01-01

    Very few high-quality studies on paediatric reference intervals for general clinical chemistry and haematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The present review summarises current reference interval studies for common clinical chemistry and haematology analyses. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  11. Finite strain analysis of metavolcanics and metapyroclastics in gold-bearing shear zone of the Dungash area, Central Eastern Desert, Egypt

    Science.gov (United States)

    Kassem, Osama M. K.; Abd El Rahim, Said H.

    2014-11-01

    The Dungash gold mine area is situated in an EW-trending quartz vein along a shear zone in metavolcanic and metasedimentary host rocks in the Eastern Desert of Egypt. These rocks are associated with the major geologic structures, which are attributed to various deformational stages of the Neoproterozoic basement rocks. Field geology, finite strain and microstructural analyses were carried out, and the relationships between the lithological contacts and major/minor structures have been studied. The Rf/ϕ and Fry methods were applied to 5 quartz vein samples, 7 metavolcanic samples, 3 metasedimentary samples and 4 metapyroclastic samples from the Dungash area. Finite-strain data show a low to moderate degree of deformation of the metavolcano-sedimentary samples, with axial ratios in the XZ section ranging from 1.70 to 4.80 for the Rf/ϕ method and from 1.65 to 4.50 for the Fry method. We conclude that finite strain in the deformed rocks is of the same order of magnitude for all units of metavolcano-sedimentary rocks. Furthermore, the contact between the principal rock units in the Dungash area is sheared under brittle to semi-ductile deformation conditions. In this case, the accumulated finite strain is associated with the deformation during thrusting that assembled the nappe structure, indicating that the sheared contacts were formed during the accumulation of finite strain.

  12. Indirect methods for reference interval determination - review and recommendations.

    Science.gov (United States)

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.
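In its crudest form, an indirect estimate is just the central 95% interval of routine results; the sketch below shows that baseline (it deliberately omits the partitioning and mixture-decomposition steps that serious indirect methods use to limit the influence of diseased subpopulations, which the abstract identifies as the main limitation):

```python
def _quantile(sorted_vals, p):
    """Linear-interpolation quantile of a pre-sorted list, 0 <= p <= 1."""
    rank = p * (len(sorted_vals) - 1)
    i, frac = int(rank), rank - int(rank)
    if i + 1 >= len(sorted_vals):
        return float(sorted_vals[-1])
    return sorted_vals[i] * (1 - frac) + sorted_vals[i + 1] * frac

def indirect_reference_interval(values):
    """Crudest indirect estimate: the central 95% interval (2.5th to 97.5th
    percentile) of routine laboratory results. Real indirect methods add
    steps to down-weight results from diseased subpopulations, which this
    sketch omits."""
    vals = sorted(values)
    return _quantile(vals, 0.025), _quantile(vals, 0.975)

# Illustrative data: 1000 routine results.
lo, hi = indirect_reference_interval(range(1, 1001))
```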

  13. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  14. Encoding of temporal intervals in the rat hindlimb sensorimotor cortex

    Directory of Open Access Journals (Sweden)

    Eric Bean Knudsen

    2012-09-01

    Full Text Available The gradual buildup of neural activity over experimentally imposed delay periods, termed climbing activity, is well documented and is a potential mechanism by which interval time is encoded by distributed cortico-thalamico-striatal networks in the brain. Additionally, when multiple delay periods are incorporated, this activity has been shown to scale its rate of climbing in proportion to the delay period. However, it remains unclear whether these patterns of activity occur within areas of motor cortex dedicated to hindlimb movement. Moreover, the effects of behavioral training (e.g. motor tasks under different reward conditions but with similar behavioral output) are not well addressed. To address this, we recorded activity from the hindlimb sensorimotor cortex (HLSMC) of two groups of rats performing a skilled hindlimb press task. In one group, rats were trained only to make a valid press within a finite window after cue presentation for reward (non-interval trained, nIT; n=5), while rats in the second group were given duration-specific cues in which they had to make presses of either short or long duration to receive reward (interval trained, IT; n=6). Using PETH analyses, we show that cells recorded from both groups showed climbing activity during the task in similar proportions (35% IT and 47% nIT); however, only climbing activity from IT rats was temporally scaled to press duration. Furthermore, using single-trial decoding techniques (Wiener filter), we show that press duration can be inferred using climbing activity from IT animals (R=0.61) significantly better than from nIT animals (R=0.507, p<0.01), suggesting IT animals encode press duration through temporally scaled climbing activity. Thus, if temporal intervals are behaviorally relevant, then the activity of climbing neurons is temporally scaled to encode the passage of time.

  15. Finite Boltzmann schemes

    NARCIS (Netherlands)

    Sman, van der R.G.M.

    2006-01-01

    In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the

  16. Development of a partitioned finite volume-finite element fluid-structure interaction scheme for strongly-coupled problems

    CSIR Research Space (South Africa)

    Suliman, Ridhwaan

    2012-07-01

    Full Text Available Non-linear deformations are accounted for. As will be demonstrated, the finite volume approach exhibits similar disadvantages to the linear Q4 finite element formulation when undergoing bending. An enhanced finite volume approach is discussed and compared with finite...

  17. Experimental demonstration of the finite measurement time effect on the Feynman-α technique

    Energy Technology Data Exchange (ETDEWEB)

    Wallerbos, E.J.M.; Hoogenboom, J.E.

    1998-09-01

    The reactivity of a subcritical system is determined by fitting two different theoretical models to a measured Feynman-{alpha} curve. The first model is the expression usually found in the literature, which can be shown to be the expectation value of the experimental quantity if the measurement time is infinite. The second model is a new expression which is the expectation value of the experimental quantity for a finite measurement time. The reactivity inferred with the new model is seen to be independent of the length of the fitting interval, whereas the reactivity inferred with the conventional model is seen to vary. This difference demonstrates the effect of the finite measurement time. As a reference, the reactivity is also measured with the pulsed-neutron source method. It is seen to be in good agreement with the reactivity obtained with the Feynman-{alpha} technique when the new expression is applied.

  18. Ordering, symbols and finite-dimensional approximations of path integrals

    International Nuclear Information System (INIS)

    Kashiwa, Taro; Sakoda, Seiji; Zenkin, S.V.

    1994-01-01

    We derive the general form of finite-dimensional approximations of path integrals for both bosonic and fermionic canonical systems in terms of symbols of operators determined by operator ordering. We argue that for a system with a given quantum Hamiltonian such approximations are independent of the type of symbols up to terms of O(ε), where ε is the infinitesimal time interval determining the accuracy of the approximations. A new class of such approximations is found for both c-number and Grassmannian dynamical variables. The actions determined by the approximations are non-local and have no classical continuum limit except in the cases of pq- and qp-ordering. As an explicit example the fermionic oscillator is considered in detail. (author)

  19. Finite quantum field theories

    International Nuclear Information System (INIS)

    Lucha, W.; Neufeld, H.

    1986-01-01

    We investigate the relation between finiteness of a four-dimensional quantum field theory and global supersymmetry. To this end we consider the most general quantum field theory and analyse the finiteness conditions resulting from the requirement of the absence of divergent contributions to the renormalizations of the parameters of the theory. In addition to the gauge bosons, both fermions and scalar bosons turn out to be a necessary ingredient in a non-trivial finite gauge theory. In all cases discussed, the supersymmetric theory restricted by two well-known constraints on the dimensionless couplings proves to be the unique solution of the finiteness conditions. (Author)

  20. Economic Statistical Design of Variable Sampling Interval X̄ Control Chart Based on Surrogate Variable Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lee Tae-Hoon

    2016-12-01

    Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using the surrogate variable. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on a performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (variable sample size) X̄ control chart, EWMA and CUSUM charts, and so on.
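The variable-sampling-interval rule itself is simple to state: when the plotted statistic falls near the center line, the next sample is taken after the long interval; in the warning region, after the short interval; beyond the control limits, the chart signals. A minimal sketch of that logic (the limits, intervals, and shift size below are illustrative, not the paper's economically optimized design):

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, sigma, n = 0.0, 1.0, 4          # in-control mean, process std dev, sample size
L, W = 3.0, 1.0                      # control and warning limits, in sigma/sqrt(n) units
h_long, h_short = 2.0, 0.25         # long and short sampling intervals (hours)

def next_action(xbar):
    """Classify a sample mean and choose the next sampling interval."""
    z = (xbar - mu0) / (sigma / np.sqrt(n))
    if abs(z) > L:
        return "signal", None        # out-of-control signal, stop sampling
    return "continue", h_long if abs(z) <= W else h_short

t, shifted_mean = 0.0, mu0 + 1.0     # process mean shifts by one sigma at time 0
while True:
    xbar = rng.normal(shifted_mean, sigma / np.sqrt(n))
    state, interval = next_action(xbar)
    if state == "signal":
        break
    t += interval                    # wait the chosen interval before the next sample
print(f"time to signal: {t:.2f} h")
```

Because suspicious samples shorten the wait, the VSI chart detects a shift sooner on average than a fixed-interval chart with the same limits.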

  1. FINITE MARKOV CHAINS IN THE MODEL REPRESENTATION OF THE HUMAN OPERATOR ACTIVITY IN QUASI-FUNCTIONAL ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    M. V. Serzhantova

    2016-05-01

    Full Text Available Subject of Research. We analyze the application of the apparatus of finite Markov chains to simulating human operator activity in a quasi-static functional environment. It is shown that the stochastic nature of the functional environment is generated by the interval character of the human operator's properties. Method. The problem is solved in the class of regular (recurrent) finite Markov chains with three states of the human operator: favorable, median and unfavorable combinations of the values of the parameters of the mathematical model of the human operator in a quasi-static functional environment. The finite Markov chain is designed taking into account the factors of human operator tiredness and the interval character of the parameters of the model representation of his properties. The apparatus is based on a mathematical approximation of the standard curve of human operator performance over a work shift. The standard curve of human operator performance is based on extensive research experience with the functional activity of the human operator, drawing on work-day photography, action timing and ergonomic generalizations. Main Results. The apparatus of regular finite Markov chains makes it possible to evaluate correctly the performance of the human operator in a quasi-static functional environment, using the main information component of these chains, the vector of final probabilities. In addition, we built an algorithmic basis for estimating the time (time study) for transit of the human operator from an arbitrary initial functional state into the state corresponding to the vector of final probabilities, based on the analysis of the eigenvalue spectrum of the matrix of transition probabilities for a regular (recurrent) finite Markov chain. Practical Relevance. The obtained theoretical results are confirmed by illustrative examples, which
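The vector of final probabilities of a regular chain can be computed directly from its transition matrix, and the second-largest eigenvalue modulus governs how fast the chain settles into it. A sketch with an illustrative 3x3 matrix over the three operator states (favorable, median, unfavorable); the numbers are assumptions, not the paper's data:

```python
import numpy as np

# Illustrative row-stochastic transition matrix for the three operator states
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Final (stationary) probabilities: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()                       # normalize the Perron eigenvector

# Convergence toward pi is governed by the second-largest eigenvalue modulus
lam2 = sorted(np.abs(vals), reverse=True)[1]
print("final probabilities:", np.round(pi, 4), "|lambda_2| =", round(lam2, 4))
```

The distance to stationarity shrinks roughly like |lambda_2|^k after k transitions, which is the eigenvalue-spectrum argument the abstract uses for its time-study estimate.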

  2. A game theoretic approach to a finite-time disturbance attenuation problem

    Science.gov (United States)

    Rhee, Ihnseok; Speyer, Jason L.

    1991-01-01

    A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.

  3. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    Science.gov (United States)

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…
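The point such an app demonstrates, that roughly 95% of independently constructed 95% intervals cover the true parameter, can also be shown with a short simulation (a generic sketch, not the Shiny application's code; the normal population and z-based interval are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, reps = 10.0, 2.0, 100, 2000
z = 1.96                                  # ~97.5th percentile of the standard normal

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    half = z * x.std(ddof=1) / np.sqrt(n)          # half-width from the standard error
    covered += (x.mean() - half) <= mu <= (x.mean() + half)

print(f"empirical coverage: {covered / reps:.3f}")  # close to the nominal 0.95
```

Shrinking n widens the intervals but leaves the coverage near 95%, which is exactly the confidence-level/standard-error distinction students tend to conflate.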

  4. Finite fields and applications

    CERN Document Server

    Mullen, Gary L

    2007-01-01

    This book provides a brief and accessible introduction to the theory of finite fields and to some of their many fascinating and practical applications. The first chapter is devoted to the theory of finite fields. After covering their construction and elementary properties, the authors discuss the trace and norm functions, bases for finite fields, and properties of polynomials over finite fields. Each of the remaining chapters details applications. Chapter 2 deals with combinatorial topics such as the construction of sets of orthogonal latin squares, affine and projective planes, block designs, and Hadamard matrices. Chapters 3 and 4 provide a number of constructions and basic properties of error-correcting codes and cryptographic systems using finite fields. Each chapter includes a set of exercises of varying levels of difficulty which help to further explain and motivate the material. Appendix A provides a brief review of the basic number theory and abstract algebra used in the text, as well as exercises rel...

  5. On finite quantum field theories

    International Nuclear Information System (INIS)

    Rajpoot, S.; Taylor, J.G.

    1984-01-01

    The properties that make massless versions of N = 4 super Yang-Mills theory and a class of N = 2 supersymmetric theories finite are: (I) a universal coupling for the gauge and matter interactions, (II) anomaly-free representations to which the bosonic and fermionic matter belong, and (III) no charge renormalisation, i.e. β(g) = 0. It was conjectured that field theories constructed out of N = 1 matter multiplets are also finite if they too share the above properties. Explicit calculations have verified these theories to be finite up to two loops. The implications of the finiteness conditions for N = 1 finite field theories with SU(M) gauge symmetry are discussed. (orig.)

  6. Finite size scaling theory

    International Nuclear Information System (INIS)

    Rittenberg, V.

    1983-01-01

    Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method--the transfer matrix technique and the Hamiltonian formalism--are discussed in this paper. The method is presented, with equations for deriving the scaling function, critical temperature, and exponent ν. As an application of the method, a 3-state Hamiltonian with Z₃ global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and exponents ν estimated by finite-size scaling are given

  7. The delay function in finite difference models for nuclear channels thermo-hydraulic transients

    International Nuclear Information System (INIS)

    Agazzi, A.

    1977-01-01

    The study of thermo-hydraulic transients in a nuclear reactor core often requires a bi- or tri-dimensional mathematical simulation of a reactor channel. The equations involved are generally solved by means of finite-difference methods. The determination of the spatial mesh-width and the time interval is strongly conditioned by the need for good accuracy in the description of the delay function, which defines the transfer of thermal perturbations along the cooling channel. In this paper the effects of both space and time discretization on the delay function are considered, and for the classical cases of an inlet temperature step and ramp, universal functions and diagrams are given which make possible the determination of the optimal spatial mesh-width and time interval, once the requested accuracy of the model is fixed in advance

  8. Transport and dispersion of pollutants in surface impoundments: a finite element model

    International Nuclear Information System (INIS)

    Yeh, G.T.

    1980-07-01

    A surface impoundment model in finite element (SIMFE) is presented to enable the simulation of flow circulations and pollutant transport and dispersion in natural or artificial lakes, reservoirs or ponds with any number of islands. This surface impoundment model consists of two sub-models: hydrodynamic and pollutant transport models. Both submodels are simulated by the finite element method. While the hydrodynamic model is solved by the standard Galerkin finite element scheme, the pollutant transport model can be solved by any of the twelve optional finite element schemes built into the program. Theoretical approximations and the numerical algorithm of SIMFE are described. Detailed instructions for application are given and a listing of the FORTRAN IV source program is provided. Two sample problems are given. One is for an idealized system with a known solution to show the accuracy and partial validation of the models. The other is applied to Prairie Island for a set of hypothetical input data, typifying a class of problems to which SIMFE may be applied

  9. Transport and dispersion of pollutants in surface impoundments: a finite element model

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, G.T.

    1980-07-01

    A surface impoundment model in finite element (SIMFE) is presented to enable the simulation of flow circulations and pollutant transport and dispersion in natural or artificial lakes, reservoirs or ponds with any number of islands. This surface impoundment model consists of two sub-models: hydrodynamic and pollutant transport models. Both submodels are simulated by the finite element method. While the hydrodynamic model is solved by the standard Galerkin finite element scheme, the pollutant transport model can be solved by any of the twelve optional finite element schemes built into the program. Theoretical approximations and the numerical algorithm of SIMFE are described. Detailed instructions for application are given and a listing of the FORTRAN IV source program is provided. Two sample problems are given. One is for an idealized system with a known solution to show the accuracy and partial validation of the models. The other is applied to Prairie Island for a set of hypothetical input data, typifying a class of problems to which SIMFE may be applied.

  10. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  11. $\\delta$-Expansion at Finite Temperature

    OpenAIRE

    Ramos, Rudnei O.

    1996-01-01

    We apply the $\\delta$-expansion perturbation scheme to the $\\lambda \\phi^{4}$ self-interacting scalar field theory in 3+1 D at finite temperature. In the $\\delta$-expansion the interaction term is written as $\\lambda (\\phi^{2})^{ 1 + \\delta}$ and $\\delta$ is considered as the perturbation parameter. We compute within this perturbative approach the renormalized mass at finite temperature at a finite order in $\\delta$. The results are compared with the usual loop-expansion at finite temperature.

  12. Finite spatial volume approach to finite temperature field theory

    International Nuclear Information System (INIS)

    Weiss, Nathan

    1981-01-01

    A relativistic quantum field theory at finite temperature T=β⁻¹ is equivalent to the same field theory at zero temperature but with one spatial dimension of finite length β. This equivalence is discussed for scalars, for fermions, and for gauge theories. The relationship is checked for free field theory. The translation of correlation functions between the two formulations is described with special emphasis on the nonlocal order parameters of gauge theories. Possible applications are mentioned. (auth)

  13. Finite-dimensional calculus

    International Nuclear Information System (INIS)

    Feinsilver, Philip; Schott, Rene

    2009-01-01

    We discuss topics related to finite-dimensional calculus in the context of finite-dimensional quantum mechanics. The truncated Heisenberg-Weyl algebra is called a TAA algebra after Tekin, Aydin and Arik who formulated it in terms of orthofermions. It is shown how to use a matrix approach to implement analytic representations of the Heisenberg-Weyl algebra in univariate and multivariate settings. We provide examples for the univariate case. Krawtchouk polynomials are presented in detail, including a review that illustrates some curious properties of the Heisenberg-Weyl algebra, as well as an approach to computing Krawtchouk expansions. From a mathematical perspective, we are providing indications as to how to implement, in finite terms, Rota's 'finite operator calculus'.

  14. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Roberto S. Flowers-Cano

    2018-02-01

    Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP, bias-corrected bootstrap (BC, accelerated bias-corrected bootstrap (BCA and a modified version of the standard bootstrap (MSB. Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different to the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
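Of the four techniques compared, the percentile bootstrap (BP) is the simplest: resample the data with replacement, recompute the estimator, and take empirical quantiles of the bootstrap distribution as the interval. A generic sketch on synthetic annual maxima (not the authors' implementation; the Gumbel parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.gumbel(loc=100.0, scale=20.0, size=60)   # synthetic annual maxima

def percentile_ci(data, estimator, B=5000, alpha=0.05, rng=rng):
    """BP interval: empirical quantiles of the bootstrap estimates."""
    boot = np.array([estimator(rng.choice(data, size=data.size, replace=True))
                     for _ in range(B)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

lo, hi = percentile_ci(sample, np.mean)
print(f"95% BP interval for the mean: [{lo:.1f}, {hi:.1f}]")
```

The BC and BCA variants adjust these quantiles for bias and skewness of the bootstrap distribution, which is why they can behave differently in the coverage experiments the abstract describes.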

  15. Finite element design for the HPHT synthesis of diamond

    Science.gov (United States)

    Li, Rui; Ding, Mingming; Shi, Tongfei

    2018-06-01

    The finite element method is used to simulate the steady-state temperature field in a diamond synthesis cell. 2D and 3D models of the China-type cubic press with large deformation of the synthesis cell were established successfully, and have been verified by in situ measurements of the synthesis cell. The assembly design, component design and process design for the HPHT synthesis of diamond based on the finite element simulation are presented one by one. The temperature field in a high-pressure synthesis cavity for diamond production is optimized by adjusting the cavity assembly. A series of analyses of the influence of the pressure-media parameters on the temperature field is carried out by adjusting the model parameters. Furthermore, the formation mechanism of the wasteland is studied in detail. It indicates that the wasteland inevitably exists in the synthesis sample, that the growth region of hex-octahedral diamond moves from near the heater toward the center of the synthesis sample as the power increases, and that the growth conditions for high-quality diamond are located at the center of the synthesis sample. This work can offer suggestions and advice for the development and optimization of a diamond production process.

  16. Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods

    International Nuclear Information System (INIS)

    Baker, A.R.

    1982-07-01

    A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group for which analytical solutions were possible. A computer code SLAB was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms as far as h². It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion for the neutron current as far as the term in h⁴. In this case, the critical parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
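The quadratic mesh-size dependence is easy to reproduce for a bare slab: the fundamental buckling of the standard three-point finite-difference Laplacian converges to (π/a)² with an O(h²) error, so halving h cuts the error by about four. This is a sketch of the phenomenon, not the SLAB code; the slab width is illustrative.

```python
import numpy as np

def fd_buckling(a, n_interior):
    """Smallest eigenvalue of -d2/dx2 on (0, a) with zero boundary values,
    discretized with the standard three-point stencil on a uniform mesh."""
    h = a / (n_interior + 1)
    A = (np.diag(np.full(n_interior, 2.0 / h**2))
         + np.diag(np.full(n_interior - 1, -1.0 / h**2), 1)
         + np.diag(np.full(n_interior - 1, -1.0 / h**2), -1))
    return np.linalg.eigvalsh(A)[0], h

a = 100.0                           # slab width (cm), illustrative
exact = (np.pi / a) ** 2            # analytical fundamental buckling
(b1, h1), (b2, h2) = fd_buckling(a, 20), fd_buckling(a, 41)   # h1 = 2 * h2
e1, e2 = abs(b1 - exact), abs(b2 - exact)
print(f"error ratio when h is halved: {e1 / e2:.2f}")   # ~4 for O(h^2)
```

Repeating the experiment with a higher-order stencil (the h⁴ expansion mentioned above) would give a ratio near 16 instead of 4.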

  17. Speeding up the first-passage for subdiffusion by introducing a finite potential barrier

    International Nuclear Information System (INIS)

    Palyulin, Vladimir V; Metzler, Ralf

    2014-01-01

    We show that for a subdiffusive continuous time random walk with scale-free waiting time distribution the first-passage dynamics on a finite interval can be optimized by introduction of a piecewise linear potential barrier. Analytical results for the survival probability and first-passage density based on the fractional Fokker–Planck equation are shown to agree well with Monte Carlo simulation results. As an application we discuss an improved design for efficient translocation of gradient copolymers compared to homopolymer translocation in a quasi-equilibrium approximation. (fast track communications)

  18. Nilpotent p-local finite groups

    Science.gov (United States)

    Cantarero, José; Scherer, Jérôme; Viruel, Antonio

    2014-10-01

    We provide characterizations of p-nilpotency for fusion systems and p-local finite groups that are inspired by known results for finite groups. In particular, we generalize criteria by Atiyah, Brunetti, Frobenius, Quillen, Stammbach and Tate.

  19. Mathematical simulation of the thermal diffusion in dentine irradiated with Nd:YAG laser using finite difference method

    Science.gov (United States)

    Moriyama, Eduardo H.; Zangaro, Renato A.; Lobo, Paulo D. d. C.; Villaverde, Antonio G. J. B.; Watanabe-Sei, Ii; Pacheco, Marcos T. T.; Otsuka, Daniel K.

    2002-06-01

    Thermal damage to dental pulp during Nd:YAG laser irradiation has been studied by several researchers, but due to the inhomogeneous structure of dentin, laser interaction with dentin in hypersensitivity treatment is not fully understood. In this work, the heat distribution profile in human dentin samples irradiated with Nd:YAG laser was simulated at the surface and subjacent layers. Calculations were carried out using the Crank-Nicolson finite difference method. Sixteen dentin samples with 1.5 mm thickness were evenly distributed into four groups and irradiated with Nd:YAG laser pulses according to the following scheme: (I) 1 pulse of 900 mJ, (II) 2 pulses of 450 mJ, (III) 3 pulses of 300 mJ, (IV) 6 pulses of 150 mJ; corresponding to a total laser energy of 900 mJ. The pulse interval was 300 ms, the pulse duration 900 ms and the irradiated surface area 0.005 mm². Laser-induced morphological changes in dentin were observed for all irradiated samples. The heat distribution throughout the dentin layer, from the external dentin surface to the pulpal chamber wall, was calculated for each case, in order to obtain further information about the pulsed Nd:YAG laser-oral hard tissue interaction. The simulation showed significant differences in the final temperature at the pulpal chamber, depending on the exposure time and the energy contained in the laser pulse.
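The Crank-Nicolson scheme averages the explicit and implicit updates of the heat equation, giving second-order accuracy in time and unconditional stability. A generic 1D sketch (the grid, diffusivity, and time step below are illustrative, not the dentin parameters of the paper; note the Fourier number r = 2 would blow up an explicit scheme, whose limit is 0.5):

```python
import numpy as np

n, alpha, dx, dt = 50, 1.0e-3, 0.03, 1.8     # grid points, diffusivity, steps: illustrative
r = alpha * dt / dx**2                       # r = 2.0, beyond the explicit stability limit

# Crank-Nicolson: (I + (r/2) L) T_new = (I - (r/2) L) T_old, L = 1D Laplacian stencil
L = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
A = np.eye(n) + 0.5 * r * L
B = np.eye(n) - 0.5 * r * L

Lx = (n + 1) * dx                            # domain length, zero temperature at both walls
x = np.linspace(dx, n * dx, n)
T = np.sin(np.pi * x / Lx)                   # single mode whose decay rate is known
for _ in range(200):
    T = np.linalg.solve(A, B @ T)            # the implicit half keeps the step stable

print(f"peak after 200 steps: {T.max():.4f}")
```

The sine mode decays smoothly toward zero, matching the analytic rate exp(-alpha (pi/Lx)^2 t) to second order, whereas an explicit step at this r would oscillate and diverge.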

  20. Sampling theory, a renaissance compressive sensing and other developments

    CERN Document Server

    2015-01-01

    Reconstructing or approximating objects from seemingly incomplete information is a frequent challenge in mathematics, science, and engineering. A multitude of tools designed to recover hidden information are based on Shannon’s classical sampling theorem, a central pillar of Sampling Theory. The growing need to efficiently obtain precise and tailored digital representations of complex objects and phenomena requires the maturation of available tools in Sampling Theory as well as the development of complementary, novel mathematical theories. Today, research themes such as Compressed Sensing and Frame Theory re-energize the broad area of Sampling Theory. This volume illustrates the renaissance that the area of Sampling Theory is currently experiencing. It touches upon trendsetting areas such as Compressed Sensing, Finite Frames, Parametric Partial Differential Equations, Quantization, Finite Rate of Innovation, System Theory, as well as sampling in Geometry and Algebraic Topology.

  1. Population-Based Pediatric Reference Intervals in General Clinical Chemistry: A Swedish Survey.

    Science.gov (United States)

    Ridefelt, Peter

    2015-01-01

    Very few high quality studies on pediatric reference intervals for general clinical chemistry and hematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The Swedish survey included 701 healthy children. Reference intervals for general clinical chemistry and hematology were defined.

  2. Finite flavour groups of fermions

    International Nuclear Information System (INIS)

    Grimus, Walter; Ludl, Patrick Otto

    2012-01-01

    We present an overview of the theory of finite groups, with regard to their application as flavour symmetries in particle physics. In a general part, we discuss useful theorems concerning group structure, conjugacy classes, representations and character tables. In a specialized part, we attempt to give a fairly comprehensive review of finite subgroups of SO(3) and SU(3), in which we apply and illustrate the general theory. Moreover, we also provide a concise description of the symmetric and alternating groups and comment on the relationship between finite subgroups of U(3) and finite subgroups of SU(3). Although in this review we give a detailed description of a wide range of finite groups, the main focus is on the methods which allow the exploration of their different aspects. (topical review)

  3. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  4. Finite element modeling of trolling-mode AFM.

    Science.gov (United States)

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

    Trolling mode atomic force microscopy (TR-AFM) has overcome many imaging problems in liquid environments by considerably reducing the liquid-resonator interaction forces. The finite element model of the TR-AFM resonator considering the effects of fluid and nanoneedle flexibility is presented in this research, for the first time. The model is verified by ABAQUS software. The effect of installation angle of the microbeam relative to the horizon and the effect of fluid on the system behavior are investigated. Using the finite element model, frequency response curve of the system is obtained and validated around the frequency of the operating mode by the available experimental results, in air and liquid. The changes in the natural frequencies in the presence of liquid are studied. The effects of tip-sample interaction on the excitation of higher order modes of the system are also investigated in air and liquid environments. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. On sampling social networking services

    OpenAIRE

    Wang, Baiyang

    2012-01-01

    This article aims at summarizing the existing methods for sampling social networking services and proposing a faster confidence interval for related sampling methods. It also includes comparisons of common network sampling techniques.

  6. Solvable model of spin-dependent transport through a finite array of quantum dots

    International Nuclear Information System (INIS)

    Avdonin, S A; Dmitrieva, L A; Kuperin, Yu A; Sartan, V V

    2005-01-01

    The problem of spin-dependent transport of electrons through a finite array of quantum dots attached to a 1D quantum wire (spin gun) for various semiconductor materials is studied. The Breit-Fermi term for spin-spin interaction in the effective Hamiltonian of the device is shown to result in a dependence of transmission coefficient on the spin orientation. The difference of transmission probabilities for singlet and triplet channels can reach a few per cent for a single quantum dot. For several quantum dots in the array, due to interference effects, it can reach approximately 100% for some energy intervals. For the same energy intervals the conductance of the device reaches a value of ∼1 in units of e²/πℏ. As a result a model of the spin gun which transforms the spin-unpolarized electron beam into a completely polarized one is suggested

  7. CLSI-based transference of CALIPER pediatric reference intervals to Beckman Coulter AU biochemical assays.

    Science.gov (United States)

    Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow

    2015-11-01

    The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
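Transference in the CLSI EP9 sense reduces to regressing one method's results on the other across the comparison samples and mapping the established interval limits through the fitted line. An ordinary-least-squares sketch on synthetic data (the slope, intercept, and interval limits are illustrative assumptions, not CALIPER values; EP9 practice may prefer Deming or Passing-Bablok regression when both methods carry error):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic method comparison: the second platform reads ~5% higher plus a small offset
architect = rng.uniform(10, 100, 120)                  # established-method results
au = 1.05 * architect + 2.0 + rng.normal(0, 1.0, 120)  # comparison-method results

# Ordinary least-squares fit: au = b * architect + a
b, a = np.polyfit(architect, au, 1)

# Map the established reference interval limits through the regression line
lower, upper = 20.0, 80.0                              # illustrative original interval
print(f"transferred interval: [{b * lower + a:.1f}, {b * upper + a:.1f}]")
```

The transferred limits are then verified against reference samples run on the new platform, as the study does with the healthy CALIPER cohort.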

  8. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. [Purdue Univ., West Lafayette, IN (United States)

    1994-12-31

Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated and the convergence of the algorithm is established. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are given. Numerical results are presented to indicate the effectiveness of the methods.

  9. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  11. Effect of a data buffer on the recorded distribution of time intervals for random events

    Energy Technology Data Exchange (ETDEWEB)

    Barton, J C [Polytechnic of North London (UK)

    1976-03-15

    The use of a data buffer enables the distribution of the time intervals between events to be studied for times less than the recording system dead-time but the usual negative exponential distribution for random events has to be modified. The theory for this effect is developed for an n-stage buffer followed by an asynchronous recorder. Results are evaluated for the values of n from 1 to 5. In the language of queueing theory the system studied is of type M/D/1/n+1, i.e. with constant service time and a finite number of places.
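The modification of the interval distribution can be explored with a small discrete-event simulation: Poisson events feed an n-stage FIFO buffer ahead of a recorder with constant dead-time, i.e. the M/D/1/n+1 system described above. This is a sketch for exploration, not the paper's analytical treatment, and the parameter values are illustrative.

```python
import random

def simulate_buffered_recorder(n_events=100_000, rate=1.0,
                               dead_time=0.5, n_stages=3, seed=1):
    """Poisson events pass through an n-stage FIFO buffer into a recorder
    with constant service (dead) time: queueing type M/D/1/n+1.  Returns
    the recording times and the fraction of events lost to a full buffer."""
    random.seed(seed)
    t, recorder_free = 0.0, 0.0
    buffer, recorded, lost = [], [], 0
    for _ in range(n_events):
        t += random.expovariate(rate)          # next Poisson arrival
        # serve buffered events whose service could start before this arrival
        while buffer:
            start = max(recorder_free, buffer[0])
            if start > t:
                break
            recorded.append(start)
            recorder_free = start + dead_time
            buffer.pop(0)
        if len(buffer) < n_stages:
            buffer.append(t)                   # event accepted into buffer
        else:
            lost += 1                          # buffer full: event lost
    return recorded, lost / n_events

times, loss_fraction = simulate_buffered_recorder()
gaps = [b - a for a, b in zip(times, times[1:])]
```

By construction no two recordings are closer than the dead-time, while the buffer lets the recorded stream retain intervals between the original events that were shorter than the dead-time.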

  12. «Paralipomena» on uniqueness in inverse scattering from a finite number of data

    Directory of Open Access Journals (Sweden)

    R. Persico

    2007-06-01

This paper gives a new proof of the non-uniqueness of the solution when retrieving a compactly supported function from a finite number of samples of its spectrum. As will be shown, this is relevant for linear inverse scattering problems, which in many cases can be recast as the reconstruction of a compactly supported function from a finite set of samples of its spectrum. Since this reconstruction is not unique, from a practical point of view any linear inverse scattering algorithm that can be recast as a Fourier relationship between unknowns and data necessarily relies on the absence of invisible objects in the particular situation at hand.

  13. An Interval Bound Algorithm of optimizing reactor core loading pattern by using reactivity interval schema

    International Nuclear Information System (INIS)

    Gong Zhaohu; Wang Kan; Yao Dong

    2011-01-01

Highlights: → We present a new loading pattern optimization method, the Interval Bound Algorithm (IBA). → IBA directly uses the reactivity of fuel assemblies and burnable poison. → IBA can optimize fuel assembly orientation in a coupled way. → Numerical experiments show that IBA outperforms a genetic algorithm and engineers. → We devise the DDWF technique to deal with multiple objectives and constraints. - Abstract: In order to optimize the core loading pattern in nuclear power plants, this paper presents a new optimization method, the Interval Bound Algorithm (IBA). Similar to typical population-based algorithms, e.g. the genetic algorithm, IBA maintains a population of solutions and evolves them during the optimization process. IBA acquires solutions by statistically learning and sampling the control-variable intervals of the population in each iteration. The control variables are transforms of the reactivity of fuel assemblies or the worth of burnable poisons, which are the crucial heuristic information for loading pattern optimization problems. IBA can handle the relationships between dependent variables through the definition of the control variables. Based on the IBA algorithm, a parallel loading pattern optimization code, named IBALPO, has been developed. To deal with multiple objectives and constraints, Dynamic Discontinuous Weight Factors (DDWF) are used in the fitness function of IBALPO. Finally, the code system has been used to solve a realistic reloading problem, and a better pattern was obtained than those found by engineers and a genetic algorithm, demonstrating the performance of the code.

  14. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    OpenAIRE

    Yan, Ying; Suo, Bin

    2017-01-01

    Due to the complexity of system and lack of expertise, epistemic uncertainties may present in the experts’ judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. Similarity matrix b...

  15. Introduction to finite temperature and finite density QCD

    International Nuclear Information System (INIS)

    Kitazawa, Masakiyo

    2014-01-01

It has been pointed out that QCD (quantum chromodynamics) in a medium at finite temperature and density shows many phenomena similar to those of solid state physics, e.g. phase transitions. In the past ten years, very high temperature and density matter has come to be observed experimentally in heavy-ion collisions. At the same time, numerical QCD analysis at finite temperature and density has reached a quantitative level, owing to the remarkable progress of computers. This summer school lecture sets out to give not only recent results, but also the spontaneous breaking of chiral symmetry, the fundamental theory of finite temperature, and further expositions, in the following four sections. The first section is titled 'Introduction to Finite Temperature and Density QCD', with subsections 1.1 standard model and QCD, 1.2 phase transition and phase structure of QCD, 1.3 lattice QCD and thermodynamic quantities, 1.4 heavy-ion collision experiments, and 1.5 neutron stars. The second is 'Equilibrium State', with subsections 2.1 chiral symmetry, 2.2 vacuum state: BCS theory, 2.3 the NJL (Nambu-Jona-Lasinio) model, and 2.4 color superconductivity. The third is 'Static Fluctuations', with subsections 3.1 fluctuations, 3.2 moments and cumulants, 3.3 increase of fluctuations at critical points, 3.4 analysis of fluctuations by lattice QCD and Taylor expansion, and 3.5 experimental exploration of the QCD phase structure. The fourth is 'Dynamical Structure', with 4.1 linear response theory, 4.2 spectral functions, 4.3 Matsubara functions, and 4.4 analyses of dynamical structure by lattice QCD. (S. Funahashi)

  16. Active earth pressure model tests versus finite element analysis

    Science.gov (United States)

    Pietrzak, Magdalena

    2017-06-01

The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "from the soil" (the active earth pressure state). The simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.

  17. Finite element computational fluid mechanics

    International Nuclear Information System (INIS)

    Baker, A.J.

    1983-01-01

    This book analyzes finite element theory as applied to computational fluid mechanics. It includes a chapter on using the heat conduction equation to expose the essence of finite element theory, including higher-order accuracy and convergence in a common knowledge framework. Another chapter generalizes the algorithm to extend application to the nonlinearity of the Navier-Stokes equations. Other chapters are concerned with the analysis of a specific fluids mechanics problem class, including theory and applications. Some of the topics covered include finite element theory for linear mechanics; potential flow; weighted residuals/galerkin finite element theory; inviscid and convection dominated flows; boundary layers; parabolic three-dimensional flows; and viscous and rotational flows

  18. Designs and finite geometries

    CERN Document Server

    1996-01-01

    Designs and Finite Geometries brings together in one place important contributions and up-to-date research results in this important area of mathematics. Designs and Finite Geometries serves as an excellent reference, providing insight into some of the most important research issues in the field.

  19. Monte Carlo Finite Volume Element Methods for the Convection-Diffusion Equation with a Random Diffusion Coefficient

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2014-01-01

The paper presents a framework for the construction of a Monte Carlo finite volume element method (MCFVEM) for the convection-diffusion equation with a random diffusion coefficient, which is described as a random field. We first approximate the continuous stochastic field by a finite number of random variables via the Karhunen-Loève expansion and transform the initial stochastic problem into a deterministic one with a parameter in high dimensions. Then we generate independent identically distributed approximations of the solution by sampling the coefficient of the equation and employing the finite volume element variational formulation. Finally, the Monte Carlo (MC) method is used to compute the corresponding sample averages. The statistical error is estimated analytically and experimentally. A quasi-Monte Carlo (QMC) technique with Sobol sequences is also used to accelerate convergence, and experiments indicate that it can improve the efficiency of the Monte Carlo method.
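The overall pipeline (sample the random coefficient, solve a deterministic problem per sample, average) can be sketched in one dimension. Below, a hypothetical two-term truncated expansion plays the role of the Karhunen-Loève representation, and a simple flux-based discretization of -(a(x)u')' = 1 on [0,1] with u(0)=u(1)=0 stands in for the finite volume element solver; this illustrates only the Monte Carlo structure, not the paper's method.

```python
import math
import random

def solve_diffusion(a_mid, h):
    """Solve -(a u')' = 1, u(0)=u(1)=0, with the coefficient a given at
    cell midpoints, via a three-point flux discretization and the Thomas
    algorithm for the resulting tridiagonal system."""
    n = len(a_mid) - 1                       # interior unknowns
    sub = [-a_mid[i] / h**2 for i in range(1, n)]
    diag = [(a_mid[i] + a_mid[i + 1]) / h**2 for i in range(n)]
    sup = [-a_mid[i + 1] / h**2 for i in range(n - 1)]
    rhs = [1.0] * n
    for i in range(1, n):                    # forward elimination
        w = sub[i - 1] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return u

def mc_mean_midpoint(n_samples=500, n_cells=50, seed=7):
    """Monte Carlo average of the solution's midpoint value over random
    coefficients from a two-term truncated expansion (log-normal form
    keeps a(x) > 0); the expansion is a hypothetical stand-in."""
    random.seed(seed)
    h = 1.0 / n_cells
    vals = []
    for _ in range(n_samples):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        a = [math.exp(0.3 * x1 * math.sin(math.pi * (i + 0.5) * h)
                      + 0.1 * x2 * math.sin(2 * math.pi * (i + 0.5) * h))
             for i in range(n_cells)]
        u = solve_diffusion(a, h)
        vals.append(u[len(u) // 2])
    return sum(vals) / len(vals)
```

With the coefficient frozen at a = 1 the scheme reproduces the exact solution u(x) = x(1-x)/2, which is a convenient sanity check before sampling.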

  20. A hybrid finite-volume and finite difference scheme for depth-integrated non-hydrostatic model

    Science.gov (United States)

    Yin, Jing; Sun, Jia-wen; Wang, Xing-gang; Yu, Yong-hai; Sun, Zhao-chen

    2017-06-01

A depth-integrated, non-hydrostatic model with a hybrid finite difference and finite volume numerical algorithm is proposed in this paper. By utilizing a fractional step method, the governing equations are decomposed into hydrostatic and non-hydrostatic parts. The first part is solved using the finite volume conservative discretization method, whilst the latter is treated by solving discretized Poisson-type equations with the finite difference method. Second-order accuracy, both in time and space, of the finite volume scheme is achieved by using an explicit predictor-corrector step and linear reconstruction of the variable state in cells. The fluxes across the cell faces are computed in a Godunov-based manner using the MUSTA scheme. A slope and flux limiting technique is used to equip the algorithm with the total variation diminishing (TVD) property for shock-capturing purposes. Wave breaking is treated as a shock by locally switching off the non-hydrostatic pressure in the steep wave front. The model deals with moving wet/dry fronts in a simple way. Numerical experiments are conducted to verify the proposed model.

  1. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    Science.gov (United States)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
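The variance-reduction idea can be illustrated on a toy problem: a first-order Taylor model built from one cheap sensitivity derivative serves as a control variate whose mean under the input distribution is known exactly. Everything below is a hypothetical stand-in for an expensive analysis code, not the paper's structural example.

```python
import math
import random
import statistics

def f(x):
    """Stand-in for an expensive analysis output (e.g. a stress measure)."""
    return math.exp(0.5 * x)

DFDX = 0.5  # sensitivity derivative df/dx at the nominal point x0 = 0

def estimate_mean(n=5000, seed=3):
    """Plain Monte Carlo versus a derivative-based control-variate
    estimator of E[f(X)] for X ~ N(0, 1)."""
    random.seed(seed)
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    fs = [f(x) for x in xs]
    plain = statistics.mean(fs)
    # Taylor model around x0 = 0; its exact mean under N(0,1) is f(0),
    # so averaging the residual and adding f(0) back is unbiased
    resid = [fx - (f(0.0) + DFDX * x) for fx, x in zip(fs, xs)]
    controlled = statistics.mean(resid) + f(0.0)
    return plain, controlled, statistics.variance(fs), statistics.variance(resid)
```

The residual variance is what the sampling error now scales with, which is the mechanism behind the order-of-magnitude accuracy gains the abstract reports.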

  2. Generalized rate-code model for neuron ensembles with finite populations

    International Nuclear Information System (INIS)

    Hasegawa, Hideo

    2007-01-01

    We have proposed a generalized Langevin-type rate-code model subjected to multiplicative noise, in order to study stationary and dynamical properties of an ensemble containing a finite number N of neurons. Calculations using the Fokker-Planck equation have shown that, owing to the multiplicative noise, our rate model yields various kinds of stationary non-Gaussian distributions such as Γ, inverse-Gaussian-like, and log-normal-like distributions, which have been experimentally observed. The dynamical properties of the rate model have been studied with the use of the augmented moment method (AMM), which was previously proposed by the author from a macroscopic point of view for finite-unit stochastic systems. In the AMM, the original N-dimensional stochastic differential equations (DEs) are transformed into three-dimensional deterministic DEs for the means and fluctuations of local and global variables. The dynamical responses of the neuron ensemble to pulse and sinusoidal inputs calculated by the AMM are in good agreement with those obtained by direct simulation. The synchronization in the neuronal ensemble is discussed. The variabilities of the firing rate and of the interspike interval are shown to increase with increasing magnitude of multiplicative noise, which may be a conceivable origin of the observed large variability in cortical neurons

  3. Tooth Fracture Detection in Spiral Bevel Gears System by Harmonic Response Based on Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yuan Chen

    2017-01-01

Spiral bevel gears offer several advantages, such as a high contact ratio, strong carrying capacity, and smooth operation, which make them one of the most widely used components in the high-speed stage of aeronautical transmission systems. Their dynamic characteristics have been addressed by many scholars. However, spiral bevel gears, and especially tooth fracture occurrence and monitoring, have received little attention in the published literature. Therefore, this paper establishes a three-dimensional model and a finite element model of a Gleason spiral bevel gear pair. The model considers the effect of tooth root fracture on the system due to fatigue. The finite element method is used to generate the mesh, set the boundary conditions, and apply the dynamic load. The harmonic response spectra of the base under tooth fracture are calculated, and the influence of the main parameters on failure monitoring is investigated as well. The results show that the change of torque has an insignificant effect on determining whether the system has a tooth fracture. The intermediate frequency interval (200 Hz-1000 Hz) is the best interval for judging tooth fracture occurrence. The best fault test region is located in the working area where the system is going through meshing. The simulation provides a theoretical reference for spiral bevel gear system testing and fault diagnosis.

  4. VALIDATION OF CRACK INTERACTION LIMIT MODEL FOR PARALLEL EDGE CRACKS USING TWO-DIMENSIONAL FINITE ELEMENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    R. Daud

    2013-06-01

Shielding interaction effects of two parallel edge cracks in finite-thickness plates subjected to remote tension load are analyzed using a developed finite element analysis program. In the present study, the crack interaction limit is evaluated based on the fitness-for-service (FFS) code, and focus is given to the weak crack interaction region where the crack interval exceeds the length of the cracks (b > a). Crack interaction factors are evaluated based on Mode I stress intensity factors (SIFs) computed using a displacement extrapolation technique. Parametric studies involved a wide range of crack-to-width ratios (0.05 ≤ a/W ≤ 0.5) and crack interval ratios (b/a > 1). For validation, crack interaction factors are compared with single edge crack SIFs as a state of zero interaction. Within the considered range of parameters, the proposed numerical evaluation used to predict the crack interaction factor reduces the error of the existing analytical solution from 1.92% to 0.97% at higher a/W. In reference to FFS codes, the small discrepancy in the prediction of the crack interaction factor validates the reliability of the numerical model for predicting crack interaction limits under shielding interaction effects. In conclusion, the numerical model gave a successful prediction of the crack interaction limit, which can be used as a reference for the shielding orientation of other cracks.

  5. A Comparative Test of the Interval-Scale Properties of Magnitude Estimation and Case III Scaling and Recommendations for Equal-Interval Frequency Response Anchors.

    Science.gov (United States)

    Schriesheim, Chester A.; Novelli, Luke, Jr.

    1989-01-01

    Differences between recommended sets of equal-interval response anchors derived from scaling techniques using magnitude estimations and Thurstone Case III pair-comparison treatment of complete ranks were compared. Differences in results for 205 undergraduates reflected differences in the samples as well as in the tasks and computational…

  6. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Science.gov (United States)

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  7. INTERVALS OF ACTIVE PLAY AND BREAK IN BASKETBALL GAMES

    Directory of Open Access Journals (Sweden)

    Pavle Rubin

    2010-09-01

The problem addressed in this research arises from the need to decompose a basketball game. The aim was to determine the intervals of active play ("live ball", a term defined by the rules) and breaks ("dead ball", also defined by the rules) by analyzing basketball games. In order to obtain the relevant information, basketball games from five different competitions (top level of quality) were analyzed. The sample consists of seven games played in the 2006/2007 season: the NCAA play-off final game, the Adriatic League finals, the ULEB Cup final game, the Euroleague (2 games) and the NBA league (2 games). The most important finding of this research is that the average interval of active play lasts approximately 47 seconds, while the average break interval lasts approximately 57 seconds. This information is significant for coaches and should be used in planning the training process.

  8. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

  9. Relativistic finite-temperature Thomas-Fermi model

    Science.gov (United States)

    Faussurier, Gérald

    2017-11-01

    We investigate the relativistic finite-temperature Thomas-Fermi model, which has been proposed recently in an astrophysical context. Assuming a constant distribution of protons inside the nucleus of finite size avoids severe divergence of the electron density with respect to a point-like nucleus. A formula for the nuclear radius is chosen to treat any element. The relativistic finite-temperature Thomas-Fermi model matches the two asymptotic regimes, i.e., the non-relativistic and the ultra-relativistic finite-temperature Thomas-Fermi models. The equation of state is considered in detail. For each version of the finite-temperature Thomas-Fermi model, the pressure, the kinetic energy, and the entropy are calculated. The internal energy and free energy are also considered. The thermodynamic consistency of the three models is considered by working from the free energy. The virial question is also studied in the three cases as well as the relationship with the density functional theory. The relativistic finite-temperature Thomas-Fermi model is far more involved than the non-relativistic and ultra-relativistic finite-temperature Thomas-Fermi models that are very close to each other from a mathematical point of view.

  10. On interval and cyclic interval edge colorings of (3,5)-biregular graphs

    DEFF Research Database (Denmark)

    Casselgren, Carl Johan; Petrosyan, Petros; Toft, Bjarne

    2017-01-01

    A proper edge coloring f of a graph G with colors 1,2,3,…,t is called an interval coloring if the colors on the edges incident to every vertex of G form an interval of integers. The coloring f is cyclic interval if for every vertex v of G, the colors on the edges incident to v either form an inte...
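The definition translates directly into a check: at every vertex, the incident edge colors must be pairwise distinct (properness) and form a consecutive run of integers. Below is a minimal verifier for the (non-cyclic) interval condition; the representation of edges and colors is an illustrative choice, not taken from the paper.

```python
from collections import defaultdict

def is_interval_coloring(edges, color):
    """Check that an edge coloring (dict mapping edge -> positive int) is
    proper and that the colors at every vertex form an interval of
    consecutive integers."""
    at_vertex = defaultdict(list)
    for (u, v) in edges:
        c = color[(u, v)]
        at_vertex[u].append(c)
        at_vertex[v].append(c)
    for colors in at_vertex.values():
        if len(set(colors)) != len(colors):            # repeated color at v
            return False
        if max(colors) - min(colors) + 1 != len(colors):  # gap in the run
            return False
    return True
```

For the cyclic variant one would additionally accept color sets that wrap around modulo the total number of colors t.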

  11. Transition to collective oscillations in finite Kuramoto ensembles

    Science.gov (United States)

    Peter, Franziska; Pikovsky, Arkady

    2018-03-01

    We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.
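The role of the minimal order-parameter amplitude as an indicator can be explored with a direct simulation. The sketch below Euler-integrates a finite Kuramoto ensemble with Gaussian natural frequencies and tracks the minimum of |Z| after a transient; the parameter values are illustrative, not those of the paper.

```python
import cmath
import math
import random

def kuramoto_min_order(n=100, coupling=4.0, dt=0.05, steps=4000, seed=2):
    """Euler-integrate d(theta_k)/dt = omega_k + K r sin(psi - theta_k)
    for a finite ensemble and return the minimal amplitude of the complex
    Kuramoto order parameter Z = r e^{i psi} after discarding a transient."""
    random.seed(seed)
    omega = [random.gauss(0.0, 1.0) for _ in range(n)]
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    amplitudes = []
    for step in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
        if step > steps // 2:          # discard the first half as transient
            amplitudes.append(r)
    return min(amplitudes)
```

Well above the synchronization transition the minimum stays bounded away from zero (a well-defined collective mode); well below it, |Z| repeatedly dips toward the finite-size noise floor.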

  12. Supersymmetric theories and finiteness

    International Nuclear Information System (INIS)

    Helayel-Neto, J.A.

    1989-01-01

We attempt here to present a short survey of the all-order finite Lagrangian field theories known at present in four- and two-dimensional space-times. The question of the possible relevance of these ultraviolet-finite models to the formulation of consistent unified frameworks for the fundamental forces is also addressed. (author)

  13. A first course in finite elements

    CERN Document Server

    Fish, Jacob

    2007-01-01

Developed from the authors' combined total of 50 years of undergraduate and graduate teaching experience, this book presents the finite element method formulated as a general-purpose numerical procedure for solving engineering problems governed by partial differential equations. Focusing on the formulation and application of the finite element method through the integration of finite element theory, code development, and software application, the book is both introductory and self-contained, as well as being a hands-on experience for any student. This authoritative text on Finite Elements: Adopts

  14. Features of finite quantum field theories

    International Nuclear Information System (INIS)

    Boehm, M.; Denner, A.

    1987-01-01

We analyse general features of finite quantum field theories. A quantum field theory is considered to be finite if the corresponding renormalization constants, evaluated in the dimensional regularization scheme, are free from divergences in all orders of perturbation theory. We conclude that every finite renormalizable quantum field theory with fields of spin one or less must contain both scalar fields and fermion fields and nonabelian gauge fields. Some specific nonsupersymmetric models are found to be finite at the one- and two-loop level. (orig.)

  15. Pediatric Reference Intervals for Free Thyroxine and Free Triiodothyronine

    Science.gov (United States)

    Jang, Megan; Guo, Tiedong; Soldin, Steven J.

    2009-01-01

    Background The clinical value of free thyroxine (FT4) and free triiodothyronine (FT3) analysis depends on the reference intervals with which they are compared. We determined age- and sex-specific reference intervals for neonates, infants, and children 0–18 years of age for FT4 and FT3 using tandem mass spectrometry. Methods Reference intervals were calculated for serum FT4 (n = 1426) and FT3 (n = 1107) obtained from healthy children between January 1, 2008, and June 30, 2008, from Children's National Medical Center and Georgetown University Medical Center Bioanalytical Core Laboratory, Washington, DC. Serum samples were analyzed using isotope dilution liquid chromatography tandem mass spectrometry (LC/MS/MS) with deuterium-labeled internal standards. Results FT4 reference intervals were very similar for males and females of all ages and ranged between 1.3 and 2.4 ng/dL for children 1 to 18 years old. FT4 reference intervals for 1- to 12-month-old infants were 1.3–2.8 ng/dL. These 2.5 to 97.5 percentile intervals were much tighter than reference intervals obtained using immunoassay platforms 0.48–2.78 ng/dL for males and 0.85–2.09 ng/dL for females. Similarly, FT3 intervals were consistent and similar for males and females and for all ages, ranging between 1.5 pg/mL and approximately 6.0 pg/mL for children 1 month of age to 18 years old. Conclusions This is the first study to provide pediatric reference intervals of FT4 and FT3 for children from birth to 18 years of age using LC/MS/MS. Analysis using LC/MS/MS provides more specific quantification of thyroid hormones. A comparison of the ultrafiltration tandem mass spectrometric method with equilibrium dialysis showed very good correlation. PMID:19583487
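Nonparametric reference intervals of this kind are the central 95% of the ranked healthy-cohort values. A minimal sketch, using linear interpolation between closest ranks (one of several percentile conventions compatible with CLSI C28-A3); the function name is illustrative.

```python
def reference_interval(values, lo_pct=2.5, hi_pct=97.5):
    """Nonparametric reference interval: the 2.5th and 97.5th percentiles
    of a healthy reference sample, via rank-based linear interpolation."""
    s = sorted(values)
    def pct(p):
        k = (len(s) - 1) * p / 100.0     # fractional rank
        f = int(k)                        # lower neighbouring rank
        c = min(f + 1, len(s) - 1)        # upper neighbouring rank
        return s[f] + (s[c] - s[f]) * (k - f)
    return pct(lo_pct), pct(hi_pct)
```

With partitioned (age- and sex-specific) cohorts, the same computation is simply repeated per partition, which is why adequate sample sizes per group matter.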

  16. Assessing performance and validating finite element simulations using probabilistic knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, Ronald M.; Rodriguez, E. A. (Edward A.)

    2002-01-01

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.

  17. Entropy Analysis of RR and QT Interval Variability during Orthostatic and Mental Stress in Healthy Subjects

    Directory of Open Access Journals (Sweden)

    Mathias Baumert

    2014-12-01

Autonomic activity affects beat-to-beat variability of heart rate and QT interval. The aim of this study was to explore whether entropy measures are suitable to detect changes in neural outflow to the heart elicited by two different stress paradigms. We recorded short-term ECG in 11 normal subjects during an experimental protocol that involved head-up tilt and mental arithmetic stress and computed sample entropy, cross-sample entropy and causal interactions based on conditional entropy from RR and QT interval time series. Head-up tilt resulted in a significant reduction in sample entropy of RR intervals and cross-sample entropy, while mental arithmetic stress resulted in a significant reduction in coupling directed from RR to QT. In conclusion, measures of entropy are suitable to detect changes in neural outflow to the heart and decoupling of repolarisation variability from heart rate variability elicited by orthostatic or mental arithmetic stress.
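Sample entropy, one of the measures used, is compact enough to sketch: it is the negative log of the conditional probability that template sequences matching for m points (within tolerance r, Chebyshev distance) also match for m+1 points, with self-matches excluded. The quadratic-time illustration below assumes an already normalized series with r in its units; it is not the authors' implementation.

```python
import math
import random

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r): -ln of the conditional probability that templates
    matching for m points (Chebyshev distance <= r) also match for m + 1
    points; self-matches are excluded."""
    n = len(series)
    def pair_matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i],
                                                  templates[j])) <= r:
                    hits += 1
        return hits
    return -math.log(pair_matches(m + 1) / pair_matches(m))

# a smooth oscillation should score lower (more regular) than noise
random.seed(5)
regular = [math.sin(0.3 * i) for i in range(200)]
irregular = [random.uniform(-1.0, 1.0) for _ in range(200)]
```

Cross-sample entropy replaces the within-series template comparison by comparisons between two series (e.g. RR against QT), which is how the coupling reduction reported above is quantified.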

  18. Finite-Time Nonfragile Synchronization of Stochastic Complex Dynamical Networks with Semi-Markov Switching Outer Coupling

    Directory of Open Access Journals (Sweden)

    Rathinasamy Sakthivel

    2018-01-01

The problem of robust nonfragile synchronization is investigated in this paper for a class of complex dynamical networks subject to semi-Markov jumping outer coupling, time-varying coupling delay, randomly occurring gain variation, and stochastic noise over a desired finite-time interval. In particular, the network topology is assumed to follow a semi-Markov process such that it may switch from one topology to another at different instants. In this paper, the random gain variation is represented by a stochastic variable that is assumed to satisfy the Bernoulli distribution with white sequences. Based on these hypotheses and the Lyapunov-Krasovskii stability theory, a new finite-time stochastic synchronization criterion is established for the considered network in terms of linear matrix inequalities. Moreover, the control design parameters that guarantee the required criterion are computed by solving a set of linear matrix inequality constraints. An illustrative example is finally given to show the effectiveness and advantages of the developed analytical results.

  19. Comparison of sampling techniques for use in SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.

    1984-01-01

The Stephen Howe review (reference TR-STH-1) recommended the use of a deterministic generator (DG) sampling technique for sampling the input values to the SYVAC (SYstems Variability Analysis Code) program. This technique was compared with Monte Carlo simple random sampling (MC) by taking a 1000-run case of SYVAC using MC as the reference case. The results show that DG appears relatively inaccurate for most values of consequence when used with 11 sample intervals. If 22 sample intervals are used, then DG generates cumulative distribution functions that are statistically similar to the reference distribution. 400 runs of DG or MC are adequate to generate a representative cumulative distribution function. The MC technique appears to perform better than DG for the same number of runs. However, DG predicts higher doses, and in view of the importance of generating data in the high-dose region, this sampling technique with 22 sample intervals is recommended for use in SYVAC. (author)
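The comparison of sampling schemes via their output cumulative distribution functions can be illustrated with a toy model. The sketch below contrasts simple random sampling with a stratified, equal-probability-interval scheme (a stand-in for the DG technique, whose exact construction is not reproduced here) using a two-sample Kolmogorov-Smirnov distance; the consequence function is hypothetical, not SYVAC:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_consequence(u):
    # Hypothetical stand-in for a SYVAC-style model: maps a uniform
    # input to a heavy-tailed "dose" so the high-dose tail matters.
    return np.exp(4 * u)

n = 400
# Monte Carlo simple random sampling of the uniform input.
mc_inputs = rng.uniform(0, 1, n)

# DG-style sampling: split [0, 1] into k equal-probability intervals
# and take the midpoint of each (a stratified scheme; the real DG
# construction may differ).
k = 22                                    # number of sample intervals
edges = np.linspace(0, 1, k + 1)
mids = (edges[:-1] + edges[1:]) / 2
dg_inputs = np.tile(mids, n // k)         # 22 * 18 = 396 DG runs

mc_out = np.sort(toy_consequence(mc_inputs))
dg_out = np.sort(toy_consequence(dg_inputs))

# Two-sample KS distance between the empirical CDFs.
grid = np.union1d(mc_out, dg_out)
cdf_mc = np.searchsorted(mc_out, grid, side="right") / len(mc_out)
cdf_dg = np.searchsorted(dg_out, grid, side="right") / len(dg_out)
ks = np.abs(cdf_mc - cdf_dg).max()
print(f"KS distance between MC and DG output CDFs: {ks:.3f}")
```

With enough sample intervals the two empirical CDFs become statistically close, mirroring the 11-versus-22-interval finding in the abstract.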

  20. Circadian profile of QT interval and QT interval variability in 172 healthy volunteers

    DEFF Research Database (Denmark)

    Bonnemeier, Hendrik; Wiegand, Uwe K H; Braasch, Wiebke

    2003-01-01

    of sleep. QT and R-R intervals revealed a characteristic day-night-pattern. Diurnal profiles of QT interval variability exhibited a significant increase in the morning hours (6-9 AM; P ... lower at day- and nighttime. Aging was associated with an increase of QT interval mainly at daytime and a significant shift of the T wave apex towards the end of the T wave. The circadian profile of ventricular repolarization is strongly related to the mean R-R interval, however, there are significant...

  1. Strong interaction at finite temperature

    Indian Academy of Sciences (India)

Quantum chromodynamics; finite temperature; chiral perturbation theory; QCD sum rules. PACS Nos 11.10. ...

  2. Experimental and numerical investigation of low-drag intervals in turbulent boundary layer

    Science.gov (United States)

    Park, Jae Sung; Ryu, Sangjin; Lee, Jin

    2017-11-01

Wall-bounded shear flows are well known to exhibit substantial intermittency between high- and low-drag states. Recent experimental and computational studies in turbulent channel flow have identified low-drag time intervals based on wall shear stress measurements. These intervals are a weak turbulence state characterized by low-speed streaks and weak streamwise vortices. In this study, the spatiotemporal dynamics of low-drag intervals in a turbulent boundary layer is investigated using experiments and simulations. The low-drag intervals are monitored based on the wall shear stress measurement. We show that near the wall, conditionally sampled mean velocity profiles during low-drag intervals closely approach that of a low-drag nonlinear traveling wave solution as well as the so-called maximum drag reduction asymptote. This observation is consistent with the channel flow studies. Interestingly, a large spatial stretching of the streaks in the wall-normal direction is evident during low-drag intervals. Lastly, a possible connection between the mean velocity profile during low-drag intervals and the Blasius profile will be discussed. This work was supported by startup funds from the University of Nebraska-Lincoln.

  3. CLSI-based transference of the CALIPER database of pediatric reference intervals from Abbott to Beckman, Ortho, Roche and Siemens Clinical Chemistry Assays: direct validation using reference samples from the CALIPER cohort.

    Science.gov (United States)

    Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow

    2013-09-01

    The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  4. Two Scales, Hybrid Model for Soils, Involving Artificial Neural Network and Finite Element Procedure

    Directory of Open Access Journals (Sweden)

    Krasiński Marcin

    2015-02-01

Full Text Available A hybrid ANN-FE solution is presented as the result of a two-level analysis of soils: the level of a laboratory sample and the level of an engineering geotechnical problem. Engineering properties of soils (sands) are represented directly in the form of an ANN (in contrast with our former paper, where the ANN approximated constitutive relationships). Initially the ANN is trained with the Duncan formula (Duncan and Chang [2]); then it is re-trained (calibrated) with available experimental data specific to the soil considered. The obtained approximation of the constitutive parameters is used directly in the finite element method at the level of a single element, at the scale of the laboratory sample, to check the correct representation of the laboratory test. Then the finite element that was successfully tested at the level of the laboratory sample is used at the macro level to solve engineering problems involving the soil for which it was calibrated.

  5. FINELM: a multigroup finite element diffusion code

    International Nuclear Information System (INIS)

    Higgs, C.E.; Davierwalla, D.M.

    1981-06-01

FINELM is a FORTRAN IV program to solve the Neutron Diffusion Equation in X-Y, R-Z, R-theta, X-Y-Z and R-theta-Z geometries using the method of Finite Elements. Lagrangian elements of linear or higher degree to approximate the spatial flux distribution have been provided. The method of dissections, coarse mesh rebalancing and Chebyshev acceleration techniques are available. Simple user-defined input is achieved through extensive input subroutines. The input preparation is described, followed by a description of the program structure. Sample test cases are provided. (Auth.)

  6. Finiteness of quantum field theories and supersymmetry

    International Nuclear Information System (INIS)

    Lucha, W.; Neufeld, H.

    1986-01-01

    We study the consequences of finiteness for a general renormalizable quantum field theory by analysing the finiteness conditions resulting from the requirement of absence of divergent contributions to the renormalizations of the parameters of an arbitrary gauge theory. In all cases considered, the well-known two-loop finite supersymmetric theories prove to be the unique solution of the finiteness criterion. (Author)

  7. A sliding point contact model for the finite element structures code EURDYN

    International Nuclear Information System (INIS)

    Smith, B.L.

    1986-01-01

A method is developed by which sliding point contact between two moving deformable structures may be incorporated within a lumped-mass finite element formulation based on displacements. The method relies on a simple mechanical interpretation of the contact constraint in terms of equivalent nodal forces and avoids the use of nodal connectivity via a master-slave arrangement or a pseudo contact element. The methodology has been implemented into the EURDYN finite element program for the (2D axisymmetric) version coupled to the hydro code SEURBNUK. Sample calculations are presented illustrating the use of the model in various contact situations. Effects due to separation and impact of structures are also included. (author)

  8. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    Science.gov (United States)

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples, to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.

  9. Observations on finite quantum mechanics

    International Nuclear Information System (INIS)

    Balian, R.; Itzykson, C.

    1986-01-01

We study the canonical transformations of the quantum mechanics on a finite phase space. For simplicity we assume that the configuration variable takes an odd prime number 4K ± 1 of distinct values. We show that the canonical group is unitarily implemented. It admits a maximal abelian subgroup of order 4K, commuting with the finite Fourier transform F, a finite analogue of the harmonic oscillator group. This provides a natural construction of F^(1/K) and of an orthogonal basis of eigenstates of F.
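The order-4 structure of the finite Fourier transform is easy to verify numerically. The sketch below builds the unitary discrete Fourier matrix for the odd prime p = 5 (of the form 4K + 1 with K = 1) and checks that F^4 = 1 and that its eigenvalues are fourth roots of unity:

```python
import numpy as np

p = 5  # odd prime, p = 4K + 1 with K = 1
omega = np.exp(2j * np.pi / p)

# Unitary finite (discrete) Fourier transform matrix.
F = np.array([[omega ** (j * k) for k in range(p)]
              for j in range(p)]) / np.sqrt(p)

# F is unitary and of order 4: F F† = I and F^4 = I.
I = np.eye(p)
assert np.allclose(F @ F.conj().T, I)
F4 = np.linalg.matrix_power(F, 4)
assert np.allclose(F4, I)

# Its eigenvalues are therefore fourth roots of unity (+-1, +-i).
eig = np.linalg.eigvals(F)
print(np.round(eig, 6))
```

Because F^4 = 1, the eigenstates of F split into four eigenspaces labelled by ±1 and ±i, which is the starting point for constructing fractional powers such as F^(1/K).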

  10. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.

  11. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained from the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal-probability method and the shortest-interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical error estimation method obtains higher accuracy and better interval estimates than the non-hierarchical method in a stable system.
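The unconditional coverage test mentioned above can be sketched as Kupiec's likelihood-ratio test, which asks whether the empirical fraction of realized prices falling inside the forecast intervals matches the target confidence level. The simulated hit sequences and rates below are illustrative, not data from the paper:

```python
import numpy as np

def kupiec_lr_uc(hits, target_coverage):
    """Kupiec's unconditional-coverage likelihood-ratio statistic.

    hits: boolean array, True when the realized value fell inside the
    forecast interval. Under correct coverage, LR_uc ~ chi-square(1).
    """
    hits = np.asarray(hits, dtype=bool)
    n = len(hits)
    x = int(hits.sum())            # number of covered observations
    pi_hat = x / n                 # empirical coverage
    if pi_hat in (0.0, 1.0):
        return np.inf              # degenerate sample: reject outright

    def loglik(q):
        return x * np.log(q) + (n - x) * np.log(1 - q)

    return 2 * (loglik(pi_hat) - loglik(target_coverage))

rng = np.random.default_rng(1)
# Simulated hit sequence with true coverage 0.90 against a 90% target.
hits = rng.uniform(size=500) < 0.90
lr = kupiec_lr_uc(hits, 0.90)
# Compare against the 5% critical value of chi-square(1), about 3.841.
print(f"LR_uc = {lr:.3f}, reject at 5%: {lr > 3.841}")
```

A badly calibrated interval (say, actual coverage 0.70 against a 0.90 target) produces a very large LR statistic and is rejected, which is how the simulation results in the paper are screened.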

  12. The Statistics of Radio Astronomical Polarimetry: Disjoint, Superposed, and Composite Samples

    Energy Technology Data Exchange (ETDEWEB)

    Straten, W. van [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122 (Australia); Tiburzi, C., E-mail: willem.van.straten@aut.ac.nz [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2017-02-01

    A statistical framework is presented for the study of the orthogonally polarized modes of radio pulsar emission via the covariances between the Stokes parameters. To accommodate the typically heavy-tailed distributions of single-pulse radio flux density, the fourth-order joint cumulants of the electric field are used to describe the superposition of modes with arbitrary probability distributions. The framework is used to consider the distinction between superposed and disjoint modes, with particular attention to the effects of integration over finite samples. If the interval over which the polarization state is estimated is longer than the timescale for switching between two or more disjoint modes of emission, then the modes are unresolved by the instrument. The resulting composite sample mean exhibits properties that have been attributed to mode superposition, such as depolarization. Because the distinction between disjoint modes and a composite sample of unresolved disjoint modes depends on the temporal resolution of the observing instrumentation, the arguments in favor of superposed modes of pulsar emission are revisited, and observational evidence for disjoint modes is described. In principle, the four-dimensional covariance matrix that describes the distribution of sample mean Stokes parameters can be used to distinguish between disjoint modes, superposed modes, and a composite sample of unresolved disjoint modes. More comprehensive and conclusive interpretation of the covariance matrix requires more detailed consideration of various relevant phenomena, including temporally correlated subpulse modulation (e.g., jitter), statistical dependence between modes (e.g., covariant intensities and partial coherence), and multipath propagation effects (e.g., scintillation and scattering).
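The depolarization produced by a composite sample of unresolved disjoint modes can be checked with a toy simulation: two fully polarized, orthogonal modes (Stokes Q = ±1) switch faster than the instrument integrates, and the resulting sample-mean Stokes vector comes out partially polarized. The mode fractions and sample counts below are arbitrary, not drawn from any pulsar data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two disjoint, 100% polarized orthogonal modes in Stokes space:
# mode A = (I, Q, U, V) = (1, +1, 0, 0), mode B = (1, -1, 0, 0).
mode_a = np.array([1.0, 1.0, 0.0, 0.0])
mode_b = np.array([1.0, -1.0, 0.0, 0.0])

# The emission switches between modes faster than the instrument
# integrates: each recorded sample averages n_sub sub-pulses.
n_sub, n_samples = 64, 1000
p_a = 0.6                               # fraction of time in mode A
choose_a = rng.uniform(size=(n_samples, n_sub)) < p_a
stokes = np.where(choose_a[..., None], mode_a, mode_b).mean(axis=1)

# Degree of polarization of the composite sample mean.
mean_stokes = stokes.mean(axis=0)
dop = np.linalg.norm(mean_stokes[1:]) / mean_stokes[0]
print(f"degree of polarization of composite mean: {dop:.3f}")
```

Although each underlying mode is 100% polarized, the composite mean has degree of polarization near |2p_A - 1| (about 0.2 here), the depolarization signature that can be mistaken for genuine mode superposition.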

  13. Toward finite quantum field theories

    International Nuclear Information System (INIS)

    Rajpoot, S.; Taylor, J.G.

    1986-01-01

The properties that make the N=4 super Yang-Mills theory free from ultraviolet divergences are (i) a universal coupling for gauge and matter interactions, (ii) anomaly-free representations, (iii) no charge renormalization, and (iv) if masses are explicitly introduced into the theory, then these are required to satisfy the mass-squared supertrace sum rule Σ_(s=0,1/2) (-1)^(2s+1) (2s+1) M^2_s = 0. Finite N=2 theories are found to satisfy the above criteria. The missing members in this class of field theories are finite theories consisting of N=1 superfields. These theories are discussed in the light of the above finiteness properties. In particular, the representations of all simple classical groups satisfying the anomaly-free and no-charge-renormalization conditions for finite N=1 field theories are discussed. A consequence of these restrictions on the allowed representations is that an N=1 finite SU(5)-based model of strong and electroweak interactions can contain at most five conventional families of quarks and leptons, a constraint almost compatible with the one deduced from cosmological arguments. (author)

  14. Cross-Linked Fluorescent Supramolecular Nanoparticles as Finite Tattoo Pigments with Controllable Intradermal Retention Times.

    Science.gov (United States)

    Choi, Jin-Sil; Zhu, Yazhen; Li, Hongsheng; Peyda, Parham; Nguyen, Thuy Tien; Shen, Mo Yuan; Yang, Yang Michael; Zhu, Jingyi; Liu, Mei; Lee, Mandy M; Sun, Shih-Sheng; Yang, Yang; Yu, Hsiao-Hua; Chen, Kai; Chuang, Gary S; Tseng, Hsian-Rong

    2017-01-24

    Tattooing has been utilized by the medical community for precisely demarcating anatomic landmarks. This practice is especially important for identifying biopsy sites of nonmelanoma skin cancer (NMSC) due to the long interval (i.e., up to 3 months) between the initial diagnostic biopsy and surgical treatment. Commercially available tattoo pigments possess several issues, which include causing poor cosmesis, being mistaken for a melanocytic lesion, requiring additional removal procedures when no longer desired, and potentially inducing inflammatory responses. The ideal tattoo pigment for labeling of skin biopsy sites for NMSC requires (i) invisibility under ambient light, (ii) fluorescence under a selective light source, (iii) a finite intradermal retention time (ca. 3 months), and (iv) biocompatibility. Herein, we introduce cross-linked fluorescent supramolecular nanoparticles (c-FSNPs) as a "finite tattoo" pigment, with optimized photophysical properties and intradermal retention time to achieve successful in vivo finite tattooing. Fluorescent supramolecular nanoparticles encapsulate a fluorescent conjugated polymer, poly[5-methoxy-2-(3-sulfopropoxy)-1,4-phenylenevinylene] (MPS-PPV), into a core via a supramolecular synthetic approach. FSNPs which possess fluorescent properties superior to those of the free MPS-PPV are obtained through a combinatorial screening process. Covalent cross-linking of FSNPs results in micrometer-sized c-FSNPs, which exhibit a size-dependent intradermal retention. The 1456 nm sized c-FSNPs display an ideal intradermal retention time (ca. 3 months) for NMSC lesion labeling, as observed in an in vivo tattoo study. In addition, the c-FSNPs induce undetectable inflammatory responses after tattooing. We believe that the c-FSNPs can serve as a "finite tattoo" pigment to label potential malignant NMSC lesions.

  15. Generalized finite elements

    International Nuclear Information System (INIS)

    Wachspress, E.

    2009-01-01

    Triangles and rectangles are the ubiquitous elements in finite element studies. Only these elements admit polynomial basis functions. Rational functions provide a basis for elements having any number of straight and curved sides. Numerical complexities initially associated with rational bases precluded extensive use. Recent analysis has reduced these difficulties and programs have been written to illustrate effectiveness. Although incorporation in major finite element software requires considerable effort, there are advantages in some applications which warrant implementation. An outline of the basic theory and of recent innovations is presented here. (authors)

  16. Characterization of resonances using finite size effects

    International Nuclear Information System (INIS)

    Pozsgay, B.; Takacs, G.

    2006-01-01

We develop methods to extract resonance widths from finite volume spectra of (1+1)-dimensional quantum field theories. Our two methods are based on Lüscher's description of finite size corrections, and are dubbed the Breit-Wigner and the improved "mini-Hamiltonian" method, respectively. We establish a consistent framework for the finite volume description of sufficiently narrow resonances that takes into account the finite size corrections and mass shifts properly. Using predictions from form factor perturbation theory, we test the two methods against finite size data from the truncated conformal space approach, and find excellent agreement which confirms both the theoretical framework and the numerical validity of the methods. Although our investigation is carried out in 1+1 dimensions, the extension to physical 3+1 space-time dimensions appears straightforward, given sufficiently accurate finite volume spectra.

  17. Solution of the multigroup diffusion equation for two-dimensional triangular regions by finite Fourier transformation

    International Nuclear Information System (INIS)

    Takeshi, Y.; Keisuke, K.

    1983-01-01

    The multigroup neutron diffusion equation for two-dimensional triangular geometry is solved by the finite Fourier transformation method. Using the zero-th-order equation of the integral equation derived by this method, simple algebraic expressions for the flux are derived and solved by the alternating direction implicit method. In sample calculations for a benchmark problem of a fast breeder reactor, it is shown that the present method gives good results with fewer mesh points than the usual finite difference method

  18. Hypotensive response magnitude and duration in hypertensives: continuous and interval exercise.

    Science.gov (United States)

    Carvalho, Raphael Santos Teodoro de; Pires, Cássio Mascarenhas Robert; Junqueira, Gustavo Cardoso; Freitas, Dayana; Marchi-Alves, Leila Maria

    2015-03-01

Although exercise training is known to promote post-exercise hypotension, there is currently no consistent evidence about the effects of manipulating its various components (intensity, duration, rest periods, types of exercise, training methods) on the magnitude and duration of the hypotensive response. To compare the effect of continuous and interval exercise on the magnitude and duration of the hypotensive response in hypertensive patients by using ambulatory blood pressure monitoring (ABPM). The sample consisted of 20 elderly hypertensives. Each participant underwent three ABPM sessions: one control ABPM, without exercise; one ABPM after continuous exercise; and one ABPM after interval exercise. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP), heart rate (HR) and double product (DP) were monitored to check post-exercise hypotension and for comparison between each ABPM. ABPM after continuous exercise and ABPM after interval exercise showed post-exercise hypotension and a significant reduction (p < 0.05) relative to the control ABPM. Comparing ABPM after continuous and ABPM after interval exercise, a significant reduction (p < 0.05) in SBP, DBP, MAP and DP was observed in the latter. Continuous and interval exercise promote post-exercise hypotension, with reductions in SBP, DBP, MAP and DP in the 20 hours following exercise. Interval exercise causes greater post-exercise hypotension and lower cardiovascular overload compared with continuous exercise.

  19. Topological transitions at finite temperatures: A real-time numerical approach

    International Nuclear Information System (INIS)

    Grigoriev, D.Yu.; Rubakov, V.A.; Shaposhnikov, M.E.

    1989-01-01

    We study topological transitions at finite temperatures within the (1+1)-dimensional abelian Higgs model by a numerical simulation in real time. Basic ideas of the real-time approach are presented and some peculiarities of the Metropolis technique are discussed. It is argued that the processes leading to topological transitions are of classical origin; the transitions can be observed by solving the classical field equations in real time. We show that the topological transitions actually pass via the sphaleron configuration. The transition rate as a function of temperature is found to be in good agreement with the analytical predictions. No extra suppression of the rate is observed. The conditions of applicability of our approach are discussed. The temperature interval where the low-temperature broken phase persists is estimated. (orig.)

  20. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

This book begins with an introduction and table of contents. Chapter 1, finite element idealization, covers an introduction, a summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and antisymmetry, modeling guidelines, local analysis, an example and references. Chapter 2, static analysis, covers structural geometry, finite element models, analysis procedure, modeling guidelines and references. Chapter 3, dynamic analysis, covers models for dynamic analysis, dynamic analysis procedures and modeling guidelines.

  1. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
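To make the Mann-Whitney connection concrete, the sketch below computes the AUC from the Mann-Whitney statistic and attaches a Hanley-McNeil-style normal-approximation interval. This is one illustrative method, chosen by us; it is not necessarily the variant the paper recommends, and the binormal sample sizes are arbitrary:

```python
import numpy as np

def auc_mann_whitney_ci(x_neg, x_pos):
    """AUC via the Mann-Whitney statistic with a Hanley-McNeil-style
    normal-approximation 95% CI (truncated to [0, 1])."""
    x_neg = np.asarray(x_neg, float)
    x_pos = np.asarray(x_pos, float)
    m, n = len(x_neg), len(x_pos)
    diff = x_pos[:, None] - x_neg[None, :]
    # AUC = P(marker_pos > marker_neg), ties counted half.
    auc = ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (m * n)
    # Hanley-McNeil variance approximation.
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc) + (n - 1) * (q1 - auc ** 2)
           + (m - 1) * (q2 - auc ** 2)) / (m * n)
    half = 1.96 * np.sqrt(max(var, 0.0))
    return auc, max(auc - half, 0.0), min(auc + half, 1.0)

rng = np.random.default_rng(3)
# Small-sample binormal example: 15 controls, 15 cases.
controls = rng.normal(0.0, 1.0, 15)
cases = rng.normal(1.5, 1.0, 15)
auc, lo, hi = auc_mann_whitney_ci(controls, cases)
print(f"AUC = {auc:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

With only 15 observations per group the interval is wide, which is exactly the regime where the choice among the 29 CI methods matters.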

  2. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  3. Supersymmetry at finite temperature

    International Nuclear Information System (INIS)

    Clark, T.E.; Love, S.T.

    1983-01-01

    Finite-temperature supersymmetry (SUSY) is characterized by unbroken Ward identities for SUSY variations of ensemble averages of Klein-operator inserted imaginary time-ordered products of fields. Path-integral representations of these products are defined and the Feynman rules in superspace are given. The finite-temperature no-renormalization theorem is derived. Spontaneously broken SUSY at zero temperature is shown not to be restored at high temperature. (orig.)

  4. An introduction to finite tight frames

    CERN Document Server

    Waldron, Shayne F D

    2018-01-01

    This textbook is an introduction to the theory and applications of finite tight frames, an area that has developed rapidly in the last decade. Stimulating much of this growth are the applications of finite frames to diverse fields such as signal processing, quantum information theory, multivariate orthogonal polynomials, and remote sensing. Key features and topics: * First book entirely devoted to finite frames * Extensive exercises and MATLAB examples for classroom use * Important examples, such as harmonic and Heisenberg frames, are presented in preliminary chapters, encouraging readers to explore and develop an intuitive feeling for tight frames * Later chapters delve into general theory details and recent research results * Many illustrations showing the special aspects of the geometry of finite frames * Provides an overview of the field of finite tight frames * Discusses future research directions in the field Featuring exercises and MATLAB examples in each chapter, the book is well suited as a textbook ...

  5. Non-linear finite element modeling

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard

    The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...

  6. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
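The direct-removal flavor of the growth estimator can be sketched as follows. The adjustment here (dropping a small, predetermined number of zeros before forming a normal-style interval) is a simplified, hypothetical version of the estimators in the paper, shown only to illustrate the idea:

```python
import numpy as np

def normal_ci(x, z=1.96):
    """Standard normal-approximation CI for the mean."""
    x = np.asarray(x, float)
    half = z * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - half, x.mean() + half

def growth_ci_remove_zeros(x, k=1, z=1.96):
    """Toy growth-estimator CI: drop up to k zeros (if present) before
    the normal-style interval, shifting the interval upward to offset
    the heavy right tail. The exact adjustment in the paper differs."""
    x = np.asarray(x, float)
    zeros = np.flatnonzero(x == 0)[:k]
    trimmed = np.delete(x, zeros)
    return normal_ci(trimmed, z)

rng = np.random.default_rng(11)
# Highly dispersed negative binomial: many zeros, heavy right tail.
data = rng.negative_binomial(n=0.2, p=0.05, size=40).astype(float)
print("zeros in sample:", int((data == 0).sum()))
print("plain CI: ", normal_ci(data))
print("growth CI:", growth_ci_remove_zeros(data, k=2))
```

Removing zeros raises the interval's center, counteracting the downward bias of the normal approximation under extreme dispersion; the estimator does not require the nuisance dispersion parameter.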

  7. The finite-dimensional Freeman thesis.

    Science.gov (United States)

    Rudolph, Lee

    2008-06-01

    I suggest a modification--and mathematization--of Freeman's thesis on the relations among "perception", "the finite brain", and "the world", based on my recent proposal that the theory of finite topological spaces is both an adequate and a natural mathematical foundation for human psychology.

  8. Quantification of transuranic elements by time interval correlation spectroscopy of the detected neutrons

    Science.gov (United States)

    Baeten; Bruggeman; Paepen; Carchon

    2000-03-01

    The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
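A Rossi-alpha time-interval distribution, on which TICS is based, tallies the time differences between a trigger event and all later detections inside a fixed window. A minimal sketch under the assumption of a sorted list of detection timestamps (illustrative only, not the TICS algorithm itself):

```python
def rossi_alpha_intervals(timestamps, window):
    """Collect, for each detected event, the time differences to all
    later events within `window` -- the population whose histogram is
    the Rossi-alpha time-interval distribution.
    Assumes `timestamps` is sorted in increasing order."""
    intervals = []
    for i, t0 in enumerate(timestamps):
        for t1 in timestamps[i + 1:]:
            dt = t1 - t0
            if dt > window:
                break  # later events are even further away
            intervals.append(dt)
    return intervals
```

Histogramming the returned intervals would give the empirical distribution from which correlated (fission-chain) and accidental coincidences are separated.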

  9. Comparing confidence intervals for Goodman and Kruskal’s gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the
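The point estimate underlying all the compared intervals is Goodman and Kruskal's gamma, computed from concordant and discordant pairs. A small sketch of that statistic (the CI constructions studied in the paper are not reproduced here):

```python
def gk_gamma(x, y):
    """Goodman and Kruskal's gamma for paired ordinal observations:
    (concordant - discordant) / (concordant + discordant).
    Tied pairs are ignored, as in the standard definition."""
    conc = disc = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (conc + disc)
```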

  10. Sound radiation from finite surfaces

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2013-01-01

    A method to account for the effect of finite size in the acoustic power radiation problem of planar surfaces, using spatial windowing, is developed. Cremer and Heckl present a very useful formula for the power radiated from a structure using the spatially Fourier transformed velocity, which, combined with spatial windowing of plane waves, can be used to take the finite size into account. In the present paper, this is developed by means of a radiation impedance for finite surfaces, which is used instead of the radiation impedance for infinite surfaces. In this way, the spatial windowing is included...

  11. Photon propagators at finite temperature

    International Nuclear Information System (INIS)

    Yee, J.H.

    1982-07-01

    We have used the real time formalism to compute the one-loop finite temperature corrections to the photon self energies in spinor and scalar QED. We show that, for a real photon, only the transverse components develop the temperature-dependent masses, while, for an external static electromagnetic field applied to the finite temperature system, only the static electric field is screened by thermal fluctuations. After showing how to compute systematically the imaginary parts of the finite temperature Green functions, we have attempted to give a microscopic interpretation of the imaginary parts of the self energies. (author)

  12. Axial anomaly at finite temperature

    International Nuclear Information System (INIS)

    Chaturvedi, S.; Gupte, Neelima; Srinivasan, V.

    1985-01-01

    The Jackiw-Bardeen-Adler anomaly for QED₄ and QED₂ is calculated at finite temperature. It is found that the anomaly is independent of temperature. Ishikawa's method [1984, Phys. Rev. Lett. vol. 53, 1615] for calculating the quantised Hall effect is extended to finite temperature. (author)

  13. Collaborative Systems – Finite State Machines

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2011-01-01

    Full Text Available In this paper finite state machines are defined and formalized. Collaborative banking systems are presented, and their correspondence with finite state machines is established. The role of finite state machines in complexity analysis is highlighted, and operations on very large virtual databases are performed by treating them as finite state machines. The state diagram is built, and the transitions of commands and documents between the states of the collaborative system are presented. The paper analyzes data sets from the Collaborative Multicash Servicedesk application and performs a combined analysis in order to determine certain statistics. Indicators are obtained, such as the number of requests by category and the load degree of an agent in the collaborative system.
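A deterministic finite state machine of the kind formalized above can be sketched as a transition table keyed by (state, event) pairs. The request-lifecycle states and events below are hypothetical, chosen only to resemble a service-desk workflow:

```python
class FiniteStateMachine:
    """Minimal deterministic FSM: transitions map (state, event) to the
    next state; events with no defined transition are rejected."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {key}")
        self.state = self.transitions[key]
        return self.state

# Hypothetical request lifecycle for a service-desk-style system
fsm = FiniteStateMachine("new", {
    ("new", "assign"): "assigned",
    ("assigned", "resolve"): "resolved",
    ("resolved", "close"): "closed",
})
```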

  14. Robust mixed finite element methods to deal with incompressibility in finite strain in an industrial framework

    International Nuclear Information System (INIS)

    Al-Akhrass, Dina

    2014-01-01

    Simulations in solid mechanics exhibit several difficulties, such as dealing with incompressibility, with nonlinearities due to finite strains, with contact laws, or with constitutive laws. The basic motivation of our work is to propose efficient finite element methods capable of dealing with incompressibility in the finite strain context, using elements of low order. During the last three decades, many approaches have been proposed in the literature to overcome the incompressibility problem. Among them, mixed formulations offer an interesting theoretical framework. In this work, a three-field mixed formulation (displacement, pressure, volumetric strain) is investigated. In some cases, this formulation can be condensed into a two-field (displacement-pressure) mixed formulation. However, it is well known that the discrete problem given by the Galerkin finite element technique does not inherit the 'inf-sup' stability condition from the continuous problem. Hence, the interpolation orders in displacement and pressure have to be chosen so as to satisfy the Brezzi-Babuška stability condition when using Galerkin approaches. Two possibilities are considered: to use stable finite elements satisfying this requirement, or to use finite elements that do not satisfy this condition and to add terms stabilizing the FE Galerkin formulation. The latter approach allows the use of equal-order interpolation. In this work, stable finite elements P2/P1 and P2/P1/P1 are used as references and compared to P1/P1 and P1/P1/P1 formulations stabilized with a bubble function or with a VMS (Variational Multi-Scale) method based on a sub-grid space orthogonal to the FE space. A finite strain model based on logarithmic strain is selected. This approach is extended to three- and two-field mixed formulations with stable or stabilized elements. These approaches are validated on academic cases and used on industrial cases. (author)

  15. Using the confidence interval confidently.

    Science.gov (United States)

    Hazra, Avijit

    2017-10-01

    Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI) which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is how much is the actual change or difference. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
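The general form stated above, CI = point estimate ± margin of error with margin = z × standard error, can be made concrete for a sample mean. A minimal sketch (z = 1.96 for the usual 95% level; for small samples a t critical value would be more appropriate):

```python
import math

def mean_ci(sample, z=1.96):
    """Normal-approximation CI for a mean:
    point estimate +/- z * standard error."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    se = sd / math.sqrt(n)          # standard error of the mean
    return mean - z * se, mean + z * se
```

As the abstract notes, a 99% CI (z ≈ 2.576) computed from the same sample is wider than the 95% one.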

  16. Finite p′-nilpotent groups. II

    Directory of Open Access Journals (Sweden)

    S. Srinivasan

    1987-01-01

    Full Text Available In this paper we continue the study of finite p′-nilpotent groups that was started in the first part of this paper. Here we give a complete characterization of all finite groups that are not p′-nilpotent but all of whose proper subgroups are p′-nilpotent.

  17. Electrical machine analysis using finite elements

    CERN Document Server

    Bianchi, Nicola

    2005-01-01

    OUTLINE OF ELECTROMAGNETIC FIELDS: Vector Analysis; Electromagnetic Fields; Fundamental Equations Summary; References. BASIC PRINCIPLES OF FINITE ELEMENT METHODS: Introduction; Field Problems with Boundary Conditions; Classical Method for the Field Problem Solution; The Classical Residual Method (Galerkin's Method); The Classical Variational Method (Rayleigh-Ritz's Method); The Finite Element Method; References. APPLICATIONS OF THE FINITE ELEMENT METHOD TO TWO-DIMENSIONAL FIELDS: Introduction; Linear Interpolation of the Function f; Application of the Variational Method; Simple Descriptions of Electromagnetic Fields; Appendix: I

  18. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r , for any large real number r . Then, for a sequence of processes each labeled by an integer size N , we compare how the classical and quantum required memories scale with N . In this setting, since both memories can diverge as N →∞ , the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N →∞ , but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  19. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  20. What is finiteness? (Abhishek Banerjee) (Indian Institute of Science)

    Indian Academy of Sciences (India)

    Do finites get enough respect? • Finiteness is easy, no? • Just count whether 1, 2, 3,... • But then we miss out on the true richness of the concept of finiteness. • There's more finiteness around. In fact, finiteness is what helps us really understand things.

  1. Kirkwood-Buff integrals of finite systems: shape effects

    Science.gov (United States)

    Dawass, Noura; Krüger, Peter; Simon, Jean-Marc; Vlugt, Thijs J. H.

    2018-06-01

    The Kirkwood-Buff (KB) theory provides an important connection between microscopic density fluctuations in liquids and macroscopic properties. Recently, Krüger et al. derived equations for KB integrals for finite subvolumes embedded in a reservoir. Using molecular simulation of finite systems, KB integrals can be computed either from density fluctuations inside such subvolumes, or from integrals of radial distribution functions (RDFs). Here, based on the second approach, we establish a framework to compute KB integrals for subvolumes with arbitrary convex shapes. This requires a geometric function w(x) which depends on the shape of the subvolume, and the relative position inside the subvolume. We present a numerical method to compute w(x) based on Umbrella Sampling Monte Carlo (MC). We compute KB integrals of a liquid with a model RDF for subvolumes with different shapes. KB integrals approach the thermodynamic limit in the same way: for sufficiently large volumes, KB integrals are a linear function of area over volume, which is independent of the shape of the subvolume.
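For the spherical subvolume, the finite-volume KB integral weights the RDF correction g(r) − 1 by a geometric function; for a sphere of radius R this function is commonly written w(x) = 1 − 3x/2 + x³/2 with x = r/(2R). A numerical sketch under that assumption, with a purely illustrative model RDF (not the one used in the paper):

```python
import math

def kb_integral_sphere(g, R, dr=1e-3):
    """Finite-volume Kirkwood-Buff integral for a spherical subvolume of
    radius R: integrate (g(r)-1) * w(x) over the sphere, where
    w(x) = 1 - 3x/2 + x^3/2 and x = r/(2R) (assumed geometric weight)."""
    total, r = 0.0, dr / 2  # midpoint rule
    while r < 2 * R:
        x = r / (2 * R)
        w = 1.0 - 1.5 * x + 0.5 * x ** 3
        total += (g(r) - 1.0) * w * 4.0 * math.pi * r ** 2 * dr
        r += dr
    return total

# Model RDF: a damped oscillation around 1 (illustrative only)
g_model = lambda r: 1.0 + math.exp(-r) * math.cos(2.0 * math.pi * r)
```

For an ideal-gas-like RDF, g ≡ 1, the integral vanishes for any subvolume size, which is a convenient sanity check.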

  2. Synchronizing data from irregularly sampled sensors

    Science.gov (United States)

    Uluyol, Onder

    2017-07-11

    A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
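The re-sampling step described in the claim (irregular measurements brought onto a common, higher-rate grid) is often done with linear interpolation. A minimal sketch, assuming sorted timestamps; this is a generic technique, not the patented method itself:

```python
def resample_uniform(times, values, rate):
    """Linearly interpolate irregularly sampled (time, value) pairs onto
    a uniform grid at `rate` samples per unit time.
    Assumes `times` is sorted and has at least two entries."""
    step = 1.0 / rate
    out_t, out_v = [], []
    t, j = times[0], 0
    while t <= times[-1]:
        while times[j + 1] < t:   # advance to the bracketing segment
            j += 1
        t0, t1 = times[j], times[j + 1]
        frac = (t - t0) / (t1 - t0)
        out_t.append(t)
        out_v.append(values[j] + frac * (values[j + 1] - values[j]))
        t += step
    return out_t, out_v
```

Applying the same grid to every sensor yields measurement streams that are synchronized in time, as in the abstract.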

  3. Finite Metric Spaces of Strictly negative Type

    DEFF Research Database (Denmark)

    Hjorth, Poul G.

    If a finite metric space is of strictly negative type then its transfinite diameter is uniquely realized by an infinite extent ("load vector"). Finite metric spaces that have this property include all trees, and all finite subspaces of Euclidean and Hyperbolic spaces. We prove that if the distance...

  4. The King model for electrons in a finite-size ultracold plasma

    Energy Technology Data Exchange (ETDEWEB)

    Vrinceanu, D; Collins, L A [Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Balaraman, G S [School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States)

    2008-10-24

    A self-consistent model for a finite-size non-neutral ultracold plasma is obtained by extending a conventional model of globular star clusters. This model describes the dynamics of electrons at quasi-equilibrium trapped within the potential created by a cloud of stationary ions. A random sample of electron positions and velocities can be generated with the statistical properties defined by this model.

  5. Quantization and representation theory of finite W algebras

    International Nuclear Information System (INIS)

    Boer, J. de; Tjin, T.

    1993-01-01

    In this paper we study the finitely generated algebras underlying W algebras. These so-called 'finite W algebras' are constructed as Poisson reductions of Kirillov Poisson structures on simple Lie algebras. The inequivalent reductions are labeled by the inequivalent embeddings of sl 2 into the simple Lie algebra in question. For arbitrary embeddings a coordinate free formula for the reduced Poisson structure is derived. We also prove that any finite W algebra can be embedded into the Kirillov Poisson algebra of a (semi)simple Lie algebra (generalized Miura map). Furthermore it is shown that generalized finite Toda systems are reductions of a system describing a free particle moving on a group manifold and that they have finite W symmetry. In the second part we BRST quantize the finite W algebras. The BRST cohomology is calculated using a spectral sequence (which is different from the one used by Feigin and Frenkel). This allows us to quantize all finite W algebras in one stroke. Examples are given. In the last part of the paper we study the representation theory of finite W algebras. It is shown, using a quantum inversion of the generalized Miura transformation, that the representations of finite W algebras can be constructed from the representations of a certain Lie subalgebra of the original simple Lie algebra. As a byproduct of this we are able to construct the Fock realizations of arbitrary finite W algebras. (orig.)

  6. Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

    International Nuclear Information System (INIS)

    Bogachev, Mikhail I; Bunde, Armin; Kireenkov, Igor S; Nifontov, Eugene M

    2009-01-01

    We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function P_Q(r) of the return intervals. As a consequence, the probability W_Q(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
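Extracting the return intervals whose density P_Q(r) is studied above amounts to recording the number of steps between successive exceedances of the threshold Q. A minimal sketch of that extraction step (the power-law fitting and prediction scheme are not reproduced here):

```python
def return_intervals(series, Q):
    """Return the step counts between successive values of `series`
    that exceed the threshold Q (the return-interval sample whose
    histogram estimates the density P_Q(r))."""
    intervals, last = [], None
    for i, x in enumerate(series):
        if x > Q:
            if last is not None:
                intervals.append(i - last)
            last = i
    return intervals
```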

  7. Implicit and fully implicit exponential finite difference methods

    Indian Academy of Sciences (India)

    Burgers' equation; exponential finite difference method; implicit exponential finite difference method; ... This paper describes two new techniques which give improved exponential finite difference solutions of Burgers' equation. ...

  8. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but does not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to a flowing interval, the true flowing interval spacing could be

  9. Determination of the postmortem interval by Laser Induced Breakdown Spectroscopy using swine skeletal muscles

    International Nuclear Information System (INIS)

    Marín-Roldan, A.; Manzoor, S.; Moncayo, S.; Navarro-Villoslada, F.; Izquierdo-Hornillos, R.C.; Caceres, J.O.

    2013-01-01

    Skin and muscle samples are useful to discriminate individuals as well as their postmortem interval (PMI) in crime scenes and natural or caused disasters. In this study, a simple and fast method based on Laser Induced Breakdown Spectroscopy (LIBS) has been developed to estimate PMI using swine skeletal muscle samples. Environmental conditions (moisture, temperature, fauna, etc.) having strong influence on the PMI determination were considered. Time-dependent changes in the emission intensity ratio for Mg, Na, Hα and K were observed, as a result of the variations in their concentration due to chemical reactions in tissues and were correlated with PMI. This relationship, which has not been reported previously in the forensic literature, offers a simple and potentially valuable means of estimating the PMI. - Highlights: • LIBS has been applied for Postmortem Interval estimation. • Environmental and sample storage conditions have been considered. • Significant correlation of elemental emission intensity with PMI has been observed. • Pig skeletal muscle samples have been used

  10. Determination of the postmortem interval by Laser Induced Breakdown Spectroscopy using swine skeletal muscles

    Energy Technology Data Exchange (ETDEWEB)

    Marín-Roldan, A.; Manzoor, S.; Moncayo, S.; Navarro-Villoslada, F.; Izquierdo-Hornillos, R.C.; Caceres, J.O., E-mail: jcaceres@quim.ucm.es

    2013-10-01

    Skin and muscle samples are useful to discriminate individuals as well as their postmortem interval (PMI) in crime scenes and natural or caused disasters. In this study, a simple and fast method based on Laser Induced Breakdown Spectroscopy (LIBS) has been developed to estimate PMI using swine skeletal muscle samples. Environmental conditions (moisture, temperature, fauna, etc.) having strong influence on the PMI determination were considered. Time-dependent changes in the emission intensity ratio for Mg, Na, Hα and K were observed, as a result of the variations in their concentration due to chemical reactions in tissues and were correlated with PMI. This relationship, which has not been reported previously in the forensic literature, offers a simple and potentially valuable means of estimating the PMI. - Highlights: • LIBS has been applied for Postmortem Interval estimation. • Environmental and sample storage conditions have been considered. • Significant correlation of elemental emission intensity with PMI has been observed. • Pig skeletal muscle samples have been used.

  11. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop explicitly two-sorted point-interval temporal logical framework...... whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze...

  12. High-intensity interval training: Modulating interval duration in overweight/obese men.

    Science.gov (United States)

    Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L

    2015-05-01

    High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg·m⁻²] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for 1MIN (3.4 ml·kg⁻¹·min⁻¹) or 2MIN groups (2.7 ml·kg⁻¹·min⁻¹). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.

  13. Reference intervals and variation for urinary epinephrine, norepinephrine and cortisol in healthy men and women in Denmark

    DEFF Research Database (Denmark)

    Hansen, Åse Marie; Garde, A H; Christensen, J M

    2001-01-01

    Reference intervals for urinary epinephrine, norepinephrine and cortisol in 120 healthy individuals performing their routine work were established according to the International Union of Pure and Applied Chemistry (IUPAC) and the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) for use in the risk assessment of exposure to occupational stress. Reference intervals were established for three different times of the day: in morning samples (05.45-07.15) the limit of detection (LOD) was 2.10 micromol epinephrine/mol creatinine (82 women) and 2.86 micromol epinephrine/mol creatinine (37 men), and the reference interval was 3.6-29.1 micromol norepinephrine/mol creatinine and 2.3-52.8 micromol cortisol/mol creatinine (119 women and men); in afternoon samples (15.30-18.30) the reference interval was 0.64-10.8 micromol epinephrine/mol creatinine (82 women), 1.20-11.2 micromol
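Reference intervals of this kind are commonly taken as the central 95% of a healthy-reference sample, i.e. the 2.5th to 97.5th percentiles. A generic nonparametric sketch of that calculation (illustrative only; the study's IUPAC/IFCC procedure involves more than this, e.g. outlier handling and partitioning by sex and time of day):

```python
def reference_interval(values, lower=2.5, upper=97.5):
    """Nonparametric reference interval: central 95% of the sample,
    via simple linear-interpolation percentiles."""
    v = sorted(values)

    def pct(p):
        k = (len(v) - 1) * p / 100.0
        f = int(k)                      # lower bracketing index
        c = min(f + 1, len(v) - 1)      # upper bracketing index
        return v[f] + (k - f) * (v[c] - v[f])

    return pct(lower), pct(upper)
```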

  14. Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate

    Science.gov (United States)

    Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel

    1994-01-01

    This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
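The one-dimensional chain partitioning mentioned above assigns contiguous runs of cells to processors so that each carries roughly the same load. A simple greedy heuristic that cuts whenever the running load reaches the ideal average (a sketch of the general technique, not the paper's exact remapping policy):

```python
def chain_partition(weights, parts):
    """Greedy 1-D chain partitioning: split a sequence of per-cell
    workloads into `parts` contiguous chunks of roughly equal load.
    Returns a list of (start, end) index ranges, end exclusive."""
    target = sum(weights) / parts       # ideal load per processor
    bounds, load = [0], 0.0
    for i, w in enumerate(weights):
        load += w
        if load >= target and len(bounds) < parts:
            bounds.append(i + 1)        # cut after cell i
            load = 0.0
    bounds.append(len(weights))
    return [tuple(bounds[k:k + 2]) for k in range(len(bounds) - 1)]
```

In a DSMC setting the weights would be per-cell particle counts, re-evaluated at each remapping interval to keep the load balanced as the flow evolves.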

  15. Performance and scalability of finite-difference and finite-element wave-propagation modeling on Intel's Xeon Phi

    NARCIS (Netherlands)

    Zhebel, E.; Minisini, S.; Kononov, A.; Mulder, W.A.

    2013-01-01

    With the rapid developments in parallel compute architectures, algorithms for seismic modeling and imaging need to be reconsidered in terms of parallelization. The aim of this paper is to compare scalability of seismic modeling algorithms: finite differences, continuous mass-lumped finite elements

  16. Effect size, confidence intervals and statistical power in psychological research.

    Directory of Open Access Journals (Sweden)

    Téllez A.

    2015-07-01

    Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.
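Reporting an effect size together with its confidence interval, as recommended above, can be illustrated with Cohen's d for two independent groups. The standard-error formula below is the common large-sample approximation for d; this is a generic sketch, not a method from the paper:

```python
import math

def cohens_d_ci(a, b, z=1.96):
    """Cohen's d (pooled-SD standardized mean difference) with an
    approximate normal CI using the common large-sample SE formula:
    SE(d) ~ sqrt((n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)))."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    s1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    s2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

A CI for d that excludes zero conveys both statistical significance and the plausible magnitude of the effect, which a bare p value does not.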

  17. Some Characterizations of Convex Interval Games

    NARCIS (Netherlands)

    Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

    2008-01-01

    This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.

  18. An introduction to finite projective planes

    CERN Document Server

    Albert, Abraham Adrian

    2015-01-01

    Geared toward both beginning and advanced undergraduate and graduate students, this self-contained treatment offers an elementary approach to finite projective planes. Following a review of the basics of projective geometry, the text examines finite planes, field planes, and coordinates in an arbitrary plane. Additional topics include central collineations and the little Desargues' property, the fundamental theorem, and examples of finite non-Desarguesian planes.Virtually no knowledge or sophistication on the part of the student is assumed, and every algebraic system that arises is defined and

  19. Finite element analysis of trabecular bone structures : a comparison of image-based meshing techniques

    NARCIS (Netherlands)

    Ulrich, D.; Rietbergen, van B.; Weinans, H.; Rüegsegger, P.

    1998-01-01

    In this study, we investigate if finite element (FE) analyses of human trabecular bone architecture based on 168 microm images can provide relevant information about the bone mechanical characteristics. Three human trabecular bone samples, one taken from the femoral head, one from the iliac crest,

  20. A least squares principle unifying finite element, finite difference and nodal methods for diffusion theory

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1987-01-01

    A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution established complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish for regular meshes a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)

  1. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    For the last 50 years of research in the quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on whether or not the null hypothesis is rejected. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and implausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to one based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to an NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power with an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
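
The three-value logic described in this record can be sketched numerically. The following is a minimal illustration, not the article's own procedure; the function name `ci_decision` is invented, and a normal (z-based) 95% interval is assumed in place of a t-based one:

```python
import math
import statistics

def ci_decision(diffs, minimal_effect, z=1.96):
    """Classify an effect using a 95% CI and a minimal substantial effect.

    Returns one of: 'probable presence', 'probable absence',
    'probabilistic undetermination'.
    """
    n = len(diffs)
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    lower, upper = mean - z * se, mean + z * se
    if lower > minimal_effect:
        return "probable presence"          # whole CI above the threshold
    if upper < minimal_effect:
        return "probable absence"           # whole CI below the threshold
    return "probabilistic undetermination"  # CI straddles the threshold

# Twenty paired differences clustered around 1.0; threshold 0.5
diffs = [0.9, 1.1, 1.0, 0.95, 1.05] * 4
print(ci_decision(diffs, minimal_effect=0.5))
```

The decision is driven entirely by where the interval sits relative to the minimal substantial effect, which is the range-hypothesis idea the abstract contrasts with point-null testing.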

  2. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.

  3. Electrocardiographic Abnormalities and QTc Interval in Patients Undergoing Hemodialysis.

    Directory of Open Access Journals (Sweden)

    Yuxin Nie

    Sudden cardiac death is one of the primary causes of mortality in chronic hemodialysis (HD) patients. A prolonged QTc interval is associated with an increased rate of sudden cardiac death. The aim of this article is to assess the abnormalities found in electrocardiograms (ECGs) and to explore factors that can influence the QTc interval. A total of 141 conventional HD patients were enrolled in this study. ECG tests were conducted on each patient before a single dialysis session and 15 minutes before the end of the dialysis session (at peak stress). Echocardiography tests were conducted before the dialysis session began. Blood samples were drawn by phlebotomy immediately before and after the dialysis session. Before dialysis, 93.62% of the patients were in sinus rhythm, and approximately 65% of the patients showed a prolonged QTc interval (i.e., a QTc interval above 440 ms in males and above 460 ms in females). A comparison of ECG parameters before dialysis and at peak stress showed increases in heart rate (77.45±11.92 vs. 80.38±14.65 bpm, p = 0.001) and QTc interval (460.05±24.53 ms vs. 470.93±24.92 ms, p<0.001). After dividing patients into two groups according to the QTc interval, lower pre-dialysis serum concentrations of potassium (K+), calcium (Ca2+), and phosphorus, a lower calcium-phosphorus product (Ca*P), and higher concentrations of plasma brain natriuretic peptide (BNP) were found in the group with prolonged QTc intervals. Patients in this group also had a larger left atrial diameter (LAD) and a thicker interventricular septum, and they tended to be older than patients in the other group. The patients were then divided into two groups according to ΔQTc (ΔQTc = QTc at peak stress − QTc pre-HD). When analyzing the patients whose QTc intervals were longer at peak stress than before HD, we found that they had higher concentrations of Ca2+ and phosphorus and lower concentrations of K+, ferritin, UA, and BNP. They were also more likely to be female. In addition, more cardiac construction
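
The prolonged-QTc criterion quoted in this record (above 440 ms in males, above 460 ms in females) is straightforward to apply once QTc has been computed. The abstract does not state which rate-correction formula was used, so the sketch below assumes Bazett's correction, QTc = QT/√RR with RR in seconds; the function names are invented:

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Rate-corrected QT interval (Bazett's formula), in milliseconds."""
    return qt_ms / math.sqrt(rr_s)

def is_prolonged(qtc_ms, sex):
    """Apply the thresholds quoted in the abstract: 440 ms (male), 460 ms (female)."""
    return qtc_ms > (440.0 if sex == "male" else 460.0)

qtc = qtc_bazett(qt_ms=400.0, rr_s=0.8)   # roughly 447 ms at 75 bpm
print(round(qtc, 1), is_prolonged(qtc, "male"))
```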

  4. Multivariate interval-censored survival data

    DEFF Research Database (Denmark)

    Hougaard, Philip

    2014-01-01

    Interval censoring means that an event time is only known to lie in an interval (L,R], with L the last examination time before the event, and R the first after. In the univariate case, parametric models are easily fitted, whereas for non-parametric models, the mass is placed on some intervals, de...

  5. Reference Intervals of Alpha-Fetoprotein and Carcinoembryonic Antigen in the Apparently Healthy Population.

    Science.gov (United States)

    Zhang, Gao-Ming; Guo, Xu-Xiao; Ma, Xiao-Bo; Zhang, Guo-Ming

    2016-12-12

    BACKGROUND The aim of this study was to calculate 95% reference intervals and double-sided limits for serum alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA) according to the CLSI EP28-A3 guideline. MATERIAL AND METHODS Serum AFP and CEA values were measured in samples from 26 000 healthy subjects in the Shuyang area receiving general health checkups. The 95% reference intervals and upper limits were calculated using MedCalc. RESULTS We provide continuous reference intervals from 20 to 90 years of age for AFP and CEA. The reference intervals were: AFP, 1.31-7.89 ng/ml (males) and 1.01-7.10 ng/ml (females); CEA, 0.51-4.86 ng/ml (males) and 0.35-3.45 ng/ml (females). AFP and CEA were significantly positively correlated with age in both males (r=0.196 and r=0.198) and females (r=0.121 and r=0.197). CONCLUSIONS Different races or populations and different detection systems may result in different reference intervals for AFP and CEA. Continuous, age-dependent reference intervals are more accurate than intervals based on age groups.
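
A 95% reference interval of this kind is conventionally the central 95% of the healthy-population distribution, i.e. the 2.5th and 97.5th percentiles, which CLSI EP28-A3 recommends estimating nonparametrically when enough samples are available. The sketch below is an illustration of that percentile computation, not the authors' actual MedCalc workflow, and the data are made up:

```python
import statistics

def reference_interval_95(values):
    """Nonparametric 2.5th/97.5th percentile reference limits.

    statistics.quantiles with n=40 yields cut points every 2.5%,
    so the first and last cut points are the two reference limits.
    """
    q = statistics.quantiles(values, n=40, method="inclusive")
    return q[0], q[-1]

# Illustrative data only (not the AFP/CEA measurements from the study)
values = list(range(1, 1001))
lo, hi = reference_interval_95(values)
print(lo, hi)
```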

  6. Ventricular Cycle Length Characteristics Estimative of Prolonged RR Interval during Atrial Fibrillation

    Science.gov (United States)

    CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN

    2014-01-01

    Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
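
The two scatterplot parameters in this record can be computed directly from the 13 most recent RR intervals T13…T1 (T1 the most recent). The abstract does not spell out the exact averaging, so the sketch below assumes RRv is the mean absolute second difference over T13…T2, and the function and variable names are invented:

```python
def scatterplot_params(rr):
    """rr: list of the 13 most recent RR intervals, rr[0]=T13 ... rr[12]=T1.

    Returns (RRv, Tm_minus_T1):
      RRv         - mean absolute second difference over T13..T2
      Tm_minus_T1 - mean of T13..T2 minus the most recent interval T1
    """
    if len(rr) != 13:
        raise ValueError("need exactly 13 RR intervals")
    older = rr[:12]                                   # T13 .. T2
    d1 = [b - a for a, b in zip(older, older[1:])]    # 11 first differences
    d2 = [b - a for a, b in zip(d1, d1[1:])]          # 10 second differences
    rrv = sum(abs(v) for v in d2) / len(d2)
    tm = sum(older) / len(older)
    return rrv, tm - rr[-1]

# Perfectly regular rhythm: both parameters are zero
print(scatterplot_params([0.8] * 13))
```

A prolonged interval would then be predicted when both parameters fall inside the per-patient ranges tuned on the first hour of data, as the abstract describes.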

  7. Zero and finite field μSR spin glass Ag:Mn

    International Nuclear Information System (INIS)

    Brown, J.A.; Heffner, R.H.; Leon, M.; Olsen, C.E.; Schillaci, M.E.; Dodds, S.A.; Estle, T.L.; MacLaughlin, D.E.

    1981-01-01

    In this paper we present μSR data taken in both zero and finite fields for a Ag:Mn (1.6 at%) spin glass sample. The data allow us to determine, in the context of a particular model, the fluctuation rate of the Mn ions as a function of temperature. This rate decreases smoothly but very rapidly near the glass temperature, Tsub(g). The corresponding behavior in Cu:Mn is more gradual. (orig.)

  8. Hypotensive Response Magnitude and Duration in Hypertensives: Continuous and Interval Exercise

    Directory of Open Access Journals (Sweden)

    Raphael Santos Teodoro de Carvalho

    2015-03-01

    Background: Although exercise training is known to promote post-exercise hypotension, there is currently no consistent evidence about the effects of manipulating its various components (intensity, duration, rest periods, types of exercise, training methods) on the magnitude and duration of the hypotensive response. Objective: To compare the effect of continuous and interval exercise on the magnitude and duration of the hypotensive response in hypertensive patients by using ambulatory blood pressure monitoring (ABPM). Methods: The sample consisted of 20 elderly hypertensives. Each participant underwent three ABPM sessions: one control ABPM, without exercise; one ABPM after continuous exercise; and one ABPM after interval exercise. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP), heart rate (HR) and double product (DP) were monitored to check post-exercise hypotension and for comparison between the ABPM sessions. Results: ABPM after continuous exercise and after interval exercise showed post-exercise hypotension and a significant reduction (p < 0.05) in SBP, DBP, MAP and DP for 20 hours as compared with the control ABPM. Comparing ABPM after continuous exercise with ABPM after interval exercise, a significantly greater reduction (p < 0.05) in SBP, DBP, MAP and DP was observed after the latter. Conclusion: Continuous and interval exercise training promote post-exercise hypotension with reductions in SBP, DBP, MAP and DP in the 20 hours following exercise. Interval exercise training causes greater post-exercise hypotension and lower cardiovascular overload as compared with continuous exercise.

  9. Characterization of finite spaces having dispersion points

    International Nuclear Information System (INIS)

    Al-Bsoul, A. T

    1997-01-01

    In this paper we shall characterize the finite spaces having dispersion points. Also, we prove that the dispersion point of a finite space with a dispersion point is fixed under all non-constant continuous functions, which answers affirmatively, for finite spaces, the question raised by J. Cobb and W. Voxman in 1980. Some open problems are given. (author). 16 refs

  10. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    International Nuclear Information System (INIS)

    Ellefsen, Karl J.; Smith, David B.

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples. - Highlights: • We evaluate a clustering procedure by applying it to geochemical data. • The procedure generates a hierarchy of clusters. • Different levels of the hierarchy show geochemical processes at different spatial scales. • The clustering method is Bayesian finite mixture modeling. • Model parameters are estimated with Hamiltonian Monte Carlo sampling.
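
For intuition about the two-cluster partitioning step described in this record, a two-component mixture can be fitted in miniature. The sketch below deliberately swaps the authors' Bayesian model and Hamiltonian Monte Carlo sampling for a plain EM fit of a two-component one-dimensional Gaussian mixture; it is a simplified stand-in, and all names and data are invented:

```python
import math
import statistics

def em_gmm2(x, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM (not the paper's HMC)."""
    mu = [min(x), max(x)]                       # deterministic initialisation
    var = [statistics.pvariance(x) or 1.0] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r0 = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            r0.append(p[0] / (p[0] + p[1]))
        # M-step: re-estimate weights, means and variances
        n0 = sum(r0)
        n1 = len(x) - n0
        w = [n0 / len(x), n1 / len(x)]
        mu = [sum(r * xi for r, xi in zip(r0, x)) / n0,
              sum((1 - r) * xi for r, xi in zip(r0, x)) / n1]
        var = [max(sum(r * (xi - mu[0]) ** 2 for r, xi in zip(r0, x)) / n0, 1e-6),
               max(sum((1 - r) * (xi - mu[1]) ** 2 for r, xi in zip(r0, x)) / n1, 1e-6)]
    return w, mu, var

# Two well-separated 'geochemical' concentration clusters (illustrative numbers)
w, mu, var = em_gmm2([0.0, 0.2, 0.4, 10.0, 10.2, 10.4])
print([round(m, 2) for m in mu])
```

Repeating such a two-component split within each resulting cluster is what generates the hierarchy the abstract describes.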

  11. Percolation through voids around overlapping spheres: A dynamically based finite-size scaling analysis

    Science.gov (United States)

    Priour, D. J.

    2014-01-01

    The percolation threshold for flow or conduction through voids surrounding randomly placed spheres is calculated. With large-scale Monte Carlo simulations, we give a rigorous continuum treatment to the geometry of the impenetrable spheres and the spaces between them. To properly exploit finite-size scaling, we examine multiple systems of differing sizes, with suitable averaging over disorder, and extrapolate to the thermodynamic limit. An order parameter based on the statistical sampling of stochastically driven dynamical excursions and amenable to finite-size scaling analysis is defined, calculated for various system sizes, and used to determine the critical volume fraction ϕc=0.0317±0.0004 and the correlation length exponent ν =0.92±0.05.

  12. Finite element methods a practical guide

    CERN Document Server

    Whiteley, Jonathan

    2017-01-01

    This book presents practical applications of the finite element method to general differential equations. The underlying strategy of deriving the finite element solution is introduced using linear ordinary differential equations, thus allowing the basic concepts of the finite element solution to be introduced without being obscured by the additional mathematical detail required when applying this technique to partial differential equations. The author generalizes the presented approach to partial differential equations which include nonlinearities. The book also includes variations of the finite element method such as different classes of meshes and basis functions. Practical application of the theory is emphasised, with the development of all concepts leading ultimately to a description of their computational implementation, illustrated using Matlab functions. The target audience primarily comprises applied researchers and practitioners in engineering, but the book may also be beneficial for graduate students.

  13. Finite Volumes for Complex Applications VII

    CERN Document Server

    Ohlberger, Mario; Rohde, Christian

    2014-01-01

    The methods considered in the 7th conference on "Finite Volumes for Complex Applications" (Berlin, June 2014) have properties which offer distinct advantages for a number of applications. The second volume of the proceedings covers reviewed contributions reporting successful applications in the fields of fluid dynamics, magnetohydrodynamics, structural analysis, nuclear physics, semiconductor theory and other topics. The finite volume method in its various forms is a space discretization technique for partial differential equations based on the fundamental physical principle of conservation. Recent decades have brought significant success in the theoretical understanding of the method. Many finite volume methods preserve further qualitative or asymptotic properties, including maximum principles, dissipativity, monotone decay of free energy, and asymptotic stability. Due to these properties, finite volume methods belong to the wider class of compatible discretization methods, which preserve qualitative propert...

  14. Discretization of convection-diffusion equations with finite-difference scheme derived from simplified analytical solutions

    International Nuclear Information System (INIS)

    Kriventsev, Vladimir

    2000-09-01

    Most thermal-hydraulic processes in nuclear engineering can be described by general convection-diffusion equations that can often be simulated numerically with the finite-difference method (FDM). An effective scheme for the finite-difference discretization of such equations is presented in this report. The derivation of this scheme is based on analytical solutions of a simplified one-dimensional equation written for every control volume of the finite-difference mesh. These analytical solutions are constructed using linearized representations of both the diffusion coefficient and the source term. As a result, the Efficient Finite-Differencing (EFD) scheme makes it possible to significantly improve the accuracy of the numerical method even when using mesh systems with fewer grid nodes, which, in turn, allows numerical simulations to be sped up. EFD has been carefully verified on a series of sample problems for which either analytical or very precise numerical solutions can be found. EFD has been compared with other popular FDM schemes, including novel, accurate (as well as sophisticated) methods. Among the methods compared were the well-known central difference scheme, the upwind scheme, and the exponential differencing and hybrid schemes of Spalding. Also compared were newly developed finite-difference schemes, such as the quadratic upstream (QUICK) scheme of Leonard, the locally analytic differencing (LOAD) scheme of Wong and Raithby, the flux-spline scheme proposed by Varejago and Patankar, and the latest LENS discretization of Sakai. Detailed results of this comparison are given in this report. These tests have shown the high efficiency of the EFD scheme. For most of the sample problems considered, EFD demonstrated numerical error orders of magnitude lower than that of other discretization methods. In other words, EFD predicted the numerical solution with the same given numerical error but using far fewer grid nodes. In this report, the detailed
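
Schemes of the exponential/locally-analytic family this record compares are built around the exact solution of the linearized steady one-dimensional convection-diffusion problem between two grid nodes, phi(xi) = (exp(P*xi) - 1)/(exp(P) - 1) with Peclet number P and xi = x/L. A small sketch of that exact profile, against which such schemes are commonly verified; the function name and test values are illustrative, not from the report:

```python
import math

def exact_profile(peclet, xi):
    """Exact steady 1-D convection-diffusion solution on [0, 1].

    Boundary conditions phi(0)=0, phi(1)=1; xi = x/L.
    As |P| -> 0 the profile tends to the pure-diffusion straight line.
    """
    if abs(peclet) < 1e-12:
        return xi                       # diffusion-only limit
    return math.expm1(peclet * xi) / math.expm1(peclet)

# Strong convection skews the profile toward the downstream boundary
for p in (0.0, 1.0, 10.0):
    print(p, round(exact_profile(p, 0.5), 4))
```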

  15. Statistics a biomedical introduction

    CERN Document Server

    Brown, Byron Wm

    2009-01-01

    CHAPTER 1: Introduction 1 CHAPTER 2: Elementary Rules of Probability 13 CHAPTER 3: Populations, Samples, and the Distribution of the Sample Mean 37 1. Populations and Distributions 38 2. Sampling from Finite Populations 64 3. The Distribution of the Sample Mean 72 CHAPTER 4: Analysis of Matched Pairs Using Sample Means 85 1. A Confidence Interval for the Treatment Effect 86 2. A Hypothesis Test for the Treatment Effect 96 3. Determining the Sample Size 102 CHAPTER 5: Analysis of the Two-Sample Location Problem Using Sample Means 109 1. A Confidence Interval for the Diffe

  16. A multigrid solution method for mixed hybrid finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Schmid, W. [Universitaet Augsburg (Germany)

    1996-12-31

    We consider the multigrid solution of linear equations arising from the discretization of elliptic second-order boundary value problems of the form by mixed hybrid finite elements. Using the equivalence of mixed hybrid finite elements and non-conforming nodal finite elements, we construct a multigrid scheme for the corresponding non-conforming finite elements and, by this equivalence, for the mixed hybrid finite elements, following guidelines from Arbogast/Chen. For a rectangular triangulation of the computational domain, these non-conforming schemes are the so-called nodal finite elements. We explicitly construct prolongation and restriction operators for this type of non-conforming finite element. We discuss the use of plain multigrid and the multilevel-preconditioned cg-method and compare their efficiency in numerical tests.

  17. Moving mesh finite element method for finite time extinction of distributed parameter systems with positive exponential feedback

    International Nuclear Information System (INIS)

    Garnadi, A.D.

    1997-01-01

    In distributed parameter systems with exponential feedback, global existence of solutions does not always hold. For some positive initial values, there exists a finite time T at which the solution goes to infinity, i.e., finite-time extinction or blow-up. Here we present a numerical solution using a moving mesh finite element method to solve distributed parameter systems with exponential feedback close to the blow-up time. The numerical behavior of the mesh close to the time of extinction is the prime interest of this study.

  18. Finite Markov processes and their applications

    CERN Document Server

    Iosifescu, Marius

    2007-01-01

    A self-contained treatment of finite Markov chains and processes, this text covers both theory and applications. Author Marius Iosifescu, vice president of the Romanian Academy and director of its Center for Mathematical Statistics, begins with a review of relevant aspects of probability theory and linear algebra. Experienced readers may start with the second chapter, a treatment of fundamental concepts of homogeneous finite Markov chain theory that offers examples of applicable models.The text advances to studies of two basic types of homogeneous finite Markov chains: absorbing and ergodic ch

  19. Finite-volume scheme for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]

    2016-02-01

    In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

  20. Finite element analysis of piezoelectric materials

    International Nuclear Information System (INIS)

    Lowrie, F.; Stewart, M.; Cain, M.; Gee, M.

    1999-01-01

    This guide is intended to help people wanting to do finite element analysis of piezoelectric materials by answering some of the questions that are peculiar to piezoelectric materials. The document is not intended as a complete beginner's guide to finite element analysis in general, as this is better dealt with by the individual software producers. The guide is based around the commercial package ANSYS, as this is a popular package amongst piezoelectric material users; however, much of the information will still be useful to users of other finite element codes. (author)

  1. Understanding compressive deformation behavior of porous Ti using finite element analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Sandipan; Khutia, Niloy [Department of Aerospace Engineering and Applied Mechanics, Indian Institute of Engineering Science and Technology, Shibpur (India); Das, Debdulal [Department of Metallurgy and Materials Engineering, Indian Institute of Engineering Science and Technology, Shibpur (India); Das, Mitun, E-mail: mitun@cgcri.res.in [Bioceramics and Coating Division, CSIR-Central Glass and Ceramic Research Institute, Kolkata (India); Balla, Vamsi Krishna [Bioceramics and Coating Division, CSIR-Central Glass and Ceramic Research Institute, Kolkata (India); Bandyopadhyay, Amit [W. M. Keck Biomedical Materials Research Laboratory, School of Mechanical and Materials Engineering, Washington State University, Pullman, WA 99164 (United States); Chowdhury, Amit Roy, E-mail: arcbesu@gmail.com [Department of Aerospace Engineering and Applied Mechanics, Indian Institute of Engineering Science and Technology, Shibpur (India)

    2016-07-01

    In the present study, porous commercially pure (CP) Ti samples with different volume fractions of porosity were fabricated using a commercial additive manufacturing technique, namely laser engineered net shaping (LENS™). The mechanical behavior of the solid and porous samples was evaluated at room temperature under quasi-static compressive loading. Fracture surfaces of the failed samples were analyzed to determine the failure modes. Finite element (FE) analyses using a representative volume element (RVE) model and a micro-computed tomography (micro-CT) based model have been performed to understand the deformation behavior of the laser-deposited solid and porous CP-Ti samples. In vitro cell culture on the laser-processed porous CP-Ti surfaces showed normal cell proliferation with time, and confirmed the non-toxic nature of these samples. - Highlights: • Porous CP-Ti samples fabricated using additive manufacturing technique • Compressive deformation behavior of porous samples closely matches micro-CT and RVE based analysis • In vitro studies showed better cell proliferation with time on porous CP-Ti surfaces.

  2. Understanding compressive deformation behavior of porous Ti using finite element analysis

    International Nuclear Information System (INIS)

    Roy, Sandipan; Khutia, Niloy; Das, Debdulal; Das, Mitun; Balla, Vamsi Krishna; Bandyopadhyay, Amit; Chowdhury, Amit Roy

    2016-01-01

    In the present study, porous commercially pure (CP) Ti samples with different volume fractions of porosity were fabricated using a commercial additive manufacturing technique, namely laser engineered net shaping (LENS™). The mechanical behavior of the solid and porous samples was evaluated at room temperature under quasi-static compressive loading. Fracture surfaces of the failed samples were analyzed to determine the failure modes. Finite element (FE) analyses using a representative volume element (RVE) model and a micro-computed tomography (micro-CT) based model have been performed to understand the deformation behavior of the laser-deposited solid and porous CP-Ti samples. In vitro cell culture on the laser-processed porous CP-Ti surfaces showed normal cell proliferation with time, and confirmed the non-toxic nature of these samples. - Highlights: • Porous CP-Ti samples fabricated using additive manufacturing technique • Compressive deformation behavior of porous samples closely matches micro-CT and RVE based analysis • In vitro studies showed better cell proliferation with time on porous CP-Ti surfaces

  3. From Finite Time to Finite Physical Dimensions Thermodynamics: The Carnot Engine and Onsager's Relations Revisited

    Science.gov (United States)

    Feidt, Michel; Costea, Monica

    2018-04-01

    Many works have been devoted to finite time thermodynamics since the Curzon and Ahlborn [1] contribution, which is generally considered its origin. Nevertheless, earlier works in this domain have since come to light [2], [3], and recently, results of an attempt to correlate Finite Time Thermodynamics with Linear Irreversible Thermodynamics according to Onsager's theory were reported [4]. The aim of the present paper is to extend and improve the approach to the thermodynamic optimization of generic objective functions of a Carnot engine in the linear response regime presented in [4]. The case study of the Carnot engine is revisited under the steady-state hypothesis, with the non-adiabaticity of the system considered and heat loss accounted for by an overall heat leak between the engine's heat reservoirs. The optimization is focused on the main objective functions connected to engineering conditions, namely maximum efficiency or power output, apart from the one relative to entropy, which is more fundamental. Results given in reference [4] relative to maximum power output and minimum entropy production as objective functions are reconsidered and clarified, and the change from finite time to finite physical dimensions is shown to be effected through the heat flow rate at the source. Our modeling has led to new results for the optimization of the Carnot engine and shows that the primary interest for an engineer is mainly connected to what we have called Finite Physical Dimensions Optimal Thermodynamics.
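
The Curzon-Ahlborn result [1] from which this line of work descends gives the efficiency at maximum power of an endoreversible Carnot engine as 1 - sqrt(Tc/Th), compared with the reversible Carnot limit 1 - Tc/Th. A quick illustration of the two formulas; the numerical temperatures are arbitrary:

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible (Carnot) efficiency upper bound."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency at maximum power output of an endoreversible engine."""
    return 1.0 - math.sqrt(t_cold / t_hot)

t_hot, t_cold = 400.0, 100.0   # kelvin, arbitrary example values
print(carnot_efficiency(t_hot, t_cold),
      curzon_ahlborn_efficiency(t_hot, t_cold))
```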

  4. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse-response (FIR) filter structure. The method for correcting the interpolation compensation filter coefficients is deduced. A 4 GS/s, two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
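
The core operation behind such a compensation filter is estimating what a channel would have sampled at a fractional offset from its actual sampling instant. The sketch below does this with a Catmull-Rom cubic, one common cubic interpolation kernel, in plain Python rather than the paper's FIR filter design; all names are invented:

```python
def cubic_interp(samples, t):
    """Catmull-Rom cubic interpolation of uniformly spaced samples at
    fractional index t (needs one sample of margin on each side)."""
    i = int(t)
    f = t - i
    p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
    return 0.5 * (2 * p1
                  + (-p0 + p2) * f
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * f * f
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * f * f * f)

# A channel whose clock is late by 0.3 samples: estimate the value that
# should have landed on the nominal grid. For a linear test signal
# (exactly representable by a cubic) the correction is exact.
signal = [2.0 * t for t in range(6)]        # x(t) = 2t sampled at t = 0..5
print(cubic_interp(signal, 2.3))
```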

  5. Establishment of a paediatric age-related reference interval for the measurement of urinary total fractionated metanephrines.

    LENUS (Irish Health Repository)

    Griffin, Alison

    2012-02-01

    INTRODUCTION: Normetanephrine and metanephrine are intermediate metabolites of noradrenaline and adrenaline metabolism. To assess whether normetanephrine and metanephrine analysis may aid in the diagnosis of neuroblastoma, a reference interval for these metabolites must first be established. AIM: The overall aim of this study was to establish a paediatric age-related reference interval for the measurement of total fractionated metanephrines. METHODS: A total of 267 urine samples were analysed following acid hydrolysis, which releases the metanephrines from their sulphate-bound metabolites. The samples were analysed using reverse-phase high-performance liquid chromatography with electrochemical detection on a Gilson automated sequential trace enrichment of dialysate sample system. RESULTS: Data were analysed using Minitab Release version 14. Outliers were removed using the Dixon/Reed one-third rule. Partitioning of the age groups was achieved using Harris and Boyd's standard normal deviate test. Non-parametric analysis of the data was performed, followed by the establishment of the 2.5th and the 97.5th reference limits. CONCLUSIONS: The established reference intervals are described in Table 2.
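
The Dixon/Reed one-third rule mentioned in this record rejects an extreme observation when its gap to the nearest remaining value exceeds one third of the whole range. A minimal sketch of that rule (function name invented, not the study's Minitab workflow):

```python
def dixon_reed_trim(values):
    """Iteratively drop extreme values whose gap to the next value
    exceeds one third of the current range (Dixon/Reed rule)."""
    data = sorted(values)
    changed = True
    while changed and len(data) > 2:
        changed = False
        spread = data[-1] - data[0]
        if data[-1] - data[-2] > spread / 3:
            data.pop()          # reject the high extreme
            changed = True
        elif data[1] - data[0] > spread / 3:
            data.pop(0)         # reject the low extreme
            changed = True
    return data

print(dixon_reed_trim([1, 2, 3, 4, 5, 100]))   # the 100 is rejected
```

After trimming, the 2.5th and 97.5th percentiles of each age partition give the nonparametric reference limits the abstract describes.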

  6. Finite-size effects on band structure of CdS nanocrystallites studied by positron annihilation

    International Nuclear Information System (INIS)

    Kar, Soumitra; Biswas, Subhajit; Chaudhuri, Subhadra; Nambissan, P.M.G.

    2005-01-01

    Quantum confinement effects in nanocrystalline CdS were studied using positrons as spectroscopic probes to explore the defect characteristics. The lifetime of positrons annihilating at the vacancy clusters on nanocrystalline grain surfaces increased remarkably consequent to the onset of such finite-size effects. The Doppler broadened line shape was also found to reflect rather sensitively such distinct changes in the electron momentum redistribution scanned by the positrons, owing to the widening of the band gap. The nanocrystalline sizes of the samples used were confirmed from x-ray diffraction and high resolution transmission electron microscopy and the optical absorption results supported the quantum size effects. Positron annihilation results indicated distinct qualitative changes between CdS nanorods and the bulk sample, notwithstanding the identical x-ray diffraction pattern and close resemblance of the optical absorption spectra. The results are promising in the event of positron annihilation being proved to be a very successful tool for the study of such finite-size effects in semiconductor nanoparticles

  7. On the problems of PPS sampling in multi-character surveys ...

    African Journals Online (AJOL)

    This paper, which is on the problems of PPS sampling in multi-character surveys, compares the efficiency of some estimators used in PPSWR sampling for multiple characteristics. From a superpopulation model, we computed the expected variances of the different estimators for each of the first two finite populations ...

  8. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    Science.gov (United States)

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
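The nonparametric bootstrap step used for the 90% confidence interval of a reference limit can be sketched as follows; the synthetic "sodium" sample merely stands in for the real cheetah data, and n = 66 matches the study's sample size:

```python
import numpy as np

def bootstrap_ci_of_limit(values, q, n_boot=5000, ci=0.90, seed=1):
    """Nonparametric bootstrap confidence interval for the q-th percentile
    (a reference limit): resample with replacement, recompute the limit,
    and take percentiles of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, dtype=float)
    stats = np.array([
        np.percentile(rng.choice(v, size=v.size, replace=True), q)
        for _ in range(n_boot)
    ])
    return (np.percentile(stats, 100 * (1 - ci) / 2),
            np.percentile(stats, 100 * (1 + ci) / 2))

rng = np.random.default_rng(0)
sodium = rng.normal(147, 9, 66)             # synthetic stand-in, n = 66
lo_ci = bootstrap_ci_of_limit(sodium, 2.5)  # CI around the lower limit
hi_ci = bootstrap_ci_of_limit(sodium, 97.5) # CI around the upper limit
print(lo_ci, hi_ci)
```

With only 66 animals the bootstrap intervals are wide, which is exactly why the study reports them alongside the reference limits.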

  9. Finite temperature field theory

    CERN Document Server

    Das, Ashok

    1997-01-01

    This book discusses all three formalisms used in the study of finite temperature field theory, namely the imaginary time formalism, the closed time formalism and thermofield dynamics. Applications of the formalisms are worked out in detail. Gauge field theories and symmetry restoration at finite temperature are among the practical examples discussed in depth. The question of gauge dependence of the effective potential and the Nielsen identities are explained. The nonrestoration of some symmetries at high temperature (such as supersymmetry) and theories on nonsimply connected space-times are al

  10. Finite Size Scaling of Perceptron

    OpenAIRE

    Korutcheva, Elka; Tonchev, N.

    2000-01-01

    We study the first-order transition in the model of a simple perceptron with continuous weights and large, but finite, values of the inputs. Making the analogy with the usual finite-size physical systems, we calculate the shift and the rounding exponents near the transition point. In the case of a general perceptron with a larger variety of inputs, the analysis only gives bounds for the exponents.

  11. Finite p′-nilpotent groups. I

    Directory of Open Access Journals (Sweden)

    S. Srinivasan

    1987-01-01

    Full Text Available In this paper we consider finite p′-nilpotent groups, which are a generalization of finite p-nilpotent groups. This generalization leads us to consider the various special subgroups, such as the Frattini subgroup, the Fitting subgroup, and the hypercenter, in this generalized setting. The paper also considers the conditions under which a product of p′-nilpotent groups will be a p′-nilpotent group.

  12. Finite automata over magmas: models and some applications in Cryptography

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Skobelev

    2018-05-01

    Full Text Available In the paper the families of finite semi-automata and reversible finite Mealy and Moore automata over finite magmas are defined and analyzed in detail. On the basis of these models it is established that the set of finite quasigroups is the most acceptable subset of the set of finite magmas for solving model problems in Cryptography, such as the design of iterated hash functions and stream ciphers. The defined families of finite semi-automata and reversible finite automata over finite $T$-quasigroups are investigated in detail. It is established that in this case the time and space complexity of simulating one instant of automaton time can be much lower than in the general case.

  13. 'Aussie normals': an a priori study to develop clinical chemistry reference intervals in a healthy Australian population.

    Science.gov (United States)

    Koerbin, G; Cavanaugh, J A; Potter, J M; Abhayaratna, W P; West, N P; Glasgow, N; Hawkins, C; Armbruster, D; Oakman, C; Hickman, P E

    2015-02-01

    Development of reference intervals is difficult, time consuming, expensive and beyond the scope of most laboratories. The Aussie Normals study is a direct a priori study to determine reference intervals in healthy Australian adults. All volunteers completed a health and lifestyle questionnaire and exclusion was based on conditions such as pregnancy, diabetes, renal or cardiovascular disease. Up to 91 biochemical analyses were undertaken on a variety of analytical platforms using serum samples collected from 1856 volunteers. We report on our findings for 40 of these analytes and two calculated parameters performed on the Abbott ARCHITECT ci8200/ci16200 analysers. Not all samples were analysed for all assays due to volume requirements or assay/instrument availability. Results with elevated interference indices and those deemed unsuitable after clinical evaluation were removed from the database. Reference intervals were partitioned based on the method of Harris and Boyd into three scenarios: combined gender, males and females, and age and gender. We have performed a detailed reference interval study on a healthy Australian population considering the effects of sex, age and body mass. These reference intervals may be adapted to other manufacturers' analytical methods using method transference.
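The Harris and Boyd partitioning decision mentioned above is a standard normal deviate test: partition two subgroups when the deviate of their means exceeds a critical value that grows with sample size. A minimal sketch, with invented male/female summary statistics purely for illustration:

```python
import math

def harris_boyd_partition(mean1, sd1, n1, mean2, sd2, n2):
    """Harris & Boyd standard normal deviate test: should two subgroups
    receive separate reference intervals?  Returns (partition?, z, z_crit)."""
    z = abs(mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z_crit = 3 * math.sqrt((n1 + n2) / 240)   # Harris & Boyd critical value
    return z > z_crit, z, z_crit

# Hypothetical male/female creatinine summaries (illustrative numbers only).
partition, z, z_crit = harris_boyd_partition(84.0, 12.0, 900, 68.0, 10.0, 950)
print(partition, round(z, 1), round(z_crit, 1))
```

Here the sex difference dwarfs the critical deviate, so sex-partitioned intervals would be reported, which mirrors how the study splits its three partitioning scenarios.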

  14. Domain decomposition methods for mortar finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Widlund, O.

    1996-12-31

    In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

  15. Factoring polynomials over arbitrary finite fields

    NARCIS (Netherlands)

    Lange, T.; Winterhof, A.

    2000-01-01

    We analyse an extension of Shoup's (Inform. Process. Lett. 33 (1990) 261–267) deterministic algorithm for factoring polynomials over finite prime fields to arbitrary finite fields. In particular, we prove the existence of a deterministic algorithm which completely factors all monic polynomials of

  16. The unified method: III. Nonlinearizable problems on the interval

    International Nuclear Information System (INIS)

    Lenells, J; Fokas, A S

    2012-01-01

    Boundary value problems for integrable nonlinear evolution PDEs formulated on the finite interval can be analyzed by the unified method introduced by one of the authors and extensively used in the literature. The implementation of this general method to this particular class of problems yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem formulated in the complex k-plane (the Fourier plane), which has a jump matrix with explicit (x, t)-dependence involving six scalar functions of k, called the spectral functions. Two of these functions depend on the initial data, whereas the other four depend on all boundary values. The most difficult step of the new method is the characterization of the latter four spectral functions in terms of the given initial and boundary data, i.e. the elimination of the unknown boundary values. Here, we present an effective characterization of the spectral functions in terms of the given initial and boundary data. We present two different characterizations of this problem. One is based on the analysis of the so-called global relation, on the analysis of the equations obtained from the global relation via certain transformations leaving the dispersion relation of the associated linearized PDE invariant and on the computation of the large k asymptotics of the eigenfunctions defining the relevant spectral functions. The other is based on the analysis of the global relation and on the introduction of the so-called Gelfand–Levitan–Marchenko representations of the eigenfunctions defining the relevant spectral functions. We also show that these two different characterizations are equivalent and that in the limit when the length of the interval tends to infinity, the relevant formulas reduce to the analogous formulas obtained recently for the case of boundary value problems formulated on the half-line. (paper)

  17. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    Science.gov (United States)

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partitioning of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas others can be used as assay system-specific reference intervals in China.
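The parametric derivation can be sketched with a plain Box-Cox transform (the study uses a modified Box-Cox formula, which is not reproduced here): transform the data toward normality, take mean ± 1.96 SD, and back-transform the limits. The skewed synthetic "analyte" data are invented for the example:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def boxcox_reference_interval(values):
    """Parametric 95% reference interval via a plain Box-Cox transform:
    transform to near-normality, take mean +/- 1.96 SD in the transformed
    scale, then back-transform the two limits."""
    v = np.asarray(values, dtype=float)   # must be strictly positive
    y, lam = stats.boxcox(v)              # lam estimated by maximum likelihood
    m, s = y.mean(), y.std(ddof=1)
    return inv_boxcox(m - 1.96 * s, lam), inv_boxcox(m + 1.96 * s, lam)

rng = np.random.default_rng(0)
analyte = rng.lognormal(mean=4.0, sigma=0.25, size=3000)  # skewed synthetic data
lo, hi = boxcox_reference_interval(analyte)
print(round(float(lo), 1), round(float(hi), 1))
```

For lognormal data the fitted lambda is near zero (a log transform), so the back-transformed limits approximate the true 2.5th and 97.5th percentiles, exp(4 ± 1.96 × 0.25).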

  18. The finite-difference and finite-element modeling of seismic wave propagation and earthquake motion

    International Nuclear Information System (INIS)

    Moczo, P.; Kristek, J.; Pazak, P.; Balazovjech, M.; Moczo, P.; Kristek, J.; Galis, M.

    2007-01-01

    Numerical modeling of seismic wave propagation and earthquake motion is an irreplaceable tool in the investigation of the Earth's structure, processes in the Earth, and particularly earthquake phenomena. Among various numerical methods, the finite-difference method is the dominant method in the modeling of earthquake motion. Moreover, it is becoming more important in seismic exploration and structural modeling. At the same time we are convinced that the best time of the finite-difference method in seismology is in the future. This monograph provides a tutorial and detailed introduction to the application of the finite-difference (FD), finite-element (FE), and hybrid FD-FE methods to the modeling of seismic wave propagation and earthquake motion. The text does not cover all topics and aspects of the methods. We focus on those to which we have contributed. We present alternative formulations of the equation of motion for a smooth elastic continuum. We then develop alternative formulations for a canonical problem with a welded material interface and free surface. We continue with a model of an earthquake source. We complete the general theoretical introduction with a chapter on the constitutive laws for elastic and viscoelastic media, and a brief review of strong formulations of the equation of motion. What follows is a block of chapters on the finite-difference and finite-element methods. We develop FD targets for the free surface and welded material interface. We then present various FD schemes for a smooth continuum, free surface, and welded interface. We focus on the staggered-grid and mainly optimally-accurate FD schemes. We also present alternative formulations of the FE method. We include the FD and FE implementations of the traction-at-split-nodes method for simulation of dynamic rupture propagation. The FD modeling is applied to the model of the deep sedimentary Grenoble basin, France. The FD and FE methods are combined in the hybrid FD-FE method. The hybrid

  19. Probabilistic finite element modeling of waste rollover

    International Nuclear Information System (INIS)

    Khaleel, M.A.; Cofer, W.F.; Al-fouqaha, A.A.

    1995-09-01

    Stratification of the wastes in many Hanford storage tanks has resulted in sludge layers which are capable of retaining gases formed by chemical and/or radiolytic reactions. As the gas is produced, the mechanisms of gas storage evolve until the resulting buoyancy in the sludge leads to instability, at which point the sludge 'rolls over' and a significant volume of gas is suddenly released. Because the releases may contain flammable gases, these episodes of release are potentially hazardous. Mitigation techniques are desirable for more controlled releases at more frequent intervals. To aid the mitigation efforts, a methodology for predicting sludge rollover at specific times is desired. This methodology would then provide a rational basis for the development of a schedule for the mitigation procedures. In addition, a knowledge of the sensitivity of the sludge rollovers to various physical and chemical properties within the tanks would provide direction for efforts to reduce the frequency and severity of these events. In this report, the use of probabilistic finite element analyses for computing the probability of rollover and the sensitivity of rollover probability to various parameters is described
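The probabilistic idea can be illustrated with a toy Monte Carlo limit-state calculation. Everything below is an assumption for illustration: the limit state (rollover when buoyancy-induced stress exceeds sludge strength) and both distributions are invented, not the report's actual probabilistic FE model:

```python
import numpy as np

def rollover_probability(n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(rollover) for a toy limit state:
    rollover occurs when buoyancy-induced stress exceeds sludge strength.
    Distributions and limit state are illustrative assumptions only."""
    rng = np.random.default_rng(seed)
    strength = rng.lognormal(mean=np.log(200.0), sigma=0.25, size=n_samples)  # Pa
    stress = rng.normal(loc=150.0, scale=40.0, size=n_samples)                # Pa
    return float(np.mean(stress > strength))

print(rollover_probability())
```

A probabilistic FE analysis does the same thing with the stress coming from a finite element solve per sample (or from a response surface), and sensitivities follow by perturbing each input distribution in turn.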

  20. Finite element analysis theory and application with ANSYS

    CERN Document Server

    Moaveni, Saeed

    2015-01-01

    For courses in Finite Element Analysis, offered in departments of Mechanical or Civil and Environmental Engineering. While many good textbooks cover the theory of finite element modeling, Finite Element Analysis: Theory and Application with ANSYS is the only text available that incorporates ANSYS as an integral part of its content. Moaveni presents the theory of finite element analysis, explores its application as a design/modeling tool, and explains in detail how to use ANSYS intelligently and effectively. Teaching and Learning Experience This program will provide a better teaching and learning experience, for you and your students. It will help: *Present the Theory of Finite Element Analysis: The presentation of theoretical aspects of finite element analysis is carefully designed not to overwhelm students. *Explain How to Use ANSYS Effectively: ANSYS is incorporated as an integral part of the content throughout the book. *Explore How to Use FEA as a Design/Modeling Tool: Open-ended design problems help stude...

  1. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced

  2. ∗-supplemented subgroups of finite groups

    Indian Academy of Sciences (India)

    A subgroup H of a group G is said to be M∗-supplemented in G if ... normal subgroups and determined the structure of finite groups by using some ...

  3. Why do probabilistic finite element analysis ?

    CERN Document Server

    Thacker, Ben H

    2008-01-01

    The intention of this book is to provide an introduction to performing probabilistic finite element analysis. As a short guideline, the objective is to inform the reader of the use, benefits and issues associated with performing probabilistic finite element analysis without excessive theory or mathematical detail.

  4. Symbolic computation with finite biquandles

    OpenAIRE

    Creel, Conrad; Nelson, Sam

    2007-01-01

    A method of computing a basis for the second Yang-Baxter cohomology of a finite biquandle with coefficients in Q and Z_p from a matrix presentation of the finite biquandle is described. We also describe a method for computing the Yang-Baxter cocycle invariants of an oriented knot or link represented as a signed Gauss code. We provide a URL for our Maple implementations of these algorithms.

  5. Finite element application to global reactor analysis

    International Nuclear Information System (INIS)

    Schmidt, F.A.R.

    1981-01-01

    The Finite Element Method is described as a Coarse Mesh Method with general basis and trial functions. Various consequences concerning programming and application of Finite Element Methods in reactor physics are drawn. One of the conclusions is that the Finite Element Method is a valuable tool in solving global reactor analysis problems. However, problems which can be described by rectangular boxes still can be solved with special coarse mesh programs more efficiently. (orig.) [de

  6. Clifford algebra in finite quantum field theories

    International Nuclear Information System (INIS)

    Moser, M.

    1997-12-01

    We consider the most general power counting renormalizable and gauge invariant Lagrangean density L invariant with respect to some non-Abelian, compact, and semisimple gauge group G. The particle content of this quantum field theory consists of gauge vector bosons, real scalar bosons, fermions, and ghost fields. We assume that the ultimate grand unified theory needs no cutoff. This yields so-called finiteness conditions, resulting from the demand for finite physical quantities calculated by the bare Lagrangean. In lower loop order, necessary conditions for finiteness are thus vanishing beta functions for dimensionless couplings. The complexity of the finiteness conditions for a general quantum field theory makes the discussion of non-supersymmetric theories rather cumbersome. Recently, the F = 1 class of finite quantum field theories has been proposed embracing all supersymmetric theories. A special type of F = 1 theories proposed turns out to have Yukawa couplings which are equivalent to generators of a Clifford algebra representation. These algebraic structures are all the more remarkable in the context of a well-known conjecture which states that finiteness is maybe related to global symmetries (such as supersymmetry) of the Lagrangean density. We can prove that supersymmetric theories can never be of this Clifford-type. It turns out that these Clifford algebra representations found recently are a consequence of certain invariances of the finiteness conditions resulting from a vanishing of the renormalization group β-function for the Yukawa couplings. We are able to exclude almost all such Clifford-like theories. (author)

  7. Finite temperature dynamics of a Holstein polaron: The thermo-field dynamics approach

    Science.gov (United States)

    Chen, Lipeng; Zhao, Yang

    2017-12-01

    Combining the multiple Davydov D2 Ansatz with the method of thermo-field dynamics, we study finite temperature dynamics of a Holstein polaron on a lattice. It has been demonstrated, using the hierarchy equations of motion method as a benchmark, that our approach provides an efficient, robust description of finite temperature dynamics of the Holstein polaron in the simultaneous presence of diagonal and off-diagonal exciton-phonon coupling. The method of thermo-field dynamics handles temperature effects in the Hilbert space with key numerical advantages over other treatments of finite-temperature dynamics based on quantum master equations in the Liouville space or wave function propagation with Monte Carlo importance sampling. While for weak to moderate diagonal coupling temperature increases inhibit polaron mobility, it is found that off-diagonal coupling induces phonon-assisted transport that dominates at high temperatures. Results on the mean square displacements show that band-like transport features dominate the diagonal coupling cases, and there exists a crossover from band-like to hopping transport with increasing temperature when including off-diagonal coupling. As a proof of concept, our theory provides a unified treatment of coherent and incoherent transport in molecular crystals and is applicable to any temperature.

  8. Determination of finite-difference weights using scaled binomial windows

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    The finite-difference method evaluates a derivative through a weighted summation of function values from neighboring grid nodes. Conventional finite-difference weights can be calculated either from Taylor series expansions or by Lagrange interpolation polynomials. The finite-difference method can be interpreted as a truncated convolutional counterpart of the pseudospectral method in the space domain. For this reason, we also can derive finite-difference operators by truncating the convolution series of the pseudospectral method. Various truncation windows can be employed for this purpose and they result in finite-difference operators with different dispersion properties. We found that there exist two families of scaled binomial windows that can be used to derive conventional finite-difference operators analytically. With a minor change, these scaled binomial windows can also be used to derive optimized finite-difference operators with enhanced dispersion properties. © 2012 Society of Exploration Geophysicists.
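The Taylor-series route to conventional weights mentioned above amounts to solving a small linear system: on a stencil of offsets, require that the weighted sum reproduces the chosen derivative exactly for all monomials up to the stencil size. A minimal sketch (this is the generic Taylor/Lagrange construction, not the paper's binomial-window derivation):

```python
import math
import numpy as np

def fd_weights(offsets, order):
    """Finite-difference weights for the derivative of the given order on a
    unit-spaced stencil, from Taylor-series matching: row k enforces
    sum_j w_j * offsets_j**k = k! * [k == order]."""
    offsets = np.asarray(offsets, dtype=float)
    n = offsets.size
    A = np.vander(offsets, n, increasing=True).T  # A[k, j] = offsets[j]**k
    b = np.zeros(n)
    b[order] = math.factorial(order)
    return np.linalg.solve(A, b)

print(fd_weights([-1, 0, 1], 1))   # central first-derivative weights
print(fd_weights([-1, 0, 1], 2))   # central second-derivative weights
```

On the three-point stencil this recovers the familiar [-1/2, 0, 1/2] and [1, -2, 1] operators; the windowed-truncation approach of the paper reproduces the same weights from the pseudospectral convolution series instead.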

  9. Determination of finite-difference weights using scaled binomial windows

    KAUST Repository

    Chu, Chunlei

    2012-05-01

    The finite-difference method evaluates a derivative through a weighted summation of function values from neighboring grid nodes. Conventional finite-difference weights can be calculated either from Taylor series expansions or by Lagrange interpolation polynomials. The finite-difference method can be interpreted as a truncated convolutional counterpart of the pseudospectral method in the space domain. For this reason, we also can derive finite-difference operators by truncating the convolution series of the pseudospectral method. Various truncation windows can be employed for this purpose and they result in finite-difference operators with different dispersion properties. We found that there exist two families of scaled binomial windows that can be used to derive conventional finite-difference operators analytically. With a minor change, these scaled binomial windows can also be used to derive optimized finite-difference operators with enhanced dispersion properties. © 2012 Society of Exploration Geophysicists.

  10. Finite-Element Software for Conceptual Design

    DEFF Research Database (Denmark)

    Lindemann, J.; Sandberg, G.; Damkilde, Lars

    2010-01-01

    and research. Forcepad is an effort to provide a conceptual design and teaching tool in a finite-element software package. Forcepad is a two-dimensional finite-element application based on the same conceptual model as image editing applications such as Adobe Photoshop or Microsoft Paint. Instead of using...

  11. Correction Effect of Finite Pulse Duration for High Thermal Diffusivity Materials

    International Nuclear Information System (INIS)

    Park, Dae Gyu; Kim, Hee Moon; Baik, Seung Je; Yoo, Byoung Ok; Ahn, Sang Bok; Ryu, Woo Seok

    2010-01-01

    In the laser pulsed flash method, a pulse of energy is incident on one of two parallel faces of a sample. The subsequent temperature history of the opposite face is then related to the thermal diffusivity. When the heat pulse is of infinitesimal duration, the diffusivity is obtained from the transient response of the rear face temperature proposed by Parker et al. The diffusivity α is computed from the relation α = 1.37 a² / (π² t_1/2) (1), where a is the sample thickness, t_1/2 is the time required for the rear face temperature to reach half-maximum, and t_c ≡ a²/(π²α) is the characteristic rise time of the rear face temperature. When the pulse duration τ is not infinitesimal, but becomes comparable to t_c, it is apparent that the rise in temperature of the rear face will be retarded, and t_1/2 will be greater than 1.37 t_c. This retardation has been called the 'finite pulse-time effect.' Equation (1) is accurate to 1% for t_c ≳ 50τ. For many substances, this inequality cannot be achieved with conventional optical sources (e.g. τ ≈ 10⁻³ sec for a solid state laser) unless the sample thickness is so large that its rise in temperature is too small for accurate measurement. One must therefore make an appropriate correction for the retardation of the temperature wave. The purposes of this study are to observe the impact of the finite pulse-time effect at appropriate sample thicknesses and to verify the effect of pulse correction using the Cape and Lehman method for high thermal diffusivity materials
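The Parker relation and its validity check translate directly into code. The 2 mm thickness, 10 ms half-rise time, and 1 ms pulse duration below are invented numbers chosen to show a case where the finite pulse-time correction is needed:

```python
import math

def parker_diffusivity(thickness_m, t_half_s):
    """Parker et al. flash-method diffusivity, eq. (1):
    alpha = 1.37 * a**2 / (pi**2 * t_1/2)."""
    return 1.37 * thickness_m**2 / (math.pi**2 * t_half_s)

def pulse_time_is_negligible(thickness_m, alpha, tau_s):
    """Eq. (1) is accurate to ~1% only when the characteristic rise time
    t_c = a**2 / (pi**2 * alpha) is at least ~50 times the pulse duration."""
    t_c = thickness_m**2 / (math.pi**2 * alpha)
    return t_c >= 50 * tau_s

a = 2.0e-3        # 2 mm sample thickness (assumed)
t_half = 0.010    # 10 ms half-rise time (assumed)
alpha = parker_diffusivity(a, t_half)
print(alpha)                                       # diffusivity in m^2/s
print(pulse_time_is_negligible(a, alpha, 1e-3))    # 1 ms laser pulse
```

For this thin, high-diffusivity case t_c is only about 7 ms, well under 50τ, so the uncorrected eq. (1) would be biased and a Cape-Lehman-style pulse correction is required.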

  12. Electron-phonon coupling from finite differences

    Science.gov (United States)

    Monserrat, Bartomeu

    2018-02-01

    The interaction between electrons and phonons underlies multiple phenomena in physics, chemistry, and materials science. Examples include superconductivity, electronic transport, and the temperature dependence of optical spectra. A first-principles description of electron-phonon coupling enables the study of the above phenomena with accuracy and material specificity, which can be used to understand experiments and to predict novel effects and functionality. In this topical review, we describe the first-principles calculation of electron-phonon coupling from finite differences. The finite differences approach provides several advantages compared to alternative methods, in particular (i) any underlying electronic structure method can be used, and (ii) terms beyond the lowest order in the electron-phonon interaction can be readily incorporated. But these advantages are associated with a large computational cost that has until recently prevented the widespread adoption of this method. We describe some recent advances, including nondiagonal supercells and thermal lines, that resolve these difficulties, and make the calculation of electron-phonon coupling from finite differences a powerful tool. We review multiple applications of the calculation of electron-phonon coupling from finite differences, including the temperature dependence of optical spectra, superconductivity, charge transport, and the role of defects in semiconductors. These examples illustrate the advantages of finite differences, with cases where semilocal density functional theory is not appropriate for the calculation of electron-phonon coupling and many-body methods such as the GW approximation are required, as well as examples in which higher-order terms in the electron-phonon interaction are essential for an accurate description of the relevant phenomena. We expect that the finite difference approach will play a central role in future studies of the electron-phonon interaction.
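At its core, the finite-differences approach evaluates derivatives of electronic quantities with respect to atomic displacements numerically. A toy sketch of the lowest-order (frozen-phonon style) coupling, with an invented quadratic "electronic energy" model in place of a real electronic-structure calculation:

```python
def frozen_phonon_coupling(energy_fn, u0, h=1e-3):
    """First-order coupling by central finite differences: dE/du at
    displacement u0, where energy_fn would be an electronic-structure
    calculation in a real workflow."""
    return (energy_fn(u0 + h) - energy_fn(u0 - h)) / (2 * h)

# Toy model E(u) = E0 + g*u + c*u**2; the linear coefficient g plays the
# role of the electron-phonon coupling constant (illustrative only).
g_true, c = 0.3, 1.5
energy = lambda u: 1.0 + g_true * u + c * u**2
print(frozen_phonon_coupling(energy, u0=0.0))
```

Because any electronic-structure code can play the role of `energy_fn`, this is exactly the method-agnosticism advantage (i) cited in the abstract; higher-order couplings follow from higher-order difference stencils in the same way.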

  13. Finite Element Simulation of Diametral Strength Test of Hydroxyapatite

    International Nuclear Information System (INIS)

    Ozturk, Fahrettin; Toros, Serkan; Evis, Zafer

    2011-01-01

    In this study, the diametral strength test of sintered hydroxyapatite was simulated with the finite element software ABAQUS/Standard. Stress distributions on the diametral test sample were determined. The effect of sintering temperature on the stress distribution of hydroxyapatite was studied. It was concluded that high sintering temperatures did not reduce the stress on hydroxyapatite; above 1300 deg. C they had a negative effect on the stress distribution. In addition to the porosity, other factors (sintering temperature, presence of phases and the degree of crystallinity) affect the diametral strength of the hydroxyapatite.

  14. Finite N=1 SUSY gauge field theories

    International Nuclear Information System (INIS)

    Kazakov, D.I.

    1986-01-01

    The authors give a detailed description of the method to construct finite N=1 SUSY gauge field theories in the framework of N=1 superfields within dimensional regularization. The finiteness of all Green functions is based on supersymmetry and gauge invariance and is achieved by a proper choice of matter content of the theory and Yukawa couplings in the form Y i =f i (ε)g, where g is the gauge coupling, and the function f i (ε) is regular at ε=0 and is calculated in perturbation theory. Necessary and sufficient conditions for finiteness are determined already in the one-loop approximation. The correspondence with an earlier proposed approach to construct finite theories based on eigenvalue solutions of renormalization-group equations is established

  15. Reference intervals for selected serum biochemistry analytes in cheetahs (Acinonyx jubatus)

    Directory of Open Access Journals (Sweden)

    Gavin C. Hudson-Lamb

    2016-02-01

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs, and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs: 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L – 166 mmol/L), potassium (3.9 mmol/L – 5.2 mmol/L), magnesium (0.8 mmol/L – 1.2 mmol/L), chloride (97 mmol/L – 130 mmol/L), urea (8.2 mmol/L – 25.1 mmol/L) and creatinine (88 µmol/L – 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined; captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
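
The pipeline described above — non-parametric reference limits with a percentile-bootstrap confidence interval for each limit — can be sketched as follows. This is a hedged illustration, not the authors' exact implementation, and the sodium values are simulated rather than the Namibian cheetah measurements.

```python
import random

def percentile(sorted_vals, p):
    """Nearest-rank percentile (0 <= p <= 1) on an already sorted list."""
    return sorted_vals[int(round(p * (len(sorted_vals) - 1)))]

def reference_interval(values):
    """Non-parametric 95% reference interval: 2.5th and 97.5th percentiles."""
    s = sorted(values)
    return percentile(s, 0.025), percentile(s, 0.975)

def bootstrap_ci(values, p, n_boot=2000, conf=0.90, seed=1):
    """Percentile-bootstrap confidence interval for one reference limit."""
    rng = random.Random(seed)
    reps = sorted(
        percentile(sorted(rng.choices(values, k=len(values))), p)
        for _ in range(n_boot)
    )
    alpha = (1.0 - conf) / 2.0
    return percentile(reps, alpha), percentile(reps, 1.0 - alpha)

rng = random.Random(0)
sodium = [rng.gauss(147.0, 9.0) for _ in range(66)]  # simulated, n = 66 as in the study
lower, upper = reference_interval(sodium)            # the reference interval
ci_low, ci_high = bootstrap_ci(sodium, 0.975)        # 90% CI for the upper limit
```

In practice guideline-conforming software interpolates between ranks rather than using the nearest rank, but the structure of the calculation is the same.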

  16. Harmonising Reference Intervals for Three Calculated Parameters used in Clinical Chemistry.

    Science.gov (United States)

    Hughes, David; Koerbin, Gus; Potter, Julia M; Glasgow, Nicholas; West, Nic; Abhayaratna, Walter P; Cavanaugh, Juleen; Armbruster, David; Hickman, Peter E

    2016-08-01

    For more than a decade there has been a global effort to harmonise all phases of the testing process, with particular emphasis on the most frequently utilised measurands. In addition, it is recognised that calculated parameters derived from these measurands should also be a target for harmonisation. Using data from the Aussie Normals study we report reference intervals for three calculated parameters: serum osmolality, serum anion gap and albumin-adjusted serum calcium. The Aussie Normals study was an a priori study that analysed samples from 1856 healthy volunteers. The nine analytes used for the calculations in this study were measured on Abbott Architect analysers. The data demonstrated normal (Gaussian) distributions for the albumin-adjusted serum calcium, the anion gap (using potassium in the calculation) and the calculated serum osmolality (using both the Bhagat et al. and Smithline and Gardner formulae). To assess the suitability of these reference intervals for use as harmonised reference intervals, we reviewed data from the Royal College of Pathologists of Australasia/Australasian Association of Clinical Biochemists (RCPA/AACB) bias survey. We conclude that the reference intervals for the calculated serum osmolality (using the Smithline and Gardner formula) may be suitable for use as a common reference interval. Although a common reference interval for albumin-adjusted serum calcium may be possible, further investigations (including a greater range of albumin concentrations) are needed. This is due to the bias between the Bromocresol Green (BCG) and Bromocresol Purple (BCP) methods at lower serum albumin concentrations. Problems with the measurement of total CO2 in the bias survey meant that we could not use the data for assessing the suitability of a common reference interval for the anion gap. Further study is required.
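
The three calculated parameters can be written down directly. The formulas below are the widely quoted textbook forms (anion gap with potassium, the common 0.02 mmol/L per g/L albumin correction, and the Smithline and Gardner osmolality as usually stated in SI units); the exact variants used in the study, in particular the Bhagat et al. osmolality formula, may differ, so treat this as an assumption-laden sketch.

```python
# All inputs in mmol/L except albumin (g/L); calcium in mmol/L.

def anion_gap(na, k, cl, hco3):
    """Anion gap including potassium: (Na + K) - (Cl + HCO3)."""
    return (na + k) - (cl + hco3)

def adjusted_calcium(ca, albumin):
    """Albumin-adjusted calcium, common correction: Ca + 0.02*(40 - albumin)."""
    return ca + 0.02 * (40.0 - albumin)

def osmolality_smithline_gardner(na, glucose, urea):
    """Calculated serum osmolality (mmol/kg) as commonly quoted:
    2*Na + glucose + urea."""
    return 2.0 * na + glucose + urea
```

With typical values, anion_gap(140, 4.0, 100, 24) gives 20 mmol/L and osmolality_smithline_gardner(140, 5.0, 5.0) gives 290 mmol/kg.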

  17. In vitro human keratinocyte migration rates are associated with SNPs in the KRT1 interval.

    Directory of Open Access Journals (Sweden)

    Heng Tao

    Efforts to develop effective therapeutic treatments for promoting fast wound healing after injury to the epidermis are hindered by a lack of understanding of the factors involved. Re-epithelialization is an essential step of wound healing involving the migration of epidermal keratinocytes over the wound site. Here, we examine genetic variants in the keratin-1 (KRT1) locus for association with migration rates of human epidermal keratinocytes (HEK) isolated from different individuals. Although the role of intermediate filament genes, including KRT1, in wound-activated keratinocytes is well established, this is the first study to examine whether genetic variants in humans contribute to differences in the migration rates of these cells. Using an in vitro scratch wound assay we observe quantifiable variation in HEK migration rates in two independent sets of samples; 24 samples in the first set and 17 samples in the second set. We analyze genetic variants in the KRT1 interval and identify SNPs significantly associated with HEK migration rates in both sample sets. Additionally, we show in the first set of samples that the average migration rate of HEK cells homozygous for one common haplotype pattern in the KRT1 interval is significantly faster than that of HEK cells homozygous for a second common haplotype pattern. Our study demonstrates that genetic variants in the KRT1 interval contribute to quantifiable differences in the migration rates of keratinocytes isolated from different individuals. Furthermore, we show that in vitro cell assays can successfully be used to deconstruct complex traits into simple biological model systems for genetic association studies.

  18. Reviewing interval cancers: Time well spent?

    International Nuclear Information System (INIS)

    Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

    2002-01-01

    OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers, or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols and it is no more accurate than other, quicker and more timely methods. Gower-Thomas, K. et al. (2002)

  19. Incompleteness in the finite domain

    Czech Academy of Sciences Publication Activity Database

    Pudlák, Pavel

    2017-01-01

    Roč. 23, č. 4 (2017), s. 405-441 ISSN 1079-8986 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : finite domain Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.742, year: 2016 https://www.cambridge.org/core/journals/bulletin-of-symbolic-logic/article/incompleteness-in-the-finite-domain/D239B1761A73DCA534A4805A76D81C76

  20. An actual load forecasting methodology by interval grey modeling based on the fractional calculus.

    Science.gov (United States)

    Yang, Yang; Xue, Dingyü

    2017-07-17

    The operation of a thermal power plant is measured by real-time data, and a large number of historical interval data can be obtained from the dataset. Within defined periods of time, the interval information can provide important input for decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, with interval grey numbers the modelling and prediction process is more complicated than with real numbers. In order not to lose any information, the geometric features of the intervals are represented in this paper by the coordinates of the area and middle-point lines, which are proved to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better performance in modelling and prediction, and can be widely used for modelling and prediction with small samples of historical interval sequences from industry. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
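
The fractional-order accumulation that underlies such grey models generalizes the usual first-order accumulated generating operation (a cumulative sum) by weighting past values with generalized binomial coefficients. A hedged sketch of that operator alone is shown below; the full interval grey prediction model in the paper builds further machinery on top of it.

```python
from math import gamma

def frac_ago(x, r):
    """r-order accumulated generating operation (r > 0):
    y(k) = sum_{i<=k} C(k-i+r-1, k-i) * x(i),
    with the generalized binomial coefficient computed via the Gamma
    function: C(m+r-1, m) = Gamma(r+m) / (Gamma(r) * m!)."""
    def c(m):
        return gamma(r + m) / (gamma(r) * gamma(m + 1))
    return [sum(c(k - i) * x[i] for i in range(k + 1)) for k in range(len(x))]

# For r = 1 every coefficient is 1, so the operator reduces to the
# ordinary cumulative sum used in the classical GM(1,1) model.
y1 = frac_ago([1.0, 2.0, 3.0], 1.0)   # [1.0, 3.0, 6.0]
y_half = frac_ago([1.0, 2.0, 3.0], 0.5)  # gentler accumulation
```

Choosing r between 0 and 1 damps the influence of old observations, which is the extra degree of freedom the abstract refers to.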

  1. An efficient finite element solution for gear dynamics

    International Nuclear Information System (INIS)

    Cooley, C G; Parker, R G; Vijayakar, S M

    2010-01-01

    A finite element formulation for the dynamic response of gear pairs is proposed. Following an established approach in lumped parameter gear dynamic models, the static solution is used as the excitation in a frequency domain solution of the finite element vibration model. The nonlinear finite element/contact mechanics formulation provides accurate calculation of the static solution and average mesh stiffness that are used in the dynamic simulation. The frequency domain finite element calculation of dynamic response compares well with numerically integrated (time domain) finite element dynamic results and previously published experimental results. Simulation time with the proposed formulation is two orders of magnitude lower than numerically integrated dynamic results. This formulation admits system level dynamic gearbox response, which may include multiple gear meshes, flexible shafts, rolling element bearings, housing structures, and other deformable components.

  2. Determinants of birth interval in a rural Mediterranean population (La Alpujarra, Spain).

    Science.gov (United States)

    Polo, V; Luna, F; Fuster, V

    2000-10-01

    The fertility pattern, in terms of birth intervals, in a rural population not practicing contraception belonging to La Alta Alpujarra Oriental (southeast Spain) is analyzed. During the first half of the 20th century, this population experienced a considerable degree of geographical and cultural isolation. Because of this population's high variability in fertility and therefore in birth intervals, the analysis was limited to a homogenous subsample of 154 families, each with at least five pregnancies. This limitation allowed us to analyze, among and within families, effects of a set of variables on the interbirth pattern, and to avoid possible problems of pseudoreplication. Information on birth date of the mother, age at marriage, children's birth date and death date, birth order, and frequency of miscarriages was collected. Our results indicate that interbirth intervals depend on an exponential effect of maternal age, especially significant after the age of 35. This effect is probably related to the biological degenerative processes of female fertility with age. A linear increase of birth intervals with birth order within families was found as well as a reduction of intervals among families experiencing an infant death. Our sample size was insufficient to detect a possible replacement behavior in the case of infant death. High natality and mortality rates, a secular decrease of natality rates, a log-normal birth interval, and family-size distributions suggest that La Alpujarra has been a natural fertility population following a demographic transition process.

  3. Finite size scaling and lattice gauge theory

    International Nuclear Information System (INIS)

    Berg, B.A.

    1986-01-01

    Finite size (Fisher) scaling is investigated for four dimensional SU(2) and SU(3) lattice gauge theories without quarks. It makes it possible to disentangle violations of (asymptotic) scaling and finite volume corrections. Mass spectrum, string tension, deconfinement temperature and lattice β-function are considered. For appropriate volumes, Monte Carlo investigations seem to be able to control the finite volume continuum limit. Contact is made with Luescher's small volume expansion and possibly also with the asymptotic large volume behavior. 41 refs., 19 figs

  4. Analysis of wave motion in one-dimensional structures through fast-Fourier-transform-based wavelet finite element method

    Science.gov (United States)

    Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping

    2017-07-01

    This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on fast Fourier transform (FFT) to study wave propagation in One-Dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction on element level is applied to eliminate the interior degrees of freedom of BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of crack and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by FFT and their corresponding solutions are presented.

  5. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    Science.gov (United States)

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  6. Finite Verb Morphology in the Spontaneous Speech of Dutch-Speaking Children With Hearing Loss.

    Science.gov (United States)

    Hammer, Annemiek; Coene, Martine

    2016-01-01

    In this study, the acquisition of Dutch finite verb morphology is investigated in children with cochlear implants (CIs) with profound hearing loss and in children with hearing aids (HAs) with moderate to severe hearing loss. Comparing these two groups of children increases our insight into how hearing experience and audibility affect the acquisition of morphosyntax. Spontaneous speech samples were analyzed of 48 children with CIs and 29 children with HAs, ages 4 to 7 years. These language samples were analyzed by means of standardized language analysis involving mean length of utterance, the number of finite verbs produced, and target-like subject-verb agreement. The outcomes were interpreted relative to expectations based on the performance of typically developing peers with normal hearing. Outcomes of all measures were correlated with hearing level in the group of HA users and age at implantation in the group of CI users. For both groups, the number of finite verbs produced in a 50-utterance sample was on par with mean length of utterance and at the lower bound of the normal distribution. No significant differences were found between children with CIs and HAs on any of the measures under investigation. Yet, both groups produced more subject-verb agreement errors than are to be expected for typically developing hearing peers. No significant correlation was found between the hearing level of the children and the relevant measures of verb morphology, both with respect to the overall number of verbs that were used and the number of errors that children made. Within the group of CI users, the outcomes were significantly correlated with age at implantation. When producing finite verb morphology, profoundly deaf children wearing CIs perform similarly to their peers with moderate-to-severe hearing loss wearing HAs. Hearing loss negatively affects the acquisition of subject-verb agreement regardless of the hearing device (CI or HA) that the child is wearing.

  7. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n-3/2) and Op(n-2), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.
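
For orientation, the brute-force procedure that the linear method approximates analytically is the percentile bootstrap for a smooth function of a sample mean. The sketch below is that baseline only (here with exp of the mean as the smooth function, an illustrative choice), not the authors' linear approximation.

```python
import math
import random

def percentile_bootstrap_ci(data, stat, n_boot=4000, conf=0.95, seed=7):
    """Percentile-bootstrap confidence interval for stat(data):
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    a = (1.0 - conf) / 2.0
    return reps[int(a * n_boot)], reps[int((1.0 - a) * n_boot) - 1]

def exp_mean(xs):
    """A smooth function of the sample mean, as in the class of statistics
    the linear method covers."""
    return math.exp(sum(xs) / len(xs))

data = [0.1, 0.4, -0.2, 0.3, 0.0, 0.5, -0.1, 0.2, 0.6, -0.3]
lo, hi = percentile_bootstrap_ci(data, exp_mean)
```

The linear method replaces the inner resampling loop with closed-form approximations to the bootstrap cumulants, trading Monte Carlo error for the stated Op(n-3/2) or Op(n-2) analytical error.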

  8. Solution of multi-group diffusion equation in x-y-z geometry by finite Fourier transformation

    International Nuclear Information System (INIS)

    Kobayashi, Keisuke

    1975-01-01

    The multi-group diffusion equation in three-dimensional x-y-z geometry is solved by finite Fourier transformation. Applying the Fourier transformation to a finite region with constant nuclear cross sections, the fluxes and currents at the material boundaries are obtained in terms of the Fourier series. Truncating the series after the first term, and assuming that the source term is piecewise linear within each mesh box, a set of coupled equations is obtained in the form of three-point equations for each coordinate. These equations can be easily solved by the alternative direction implicit method. Thus a practical procedure is established that could be applied to replace the currently used difference equation. This equation is used to solve the multi-group diffusion equation by means of the source iteration method; and sample calculations for thermal and fast reactors show that the present method yields accurate results with a smaller number of mesh points than the usual finite difference equations. (auth.)

  9. Regularization of finite temperature string theories

    International Nuclear Information System (INIS)

    Leblanc, Y.; Knecht, M.; Wallet, J.C.

    1990-01-01

    The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)

  10. Preservation theorems on finite structures

    International Nuclear Information System (INIS)

    Hebert, M.

    1994-09-01

    This paper concerns classical preservation results applied to finite structures. We consider binary relations for which a strong form of preservation theorem (called strong interpolation) exists in the usual case. This includes most classical cases: embeddings, extensions, homomorphisms into and onto, sandwiches, etc. We establish necessary and sufficient syntactic conditions for the preservation theorems for sentences and for theories to hold in the restricted context of finite structures. We deduce that for all relations above, the restricted theorem for theories holds provided the language is finite. For sentences the restricted version fails in most cases; in fact the ''homomorphism into'' case seems to be the only possible one, but efforts to prove this have failed. We hope our results may help to solve this frustrating problem; in the meantime, they are used to put a lower bound on the level of complexity of potential counterexamples. (author). 8 refs

  11. Maximizing the retention level for proportional reinsurance under α-regulation of the finite time surplus process with unit-equalized interarrival time

    Directory of Open Access Journals (Sweden)

    Sukanya Somprom

    2016-07-01

    The research focuses on an insurance model controlled by proportional reinsurance in the finite-time surplus process with a unit-equalized time interval. We prove the existence of the maximal retention level for independent and identically distributed claim processes under α-regulation, i.e., a model in which the insurance company must keep the probability of insolvency at most α. In addition, we illustrate the maximal retention level for exponential claims by applying the bisection technique.

  12. Finite Element Methods and Their Applications

    CERN Document Server

    Chen, Zhangxin

    2005-01-01

    This book serves as a text for one- or two-semester courses for upper-level undergraduates and beginning graduate students and as a professional reference for people who want to solve partial differential equations (PDEs) using finite element methods. The author has attempted to introduce every concept in the simplest possible setting and maintain a level of treatment that is as rigorous as possible without being unnecessarily abstract. Quite a lot of attention is given to discontinuous finite elements, characteristic finite elements, and to the applications in fluid and solid mechanics including applications to porous media flow, and applications to semiconductor modeling. An extensive set of exercises and references in each chapter are provided.

  13. Modelling robot's behaviour using finite automata

    Science.gov (United States)

    Janošek, Michal; Žáček, Jaroslav

    2017-07-01

    This paper proposes a model of a robot's behaviour described by finite automata. We split the robot's knowledge into several knowledge bases, which are used by the inference mechanism of the robot's expert system to make a logical deduction. Each knowledge base is dedicated to a particular behaviour domain, and the finite automaton switches among these knowledge bases with respect to the actual situation. Our goal is to simplify one big knowledge base, and reduce its complexity, by splitting it into several pieces. The advantage of this model is that we can easily add new behaviour by adding a new knowledge base, inserting this behaviour into the finite automaton, and defining the necessary states and transitions.
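
The switching scheme can be sketched as a deterministic finite automaton whose states are behaviour domains, each mapped to its own knowledge base, with events driving the transitions. The behaviour names, events, and knowledge-base identifiers below are hypothetical, chosen only to illustrate the structure.

```python
# Hypothetical behaviour states, events and knowledge bases; the paper's
# actual domains are not specified here.
TRANSITIONS = {
    ("explore", "obstacle_seen"): "avoid",
    ("avoid", "path_clear"): "explore",
    ("explore", "battery_low"): "recharge",
    ("recharge", "battery_full"): "explore",
}

KNOWLEDGE_BASES = {
    "explore": "kb_navigation",
    "avoid": "kb_obstacle",
    "recharge": "kb_energy",
}

def step(state, event):
    """Next behaviour state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "explore"
for event in ["obstacle_seen", "path_clear", "battery_low"]:
    state = step(state, event)
active_kb = KNOWLEDGE_BASES[state]  # the knowledge base handed to the expert system
```

Adding a behaviour then amounts to adding one entry to KNOWLEDGE_BASES and the transitions that reach it, exactly the extensibility the abstract claims.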

  14. Quantum channels with a finite memory

    International Nuclear Information System (INIS)

    Bowen, Garry; Mancini, Stefano

    2004-01-01

    In this paper we study quantum communication channels with correlated noise effects, i.e., quantum channels with memory. We derive a model for correlated noise channels that includes a channel memory state. We examine the case where the memory is finite, and derive bounds on the classical and quantum capacities. For the entanglement-assisted and unassisted classical capacities it is shown that these bounds are attainable for certain classes of channel. Also, we show that the structure of any finite-memory state is unimportant in the asymptotic limit, and specifically, for a perfect finite-memory channel where no information is lost to the environment, achieving the upper bound implies that the channel is asymptotically noiseless

  15. On Forecasting Macro-Economic Indicators with the Help of Finite-Difference Equations and Econometric Methods

    Directory of Open Access Journals (Sweden)

    Polshkov Yulian M.

    2013-11-01

    The article considers data on gross domestic product, consumer expenditure, gross investment and the volume of foreign trade for the national economy. Time is assumed to be a discrete variable with a one-year step, and finite-difference equations are used. Models with a high degree of state regulation of the consumer market are considered. The econometric component is based on the hypothesis that each of the above macro-economic indicators for a given year depends on the gross domestic product of previous time periods. This assumption makes it possible to apply the least-squares method to build linear pair-regression models. The article obtains a time series model that allows point and interval forecasts of the gross domestic product for the next year to be built from its values for the current and previous years; such forecasts can be considered justified at least in the short term. From the mathematical point of view the model is a nonhomogeneous finite-difference equation of the second order with constant coefficients. The article describes specific features of such equations and graphically illustrates the analytical form of their solutions. This gives grounds to classify national economies as sustainable-growth, one-sided, weak, or undergoing successful re-formation. The article concludes by comparing these types with the economies of specific modern states.
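
The model class described — a nonhomogeneous second-order linear difference equation with constant coefficients, iterated forward from the last two observations — can be sketched as below. The coefficients are illustrative placeholders, not estimates from any real series; in the article they would come from the least-squares regressions.

```python
def forecast(y_prev2, y_prev1, a, b, c, horizon):
    """Iterate y(t) = a*y(t-1) + b*y(t-2) + c for `horizon` steps,
    starting from the last two observed values."""
    ys = [y_prev2, y_prev1]
    for _ in range(horizon):
        ys.append(a * ys[-1] + b * ys[-2] + c)
    return ys[2:]

# Illustrative coefficients only (hypothetical mild-growth dynamics):
path = forecast(100.0, 103.0, a=1.2, b=-0.25, c=5.0, horizon=3)
# first forecast: 1.2*103 - 0.25*100 + 5 = 103.6
```

Whether the iterates grow, oscillate or decay is governed by the roots of the characteristic equation x² = a·x + b, which is what underlies the article's classification of economies.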

  16. Thermal capacity of ternary oxide YBa2Cu3O7-y in 300-1100 K interval

    International Nuclear Information System (INIS)

    Sharpataya, G.A.; Ozerova, Z.P.; Kolnovalova, I.A.; Lazarev, V.B.; Shaplygin, I.S.

    1991-01-01

    The thermal capacity of YBa2Cu3O7-y samples with different thermal prehistory was measured with a differential scanning calorimeter in the 300-1100 K interval. It is shown that the set of thermal capacity versus temperature curves for these samples demonstrates the reversibility and temperature limits of the oxygen absorption and release processes, with the corresponding change of the oxygen index in the formula from 6.85-6.90 to 6.35 and vice versa. A thermal capacity anomaly, corresponding to the reversible structural transition from the orthorhombic to the tetragonal phase with simultaneous oxygen loss, is observed in the 630-1000 K interval.

  17. Two-dimensional finite element heat transfer model of softwood. Part II, Macrostructural effects

    Science.gov (United States)

    Hongmei Gu; John F. Hunt

    2006-01-01

    A two-dimensional finite element model was used to study the effects of structural features on transient heat transfer in softwood lumber with various orientations. Transient core temperature was modeled for lumber samples “cut” from various locations within a simulated log. The effects of ring orientation, earlywood to latewood (E/L) ratio, and ring density were...

  18. Finite element analysis of a finite-strain plasticity problem

    International Nuclear Information System (INIS)

    Crose, J.G.; Fong, H.H.

    1984-01-01

    A finite-strain plasticity analysis was performed of an engraving process in a plastic rotating band during the firing of a gun projectile. The aim was to verify a nonlinear feature of the NIFDI/RB code: plastic large deformation analysis of nearly incompressible materials using a deformation theory of plasticity approach and a total Lagrangian scheme. (orig.)

  19. Finite element modeling of ultrasonic inspection of weldments

    International Nuclear Information System (INIS)

    Dewey, B.R.; Adler, L.; Oliver, B.F.; Pickard, C.A.

    1983-01-01

    High performance weldments for critical service applications require 100% inspection. Balanced against the adaptability of the ultrasonic method for automated inspection are the difficulties encountered with nonhomogeneous and anisotropic materials. This research utilizes crystals and bicrystals of nickel to model austenitic weld metal, where the anisotropy produces scattering and mode conversion, making detection and measurement of actual defects difficult. Well characterized samples of Ni are produced in a levitation zone melting facility. Crystals in excess of 25 mm diameter and length are large enough to permit ultrasonic measurements of attenuation, wave speed, and spectral content. At the same time, the experiments are duplicated as finite element models for comparison purposes

  20. Axisymmetric Alfvén resonances in a multi-component plasma at finite ion gyrofrequency

    Directory of Open Access Journals (Sweden)

    D. Yu. Klimushkin

    2006-05-01

    This paper deals with the spatial structure of zero azimuthal wave number ULF oscillations in a 1-D inhomogeneous multi-component plasma when a finite ion gyrofrequency is taken into account. Such oscillations may occur in the terrestrial magnetosphere as Pc1-3 waves or in the magnetosphere of the planet Mercury. The wave field was found to have a sharp peak on some magnetic surfaces, an analogue of the Alfvén (field line) resonance of one-fluid MHD theory. The resonance can only take place for waves with frequencies in the intervals ω<ωch or Ω0<ω<ωcp, where ωch and ωcp are the heavy- and light-ion gyrofrequencies, and Ω0 is a kind of hybrid frequency. Contrary to the ordinary Alfvén resonance, the wave resonance under consideration takes place even at zero azimuthal wave number. The radial component of the wave electric field has a pole-type singularity, while the azimuthal component is finite but has a branching-point singularity on the resonance surface. The latter singularity can disappear at some frequencies. In the region adjacent to the resonant surface the mode is standing across the magnetic shells.

  1. Finite moments approach to the time-dependent neutron transport equation

    International Nuclear Information System (INIS)

    Kim, Sang Hyun

    1994-02-01

    Currently, nodal techniques are widely used in solving the multidimensional diffusion equation because of savings in computing time and storage. Thanks to the development of computer technology, one can now solve the transport equation instead of the diffusion equation to obtain a more accurate solution. The finite moments method, one of the nodal methods, attempts to represent the fluxes in the cell and on cell surfaces more rigorously by retaining additional spatial moments. Generally, there are two finite moments schemes for solving the time-dependent transport equation. In one, the time variable is treated implicitly while the finite moments method is applied to the space variables (implicit finite moments method); the other uses the finite moments method in both space and time (space-time finite moments method). In this study, these two schemes are applied to two types of time-dependent neutron transport problems: a fixed source problem and a heterogeneous fast reactor problem with delayed neutrons. From the results, it is observed that the two finite moments methods give almost the same solutions in both benchmark problems. However, the space-time finite moments method requires somewhat longer computing time than the implicit finite moments method. In order to reduce this computing time, a new iteration strategy is exploited, in which a few stepwise calculations, grouping the original time steps into several coarse time divisions, are performed sequentially instead of iterating over the entire set of time steps. This strategy results in a significant reduction of the computing time, and we observe that a 2- or 3-step calculation is preferable. In addition, we propose a new finite moments method, called the mixed finite moments method in this thesis. Asymptotic analysis for the finite moments method shows that accuracy of the solution in a heterogeneous problem mainly depends on the accuracy of the

  2. A note on powers in finite fields

    Science.gov (United States)

    Aabrandt, Andreas; Lundsgaard Hansen, Vagn

    2016-08-01

    The study of solutions to polynomial equations over finite fields has a long history in mathematics and is an interesting area of contemporary research. In recent years, the subject has found important applications in the modelling of problems from applied mathematical fields such as signal analysis, system theory, coding theory and cryptology. In this connection, it is of interest to know criteria for the existence of squares and other powers in arbitrary finite fields. Making good use of polynomial division in polynomial rings over finite fields, we have examined a classical criterion of Euler for squares in odd prime fields, giving it a formulation that is apt for generalization to arbitrary finite fields and powers. Our proof uses algebra rather than classical number theory, which makes it convenient when presenting basic methods of applied algebra in the classroom.
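    Euler's classical criterion referred to above states that, for an odd prime p, a nonzero a is a square in F_p exactly when a^((p-1)/2) ≡ 1 (mod p); for d-th powers the exponent becomes (p-1)/gcd(d, p-1). A minimal sketch for the prime-field case (the paper's generalization to arbitrary finite fields F_q is analogous):

```python
from math import gcd

# Euler's criterion: for an odd prime p and gcd(a, p) = 1,
# a is a d-th power in F_p iff a^((p-1)/gcd(d, p-1)) == 1 (mod p).
def is_power_residue(a, p, d=2):
    """Test whether a is a d-th power in the prime field F_p."""
    if a % p == 0:
        return True                      # 0 = 0^d is trivially a d-th power
    e = (p - 1) // gcd(d, p - 1)
    return pow(a, e, p) == 1             # fast modular exponentiation

# Brute-force check: the squares mod 7 are {1, 2, 4}.
squares_mod_7 = {x * x % 7 for x in range(1, 7)}
```

For example, `is_power_residue(2, 7)` computes 2^3 mod 7 = 1, so 2 is a square mod 7, in agreement with the brute-force set.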

  3. Chiral crossover transition in a finite volume

    Science.gov (United States)

    Shi, Chao; Jia, Wenbao; Sun, An; Zhang, Liping; Zong, Hongshi

    2018-02-01

    Finite volume effects on the chiral crossover transition of strong interactions at finite temperature are studied by solving the quark gap equation within a cubic volume of finite size L. With the anti-periodic boundary condition, our calculation shows the chiral quark condensate, which characterizes the strength of dynamical chiral symmetry breaking, decreases as L decreases below 2.5 fm. We further study the finite volume effects on the pseudo-transition temperature T_c of the crossover, showing a significant decrease in T_c as L decreases below 3 fm. Supported by National Natural Science Foundation of China (11475085, 11535005, 11690030, 51405027), the Fundamental Research Funds for the Central Universities (020414380074), China Postdoctoral Science Foundation (2016M591808) and Open Research Foundation of State Key Lab. of Digital Manufacturing Equipment & Technology in Huazhong University of Science & Technology (DMETKF2015015)

  4. Programming the finite element method

    CERN Document Server

    Smith, I M; Margetts, L

    2013-01-01

    Many students, engineers, scientists and researchers have benefited from the practical, programming-oriented style of the previous editions of Programming the Finite Element Method, learning how to develop computer programs to solve specific engineering problems using the finite element method. This new fifth edition offers timely revisions that include programs and subroutine libraries fully updated to Fortran 2003, which are freely available online, and provides updated material on advances in parallel computing, thermal stress analysis, plasticity return algorithms, convection boundary c

  5. The theory of finitely generated commutative semigroups

    CERN Document Server

    Rédei, L; Stark, M; Gravett, K A H

    1966-01-01

    The Theory of Finitely Generated Commutative Semigroups describes a theory of finitely generated commutative semigroups which is founded essentially on a single "fundamental theorem" and exhibits resemblance in many respects to the algebraic theory of numbers. The theory primarily involves the investigation of the F-congruences (F is the free semimodule of rank n, where n is a given natural number). As applications, several important special cases are given. This volume is comprised of five chapters and begins with preliminaries on finitely generated commutative semigroups before

  6. Books and monographs on finite element technology

    Science.gov (United States)

    Noor, A. K.

    1985-01-01

    The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, taking into account also a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.

  7. Finite unified models

    Energy Technology Data Exchange (ETDEWEB)

    Kapetanakis, D. (Technische Univ. Muenchen, Garching (Germany). Physik Dept.); Mondragon, M. (Technische Univ. Muenchen, Garching (Germany). Physik Dept.); Zoupanos, G. (National Technical Univ., Athens (Greece). Physics Dept.)

    1993-09-01

    We present phenomenologically viable SU(5) unified models which are finite to all orders before the spontaneous symmetry breaking. In the case of two models with three families the top quark mass is predicted to be 178.8 GeV. (orig.)

  8. Finite unified models

    International Nuclear Information System (INIS)

    Kapetanakis, D.; Mondragon, M.; Zoupanos, G.

    1993-01-01

    We present phenomenologically viable SU(5) unified models which are finite to all orders before the spontaneous symmetry breaking. In the case of two models with three families the top quark mass is predicted to be 178.8 GeV. (orig.)

  9. CoCrMo cellular structures made by Electron Beam Melting studied by local tomography and finite element modelling

    Energy Technology Data Exchange (ETDEWEB)

    Petit, Clémence [INSA de Lyon, MATEIS CNRS UMR5510, Université de Lyon, 69621 Villeurbanne (France); Maire, Eric, E-mail: eric.maire@insa-lyon.fr [INSA de Lyon, MATEIS CNRS UMR5510, Université de Lyon, 69621 Villeurbanne (France); Meille, Sylvain; Adrien, Jérôme [INSA de Lyon, MATEIS CNRS UMR5510, Université de Lyon, 69621 Villeurbanne (France); Kurosu, Shingo; Chiba, Akihiko [Institute for Materials Research, Tohoku University, Sendai 980-0812 (Japan)

    2016-06-15

    The work focuses on the structural and mechanical characterization of Co-Cr-Mo cellular samples with a cubic pore structure made by Electron Beam Melting (EBM). X-ray tomography was used to characterize the architecture of the sample. High-resolution images were also obtained thanks to local tomography, in which the specimen is placed close to the X-ray source. These images made it possible to observe defects due to the fabrication process: small pores in the solid phase and partially melted particles attached to the surface. In situ compression tests were then performed in the tomograph. The images of the deformed sample show a progressive buckling of the vertical struts leading to final fracture. The deformation initiated where defects were present in the struts, i.e. in regions with reduced local thickness. The finite element modelling confirmed the high stress concentrations at these weak points leading to the fracture of the sample. - Highlights: • CoCrMo samples fabricated by the Electron Beam Melting (EBM) process are considered. • X-ray Computed Tomography is used to observe the structure of the sample. • The mechanical properties are tested in an in situ test in the tomograph. • A finite element model is developed to model the mechanical behaviour.

  10. Finite-strain analysis of Metavolcano-sedimentary rocks at Gabel El Mayet area, Central Eastern Desert, Egypt

    Science.gov (United States)

    Kassem, Osama M. K.; Abd El Rahim, Said H.

    2010-09-01

    Finite strain was estimated in the metavolcano-sedimentary rocks, which are surrounded by serpentinites, of the Gabel El Mayet area. Finite strain shows a relationship to the nappe contacts between the metavolcano-sedimentary rocks and serpentinite and sheds light on the nature of the subhorizontal foliation typical of the Gabel El Mayet shear zone. We used the Rf/ϕ and Fry methods on feldspar porphyroclasts and mafic grains from 10 metasedimentary and six metavolcanic samples in the Gabel El Mayet region. Our finite-strain data show that the metavolcano-sedimentary rocks were moderately deformed, with axial ratios in the XZ section ranging from 1.9 to 3.9. The long axes of the finite-strain ellipsoids trend W/WNW in the north and W/WSW in the south of the Gabel El Mayet shear zone. Furthermore, the short axes are subvertical, normal to the subhorizontal foliation. The strain magnitudes increase towards the tectonic contacts between the metavolcano-sedimentary rocks and serpentinite. The data indicate oblate strain symmetry in the metavolcano-sedimentary rocks; hence, our strain data also indicate flattening strain. We assume that the metasedimentary and metavolcanic rocks have similar deformation behaviour. The fact that finite strain accumulated during the metamorphism indicates that the nappe contacts formed during the accumulation of finite strain and thus during thrusting. We conclude that the nappe contacts formed during progressive thrusting under brittle to semi-brittle deformation conditions by simple shear, involving a component of vertical shortening which caused the subhorizontal foliation in the Gabel El Mayet shear zone.

  11. Algebraic complexities and algebraic curves over finite fields.

    Science.gov (United States)

    Chudnovsky, D V; Chudnovsky, G V

    1987-04-01

    We consider the problem of minimal (multiplicative) complexity of polynomial multiplication and multiplication in finite extensions of fields. For infinite fields minimal complexities are known [Winograd, S. (1977) Math. Syst. Theory 10, 169-180]. We prove lower and upper bounds on minimal complexities over finite fields, both linear in the number of inputs, using the relationship with linear coding theory and algebraic curves over finite fields.
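    The notion of multiplicative complexity discussed here can be illustrated, in an elementary way that is not the Chudnovskys' curve-based construction, by Karatsuba's trick: two linear polynomials over a finite field F_p can be multiplied with three coefficient multiplications instead of the schoolbook four.

```python
# Karatsuba over F_p: (a0 + a1*x)(b0 + b1*x) with 3 multiplications
# instead of 4 -- an elementary example of reduced multiplicative
# complexity, not the algebraic-curve construction of the paper.
def karatsuba_linear(a, b, p):
    a0, a1 = a
    b0, b1 = b
    m0 = (a0 * b0) % p                    # multiplication 1
    m2 = (a1 * b1) % p                    # multiplication 2
    m1 = ((a0 + a1) * (b0 + b1)) % p      # multiplication 3
    # product coefficients (c0, c1, c2) of c0 + c1*x + c2*x^2
    return (m0, (m1 - m0 - m2) % p, m2)
```

Applied recursively, this idea gives sub-quadratic polynomial multiplication; the lower and upper bounds of the paper concern how far such savings can be pushed over a fixed finite field.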

  12. The Acute Effects of Interval-Type Exercise on Glycemic Control in Type 2 Diabetes Subjects: Importance of Interval Length. A Controlled, Counterbalanced, Crossover Study.

    Directory of Open Access Journals (Sweden)

    Ida Jakobsen

    Full Text Available Interval-type exercise is effective for improving glycemic control, but the optimal approach is unknown. The purpose of this study was to determine the importance of the interval length on changes in postprandial glycemic control following a single exercise bout. Twelve subjects with type 2 diabetes completed a cross-over study with three 1-hour interventions performed in a non-randomized but counter-balanced order: (1) interval walking consisting of repeated cycles of 3 min slow (aiming for 54% of peak oxygen consumption rate [VO2peak]) and 3 min fast (aiming for 89% of VO2peak) walking (IW3); (2) interval walking consisting of repeated cycles of 1 min slow and 1 min fast walking (IW1); and (3) no walking (CON). The exercise interventions were matched with regard to walking speed, and VO2 and heart rate were assessed throughout all interventions. A 4-hour liquid mixed meal tolerance test (MMTT) commenced 30 min after each intervention, with blood samples taken regularly. IW3 and IW1 resulted in comparable mean VO2 and heart rates. Overall mean postprandial blood glucose levels were lower after IW3 compared to CON (10.3±3.0 vs. 11.1±3.3 mmol/L; P < 0.05), but not after IW1 compared to CON (P > 0.05 for both). Conversely, blood glucose levels at specific time points during the MMTT differed significantly following both IW3 and IW1 as compared to CON. Our findings support the previously found blood glucose lowering effect of IW3 and suggest that reducing the interval length, while keeping the walking speed and time spent on fast and slow walking constant, does not result in additional improvements. ClinicalTrials.gov NCT02257190.

  13. On characters of finite groups

    CERN Document Server

    Broué, Michel

    2017-01-01

    This book explores the classical and beautiful character theory of finite groups. It does so by using some rudiments of the language of categories. Originally emerging from two courses offered at Peking University (PKU), primarily for third-year students, it is now better suited for graduate courses, and provides broader coverage than books that focus almost exclusively on groups. The book presents the basic tools, notions and theorems of character theory (including a new treatment of the control of fusion and isometries), and introduces readers to the categorical language at several levels. It includes and proves the major results on characteristic zero representations without any assumptions about the base field. The book includes a dedicated chapter on graded representations and applications of polynomial invariants of finite groups, and its closing chapter addresses the more recent notion of the Drinfeld double of a finite group and the corresponding representation of GL_2(Z).

  14. Finite and profinite quantum systems

    CERN Document Server

    Vourdas, Apostolos

    2017-01-01

    This monograph provides an introduction to finite quantum systems, a field at the interface between quantum information and number theory, with applications in quantum computation and condensed matter physics. The first major part of this monograph studies the so-called `qubits' and `qudits', systems with periodic finite lattice as position space. It also discusses the so-called mutually unbiased bases, which have applications in quantum information and quantum cryptography. Quantum logic and its applications to quantum gates is also studied. The second part studies finite quantum systems, where the position takes values in a Galois field. This combines quantum mechanics with Galois theory. The third part extends the discussion to quantum systems with variables in profinite groups, considering the limit where the dimension of the system becomes very large. It uses the concepts of inverse and direct limit and studies quantum mechanics on p-adic numbers. Applications of the formalism include quantum optics and ...

  15. Compton scattering at finite temperature: thermal field dynamics approach

    International Nuclear Information System (INIS)

    Juraev, F.I.

    2006-01-01

    Full text: Compton scattering is a classical problem of quantum electrodynamics and has been studied since the early days of the theory. Perturbation theory and the Feynman diagram technique enable a comprehensive analysis of this problem, on the basis of which the famous Klein-Nishina formula is obtained [1, 2]. In this work the problem is extended to the case of finite temperature. Finite-temperature effects in Compton scattering are of practical importance for various processes in relativistic thermal plasmas in astrophysics. Recently the Compton effect has been explored using the closed-time-path formalism, with temperature corrections estimated [3]. It was found that the thermal cross section can be larger than the zero-temperature one by several orders of magnitude for the high temperatures realistic in astrophysics [3]. In our work the main tool used to account for finite-temperature effects is real-time finite-temperature quantum field theory, so-called thermofield dynamics [4, 5]. Thermofield dynamics is a canonical formalism for exploring field-theoretical processes at finite temperature. It consists of two steps: doubling of the Fock space and Bogolyubov transformations. The doubling leads to the appearance of additional degrees of freedom, called tilded operators, which together with the usual field operators form the so-called thermal doublet. The Bogolyubov transformations make the field operators temperature-dependent. Using this formalism we treat Compton scattering at finite temperature by replacing the zero-temperature propagators in the transition amplitude with finite-temperature ones. As a result a finite-temperature extension of the Klein-Nishina formula is obtained, in which the differential cross section is represented as a sum of the zero-temperature cross section and a finite-temperature correction. The obtained result could be useful in the quantum electrodynamics of lasers and for relativistic thermal plasma processes in astrophysics, where a correct account of finite-temperature effects is important. (author)
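    The zero-temperature baseline that the finite-temperature correction is added to is the Klein-Nishina differential cross section, dσ/dΩ = (r_e²/2)(k'/k)²(k/k' + k'/k − sin²θ) with k'/k = 1/(1 + ε(1 − cos θ)), ε = E_γ/m_e c². A quick numerical check (a sketch, not part of the paper) is that the total cross section reproduces the Thomson value in the low-energy limit:

```python
import numpy as np

# Klein-Nishina total cross section by numerical quadrature over the
# scattering angle; at low photon energy it must reduce to the Thomson
# cross section sigma_T = (8*pi/3) * r_e^2.
R_E = 2.8179403262e-15                     # classical electron radius [m]

def klein_nishina_total(eps, n=20001):
    """Total cross section for photon energy eps = E_gamma / (m_e c^2)."""
    theta = np.linspace(0.0, np.pi, n)
    ratio = 1.0 / (1.0 + eps * (1.0 - np.cos(theta)))           # k'/k
    dsig = 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio
                                      - np.sin(theta)**2)       # dsigma/dOmega
    integrand = dsig * 2.0 * np.pi * np.sin(theta)
    # trapezoidal rule over theta
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta)))

sigma_thomson = 8.0 * np.pi / 3.0 * R_E**2  # ~6.65e-29 m^2
```

At relativistic photon energies the total cross section falls well below the Thomson value, the familiar Klein-Nishina suppression.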

  16. Experimental and finite element analyses of plastic deformation behavior in vortex extrusion

    International Nuclear Information System (INIS)

    Shahbaz, M.; Pardis, N.; Kim, J.G.; Ebrahimi, R.; Kim, H.S.

    2016-01-01

    Vortex extrusion (VE) is a single-pass severe plastic deformation (SPD) technique which can impose high strain values with almost uniform distribution within the cross section of the processed material. This technique needs no additional facilities for installation on conventional extrusion equipment. In this study the deformation behavior of material during VE is investigated and the results are compared with those of conventional extrusion (CE). These investigations include finite element analysis, visioplasticity, and microstructural characterization of the processed samples. The results indicate that the VE process can accumulate a higher strain value by applying an additional torsional deformation. The role of this additional deformation mode in the microstructural evolution of the VE sample is discussed and compared with the results obtained on the CE samples.

  17. Finiteness of Lorentzian 10j symbols and partition functions

    International Nuclear Information System (INIS)

    Christensen, J Daniel

    2006-01-01

    We give a short and simple proof that the Lorentzian 10j symbol, which forms a key part of the Barrett-Crane model of Lorentzian quantum gravity, is finite. The argument is very general, and applies to other integrals. For example, we show that the Lorentzian and Riemannian causal 10j symbols are finite, despite their singularities. Moreover, we show that integrals that arise in Cherrington's work are finite. Cherrington has shown that this implies that the Lorentzian partition function for a single triangulation is finite, even for degenerate triangulations. Finally, we also show how to use these methods to prove finiteness of integrals based on other graphs and other homogeneous domains.

  18. Surgery simulation using fast finite elements

    DEFF Research Database (Denmark)

    Bro-Nielsen, Morten

    1996-01-01

    This paper describes our recent work on real-time surgery simulation using fast finite element models of linear elasticity. In addition, we discuss various improvements in terms of speed and realism.

  19. FEMWATER: a finite-element model of water flow through saturated-unsaturated porous media

    International Nuclear Information System (INIS)

    Yeh, G.T.; Ward, D.S.

    1980-10-01

    Upon examining Water Movement Through Saturated-Unsaturated Porous Media: A Finite-Element Galerkin Model, it was felt that the model should be modified and expanded. The modification is made in calculating the flow field in a manner consistent with the finite element approach, in evaluating the rate of increase of moisture content within the region of interest, and in numerically computing the nonlinear terms. With these modifications, the flow field is continuous everywhere in the flow regime, including element boundaries and nodal points, and the mass loss through boundaries is much reduced. The expansion adds four numerical schemes which would be more appropriate for many situations. Also, to save computer storage, all arrays pertaining to the boundary condition information are compressed to smaller dimensions, and to ease the treatment of different problems, all arrays are variably dimensioned in all subroutines. This report is intended to document these efforts. In addition, in the derivation of the finite-element equations, matrix component representation is used, which is believed to be more readable than the full matrix representation. Two identical sample problems are simulated to show the difference between the original and revised models

  20. The finite volume element (FVE) and multigrid method for the incompressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Gu Lizhen; Bao Weizhu

    1992-01-01

    The authors apply the FVE method to discretize the INS equations in the original variables, choosing the bilinear square finite element and the square finite volume. The discrete schemes of the INS equations are presented. The FMV multigrid algorithm is applied to solve the discrete system, with DGS iteration used as the smoother; the DGS distributive mode for the INS discrete system is also presented. Sample problems for the square cavity flow with Reynolds number Re ≤ 100 are successfully calculated. The numerical solutions show that the results with one FMV cycle are satisfactory, and that when Re is not large, the FVE discrete scheme of the conservative INS equations and that of the non-conservative INS equations with linearization both provide almost the same accuracy.
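    The multigrid idea used above (smoothing on the fine grid plus a coarse-grid correction) can be sketched on a much simpler model problem. The following is a two-grid cycle for the 1-D Poisson equation with Gauss-Seidel smoothing and an exact coarse solve; it is a generic illustration, not the FMV/DGS machinery for the INS system.

```python
import numpy as np

# Two-grid cycle for -u'' = f on (0, 1), u(0) = u(1) = 0:
# Gauss-Seidel smoothing, full-weighting restriction, exact coarse
# solve, linear interpolation.  A minimal stand-in for FMV/DGS.
def gauss_seidel(u, f, h, sweeps=2):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    gauss_seidel(u, f, h)                        # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) - 1) // 2 + 1)         # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    m, H = len(rc) - 2, 2 * h
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / (H * H)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])      # exact coarse correction
    ef = np.zeros_like(u)                        # linear interpolation
    ef[::2] = ec
    ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += ef
    gauss_seidel(u, f, h)                        # post-smoothing

n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
res_history = [np.linalg.norm(residual(u, f, h))]
for _ in range(5):
    two_grid_cycle(u, f, h)
    res_history.append(np.linalg.norm(residual(u, f, h)))
```

The residual norm drops by roughly an order of magnitude per cycle, mesh-independently, which is the property that makes FMV-type cycles attractive for the discrete INS system.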

  1. CLSI-based transference and verification of CALIPER pediatric reference intervals for 29 Ortho VITROS 5600 chemistry assays.

    Science.gov (United States)

    Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow

    2018-03-01

    Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays for the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes transferred successfully from Abbott to Ortho assays. Calcium and CO2 did not meet the statistical criteria for transference (r²); of the transferred reference intervals, 29 successfully verified with approximately 90% of results from reference samples falling within the transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population, as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
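    The CLSI-style transference described above amounts to fitting a line of best fit between paired measurements on the two platforms, mapping the RI limits through that line, and then checking that roughly 90% or more of healthy reference samples fall inside the transferred limits. A schematic sketch with synthetic data follows; no CALIPER values or assay names are reproduced, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic method comparison: new platform reads ~1.05x + 0.2 vs. old.
old = rng.uniform(1.0, 3.0, 200)                    # ~200 patient samples
new = 1.05 * old + 0.2 + rng.normal(0.0, 0.02, 200)

slope, intercept = np.polyfit(old, new, 1)          # line of best fit

def transfer_interval(lo, hi):
    """Map reference limits from the old platform to the new one."""
    return slope * lo + intercept, slope * hi + intercept

lo_new, hi_new = transfer_interval(1.2, 2.8)        # illustrative RI

# Verification step: >= 90% of healthy reference samples (n = 84 in the
# study) should fall within the transferred limits.
healthy = 1.05 * rng.uniform(1.25, 2.75, 84) + 0.2
inside = np.mean((healthy >= lo_new) & (healthy <= hi_new))
```

In practice the appropriateness of the linear model (e.g. r², residual pattern) is assessed before the transfer is accepted, which is why assays like calcium and CO2 can fail transference.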

  2. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be
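    The spacing definition used above, the distance between the midpoints of consecutive flowing intervals identified in a borehole flow-meter survey, is straightforward to state in code. The depths below are illustrative, not survey data from the report.

```python
# Flowing-interval spacing: distance between the midpoints of
# consecutive flowing intervals (top, bottom) along a borehole.
# Depths are hypothetical, in metres.
def interval_spacings(intervals):
    mids = [0.5 * (top + bottom) for top, bottom in sorted(intervals)]
    return [b - a for a, b in zip(mids, mids[1:])]

flowing = [(410.0, 418.0), (431.0, 433.0), (460.0, 470.0)]
# midpoints 414.0, 432.0, 465.0 -> spacings [18.0, 33.0]
```

Collecting such spacings over many boreholes yields the sample from which the uncertainty distribution for the flowing interval spacing parameter is developed.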

  3. Finite-size scaling of the entanglement entropy of the quantum Ising chain with homogeneous, periodically modulated and random couplings

    International Nuclear Information System (INIS)

    Iglói, Ferenc; Lin, Yu-Cheng

    2008-01-01

    Using free-fermionic techniques we study the entanglement entropy of a block of contiguous spins in a large finite quantum Ising chain in a transverse field, with couplings of different types: homogeneous, periodically modulated and random. We carry out a systematic study of finite-size effects at the quantum critical point, and evaluate subleading corrections both for open and for periodic boundary conditions. For a block corresponding to a half of a finite chain, the position of the maximum of the entropy as a function of the control parameter (e.g. the transverse field) can define the effective critical point in the finite sample. On the basis of homogeneous chains, we demonstrate that the scaling behavior of the entropy near the quantum phase transition is in agreement with the universality hypothesis, and calculate the shift of the effective critical point, which has different scaling behaviors for open and for periodic boundary conditions
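    The quantity studied above, the entanglement entropy of a block in a finite transverse-field Ising chain, can be computed by brute-force exact diagonalization for small chains. The sketch below uses dense diagonalization rather than the paper's free-fermionic techniques, and simply illustrates that the half-chain entropy is large near the critical coupling and small deep in the polarized phase; the chain length and field values are illustrative.

```python
import numpy as np

# Half-chain entanglement entropy of the open transverse-field Ising
# chain H = -sum sx_i sx_{i+1} - h * sum sz_i, by exact diagonalization
# (small-N stand-in for the free-fermion method of the paper).
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site, n):
    mats = [np.eye(2)] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def half_chain_entropy(n, h):
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= op(sx, i, n) @ op(sx, i + 1, n)
    for i in range(n):
        H -= h * op(sz, i, n)
    w, v = np.linalg.eigh(H)
    psi = v[:, 0].reshape(2**(n // 2), 2**(n // 2))  # left block = sites 0..n/2-1
    s = np.linalg.svd(psi, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))             # von Neumann entropy

entropies = {h: half_chain_entropy(8, h) for h in (0.2, 1.0, 3.0)}
```

For a finite chain the entropy, viewed as a function of the control parameter h, has a maximum whose position defines the effective critical point discussed in the paper; locating it precisely requires the larger systems accessible with free-fermion techniques.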

  4. Discrete-ordinates finite-element method for atmospheric radiative transfer and remote sensing

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.; Zardecki, A.

    1985-01-01

    Advantages and disadvantages of modern discrete-ordinates finite-element methods for the solution of radiative transfer problems in meteorology, climatology, and remote sensing applications are evaluated. After the common basis of the formulation of radiative transfer problems in the fields of neutron transport and atmospheric optics is established, the essential features of the discrete-ordinates finite-element method are described, including the limitations of the method and their remedies. Numerical results are presented for 1-D and 2-D atmospheric radiative transfer problems where integral as well as angular-dependent quantities are compared with published results from other calculations and with measured data. These comparisons provide a verification of the discrete-ordinates results for a wide spectrum of cases with varying degrees of absorption, scattering, and anisotropic phase functions. Accuracy and computational speed are also discussed. Since practically all discrete-ordinates codes offer a built-in adjoint capability, the general concept of the adjoint method is described and illustrated by sample problems. Our general conclusion is that the strengths of the discrete-ordinates finite-element method outweigh its weaknesses. We demonstrate that existing general-purpose discrete-ordinates codes can provide a powerful tool to analyze radiative transfer problems through the atmosphere, especially when 2-D geometries must be considered
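    The discrete-ordinates idea, replacing the angular integral by a quadrature and sweeping each discrete direction, is easy to demonstrate on a 1-D slab. The sketch below is a toy S_N solver with isotropic scattering, diamond differencing, and source iteration; it is a generic illustration of the method, not the finite-element variant evaluated in the paper, and all parameter values are illustrative.

```python
import numpy as np

# Minimal 1-D discrete-ordinates (S_N) slab solver: Gauss-Legendre
# angular quadrature, diamond differencing, source iteration,
# isotropic scattering, vacuum boundaries.
def sn_slab(n_cells=50, width=5.0, sig_t=1.0, sig_s=0.5, q=1.0, n_ang=8):
    dx = width / n_cells
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # directions, weights
    phi = np.zeros(n_cells)                          # scalar flux
    for _ in range(200):                             # source iteration
        src = 0.5 * (sig_s * phi + q)                # isotropic source
        phi = np.zeros(n_cells)
        for m, wt in zip(mu, w):
            a = abs(m) / dx
            psi_in = 0.0                             # vacuum boundary
            cells = range(n_cells) if m > 0 else range(n_cells - 1, -1, -1)
            for i in cells:
                # diamond difference: psi_cell = (psi_in + psi_out) / 2
                psi_out = (src[i] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
                phi[i] += wt * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
    return phi

# Scalar flux; bounded above by the infinite-medium value
# q / (sig_t - sig_s) = 2 and depressed near the vacuum boundaries.
phi = sn_slab()
```

Production codes add acceleration of the source iteration and, as in the paper, higher-order spatial discretizations such as finite elements.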

  5. Modes in a nonneutral plasma column of finite length

    International Nuclear Information System (INIS)

    Rasband, S. Neil; Spencer, Ross L.

    2002-01-01

    A Galerkin, finite-element, nonuniform-mesh computation of the mode equation for waves in a non-neutral plasma of finite length in a cold-fluid model gives an accurate calculation of the mode eigenfrequencies and eigenfunctions. We report on studies of the following: (1) finite-length Trivelpiece-Gould modes with flat-top and realistic density profiles, and (2) finite-length diocotron modes with flat density profiles. We compare with the frequency equation of Fine and Driscoll [Phys. Plasmas 5, 601 (1998)].
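    The Galerkin finite-element eigenmode computation on a nonuniform mesh can be illustrated on a 1-D model problem: discretizing −u'' = λu on (0, π) with linear elements yields a generalized eigenproblem K v = λ M v whose eigenvalues approximate n² = 1, 4, 9, … This toy analogue (not the plasma mode equation of the paper) uses a randomly nonuniform mesh:

```python
import numpy as np

# Galerkin linear finite elements on a nonuniform mesh for the model
# eigenproblem -u'' = lambda * u on (0, pi), u(0) = u(pi) = 0.
# Discrete form: K v = lambda * M v; exact eigenvalues are n^2.
rng = np.random.default_rng(1)
nodes = np.sort(np.concatenate(([0.0, np.pi], np.pi * rng.random(80))))
n = len(nodes)
K = np.zeros((n, n))
M = np.zeros((n, n))
for e in range(n - 1):                               # element assembly
    h = nodes[e + 1] - nodes[e]
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h       # stiffness
    M[idx] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])   # consistent mass
Ki, Mi = K[1:-1, 1:-1], M[1:-1, 1:-1]                # Dirichlet BCs
L = np.linalg.cholesky(Mi)                           # reduce to standard form
Linv = np.linalg.inv(L)
lam = np.sort(np.linalg.eigvalsh(Linv @ Ki @ Linv.T))
```

The same assemble-and-solve structure carries over to the plasma mode equation, with the physics entering through the coefficient functions and the equilibrium density profile.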

  6. Examining the cost efficiency of Chinese hydroelectric companies using a finite mixture model

    International Nuclear Information System (INIS)

    Barros, Carlos Pestana; Chen, Zhongfei; Managi, Shunsuke; Antunes, Olinda Sequeira

    2013-01-01

    This paper evaluates the operational activities of Chinese hydroelectric power companies over the period 2000–2010 using a finite mixture model that controls for unobserved heterogeneity. In so doing, a stochastic frontier latent class model, which allows for the existence of different technologies, is adopted to estimate cost frontiers. This procedure not only enables us to identify different groups among the hydro-power companies analysed, but also permits the analysis of their cost efficiency. The main result is that three groups are identified in the sample, each equipped with a different technology, suggesting that distinct business strategies need to be adapted to the characteristics of China's hydro-power companies. Some managerial implications are developed. - Highlights: ► This paper evaluates the operational activities of Chinese hydroelectric power companies. ► The study uses data from 2000 to 2010 and a finite mixture model. ► The procedure identifies different groups among the hydro-power companies analysed. ► Three groups are identified in the sample, each equipped with a completely different technology. ► This suggests that distinct business strategies need to be adapted to the characteristics of the hydro-power companies.

  7. FINITE ELEMENT ANALYSIS OF STRUCTURES

    Directory of Open Access Journals (Sweden)

    PECINGINA OLIMPIA-MIOARA

    2015-05-01

    Full Text Available The finite element method is applied when analytical solutions cannot be used for deeper analysis of static, dynamic, or other types of loading at different points of a structure. In practice it is necessary to know the behavior of the structure, or of certain component parts of a machine, under the influence of static and dynamic factors. The application of finite elements in the optimization of components leads to economic gains and increases the reliability and durability of the parts studied, and thus of the machine itself.

  8. ANSYS mechanical APDL for finite element analysis

    CERN Document Server

    Thompson, Mary Kathryn

    2017-01-01

    ANSYS Mechanical APDL for Finite Element Analysis provides a hands-on introduction to engineering analysis using one of the most powerful commercial general-purpose finite element programs on the market. Students will find a practical and integrated approach that combines finite element theory with best practices for developing, verifying, validating and interpreting the results of finite element models, while engineering professionals will appreciate the deep insight presented on the program's structure and behavior. Additional topics covered include an introduction to commands, input files, batch processing, and other advanced features in ANSYS. The book is written in a lecture/lab style, and each topic is supported by examples, exercises and suggestions for additional readings in the program documentation. Exercises gradually increase in difficulty and complexity, helping readers quickly gain confidence to independently use the program. This provides a solid foundation on which to build, preparing readers...

  9. The finite element method in engineering, 2nd edition

    International Nuclear Information System (INIS)

    Rao, S.S.

    1986-01-01

    This work provides a systematic introduction to the various aspects of the finite element method as applied to engineering problems. Contents include: introduction to finite element method; solution of finite element equations; solid and structural mechanics; static analysis; dynamic analysis; heat transfer; fluid mechanics and additional applications

  10. A combined finite volume-nonconforming finite element scheme for compressible two phase flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed; Saad, Mazen Naufal B M

    2014-01-01

    We propose and analyze a combined finite volume-nonconforming finite element scheme on general meshes to simulate compressible two-phase flow in porous media. The diffusion term, which can be anisotropic and heterogeneous, is discretized by piecewise linear nonconforming triangular finite elements. The other terms are discretized by means of a cell-centered finite volume scheme on a dual mesh, where the dual volumes are constructed around the sides of the original mesh. The relative permeability of each phase is decentred according to the sign of the velocity at the dual interface. This technique also ensures the validity of the discrete maximum principle for the saturation under a nonrestrictive shape regularity of the space mesh and the positiveness of all transmissibilities. Next, a priori estimates on the pressures and on a function of the saturation that denotes the capillary terms are established. These stability results lead to some compactness arguments based on the use of the Kolmogorov compactness theorem, and allow us to derive the convergence of a subsequence of the sequence of approximate solutions to a weak solution of the continuous equations, provided the mesh size tends to zero. The proof is given for the complete system when the density of each phase depends on its own pressure. © 2014 Springer-Verlag Berlin Heidelberg.
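
    The "decentred" (upwind) choice of relative permeability described above can be sketched in a few lines. This is a hedged illustration of the standard upwind rule at a dual-mesh interface, not the authors' full scheme; the function and argument names are hypothetical:

```python
def upwind_relative_permeability(kr_K, kr_L, v_KL):
    """Phase upwinding at a dual-mesh interface: the relative
    permeability is taken from the upstream cell according to the
    sign of the phase velocity v_KL, oriented from cell K toward
    cell L (a sketch of the standard upwind choice the abstract
    describes, not the paper's exact discretization)."""
    return kr_K if v_KL >= 0 else kr_L
```

    Taking the upstream value is what keeps the discrete saturation within physical bounds, which is the discrete maximum principle the scheme relies on.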

  11. A combined finite volume-nonconforming finite element scheme for compressible two phase flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2014-06-28

    We propose and analyze a combined finite volume-nonconforming finite element scheme on general meshes to simulate compressible two-phase flow in porous media. The diffusion term, which can be anisotropic and heterogeneous, is discretized by piecewise linear nonconforming triangular finite elements. The other terms are discretized by means of a cell-centered finite volume scheme on a dual mesh, where the dual volumes are constructed around the sides of the original mesh. The relative permeability of each phase is decentred according to the sign of the velocity at the dual interface. This technique also ensures the validity of the discrete maximum principle for the saturation under a nonrestrictive shape regularity of the space mesh and the positiveness of all transmissibilities. Next, a priori estimates on the pressures and on a function of the saturation that denotes the capillary terms are established. These stability results lead to some compactness arguments based on the use of the Kolmogorov compactness theorem, and allow us to derive the convergence of a subsequence of the sequence of approximate solutions to a weak solution of the continuous equations, provided the mesh size tends to zero. The proof is given for the complete system when the density of each phase depends on its own pressure. © 2014 Springer-Verlag Berlin Heidelberg.

  12. Finite difference techniques for nonlinear hyperbolic conservation laws

    International Nuclear Information System (INIS)

    Sanders, R.

    1985-01-01

    The present study is concerned with numerical approximations to the initial value problem for nonlinear systems of conservation laws. Attention is given to the development of a class of conservation form finite difference schemes which are based on the finite volume method (i.e., the method of averages). These schemes do not fit into the classical framework of conservation form schemes discussed by Lax and Wendroff (1960). The finite volume schemes are specifically intended to approximate solutions of multidimensional problems in the absence of rectangular geometries. In addition, different schemes which utilize the finite volume approach for time discretization are developed. Particular attention is given to local time discretization and moving spatial grids. 17 references

  13. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither magnitude nor direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is further shown to provide unbiased point and variance estimates as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
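
    The STRS point and variance estimators the comparison rests on can be sketched as follows. This is a generic textbook stratified estimator of total passage, assuming hour-long strata of 60 one-minute sampling units; the function name and the numbers in the usage note are illustrative, not the study's data:

```python
def strs_estimate(strata_samples, minutes_per_stratum=60):
    """Stratified random sampling (STRS) estimate of total fish passage.

    strata_samples: one list of sampled 1-min fish counts per hour (stratum).
    Returns (total_estimate, variance_of_estimate) using the standard
    expansion estimator with finite population correction per stratum.
    """
    total, var = 0.0, 0.0
    N = minutes_per_stratum
    for counts in strata_samples:
        n = len(counts)
        mean = sum(counts) / n
        s2 = sum((c - mean) ** 2 for c in counts) / (n - 1)
        total += N * mean              # expand the sample mean to the hour
        var += N * (N - n) * s2 / n    # per-stratum variance contribution
    return total, var
```

    For example, two hours sampled with counts [2, 4, 6] and [1, 1, 1] give an estimated total of 300 fish; a parametric confidence interval then follows from the variance in the usual way.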

  14. Geometric Least Square Models for Deriving [0,1]-Valued Interval Weights from Interval Fuzzy Preference Relations Based on Multiplicative Transitivity

    Directory of Open Access Journals (Sweden)

    Xuan Yang

    2015-01-01

    Full Text Available This paper presents a geometric least square framework for deriving [0,1]-valued interval weights from interval fuzzy preference relations. By analyzing the relationship among [0,1]-valued interval weights, multiplicatively consistent interval judgments, and planes, a geometric least square model is developed to derive a normalized [0,1]-valued interval weight vector from an interval fuzzy preference relation. Based on the difference ratio between two interval fuzzy preference relations, a geometric average difference ratio between one interval fuzzy preference relation and the others is defined and employed to determine the relative importance weights for individual interval fuzzy preference relations. A geometric least square based approach is further put forward for solving group decision making problems. An individual decision numerical example and a group decision making problem with the selection of enterprise resource planning software products are furnished to illustrate the effectiveness and applicability of the proposed models.

  15. Using Finite Element Method

    Directory of Open Access Journals (Sweden)

    M.H.R. Ghoreishy

    2008-02-01

    Full Text Available This research work is devoted to the footprint analysis of a steel-belted radial tyre (185/65R14 under vertical static load using the finite element method. Two models have been developed: in the first model the tread patterns were replaced by simple ribs, while the second model included the details of the tread blocks. Linear elastic and hyperelastic (Arruda-Boyce) material models were selected to describe the mechanical behavior of the reinforcing and rubbery parts, respectively. The above two finite element models of the tyre were analyzed under inflation pressure and vertical static loads. The second model (with detailed tread patterns was analyzed with and without friction effect between tread and contact surfaces. In every stage of the analysis, the results were compared with the experimental data to confirm the accuracy and applicability of the model. Results showed that neglecting the tread pattern design not only reduces the computational cost and effort, but also the differences between computed deformations do not show significant changes. However, more complicated variables such as the shape and area of the footprint zone and the contact pressure are affected considerably by the finite element model selected for the tread blocks. In addition, the inclusion of friction, even in the static state, changes these variables significantly.

  16. Finiteness in Jordanian Arabic: A Semantic and Morphosyntactic Approach

    Science.gov (United States)

    Al-Aqarbeh, Rania

    2011-01-01

    Previous research on finiteness has been dominated by the studies in tensed languages, e.g. English. Consequently, finiteness has been identified with tense. The traditional definition influences the morphological, semantic, and syntactic characterization of finiteness which has also been equated with tense and its realization. The present study…

  17. Finite anticanonical transformations in field-antifield formalism

    Energy Technology Data Exchange (ETDEWEB)

    Batalin, Igor A.; Tyutin, Igor V. [P.N. Lebedev Physical Institute, Moscow (Russian Federation); Tomsk State Pedagogical University, Tomsk (Russian Federation); Lavrov, Peter M. [Tomsk State Pedagogical University, Tomsk (Russian Federation); National Research Tomsk State University, Tomsk (Russian Federation)

    2015-06-15

    We study the role of arbitrary (finite) anticanonical transformations in the field-antifield formalism and the gauge-fixing procedure based on the use of these transformations. The properties of the generating functionals of the Green functions subjected to finite anticanonical transformations are considered. (orig.)

  18. Delay-Dependent Guaranteed Cost Control of an Interval System with Interval Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Xiao Min

    2009-01-01

    Full Text Available This paper concerns the problem of delay-dependent robust stability and guaranteed cost control for an interval system with time-varying delay. The interval system with matrix factorization is provided and leads to less conservative conclusions than solving a square root. The time-varying delay is assumed to belong to an interval, and the derivative of the interval time-varying delay is not restricted, which allows a fast time-varying delay; its applicability is thus broad. Based on the Lyapunov-Krasovskii approach, a delay-dependent criterion for the existence of a state feedback controller, which guarantees the closed-loop system stability, the upper bound of the cost function, and the disturbance attenuation level for all admissible uncertainties as well as external perturbations, is proposed in terms of linear matrix inequalities (LMIs). The criterion is derived using free weighting matrices that can reduce the conservatism. The effectiveness has been verified in a numerical example, and the computed results are presented to validate the proposed design method.

  19. Universal conditions for finite renormalizable quantum field theories

    International Nuclear Information System (INIS)

    Kranner, G.

    1990-10-01

    Analyzing general renormalization constants in covariant gauge and minimal subtraction, we consider universal conditions for cancelling UV-divergences in renormalizable field theories with simple gauge groups, and give constructive methods for finding nonsupersymmetric finite models. The divergent parts of the renormalization constants for fields explicitly depend on the gauge parameter ξ. Finite theories simply need finite couplings. We show that the respective finiteness conditions (FCs) imply a hierarchy, at the center of which are the FCs for the gauge coupling g and the Yukawa couplings of the massless theory. To gain more information about F we analyze the Yukawa-FC in greater detail. Doing so algebraically, we find and fix all inner symmetries. Additionally, Yukawa couplings must be invariant under gauge transformations. Then it becomes extremely difficult to obey a FC, yield rational numbers for F ∼ 1, and satisfy the factorization condition, unless F = 1. The particular structure of the F = 1 system allows for a most general ansatz. We work out the simplest case, obtaining precisely the couplings and particle content of a general N=1-supersymmetric theory. We list a class of roughly 4000 types of theories, containing all supersymmetric, completely finite, and many more finite theories as well. (Author, shortened by Quittner) 11 figs., 54 refs

  20. Implicit finite-difference simulations of seismic wave propagation

    KAUST Repository

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    We propose a new finite-difference modeling method, implicit both in space and in time, for the scalar wave equation. We use a three-level implicit splitting time integration method for the temporal derivative and implicit finite-difference operators of arbitrary order for the spatial derivatives. Both the implicit splitting time integration method and the implicit spatial finite-difference operators require solving systems of linear equations. We show that it is possible to merge these two sets of linear systems, one from implicit temporal discretizations and the other from implicit spatial discretizations, to reduce the amount of computations to develop a highly efficient and accurate seismic modeling algorithm. We give the complete derivations of the implicit splitting time integration method and the implicit spatial finite-difference operators, and present the resulting discretized formulas for the scalar wave equation. We conduct a thorough numerical analysis on grid dispersions of this new implicit modeling method. We show that implicit spatial finite-difference operators greatly improve the accuracy of the implicit splitting time integration simulation results with only a slight increase in computational time, compared with explicit spatial finite-difference operators. We further verify this conclusion by both 2D and 3D numerical examples. © 2012 Society of Exploration Geophysicists.
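
    Both the implicit time integration and the implicit spatial operators described above reduce to solving banded linear systems at every step. The workhorse for the tridiagonal case is the classic Thomas algorithm, which solves the system in O(n); the sketch below is generic numerical machinery, not the authors' merged solver:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (lists of equal length n;
    a[0] and c[-1] are ignored). Returns the solution vector x.
    Assumes the system is diagonally dominant, as typical implicit
    finite-difference discretizations are."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]                      # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                            # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    Merging the temporal and spatial implicit systems, as the paper proposes, means only one such banded solve is needed per step instead of two.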

  1. Topics in quantum field theories at finite temperature

    International Nuclear Information System (INIS)

    Kao, Y.C.

    1985-01-01

    Studies on four topics in quantum field theories at finite temperature are presented in this thesis. In Chapter 1, it is shown that the chiral anomaly has no finite temperature corrections by Fujikawa's path integral approach. Chapter 2 deals with the chiral condensate in the finite temperature Schwinger model. The cluster decomposition property is employed to find the chiral condensate. No finite critical temperature is found and the chiral condensate vanishes only at infinite temperature. In Chapter 3, the finite temperature behavior of the fermion-number breaking (Rubakov-Callan) condensate around a 't Hooft-Polyakov monopole is studied. It is found that the Rubakov-Callan condensate is suppressed exponentially from the monopole core at high temperature. The limitation of the techniques in understanding the behavior of the condensate at all temperatures is also discussed. Chapter 4 is on the topological mass terms in (2 + 1)-dimensional gauge theories. The author finds that if the gauge bosons have no topological mass at tree level, no topological mass is induced radiatively up to two-loop order in either Abelian or non-Abelian theories with massive fermions. The Pauli-Villars regularization is used for fermion loops. The one-loop contributions to the topological mass terms at finite temperature are calculated and the quantization constraints in this case are discussed

  2. Implicit finite-difference simulations of seismic wave propagation

    KAUST Repository

    Chu, Chunlei

    2012-03-01

    We propose a new finite-difference modeling method, implicit both in space and in time, for the scalar wave equation. We use a three-level implicit splitting time integration method for the temporal derivative and implicit finite-difference operators of arbitrary order for the spatial derivatives. Both the implicit splitting time integration method and the implicit spatial finite-difference operators require solving systems of linear equations. We show that it is possible to merge these two sets of linear systems, one from implicit temporal discretizations and the other from implicit spatial discretizations, to reduce the amount of computations to develop a highly efficient and accurate seismic modeling algorithm. We give the complete derivations of the implicit splitting time integration method and the implicit spatial finite-difference operators, and present the resulting discretized formulas for the scalar wave equation. We conduct a thorough numerical analysis on grid dispersions of this new implicit modeling method. We show that implicit spatial finite-difference operators greatly improve the accuracy of the implicit splitting time integration simulation results with only a slight increase in computational time, compared with explicit spatial finite-difference operators. We further verify this conclusion by both 2D and 3D numerical examples. © 2012 Society of Exploration Geophysicists.

  3. RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION

    Directory of Open Access Journals (Sweden)

    Archana V

    2011-04-01

    Full Text Available The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those results are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real life dataset. The ratio estimator using the information on the population C.V. of the auxiliary variable emerges as the best estimator
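
    One generic ratio-type estimator of this kind rescales the sample C.V. of the study variable by the known population C.V. of the auxiliary variable. This is a hedged illustration of the general idea, not necessarily one of the six estimators the paper proposes:

```python
import statistics

def ratio_cv_estimator(y_sample, x_sample, cv_x_pop):
    """Ratio-type estimator of the population C.V. of the study
    variable y: the sample C.V. of y is rescaled by the ratio of the
    known population C.V. of the auxiliary variable x to its sample
    C.V. (a textbook-style ratio adjustment, names hypothetical)."""
    cv_y = statistics.stdev(y_sample) / statistics.mean(y_sample)
    cv_x = statistics.stdev(x_sample) / statistics.mean(x_sample)
    return cv_y * (cv_x_pop / cv_x)
```

    The adjustment helps when the sample C.V.s of y and x are positively correlated, which is the same intuition as the classical ratio estimator of a mean.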

  4. Polyelectrolyte Bundles: Finite size at thermodynamic equilibrium?

    Science.gov (United States)

    Sayar, Mehmet

    2005-03-01

    Experimental observation of finite size aggregates formed by polyelectrolytes such as DNA and F-actin, as well as synthetic polymers like poly(p-phenylene), has attracted considerable attention in recent years. Here, bundle formation in rigid rod-like polyelectrolytes is studied via computer simulations. For the case of hydrophobically modified polyelectrolytes, finite size bundles are observed even in the presence of only monovalent counterions. Furthermore, in the absence of a hydrophobic backbone, we have also observed formation of finite size aggregates via multivalent counterion condensation. The size distribution and stability of such aggregates are analyzed in this study.

  5. A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems

    NARCIS (Netherlands)

    Bobiti, R.V.; Lazar, M.

    2016-01-01

    The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the
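
    The core check the abstract describes, verifying a Lyapunov-type inequality at finitely many sampled states, can be sketched as below. The helper names are hypothetical, and passing the sampled check is only a necessary step; certifying the whole set additionally needs the continuity argument developed in the paper:

```python
def check_lyapunov_on_samples(f, V, grad_V, samples, margin=1e-6):
    """Verify V(x) > 0 and <grad V(x), f(x)> < -margin at each sampled
    nonzero state x for dynamics dx/dt = f(x). Returns True if every
    sampled point satisfies both Lyapunov-type inequalities."""
    for x in samples:
        if all(abs(xi) < 1e-12 for xi in x):
            continue                      # skip the equilibrium itself
        if V(x) <= 0:
            return False
        vdot = sum(g * fi for g, fi in zip(grad_V(x), f(x)))
        if vdot >= -margin:
            return False
    return True
```

    For the stable linear system f(x) = (-x1, -x2) with candidate V(x) = x1² + x2², every sampled point on a grid over [-1, 1]² passes, while the unstable system f(x) = (x1, x2) fails immediately.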

  6. Statistical variability and confidence intervals for planar dose QA pass rates

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2011-11-15

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that depends, in part, on the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization
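
    The %Diff/DTA composite analysis at the heart of these pass rates can be sketched on a 1D dose profile (a toy stand-in for the study's 2D dose planes; tolerances, names, and the simplified DTA test are illustrative assumptions, not the authors' software):

```python
def composite_pass_rate(positions, measured, calculated,
                        diff_tol=3.0, dta_tol=3.0):
    """%Diff/DTA composite analysis on a 1D dose profile. A point
    passes if its globally normalized percent difference is within
    diff_tol, or if some calculated point within dta_tol mm agrees
    with the measurement within diff_tol. Returns the pass rate (%)."""
    d_max = max(calculated)                   # global normalization dose
    passed = 0
    for i, (p, m) in enumerate(zip(positions, measured)):
        diff_ok = abs(m - calculated[i]) / d_max * 100.0 <= diff_tol
        dta_ok = any(abs(p - q) <= dta_tol and
                     abs(m - c) / d_max * 100.0 <= diff_tol
                     for q, c in zip(positions, calculated))
        if diff_ok or dta_ok:
            passed += 1
    return 100.0 * passed / len(measured)
```

    Rerunning such a calculation over many shifted low-density samplings of the same plane, as the study does, is what exposes the statistical variability of the pass rate.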

  7. Finite elements of nonlinear continua

    CERN Document Server

    Oden, John Tinsley

    1972-01-01

    Geared toward undergraduate and graduate students, this text extends applications of the finite element method from linear problems in elastic structures to a broad class of practical, nonlinear problems in continuum mechanics. It treats both theory and applications from a general and unifying point of view.The text reviews the thermomechanical principles of continuous media and the properties of the finite element method, and then brings them together to produce discrete physical models of nonlinear continua. The mathematical properties of these models are analyzed, along with the numerical s

  8. Finite connectivity attractor neural networks

    International Nuclear Information System (INIS)

    Wemmenhove, B; Coolen, A C C

    2003-01-01

    We study a family of diluted attractor neural networks with a finite average number of (symmetric) connections per neuron. As in finite connectivity spin glasses, their equilibrium properties are described by order parameter functions, for which we derive an integral equation in replica symmetric approximation. A bifurcation analysis of this equation reveals the locations of the paramagnetic to recall and paramagnetic to spin-glass transition lines in the phase diagram. The line separating the retrieval phase from the spin-glass phase is calculated at zero temperature. All phase transitions are found to be continuous

  9. Finite-Larmor-radius stability theory of EBT plasmas

    International Nuclear Information System (INIS)

    Berk, H.L.; Cheng, C.Z.; Rosenbluth, M.N.; Van Dam, J.W.

    1982-11-01

    An eikonal ballooning-mode formalism is developed to describe curvature-driven modes of hot electron plasmas in bumpy tori. The formalism treats frequencies comparable to the ion-cyclotron frequency, as well as arbitrary finite Larmor radius and field polarization, although the detailed analysis is restricted to E/sub parallel/ = 0. Moderate hot-electron finite-Larmor-radius effects are found to lower the background beta core limit, whereas strong finite-Larmor-radius effects produce stabilization

  10. Finite Mathematics and Discrete Mathematics: Is There a Difference?

    Science.gov (United States)

    Johnson, Marvin L.

    Discrete mathematics and finite mathematics differ in a number of ways. First, finite mathematics has a longer history and is therefore more stable in terms of course content. Finite mathematics courses emphasize certain particular mathematical tools which are useful in solving the problems of business and the social sciences. Discrete mathematics…

  11. Introduction to finite and spectral element methods using Matlab

    CERN Document Server

    Pozrikidis, Constantine

    2014-01-01

    The Finite Element Method in One Dimension. Further Applications in One Dimension. High-Order and Spectral Elements in One Dimension. The Finite Element Method in Two Dimensions. Quadratic and Spectral Elements in Two Dimensions. Applications in Mechanics. Viscous Flow. Finite and Spectral Element Methods in Three Dimensions. Appendices. References. Index.

  12. Cycles through all finite vertex sets in infinite graphs

    DEFF Research Database (Denmark)

    Kundgen, Andre; Li, Binlong; Thomassen, Carsten

    2017-01-01

    is contained in a cycle of G. We apply this to extend a number of results and conjectures on finite graphs to Hamiltonian curves in infinite locally finite graphs. For example, Barnette’s conjecture (that every finite planar cubic 3-connected bipartite graph is Hamiltonian) is equivalent to the statement...

  13. Divergence-Measure Fields, Sets of Finite Perimeter, and Conservation Laws

    Science.gov (United States)

    Chen, Gui-Qiang; Torres, Monica

    2005-02-01

    Divergence-measure fields in L∞ over sets of finite perimeter are analyzed. A notion of normal traces over boundaries of sets of finite perimeter is introduced, and the Gauss-Green formula over sets of finite perimeter is established for divergence-measure fields in L∞. The normal trace introduced here over a class of surfaces of finite perimeter is shown to be the weak-star limit of the normal traces introduced in Chen & Frid [6] over the Lipschitz deformation surfaces, which implies their consistency. As a corollary, an extension theorem of divergence-measure fields in L∞ over sets of finite perimeter is also established. Then we apply the theory to the initial-boundary value problem of nonlinear hyperbolic conservation laws over sets of finite perimeter.

  14. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered as the comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and target-based attributes can be provided as intervals in some of such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
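
    The two interval-number building blocks the abstract relies on, a distance between intervals and a degree of preference, can be sketched with common textbook forms. The paper introduces its own innovative distance formula; the Euclidean-endpoint distance and Xu-Da-style preference degree below are standard stand-ins, not the paper's definitions:

```python
def interval_distance(a, b):
    """A common distance between intervals a=[a1,a2] and b=[b1,b2]:
    the root-mean-square difference of the endpoints."""
    return (((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / 2) ** 0.5

def preference_degree(a, b):
    """Degree of preference P(a >= b) for two intervals: the overlap-
    based chance that a point drawn from a exceeds a point drawn
    from b; 0.5 means indifference, and P(a>=b) + P(b>=a) = 1."""
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:                          # both degenerate (crisp)
        return 0.5 if a[0] == b[0] else float(a[0] > b[0])
    p = (max(0.0, a[1] - b[0]) - max(0.0, a[0] - b[1])) / (la + lb)
    return min(1.0, max(0.0, p))
```

    Ranking a set of interval-valued assessments then amounts to pairwise preference degrees arranged in a preference matrix, as the extended MULTIMOORA method does for its subordinate parts.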

  15. Study on the pupal morphogenesis of Chrysomya rufifacies (Macquart) (Diptera: Calliphoridae) for postmortem interval estimation.

    Science.gov (United States)

    Ma, Ting; Huang, Jia; Wang, Jiang-Feng

    2015-08-01

    Chrysomya rufifacies (Macquart) is one of the most common species of blow flies at the scene of death in Southern China. Pupae are useful in postmortem interval (PMI) estimation due to their sedentary nature and longer duration of association with the corpse. However, to determine the age of a pupa is more difficult than that of a larva, due to the fact that morphological changes are rarely visible during pupal development. In this study, eggs of C. rufifacies were reared in climatic chambers under four different constant temperatures (20, 24, 28 and 32°C, each±1°C) with the same rearing conditions such as foodstuff, substrate, photoperiod and relative humidity. Ten duplicate pupae were sampled at 8-h intervals from prepupae to emergence under each of the constant temperatures. The pupae were sampled, killed, fixed and dissected, and with the puparium removed, the external morphological changes of the pupae were observed, recorded and photographed. The morphological characters of C. rufifacies pupae were described. Based on the visible external morphological characters during pupal morphogenesis at 28°C±1°C, the developmental period of C. rufifacies was divided into nine developmental stages and recorded in detail. Based on the above-mentioned nine developmental stages, some visible external morphological characters were selected as indications for developmental stages. These indications, mapped to the 8-h sampling intervals at the four constant temperatures, are also described in this study. It is demonstrated that, generally, the duration of each developmental stage of C. rufifacies pupae is inversely correlated with developmental temperature. This study provides relatively systematic pupal developmental data of C. rufifacies for the estimation of PMI. In addition, further work may improve the estimates by focusing on other environmental factors, histological analysis, more thorough external examination by shortening sampling

  16. Introduction to finite element analysis using MATLAB and Abaqus

    CERN Document Server

    Khennane, Amar

    2013-01-01

    There are some books that target the theory of the finite element, while others focus on the programming side of things. Introduction to Finite Element Analysis Using MATLAB(R) and Abaqus accomplishes both. This book teaches the first principles of the finite element method. It presents the theory of the finite element method while maintaining a balance between its mathematical formulation, programming implementation, and application using commercial software. The computer implementation is carried out using MATLAB, while the practical applications are carried out in both MATLAB and Abaqus. MA

  17. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
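
    The core computation described above, extracting the waiting times between threshold exceedances, can be sketched as follows (synthetic lognormal data stand in for the stock volumes; the threshold choice is illustrative):

```python
import random

def recurrence_intervals(series, q):
    """Gaps (in samples) between successive values exceeding threshold q."""
    hits = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

random.seed(1)
volumes = [random.lognormvariate(0.0, 1.0) for _ in range(10000)]
q = sorted(volumes)[int(0.95 * len(volumes))]  # 95th-percentile threshold
taus = recurrence_intervals(volumes, q)
# For i.i.d. data the mean interval is near 1 / P(exceedance) = 20;
# the memory effects reported for real volume series show up as
# deviations of the conditional distributions from this baseline.
```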

  18. Expressing Intervals in Automated Service Negotiation

    Science.gov (United States)

    Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.

    During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.

  19. Optimization of powered Stirling heat engine with finite speed thermodynamics

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad H.; Ahmadi, Mohammad Ali; Pourfayaz, Fathollah; Bidi, Mokhtar; Hosseinzade, Hadi; Feidt, Michel

    2016-01-01

    Highlights: • Based on the finite speed method and the direct method, the optimal performance is investigated. • The effects of major parameters on the optimal performance are investigated. • The accuracy of the results was compared with previous works. - Abstract: Popular thermodynamic analyses, including finite time thermodynamic analysis, were developed based upon external irreversibilities, while internal irreversibilities such as friction, pressure drop and entropy generation were not considered. This disadvantage reduces the reliability of finite time thermodynamic analysis in the design of an accurate Stirling engine model. Consequently, finite time thermodynamic analysis could not sufficiently satisfy researchers concerned with design and optimization issues. In this study, finite speed thermodynamic analysis was employed instead of finite time thermodynamic analysis to study the Stirling heat engine. The finite speed thermodynamic approach is based on the first law of thermodynamics for a closed system with finite speed, together with the direct method. The effects of heat source temperature, regenerator effectiveness, volumetric ratio, piston stroke and rotational speed are included in the analysis. Moreover, the maximum output power at the optimal rotational speed was calculated while pressure losses in the Stirling engine were systematically considered. The results reveal the accuracy and reliability of the finite speed thermodynamic method in the thermodynamic analysis of the Stirling heat engine. The outcomes can help researchers in the design of an appropriate and efficient Stirling engine.

  20. A scoping review of the psychological responses to interval exercise: is interval exercise a viable alternative to traditional exercise?

    Science.gov (United States)

    Stork, Matthew J; Banfield, Laura E; Gibala, Martin J; Martin Ginis, Kathleen A

    2017-12-01

    While considerable evidence suggests that interval exercise confers numerous physiological adaptations linked to improved health, its psychological consequences and behavioural implications are less clear and the subject of intense debate. The purpose of this scoping review was to catalogue studies investigating the psychological responses to interval exercise in order to identify what psychological outcomes have been assessed, the research methods used, and the results. A secondary objective was to identify research issues and gaps. Forty-two published articles met the review inclusion/exclusion criteria. These studies involved 1258 participants drawn from various active/inactive and healthy/unhealthy populations, and 55 interval exercise protocols (69% high-intensity interval training [HIIT], 27% sprint interval training [SIT], and 4% body-weight interval training [BWIT]). Affect and enjoyment were the most frequently studied psychological outcomes. Post-exercise assessments indicate that, overall, enjoyment of and preferences for interval exercise are equal to or greater than those for continuous exercise, and participants can hold relatively positive social cognitions regarding interval exercise. Although several methodological issues (e.g., inconsistent use of terminology, measures and protocols) and gaps (e.g., data on adherence and real-world protocols) require attention, from a psychological perspective the emerging data support the viability of interval exercise as an alternative to continuous exercise.

  1. BRIDGING GAPS BETWEEN ZOO AND WILDLIFE MEDICINE: ESTABLISHING REFERENCE INTERVALS FOR FREE-RANGING AFRICAN LIONS (PANTHERA LEO).

    Science.gov (United States)

    Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna

    2017-06-01

    The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions ( Panthera leo ) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
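
    The interval-generation step described here (a parametric fit to log-transformed data, with bootstrapped confidence limits on the interval bounds) can be sketched as follows; the bootstrap count and the 1.96 multiplier are conventional assumptions for illustration, not the study's exact procedure:

```python
import math
import random
import statistics

def parametric_bootstrap_ri(values, n_boot=2000, seed=42):
    """Reference interval from log-transformed data: fit a normal to
    log(values), then parametrically bootstrap the two RI bounds."""
    logs = [math.log(v) for v in values]
    mu, sd = statistics.mean(logs), statistics.stdev(logs)
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = [rng.gauss(mu, sd) for _ in range(len(logs))]
        m, s = statistics.mean(resample), statistics.stdev(resample)
        lows.append(math.exp(m - 1.96 * s))   # 2.5th percentile estimate
        highs.append(math.exp(m + 1.96 * s))  # 97.5th percentile estimate
    lows.sort()
    highs.sort()
    ri = (statistics.median(lows), statistics.median(highs))
    # 90% confidence limits on the lower RI bound, in the spirit of the study
    ci_lower_bound = (lows[int(0.05 * n_boot)], lows[int(0.95 * n_boot)])
    return ri, ci_lower_bound
```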

  2. Precise magnetostatic field using the finite element method

    International Nuclear Information System (INIS)

    Nascimento, Francisco Rogerio Teixeira do

    2013-01-01

    The main objective of this work is to simulate electromagnetic fields using the Finite Element Method. Even in the easiest cases of electrostatic and magnetostatic numerical simulation, some problems appear when the nodal finite element is used: it is difficult to model vector fields with scalar functions, mainly in non-homogeneous materials. With the aim of solving these problems, two types of techniques are tried: adaptive remeshing using nodal elements, and edge finite elements, which ensure the continuity of the tangential components. Numerical analyses of simple electromagnetic problems with homogeneous and non-homogeneous materials are performed using, first, adaptive remeshing based on various error indicators and, second, the numerical solution of waveguides using edge finite elements. (author)

  3. Interpretability degrees of finitely axiomatized sequential theories

    NARCIS (Netherlands)

    Visser, Albert

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed

  4. Interpretability Degrees of Finitely Axiomatized Sequential Theories

    NARCIS (Netherlands)

    Visser, Albert

    2012-01-01

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question

  5. Finite Element Analysis of Pipe T-Joint

    OpenAIRE

    P.M. Gedkar; Dr. D.V. Bhope

    2012-01-01

    This paper reports the stress analysis of two intersecting pressurized cylinders using the finite element method. Different combinations of run pipe and branch pipe dimensions are used to investigate the stresses in the pipe at the intersection. In this study the stress analysis is accomplished with the finite element package ANSYS.

  6. Dynamic pricing and learning with finite inventories

    NARCIS (Netherlands)

    den Boer, A.V.; Zwart, Bert

    2013-01-01

    We study a dynamic pricing problem with finite inventory and parametric uncertainty on the demand distribution. Products are sold during selling seasons of finite length, and inventory that is unsold at the end of a selling season, perishes. The goal of the seller is to determine a pricing strategy

  7. Dynamic pricing and learning with finite inventories

    NARCIS (Netherlands)

    den Boer, A.V.; Zwart, Bert

    We study a dynamic pricing problem with finite inventory and parametric uncertainty on the demand distribution. Products are sold during selling seasons of finite length, and inventory that is unsold at the end of a selling season perishes. The goal of the seller is to determine a pricing strategy

  8. Dynamic Pricing and Learning with Finite Inventories

    NARCIS (Netherlands)

    A.P. Zwart (Bert); A.V. den Boer (Arnoud)

    2015-01-01

    We study a dynamic pricing problem with finite inventory and parametric uncertainty on the demand distribution. Products are sold during selling seasons of finite length, and inventory that is unsold at the end of a selling season perishes. The goal of the seller is to determine a

  9. Dynamic pricing and learning with finite inventories

    NARCIS (Netherlands)

    Boer, den A.V.; Zwart, B.

    2015-01-01

    We study a dynamic pricing problem with finite inventory and parametric uncertainty on the demand distribution. Products are sold during selling seasons of finite length, and inventory that is unsold at the end of a selling season perishes. The goal of the seller is to determine a pricing strategy

  10. Probabilistic finite elements for fracture mechanics

    Science.gov (United States)

    Besterfield, Glen

    1988-01-01

    The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic quantities, such as the expectation, covariance and correlation of stress intensity factors, are calculated for random load, random material and random crack length. The method is computationally quite efficient and can be expected to determine the probability of fracture or reliability.

  11. A study on the nonlinear finite element analysis of reinforced concrete structures: shell finite element formulation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Jin; Seo, Jeong Moon

    2000-08-01

    The main goal of this research is to establish a methodology for finite element analysis of containment buildings that predicts not only global behaviour but also local failure modes. In this report, we summarize some existing numerical analysis techniques to be improved for containment buildings. In other words, a complete description of the standard degenerated shell finite element formulation is provided for nonlinear stress analysis of nuclear containment structures. A shell finite element is derived using the degenerated solid concept, which does not rely on a specific shell theory. Reissner-Mindlin assumptions are adopted to account for the transverse shear deformation effect. In order to minimize the sensitivity of the constitutive equation to structural type, a microscopic material model is adopted. Four solution algorithms based on the standard Newton-Raphson method are discussed. Finally, two numerical examples are carried out to test the performance of the adopted shell model.

  12. A study on the nonlinear finite element analysis of reinforced concrete structures: shell finite element formulation

    International Nuclear Information System (INIS)

    Lee, Sang Jin; Seo, Jeong Moon

    2000-08-01

    The main goal of this research is to establish a methodology for finite element analysis of containment buildings that predicts not only global behaviour but also local failure modes. In this report, we summarize some existing numerical analysis techniques to be improved for containment buildings. In other words, a complete description of the standard degenerated shell finite element formulation is provided for nonlinear stress analysis of nuclear containment structures. A shell finite element is derived using the degenerated solid concept, which does not rely on a specific shell theory. Reissner-Mindlin assumptions are adopted to account for the transverse shear deformation effect. In order to minimize the sensitivity of the constitutive equation to structural type, a microscopic material model is adopted. Four solution algorithms based on the standard Newton-Raphson method are discussed. Finally, two numerical examples are carried out to test the performance of the adopted shell model.

  13. Finite size effects of a pion matrix element

    International Nuclear Information System (INIS)

    Guagnelli, M.; Jansen, K.; Palombi, F.; Petronzio, R.; Shindler, A.; Wetzorke, I.

    2004-01-01

    We investigate finite size effects of the pion matrix element of the non-singlet, twist-2 operator corresponding to the average momentum of non-singlet quark densities. Using the quenched approximation, they come out to be surprisingly large when compared to the finite size effects of the pion mass. As a consequence, simulations of corresponding nucleon matrix elements could be affected by finite size effects even stronger which could lead to serious systematic uncertainties in their evaluation

  14. On Algebraic Study of Type-2 Fuzzy Finite State Automata

    Directory of Open Access Journals (Sweden)

    Anupam K. Singh

    2017-08-01

    Full Text Available Theories of fuzzy sets and type-2 fuzzy sets are powerful mathematical tools for modeling various types of uncertainty. In this paper we introduce the concept of type-2 fuzzy finite state automata and discuss their algebraic study, i.e., we introduce the concept of homomorphisms between two type-2 fuzzy finite state automata and associate a type-2 fuzzy transformation semigroup with a type-2 fuzzy finite state automaton. Finally, we discuss several products of type-2 fuzzy finite state automata and show that these products are categorical products.

  15. Parametric study of unconstrained high-pressure torsion- Finite element analysis

    International Nuclear Information System (INIS)

    Halloumi, A; Busquet, M; Descartes, S

    2014-01-01

    High-pressure torsion (HPT) experiments have been investigated numerically. An axisymmetric model with twist was developed with commercial finite element software (Abaqus) to study locally the specifics of the stress and strain history within the transformed layers produced during HPT processing. The material's local behaviour in the plastic domain was modelled. A parametric study highlights the role of the imposed parameters (friction coefficient at the anvil/sample interfaces, imposed pressure) on the stress/strain distribution in the sample bulk for two materials: ultra-high-purity iron and steel grade R260. The present modelling provides a tool to investigate and analyse the effect of pressure and friction on the local stress and strain history during the HPT process, and to couple the results with experiments.

  16. Interval Solution for Nonlinear Programming of Maximizing the Fatigue Life of V-Belt under Polymorphic Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Zhong Wan

    2013-01-01

    Full Text Available In accordance with practical engineering design conditions, a nonlinear programming model is constructed for maximizing the fatigue life of a V-belt drive in which some polymorphic uncertainties are incorporated. For a given satisfaction level and a confidence level, an equivalent formulation of this uncertain optimization model is obtained in which only interval parameters are involved. Based on the concepts of maximal and minimal range inequalities for describing interval inequalities, the interval parameter model is decomposed into two standard nonlinear programming problems, and an algorithm, called the two-step based sampling algorithm, is developed to find an interval optimal solution for the original problem. A case study is employed to demonstrate the validity and practicability of the constructed model and the algorithm.
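
    The decomposition idea, solving one standard nonlinear program for the least favorable and one for the most favorable realization of an interval parameter, can be illustrated with a toy problem. The belt-life objective itself is not given in closed form here, so `life` below is a hypothetical stand-in:

```python
def maximize(f, lo=0.0, hi=10.0, steps=100000):
    """Crude 1-D maximization by grid search (keeps the sketch stdlib-only)."""
    best_x = max((lo + i * (hi - lo) / steps for i in range(steps + 1)), key=f)
    return best_x, f(best_x)

def life(x, c):
    """Hypothetical stand-in objective with one interval parameter c."""
    return c * x - x ** 2

c_lo, c_hi = 2.0, 4.0                          # interval parameter c in [2, 4]
_, worst = maximize(lambda x: life(x, c_lo))   # least favorable endpoint
_, best = maximize(lambda x: life(x, c_hi))    # most favorable endpoint
# Interval optimal value [worst, best] = [1.0, 4.0]: an interval solution
# is reported instead of a single optimum.
```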

  17. The importance of the sampling frequency in determining short-time-averaged irradiance and illuminance for rapidly changing cloud cover

    International Nuclear Information System (INIS)

    Delaunay, J.J.; Rommel, M.; Geisler, J.

    1994-01-01

    The sampling interval is an important parameter which must be chosen carefully if measurements of the direct, global, and diffuse irradiance or illuminance are carried out to determine their averages over a given period. Using measurements from a day with rapidly moving clouds, we investigated the influence of the sampling interval on the uncertainty of the calculated 15-min averages. We conclude, for this averaging period, that the sampling interval should not exceed 60 s and 10 s for measurement of the diffuse and global components respectively, to reduce the influence of the sampling interval below 2%. For the direct component, even a 5 s sampling interval is too long to reach this influence level on days with extremely quickly changing insolation conditions. (author)
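
    The effect can be reproduced with a short experiment: subsample a rapidly fluctuating synthetic signal at increasing intervals and compare the resulting 15-min means. The signal model below is an assumption for illustration, not the authors' data:

```python
import math
import random

random.seed(0)
n = 900                       # one 15-min period sampled at 1 s resolution
# Synthetic irradiance (W/m^2) with fast, cloud-like fluctuations.
signal = [600.0 + 300.0 * math.sin(i / 7.0) * random.random() for i in range(n)]
true_mean = sum(signal) / n

for step in (1, 5, 10, 60):   # candidate sampling intervals in seconds
    sub = signal[::step]
    err = abs(sum(sub) / len(sub) - true_mean) / true_mean * 100.0
    print(f"{step:3d} s sampling -> {err:.2f}% error in the 15-min average")
```

    Coarser sampling leaves fewer points to average over the fluctuations, so the deviation from the true 15-min mean generally grows with the sampling interval.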

  18. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development. Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
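
    The numerical agreement argued for here is easy to verify in the textbook case of a normal mean with known measurement error, where a flat prior makes the Bayesian credible interval coincide exactly with the frequentist confidence interval (the data values below are made up for illustration):

```python
from statistics import NormalDist

data = [4.8, 5.6, 5.1, 4.9]          # sparse, made-up measurements
sigma = 0.5                          # known measurement standard deviation
n, xbar = len(data), sum(data) / len(data)
half = NormalDist().inv_cdf(0.975) * sigma / n ** 0.5

# Frequentist 95% confidence interval for the mean:
freq = (xbar - half, xbar + half)
# Bayesian 95% credible interval with a flat prior on the mean;
# the posterior is Normal(xbar, sigma / sqrt(n)), so the interval coincides:
bayes = (xbar - half, xbar + half)
# With discrete (e.g. Poisson) data the frequentist interval comes out
# somewhat longer, as the record notes, but the two remain close.
```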

  19. Reference interval computation: which method (not) to choose?

    Science.gov (United States)

    Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C

    2012-07-11

    When different methods are applied to reference interval (RI) calculation, the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, the results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using plain parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation is the preferable way of calculating RIs, provided the transformed data satisfy a normality test. If not, bootstrapping is always available and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
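
    A stdlib-only sketch of the transformed parametric method: choose the Box-Cox lambda by maximizing the profile log-likelihood on a grid, fit a normal in the transformed space, and back-transform the 2.5th/97.5th percentile limits. The grid range and the 1.96 multiplier are conventional choices, not prescribed by the record:

```python
import math
import random
import statistics

def boxcox(x, lam):
    return math.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def boxcox_loglik(values, lam):
    """Profile log-likelihood of a normal fit to Box-Cox transformed data."""
    t = [boxcox(v, lam) for v in values]
    n = len(values)
    return (-n / 2.0 * math.log(statistics.pvariance(t))
            + (lam - 1.0) * sum(math.log(v) for v in values))

def best_lambda(values):
    grid = [i / 100.0 for i in range(-200, 201)]
    return max(grid, key=lambda lam: boxcox_loglik(values, lam))

def inv_boxcox(y, lam):
    return math.exp(y) if abs(lam) < 1e-12 else (lam * y + 1.0) ** (1.0 / lam)

def transformed_parametric_ri(values):
    lam = best_lambda(values)
    t = [boxcox(v, lam) for v in values]
    m, s = statistics.mean(t), statistics.stdev(t)
    return inv_boxcox(m - 1.96 * s, lam), inv_boxcox(m + 1.96 * s, lam)

# Demo on synthetic lognormal data (lambda should land near 0):
rng = random.Random(3)
demo = [math.exp(rng.gauss(1.0, 0.4)) for _ in range(200)]
lam_hat = best_lambda(demo)
lo, hi = transformed_parametric_ri(demo)
```

    In practice the fitted lambda is kept only if the transformed data pass a normality test, as the record recommends; otherwise one falls back to bootstrapping.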

  20. The probability distribution of maintenance cost of a system affected by the gamma process of degradation: Finite time solution

    International Nuclear Information System (INIS)

    Cheng, Tianjin; Pandey, Mahesh D.; Weide, J.A.M. van der

    2012-01-01

    The stochastic gamma process has been widely used to model uncertain degradation in engineering systems and structures. The optimization of the condition-based maintenance (CBM) policy is typically based on the minimization of the asymptotic cost rate. In the financial planning of a maintenance program, however, a more accurate prediction interval for the cost is needed for prudent decision making. The prediction interval cannot be estimated unless the probability distribution of cost is known. In this context, the asymptotic cost rate has a limited utility. This paper presents the derivation of the probability distribution of maintenance cost, when the system degradation is modelled as a stochastic gamma process. A renewal equation is formulated to derive the characteristic function, then the discrete Fourier transform of the characteristic function leads to the complete probability distribution of cost in a finite time setting. The proposed approach is useful for a precise estimation of prediction limits and optimization of the maintenance cost.
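
    The paper derives the exact finite-time cost distribution via the characteristic function and a discrete Fourier transform; as a rough alternative, the same distribution can be approximated by Monte Carlo simulation of the gamma degradation process under a CBM rule. All costs, thresholds and rate parameters below are hypothetical:

```python
import random

def simulate_cost(horizon, dt, shape_rate, scale, insp_cost, pm_cost,
                  cm_cost, pm_level, fail_level, rng):
    """One finite-horizon cost history under a CBM rule: inspect every dt,
    renew preventively above pm_level, correctively above fail_level."""
    t, x, cost = 0.0, 0.0, 0.0
    while t < horizon:
        t += dt
        # Stationary gamma-process increment over dt (mean shape_rate*scale*dt).
        x += rng.gammavariate(shape_rate * dt, scale)
        cost += insp_cost
        if x >= fail_level:
            cost += cm_cost   # corrective renewal
            x = 0.0
        elif x >= pm_level:
            cost += pm_cost   # preventive renewal
            x = 0.0
    return cost

rng = random.Random(7)
costs = [simulate_cost(horizon=100.0, dt=1.0, shape_rate=0.5, scale=1.0,
                       insp_cost=1.0, pm_cost=10.0, cm_cost=50.0,
                       pm_level=30.0, fail_level=40.0, rng=rng)
         for _ in range(5000)]
mean_cost = sum(costs) / len(costs)
# The empirical distribution of `costs` approximates the finite-time cost
# distribution that the paper obtains exactly, and its upper quantiles give
# the prediction limits needed for financial planning.
```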