WorldWideScience

Sample records for finite sample size

  1. Precision of quantization of the Hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) regime is carried out. The precision of quantization is analyzed for finite-size samples and is shown to follow a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization depends linearly on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples.
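
    A minimal sketch of how such a power-law exponent can be extracted from data (the sizes and deviations below are hypothetical, not the paper's values):

        import numpy as np

        # Hypothetical (L, delta) pairs: deviation of the Hall conductivity
        # from its quantized value versus sample size.
        L = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                  # sizes, arb. units
        delta = np.array([2.0e-2, 9.0e-3, 4.5e-3, 2.2e-3, 1.1e-3])

        # A power law delta ~ L**(-alpha) is a straight line in log-log space.
        slope, intercept = np.polyfit(np.log(L), np.log(delta), 1)
        print(f"fitted power-law exponent alpha = {-slope:.2f}")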

  2. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination, and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials.
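
    For flavor, a minimal sketch of one classical result treated in such texts: the sample size needed to estimate a population mean within a margin d, with a finite-population correction (formula and numbers are standard textbook material, not taken from this book):

        import math

        def sample_size_mean(sigma, d, z=1.96, N=None):
            """n needed to estimate a mean within +/- d at confidence level z."""
            n0 = (z * sigma / d) ** 2           # infinite-population formula
            if N is not None:                   # finite-population correction
                n0 = n0 / (1 + (n0 - 1) / N)
            return math.ceil(n0)

        print(sample_size_mean(sigma=15, d=3))          # simple random sampling
        print(sample_size_mean(sigma=15, d=3, N=500))   # finite population, N = 500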

  3. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply as the sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall-Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the numbers of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation pile-up effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and it gives a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.

  4. Robust weak measurements on finite samples

    International Nuclear Information System (INIS)

    Tollaksen, Jeff

    2007-01-01

    A new weak measurement procedure is introduced for finite samples; it yields accurate weak values that lie outside the range of eigenvalues and does not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing the sample size. It can also extend the strength of the coupling between the system and the measuring device to a new regime.

  5. Characterization of resonances using finite size effects

    International Nuclear Information System (INIS)

    Pozsgay, B.; Takacs, G.

    2006-01-01

    We develop methods to extract resonance widths from finite volume spectra of (1+1)-dimensional quantum field theories. Our two methods are based on Luscher's description of finite size corrections, and are dubbed the Breit-Wigner and the improved "mini-Hamiltonian" method, respectively. We establish a consistent framework for the finite volume description of sufficiently narrow resonances that takes into account the finite size corrections and mass shifts properly. Using predictions from form factor perturbation theory, we test the two methods against finite size data from the truncated conformal space approach, and find excellent agreement which confirms both the theoretical framework and the numerical validity of the methods. Although our investigation is carried out in 1+1 dimensions, the extension to physical 3+1 space-time dimensions appears straightforward, given sufficiently accurate finite volume spectra.

  6. Finite size scaling theory

    International Nuclear Information System (INIS)

    Rittenberg, V.

    1983-01-01

    Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method, the transfer-matrix technique and the Hamiltonian formalism, are discussed in this paper. The method is presented, with the equations from which the scaling function, the critical temperature, and the exponent ν are derived. As an application of the method, a 3-state Hamiltonian with Z_3 global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and the indices ν estimated by finite-size scaling are given.

  7. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
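
    A rough sketch of the extrapolation idea (assuming, for illustration only, leading 1/t and 1/N corrections and made-up estimator values; the paper's actual scaling forms are more detailed):

        import numpy as np

        # Hypothetical estimates psi(t, N) of a large deviation function at
        # several simulation times t and population (clone) sizes N.
        t   = np.array([10., 10., 20., 20., 40., 40.])
        N   = np.array([100., 400., 100., 400., 100., 400.])
        psi = np.array([0.310, 0.295, 0.285, 0.270, 0.272, 0.258])

        # Least-squares fit of psi = psi_inf + a/t + b/N; psi_inf is the
        # infinite-time, infinite-size extrapolation.
        A = np.column_stack([np.ones_like(t), 1.0 / t, 1.0 / N])
        coef, *_ = np.linalg.lstsq(A, psi, rcond=None)
        print(f"extrapolated psi_inf = {coef[0]:.4f}")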

  8. Polyelectrolyte Bundles: Finite size at thermodynamic equilibrium?

    Science.gov (United States)

    Sayar, Mehmet

    2005-03-01

    Experimental observation of finite size aggregates formed by polyelectrolytes such as DNA and F-actin, as well as synthetic polymers like poly(p-phenylene), has attracted considerable attention in recent years. Here, bundle formation in rigid rod-like polyelectrolytes is studied via computer simulations. For the case of hydrophobically modified polyelectrolytes, finite size bundles are observed even in the presence of only monovalent counterions. Furthermore, in the absence of a hydrophobic backbone, we have also observed formation of finite size aggregates via multivalent counterion condensation. The size distribution and stability of such aggregates are analyzed in this study.

  9. Finite size effects of a pion matrix element

    International Nuclear Information System (INIS)

    Guagnelli, M.; Jansen, K.; Palombi, F.; Petronzio, R.; Shindler, A.; Wetzorke, I.

    2004-01-01

    We investigate finite size effects of the pion matrix element of the non-singlet, twist-2 operator corresponding to the average momentum of non-singlet quark densities. Using the quenched approximation, they come out to be surprisingly large when compared to the finite size effects of the pion mass. As a consequence, simulations of corresponding nucleon matrix elements could be affected even more strongly by finite size effects, which could lead to serious systematic uncertainties in their evaluation.

  10. Electrokinetic Flow in Microchannels with Finite Reservoir Size Effects

    International Nuclear Information System (INIS)

    Yan, D; Yang, C; Nguyen, N-T; Huang, X

    2006-01-01

    In electrokinetically-driven microfluidic applications, reservoirs are indispensable and have finite sizes. During operation, as the liquid level difference in the reservoirs changes over time, the flow in a microchannel exhibits a combination of electroosmotic flow and time-dependent induced backpressure-driven flow. In this work, an assessment of the finite reservoir size effect on electroosmotic flows is presented theoretically and experimentally. A model is developed to describe the time-dependent electrokinetic flow with finite reservoir size effects. The theoretical analysis shows that under certain conditions the finite reservoir size effect is significant. The important parameters that describe the effect of finite reservoir size on the flow characteristics are discussed. A new concept denoted the 'effective pumping period' is introduced to characterize the reservoir size effect. The proposed model clearly identifies the mechanisms of the finite-reservoir size effects and is further confirmed using the micro-PIV technique. The results of this study can be used to facilitate the design of microfluidic devices.

  11. Finite Size Scaling of Perceptron

    OpenAIRE

    Korutcheva, Elka; Tonchev, N.

    2000-01-01

    We study the first-order transition in the model of a simple perceptron with continuous weights and a large but finite value of the inputs. Making the analogy with usual finite-size physical systems, we calculate the shift and the rounding exponents near the transition point. In the case of a general perceptron with a larger variety of inputs, the analysis only gives bounds for the exponents.

  12. Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods

    International Nuclear Information System (INIS)

    Baker, A.R.

    1982-07-01

    A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group for which analytical solutions were possible. A computer code SLAB was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms up to h^2. It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion for the neutron current as far as the term in h^4. In this case, the criticality parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
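
    The quadratic mesh-size dependence is easy to reproduce; the sketch below (not the SLAB code; cross sections are arbitrary test values) solves the one-group bare-slab criticality problem by standard finite differences and shows the error in the criticality parameter falling by roughly a factor of four per mesh halving:

        import numpy as np

        D, Sig_a, nuSig_f, a = 1.0, 0.07, 0.10, 100.0    # arbitrary one-group data
        k_exact = nuSig_f / (Sig_a + D * (np.pi / a) ** 2)  # analytic bare-slab result

        def k_fd(n):
            """Criticality parameter on a uniform mesh with n interior points."""
            h = a / (n + 1)
            M = (np.diag(np.full(n, 2 * D / h ** 2 + Sig_a))
                 + np.diag(np.full(n - 1, -D / h ** 2), 1)
                 + np.diag(np.full(n - 1, -D / h ** 2), -1))
            # k = fission / (leakage + absorption) for the fundamental mode
            return nuSig_f / np.linalg.eigvalsh(M).min()

        for n in (10, 20, 40, 80):
            print(f"n = {n:3d}   k = {k_fd(n):.6f}   error = {k_fd(n) - k_exact:+.2e}")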

  13. Percolation through voids around overlapping spheres: A dynamically based finite-size scaling analysis

    Science.gov (United States)

    Priour, D. J.

    2014-01-01

    The percolation threshold for flow or conduction through voids surrounding randomly placed spheres is calculated. With large-scale Monte Carlo simulations, we give a rigorous continuum treatment to the geometry of the impenetrable spheres and the spaces between them. To properly exploit finite-size scaling, we examine multiple systems of differing sizes, with suitable averaging over disorder, and extrapolate to the thermodynamic limit. An order parameter based on the statistical sampling of stochastically driven dynamical excursions and amenable to finite-size scaling analysis is defined, calculated for various system sizes, and used to determine the critical volume fraction ϕ_c = 0.0317 ± 0.0004 and the correlation length exponent ν = 0.92 ± 0.05.

  14. Finite-State Complexity and the Size of Transducers

    Directory of Open Access Journals (Sweden)

    Cristian Calude

    2010-08-01

    Finite-state complexity is a variant of algorithmic information theory obtained by replacing Turing machines with finite transducers. We consider the state-size of transducers needed for minimal descriptions of arbitrary strings and, as our main result, we show that the state-size hierarchy with respect to a standard encoding is infinite. We consider also hierarchies yielded by more general computable encodings.

  15. Finite-size scaling: a collection of reprints

    CERN Document Server

    1988-01-01

    Over the past few years, finite-size scaling has become an increasingly important tool in studies of critical systems. This is partly due to an increased understanding of finite-size effects by analytical means, and partly due to our ability to treat larger systems with large computers. The aim of this volume was to collect those papers which have been important for this progress and which illustrate novel applications of the method. The emphasis has been placed on relatively recent developments, including the use of the ε-expansion and of conformal methods.

  16. Finite-size effects on band structure of CdS nanocrystallites studied by positron annihilation

    International Nuclear Information System (INIS)

    Kar, Soumitra; Biswas, Subhajit; Chaudhuri, Subhadra; Nambissan, P.M.G.

    2005-01-01

    Quantum confinement effects in nanocrystalline CdS were studied using positrons as spectroscopic probes to explore the defect characteristics. The lifetime of positrons annihilating at the vacancy clusters on nanocrystalline grain surfaces increased remarkably upon the onset of such finite-size effects. The Doppler-broadened line shape was also found to reflect rather sensitively such distinct changes in the electron momentum redistribution scanned by the positrons, owing to the widening of the band gap. The nanocrystalline sizes of the samples used were confirmed by x-ray diffraction and high resolution transmission electron microscopy, and the optical absorption results supported the quantum size effects. Positron annihilation results indicated distinct qualitative changes between CdS nanorods and the bulk sample, notwithstanding the identical x-ray diffraction pattern and close resemblance of the optical absorption spectra. The results are promising for establishing positron annihilation as a successful tool for the study of such finite-size effects in semiconductor nanoparticles.

  17. Finite size effects in simulations of protein aggregation.

    Directory of Open Access Journals (Sweden)

    Amol Pawar

    It is becoming increasingly clear that the soluble protofibrillar species that precede amyloid fibril formation are associated with a range of neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. Computer simulations of the processes that lead to the formation of these oligomeric species are starting to make significant contributions to our understanding of the determinants of protein aggregation. We simulate different systems at constant concentration but with a different number of peptides, and we study how the finite number of proteins affects the underlying free energy of the system and therefore the relative stability of the species involved in the process. If not taken into account, this finite size effect can undermine the validity of theoretical predictions regarding the relative stability of the species involved and the rates of conversion from one to the other. We discuss the reasons that give rise to this finite size effect from both a probabilistic and an energy-fluctuation point of view, and also how this problem can be dealt with by a finite size scaling analysis.

  18. Finite size scaling and lattice gauge theory

    International Nuclear Information System (INIS)

    Berg, B.A.

    1986-01-01

    Finite size (Fisher) scaling is investigated for four dimensional SU(2) and SU(3) lattice gauge theories without quarks. It allows one to disentangle violations of (asymptotic) scaling and finite volume corrections. Mass spectrum, string tension, deconfinement temperature and lattice β-function are considered. For appropriate volumes, Monte Carlo investigations seem to be able to control the finite volume continuum limit. Contact is made with Luescher's small volume expansion and possibly also with the asymptotic large volume behavior. 41 refs., 19 figs

  19. Dynamic properties of epidemic spreading on finite size complex networks

    Science.gov (United States)

    Li, Ying; Liu, Yang; Shan, Xiu-Ming; Ren, Yong; Jiao, Jian; Qiu, Ben

    2005-11-01

    The Internet presents a complex topological structure, on which computer viruses can easily spread. By using theoretical analysis and computer simulation methods, the dynamic process of disease spreading on finite size networks with complex topological structure is investigated. On finite size networks, the spreading process of the SIS (susceptible-infected-susceptible) model is a finite Markov chain with an absorbing state. Two parameters, the survival probability and the conditional infecting probability, are introduced to describe the dynamic properties of disease spreading on finite size networks. Our results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks. Also, knowledge about the dynamic character of virus spreading is helpful for adopting immunization policies.
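
    A minimal simulation sketch of the survival probability in this setting (assuming the networkx library and a simple discrete-time SIS update; all parameters are illustrative, not the paper's):

        import random
        import networkx as nx

        def sis_survival(n=100, p=0.05, beta=0.3, gamma=0.2, T=200, runs=300):
            """Fraction of runs in which the epidemic is still alive at time T."""
            G = nx.gnp_random_graph(n, p, seed=42)
            survived = 0
            for run in range(runs):
                rng = random.Random(run)
                infected = {0}                       # single initial infective
                for _ in range(T):
                    nxt = set(infected)
                    for i in infected:
                        for j in G.neighbors(i):     # infection attempts
                            if j not in infected and rng.random() < beta:
                                nxt.add(j)
                        if rng.random() < gamma:     # recovery to susceptible
                            nxt.discard(i)
                    infected = nxt
                    if not infected:                 # absorbing state reached
                        break
                survived += bool(infected)
            return survived / runs

        print(sis_survival())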

  1. Finite groups with three conjugacy class sizes of some elements

    Indian Academy of Sciences (India)

    Conjugacy class sizes; p-nilpotent groups; finite groups. (Fragmentary indexing snippet: the paper studies finite groups G having exactly two conjugacy class sizes of elements of prime power order; it cites Huppert B, Character Theory of Finite Groups, de Gruyter Exp. Math.)

  2. Finite-size scaling in two-dimensional superfluids

    International Nuclear Information System (INIS)

    Schultka, N.; Manousakis, E.

    1994-01-01

    Using the XY model and a nonlocal updating scheme called cluster Monte Carlo, we calculate the superfluid density of a two-dimensional superfluid on large square lattices L×L up to 400×400. This technique allows us to approach temperatures close to the critical point, and by studying a wide range of L values and applying finite-size scaling theory we are able to extract the critical properties of the system. We calculate the superfluid density and from that we extract the renormalization-group beta function. We derive finite-size scaling expressions using the Kosterlitz-Thouless-Nelson renormalization group equations and show that they are in very good agreement with our numerical results. This allows us to extrapolate our results to the infinite-size limit. We also find that the universal discontinuity of the superfluid density at the critical temperature is in very good agreement with the Kosterlitz-Thouless-Nelson calculation and experiments.

  3. Quark bag coupling to finite size pions

    International Nuclear Information System (INIS)

    De Kam, J.; Pirner, H.J.

    1982-01-01

    A standard approximation in theories of quark bags coupled to a pion field is to treat the pion as an elementary field, ignoring its substructure and finite size. A difficulty associated with these treatments is the lack of stability of the quark bag, due to the rapid increase of the pion pressure on the bag as the bag size diminishes. We investigate the effects of the finite size of the q-anti-q pion on the pion-quark bag coupling by means of a simple nonlocal pion-quark interaction. With this amendment the pion pressure on the bag vanishes as the bag size goes to zero. No stability problems are encountered in this description. Furthermore, for extended pions there is no longer a maximum value for the bag parameter B. Therefore 'little bag' solutions may be found provided that B is large enough. We also discuss the possibility of a second minimum in the bag energy function. (orig.)

  4. The King model for electrons in a finite-size ultracold plasma

    Energy Technology Data Exchange (ETDEWEB)

    Vrinceanu, D; Collins, L A [Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Balaraman, G S [School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States)

    2008-10-24

    A self-consistent model for a finite-size non-neutral ultracold plasma is obtained by extending a conventional model of globular star clusters. This model describes the dynamics of electrons at quasi-equilibrium trapped within the potential created by a cloud of stationary ions. A random sample of electron positions and velocities can be generated with the statistical properties defined by this model.

  5. Finite size effects for giant magnons on physical strings

    International Nuclear Information System (INIS)

    Minahan, J.A.; Ohlsson Sax, O.

    2008-01-01

    Using finite gap methods, we find the leading order finite size corrections for an arbitrary number of giant magnons on physical strings, where the sum of the momenta is a multiple of 2π. Our results are valid for the Hofman-Maldacena fundamental giant magnons as well as their dyonic generalizations. The energy corrections turn out to be surprisingly simple, especially if all the magnons are fundamental, and at leading order are independent of the magnon flavors. We also show how to use the Bethe ansatz to find finite size corrections for dyonic giant magnons with large R-charges

  6. Finite-size analysis of continuous-variable measurement-device-independent quantum key distribution

    Science.gov (United States)

    Zhang, Xueying; Zhang, Yichen; Zhao, Yijia; Wang, Xiangyu; Yu, Song; Guo, Hong

    2017-10-01

    We study the impact of the finite-size effect on the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, mainly considering the finite-size effect on the parameter estimation procedure. The central-limit theorem and maximum likelihood estimation theorem are used to estimate the parameters. We also analyze the relationship between the number of exchanged signals and the optimal modulation variance in the protocol. It is proved that when Charlie's position is close to Bob, the CV-MDI QKD protocol has the farthest transmission distance in the finite-size scenario. Finally, we discuss the impact of finite-size effects related to the practical detection in the CV-MDI QKD protocol. The overall results indicate that the finite-size effect has a great influence on the secret-key rate of the CV-MDI QKD protocol and should not be ignored.

  7. Finite-size scaling of survival probability in branching processes.

    Science.gov (United States)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y/(e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
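
    At criticality this scaling behavior is consistent with Kolmogorov's classical estimate P_n ~ 2/(sigma^2 n), which a short simulation can check (a sketch, not the authors' code; Poisson(1) offspring, so sigma^2 = 1):

        import numpy as np

        rng = np.random.default_rng(0)

        def survival_prob(n_gen, runs=20000):
            """Fraction of critical Galton-Watson trees alive after n_gen generations."""
            alive = 0
            for _ in range(runs):
                z = 1
                for _ in range(n_gen):
                    z = rng.poisson(1.0, size=z).sum()  # offspring of each individual
                    if z == 0:
                        break
                alive += (z > 0)
            return alive / runs

        for n in (10, 20, 40):
            print(n, n * survival_prob(n))   # should approach 2 as n grows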

  8. Finite-size polyelectrolyte bundles at thermodynamic equilibrium

    Science.gov (United States)

    Sayar, M.; Holm, C.

    2007-01-01

    We present the results of extensive computer simulations performed on solutions of monodisperse charged rod-like polyelectrolytes in the presence of trivalent counterions. To overcome energy barriers we used a combination of parallel tempering and hybrid Monte Carlo techniques. Our results show that for small values of the electrostatic interaction the solution mostly consists of dispersed single rods. The potential of mean force between the polyelectrolyte monomers yields an attractive interaction at short distances. For a range of larger values of the Bjerrum length, we find finite-size polyelectrolyte bundles at thermodynamic equilibrium. Further increase of the Bjerrum length eventually leads to phase separation and precipitation. We discuss the origin of the observed thermodynamic stability of the finite-size aggregates.

  9. Chiral anomaly and anomalous finite-size conductivity in graphene

    Science.gov (United States)

    Shen, Shun-Qing; Li, Chang-An; Niu, Qian

    2017-09-01

    Graphene is a monolayer of carbon atoms packed into a hexagonal lattice that hosts two spin-degenerate pairs of massless two-dimensional Dirac fermions with different chirality. It is known that the existence of a non-zero electric polarization in reduced momentum space, which is associated with a hidden chiral symmetry, leads to the zero-energy flat band of a zigzag nanoribbon and some anomalous transport properties. Here it is proposed that the Adler-Bell-Jackiw chiral anomaly, or non-conservation of chiral charges of Dirac fermions at different valleys, can be realized in a confined ribbon of finite width, even in the absence of a magnetic field. In the laterally diffusive regime, the finite-size correction to conductivity is always positive and is inversely proportional to the square of the lateral dimension W, which is different from the finite-size correction inversely proportional to W from the boundary modes. This anomalous finite-size conductivity reveals the signature of the chiral anomaly in graphene, and it is measurable experimentally. This finding provides an alternative platform to explore purely quantum mechanical effects in graphene.

  10. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
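
    The flavor of such an interval can be sketched by inverting a hypergeometric tail test for the population allele count (a simplified stand-in, assuming scipy and treating the 2n sampled gene copies as drawn without replacement from the 2N copies in the population; the authors' method is more general):

        from scipy.stats import hypergeom

        def allele_freq_ci(N, n, k, conf=0.95):
            """Confidence interval for a population allele frequency.

            N: population size (diploid individuals), n: sampled individuals,
            k: allele copies observed among the 2n sampled gene copies.
            """
            alpha = 1 - conf
            plausible = [K for K in range(2 * N + 1)
                         if hypergeom.cdf(k, 2 * N, K, 2 * n) > alpha / 2
                         and hypergeom.sf(k - 1, 2 * N, K, 2 * n) > alpha / 2]
            return min(plausible) / (2 * N), max(plausible) / (2 * N)

        print(allele_freq_ci(N=200, n=30, k=18))   # e.g. 18 copies seen among 60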

  11. A stochastic-field description of finite-size spiking neural networks.

    Science.gov (United States)

    Dumont, Grégory; Payeur, Alexandre; Longtin, André

    2017-08-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity, the density of active neurons per unit time, is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics.

  12. Stochastic synchronization in finite size spiking networks

    Science.gov (United States)

    Doiron, Brent; Rinzel, John; Reyes, Alex

    2006-09-01

    We study a stochastic synchronization of spiking activity in feedforward networks of integrate-and-fire model neurons. A stochastic mean field analysis shows that synchronization occurs only when the network size is sufficiently small. This gives evidence that the dynamics, and hence processing, of finite size populations can be drastically different from that observed in the infinite size limit. Our results agree with experimentally observed synchrony in cortical networks, and further strengthen the link between synchrony and propagation in cortical systems.

  13. Finite-size scaling of survival probability in branching processes

    OpenAIRE

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Alvaro

    2014-01-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We reveal the finite-size scaling law of the survival probability for a given branching process ruled by a probability distribution of the number of offspring per element whose standard deviation is finite, obtaining the exact scaling function as well as the critical exponents. Our findings prove the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors.

  14. Finite size effects and chiral symmetry breaking in quenched three-dimensional QED

    International Nuclear Information System (INIS)

    Hands, S.; Kogut, J.B.

    1990-01-01

    Finite size effects and the chiral condensate are studied in three-dimensional QED by the Lanczos and the conjugate-gradient algorithms. Very substantial finite size effects are observed, but studies on L^3 lattices with L ranging from 8 to 80 indicate the development of a non-vanishing chiral condensate in the continuum limit of the theory. The systematics of the finite size effects and the fermion mass dependence in the conjugate-gradient algorithm are clarified in this extensive study. (orig.)

  15. Finite size scaling and phenomenological renormalization

    International Nuclear Information System (INIS)

    Derrida, B.; Seze, L. de; Vannimenus, J.

    1981-05-01

    The basic equations of the phenomenological renormalization method are recalled. A simple derivation using finite-size scaling is presented. The convergence of the method is studied analytically for the Ising model. Using this method we give predictions for 2D bond percolation. Finally we discuss how the method can be applied to random systems.

  16. Theory of critical phenomena in finite-size systems: scaling and quantum effects

    CERN Document Server

    Brankov, Jordan G; Tonchev, Nicholai S

    2000-01-01

    The aim of this book is to familiarise the reader with the rich collection of ideas, methods and results available in the theory of critical phenomena in systems with confined geometry. The existence of universal features of the finite-size effects arising due to highly correlated classical or quantum fluctuations is explained by the finite-size scaling theory. This theory (1) offers an interpretation of experimental results on finite-size effects in real systems; (2) gives the most reliable tool for extrapolation to the thermodynamic limit of data obtained by computer simulations; (3) reveals

  17. Deconfinement phase transition and finite-size scaling in SU(2) lattice gauge theory

    International Nuclear Information System (INIS)

    Mogilevskij, O.A.

    1988-01-01

    A calculation technique for deconfinement phase transition parameters, based on finite-size scaling theory, is suggested. The essence of the technique lies in constructing the universal scaling function from numerical data obtained on finite lattices of different sizes, and in extracting the phase transition parameters of the infinite lattice system. The finite-size scaling technique was originally developed for spin systems. The critical index β of the Polyakov loop and the deconfinement temperature of SU(2) lattice gauge theory are calculated using this technique. The value obtained agrees with the critical index of the magnetization in the three-dimensional Ising model.

  18. Finite size scaling and spectral density studies

    International Nuclear Information System (INIS)

    Berg, B.A.

    1991-01-01

    Finite size scaling (FSS) and spectral density (SD) studies are reported for the deconfining phase transition. This talk concentrates on Monte Carlo (MC) results for pure SU(3) gauge theory, obtained in collaboration with Alves and Sanielevici, but the methods are expected to be useful for full QCD as well. (orig.)

  19. f_B from finite size effects in lattice QCD

    International Nuclear Information System (INIS)

    Guagnelli, M.; Palombi, F.; Petronzio, R.; Tantalo, N.

    2003-01-01

    We discuss a novel method to calculate f_B on the lattice, introduced in [1], based on the study of the dependence of finite size effects upon the heavy quark mass of flavoured mesons and on a non-perturbative recursive finite size technique. This method avoids the systematic errors related to extrapolations from the static limit or to the tuning of the coefficients of the effective Lagrangian, and the results admit an extrapolation to the continuum limit. We show the results of a first estimate at finite lattice spacing, but close to the continuum limit, giving f_B = 170(11)(5)(22) MeV. We also obtain f_{B_s} = 192(9)(5)(24) MeV. The first error is statistical, the second is our estimate of the systematic error from the method and the third the systematic error from the specific approximations adopted in this first exploratory calculation. The method can be generalized to two-scale problems in lattice QCD.

  1. Finite-size modifications of the magnetic properties of clusters

    DEFF Research Database (Denmark)

    Hendriksen, Peter Vang; Linderoth, Søren; Lindgård, Per-Anker

    1993-01-01

    …relative to the bulk, and the consequent neutron-scattering cross section exhibits discretely spaced, wave-vector-broadened eigenstates. The implications of the finite size for thermodynamic properties, such as the temperature dependence of the magnetization and the critical temperature, are also elucidated. We find the temperature dependence of the cluster magnetization to be well described by an effective power law, M̄ ∝ 1 - BT^α, with a size-dependent, but structure-independent, exponent larger than the bulk value. The critical temperature of the clusters is calculated from the spin-wave spectrum by a method based on the correlation theory and the spherical approximation generalized to the case of finite systems. A size-dependent reduction of the critical temperature by up to 50% for the smallest clusters is found. The trends found for the model clusters are extrapolated…

  2. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve the performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled by applying a compressive sensing (CS) algorithm to the rough, blurred estimate previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution of the CS step as its initial guess. The results show the great potential of the proposed CS-based sparse recovery for improving the accuracy of sparse solutions in large-scale 3D EIT. (paper)

  3. Three-point correlation functions of giant magnons with finite size

    International Nuclear Information System (INIS)

    Ahn, Changrim; Bozhilov, Plamen

    2011-01-01

    We compute holographic three-point correlation functions or structure constants of a zero-momentum dilaton operator and two (dyonic) giant magnon string states with a finite-size length in the semiclassical approximation. We show that the semiclassical structure constants match exactly with the three-point functions between two su(2) magnon single trace operators with finite size and the Lagrangian in the large 't Hooft coupling constant limit. A special limit J>>√(λ) of our result is compared with the relevant result based on the Luescher corrections.

  4. Coulomb systems seen as critical systems: Finite-size effects in two dimensions

    International Nuclear Information System (INIS)

    Jancovici, B.; Manificat, G.; Pisani, C.

    1994-01-01

    It is known that the free energy at criticality of a finite two-dimensional system of characteristic size L has in general a term which behaves like log L as L → ∞; the coefficient of this term is universal. There are solvable models of two-dimensional classical Coulomb systems which exhibit the same finite-size correction (except for its sign) although the particle correlations are short-ranged, i.e., noncritical. Actually, the electrical potential and electrical field correlations are critical at all temperatures (as long as the Coulomb system is a conductor), as a consequence of the perfect screening property of Coulomb systems. This is why Coulomb systems have to exhibit critical finite-size effects

  5. Finite-size effects in the three-state quantum asymmetric clock model

    International Nuclear Information System (INIS)

    Gehlen, G. v.; Rittenberg, V.

    1983-04-01

    The one-dimensional quantum Hamiltonian of the asymmetric three-state clock model is studied using finite-size scaling. Various boundary conditions are considered on chains containing up to eight sites. We calculate the boundary of the commensurate phase and the mass gap index. The model shows an interesting finite-size dependence in connection with the presence of the incommensurate phase, indicating that for the infinite system there is no Lifshitz point. (orig.)

  6. Multipartite geometric entanglement in finite size XY model

    Energy Technology Data Exchange (ETDEWEB)

    Blasone, Massimo; Dell'Anno, Fabio; De Siena, Silvio; Giampaolo, Salvatore Marco; Illuminati, Fabrizio, E-mail: blasone@sa.infn.i [Dipartimento di Matematica e Informatica, Universita degli Studi di Salerno, Via Ponte don Melillo, I-84084 Fisciano (Italy)

    2009-06-01

    We investigate the behavior of the multipartite entanglement in the finite size XY model by means of the hierarchical geometric measure of entanglement. By selecting specific components of the hierarchy, we study both global entanglement and genuinely multipartite entanglement.

  7. Finite size effects in quark-gluon plasma formation

    International Nuclear Information System (INIS)

    Gopie, Andy; Ogilvie, Michael C.

    1999-01-01

    Using lattice simulations of quenched QCD we estimate the finite size effects present when a gluon plasma equilibrates in a slab geometry, i.e., finite width but large transverse dimensions. Significant differences are observed in the free energy density for the slab when compared with bulk behavior. A small shift in the critical temperature is also seen. The free energy required to liberate heavy quarks relative to bulk is measured using Polyakov loops; the additional free energy required is on the order of 30-40 MeV at 2-3 T_c.

  8. Finite-size effects on multibody neutrino exchange

    CERN Document Server

    Abada, A; Rodríguez-Quintero, J; Abada, As

    1998-01-01

    The effect of multibody massless neutrino exchanges between neutrons inside a finite-size neutron star is studied. We use an effective Lagrangian, which incorporates the effect of the neutrons on the neutrinos. Following Schwinger, it is shown that the total interaction energy density is computed by comparing the zero point energy of the neutrino sea with and without the star. It has already been shown that in an infinite-size star the total energy due to neutrino exchange vanishes exactly. The opposite claim that massless neutrino exchange would produce a huge energy is due to an improper summation of an infrared-divergent quantity. The same vanishing of the total energy has been proved exactly in the case of a finite star in a one-dimensional toy model. Here we study the three-dimensional case. We first consider the effect of a sharp star border, assumed to be a plane. We find that there is a non-vanishing of the zero point energy density difference between the inside and the outside due to the refraction ...

  9. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst, so isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of the data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by the number of different ratios, the number of points belonging to each ratio group, and the ratio (i.e. slope) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control parameters.
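
    A stripped-down sketch of this EM idea for regression lines through the origin (fixed number of groups, common known noise scale, synthetic data; the published implementation, including the dropping of small groups, is more elaborate):

        import numpy as np

        def em_isotope_ratios(x, y, n_ratios=2, sigma=0.02, iters=200):
            """EM fit of a mixture of lines y ~ r*x; slopes r estimate isotope ratios."""
            r = np.quantile(y / x, np.linspace(0.25, 0.75, n_ratios))  # initial slopes
            w = np.full(n_ratios, 1.0 / n_ratios)                      # mixing weights
            for _ in range(iters):
                # E-step: responsibility of each line for each point (Gaussian residuals)
                res = y[:, None] - x[:, None] * r[None, :]
                logp = -0.5 * (res / sigma) ** 2 + np.log(w)
                p = np.exp(logp - logp.max(axis=1, keepdims=True))
                p /= p.sum(axis=1, keepdims=True)
                # M-step: weighted least-squares slope and weight for each group
                r = (p * x[:, None] * y[:, None]).sum(0) / (p * x[:, None] ** 2).sum(0)
                w = p.mean(axis=0)
            return r, w

        rng = np.random.default_rng(1)
        x = rng.uniform(1.0, 10.0, 300)                         # e.g. 238U intensities
        true_r = np.where(np.arange(300) < 150, 0.0072, 0.0200)
        y = true_r * x + rng.normal(0.0, 0.02, 300)             # e.g. 235U intensities
        print(em_isotope_ratios(x, y))                          # slopes near the two ratios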

  10. Isobaric expansion coefficient and isothermal compressibility for a finite-size ideal Fermi gas system

    International Nuclear Information System (INIS)

    Su, Guozhen; Chen, Liwei; Chen, Jincan

    2014-01-01

    Due to quantum size effects (QSEs), the isobaric thermal expansion coefficient and isothermal compressibility, well defined for macroscopic systems, are invalid for finite-size systems. The two parameters are redefined and calculated for a finite-size ideal Fermi gas confined in a rectangular container. It is found that the isobaric thermal expansion coefficient and isothermal compressibility are generally anisotropic, i.e., they are generally different in different directions. Moreover, it is found the thermal expansion coefficient may be negative in some directions under the condition that the pressures in all directions are kept constant. - Highlights: • Isobaric thermal expansion coefficient and isothermal compressibility are redefined. • The two parameters are calculated for a finite-size ideal Fermi gas. • The two parameters are generally anisotropic for a finite-size system. • Isobaric thermal expansion coefficient may be negative in some directions

  11. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    The frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to limitations in rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis, with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, where relatively smaller samples showed a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, where the relatively smaller sample exhibits a lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.

  12. Exchange bias in finite sized NiO nanoparticles with Ni clusters

    International Nuclear Information System (INIS)

    Gandhi, Ashish Chhaganlal; Lin, Jauyn Grace

    2017-01-01

    Structural and magnetic properties of finite sized NiO nanoparticles are investigated with synchrotron X-ray diffraction (XRD), transmission electron microscopy, magnetometry and ferromagnetic resonance (FMR) spectroscopy. A minor Ni phase is detected with synchrotron XRD, attributed to the oxygen defects in the NiO core. A considerable exchange bias of ~100 Oe is observed at 50 K; it drops abruptly and vanishes above 150 K, in association with the reduction of frozen spins. FMR data indicate a strong interaction between ferromagnetic (FM) and antiferromagnetic (AFM) phases below 150 K, consistent with the picture of isolated FM clusters in an AFM matrix. - Highlights: • Structural and magnetic properties of finite sized NiO nanoparticles are systematically investigated with several advanced techniques. • A strong interaction between ferromagnetic and antiferromagnetic phases is found below 150 K. • Exchange bias field in finite sized NiO nanoparticles is due to the anisotropy energy of Ni clusters overriding the domain wall energy of NiO.

  13. Finite-size scaling for quantum chains with an oscillatory energy gap

    International Nuclear Information System (INIS)

    Hoeger, C.; Gehlen, G. von; Rittenberg, V.

    1984-07-01

    We show that the existence of zeroes of the energy gap for finite quantum chains is related to a nonvanishing wavevector. Finite-size scaling Ansätze are formulated for incommensurable and oscillatory structures. The Ansätze are verified in the one-dimensional XY model in a transverse field. (orig.)

  14. Finite-Size Effects for Some Bootstrap Percolation Models

    NARCIS (Netherlands)

    Enter, A.C.D. van; Adler, Joan; Duarte, J.A.M.S.

    The consequences of Schonmann's new proof that the critical threshold is unity for certain bootstrap percolation models are explored. It is shown that this proof provides an upper bound for the finite-size scaling in these systems. Comparison with data for one case demonstrates that this scaling

  15. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

In the practical use of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean estimate under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit in a sample. In many real-life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units within a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
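
    As a point of reference for the allocation problem described above, the following Python sketch computes the classical cost-weighted Neyman allocation for a single characteristic under a linear cost constraint; the paper's goal-programming formulation generalizes this to several characteristics and a nonlinear travel-cost term. All stratum figures are invented for illustration.

    ```python
    import numpy as np

    # Cost-weighted Neyman allocation: minimize Var(mean) = sum(W_h^2 S_h^2 / n_h)
    # subject to the linear cost constraint sum(c_h * n_h) = C_total.
    # Closed-form optimum: n_h proportional to W_h * S_h / sqrt(c_h).
    W = np.array([0.40, 0.35, 0.25])   # stratum weights N_h / N (invented)
    S = np.array([12.0, 8.0, 20.0])    # stratum standard deviations (invented)
    c = np.array([4.0, 1.0, 9.0])      # per-unit sampling cost c_h (invented)
    C_total = 600.0                    # total variable-cost budget

    n = C_total * (W * S / np.sqrt(c)) / np.sum(W * S * np.sqrt(c))
    n = np.maximum(np.round(n), 2).astype(int)   # integer, at least 2 per stratum

    print("allocation:", n, "cost:", float(c @ n))
    print("variance of stratified mean:", float(np.sum(W**2 * S**2 / n)))
    ```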

  17. Transient hydrodynamic finite-size effects in simulations under periodic boundary conditions

    Science.gov (United States)

    Asta, Adelchi J.; Levesque, Maximilien; Vuilleumier, Rodolphe; Rotenberg, Benjamin

    2017-06-01

We use lattice-Boltzmann and analytical calculations to investigate transient hydrodynamic finite-size effects induced by the use of periodic boundary conditions. These effects are inevitable in simulations at the molecular, mesoscopic, or continuum levels of description. We analyze the transient response to a local perturbation in the fluid and obtain the local velocity correlation function via linear response theory. This approach is validated by comparing the finite-size effects on the steady-state velocity with the known results for the diffusion coefficient. We next investigate the full time dependence of the local velocity autocorrelation function. We find at long times a crossover between the expected t^{-3/2} hydrodynamic tail and an oscillatory exponential decay, and study the scaling with the system size of the crossover time, exponential rate and amplitude, and oscillation frequency. We interpret these results from the analytic solution of the compressible Navier-Stokes equation for the slowest modes, which are set by the system size. The present work not only provides a comprehensive analysis of hydrodynamic finite-size effects in bulk fluids, which arise regardless of the level of description and simulation algorithm, but also establishes the lattice-Boltzmann method as a suitable tool to investigate such effects in general.

  18. Finite-key-size effect in a commercial plug-and-play QKD system

    Science.gov (United States)

    Chaiwongkhot, Poompong; Sajeed, Shihan; Lydersen, Lars; Makarov, Vadim

    2017-12-01

A security evaluation against the finite-key-size effect was performed for a commercial plug-and-play quantum key distribution (QKD) system. We demonstrate the ability of an eavesdropper to force the system to distill a key from a shorter sifted key. We also derive a key-rate equation that is specific to this system. This equation provides bounds above the upper bound on the secure key length under finite-key-size analysis. From this equation and our experimental data, we show that the keys distilled from the shorter sifted keys fall above our bound; thus, their security is not covered by finite-key-size analysis. Experimentally, we could consistently force the system to generate keys outside of the bound. We also tested the manufacturer's software update. Although all the keys after the patch fall under our bound, their security cannot be guaranteed under this analysis. Our methodology can be used for security certification and standardization of QKD systems.

  19. Finite size effects and symmetry breaking in the evolution of networks of competing Boolean nodes

    International Nuclear Information System (INIS)

    Liu, M; Bassler, K E

    2011-01-01

    Finite size effects on the evolutionary dynamics of Boolean networks are analyzed. In the model considered, Boolean networks evolve via a competition between nodes that punishes those in the majority. Previous studies have found that large networks evolve to a statistical steady state that is both critical and highly canalized, and that the evolution of canalization, which is a form of robustness found in genetic regulatory networks, is associated with a particular symmetry of the evolutionary dynamics. Here, it is found that finite size networks evolve in a fundamentally different way than infinitely large networks do. The symmetry of the evolutionary dynamics of infinitely large networks that selects for canalizing Boolean functions is broken in the evolutionary dynamics of finite size networks. In finite size networks, there is an additional selection for input-inverting Boolean functions that output a value opposite to the majority of input values. The reason for the symmetry breaking in the evolutionary dynamics is found to be due to the need for nodes in finite size networks to behave differently in order to cooperate so that the system collectively performs as efficiently as possible. The results suggest that both finite size effects and symmetry are fundamental for understanding the evolution of real-world complex networks, including genetic regulatory networks.

  20. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis

    DEFF Research Database (Denmark)

    Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.

    2004-01-01

    A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...

  1. Finite-size-scaling analysis of subsystem data in the dilute Ising model

    International Nuclear Information System (INIS)

    Hennecke, M.

    1993-01-01

Monte Carlo simulation results for the magnetization of subsystems of finite lattices are used to determine the critical temperature and a critical exponent of the simple-cubic Ising model with quenched site dilution, at a concentration of p=40%. Particular attention is paid to the effect of the finite size of the systems from which the subsystem results are obtained. This finiteness of the lattices involved is shown to be a source of large deviations of critical temperatures and exponents estimated from subsystem data from their values in the thermodynamic limit. By the use of different lattice sizes, the results T_c(40%) = 1.209±0.002 and ν(40%) = 0.78±0.01 could be extrapolated.

  2. Finite-size effects for anisotropic bootstrap percolation : Logarithmic corrections

    NARCIS (Netherlands)

    van Enter, Aernout C. D.; Hulshof, Tim

    In this note we analyse an anisotropic, two-dimensional bootstrap percolation model introduced by Gravner and Griffeath. We present upper and lower bounds on the finite-size effects. We discuss the similarities with the semi-oriented model introduced by Duarte.

  3. Finite-size effects for anisotropic bootstrap percolation: logarithmic corrections

    NARCIS (Netherlands)

    Enter, van A.C.D.; Hulshof, T.

    2007-01-01

    In this note we analyse an anisotropic, two-dimensional bootstrap percolation model introduced by Gravner and Griffeath. We present upper and lower bounds on the finite-size effects. We discuss the similarities with the semi-oriented model introduced by Duarte.

  4. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    Science.gov (United States)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
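
    The role of finite-sample fluctuation bounds in such analyses can be illustrated generically: below, an observed error rate is upper-bounded with Hoeffding's inequality, a stand-in for the sharper binomial-tail bounds the authors employ; the numbers are illustrative.

    ```python
    import math

    def hoeffding_upper(k, n, eps):
        """Upper-bound the true error rate given k errors in n samples,
        failing with probability at most eps (Hoeffding's inequality).
        A generic stand-in for the binomial/hypergeometric tail bounds
        used in finite-key analyses; not the paper's exact estimator."""
        return k / n + math.sqrt(math.log(1.0 / eps) / (2.0 * n))

    # The penalty term shrinks as 1/sqrt(n): with few sampled bits the
    # conservative error estimate (and hence the key rate) is much worse.
    for n in (10**3, 10**5, 10**7):
        k = int(0.02 * n)               # 2% observed error rate
        print(n, round(hoeffding_upper(k, n, 1e-10), 4))
    ```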

  5. Finite-size scaling of clique percolation on two-dimensional Moore lattices

    Science.gov (United States)

    Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong

    2018-05-01

    Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.

  6. Geometric measures of multipartite entanglement in finite-size spin chains

    Energy Technology Data Exchange (ETDEWEB)

    Blasone, M; Dell' Anno, F; De Siena, S; Giampaolo, S M; Illuminati, F, E-mail: illuminati@sa.infn.i [Dipartimento di Matematica e Informatica, Universita degli Studi di Salerno, Via Ponte don Melillo, I-84084 Fisciano (Italy)

    2010-09-01

We investigate the behaviour of multipartite entanglement in finite-size quantum spin systems, resorting to a hierarchy of geometric measures of multipartite entanglement recently introduced in the literature. In particular, we investigate the ground-state entanglement in the XY model defined on finite chains of N sites with periodic boundary conditions. We analyse the behaviour of the geometric measures of (N−1)-partite and (N/2)-partite entanglement and compare them with the Wei-Goldbart geometric measure of global entanglement.

  7. Geometric measures of multipartite entanglement in finite-size spin chains

    International Nuclear Information System (INIS)

    Blasone, M; Dell'Anno, F; De Siena, S; Giampaolo, S M; Illuminati, F

    2010-01-01

We investigate the behaviour of multipartite entanglement in finite-size quantum spin systems, resorting to a hierarchy of geometric measures of multipartite entanglement recently introduced in the literature. In particular, we investigate the ground-state entanglement in the XY model defined on finite chains of N sites with periodic boundary conditions. We analyse the behaviour of the geometric measures of (N−1)-partite and (N/2)-partite entanglement and compare them with the Wei-Goldbart geometric measure of global entanglement.

  8. Finite size effects in neutron star and nuclear matter simulations

    Energy Technology Data Exchange (ETDEWEB)

    Giménez Molinelli, P.A., E-mail: pagm@df.uba.ar; Dorso, C.O.

    2015-01-15

In this work we study molecular dynamics simulations of symmetric nuclear and neutron star matter using a semi-classical nucleon interaction model. Our aim is to gain insight into the nature of the so-called “finite size effects”, unavoidable in this kind of simulation, and to understand what they actually affect. To do so, we explore different geometries for the periodic boundary conditions imposed on the simulation cell: cube, hexagonal prism and truncated octahedron. For nuclear matter simulations we show that, at sub-saturation densities and low temperatures, the solutions are non-homogeneous structures reminiscent of the “nuclear pasta” phases expected in neutron star matter simulations, but only one structure per cell and shaped by specific artificial aspects of the simulations; for the same physical conditions (i.e. number density and temperature) different cells yield different solutions. The particular shape of the solution at low enough temperature and a given density can be predicted analytically by surface minimization. We also show that even if this behavior is due to the imposition of periodic boundary conditions on finite systems, this does not mean that it vanishes for very large systems, and it is actually independent of the system size. We conclude that, for nuclear matter simulations, the cells' size sets the only characteristic length scale for the inhomogeneities, and the geometry of the periodic cell determines the shape of those inhomogeneities. To model neutron star matter we add a screened Coulomb interaction between protons, and perform simulations in the three cell geometries. Our simulations indeed produce the well known nuclear pasta, with (in most cases) several structures per cell. However, we find that for systems not too large, results are affected by finite size in different ways depending on the geometry of the cell. In particular, at the same physical conditions and system size, the hexagonal prism yields a […]

  9. Finite size effects on the helical edge states on the Lieb lattice

    International Nuclear Information System (INIS)

    Chen Rui; Zhou Bin

    2016-01-01

For a two-dimensional Lieb lattice, that is, a line-centered square lattice, the inclusion of the intrinsic spin–orbit (ISO) coupling opens a topologically nontrivial gap, and gives rise to the quantum spin Hall (QSH) effect characterized by two pairs of gapless helical edge states within the bulk gap. Generally, due to the finite size effect in QSH systems, the edge states on the two sides of a strip of finite width can couple together to open a gap in the spectrum. In this paper, we investigate the finite size effect of helical edge states on the Lieb lattice with ISO coupling under three different kinds of boundary conditions, i.e., the straight, bearded and asymmetry edges. The spectrum and wave function of edge modes are derived analytically for a tight-binding model on the Lieb lattice. For a strip Lieb lattice with two straight edges, the ISO coupling induces the Dirac-like bulk states to localize at the edges and become helical edge states with the same Dirac-like spectrum. Moreover, it is found that in the case with two straight edges the gapless Dirac-like spectrum remains unchanged as the width of the strip Lieb lattice decreases, and no gap is opened in the edge band. It is concluded that the finite size effect of QSH states is absent in the case with the straight edges. However, in the other two cases with the bearded and asymmetry edges, the energy gap induced by the finite size effect still opens as the width of the strip decreases. It is also proposed that the edge band dispersion can be controlled by applying an on-site potential energy on the outermost atoms. (paper)

  10. Finite size effects in the intermittency analysis of the fragment-size correlations

    International Nuclear Information System (INIS)

    Bozek, P.; Ploszajczak, M.; Tucholski, A.

    1991-01-01

The influence of finite size effects on the fragment-size correlations in nuclear multifragmentation is studied using the method of scaled factorial moments for a 1-dim percolation model and for a statistical model of the fragmentation process, which for a certain value of a tuning parameter yields the power-law behaviour of the fragment-size distribution. It is shown that statistical models of this type contain only repulsive correlations due to the conservation laws. The comparison of the results with those obtained in the non-critical 1-dim percolation and in the 3-dim percolation around the critical point is presented. Correlations in the 1-dim percolation model are analysed analytically and the mechanism of the attractive correlations in 1-dim and 3-dim is identified. (author) 30 refs., 7 figs

  11. Simulation of the electron acoustic instability for a finite-size electron beam system

    International Nuclear Information System (INIS)

    Lin, C.S.; Winske, D.

    1987-01-01

Satellite observations at midlatitudes (≅20,000 km) near the earth's dayside polar cusp boundary layer indicate that the upward electron beams have a narrow latitudinal width, up to 0.1°. In the cusp boundary layer, where the electron population consists of a finite-size electron beam in a background of uniform cold and hot electrons, the electron acoustic mode is unstable inside the electron beam but damped outside the electron beam. Simulations of the electron acoustic instability for a finite-size beam system are carried out with a particle-in-cell code to investigate the heating phenomena associated with the instability and the width of the heating region. The simulations show that the finite-size electron beam radiates electrostatic electron acoustic waves. The decay length of the electron acoustic waves outside the beam in the simulation agrees with the spatial decay length derived from the linear dispersion equation.

  12. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic ‘variance’ of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps in the price and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas […]

  13. Finite-size giant magnons on η-deformed AdS_5×S^5

    Directory of Open Access Journals (Sweden)

    Changrim Ahn

    2014-10-01

We consider strings moving in the R_t×S_η^3 subspace of the η-deformed AdS_5×S^5 and obtain a class of solutions depending on several parameters. They are characterized by the string energy and two angular momenta. Finite-size dyonic giant magnon belongs to this class of solutions. Further on, we restrict ourselves to the case of giant magnon with one nonzero angular momentum, and obtain the leading finite-size correction to the dispersion relation.

  14. Finite-size scaling theory and quantum Hamiltonian field theory: the transverse Ising model

    International Nuclear Information System (INIS)

    Hamer, C.J.; Barber, M.N.

    1979-01-01

    Exact results for the mass gap, specific heat and susceptibility of the one-dimensional transverse Ising model on a finite lattice are generated by constructing a finite matrix representation of the Hamiltonian using strong-coupling eigenstates. The critical behaviour of the limiting infinite chain is analysed using finite-size scaling theory. In this way, excellent estimates (to within 1/2% accuracy) are found for the critical coupling and the exponents α, ν and γ
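
    A minimal sketch of the finite-lattice programme described above, assuming dense exact diagonalization instead of the authors' strong-coupling matrix construction: the scaled gap N·Δ(g) of the transverse Ising chain is computed for several chain lengths, and its near-crossings for successive N locate the critical coupling g_c = 1 (phenomenological renormalization).

    ```python
    import numpy as np
    from functools import reduce

    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])

    def op_at(op, site, n):
        """Embed a single-site operator at position `site` of an n-spin chain."""
        mats = [op if i == site else np.eye(2) for i in range(n)]
        return reduce(np.kron, mats)

    def gap(n, g):
        """Energy gap of H = -sum_i sz_i sz_{i+1} - g sum_i sx_i (periodic bc)."""
        H = np.zeros((2**n, 2**n))
        for i in range(n):
            H -= op_at(sz, i, n) @ op_at(sz, (i + 1) % n, n)
            H -= g * op_at(sx, i, n)
        e = np.linalg.eigvalsh(H)
        return e[1] - e[0]

    # Curves of the scaled gap N*Delta(g) for successive N (nearly) cross at g_c.
    for n in (4, 6, 8, 10):
        print(n, [round(n * gap(n, g), 3) for g in (0.9, 1.0, 1.1)])
    ```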

  15. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) / standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more the precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can then be generalized to the target population.
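
    A short sketch of the proportion-based calculation summarized above, using the conventional 95% z-value and the usual finite population correction; the helper name and all numbers are illustrative.

    ```python
    import math

    def sample_size_proportion(p, d, z=1.96, population=None):
        """n needed to estimate a proportion p within margin of error d,
        with an optional finite-population correction."""
        n = z**2 * p * (1.0 - p) / d**2
        if population is not None:
            n = n / (1.0 + (n - 1.0) / population)
        return math.ceil(n)

    print(sample_size_proportion(0.30, 0.05))                  # 323
    print(sample_size_proportion(0.30, 0.025))                 # 1291: halving the
                                                               # margin ~quadruples n
    print(sample_size_proportion(0.30, 0.05, population=2000)) # 278 after correction
    ```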

  16. Finite size and dynamical effects in pair production by an external field

    International Nuclear Information System (INIS)

    Martin, C.; Vautherin, D.

    1988-12-01

    We evaluate the rate of pair production in a uniform electric field confined into a bounded region in space. Using the Balian-Bloch expansion of Green's functions we obtain explicit expressions for finite size corrections to Schwinger's formula. The case of a time-dependent boundary, relevant to describe energy deposition by quark-antiquark pair production in ultrarelativistic collisions, is also investigated. We find that finite size effects are important in nuclear collisions. They decrease when the strength of the chromo-electric field between the nuclei is large. As a result, the rate of energy deposition increases sharply with the mass number A of the colliding nuclei

  17. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  18. Simulation of finite size effects of the fiber bundle model

    Science.gov (United States)

    Hao, Da-Peng; Tang, Gang; Xun, Zhi-Peng; Xia, Hui; Han, Kui

    2018-01-01

In theory, the macroscopic fracture of materials should correspond to the thermodynamic limit of the fiber bundle model. However, simulating a fiber bundle model of infinite size is unrealistic. To study the finite size effects of the fiber bundle model, fiber bundle models of various sizes are simulated in detail. The effects of system size on the constitutive behavior, critical stress, maximum avalanche size, avalanche size distribution, and number of load-increase steps are explored. The simulation results imply that there is no characteristic size or cutoff size for the macroscopic mechanical and statistical properties of the model. The constitutive curves near macroscopic failure for various system sizes collapse well under a simple scaling relationship. Simultaneously, the introduction of a simple extrapolation method facilitates the acquisition of more accurate simulation results in the large-size limit, which is better for comparison with theoretical results.

  19. Pyroelectric properties of finite size ferroelectric thin films with structural transition zones

    International Nuclear Information System (INIS)

    Zhou Jing; Lue Tianquan; Sun Punan; Xie Wenguang; Cao Wenwu

    2009-01-01

A Fermi-type Green's function is used to study the pyroelectric properties of thin films of finite size in three dimensions, based on a modified transverse Ising model. The results demonstrate that a decrease in the lateral size of the film reduces the pyroelectric coefficient of the thin film.

  20. Distribution of quantum states in enclosures of finite size I

    International Nuclear Information System (INIS)

    Souto, J.H.; Chaba, A.N.

    1989-01-01

The expression for the density of states of a particle in a three-dimensional rectangular box of finite size can be obtained directly by Poisson's summation formula. The expression for the case of an enclosure in the form of an infinite rectangular slab is derived. (A.C.A.S.) [pt]

  1. Finite-size corrections to the free energies of crystalline solids

    NARCIS (Netherlands)

    Polson, J.M.; Trizac, E.; Pronk, S.; Frenkel, D.

    2000-01-01

We analyze the finite-size corrections to the free energy of crystals with a fixed center of mass. When we explicitly correct for the leading (ln N/N) corrections, the remaining free energy is found to depend linearly on 1/N. Extrapolating to the thermodynamic limit (N → ∞), we estimate the free energy […]

  2. The finite horizon economic lot sizing problem in job shops : the multiple cycle approach

    NARCIS (Netherlands)

    Ouenniche, J.; Bertrand, J.W.M.

    2001-01-01

This paper addresses the multi-product, finite horizon, static demand, sequencing, lot sizing and scheduling problem in a job shop environment where the planning horizon length is finite and fixed by management. The objective pursued is to minimize the sum of setup costs, and work-in-process and […]

  3. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
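
    A toy comparison of the two schemes on a one-variable reliability problem, P(X > 3) for standard normal X: descriptive sampling replaces random draws by deterministic equiprobable quantile values plus a random permutation, which removes the input sample variability between runs. The threshold and sizes are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def crude_mc(n):
        """Crude Monte Carlo: fully random standard-normal input samples."""
        return rng.standard_normal(n)

    def descriptive(n):
        """Descriptive sampling: deterministic equiprobable quantile values,
        randomness only in their order (one permutation per input variable)."""
        x = norm.ppf((np.arange(n) + 0.5) / n)
        return rng.permutation(x)

    def pf(x):                      # toy limit state: failure when X > 3
        return np.mean(x > 3.0)

    n, runs = 2000, 200
    print("CMCS spread:", np.std([pf(crude_mc(n)) for _ in range(runs)]))
    print("DS   spread:", np.std([pf(descriptive(n)) for _ in range(runs)]))
    # In this 1-D toy the DS estimate is unaffected by the permutation, so its
    # run-to-run spread is exactly zero; with several inputs DS still shrinks it.
    ```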

  4. Many-body localization in disorder-free systems: The importance of finite-size constraints

    Energy Technology Data Exchange (ETDEWEB)

    Papić, Z., E-mail: zpapic@perimeterinstitute.ca [School of Physics and Astronomy, University of Leeds, Leeds, LS2 9JT (United Kingdom); Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Stoudenmire, E. Miles [Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada); Abanin, Dmitry A. [Department of Theoretical Physics, University of Geneva, 24 quai Ernest-Ansermet, 1211 Geneva (Switzerland); Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5 (Canada)

    2015-11-15

Recently it has been suggested that many-body localization (MBL) can occur in translation-invariant systems, and candidate 1D models have been proposed. We find that such models, in contrast to MBL systems with quenched disorder, typically exhibit much more severe finite-size effects due to the presence of two or more vastly different energy scales. In a finite system, this can artificially split the density of states (DOS) into bands separated by large gaps. We argue that, for such models to faithfully represent the thermodynamic-limit behavior, the ratio of the relevant couplings must exceed a certain system-size-dependent cutoff, chosen such that the various bands in the DOS overlap one another. Setting the parameters this way to minimize finite-size effects, we study several translation-invariant MBL candidate models using exact diagonalization. Based on diagnostics including entanglement and local observables, we observe thermal (ergodic), rather than MBL-like, behavior. Our results suggest that MBL in translation-invariant systems with two or more very different energy scales is less robust than perturbative arguments suggest, possibly pointing to the importance of non-perturbative effects which induce delocalization in the thermodynamic limit.

  5. Finite-size scaling of the entanglement entropy of the quantum Ising chain with homogeneous, periodically modulated and random couplings

    International Nuclear Information System (INIS)

    Iglói, Ferenc; Lin, Yu-Cheng

    2008-01-01

    Using free-fermionic techniques we study the entanglement entropy of a block of contiguous spins in a large finite quantum Ising chain in a transverse field, with couplings of different types: homogeneous, periodically modulated and random. We carry out a systematic study of finite-size effects at the quantum critical point, and evaluate subleading corrections both for open and for periodic boundary conditions. For a block corresponding to a half of a finite chain, the position of the maximum of the entropy as a function of the control parameter (e.g. the transverse field) can define the effective critical point in the finite sample. On the basis of homogeneous chains, we demonstrate that the scaling behavior of the entropy near the quantum phase transition is in agreement with the universality hypothesis, and calculate the shift of the effective critical point, which has different scaling behaviors for open and for periodic boundary conditions
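
    The free-fermionic machinery invoked above can be illustrated on the simplest case, a critical hopping chain at half filling, where the block entanglement entropy follows from the eigenvalues of the block correlation matrix; the Ising chain itself requires the analogous Bogoliubov-de Gennes treatment, so this is a sketch of the principle only.

    ```python
    import numpy as np

    def block_entropy(L_total, L_block):
        """Entanglement entropy of an edge block of a free-fermion hopping
        chain (open bc, half filling) via the correlation-matrix method."""
        H = -np.eye(L_total, k=1) - np.eye(L_total, k=-1)   # hopping Hamiltonian
        _, v = np.linalg.eigh(H)
        occ = v[:, : L_total // 2]          # occupied single-particle modes
        C = occ @ occ.T                     # <c_i^dag c_j> in the ground state
        nu = np.linalg.eigvalsh(C[:L_block, :L_block])
        nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
        return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

    # For a critical chain, S grows like (c/6) ln L_block (c = 1, open boundary).
    for L in (8, 16, 32, 64):
        print(L, round(block_entropy(256, L), 4))
    ```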

  6. Finite size effects on hydrogen bonds in confined water

    International Nuclear Information System (INIS)

    Musat, R.; Renault, J.P.; Le Caer, S.; Pommeret, S.; Candelaresi, M.; Palmer, D.J.; Righini, R.

    2008-01-01

    Femtosecond IR spectroscopy was used to study water confined in 1-50 nm pores. The results show that even large pores induce significant changes (for example excited-state lifetimes) to the hydrogen-bond network, which are independent of pore diameter between 1 and 50 nm. Thus, the changes are not surface-induced but rather finite size effects, and suggest a confinement-induced enhancement of the acidic character of water. (authors)

  7. Directional anisotropy, finite size effect and elastic properties of hexagonal boron nitride

    International Nuclear Information System (INIS)

    Thomas, Siby; Ajith, K M; Valsakumar, M C

    2016-01-01

Classical molecular dynamics simulations have been performed to analyze the elastic and mechanical properties of two-dimensional (2D) hexagonal boron nitride (h-BN) using a Tersoff-type interatomic empirical potential. We present a systematic study of h-BN for various system sizes. Young’s modulus and Poisson’s ratio are found to be anisotropic for finite sheets whereas they are isotropic for the infinite sheet. Both of them increase with system size in accordance with a power law. It is concluded from the computed values of elastic constants that h-BN sheets, finite or infinite, satisfy Born’s criterion for mechanical stability. Due to the strong in-plane sp² bonds and the small mass of boron and nitrogen atoms, h-BN possesses high longitudinal and shear velocities. The variation of bending rigidity with system size is calculated using the Föppl–von Kármán approach by coupling the in-plane stretching and out-of-plane bending modes of the 2D h-BN. (paper)

  8. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    Science.gov (United States)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  9. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our aim is to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
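
    A simplified sketch of the service-multiplier calculation with a delta-method confidence interval, inflating the variance of the survey proportion by an assumed design effect; the function name, design-effect value and all inputs are illustrative, not the authors' code.

    ```python
    import math

    def multiplier_estimate(M, p_hat, n, design_effect=2.0):
        """Population size N = M / P with a delta-method 95% CI; Var(p_hat)
        is inflated by an assumed RDS design effect."""
        N = M / p_hat
        se_p = math.sqrt(design_effect * p_hat * (1 - p_hat) / n)
        se_N = (M / p_hat**2) * se_p        # delta method: Var(M/p) ~ (M/p^2)^2 Var(p)
        return N, (N - 1.96 * se_N, N + 1.96 * se_N)

    # Small P and small n make the interval very wide, as the paper stresses.
    print(multiplier_estimate(M=1000, p_hat=0.10, n=200))
    print(multiplier_estimate(M=1000, p_hat=0.10, n=800))
    ```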

  10. Finite size effects in the evaporation rate of 3He clusters

    International Nuclear Information System (INIS)

    Guirao, A.; Pi, M.; Barranco, M.

    1991-01-01

We have computed the density of states and the evaporation rate of ³He clusters, paying special attention to finite size effects which modify the ³He level density parameter and chemical potential from their bulk values. Ready-to-use liquid-drop expansions of these quantities are given. (orig.)

  11. Finite-size effect and the components of multifractality in financial volatility

    International Nuclear Information System (INIS)

    Zhou Weixing

    2012-01-01

Highlights: ► The apparent multifractality can be decomposed quantitatively. ► There is a marked finite-size effect in the detection of multifractality. ► The effective multifractality can be further decomposed into two components. ► A time series exhibits effective multifractality only if it possesses nonlinearity. ► The daily DJIA volatility is analyzed as an example. - Abstract: Many financial variables are found to exhibit multifractal nature, which is usually attributed to the influence of temporal correlations and fat-tailedness in the probability distribution (PDF). Based on the partition function approach of multifractal analysis, we show that there is a marked finite-size effect in the detection of multifractality, and the effective multifractality is the apparent multifractality after removing the finite-size effect. We find that the effective multifractality can be further decomposed into two components, the PDF component and the nonlinearity component. Referring to the normal distribution, we can determine the PDF component by comparing the effective multifractality of the original time series and the surrogate data that have a normal distribution and keep the same linear and nonlinear correlations as the original data. We demonstrate our method by taking the daily volatility data of Dow Jones Industrial Average from 26 May 1896 to 27 April 2007 as an example. Extensive numerical experiments show that a time series exhibits effective multifractality only if it possesses nonlinearity, and the PDF has an impact on the effective multifractality only when the time series possesses nonlinearity. Our method can also be applied to judge the presence of multifractality in time series from other complex systems and to determine its components.

  12. Finite-size giant magnons on η-deformed AdS_5×S^5

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Changrim, E-mail: ahn@ewha.ac.kr; Bozhilov, Plamen, E-mail: bozhilov@inrne.bas.bg

    2014-10-07

We consider strings moving in the R_t×S_η^3 subspace of the η-deformed AdS_5×S^5 and obtain a class of solutions depending on several parameters. They are characterized by the string energy and two angular momenta. Finite-size dyonic giant magnon belongs to this class of solutions. Further on, we restrict ourselves to the case of giant magnon with one nonzero angular momentum, and obtain the leading finite-size correction to the dispersion relation.

  13. Asymmetric fluid criticality. II. Finite-size scaling for simulations.

    Science.gov (United States)

    Kim, Young C; Fisher, Michael E

    2003-10-01

The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions L, focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded "complete" thermodynamic (L→∞) scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [Phys. Rev. E 67, 061506 (2003)] is extended to finite L, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when L→∞, the second temperature derivative d²μ_σ/dT² of the chemical potential along the phase boundary μ_σ(T) diverges when T→T_c⁻. The finite-size behavior of various special critical loci in the temperature-density or (T,ρ) plane, in particular the k-inflection susceptibility loci and the Q-maximal loci, derived from Q_L(T,⟨ρ⟩_L) ≡ ⟨m²⟩_L²/⟨m⁴⟩_L where m ≡ ρ−⟨ρ⟩_L, is carefully elucidated and shown to be of value in estimating T_c and ρ_c. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte, including an estimate of the correlation exponent ν that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.

  14. Practical continuous-variable quantum key distribution without finite sampling bandwidth effects.

    Science.gov (United States)

    Li, Huasheng; Wang, Chao; Huang, Peng; Huang, Duan; Wang, Tao; Zeng, Guihua

    2016-09-05

In a practical continuous-variable quantum key distribution system, the finite sampling bandwidth of the analog-to-digital converter employed at the receiver's side may lead to inaccurate results of pulse peak sampling. Errors in parameter estimation then result. Subsequently, the system performance decreases and security loopholes are exposed to eavesdroppers. In this paper, we propose a novel data acquisition scheme which consists of two parts, i.e., a dynamic delay adjusting module and a statistical power feedback-control algorithm. The proposed scheme may dramatically improve the data acquisition precision of pulse peak sampling and remove the finite sampling bandwidth effects. Moreover, the optimal peak sampling position of a pulse signal can be dynamically calibrated by monitoring the change of the statistical power of the sampled data in the proposed scheme. This helps to resist some practical attacks, such as the well-known local oscillator calibration attack.

  15. Fluctuations, Finite-Size Effects and the Thermodynamic Limit in Computer Simulations: Revisiting the Spatial Block Analysis Method

    Directory of Open Access Journals (Sweden)

    Maziar Heidari

    2018-03-01

The spatial block analysis (SBA) method has been introduced to efficiently extrapolate thermodynamic quantities from finite-size computer simulations of a large variety of physical systems. In the particular case of simple liquids and liquid mixtures, by subdividing the simulation box into blocks of increasing size and calculating volume-dependent fluctuations of the number of particles, it is possible to extrapolate the bulk isothermal compressibility and Kirkwood–Buff integrals in the thermodynamic limit. Only by explicitly including finite-size effects, ubiquitous in computer simulations, into the SBA method can the extrapolation to the thermodynamic limit be achieved. In this review, we discuss two of these finite-size effects in the context of the SBA method, due to (i) the statistical ensemble and (ii) the finite integration domains used in computer simulations. To illustrate the method, we consider prototypical liquids and liquid mixtures described by truncated and shifted Lennard–Jones (TSLJ) potentials. Furthermore, we show some of the most recent developments of the SBA method, in particular its use to calculate chemical potentials of liquids in a wide range of density/concentration conditions.
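
    The core of the SBA procedure, block-wise particle-number fluctuations, and the ensemble finite-size effect discussed above can both be seen already for an ideal gas: with N fixed (canonical constraint) the fluctuation ratio is suppressed to 1 − V_block/V_box instead of the grand-canonical value 1. A sketch with synthetic uniform configurations standing in for simulation frames:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in "MD frames": an ideal gas of N particles in a cubic box, N fixed.
    # The grand-canonical value of var(N_b)/<N_b> is 1; the fixed-N constraint
    # suppresses it to 1 - V_block/V_box, exactly the ensemble finite-size
    # effect the SBA extrapolation has to account for.
    L, N, frames = 10.0, 1000, 200
    pos = rng.uniform(0.0, L, size=(frames, N, 3))

    for n_div in (2, 4, 5):
        idx = np.floor(pos / (L / n_div)).astype(int)        # block index per axis
        flat = (idx[..., 0] * n_div + idx[..., 1]) * n_div + idx[..., 2]
        counts = np.array([np.bincount(f, minlength=n_div**3) for f in flat])
        ratio = counts.var() / counts.mean()
        print(f"block edge {L/n_div:4.1f}  var/mean = {ratio:.3f}"
              f"  (expected {1 - 1/n_div**3:.3f})")
    ```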

  16. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
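
    A compact demonstration of the fallacy, assuming synthetic data and scipy's two-sample t-test: with 100,000 observations per group, a trivial 0.02 SD group difference comes out extremely significant while the effect size remains negligible.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    n = 100_000                          # very large sample per group
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.02, 1.0, n)         # trivial 0.02 SD shift

    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    cohens_d = (b.mean() - a.mean()) / pooled_sd
    print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")   # tiny p, tiny d
    ```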

  17. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

Sample sizes must be ascertained in qualitative studies like in quantitative studies but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on […]; how these considerations apply in the planning and during data collection of a qualitative study is discussed.

  18. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

In this paper we compare through Monte Carlo simulations the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all […] the time domain parametric methods, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
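
    One of the semiparametric frequency-domain estimators typically entered in such comparisons is the Geweke-Porter-Hudak log-periodogram regression; below is a sketch applied to a synthetic ARFIMA(0,d,0) series with d = 0.3. The bandwidth m = √n is a common but arbitrary convention.

    ```python
    import numpy as np

    def gph(x, m=None):
        """Geweke-Porter-Hudak estimate of the fractional differencing
        parameter d: regress log I(lambda_j) on log(4 sin^2(lambda_j/2))."""
        n = len(x)
        m = m or int(np.sqrt(n))
        lam = 2.0 * np.pi * np.arange(1, m + 1) / n
        I = np.abs(np.fft.fft(x - x.mean())[1 : m + 1]) ** 2 / (2.0 * np.pi * n)
        return -np.polyfit(np.log(4.0 * np.sin(lam / 2.0) ** 2), np.log(I), 1)[0]

    rng = np.random.default_rng(3)
    d_true, n = 0.3, 4096
    psi = np.ones(n)                       # MA(inf) weights of (1-B)^(-d)
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d_true) / j
    x = np.convolve(rng.standard_normal(2 * n), psi)[n : 2 * n]  # drop burn-in
    print(gph(x))                          # should land in the vicinity of 0.3
    ```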

  19. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.

  20. Excited-state quantum phase transitions in systems with two degrees of freedom: II. Finite-size effects

    Energy Technology Data Exchange (ETDEWEB)

    Stránský, Pavel [Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 18000 Prague (Czech Republic); Macek, Michal [Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 18000 Prague (Czech Republic); Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, CT 06520-8120 (United States); Leviatan, Amiram [Racah Institute of Physics, The Hebrew University, 91904 Jerusalem (Israel); Cejnar, Pavel, E-mail: pavel.cejnar@mff.cuni.cz [Institute of Particle and Nuclear Physics, Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 18000 Prague (Czech Republic)

    2015-05-15

This article extends our previous analysis (Stránský et al., 2014) of Excited-State Quantum Phase Transitions (ESQPTs) in systems of dimension two. We focus on the oscillatory component of the quantum state density in connection with ESQPT structures accompanying a first-order ground-state transition. It is shown that a separable (integrable) system can develop rather strong finite-size precursors of an ESQPT, expressed as singularities in the oscillatory component of the state density. The singularities originate in effectively 1-dimensional dynamics and in some cases appear in multiple replicas with increasing excitation energy. Using a specific model example, we demonstrate that these precursors are rather resistant to the proliferation of chaotic dynamics. - Highlights: • Oscillatory components of state density and spectral flow studied near ESQPTs. • Enhanced finite-size precursors of ESQPT caused by fully/partly separable dynamics. • These precursors appear due to criticality of a subsystem with lower dimension. • Separability-induced finite-size effects disappear in case of fully chaotic dynamics.

  1. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
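
    The invariance prediction rests on one line of algebra: if a fixed pool of S noisy samples is shared evenly by k display items, each item's sensitivity scales as d′ ∝ √(S/k), so the sum of squared sensitivities k·d′² is constant. A numeric illustration with invented values:

    ```python
    # Sample-size model: sensitivity grows with the square root of the number
    # of stimulus samples an item receives.  With a fixed pool of S samples
    # shared by k items, d'_k = d'_1 * sqrt(1/k), hence sum(d'^2) = k * d'_k**2
    # is invariant across display sizes.
    S, d1 = 64, 2.0                        # pool size, single-item d' (invented)
    for k in (1, 2, 4, 8):
        dk = d1 * ((S / k) / S) ** 0.5     # per-item sensitivity at set size k
        print(k, round(k * dk**2, 6))      # constant: equals d1**2 = 4.0
    ```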

  2. A finite size scaling test of an SU(2) gauge-spin system

    International Nuclear Information System (INIS)

    Tomiya, M.; Hattori, T.

    1984-01-01

    We calculate the correlation functions in the SU(2) gauge-spin system with spins in the fundamental representation. We analyze the result making use of finite size scaling. There is a possibility that there are no second order phase transition lines in this model, contrary to previous assertions. (orig.)

  3. Thermodynamic theory of intrinsic finite-size effects in PbTiO3 nanocrystals. I. Nanoparticle size-dependent tetragonal phase stability

    Science.gov (United States)

    Akdogan, E. K.; Safari, A.

    2007-03-01

We propose a phenomenological intrinsic finite-size effect model for single-domain, mechanically free, and surface-charge-compensated PbTiO3 nanoparticles in ΔG–P⃗_s–ξ space, which rigorously describes the decrease in tetragonal phase stability with decreasing particle size ξ.

  4. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  5. Pairing mechanism in Bi-O superconductors: A finite-size chain calculation

    International Nuclear Information System (INIS)

    Aligia, A.A.; Nunez Regueiro, M.D.; Gagliano, E.R.

    1989-01-01

We have studied the pairing mechanism in BiO3 systems by calculating the binding energy of a pair of holes in finite Bi-O chains, for parameters that simulate three-dimensional behavior. In agreement with previous results using perturbation theory in the hopping t, for covalent Bi-O binding and parameters for which the parent compound has a disproportionated ground state, pairing induced by the presence of biexcitons is obtained for sufficiently large interatomic Coulomb repulsion. The analysis of appropriate correlation functions shows a rapid metallization of the system as t and the number of holes increase. This fact shrinks the region of parameters for which the finite-size calculations can be trusted without further study. The same model for other parameters yields pairing in two other regimes: bipolaronic and magnetic excitonic.

  6. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  7. Finite-sample instrumental variables inference using an asymptotically pivotal statistic

    NARCIS (Netherlands)

    Bekker, P; Kleibergen, F

    2003-01-01

We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on finite-sample properties in a […]

  8. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect of many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as of reaction rates, which depend on the interfacial area between the phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel, rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of a size distribution, according to specific requirements defined a priori. The methodology can be adopted regardless of the measurement technique used.

  9. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large even though the sample sizes were small; at the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even negligible effects could be detected as significant. This implies that researchers who cannot obtain large effect sizes tend to use larger samples to obtain significant results.
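
    A retrospective power analysis of this kind is easy to reproduce with standard tools. The sketch below, with illustrative numbers rather than the journal data, computes the achieved power of a two-sample t-test and the per-group sample size needed for 80% power, assuming the statsmodels package:

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Achieved power of a small-sample, large-effect design (e.g. a perception study)
    print(analysis.power(effect_size=0.8, nobs1=15, alpha=0.05))

    # Per-group sample size required to detect a medium effect (d = 0.5) at 80% power
    print(analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05))
    ```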

  10. Finite-size effects on current correlation functions

    Science.gov (United States)

    Chen, Shunda; Zhang, Yong; Wang, Jiao; Zhao, Hong

    2014-02-01

    We study why the calculation of current correlation functions (CCFs) still suffers from finite-size effects even when periodic boundary conditions are imposed. Two important one-dimensional, momentum-conserving systems are investigated as examples. Intriguingly, it is found that the state of a system recurs in the sense of the microcanonical ensemble average, and such recurrence may result in oscillations in CCFs. Meanwhile, we find that sound-mode collisions induce an extra time decay in a current, so that its correlation function decays faster (slower) in a smaller (larger) system. Based on these two unveiled mechanisms, a procedure for correctly evaluating the decay rate of a CCF is proposed, with which our analysis suggests that the global energy CCF decays as ~t^{-2/3} in the diatomic hard-core gas model and in a manner close to ~t^{-1/2} in the Fermi-Pasta-Ulam-β model.

  11. The square lattice Ising model on the rectangle II: finite-size scaling limit

    Science.gov (United States)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has a dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  12. Finite-size corrections in simulation of dipolar fluids

    Science.gov (United States)

    Belloni, Luc; Puibasset, Joël

    2017-12-01

    Monte Carlo simulations of dipolar fluids are performed at different numbers of particles, N = 100-4000. For each size of the cubic cell, the non-spherically symmetric pair distribution function g(r,Ω) is accumulated in terms of projections gmnl(r) onto rotational invariants. The observed N dependence is in very good agreement with the theoretical predictions for the finite-size corrections of different origins: the explicit corrections due to the absence of fluctuations in the number of particles within the canonical simulation, and the implicit corrections due to the coupling between the environment around a given particle and that around its images in the neighboring cells. The latter dominates in fluids of strong dipolar coupling, characterized by low compressibility and high dielectric constant. The ability to remove these corrections from the simulation data with great precision, combined with the use of very powerful anisotropic integral equation techniques, means that exact correlation functions in both real and Fourier space, Kirkwood-Buff integrals, and bridge functions can be derived from box sizes as small as N ≈ 100, even in the presence of long-range tails. In the presence of a dielectric discontinuity with the external medium surrounding the central box and its replicas within the Ewald treatment of the Coulombic interactions, the 1/N dependence of the gmnl(r) is shown to disagree with the yet well-accepted prediction in the literature.

  13. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    2011-01-01

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures, such as realised variance, realised multipower variation, and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic "variance" of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies. Here we study the impact of the jump activity, of the size of the jumps in the price, and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence of jumps.

  14. Finite-size resonance dielectric cylinder in a rectangular waveguide

    International Nuclear Information System (INIS)

    Chuprina, V.N.; Khizhnyak, N.A.

    1988-01-01

    The problem of resonant scattering of an electromagnetic wave by a dielectric circular cylinder of finite size in a rectangular waveguide is solved by a numerical-analytical method. The cylinder axes are parallel. The cylinder can be used as a resonance tuning element in accelerating SHF sections. At the numerical stage, we investigate the truncation of the systems of linear algebraic equations to which the relations of macroscopic electrodynamics, written in integro-differential form for this particular problem, are reduced by analytical transformations. Theoretical dependences of the insertion voltage standing-wave ratio on the generator wavelength, calculated for different values of the problem parameters, are constructed.

  15. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal, or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
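
    The R package itself is not reproduced here; as an illustration of the kind of calculation involved, the classical two-proportion formula for an unmatched case-control design with equal numbers of cases and controls and a binary exposure can be sketched as:

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def cases_needed(p0, odds_ratio, alpha=0.05, power=0.80):
        """Cases (= controls) for an unmatched design with a binary exposure.

        p0: exposure prevalence among controls; odds_ratio: effect to detect.
        """
        p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))  # exposure prob. in cases
        pbar = (p0 + p1) / 2
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        num = (za * sqrt(2 * pbar * (1 - pbar))
               + zb * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
        return ceil(num / (p1 - p0) ** 2)

    print(cases_needed(p0=0.20, odds_ratio=2.0))  # 172 cases (and as many controls)
    ```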

  16. Acceleration statistics of finite-sized particles in turbulent flow: the role of Faxen forces

    OpenAIRE

    Calzavarini, Enrico; Volk, Romain; Bourgoin, Mickael; Leveque, Emmanuel; Pinton, Jean-Francois; Toschi, Federico

    2008-01-01

    The dynamics of particles in turbulence when the particle size is larger than the dissipative scale of the carrier flow are studied. Recent experiments have highlighted signatures of the particles' finiteness on their statistical properties, namely a decrease of their acceleration variance, an increase of correlation times with increasing particle size, and an independence of the probability density function of the acceleration once normalized to its variance.

  17. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    Science.gov (United States)

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance, from vertebrates to lower eukaryotes, have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's-law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS), and a publicly available spike-in miRNA dataset. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates to bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity to invoke an instant increase in

  18. Diffusion of Finite-Size Particles in Confined Geometries

    KAUST Repository

    Bruna, Maria; Chapman, S. Jonathan

    2013-01-01

    The diffusion of finite-size hard-core interacting particles in two- or three-dimensional confined domains is considered in the limit that the confinement dimensions become comparable to the particle's dimensions. The result is a nonlinear diffusion equation for the one-particle probability density function, with an overall collective diffusion that depends on both the excluded-volume and the narrow confinement. By including both these effects, the equation is able to interpolate between severe confinement (for example, single-file diffusion) and unconfined diffusion. Numerical solutions of both the effective nonlinear diffusion equation and the stochastic particle system are presented and compared. As an application, the case of diffusion under a ratchet potential is considered, and the change in transport properties due to excluded-volume and confinement effects is examined. © 2013 Society for Mathematical Biology.

  19. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone, or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to, and a useful complement to, existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions.
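
    For context, the widely used design-effect adjustment that the authors contrast their measure with (not their noncentrality-parameter approach) can be sketched as follows, where CV is the coefficient of variation of the cluster sizes:

    ```python
    def adjusted_total_n(n_individual, mean_cluster_size, icc, cv=0.0):
        """Total N for a cluster design, inflating an individually randomized N.

        Design effect with variable cluster sizes (common approximation):
        DEFF = 1 + ((cv**2 + 1) * m - 1) * icc, with m the mean cluster size.
        """
        deff = 1 + ((cv ** 2 + 1) * mean_cluster_size - 1) * icc
        return n_individual * deff

    n_flat = 128  # e.g. per-arm N from a standard two-sample power calculation
    print(adjusted_total_n(n_flat, mean_cluster_size=20, icc=0.05, cv=0.0))   # equal clusters
    print(adjusted_total_n(n_flat, mean_cluster_size=20, icc=0.05, cv=0.65))  # variable sizes
    ```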

  20. Wake-Driven Dynamics of Finite-Sized Buoyant Spheres in Turbulence

    Science.gov (United States)

    Mathai, Varghese; Prakash, Vivek N.; Brons, Jon; Sun, Chao; Lohse, Detlef

    2015-09-01

    Particles suspended in turbulent flows are affected by the turbulence and at the same time act back on the flow. The resulting coupling can give rise to rich variability in their dynamics. Here we report experimental results from an investigation of finite-sized buoyant spheres in turbulence. We find that even a marginal reduction in the particle's density from that of the fluid can result in strong modification of its dynamics. In contrast to classical spatial filtering arguments and predictions of particle models, we find that the particle acceleration variance increases with size. We trace this reversed trend back to the growing contribution from wake-induced forces, unaccounted for in current particle models in turbulence. Our findings highlight the need for improved multiphysics based models that account for particle wake effects for a faithful representation of buoyant-sphere dynamics in turbulence.

  1. Fermi surface of the one-dimensional Hubbard model. Finite-size effects

    Energy Technology Data Exchange (ETDEWEB)

    Bourbonnais, C.; Nelisse, H.; Reid, A.; Tremblay, A.M.S. (Dept. de Physique and Centre de Recherche en Physique du Solide (C.R.P.S.), Univ. de Sherbrooke, Quebec (Canada))

    1989-12-01

    The results reported here, using a standard numerical algorithm and a simple low temperature extrapolation, appear consistent with numerical results of Sorella et al. for the one-dimensional Hubbard model in the half-filled and quarter-filled band cases. However, it is argued that the discontinuity at the Fermi level found in the quarter-filled case is likely to come from the zero-temperature finite-size dependence of the quasiparticle weight Z, which is also discussed here. (orig.).

  2. A proof of the Woodward-Lawson sampling method for a finite linear array

    Science.gov (United States)

    Somers, Gary A.

    1993-01-01

    An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.

  3. Solving wave propagation within finite-sized composite media with linear embedding via Green's operators

    NARCIS (Netherlands)

    Lancellotti, V.; Tijhuis, A.G.

    2012-01-01

    The calculation of electromagnetic (EM) fields and waves inside finite-sized structures comprised of different media can benefit from a diakoptics method such as linear embedding via Green's operators (LEGO). Unlike scattering problems, the excitation of EM waves within the bulk dielectric requires

  4. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly, by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solute, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function route.
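
    The fluctuation route described above has a simple computational core: for each sub-volume, the Kirkwood-Buff integral follows from the particle-number fluctuations, and the infinite-size value is obtained by extrapolating in the inverse sub-volume length. A schematic sketch, with synthetic Poisson (ideal-gas-like) counts standing in for the per-frame sub-volume counts of a real trajectory:

    ```python
    import numpy as np

    def kb_integral_from_counts(counts, volume):
        """Finite-volume KB integral G(V) from particle counts in an open sub-volume:
        G = V * (<N^2> - <N>^2 - <N>) / <N>^2  (single-component form)."""
        n = np.asarray(counts, dtype=float)
        mean, var = n.mean(), n.var()
        return volume * (var - mean) / mean ** 2

    # Extrapolate G(L) -> G(inf) linearly in 1/L, as in finite size scaling
    rng = np.random.default_rng(0)
    box_lengths = np.array([1.0, 1.5, 2.0, 3.0])   # sub-volume edge lengths (arbitrary units)
    G_L = [kb_integral_from_counts(rng.poisson(10 * L**3, 5000), L**3) for L in box_lengths]
    slope, G_inf = np.polyfit(1.0 / box_lengths, G_L, 1)
    print(G_inf)  # ideal-gas counts give G ~ 0 up to sampling noise
    ```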

  5. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly, by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solute, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function route.

  6. Exploiting finite-size-effects to simulate full QCD with light quarks - a progress report

    International Nuclear Information System (INIS)

    Orth, B.; Eicker, N.; Lippert, Th.; Schilling, K.; Schroers, W.; Sroczynski, Z.

    2002-01-01

    We present a report on the status of the GRAL project (Going Realistic And Light), which aims at simulating full QCD with two dynamical Wilson quarks below the vector meson decay threshold, m_ps/m_v < 0.5, making use of finite-size-scaling techniques.

  7. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10% to ±20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g., an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
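
    The calculation described, a one-sample two-tailed test of the mean ED₅₀ with an assumed coefficient of variation, can be sketched as below (a normal-approximation start refined with the t distribution; an illustrative reconstruction, not the authors' code). With COV = 25% and an allowable error of ±15% it reproduces n = 24:

    ```python
    import math
    from scipy import stats

    def n_for_mean_error(cov, rel_err, alpha=0.05, power=0.80):
        """Sample size so a two-tailed one-sample test detects a rel_err shift
        of the mean, given coefficient of variation cov."""
        za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
        n = math.ceil(((za + zb) * cov / rel_err) ** 2)  # normal-theory start
        while True:  # refine with t quantiles until the estimate stabilizes
            ta = stats.t.ppf(1 - alpha / 2, n - 1)
            tb = stats.t.ppf(power, n - 1)
            n_new = math.ceil(((ta + tb) * cov / rel_err) ** 2)
            if n_new <= n:
                return n_new
            n = n_new

    print(n_for_mean_error(0.25, 0.15))  # 24, in line with the abstract
    ```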

  8. Diffusion of Finite-Size Particles in Confined Geometries

    KAUST Repository

    Bruna, Maria

    2013-05-10

    The diffusion of finite-size hard-core interacting particles in two- or three-dimensional confined domains is considered in the limit that the confinement dimensions become comparable to the particle's dimensions. The result is a nonlinear diffusion equation for the one-particle probability density function, with an overall collective diffusion that depends on both the excluded-volume and the narrow confinement. By including both these effects, the equation is able to interpolate between severe confinement (for example, single-file diffusion) and unconfined diffusion. Numerical solutions of both the effective nonlinear diffusion equation and the stochastic particle system are presented and compared. As an application, the case of diffusion under a ratchet potential is considered, and the change in transport properties due to excluded-volume and confinement effects is examined. © 2013 Society for Mathematical Biology.

  9. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface; however, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb commonly known as the "magic number 5". The outcomes of the analysis showed that the 5-user rule significantly underestimates the sample size required to achieve reasonable levels of problem detection.
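
    The arithmetic behind the contested "magic number 5" is the cumulative binomial detection model P(found) = 1 - (1 - p)^n, so the required n depends strongly on the per-user detection probability p. A short sketch with illustrative p values:

    ```python
    from math import ceil, log

    def users_needed(p_detect, coverage=0.80):
        """Smallest n with 1 - (1 - p_detect)**n >= coverage."""
        return ceil(log(1 - coverage) / log(1 - p_detect))

    for p in (0.31, 0.15, 0.05):
        print(p, users_needed(p))
    # p = 0.31 reproduces the classic 5-user rule; smaller p demands far larger samples
    ```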

  10. Non-conventional screening of the Coulomb interaction in low-dimensional and finite-size systems

    NARCIS (Netherlands)

    van den Brink, J.; Sawatzky, G.A.

    2000-01-01

    We study the screening of the Coulomb interaction in non-polar systems by polarizable atoms. We show that in low dimensions and small finite-size systems this screening deviates strongly from that conventionally assumed. In fact in one dimension the short-range interaction is strongly screened and

  11. Numerical Evaluation of Size Effect on the Stress-Strain Behaviour of Geotextile-Reinforced Sand

    DEFF Research Database (Denmark)

    Hosseinpour, I.; Mirmoradi, S.H.; Barari, Amin

    2010-01-01

    This paper studies the effect of sample size on the stress-strain behavior and strength characteristics of geotextile-reinforced sand using finite element numerical analysis. The effect of sample size was investigated by studying the effects of varying the number of geotextile layers, among other parameters; the results indicate that the influence of these factors on the mechanical behavior of reinforced sand decreases with an increase in the sample size.

  12. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Science.gov (United States)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
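
    Conceptually, the finite Fourier transform of sampled data can be evaluated at arbitrary frequencies, which is what arbitrary frequency resolution amounts to. The paper gains accuracy from cubic interpolation and speed from the chirp z-transform; the sketch below shows only the naive rectangle-rule evaluation of the same quantity:

    ```python
    import numpy as np

    def finite_fourier_transform(x, dt, freqs):
        """Rectangle-rule approximation of X(f) = integral of x(t) e^{-2*pi*i*f*t} dt
        at arbitrary frequencies (naive O(N*F); the chirp z-transform is the fast route)."""
        t = np.arange(len(x)) * dt
        freqs = np.atleast_1d(freqs)
        return np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) * dt for f in freqs])

    # Example: a 3 Hz tone sampled at 100 Hz, examined on a grid finer than the FFT bins
    dt = 0.01
    t = np.arange(0, 2, dt)
    x = np.sin(2 * np.pi * 3.0 * t)
    f_grid = np.linspace(2.5, 3.5, 21)  # finer than the 0.5 Hz FFT bin spacing
    X = finite_fourier_transform(x, dt, f_grid)
    print(f_grid[np.abs(X).argmax()])  # ~3.0
    ```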

  13. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    Science.gov (United States)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first is based on powder metallurgy, using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable to generate significant grain size contrast and to control this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  14. Finite-size anomalies of the Drude weight: Role of symmetries and ensembles

    Science.gov (United States)

    Sánchez, R. J.; Varma, V. K.

    2017-12-01

    We revisit the numerical problem of computing the high-temperature spin stiffness, or Drude weight, D of the spin-1/2 XXZ chain using exact diagonalization, to systematically analyze its dependence on system symmetries and ensemble. Within the canonical ensemble and for states with zero total magnetization, we find that D vanishes exactly due to spin-inversion symmetry for all but the anisotropies Δ̃_{M/N} = cos(πM/N), with N, M ∈ Z+ coprime and N > M, provided the system size L ≥ 2N, for which states with different spin-inversion signature become degenerate due to the underlying sl(2) loop-algebra symmetry. All these loop-algebra degenerate states carry finite currents, which we conjecture, based on data from the accessible system sizes and anisotropies Δ̃_{M/N}, to survive in the thermodynamic limit. A magnetic flux not only breaks spin-inversion in the zero-magnetization sector but also lifts the loop-algebra degeneracies in all symmetry sectors; this effect is more pertinent at smaller Δ due to the larger contributions to D coming from the low-magnetization sectors, which are more sensitive to the system's symmetries. Thus we generically find a finite D for fluxed rings and arbitrary 0 < Δ < 1 once the degeneracies are lifted.

  15. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  16. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    Science.gov (United States)

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial bosonic correlation due to the Duschinsky rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with a temperature effect is intimately related to the various versions of Boson Sampling that share a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  17. Leading order finite size effects with spins for inspiralling compact binaries

    Energy Technology Data Exchange (ETDEWEB)

    Levi, Michele [Université Pierre et Marie Curie-Paris VI, CNRS-UMR 7095, Institut d’Astrophysique de Paris, 98 bis Boulevard Arago, 75014 Paris (France); Sorbonne Universités, Institut Lagrange de Paris, 98 bis Boulevard Arago, 75014 Paris (France); Steinhoff, Jan [Max-Planck-Institute for Gravitational Physics - Albert-Einstein-Institute,Am Mühlenberg 1, 14476 Potsdam-Golm (Germany); Centro Multidisciplinar de Astrofisica, Instituto Superior Tecnico, Universidade de Lisboa,Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2015-06-10

    The leading order finite size effects due to spin, namely those of the cubic and quartic in spin interactions, are derived for the first time for generic compact binaries via the effective field theory for gravitating spinning objects. These corrections enter at the third-and-a-half and fourth post-Newtonian orders, respectively, for rapidly rotating compact objects. Hence, we complete the leading order finite size effects with spin up to the fourth post-Newtonian accuracy. We arrive at this by augmenting the point particle effective action with new higher-dimensional nonminimal coupling worldline operators, involving higher-order derivatives of the gravitational field, and introducing new Wilson coefficients, corresponding to constants which describe the octupole and hexadecapole deformations of the object due to spin. These Wilson coefficients are fixed to unity in the black hole case. The nonminimal coupling worldline operators enter the action with the electric and magnetic components of the Weyl tensor of even and odd parity, coupled to even and odd worldline spin tensors, respectively. Moreover, the non-relativistic gravitational field decomposition, which we employ, demonstrates a coupling hierarchy of the gravito-magnetic vector and the Newtonian scalar to the odd and even in spin operators, respectively, which extends that of minimal coupling. This observation is useful for the construction of the Feynman diagrams, and provides an instructive analogy between the leading order spin-orbit and cubic in spin interactions, and between the leading order quadratic and quartic in spin interactions.

  18. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    Science.gov (United States)

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.

  19. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method, and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. The simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. The extensive simulation study showed that the distribution-of-the-product method and the bootstrapping method have superior performance to Sobel's method, but the distribution-of-the-product method is recommended in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed for sample size determination via the distribution-of-the-product method in longitudinal mediation study design.
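
    Of the three tests compared, Sobel's method is the simplest to state: with path estimates a (X -> M) and b (M -> Y) and standard errors s_a and s_b, z = ab / sqrt(b^2 s_a^2 + a^2 s_b^2). A minimal sketch with illustrative estimates:

    ```python
    from math import sqrt
    from scipy.stats import norm

    def sobel_test(a, se_a, b, se_b):
        """First-order Sobel z and two-sided p for the mediated effect a*b."""
        se_ab = sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
        z = (a * b) / se_ab
        return z, 2 * (1 - norm.cdf(abs(z)))

    z, p = sobel_test(a=0.35, se_a=0.10, b=0.40, se_b=0.12)
    print(z, p)  # z ~ 2.41, p ~ 0.016 for these illustrative paths
    ```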

  20. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, a hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  1. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment; Environmental Protection Agency (continued); Air Programs (continued); Regulation of Fuels and Fuel Additives; Attest Engagements; § 80.127 Sample size guidelines. In performing the...

  2. Finite-size effects on two-particle production in continuous and discrete spectrum

    CERN Document Server

    Lednicky, R

    2005-01-01

    The effect of a finite space-time extent of the particle production region on the lifetime measurement of hadronic atoms produced by a high-energy beam in a thin target is discussed. In particular, it is found that neglecting this effect in the pionium lifetime measurement of the DIRAC experiment at CERN could lead to an overestimation of the lifetime at the level of the expected 10% statistical error. It is argued that the data on correlations of identical particles obtained under the same experimental conditions, together with transport-code simulation, make it possible to reduce the systematic error in the extracted lifetime to an acceptable level. The theoretical systematic errors arising in the calculation of the finite-size effect due to the neglect of non-equal emission times in the pair c.m.s., the space-time coherence, and the residual charge are shown to be negligible.

  3. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or its expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^{1/2}) or O(N*^{1/2}). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background: The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods: We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results: We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion: The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
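
    The diagnostic itself is straightforward to reproduce: collect one effect size and one sample size per article and correlate them, with a Fisher-z interval for the estimate. A sketch on synthetic data (the reported r = −.45 is the authors' result, not this toy's):

    ```python
    import numpy as np
    from scipy.stats import pearsonr, norm

    rng = np.random.default_rng(1)
    n = rng.integers(10, 300, size=1000).astype(float)              # sample sizes
    d = np.abs(rng.normal(0.4, 0.2, size=1000)) + 2.0 / np.sqrt(n)  # selection-like inflation

    r, p = pearsonr(n, d)
    z = np.arctanh(r)                              # Fisher z transform of r
    half = norm.ppf(0.975) / np.sqrt(len(n) - 3)
    print(r, p, np.tanh([z - half, z + half]))     # r with its 95% CI
    ```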

  5. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers rest on a suitable design and whether the conclusions are properly supported by the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which formula to use, we must define the type of study: a prevalence study, a study of means, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
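
    As a worked complement, the two formulas most often meant here are n = z^2 p(1-p)/d^2 for a prevalence (proportion) study and n = z^2 σ^2/d^2 for estimating a mean, with d the absolute margin of error; a minimal sketch:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_proportion(p, margin, conf=0.95):
        z = norm.ppf(1 - (1 - conf) / 2)
        return ceil(z ** 2 * p * (1 - p) / margin ** 2)

    def n_for_mean(sigma, margin, conf=0.95):
        z = norm.ppf(1 - (1 - conf) / 2)
        return ceil((z * sigma / margin) ** 2)

    print(n_for_proportion(0.5, 0.05))      # 385: the familiar worst-case prevalence n
    print(n_for_mean(sigma=15, margin=3))   # 97
    ```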

  6. The Optimal Inhomogeneity for Superconductivity: Finite Size Studies

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, W-F.

    2010-04-06

    We report the results of exact diagonalization studies of Hubbard models on a 4 x 4 square lattice with periodic boundary conditions and various degrees and patterns of inhomogeneity, which are represented by inequivalent hopping integrals t and t′. We focus primarily on two patterns, the checkerboard and the striped cases, for a large range of values of the on-site repulsion U and doped hole concentration, x. We present evidence that superconductivity is strongest for U of order the bandwidth, and intermediate inhomogeneity, 0 < t′ < t. The maximum value of the 'pair-binding energy' we have found with purely repulsive interactions is Δ_pb = 0.32t for the checkerboard Hubbard model with U = 8t and t′ = 0.5t. Moreover, for near optimal values, our results are insensitive to changes in boundary conditions, suggesting that the correlation length is sufficiently short that finite size effects are already unimportant.

  7. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    Science.gov (United States)

    Gerstner, Wulfram

    2017-01-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957

  8. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs, and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, no standard procedure has been available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
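
    The Benjamini-Yekutieli step used by DSD is available off the shelf in Python's statsmodels (the DSD toolbox itself is MATLAB/Octave); a sketch of applying it to a vector of p-values, synthetic here:

    ```python
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    pvals = np.concatenate([rng.uniform(0, 0.001, 20),   # 20 true signals
                            rng.uniform(0, 1, 980)])     # 980 nulls

    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_by')
    print(reject.sum(), "variables significant after Benjamini-Yekutieli control")
    ```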

  9. Exact solution for the inhomogeneous Dicke model in the canonical ensemble: Thermodynamical limit and finite-size corrections

    Energy Technology Data Exchange (ETDEWEB)

    Pogosov, W.V., E-mail: walter.pogosov@gmail.com [N.L. Dukhov All-Russia Research Institute of Automatics, Moscow (Russian Federation); Institute for Theoretical and Applied Electrodynamics, Russian Academy of Sciences, Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation); Shapiro, D.S. [N.L. Dukhov All-Russia Research Institute of Automatics, Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation); V.A. Kotel' nikov Institute of Radio Engineering and Electronics, Russian Academy of Sciences, Moscow (Russian Federation); National University of Science and Technology MISIS, Moscow (Russian Federation); Bork, L.V. [N.L. Dukhov All-Russia Research Institute of Automatics, Moscow (Russian Federation); Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Onishchenko, A.I. [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation); Skobeltsyn Institute of Nuclear Physics, Moscow State University, Moscow (Russian Federation)

    2017-06-15

    We consider an exactly solvable inhomogeneous Dicke model which describes the interaction between a disordered ensemble of two-level systems and a single-mode boson field. The existing method for evaluating the Richardson-Gaudin equations in the thermodynamical limit is extended to the case of the Bethe equations of the Dicke model. Using this extension, we present expressions for the energies of both the ground state and the lowest excited states, as well as leading-order finite-size corrections to these quantities, for an arbitrary distribution of individual spin energies. We then evaluate these quantities for an equally-spaced distribution (constant density of states). In particular, we study the evolution of the spectral gap and other related quantities. We also reveal regions on the phase diagram where finite-size corrections are of particular importance.

  10. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
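
    A two-component normal mixture of the kind described can be fitted by maximum likelihood (via EM) in a few lines; the sketch below uses synthetic data in place of the stock market and rubber price series:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-0.5, 0.3, 400),   # first latent regime
                        rng.normal(0.8, 0.5, 600)])   # second latent regime

    # EM-based maximum likelihood fit of a two-component univariate normal mixture
    gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    print(gm.weights_, gm.means_.ravel(), np.sqrt(gm.covariances_.ravel()))
    ```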

  11. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  12. Unexpected finite size effects in interfacial systems: Why bigger is not always better—Increase in uncertainty of surface tension with bulk phase width

    Science.gov (United States)

    Longford, Francis G. J.; Essex, Jonathan W.; Skylaris, Chris-Kriton; Frey, Jeremy G.

    2018-06-01

    We present an unexpected finite size effect affecting interfacial molecular simulations that is proportional to the width-to-surface-area ratio of the bulk phase, L_l/A. This finite size effect has a significant impact on the variance of surface tension values calculated using the virial summation method. A theoretical derivation of the origin of the effect is proposed, giving a new insight into the importance of optimising system dimensions in interfacial simulations. We demonstrate the consequences of this finite size effect via a new way to estimate the surface energetic and entropic properties of simulated air-liquid interfaces. Our method is based on macroscopic thermodynamic theory and involves comparing the internal energies of systems with varying dimensions. We present the testing of these methods using simulations of the TIP4P/2005 water forcefield and a Lennard-Jones fluid model of argon. Finally, we provide suggestions of additional situations, in which this finite size effect is expected to be significant, as well as possible ways to avoid its impact.

  13. Finite-size effects and switching times for Moran process with mutation.

    Science.gov (United States)

    DeVille, Lee; Galiardi, Meghan

    2017-04-01

    We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite population size limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of a large but finite population. We also show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
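
    A minimal sketch of the stochastic process in question, assuming payoff-proportional selection and a hypothetical payoff matrix rather than the paper's iterated Prisoner's Dilemma payoffs:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100          # population size
      mu = 0.01        # mutation probability per birth
      a, b, c, d = 3.0, 0.0, 5.0, 1.0   # pairwise game payoffs (hypothetical)

      def moran_step(i):
          """One Moran birth-death update; i = current number of A-players."""
          # expected payoffs against a randomly matched opponent (excluding self)
          fA = (a * (i - 1) + b * (N - i)) / (N - 1)
          fB = (c * i + d * (N - i - 1)) / (N - 1)
          # selection: reproducing player chosen proportionally to total payoff
          wA = i * fA
          prob_A_repro = wA / (wA + (N - i) * fB)
          offspring_is_A = rng.random() < prob_A_repro
          if rng.random() < mu:                 # mutation flips the offspring type
              offspring_is_A = not offspring_is_A
          dies_A = rng.random() < i / N         # uniformly chosen death
          return i + int(offspring_is_A) - int(dies_A)

      i = N // 2
      for _ in range(10_000):                   # long run to probe switching behaviour
          i = moran_step(i)
      print("final number of A-players:", i)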

  14. Finite-size effect on the dynamic and sensing performances of graphene resonators: the role of edge stress.

    Science.gov (United States)

    Kim, Chang-Wan; Dai, Mai Duc; Eom, Kilho

    2016-01-01

    We have studied the finite-size effect on the dynamic behavior of graphene resonators and their applications in atomic mass detection using a continuum elastic model, namely a modified plate theory. In particular, we developed a model based on von Karman plate theory that includes the edge stress arising from the imbalance between the coordination numbers of bulk atoms and edge atoms of graphene. It is shown that as the size of a graphene resonator decreases, the edge stress, which depends on the edge structure of the resonator, plays a critical role in both its dynamic and sensing performance. We found that the resonance behavior of graphene can be tuned not only through edge stress but also through nonlinear vibration, and that the detection sensitivity of a graphene resonator can be controlled by using the edge stress. Our study sheds light on the important role of the finite-size effect in the effective design of graphene resonators for mass sensing applications.

  15. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  16. Effects of diffraction and target finite size on coherent transition radiation spectra in bunch length measurements

    Energy Technology Data Exchange (ETDEWEB)

    Castellano, M.; Cianchi, A.; Verzilov, V.A. [Istituto Nazionale di Fisica Nucleare, Frascati, RM (Italy). Laboratori Nazionali di Frascati; Orlandi, G. [Istituto Nazionale di Fisica Nucleare, Rome (Italy)]|[Rome Univ., Tor Vergata, Rome (Italy)

    1999-07-01

    Effects of diffraction and of the target size on transition radiation (TR) in the context of CTR-based bunch length measurements are studied on the basis of Kirchhoff diffraction theory. Spectra of TR from the finite-size target are calculated in the far-infrared region for several measurement schemes, showing strong distortion at low frequencies. The influence of this effect on the accuracy of bunch length measurements is estimated.

  17. Finite-Size Effects in Single Chain Magnets: An Experimental and Theoretical Study

    Science.gov (United States)

    Bogani, L.; Caneschi, A.; Fedi, M.; Gatteschi, D.; Massi, M.; Novak, M. A.; Pini, M. G.; Rettori, A.; Sessoli, R.; Vindigni, A.

    2004-05-01

    The problem of finite-size effects in s=1/2 Ising systems showing slow dynamics of the magnetization is investigated by introducing diamagnetic impurities in a Co2+-radical chain. The static magnetic properties have been measured and analyzed considering the peculiarities induced by the ferrimagnetic character of the compound. The dynamic susceptibility shows that an Arrhenius law is observed with the same energy barrier for the pure and the doped compounds, while the prefactor decreases, as theoretically predicted. Multiple spin reversal has also been investigated.

  18. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameters' dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.

  19. Surface and finite size effect on fluctuations dynamics in nanoparticles with long-range order

    Science.gov (United States)

    Morozovska, A. N.; Eliseev, E. A.

    2010-02-01

    The influence of surface and finite size on the dynamics of order parameter fluctuations and on critical phenomena in three-dimensional (3D)-confined systems with long-range order had not previously been considered theoretically. In this paper, we study the influence of surface and finite size on the dynamics of the order parameter fluctuations in particles of arbitrary shape. We consider concrete examples of spherical and cylindrical ferroic nanoparticles within the Landau-Ginzburg-Devonshire phenomenological approach. Allowing for the strong surface energy contribution in micro- and nanoparticles, the analytical expressions derived for the Ornstein-Zernike correlator of the spatial-temporal fluctuations of the long-range order parameter, the dynamic generalized susceptibility, the relaxation times, and the discrete spectra of correlation radii are different from those known for bulk systems. The analytical expressions obtained for the correlation function of the order parameter fluctuations in micro- and nanosized systems can be useful for the quantitative analysis of dynamical structural factors determined from magnetic resonance diffraction and scattering spectra. Besides the practical importance of the correlation function for the analysis of experimental data, the derived expressions for the fluctuation strength determine the fundamental limits of applicability of phenomenological theories to 3D-confined systems.

  20. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
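
    A hedged sketch of the core fitting step, assuming a learning curve of the common inverse-power-law form accuracy(n) ≈ a - b·n^(-c) and illustrative data points; scipy's curve_fit performs the weighted nonlinear least squares:

      import numpy as np
      from scipy.optimize import curve_fit

      def inv_power_law(n, a, b, c):
          # asymptote a, approached as the annotated sample size n grows
          return a - b * np.power(n, -c)

      # illustrative learning-curve points: (sample size, accuracy, std deviation)
      n_obs = np.array([50, 100, 200, 400, 800])
      acc = np.array([0.71, 0.78, 0.83, 0.86, 0.88])
      sd = np.array([0.04, 0.03, 0.02, 0.015, 0.01])

      # weighted fit: sigma supplies per-point standard deviations as weights
      popt, pcov = curve_fit(inv_power_law, n_obs, acc, p0=(0.9, 1.0, 0.5),
                             sigma=sd, absolute_sigma=True, maxfev=10000)

      # extrapolate expected performance at a larger annotation budget
      print(inv_power_law(5000, *popt))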

  1. Finite-size effect on optimal efficiency of heat engines.

    Science.gov (United States)

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  2. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests in designs with one factor at two levels, including the sample size estimation formulas and their realization, based on the formulas and on the POWER procedure of SAS software, for both quantitative and qualitative data. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.
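
    The record does not reproduce its formulas, but the standard normal-approximation rule for a two-sided difference test between two group means (one factor at two levels) can be sketched as follows; delta, sigma and the defaults are illustrative:

      # n per group = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2
      from math import ceil
      from scipy.stats import norm

      def n_per_group(delta, sigma, alpha=0.05, power=0.80):
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

      # e.g. detect a mean difference of 5 units with a common SD of 10:
      print(n_per_group(delta=5, sigma=10))   # about 63 per group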

  3. The critical behaviour of self-dual Z(N) spin systems - Finite size scaling and conformal invariance

    International Nuclear Information System (INIS)

    Alcaraz, F.C.

    1986-01-01

    Critical properties of a family of self-dual two-dimensional Z(N) models whose bulk free energy is exactly known at the self-dual point are studied. The analysis is performed by studying the finite-size behaviour of the corresponding one-dimensional quantum Hamiltonians, which also possess an exact solution at their self-dual point. By exploiting finite-size scaling ideas and the conformal invariance of the critical infinite system, the critical temperature and critical exponents, as well as the central charge associated with the underlying conformal algebra, are calculated for N up to 8. The results strongly suggest that the recently constructed Z(N) quantum field theory of Zamolodchikov and Fateev (1985) is the underlying field theory associated with these statistical mechanical systems. The conjecture that these models correspond to the bifurcation points in the phase diagram of the general Z(N) spin model, where a massless phase originates, is also tested for the Z(5) case. (Author) [pt

  4. Theoretical studies of finite size effects and screening effects caused by a STM tip in Luettinger liquids

    International Nuclear Information System (INIS)

    Guigou, Marine

    2009-01-01

    This thesis takes place in the field of condensed matter. More precisely, we focus on the finite size effects and the screening effects caused by a STM tip in a quantum wire. For that, we use, first, the Luettinger liquid theory, which allows one to describe strongly correlated systems, and, secondly, the Keldysh formalism, which is necessary to treat out-of-equilibrium systems. For these studies, we consider the current, the noise and the conductance. The noise presents a non-Poissonian behaviour when finite size effects appear. Through photo-assisted transport, it is shown that those effects hide the effects of the Coulomb interactions. Owing to the proximity between the STM tip, used as a probe or as an injector, and the quantum wire, screening effects appear. We can conclude that they play a role similar to that of the Coulomb interactions. (author) [fr

  5. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from the joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for the correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints, adjusted for correlation, under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted method, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.

  6. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...

  7. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...

  8. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
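
    A sketch of the classical cost-optimal allocation logic for two independent groups with unequal variances and unequal per-observation costs; this is the normal-theory analogue, whereas the paper's formulas replace the variances by their trimmed/Winsorized counterparts for Yuen's test:

      # minimising total cost c1*n1 + c2*n2 at fixed Var(mean1 - mean2) gives
      # n_i proportional to sigma_i / sqrt(c_i), i.e. n2/n1 = (s2/s1)*sqrt(c1/c2)
      from math import ceil, sqrt
      from scipy.stats import norm

      def optimal_sizes(sigma1, sigma2, cost1, cost2, delta,
                        alpha=0.05, power=0.80):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          ratio = (sigma2 / sigma1) * sqrt(cost1 / cost2)   # n2 / n1
          # solve sigma1^2/n1 + sigma2^2/n2 = (delta/z)^2 with n2 = ratio * n1
          n1 = (sigma1 ** 2 + sigma2 ** 2 / ratio) * (z / delta) ** 2
          return ceil(n1), ceil(ratio * n1)

      # e.g. SDs 8 and 12, group-2 observations four times as costly:
      print(optimal_sizes(sigma1=8, sigma2=12, cost1=1, cost2=4, delta=5))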

  9. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements for developing probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not

  10. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
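
    The sample-size rule typically used in such uniformity-trial studies is n = (t·CV/D)^2, where CV is the coefficient of variation and D the target half-width as a percentage of the mean; since the t quantile depends on n, one iterates. A sketch under those assumptions:

      from math import ceil
      from scipy.stats import t

      def n_for_precision(cv_percent, d_percent, alpha=0.05):
          n = 2
          for _ in range(100):                       # fixed-point iteration on n
              t_val = t.ppf(1 - alpha / 2, df=n - 1)
              n_new = ceil((t_val * cv_percent / d_percent) ** 2)
              if n_new == n:
                  break
              n = max(n_new, 2)
          return n

      # e.g. a CV of 35% and a target half-width of 6% of the mean:
      print(n_for_precision(35, 6))   # ≈ 134; the study above reports 136 plants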

  11. Finite size and geometrical non-linear effects during crack pinning by heterogeneities: An analytical and experimental study

    Science.gov (United States)

    Vasoya, Manish; Unni, Aparna Beena; Leblond, Jean-Baptiste; Lazarus, Veronique; Ponson, Laurent

    2016-04-01

    Crack pinning by heterogeneities is a central toughening mechanism in the failure of brittle materials. So far, most analytical explorations of the crack front deformation arising from spatial variations of fracture properties have been restricted to weak toughness contrasts using first order approximation and to defects of small dimensions with respect to the sample size. In this work, we investigate the non-linear effects arising from larger toughness contrasts by extending the approximation to the second order, while taking into account the finite sample thickness. Our calculations predict the evolution of a planar crack lying on the mid-plane of a plate as a function of material parameters and loading conditions, especially in the case of a single infinitely elongated obstacle. Peeling experiments are presented which validate the approach and evidence that the second order term broadens its range of validity in terms of toughness contrast values. The work highlights the non-linear response of the crack front to strong defects and the central role played by the thickness of the specimen on the pinning process.

  12. Finite-size, chemical-potential and magnetic effects on the phase transition in a four-fermion interacting model

    Energy Technology Data Exchange (ETDEWEB)

    Correa, E.B.S. [Universidade Federal do Sul e Sudeste do Para, Instituto de Ciencias Exatas, Maraba (Brazil); Centro Brasileiro de Pesquisas Fisicas-CBPF/MCTI, Rio de Janeiro (Brazil); Linhares, C.A. [Universidade do Estado do Rio de Janeiro, Instituto de Fisica, Rio de Janeiro (Brazil); Malbouisson, A.P.C. [Centro Brasileiro de Pesquisas Fisicas-CBPF/MCTI, Rio de Janeiro (Brazil); Malbouisson, J.M.C. [Universidade Federal da Bahia, Instituto de Fisica, Salvador (Brazil); Santana, A.E. [Universidade de Brasilia, Instituto de Fisica, Brasilia, DF (Brazil)

    2017-04-15

    We study effects coming from finite size, chemical potential and from a magnetic background on a massive version of a four-fermion interacting model. This is performed in four dimensions as an application of recent developments for dealing with field theories defined on toroidal spaces. We study effects of the magnetic field and chemical potential on the size-dependent phase structure of the model, in particular, how the applied magnetic field affects the size-dependent critical temperature. A connection with some aspects of the hadronic phase transition is established. (orig.)

  13. Modeling and Analysis of Size-Dependent Structural Problems by Using Low- Order Finite Elements with Strain Gradient Plasticity

    International Nuclear Information System (INIS)

    Park, Moon Shik; Suh, Yeong Sung; Song, Seung

    2011-01-01

    An elasto-plastic finite element method using the theory of strain gradient plasticity is proposed to evaluate the size dependency of structural plasticity that occurs when the configuration size decreases to the micron scale. For this method, we suggest low-order plane and three-dimensional displacement-based elements, eliminating the need for the high-order elements with many degrees of freedom, mixed elements, or super elements that have been considered necessary in previous research. The proposed method can be performed in the framework of nonlinear incremental analysis, in which plastic strains are calculated and averaged at nodes. These strains are then interpolated and differentiated for the gradient calculation. We adopted a strain-gradient-hardening constitutive equation from the Taylor dislocation model, which requires the plastic strain gradient. The developed finite elements are tested numerically on typical size-effect problems such as micro-bending, micro-torsion, and micro-voids. With respect to strain gradient plasticity, i.e., the size effects, the results obtained using the proposed method, which is simple in its calculation, are in good agreement with the experimental results cited in previously published papers

  14. Effects of the finite particle size in turbulent wall-bounded flows of dense suspensions

    Science.gov (United States)

    Costa, Pedro; Picano, Francesco; Brandt, Luca; Breugem, Wim-Paul

    2018-05-01

    We use interface-resolved simulations to study finite-size effects in turbulent channel flow of neutrally-buoyant spheres. Two cases with particle sizes differing by a factor of 2, at the same solid volume fraction of 20% and bulk Reynolds number, are considered. These are complemented with two reference single-phase flows: the unladen case, and the flow of a Newtonian fluid with the effective suspension viscosity of the same mixture in the laminar regime. As recently highlighted in Costa et al. (PRL 117, 134501), a particle-wall layer is responsible for deviations of the statistics from what is observed in the continuum limit where the suspension is modeled as a Newtonian fluid with an effective viscosity. Here we investigate the fluid and particle dynamics in this layer and in the bulk. In the particle-wall layer, the near-wall inhomogeneity has an influence on the suspension micro-structure over a distance proportional to the particle size. In this layer, particles have a significant (apparent) slip velocity that is reflected in the distribution of wall shear stresses. This is characterized by extreme events (both much higher and much lower than the mean). Based on these observations we provide a scaling for the particle-to-fluid apparent slip velocity as a function of the flow parameters. We also extend the flow scaling laws to second-order Eulerian statistics in the homogeneous suspension region away from the wall. Finite-size effects in the bulk of the channel become important for larger particles, while they are negligible for lower-order statistics and smaller particles. Finally, we study the particle dynamics along the wall-normal direction. Our results suggest that 1-point dispersion is dominated by particle-turbulence (and not particle-particle) interactions, while differences in 2-point dispersion and collisional dynamics are consistent with a picture of shear-driven interactions.

  15. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation differs across study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
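
    For the qualitative (proportion) outcome mentioned above, the standard per-group formula for a parallel two-arm trial can be sketched as follows; the event rates are illustrative:

      from math import ceil
      from scipy.stats import norm

      def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          p_bar = (p1 + p2) / 2
          # pooled variance under H0, unpooled under H1
          num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
          return ceil(num / (p1 - p2) ** 2)

      # e.g. control event rate 40%, expected treatment event rate 25%:
      print(n_two_proportions(0.40, 0.25))   # about 152 per group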

  16. Different radiation impedance models for finite porous materials

    DEFF Research Database (Denmark)

    Nolan, Melanie; Jeong, Cheol-Ho; Brunskog, Jonas

    2015-01-01

    The Sabine absorption coefficients of finite absorbers are measured in a reverberation chamber according to the international standard ISO 354. They vary with the specimen size essentially due to diffraction at the specimen edges, which can be seen as the radiation impedance differing from the infinite case. Thus, in order to predict the Sabine absorption coefficients of finite porous samples, one can incorporate models of the radiation impedance. In this study, different radiation impedance models are compared with two experimental examples. Thomasson's model is compared to Rhazi's method when...

  17. SMPBS: Web server for computing biomolecular electrostatics using finite element solvers of size modified Poisson-Boltzmann equation.

    Science.gov (United States)

    Xie, Yang; Ying, Jinyong; Xie, Dexuan

    2017-03-30

    SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc.

  18. Finite size and Coulomb corrections: from nuclei to nuclear liquid vapor phase diagram

    International Nuclear Information System (INIS)

    Moretto, L.G.; Elliott, J.B.; Phair, L.

    2003-01-01

    In this paper we consider the problem of obtaining the infinite symmetric uncharged nuclear matter phase diagram from a thermal nuclear reaction. In the first part we shall consider the Coulomb interaction which, because of its long range makes the definition of phases problematic. This Coulomb effect seems truly devastating since it does not allow one to define nuclear phase transitions much above A ∼ 30. However there may be a solution to this difficulty. If we consider the emission of particles with a sizable charge, we notice that a large Coulomb barrier Bc is present. For T << Bc these channels may be considered effectively closed. Consequently the unbound channels may not play a role on a suitably short time scale. Then a phase transition may still be definable in an approximate way. In the second part of the article we shall deal with the finite size problem by means of a new method, the complement method, which shall permit a straightforward extrapolation to the infinite system. The complement approach consists of evaluating the change in free energy occurring when a particle or cluster is moved from one (finite) phase to another. In the case of a liquid drop in equilibrium with its vapor, this is done by extracting a vapor particle of any given size from the drop and evaluating the energy and entropy changes associated with both the vapor particle and the residual liquid drop (complement)

  19. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional on randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  20. Neutron density decay constant in a non-multiplying lattice of finite size

    International Nuclear Information System (INIS)

    Deniz, V.C.

    1965-01-01

    This report presents a general theory, using the integral transport method, for obtaining the neutron density decay constant in a finite non-multiplying lattice. The theory is applied to obtain the expression for the diffusion coefficient. The case of a homogeneous medium with 1/v absorption and of finite size in all directions is treated in detail, assuming an isotropic scattering law. The decay constant is obtained up to the B^6 term. The expressions for the diffusion coefficient and for the diffusion cooling coefficient are the same as those obtained for a slab geometry by Nelkin, using the expansion in spherical harmonics of the Fourier transform in the spatial variable. Furthermore, explicit forms are obtained for the flux and the current. It is shown that the deviation of the actual flux from a Maxwellian is the flux generated in the medium, extended to infinity and deprived of its absorbing power, by various sources, each of which has a zero integral over all velocities. The study of the current permits the generalization of Fick's law. An independent integral method, valid for homogeneous media, is also presented. (author) [fr

  1. Static and high-frequency magnetic properties of stripe domain structure in a plate of finite sizes

    International Nuclear Information System (INIS)

    Mal'ginova, S.D.; Doroshenko, R.A.; Shul'ga, N.V.

    2006-01-01

    A model is offered that enables self-consistent calculation of the main parameters of the stripe domain structure (DS) and, at the same time, of the properties of the domain walls (DW) of a multiple-axis ferromagnet finite in all directions, depending on the sample size, the material parameters and the intensity of a magnetic field. The properties of the DS (direction of magnetization in domains, domain widths, ferromagnetic resonance, etc.) are calculated numerically for rectangular (1 1 0) plates of a cubic ferromagnet with easy magnetization axes along trigonal directions, in a magnetic field along [-1 1 0]. It is shown that in plates of different shapes a structure with Neel DW can exist alongside a DS with Bloch DW. Their features are noticeably exhibited, in particular, in the different dependences of the number of domains and of the ferromagnetic resonance frequencies on the magnetic field

  2. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2 powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  3. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  4. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval expressed as a percentage of the median (CI95/med) was calculated for increasing sample sizes. We deduced the sample size that sets a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. With the sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  5. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval expressed as a percentage of the median (CI95/med) was calculated for increasing sample sizes. We deduced the sample size that sets a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. With the sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
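
    A hedged reconstruction of the resampling logic the study describes: for each candidate sample size, draw repeated samples from a pool of dose values and measure the spread of the sample means as a percentage of the pooled median. The lognormal pool below is a stand-in, not the study's data:

      import numpy as np

      rng = np.random.default_rng(1)
      pool = rng.lognormal(mean=2.0, sigma=0.5, size=20_000)  # stand-in CTDIvol values
      median = np.median(pool)

      def ci95_over_median(n, reps=2000):
          # spread of sample means across repeated draws of size n
          means = np.array([rng.choice(pool, size=n, replace=False).mean()
                            for _ in range(reps)])
          lo, hi = np.percentile(means, [2.5, 97.5])
          return 100 * (hi - lo) / median

      for n in (10, 20, 50, 100, 200, 400):
          print(n, round(ci95_over_median(n), 1))
      # pick the smallest n whose CI95/med drops below 10 %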

  6. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N_0 to N_∞ ...

  7. Finite-size scaling method for the Berezinskii–Kosterlitz–Thouless transition

    International Nuclear Information System (INIS)

    Hsieh, Yun-Da; Kao, Ying-Jer; Sandvik, Anders W

    2013-01-01

    We test an improved finite-size scaling method for reliably extracting the critical temperature T_BKT of a Berezinskii–Kosterlitz–Thouless (BKT) transition. Using known single-parameter logarithmic corrections to the spin stiffness ρ_s at T_BKT in combination with the Kosterlitz–Nelson relation between the transition temperature and the stiffness, ρ_s(T_BKT) = 2T_BKT/π, we define a size-dependent transition temperature T_BKT(L_1, L_2) based on a pair of system sizes L_1, L_2, e.g., L_2 = 2L_1. We use Monte Carlo data for the standard two-dimensional classical XY model to demonstrate that this quantity is well behaved and can be reliably extrapolated to the thermodynamic limit using the next expected logarithmic correction beyond the ones included in defining T_BKT(L_1, L_2). For the Monte Carlo calculations we use GPU (graphical processing unit) computing to obtain high-precision data for L up to 512. We find that the sub-leading logarithmic corrections have significant effects on the extrapolation. Our result T_BKT = 0.8935(1) is several error bars above the previously best estimates of the transition temperature, T_BKT ≈ 0.8929. If only the leading log-correction is used, the result is, however, consistent with the lower value, suggesting that previous works have underestimated T_BKT because of the neglect of sub-leading logarithms. Our method is easy to implement in practice and should be applicable to generic BKT transitions. (paper)

  8. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data, which are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimating the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
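
    One hedged variation of such an explicit formula, using the variance of the log rate ratio evaluated under the alternative for both terms (the paper derives three variants, which differ in the null-variance estimate); all parameter values are illustrative:

      # per-group n for testing rate ratio rho = lambda2/lambda1 with negative
      # binomial dispersion k (Var = mu + k*mu^2) and exposure time t
      from math import ceil, log
      from scipy.stats import norm

      def n_negbin(lambda1, lambda2, k, t, alpha=0.05, power=0.80):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          var_log_ratio = 1 / (t * lambda1) + 1 / (t * lambda2) + 2 * k
          return ceil(z ** 2 * var_log_ratio / log(lambda2 / lambda1) ** 2)

      # e.g. halving an event rate of 1.2/yr with dispersion 0.7, 1-yr follow-up:
      print(n_negbin(1.2, 0.6, k=0.7, t=1.0))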

  9. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating the sample size and testing power of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduces the corresponding formulas for sample size and testing power estimation with the above three designs, their realization based on the formulas and on the POWER procedure of SAS software, and elaborates them with examples, which will benefit researchers in implementing the repetition principle.

  10. Insights on finite size effects in ab initio study of CO adsorption and dissociation on Fe 110 surface

    International Nuclear Information System (INIS)

    Chakrabarty, Aurab; Bouhali, Othmane; Mousseau, Normand; Becquart, Charlotte S.; El-Mellouhi, Fedwa

    2016-01-01

    Adsorption and dissociation of hydrocarbons on metallic surfaces represent crucial steps on the path to carburization, eventually leading to dusting corrosion. While adsorption of CO molecules on Fe surface is a barrier-less exothermic process, this is not the case for the dissociation of CO into C and O adatoms and the diffusion of C beneath the surface that are found to be associated with large energy barriers. In practice, these barriers can be affected by numerous factors that combine to favour the CO-Fe reaction such as the abundance of CO and other hydrocarbons as well as the presence of structural defects. From a numerical point of view, studying these factors is challenging and a step-by-step approach is necessary to assess, in particular, the influence of the finite box size on the reaction parameters for adsorption and dissociation of CO on metal surfaces. Here, we use density functional theory (DFT) total energy calculations with the climbing-image nudged elastic band method to estimate the adsorption energies and dissociation barriers for different CO coverages with surface supercells of different sizes. We further compute the effect of periodic boundary condition for DFT calculations and find that the contribution from van der Waals interaction in the computation of adsorption parameters is important as they contribute to correcting the finite-size error in small systems. The dissociation process involves carbon insertion into the Fe surface causing a lattice deformation that requires a larger surface system for unrestricted relaxation. We show that, in the larger surface systems associated with dilute CO-coverages, C-insertion is energetically more favourable, leading to a significant decrease in the dissociation barrier. This observation suggests that a large surface system with dilute coverage is necessary for all similar metal-hydrocarbon reactions in order to study their fundamental electronic mechanisms, as an isolated phenomenon, free from

  11. Insights on finite size effects in ab initio study of CO adsorption and dissociation on Fe 110 surface

    Energy Technology Data Exchange (ETDEWEB)

    Chakrabarty, Aurab, E-mail: aurab.chakrabarty@qatar.tamu.edu; Bouhali, Othmane [Texas A& M University at Qatar, P.O. Box 23874, Doha (Qatar); Mousseau, Normand [Département de Physique and RQMP, Université de Montréal, Case Postale 6128, Succursale Centre-ville, Montréal, Québec H3C 3J7 (Canada); Becquart, Charlotte S. [UMET, UMR CNRS 8207, ENSCL, Université Lille I, 59655 Villeneuve d' Ascq Cédex (France); El-Mellouhi, Fedwa [Qatar Environment and Energy Research Institute, Hamad Bin Khalifa University, P.O. Box 5825, Doha (Qatar)

    2016-08-07

    Adsorption and dissociation of hydrocarbons on metallic surfaces represent crucial steps on the path to carburization, eventually leading to dusting corrosion. While adsorption of CO molecules on Fe surface is a barrier-less exothermic process, this is not the case for the dissociation of CO into C and O adatoms and the diffusion of C beneath the surface that are found to be associated with large energy barriers. In practice, these barriers can be affected by numerous factors that combine to favour the CO-Fe reaction such as the abundance of CO and other hydrocarbons as well as the presence of structural defects. From a numerical point of view, studying these factors is challenging and a step-by-step approach is necessary to assess, in particular, the influence of the finite box size on the reaction parameters for adsorption and dissociation of CO on metal surfaces. Here, we use density functional theory (DFT) total energy calculations with the climbing-image nudged elastic band method to estimate the adsorption energies and dissociation barriers for different CO coverages with surface supercells of different sizes. We further compute the effect of periodic boundary condition for DFT calculations and find that the contribution from van der Waals interaction in the computation of adsorption parameters is important as they contribute to correcting the finite-size error in small systems. The dissociation process involves carbon insertion into the Fe surface causing a lattice deformation that requires a larger surface system for unrestricted relaxation. We show that, in the larger surface systems associated with dilute CO-coverages, C-insertion is energetically more favourable, leading to a significant decrease in the dissociation barrier. This observation suggests that a large surface system with dilute coverage is necessary for all similar metal-hydrocarbon reactions in order to study their fundamental electronic mechanisms, as an isolated phenomenon, free from

  12. Holographic relaxation of finite size isolated quantum systems

    International Nuclear Information System (INIS)

    Abajo-Arrastia, Javier; Silva, Emilia da; Lopez, Esperanza; Mas, Javier; Serantes, Alexandre

    2014-01-01

    We study holographically the out of equilibrium dynamics of a finite size closed quantum system in 2+1 dimensions, modelled by the collapse of a shell of a massless scalar field in AdS_4. In global coordinates there exists a variety of evolutions towards final black hole formation which we relate with different patterns of relaxation in the dual field theory. For large scalar initial data rapid thermalization is achieved as a priori expected. Interesting phenomena appear for small enough amplitudes. Such shells do not generate a black hole by direct collapse, but quite generically, an apparent horizon emerges after enough bounces off the AdS boundary. We relate this bulk evolution with relaxation processes at strong coupling which delay in reaching an ergodic stage. Besides the dynamics of bulk fields, we monitor the entanglement entropy, finding that it oscillates quasi-periodically before final equilibration. The radial position of the travelling shell is brought in correspondence with the evolution of the pattern of entanglement in the dual field theory. We propose, thereafter, that the observed oscillations are the dual counterpart of the quantum revivals studied in the literature. The entanglement entropy is not only able to portrait the streaming of entangled excitations, but it is also a useful probe of interaction effects

  13. Investigating size effects of complex nanostructures through Young-Laplace equation and finite element analysis

    International Nuclear Information System (INIS)

    Lu, Dingjie; Xie, Yi Min; Huang, Xiaodong; Zhou, Shiwei; Li, Qing

    2015-01-01

    Analytical studies on the size effects of a simply-shaped beam fixed at both ends have successfully explained the sudden changes of effective Young's modulus as its diameter decreases below 100 nm. Yet they are invalid for complex nanostructures ubiquitously existing in nature. In accordance with a generalized Young-Laplace equation, one of the representative size effects is transferred to non-uniformly distributed pressure against an external surface due to the imbalance of inward and outward loads. Because the magnitude of pressure depends on the principal curvatures, iterative steps have to be adopted to gradually stabilize the structure in finite element analysis. Computational results are in good agreement with both experiment data and theoretical prediction. Furthermore, the investigation on strengthened and softened Young's modulus for two complex nanostructures demonstrates that the proposed computational method provides a general and effective approach to analyze the size effects for nanostructures in arbitrary shape

  14. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    the sample size decreases – a result that could be interpreted as a size effect in the order– disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  15. Finite size scaling analysis on Nagel-Schreckenberg model for traffic flow

    Science.gov (United States)

    Balouchi, Ashkan; Browne, Dana

    2015-03-01

    The traffic flow problem, as a many-particle non-equilibrium system, has caught the interest of physicists for decades. Understanding traffic flow properties, and thus gaining the ability to control the transition from the free-flow phase to the jammed phase, plays a critical role in the coming world of self-driven car technology. We have studied phase transitions in one-lane traffic flow through the mean velocity, distributions of car spacing, dynamic susceptibility and jam persistence - as candidates for an order parameter - using the Nagel-Schreckenberg model to simulate traffic flow. A length-dependent transition has been observed for a range of maximum velocities greater than a certain value. Finite size scaling analysis indicates power-law scaling of these quantities at the onset of the jammed phase.
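
    For reference, the Nagel-Schreckenberg update itself is short; the sketch below implements the four canonical steps on a circular one-lane road, with illustrative parameters:

      import numpy as np

      rng = np.random.default_rng(2)
      L, density, vmax, p = 1000, 0.15, 5, 0.3
      pos = np.sort(rng.choice(L, size=int(density * L), replace=False))
      vel = np.zeros(len(pos), dtype=int)

      def nasch_step(pos, vel):
          # cars stay in cyclic order, so the next array element is the car ahead
          gaps = (np.roll(pos, -1) - pos - 1) % L        # empty cells ahead
          vel = np.minimum(vel + 1, vmax)                # 1) accelerate
          vel = np.minimum(vel, gaps)                    # 2) brake to the gap
          brake = rng.random(len(vel)) < p               # 3) random slowdown
          vel = np.maximum(vel - brake, 0)
          pos = (pos + vel) % L                          # 4) move
          return pos, vel

      for _ in range(1000):
          pos, vel = nasch_step(pos, vel)
      print("mean velocity:", vel.mean())                # order parameter candidate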

  16. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order-)constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained, and hence a smaller sample size is needed. This article discusses this reduction in sample size when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain the sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  17. Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states

    Science.gov (United States)

    de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.

    2015-12-01

    Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.

  18. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed.

  19. Finite Size Effects in Submonolayer Catalysts Investigated by CO Electrosorption on PtsML/Pd(100).

    Science.gov (United States)

    Yuan, Qiuyi; Doan, Hieu A; Grabow, Lars C; Brankovic, Stanko R

    2017-10-04

    A combination of scanning tunneling microscopy, subtractively normalized interfacial Fourier transform infrared spectroscopy (SNIFTIRS), and density functional theory (DFT) is used to quantify the local strain in 2D Pt clusters on the (100) facet of Pd and its effect on CO chemisorption. Good agreement between SNIFTIRS experiments and DFT simulations provides strong evidence that, in the absence of coherent strain between Pt and Pd, finite size effects introduce local compressive strain, which alters the chemisorption properties of the surface. Though this effect has been widely neglected in prior studies, our results suggest that accurate control over cluster sizes in submonolayer catalyst systems can be an effective approach to fine-tune their catalytic properties.

  20. Modeling of finite-size droplets and particles in multiphase flows

    Directory of Open Access Journals (Sweden)

    Prashant Khare

    2015-08-01

    Full Text Available The conventional point-particle approach for treating the dispersed phase in a continuous flowfield is extended by taking into account the effect of finite particle size, using a Gaussian interpolation from Lagrangian points to the Eulerian field. The inter-phase exchange terms in the conservation equations are distributed over the volume encompassing the particle size, as opposed to the Dirac delta function generally used in the point-particle approach. The proposed approach is benchmarked against three different flow configurations in a numerical framework based on large eddy simulation (LES) turbulence closure. First, the flow over a circular cylinder is simulated for a Reynolds number of 3900 at 1 atm pressure. Results show good agreement with experimental data for the mean streamwise velocity and the vortex shedding frequency in the wake region. The calculated flowfield exhibits correct physics, which the conventional point-particle approach fails to capture. The second case deals with diesel jet injection into a quiescent environment over a pressure range of 1.1–5.0 MPa. The calculated jet penetration depth closely matches measurements. It decreases with increasing chamber pressure, due to the enhanced drag force in a denser fluid environment. Finally, water and acetone jet injection normal to an air crossflow is studied at 1 atm. The calculated jet penetration and Sauter mean diameter of the liquid droplets compare very well with measurements.
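
    The core idea, replacing the Dirac delta of the point-particle method with a finite-width kernel, can be shown in one dimension. The sketch below spreads a particle's source term over neighbouring Eulerian cells with a Gaussian whose width is tied to the particle size; names and values are illustrative assumptions, and the paper's actual implementation is three-dimensional inside an LES solver.

```python
import numpy as np

def spread_source(x_grid, x_p, s_p, sigma):
    """Distribute a particle source term s_p over nearby Eulerian cells with a
    Gaussian kernel of width sigma (~ particle radius), instead of assigning
    it all to the cell containing the particle center."""
    dx = x_grid[1] - x_grid[0]
    w = np.exp(-0.5 * ((x_grid - x_p) / sigma) ** 2)
    w /= w.sum()                    # discrete normalisation conserves the total source
    return s_p * w / dx             # per-unit-volume source field

x = np.linspace(0.0, 1.0, 101)
field = spread_source(x, x_p=0.37, s_p=2.0, sigma=0.03)
print(field.sum() * (x[1] - x[0]))  # ~2.0: the total inter-phase source is conserved
```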

  1. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre-randomization and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations among the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced by at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
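
    The quoted reductions can be reproduced from the compound-symmetry variance factor. Assuming the analysis is an ANCOVA on the mean of k follow-up measures adjusted for baseline (a plausible reading of the design, stated here as an assumption), the variance factor relative to a two-sample t-test is (1 + (k-1)*rho)/k - rho^2, and the conservative choice maximises it over rho:

```python
import numpy as np

def conservative_factor(k):
    """Variance factor, relative to a two-sample t-test, of an ANCOVA on the
    mean of k follow-up measures with baseline adjustment, assuming compound
    symmetry with correlation rho; maximised over rho to be conservative."""
    rho = np.linspace(0.0, 1.0, 100001)
    f = (1 + (k - 1) * rho) / k - rho ** 2
    i = f.argmax()
    return rho[i], f[i]

for k in (2, 3, 4):
    rho, f = conservative_factor(k)
    print(f"k={k}: worst-case rho={rho:.3f}, "
          f"sample-size reduction={100 * (1 - f):.0f}%")
# Prints reductions of about 44%, 56%, and 61%, matching the abstract.
```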

  2. The finite sample performance of estimators for mediation analysis under sequential conditional independence

    DEFF Research Database (Denmark)

    Huber, Martin; Lechner, Michael; Mellace, Giovanni

    2016-01-01

    Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independen...... of the methods often (but not always) varies with the features of the data generating process....

  3. Influence of surface and finite size effects on the structural and magnetic properties of nanocrystalline lanthanum strontium perovskite manganites

    Energy Technology Data Exchange (ETDEWEB)

    Žvátora, Pavel [Department of Analytical Chemistry, Institute of Chemical Technology Prague, Technická 5, 166 28 Prague (Czech Republic); Veverka, Miroslav; Veverka, Pavel; Knížek, Karel; Závěta, Karel; Pollert, Emil [Department of Magnetism and Superconductors, Institute of Physics AS CR, Cukrovarnická 10/112, 162 00 Prague (Czech Republic); Král, Vladimír [Department of Analytical Chemistry, Institute of Chemical Technology Prague, Technická 5, 166 28 Prague (Czech Republic); Zentiva Development (Part of Sanofi Group), U Kabelovny 130, 102 37 Prague (Czech Republic); Goglio, Graziella; Duguet, Etienne [CNRS, University of Bordeaux, ICMCB, UPR 9048, 33600 Pessac (France); Kaman, Ondřej, E-mail: kamano@seznam.cz [Department of Magnetism and Superconductors, Institute of Physics AS CR, Cukrovarnická 10/112, 162 00 Prague (Czech Republic); Department of Cell Biology, Faculty of Science, Charles University, Viničná 7, 128 40 Prague (Czech Republic)

    2013-08-15

    Syntheses of nanocrystalline perovskite phases of the general formula La1−xSrxMnO3+δ were carried out employing a sol–gel technique followed by thermal treatment at 700–900 °C under oxygen flow. The prepared samples exhibit a rhombohedral structure with space group R-3c over the whole investigated composition range 0.20 ≤ x ≤ 0.45. The studies were aimed at the chemical composition, including oxygen stoichiometry, and at extrinsic properties, i.e. the size of the particles, both influencing the resulting structural and magnetic properties. The oxygen stoichiometry was determined by chemical analysis, revealing oxygen excess in most of the studied phases. The excess was particularly high for the samples with the smallest crystallites (12–28 nm), while comparative bulk materials showed moderate non-stoichiometry. These differences are tentatively attributed to surface effects, in view of the volume fraction occupied by the upper layer, whose atomic composition does not comply with the ideal bulk stoichiometry. - Graphical abstract: Evolution of the particle size with annealing temperature in the nanocrystalline La0.70Sr0.30MnO3+δ phase. - Highlights: • The magnetic behaviour of nanocrystalline La1−xSrxMnO3+δ phases was analyzed on the basis of their crystal structure, chemical composition and particle size. • Their Curie temperature and magnetization are markedly affected by finite size and surface effects. • The oxygen excess observed in the La1−xSrxMnO3+δ nanoparticles might be generated by a surface layer with deviated oxygen stoichiometry.

  4. A Markov model for the temporal dynamics of balanced random networks of finite size

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and that the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between

  5. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. Both the power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.

  6. Finite size vertex correction to the strong decay of ηc and χc states and a determination of αs(mc)

    International Nuclear Information System (INIS)

    Ping Ronggang; Jiang Huanqing; Zou Bingsong

    2002-01-01

    In previous calculations of the strong decay of a charmonium, only the first-order momentum dependence of the quark propagator is kept. It was found that the finite-size vertex correction to the Γ(J/ψ→3g) process is large. The authors calculate the two-gluon decay widths of η_c, χ_c0 and χ_c2 by including the full momentum dependence of the quark propagator. Compared to the zero-order calculation, the authors find that the finite-size vertex correction factor to the two-gluon decay width of η_c is 1.32, and for the two-gluon decays of χ_c0 and χ_c2 the vertex correction factors are 1.45 and 1.26, respectively. With the corrected decay width Γ(η_c→2g) the authors extract the value α_s(m_c) = 0.28 ± 0.05, which agrees with that calculated from the Γ(J/ψ→3g) process with the same correction. The finite-size vertex correction to the process Γ(η_c→3g) is not as large as that to the process Γ(J/ψ→3g).

  7. Thermodynamic theory of intrinsic finite size effects in PbTiO3 nanocrystals. II. Dielectric and piezoelectric properties

    Science.gov (United States)

    Akdogan, E. K.; Safari, A.

    2007-03-01

    We compute the intrinsic dielectric and piezoelectric properties of single-domain, mechanically free, and surface-charge-compensated PbTiO3 nanocrystals (n-Pt) with no depolarization fields, undergoing a finite-size-induced first-order tetragonal→cubic ferrodistortive phase transition. By using a Landau-Devonshire type free energy functional, in which the Landau coefficients are a function of nanoparticle size, we demonstrate substantial deviations from bulk properties in the range below 150 nm. We find a decrease in the dielectric susceptibility at the transition temperature with decreasing particle size, which we verify to be in conformity with the predictions of lattice dynamics considerations. We also find an anomalous increase in the piezocharge coefficients near ∼15 nm, the critical size for n-Pt.

  8. Asymptotic investigation of the nonlinear boundary value dynamic problem for the systems with finite sizes

    International Nuclear Information System (INIS)

    Andrianov, I.V.; Danishevsky, V.V.

    1994-01-01

    Asymptotic approaches to the nonlinear dynamics of continuous systems are well developed for systems that are infinite in their spatial variables. For systems of finite size we have an infinite number of resonances, and the Poincaré-Lighthill-Kuo (PLK) method does not work. Using an averaging procedure or the method of multiple scales leads to infinite systems of nonlinear algebraic or ordinary differential equations, which must then be truncated; the truncation does not make it possible to obtain all the important properties of the solutions.

  9. Finite-size effect of η-deformed AdS5×S5 at strong coupling

    Directory of Open Access Journals (Sweden)

    Changrim Ahn

    2017-04-01

    Full Text Available We compute Lüscher corrections for a giant magnon in the η-deformed (AdS5×S5)η background using the su(2|2)q-invariant S-matrix at strong coupling and compare with the finite-size effect of the corresponding string state, derived previously. We find that these two results match, confirming that the su(2|2)q-invariant S-matrix describes the world-sheet excitations of the η-deformed background.

  10. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures, where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) of sample sizing for the scanning of soil mixtures, where a balance has to be struck between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to the sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  11. Ruin probabilities with compounding assets for discrete time finite horizon problem, independent period claim sizes and general premium structure

    NARCIS (Netherlands)

    Kok, de A.G.

    2003-01-01

    In this paper, we present fast and accurate approximations for the probability of ruin over a finite number of periods, assuming inhomogeneous independent claim size distributions and arbitrary premium income in subsequent periods. We develop exact recursive expressions for the non-ruin

  12. Ruin probabilities with compounding assets for discrete time finite horizon problems, independent period claim sizes and general premium structure

    NARCIS (Netherlands)

    Kok, de A.G.

    2003-01-01

    In this paper we present fast and accurate approximations for the probability of ruin over a finite number of periods, assuming inhomogeneous independent claim size distributions and arbitrary premium income in subsequent periods. We develop exact recursive expressions for the non-ruin probabilities

  13. Effect of Finite Particle Size on Convergence of Point Particle Models in Euler-Lagrange Multiphase Dispersed Flow

    Science.gov (United States)

    Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.

    2017-11-01

    Point particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid the point particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below particle size. We present an approach to model the interaction between particles and fluid for finite size particles that permits convergence. We use the generalized Faxen form to compute the force on a particle and compare the results against traditional point particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in the cell. The application is to a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where the convergence is achieved for a well-formulated force model and back coupling for finite size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check if the approach also improves accuracy compared to the point particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
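
    A one-dimensional caricature of the apportioning step is easy to write down: each force component is split among the fluid cells a particle overlaps, in proportion to the overlapped fraction of the particle's extent. The sketch below does exactly that for one particle on a uniform grid; it illustrates only the apportioning idea, and the generalized Faxén force evaluation itself is beyond this snippet. All names and values are illustrative.

```python
import numpy as np

def apportion_force(grid_edges, x_p, r_p, F_p):
    """Split a finite-size particle's coupling force among the fluid cells it
    overlaps, weighted by the overlapped fraction of its extent (1D sketch)."""
    left, right = x_p - r_p, x_p + r_p
    lo = np.clip(grid_edges[:-1], left, right)   # overlap interval per cell
    hi = np.clip(grid_edges[1:], left, right)
    overlap = np.maximum(hi - lo, 0.0)
    w = overlap / (2 * r_p)                      # fraction of particle in each cell
    return F_p * w

edges = np.linspace(0.0, 1.0, 11)                # 10 cells of width 0.1
f = apportion_force(edges, x_p=0.33, r_p=0.08, F_p=1.0)
print(f, f.sum())                                # per-cell shares sum to the full force
```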

  14. Finite size scaling analysis of disordered electron systems

    International Nuclear Information System (INIS)

    Markos, P.

    2012-01-01

    We demonstrate the application of the finite size scaling method to the analysis of the transition of a disordered system from the metallic to the insulating regime. The method enables us to calculate the critical point and the critical exponent which determines the divergence of the correlation length in the vicinity of the critical point. The universality of the metal-insulator transition was verified by numerical analysis of various physical parameters, and the critical exponent was calculated with high accuracy for different disordered models. The numerically obtained value of the critical exponent for the three-dimensional disordered model (1) has recently been supported by semi-analytical work and verified by experimental optical measurements equivalent to the three-dimensional disordered model (1). Another unsolved problem of localization is the disagreement between numerical results and the predictions of analytical theories. At present, no analytical theory confirms the numerically obtained values of the critical exponents. The reason for this disagreement lies in the statistical character of the localization process. The theory must consider all possible scattering processes on randomly distributed impurities. All physical variables are statistical quantities with broad probability distributions, and it is in general not known how to calculate their mean values analytically. We believe that detailed numerical analysis of various disordered systems will bring inspiration for the formulation of an analytical theory. (authors)

  15. Finite-size and asymptotic behaviors of the gyration radius of knotted cylindrical self-avoiding polygons.

    Science.gov (United States)

    Shimamura, Miyuki K; Deguchi, Tetsuo

    2002-05-01

    Several nontrivial properties are shown for the mean-square radius of gyration R²(K) of ring polymers with a fixed knot type K. Through computer simulation, we discuss both the finite-size and asymptotic behaviors of the gyration radius under the topological constraint for self-avoiding polygons consisting of N cylindrical segments with radius r. We find that the average size of ring polymers with the knot K can be much larger than that of polygons under no topological constraint. The effective expansion due to the topological constraint depends strongly on the parameter r, which is related to the excluded volume. The topological expansion is particularly significant for the small-r case, where the simulation result is associated with that of random polygons with the knot K.

  16. Sound radiation from finite surfaces

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2013-01-01

    A method to account for the effect of finite size in the acoustic power radiation problem of planar surfaces, using spatial windowing, is developed. Cremer and Heckl present a very useful formula for the power radiated from a structure using the spatially Fourier transformed velocity, which combined ... with spatial windowing of plane waves can be used to take the finite size into account. In the present paper, this is developed by means of a radiation impedance for finite surfaces, which is used instead of the radiation impedance for infinite surfaces. In this way, the spatial windowing is included...

  17. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique for constructing adequate mathematical models for small-size passive samples, under conditions where classical probabilistic-statistical methods do not allow valid conclusions to be obtained, was developed.

  18. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    Science.gov (United States)

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite-sized panel. The focus is the forced sound transmission performance, which predominates in the frequency range below the coincidence frequency. With the point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that, in addition to the panel mass, the forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite-sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance d between the source and panel and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
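
    For orientation, the plane-wave baseline that the paper's spherical-wave analysis refines is the textbook normal-incidence mass law, TL = 10 log10(1 + (omega*m/(2*rho0*c0))^2). A minimal calculator follows; the panel mass and air properties are illustrative assumptions, not values from the paper.

```python
import math

def mass_law_tl(m_s, f, rho0=1.21, c0=343.0):
    """Textbook normal-incidence mass law for an (infinite) panel of surface
    mass m_s [kg/m^2] at frequency f [Hz], in air (rho0, c0)."""
    x = math.pi * f * m_s / (rho0 * c0)   # equals omega*m/(2*rho0*c0)
    return 10 * math.log10(1 + x * x)

# 12.5 mm gypsum board, roughly 10 kg/m^2, at a few frequencies:
for f in (125, 500, 2000):
    print(f, round(mass_law_tl(10.0, f), 1), "dB")
```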

  19. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres.
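
    The closed-form variance underlying those standard errors is Var(b1) = sigma_e^2 / (n * sd_x^2 * (1 - R2_other)), where R2_other is the squared multiple correlation of X with the other covariates. Inverting it for a target power gives the standard normal-approximation sample-size expression sketched below; this is the textbook form, which may differ in detail from the article's heuristics.

```python
import math
from scipy.stats import norm

def n_for_slope(beta1, sigma_e, sd_x, r2_other=0.0, alpha=0.05, power=0.80):
    """Sample size to detect slope beta1 in (multiple) linear regression,
    from Var(b1) = sigma_e^2 / (n * sd_x^2 * (1 - r2_other));
    r2_other = 0 recovers simple regression."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return (z / beta1) ** 2 * sigma_e ** 2 / (sd_x ** 2 * (1 - r2_other))

print(math.ceil(n_for_slope(beta1=0.3, sigma_e=1.0, sd_x=1.0)))                 # simple
print(math.ceil(n_for_slope(beta1=0.3, sigma_e=1.0, sd_x=1.0, r2_other=0.5)))   # adjusted
```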

  20. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.
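
    The conical limit described above is easy to observe numerically. The sketch below simulates a one-spike covariance model and tracks the angle between the leading sample eigenvector and its population counterpart as the dimension grows with the sample size and spike fixed; all parameter values are illustrative.

```python
import numpy as np

def leading_angle(n, d, spike, seed=0):
    """Angle (degrees) between leading sample and population eigenvectors for
    a one-spike model Sigma = spike * e1 e1^T + I, from n observations."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    X[:, 0] *= np.sqrt(spike + 1.0)           # inject the spike variance along e1
    S = X.T @ X / n
    v = np.linalg.eigh(S)[1][:, -1]           # leading sample eigenvector
    return np.degrees(np.arccos(abs(v[0])))   # |<v, e1>| = |v[0]|

# The ratio d / (n * spike) governs consistency: growing d with n and the
# spike fixed drives the sample eigenvector onto a cone around e1.
for d in (200, 1000, 2000):
    print(d, round(leading_angle(n=50, d=d, spike=30.0), 1))
```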

  1. Charge and finite size corrections for virtual photon spectra in second order Born approximation

    International Nuclear Information System (INIS)

    Durgapal, P.

    1982-01-01

    The purpose of this work is to investigate the effects of finite nuclear size and charge on the spectrum of virtual photons emitted when a relativistic electron is scattered in the field of an atomic nucleus. The method consisted of expanding the scattering cross section in terms of integrals over the nuclear inelastic form factor, with a kernel which was evaluated in second-order Born approximation and derived from the elastic electron-scattering form factor. The kernel could be evaluated analytically provided the elastic form factor contained only poles; for this reason the author used a Yukawa form factor. Before calculating the second-order term, the author studied the first-order term containing finite-size effects in the inelastic form factor. The author observed that the virtual photon spectrum is insensitive to the details of the inelastic distribution over a large range of energies and depends only on the transition radius. This gave the author the freedom of choosing an inelastic distribution for which the form factor has only poles, and a modified form of the exponential distribution was chosen, which enabled the matrix element to be evaluated analytically. The remaining integral over the physical momentum transfer was performed numerically. The author evaluated the virtual photon spectra for E1 and M1 transitions for a variety of electron energies using several nuclei and compared the results with distorted-wave calculations. Except for low energy and high Z, the second-order results compared well with the distorted-wave calculations.

  2. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  3. Size validity of plasma-metamaterial cloaking monitored by scattering wave in finite-difference time-domain method

    Directory of Open Access Journals (Sweden)

    Alexandre Bambina

    2018-01-01

    Full Text Available The limitation of cloak-size reduction is investigated numerically by a finite-difference time-domain (FDTD) method. A metallic pole that imitates an antenna is cloaked with an anisotropic, parameter-graded medium against electromagnetic-wave propagation in the microwave range. The cloaking structure is a metamaterial submerged in a plasma confined in a vacuum chamber made of glass. The smooth-permittivity plasma can be compressed in the radial direction, which enables us to decrease the size of the cloak. Theoretical analysis is performed numerically by comparing the scattered waves in various cases; there is a strong reduction of the scattered wave when the radius of the cloak is larger than a quarter of a wavelength. This result indicates that the required size of the cloaking layer is larger than the object scale in the Rayleigh scattering regime.

  4. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  5. Proposal for element size and time increment selection guideline by 3-D finite element method for elastic waves propagation analysis

    International Nuclear Information System (INIS)

    Ishida, Hitoshi; Meshii, Toshiyuki

    2008-01-01

    This paper proposes a guideline for the selection of element size and time increment in the 3-D finite element method, as applied to elastic wave propagation analysis over long distances in large structures. The element size and time increment are determined by quantitative evaluation of the spurious strain caused by spatial and time discretization, which must be zero on an analysis model undergoing uniform motion. (author)
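
    The paper derives its guideline from the strain criterion above; absent those details, the widely used rules of thumb for explicit elastic-wave FEM point in the same direction: resolve the shortest wavelength with roughly ten elements and respect the Courant condition. A minimal calculator under those assumed rules (material values are illustrative):

```python
def mesh_guideline(f_max, c_min, c_max, elems_per_wavelength=10, cfl=1.0):
    """Common rules of thumb for explicit elastic-wave FEM (not the paper's
    strain-based criterion): ~10 elements per shortest wavelength, and a time
    increment bounded by the Courant condition for the fastest wave."""
    h = c_min / (f_max * elems_per_wavelength)  # element size from lambda_min
    dt = cfl * h / c_max                        # stable time increment
    return h, dt

# Steel-like medium: shear speed ~3200 m/s, longitudinal ~5900 m/s, 100 kHz.
h, dt = mesh_guideline(f_max=100e3, c_min=3200.0, c_max=5900.0)
print(f"h <= {h*1e3:.1f} mm, dt <= {dt*1e9:.0f} ns")
```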

  6. Diffusion of finite-sized hard-core interacting particles in a one-dimensional box: Tagged particle dynamics.

    Science.gov (United States)

    Lizana, L; Ambjörnsson, T

    2009-11-01

    We solve a nonequilibrium statistical-mechanics problem exactly, namely, the single-file dynamics of N hard-core interacting particles (the particles cannot pass each other) of size Δ diffusing in a one-dimensional system of finite length L with reflecting boundaries at the ends. We obtain an exact expression for the conditional probability density function ρ_T(y_T, t | y_T,0) that a tagged particle T (T = 1, ..., N) is at position y_T at time t, given that at time t = 0 it was at position y_T,0. Using a Bethe ansatz we obtain the N-particle probability density function and, by integrating out the coordinates (and averaging over initial positions) of all particles but particle T, we arrive at an exact expression for ρ_T(y_T, t | y_T,0) in terms of Jacobi polynomials or hypergeometric functions. Going beyond previous studies, we consider the asymptotic limit of large N, maintaining L finite, using a nonstandard asymptotic technique. We derive an exact expression for ρ_T(y_T, t | y_T,0) for a tagged particle located roughly in the middle of the system, from which we find that there are three time regimes of interest for finite-sized systems: (A) for times much smaller than the collision time, t << τ_coll, where τ_coll is set by the particle concentration and the diffusion constant D of each particle, the tagged particle undergoes normal diffusion; (B) for times much larger than the collision time, t >> τ_coll, but smaller than the equilibrium time, t << τ_eq, the tagged particle shows the subdiffusive behavior characteristic of single-file systems; and (C) for t >> τ_eq, ρ_T(y_T, t | y_T,0) approaches a polynomial-type equilibrium probability density function.
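
    The crossover between regimes (A) and (B) can be seen with a few lines of Brownian dynamics. The sketch below simulates point-like (Δ = 0) hard-core particles between reflecting walls, enforcing the non-passing constraint by re-sorting (equivalent to elastic exchange for identical particles), and records the tagged particle's squared displacement; parameters are illustrative, and a clean MSD curve requires averaging over many independent runs.

```python
import numpy as np

def single_file_msd(n_part=51, box=1.0, dt=1e-6, n_steps=20000, seed=0):
    """Brownian dynamics of point-like hard-core particles on a line with
    reflecting walls; returns the tagged (middle) particle's squared
    displacement at each step."""
    rng = np.random.default_rng(seed)
    D = 1.0
    x = np.sort(rng.uniform(0.0, box, n_part))
    tag = n_part // 2                  # follow the middle particle
    x0 = x[tag]
    msd = np.empty(n_steps)
    for i in range(n_steps):
        x += rng.normal(0.0, np.sqrt(2 * D * dt), n_part)
        x = np.clip(x, 0.0, box)       # reflecting boundaries (crude)
        x = np.sort(x)                 # restore single-file order
        msd[i] = (x[tag] - x0) ** 2
    return msd

msd = single_file_msd()
# Expect msd ~ t for t << tau_coll, crossing over to ~ sqrt(t) growth
# in the single-file regime tau_coll << t << tau_eq.
```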

  7. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
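
    Under the normal model discussed above, the conditional power at an interim with information fraction t = n1/n, assuming the currently observed trend continues, has the closed form CP = Phi((Z1/sqrt(t) - z_{1-alpha}) / sqrt(1-t)). A small helper (one-sided alpha assumed; this is the standard current-trend formula, not the article's exact criterion):

```python
import math
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    """Conditional power at information fraction t = n1/n, given interim
    z-statistic z1, under the 'current trend' assumption for normal data."""
    return norm.cdf((z1 / math.sqrt(t) - norm.ppf(1 - alpha)) / math.sqrt(1 - t))

# A 'promising' interim in the 50% sense discussed above is one with CP > 0.5.
for z1 in (0.5, 1.0, 1.5):
    print(z1, round(conditional_power(z1, t=0.5), 3))
```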

  8. Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point

    Energy Technology Data Exchange (ETDEWEB)

    Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)

    2016-12-15

    We study the scaling properties of Higgs-Yukawa models. Using the technique of finite-size scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us the advantage of being able to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.

  9. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and content-analysed for sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a premeditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  10. Bayesian analysis of finite population sampling in multivariate co-exchangeable structures with separable covariance matrices

    OpenAIRE

    Shaw, Simon C.; Goldstein, Michael

    2017-01-01

    We explore the effect of finite population sampling in design problems with many variables cross-classified in many ways. In particular, we investigate designs where we wish to sample individuals belonging to different groups for which the underlying covariance matrices are separable between groups and variables. We exploit the generalised conditional independence structure of the model to show how the analysis of the full model can be reduced to an interpretable series of lower dimensional p...

  11. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. Data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. An MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many operational and scientific challenges in conducting drug development globally. One of the many important questions to answer in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches.

  12. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  13. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI), which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework for acquiring electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
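
    The reconstruction step the abstract alludes to is the annihilating-filter (Prony) method of FRI theory: K Diracs can be recovered from 2K+1 consecutive Fourier coefficients by finding the filter that annihilates them and rooting it. A self-contained numerical sketch on synthetic Diracs (not EEG data; the paper's full pipeline involves more than this step):

```python
import numpy as np

def fri_dirac_locations(x_hat, K, tau=1.0):
    """Annihilating-filter step of FRI sampling: recover K Dirac locations on
    [0, tau) from 2K+1 consecutive Fourier coefficients x_hat[0..2K]."""
    # Toeplitz system A h = 0, with h the length-(K+1) annihilating filter.
    A = np.array([[x_hat[K + i - l] for l in range(K + 1)]
                  for i in range(K + 1)])
    h = np.linalg.svd(A)[2][-1].conj()          # null vector = filter coefficients
    u = np.roots(h)                             # roots u_k = exp(-2j*pi*t_k/tau)
    return np.sort((-np.angle(u) * tau / (2 * np.pi)) % tau)

# Synthetic test: 3 Diracs with amplitudes a_k at times t_k.
tau, K = 1.0, 3
t = np.array([0.12, 0.45, 0.80]); a = np.array([1.0, 0.6, 1.4])
m = np.arange(2 * K + 1)
x_hat = (a * np.exp(-2j * np.pi * np.outer(m, t) / tau)).sum(axis=1)
print(fri_dirac_locations(x_hat, K, tau))       # ~ [0.12 0.45 0.80]
```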

  14. Group-invariant finite Fourier transforms

    International Nuclear Information System (INIS)

    Shenefelt, M.H.

    1988-01-01

    The computation of the finite Fourier transform of functions is one of the most used computations in crystallography. Since the Fourier transform involved is 3-dimensional, the size of the computation becomes very large even for relatively few sample points along each edge. In this thesis, a family of algorithms is presented that reduces the computation of the Fourier transform for functions respecting the symmetries. Some properties of these algorithms are: (1) The algorithms make full use of the group of symmetries of a crystal. (2) The algorithms can be factored and combined according to the prime factorization of the number of points in the sample space. (3) The algorithms are organized into a family using the group structure of the crystallographic groups to make iterative procedures possible.

  15. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  16. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    US Army Research Laboratory report ARL-RP-0528, August 2015: Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride.

  17. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Energy Technology Data Exchange (ETDEWEB)

    Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  18. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    Science.gov (United States)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB

  19. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    Science.gov (United States)

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
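
    For scale, the leading net-charge contribution to the periodicity artefact described in the three records above can be estimated from the cubic-lattice Wigner constant. The sketch below computes only that first term, with an assumed solvent permittivity; the paper's full numerical and analytical schemes include several further contributions, so treat this strictly as a back-of-the-envelope check.

```python
XI_EW = -2.837297        # cubic-lattice Wigner constant (dimensionless)
E2_KJ_NM = 138.935458    # e^2/(4*pi*eps0) in kJ mol^-1 nm

def net_charge_correction(q_lig, q_prot, eps_s, box_edge_nm):
    """Leading-order lattice-sum self-interaction term of the finite-size
    correction to a charging free energy: the change in the spurious periodic
    self-energy xi_EW * Q^2 / (8*pi*eps0*eps_s*L) upon charging the ligand.
    q_lig, q_prot in units of e; result in kJ/mol."""
    dq2 = (q_lig + q_prot) ** 2 - q_prot ** 2   # change in squared net charge
    return -XI_EW * E2_KJ_NM * dq2 / (2 * eps_s * box_edge_nm)

# Ligand +1 e bound to a -5 e protein isoform in a 7.42 nm box, assuming a
# water-model permittivity of ~66:
print(round(net_charge_correction(1.0, -5.0, 66.0, 7.42), 2), "kJ/mol")
```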

  20. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of the outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) a much smaller required sample size.
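
    A simulation cross-check of such sample-size formulas is straightforward: generate covariate and outcome from the assumed logistic model, fit, and count rejections. The sketch below does this for simple logistic regression with a standard-normal covariate; the parameters are illustrative, and this is a generic Monte Carlo check, not the paper's logit-normal derivation.

```python
import numpy as np
import statsmodels.api as sm

def logistic_power(n, beta0=-1.0, beta1=0.5, alpha=0.05, n_sim=500, seed=0):
    """Monte Carlo power of the Wald test for beta1 in simple logistic
    regression with x ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = (rng.random(n) < p).astype(float)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        hits += fit.pvalues[1] < alpha
    return hits / n_sim

# Scan candidate sample sizes until the estimated power reaches the target.
for n in (100, 150, 200):
    print(n, logistic_power(n))
```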

  1. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT

    International Nuclear Information System (INIS)

    Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-01-01

    Purpose: The use of a sophisticated dose calculation procedure in modern radiation therapy treatment planning is inevitable in order to account for the complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the sizes of the beamlets representing an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of different-sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Results: The root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with an RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where the RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with an RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphics processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a

  2. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT.

    Science.gov (United States)

    Park, Justin C; Li, Jonathan G; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-04-01

    The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the sizes of the beamlets representing an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of differently sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a volumetric modulated arc

  3. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
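
    Two interim ingredients of such a design can be sketched as follows: estimating the proportion of true nulls from stage-one p-values (Storey's estimator) and applying the Benjamini-Hochberg step-up rule. The paper's actual rule for choosing the second-stage sample size from these estimates is not reproduced; the function names and the tuning constant lam are illustrative.

    ```python
    # Sketch: interim estimation of the true-null proportion plus FDR control.
    import numpy as np

    def storey_pi0(pvals, lam=0.5):
        """Estimate the proportion of true null hypotheses from p-values."""
        return min(1.0, np.mean(np.asarray(pvals) > lam) / (1.0 - lam))

    def benjamini_hochberg(pvals, q=0.05):
        """Boolean rejection mask for the BH step-up procedure at level q."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= q * np.arange(1, m + 1) / m
        k = below.nonzero()[0].max() + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject
    ```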

  4. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
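
    In the same spirit, a minimal precision-based calculation that treats agreement as a simple proportion can be sketched as below. It is not the authors' goodness-of-fit derivation, and the agreement level and interval half-width are invented planning values.

    ```python
    # Sketch: subjects needed so a 95% CI for the proportion of agreement p0
    # has half-width d (normal approximation).
    import math
    from scipy.stats import norm

    def n_for_agreement(p0, d, alpha=0.05):
        z = norm.ppf(1 - alpha / 2)
        return math.ceil(z ** 2 * p0 * (1 - p0) / d ** 2)

    print(n_for_agreement(0.85, 0.05))  # -> 196 paired ratings
    ```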

  5. Probabilistic finite element stiffness of a laterally loaded monopile based on an improved asymptotic sampling method

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2015-01-01

    The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.
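
    The Sobol ingredient can be sketched as below: draw scrambled low-discrepancy points from the unit hypercube and map them to the standard-normal variables that parameterize the random field. The dimension and sample count are illustrative; the qmc module used here requires SciPy >= 1.7.

    ```python
    # Sketch: Sobol quasi-random sampling mapped to standard normal space.
    from scipy.stats import norm, qmc

    sampler = qmc.Sobol(d=4, scramble=True)  # e.g. 4 random-field variables
    u = sampler.random(n=256)                # 256 points in the unit hypercube
    z = norm.ppf(u)                          # standard-normal inputs
    ```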

  6. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)
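
    The flavour of such a calculation can be conveyed by the textbook normal-theory formula below: the smallest variables sample size for which a one-sided test with false-alarm probability alpha detects a loss of a given size with the required probability. This is a generic sketch, not Gladitz's exact derivation; sigma and the loss are invented.

    ```python
    # Sketch: minimum variables sample size for given false-alarm and
    # detection probabilities (normal theory).
    import math
    from scipy.stats import norm

    def min_sample_size(sigma, loss, alpha=0.05, detection=0.95):
        z = norm.ppf(1 - alpha) + norm.ppf(detection)
        return math.ceil((z * sigma / loss) ** 2)

    print(min_sample_size(sigma=1.0, loss=0.5))  # -> 44
    ```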

  7. 3D visualization and finite element mesh formation from wood anatomy samples, Part II – Algorithm approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    Full Text Available This paper presents the new application WOOD3D in the form of assembled program code. The work extends the previous article "Part I – Theoretical approach" with a detailed description of the implemented C++ classes of the utilized projects Visualization Toolkit (VTK), Insight Toolkit (ITK) and MIMX. The code is written in CMake style and is available as a multiplatform application; builds for GNU Linux (32/64b) and MS Windows (32/64b) have currently been released. The article discusses various filter classes for image filtering; mainly the Otsu and binary threshold filters are assessed for thresholding of wood anatomy samples. Registration of image series is emphasized, and compensation for differences of colour spaces is included. The resulting image-analysis workflow is a new methodological approach to processing images through composition, visualization, filtering, registration and finite element mesh formation. The application generates a script in the ANSYS parametric design language (APDL) which is fully compatible with the ANSYS finite element solver and designer environment. The script includes the whole definition of the unstructured finite element mesh formed by individual elements and nodes. Due to its simple notation, the same script can be used for generating geometrical entities at element positions; volumetric entities formed in this way are prepared for further geometry approximation (e.g. by boolean or more advanced methods). Hexahedral and tetrahedral types of mesh elements are formed on user request with specified mesh options. Hexahedral meshes are formed both with uniform element size and with anisotropic character; a modified octree method for hexahedral meshes with anisotropic character was implemented in the application. Multicore CPUs are supported for fast image analysis. Visualization of image series and of the consequent 3D image is realized in the sufficiently known and public VTK format, visualized in the GPL application Paraview. Future work based on mesh

  8. Mechanisms of self-organization and finite size effects in a minimal agent based model

    International Nuclear Information System (INIS)

    Alfi, V; Cristelli, M; Pietronero, L; Zaccaria, A

    2009-01-01

    We present a detailed analysis of the self-organization phenomenon in which the stylized facts originate from finite size effects with respect to the number of agents considered and disappear in the limit of an infinite population. By introducing the possibility that agents can enter or leave the market depending on the behavior of the price, it is possible to show that the system self-organizes in a regime with a finite number of agents which corresponds to the stylized facts. The mechanism for entering or leaving the market is based on the idea that a too stable market is unappealing for traders, while the presence of price movements attracts agents to enter and speculate on the market. We show that this mechanism is also compatible with the idea that agents are scared by a noisy and risky market at shorter timescales. We also show that the mechanism for self-organization is robust with respect to variations of the exit/entry rules and that the attempt to trigger the system to self-organize in a region without stylized facts leads to an unrealistic dynamics. We study the self-organization in a specific agent based model but we believe that the basic ideas should be of general validity.

  9. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as to analyze the scores related to foot health and overall health. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, the measurements of foot and footwear size were determined, and the scores were compared, using the Spanish version of the Foot Health Status Questionnaire, between the group wearing the correct shoe size and the group wearing an incorrect shoe size. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  10. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, č. 3 (2010), s. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords : rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

  11. Numerical simulation of temperature distribution using finite difference equations and estimation of the grain size during friction stir processing

    International Nuclear Information System (INIS)

    Arora, H.S.; Singh, H.; Dhindaw, B.K.

    2012-01-01

    Highlights: ► Magnesium alloy AE42 was friction stir processed under different cooling conditions. ► Heat flow model was developed using finite difference heat equations. ► Generalized MATLAB code was developed for solving heat flow model. ► Regression equation for estimation of grain size was developed. - Abstract: The present investigation is aimed at developing a heat flow model to simulate the temperature history during friction stir processing (FSP). A new approach of developing an implicit form of finite difference heat equations solved using MATLAB code was used. A magnesium based alloy AE42 was friction stir processed (FSPed) at different FSP parameters and cooling conditions. Temperature history was continuously recorded in the nugget zone during FSP using a data acquisition system and K-type thermocouples. The developed code was validated at different FSP parameters and cooling conditions during FSP experimentation. The temperature history at different locations in the nugget zone at different instants of time was further utilized for the estimation of the grain growth rate and final average grain size of the FSPed specimen. A regression equation relating the final grain size, maximum temperature during FSP and the cooling rate was developed. The metallurgical characterization was done using optical microscopy, SEM, and FIB-SIM analysis. The simulated temperature profiles and final average grain size were found to be in good agreement with the experimental results. The presence of fine precipitate particles generated in situ in the investigated magnesium alloy also contributed to the evolution of the fine grain structure through the Zener pinning effect at the grain boundaries.
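
    The class of scheme involved can be illustrated with the simplest explicit (FTCS) finite-difference discretization of the 1D heat equation; the paper itself uses an implicit formulation, and the diffusivity, grid, and source values below are arbitrary placeholders.

    ```python
    # Sketch: explicit finite-difference (FTCS) march of the 1D heat equation.
    import numpy as np

    alpha = 1e-5           # thermal diffusivity, m^2/s (placeholder)
    dx, dt = 1e-3, 0.02    # grid spacing (m) and time step (s)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for r > 1/2"

    T = np.full(101, 25.0)     # initial temperature field, deg C
    for _ in range(1000):      # march in time
        T[50] = 500.0          # hold the heat source at the tool position
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    ```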

  12. Suppression of bottomonia states in finite size quark gluon plasma in PbPb collisions at LHC

    International Nuclear Information System (INIS)

    Shukla, P.; Abdulsalam, Abdulla; Kumar, Vineet

    2012-01-01

    The paper estimates the suppression of bottomonium states in an expanding QGP of finite lifetime and size under conditions relevant for PbPb collisions at the LHC. Recent results on the properties of ϒ states have been used as an ingredient in the study. The nuclear modification factor and the ratios of yields of ϒ states are then obtained as functions of transverse momentum and centrality. The study compares the calculations with the bottomonia yields measured in Pb+Pb collisions at √s_NN = 2.76 TeV

  13. Finite nuclear size and Lamb shift of p-wave atomic states

    International Nuclear Information System (INIS)

    Milstein, A.I.; Sushkov, O.P.; Terekhov, I.S.

    2003-01-01

    We consider corrections to the Lamb shift of p-wave atomic states due to the finite nuclear size (FNS). In other words, these are radiative corrections to the atomic isotope shift related to the FNS. It is shown that the structure of the corrections is qualitatively different from that for the s-wave states. The perturbation theory expansion for the relative correction for a p1/2 state starts with an α ln(1/Zα) term, while for the s1/2 states it starts with a Zα² term. Here, α is the fine-structure constant and Z is the nuclear charge. In the present work, we calculate the α terms for the 2p states; the result for the 2p1/2 state reads (8α/9π){ln[1/(Zα)²] + 0.710}. Even more interesting are the p3/2 states. In this case the 'correction' is several orders of magnitude larger than the 'leading' FNS shift. However, the absolute values of the energy shifts related to these corrections are very small
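
    Since the quoted result is a closed-form expression, it can be evaluated directly; the Z values below are illustrative.

    ```python
    # Sketch: evaluating the quoted 2p1/2 relative FNS correction,
    # (8*alpha/(9*pi)) * (ln(1/(Z*alpha)^2) + 0.710).
    import math

    alpha = 1 / 137.035999
    for Z in (1, 10, 50):
        rel = (8 * alpha / (9 * math.pi)) * (math.log(1 / (Z * alpha) ** 2) + 0.710)
        print(f"Z = {Z:2d}: relative correction ~ {rel:.2e}")
    ```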

  14. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.

  15. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
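
    The parametric-bootstrap power idea can be sketched as below: draw size-n samples from a gamma distribution whose mean sits above the management threshold and count how often a one-sided test rejects. The constant-CV variance model is a placeholder, not the fitted mean-to-variance relationship from the West Virginia data.

    ```python
    # Sketch: power to detect a mean above a Se threshold, by parametric bootstrap.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def power(n_fish, true_mean, threshold, cv=0.4, alpha=0.05, n_sim=5000):
        var = (cv * true_mean) ** 2                     # placeholder variance model
        shape, scale = true_mean ** 2 / var, var / true_mean
        hits = 0
        for _ in range(n_sim):
            x = rng.gamma(shape, scale, size=n_fish)    # simulated fish tissue Se
            p = stats.ttest_1samp(x, threshold, alternative="greater").pvalue
            hits += p < alpha
        return hits / n_sim

    print(power(n_fish=8, true_mean=5.0, threshold=4.0))
    ```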

  16. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
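
    The non-robust method-of-moments (Matheron) estimator at the heart of this comparison can be sketched in a few lines; the lag bins and input field are left to the caller and are not the study's simulated throughfall fields.

    ```python
    # Sketch: Matheron's method-of-moments empirical variogram in 2D.
    import numpy as np

    def empirical_variogram(coords, values, bin_edges):
        """gamma(h) = half the mean squared increment per lag-distance bin."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)      # each point pair counted once
        dist, sq = d[iu], sq[iu]
        gamma = []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            m = (dist >= lo) & (dist < hi)
            gamma.append(0.5 * sq[m].mean() if m.any() else np.nan)
        return np.asarray(gamma)
    ```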

  17. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5 (corresponding to CI widths of 1.6× and 1× the mean), and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed to low values; a sample size of 10 host individuals yielded unreliable estimates.
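
    The formula-based route can be sketched with the classical negative-binomial precision relation, under which the standard error of the mean abundance is sqrt((m + m²/k)/n). This Elliott-style expression is our reading of the approach, not necessarily the authors' exact formula, and the parameter values are invented.

    ```python
    # Sketch: hosts needed so that SE(mean abundance) <= D * mean, for a
    # negative binomial with mean m and aggregation parameter k.
    import math

    def n_hosts(m, k, D):
        """D: target relative precision, e.g. 0.2 for SE = 20% of the mean."""
        return math.ceil((1.0 / m + 1.0 / k) / D ** 2)

    print(n_hosts(m=12.0, k=0.5, D=0.2))  # strong aggregation (small k) inflates n
    ```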

  18. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. The performance of the exact method is compared to its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design thus retains the advantages and rationale of the two-group design with generally smaller required sample sizes. 2010 John Wiley & Sons, Ltd.
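
    An exact-Poisson sample-size search of the kind described can be sketched as follows: grow the per-group person-time until a one-sided exact test of the background rate achieves the target power against the elevated rate. The historical-control weighting of the hybrid design is not reproduced, and the rates are invented.

    ```python
    # Sketch: smallest person-time n giving 80% power for an exact Poisson test.
    from scipy.stats import poisson

    def exact_poisson_n(rate0, rate1, alpha=0.05, power=0.80, step=10):
        n = step
        while True:
            mu0, mu1 = n * rate0, n * rate1
            c = poisson.ppf(1 - alpha, mu0) + 1   # reject when X >= c
            if poisson.sf(c - 1, mu1) >= power:
                return n, int(c)
            n += step

    print(exact_poisson_n(rate0=0.001, rate1=0.003))
    ```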

  19. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  20. Power-law correlations and finite-size effects in silica particle aggregates studied by small-angle neutron scattering

    DEFF Research Database (Denmark)

    Freltoft, T.; Kjems, Jørgen; Sinha, S. K.

    1986-01-01

    Small-angle neutron scattering from normal, compressed, and water-suspended powders of aggregates of fine silica particles has been studied. The samples possessed average densities ranging from 0.008 to 0.45 g/cm³. Assuming power-law correlations between particles and a finite correlation length ξ, the authors derive the scattering function S(q) from specific models for particle-particle correlation in these systems. S(q) was found to provide a satisfactory fit to the data for all samples studied. The fractal dimension d_f corresponding to the power-law correlation was 2.61±0.1 for all dry samples, and 2...

  1. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  2. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  3. Synchronization of finite-size particles by a traveling wave in a cylindrical flow

    Science.gov (United States)

    Melnikov, D. E.; Pushkin, D. O.; Shevtsova, V. M.

    2013-09-01

    Motion of small finite-size particles suspended in a cylindrical thermocapillary flow with an azimuthally traveling wave is studied experimentally and numerically. At certain flow regimes the particles spontaneously align in dynamic accumulation structures (PAS) of spiral shape. We find that long-time trajectories of individual particles in this flow fall into three basic categories that can be described, borrowing the dynamical systems terminology, as the stable periodic, the quasiperiodic, and the quasistable periodic orbits. Besides these basic types of orbits, we observe the "doubled" periodic orbits and shuttle-like particle trajectories. We find that ensembles of particles having periodic orbits give rise to one-dimensional spiral PAS, while ensembles of particles having quasiperiodic orbits form two-dimensional PAS of toroidal shape. We expound the reasons why these types of orbits and the emergence of the corresponding accumulation structures should naturally be anticipated based on the phase locking theory of PAS formation. We give a further discussion of PAS features, such as the finite thickness of PAS spirals and the probable scenarios of the spiral PAS destruction. Finally, in numerical simulations of inertial particles we observe formation of the spiral structures corresponding to the 3:1 "resonance" between the particle turnover frequency and the wave oscillations frequency, thus confirming another prediction of the phase locking theory. In view of the generality of the arguments involved, we expect the importance of this structure-forming mechanism to go far beyond the realm of the laboratory-friendly thermocapillary flows.

  4. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  5. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness.

  6. Hierarchical finite element modeling of SiCp/Al2124 T4 composites with dislocation plasticity and size dependent failure

    International Nuclear Information System (INIS)

    Suh, Yeong Sung; Kim, Yong Bae

    2012-01-01

    The strength of particle reinforced metal matrix composites is, in general, known to be increased by the geometrically necessary dislocations punched around a particle that form during cooling after consolidation because of the coefficient of thermal expansion (CTE) mismatch between the particle and the matrix. An additional strength increase may also be observed, since another type of geometrically necessary dislocation can be formed during extensive deformation as a result of the strain gradient plasticity due to the elastic-plastic mismatch between the particle and the matrix. In this paper, the magnitudes of these two types of dislocations are calculated based on dislocation plasticity. The dislocations are then converted to the respective strengths and allocated hierarchically to the matrix around the particle in an axisymmetric finite element unit cell model. The proposed method is shown to be very effective by performing finite element strength analysis of SiCp/Al2124 T4 composites that includes ductile failure in the matrix and particle-matrix decohesion. The predicted results for different particle sizes and volume fractions show that the length scale effect of the particle size clearly affects the strength and failure behavior of particle reinforced metal matrix composites

  7. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  8. Finite-size effect of the dyonic giant magnons in N=6 super Chern-Simons theory

    International Nuclear Information System (INIS)

    Ahn, Changrim; Bozhilov, P.

    2009-01-01

    We consider finite-size effects for the dyonic giant magnon of the type IIA string theory on AdS4 × CP3 by applying the Luescher μ-term formula, which is derived from a recently proposed S-matrix for the N=6 super Chern-Simons theory. We compute explicitly the effect for the case of a symmetric configuration where the two external bound states, each of A and B particles, have the same momentum p and spin J2. We compare this with the classical string theory result which we computed by reducing it to the Neumann-Rosochatius system. The two results match perfectly.

  9. 1/f noise from the laws of thermodynamics for finite-size fluctuations.

    Science.gov (United States)

    Chamberlin, Ralph V; Nasir, Derek M

    2014-07-01

    Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone; whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.

  10. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  11. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  12. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.

  13. JacketSE: An Offshore Wind Turbine Jacket Sizing Tool; Theory Manual and Sample Usage with Preliminary Validation

    Energy Technology Data Exchange (ETDEWEB)

    Damiani, Rick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-02-08

    This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparisons with industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.

  14. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  15. Finite Discrete Gabor Analysis

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    frequency bands at certain times. Gabor theory can be formulated both for functions on the real line and for discrete signals of finite length. The two theories are largely the same because many aspects come from the same underlying theory of locally compact Abelian groups. The two types of Gabor systems can also be related by sampling and periodization. This thesis extends this theory by showing new results for window construction. It also provides a discussion of the problems associated with discrete Gabor bases. The sampling and periodization connection is handy because it allows Gabor systems on the real line to be well approximated by finite and discrete Gabor frames. This method of approximation is especially attractive because efficient numerical methods exist for doing computations with finite, discrete Gabor systems. This thesis presents new algorithms for the efficient computation of finite...

  16. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  17. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the sample size required for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The variables observed in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants; the others were obtained by adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap resamplings with replacement. The sample size for each correlation coefficient was determined as that for which the amplitude of the 95% confidence interval was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation; accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with size and number of fruits per plant have less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
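
    The resampling loop can be sketched as below: for each candidate n, repeatedly resample n plants, recompute the correlation, and check the width of the 95% percentile interval against the 0.4 criterion. The synthetic trait data stand in for the uniformity-trial measurements.

    ```python
    # Sketch: bootstrap width of the 95% CI for a Pearson correlation vs n.
    import numpy as np

    rng = np.random.default_rng(42)

    def ci_width(x, y, n, n_boot=3000):
        r = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, len(x), size=n)   # resample n plants
            r[b] = np.corrcoef(x[idx], y[idx])[0, 1]
        lo, hi = np.percentile(r, [2.5, 97.5])
        return hi - lo

    x = rng.normal(size=400)                # synthetic fruit-trait pair,
    y = 0.4 * x + rng.normal(size=400)      # weakly correlated
    for n in (50, 100, 200, 275):
        print(n, round(ci_width(x, y, n), 3))
    ```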

  18. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e., the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
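
    The core quantile-matching step can be sketched with an empirical transfer function; the gamma-distributed series below are synthetic stand-ins for the RCM output and the E-OBS observations, and the 99-point quantile grid is an arbitrary choice.

    ```python
    # Sketch: empirical quantile mapping from model to observed climatology.
    import numpy as np

    rng = np.random.default_rng(0)
    obs_cal = rng.gamma(2.0, 3.0, size=10950)   # ~30 years of daily "observations"
    mod_cal = rng.gamma(2.0, 4.0, size=10950)   # biased model output, same period
    mod_val = rng.gamma(2.0, 4.0, size=3650)    # validation-period model output

    q = np.linspace(0.01, 0.99, 99)
    tf_model = np.quantile(mod_cal, q)          # model quantiles (calibration)
    tf_obs = np.quantile(obs_cal, q)            # observed quantiles
    corrected = np.interp(mod_val, tf_model, tf_obs)
    ```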

  19. 3D visualization and finite element mesh formation from wood anatomy samples, Part I – Theoretical approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2009-01-01

    Full Text Available The work summarizes the algorithms created for the formation of a finite element (FE) mesh derived from a bitmap pattern. The process of registration, segmentation and meshing is described in detail. The C++ Standard Template Library together with the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) were used for the base processing of images. Several methods for appropriate mesh output are discussed. A multiplatform application, WOOD3D, was assembled for the task under the GNU GPL license. Several methods of segmentation and, mainly, different ways of contouring were included. Tetrahedral and rectilinear types of mesh were programmed. Simple ways of improving mesh quality are mentioned. Testing and verification of the final program on wood anatomy samples of spruce and walnut was realized. Methods of preparing microscopic anatomy samples are depicted. Final utilization of the formed mesh in a simple structural analysis was performed. The article discusses the main problems in image analysis due to incompatible colour spaces, sample preparation, thresholding and the final conversion into a finite element mesh. Assembling these tasks together and evaluating the application are the main original results of the presented work. Two thresholding filters were used in the presented program; by utilization of ITK, Otsu-based and binary filters were included. The most problematic tasks were the production of wood anatomy samples under unique light conditions with minimal or zero colour-space shift, and the subsequent appropriate definition of thresholds (the corresponding thresholding parameters and connected methods, prefiltering + registration, which influence the continuity and mainly the separation of the wood anatomy structure). A solution based on sample staining is suggested, followed by rapid image analysis. Another original result of the work is a complex, fully automated application which offers three types of finite element mesh

  20. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  1. Guided wave radiation in a finite-sized metallic or composite plate-like structure for its nondestructive testing

    International Nuclear Information System (INIS)

    Stevenin, Mathilde

    2016-01-01

    Different models are developed to provide generic tools for simulating nondestructive methods relying on elastic guided waves applied to metallic or composite plates. Various inspection methods for these structures exist or are under study. Most of them make use of ultrasonic sources of finite size; all are sensitive to reflection phenomena resulting from the finite size of the monitored objects. The developed models deal with transducer diffraction effects and edge reflection. As the interpretation of signals measured in guided wave inspection often uses the concept of modes, the models themselves are explicitly modal. The cases of isotropic (metal) and anisotropic (multilayer composite) plates are considered; a general approach under the stationary phase approximation allows us to treat all the cases of interest. For the former, the validity of a Fraunhofer-like approximation leads to a very efficient computation of the direct and reflected fields radiated by a source. For the latter, special attention is paid to the treatment of caustics. Since the stationary phase approximation is difficult to generalize, a model of a more geometrical nature (the so-called 'pencil model') is proposed, with a high degree of genericity. It chains terms of isotropic or anisotropic propagation with terms of interaction with a boundary. The equivalence of the stationary phase approximation and the pencil model is demonstrated for radiation and reflection in an isotropic plate, for which an experimental validation is performed. (author) [fr]

  2. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields in the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  3. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
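
    The paper's remedy, planning with an upper confidence limit (UCL) of the pilot SD instead of the raw sample SD, can be sketched as follows; the two-group normal-approximation formula and all pilot numbers below are illustrative assumptions.

    ```python
    # Sketch: two-group sample size planned with the 80% UCL of a pilot SD.
    import math
    from scipy.stats import chi2, norm

    def sd_ucl(s, n_pilot, level=0.80):
        """One-sided upper confidence limit for the population SD."""
        df = n_pilot - 1
        return s * math.sqrt(df / chi2.ppf(1 - level, df))

    def n_per_group(sd, delta, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sd / delta) ** 2)

    s_pilot = 40.0   # pilot-study SD (invented)
    print(n_per_group(s_pilot, delta=22))                # naive plan
    print(n_per_group(sd_ucl(s_pilot, 25), delta=22))    # UCL-based plan
    ```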

  4. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  5. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
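    The final comparison step described above is straightforward to emulate. The following is a minimal sketch, assuming a hypothetical lognormal population of positional errors (the study's actual data come from buffer-based methods on homologous polygons); it only illustrates how a Kolmogorov–Smirnov test quantifies the agreement between a sample and the population distribution, and how variability shrinks with sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "population" of positional errors (m) for homologous polygons.
population = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# Compare small vs. large samples against the population distribution,
# in the spirit of the paper's 1000-run simulation.
for label, n in [("small sample", 50), ("large sample", 2000)]:
    sample = rng.choice(population, size=n, replace=False)
    stat, p = stats.ks_2samp(sample, population)
    print(f"{label:12s}: KS statistic = {stat:.3f}, p = {p:.3f}")
```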

  6. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.

  7. A qualitative test for intrinsic size effect on ferroelectric phase transitions

    OpenAIRE

    Wang, Jin; Tagantsev, Alexander K.; Setter, Nava

    2010-01-01

    The size effect in ferroelectrics is treated as a competition between the geometrical symmetry of the ferroelectric sample and its crystalline symmetry. The manifestation of this competition is shown to be polarization rotation, which is driven by temperature and/or size variations, thus providing a qualitative indication of an intrinsic finite size effect in ferroelectrics. The concept is demonstrated in the simple case of PbTiO3 nanowires with their axes parallel to the [111]C direction, where the...

  8. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
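    As a companion to this discussion, here is a minimal sketch of the standard hit/miss POD model (a logistic model in log flaw size) fitted to synthetic inspection data; the flaw sizes, the true POD curve and the a90 computation are all illustrative assumptions, and the confidence-bound machinery that drives sample-size recommendations in the ENIQ work is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic hit/miss inspection data: detection probability rises with flaw size.
size = rng.uniform(0.5, 10.0, 300)                      # flaw size, mm
true_pod = 1 / (1 + np.exp(-(np.log(size) - np.log(3.0)) / 0.25))
hit = rng.random(300) < true_pod

# Classical hit/miss POD model: logistic link on log(size).
X = np.log(size).reshape(-1, 1)
model = LogisticRegression(C=1e6).fit(X, hit)           # large C => ~no penalty
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# a90: the flaw size detected with 90% probability.
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print(f"estimated a90 = {a90:.2f} mm")
```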

  9. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  10. Length and temperature dependence of the mechanical properties of finite-size carbyne

    Science.gov (United States)

    Yang, Xueming; Huang, Yanhui; Cao, Bingyang; To, Albert C.

    2017-09-01

    Carbyne is an ideal one-dimensional conductor and the thinnest possible interconnect for an ultimate nano-device, which requires an understanding of the mechanical properties that affect device performance and reliability. Here, we report the mechanical properties of finite-size carbyne, obtained from a molecular dynamics simulation study based on the adaptive intermolecular reactive empirical bond order potential. To avoid ambiguity in assigning the effective cross-sectional area of carbyne, the experimentally deduced value (4.148 Å²) was adopted in our study. End-constraint effects on the ultimate stress (maximum force) of the carbyne chains are investigated, revealing that the molecular dynamics simulation results agree very well with the experimental results. The ultimate strength, Young's modulus and maximum strain of carbyne are rather sensitive to temperature, and all decrease with increasing temperature. Opposite tendencies in the length dependence of the overall ultimate strength and maximum strain of carbyne at room temperature and at very low temperature were found, and analysis shows that this originates from the end effects of carbyne.

  11. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  12. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public have paid more attention to the post-marketing research of Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have a desired power for correctly detecting a clinically meaningful difference between the medicines under study, if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  13. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  14. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
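    The Jensen's-inequality effect described in these two records can be demonstrated in a few lines. The sketch below uses a hypothetical two-stage projection matrix with survival of 0.5 (fecundity treated as known), re-estimates the vital rates from binomial samples of n individuals, and reports the bias in the dominant eigenvalue lambda; the matrix structure and replication counts are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# "True" two-stage projection matrix: juvenile/adult survival and fecundity.
s_j, s_a, f = 0.5, 0.5, 1.2
A_true = np.array([[0.0, f], [s_j, s_a]])
lam_true = np.max(np.linalg.eigvals(A_true).real)

for n in [10, 25, 50, 100, 500]:            # individuals sampled per stage
    lams = []
    for _ in range(2000):
        # Vital rates re-estimated from binomial samples of n individuals.
        A = np.array([[0.0, f],
                      [rng.binomial(n, s_j) / n, rng.binomial(n, s_a) / n]])
        lams.append(np.max(np.linalg.eigvals(A).real))
    bias = np.mean(lams) - lam_true
    print(f"n = {n:4d}: mean lambda = {np.mean(lams):.4f}, bias = {bias:+.4f}")
```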

  15. Thermal conductivity of graphene mediated by strain and size

    International Nuclear Information System (INIS)

    Kuang, Youdi; Shi, Sanqiang; Wang, Xinjiang

    2016-01-01

    Based on first-principles calculations and a full iterative solution of the linearized Boltzmann–Peierls transport equation for phonons, we systematically investigate the effects of strain, size and temperature on the thermal conductivity k of suspended graphene. The calculated size-dependent and temperature-dependent k for finite samples agree well with experimental data. The results show that, in contrast to the convergent room-temperature k = 5450 W/m-K of unstrained graphene at a sample size of ~8 cm, k of strained graphene diverges with increasing sample size even at high temperature. Out-of-plane acoustic phonons are responsible for the significant size effect in unstrained and strained graphene due to their ultralong mean free paths, and acoustic phonons with wavelengths smaller than 10 nm contribute 80% of the intrinsic room-temperature k of unstrained graphene. Tensile strain hardens the flexural modes and increases their lifetimes, causing an interesting dependence of k on sample size and strain due to the competition between boundary scattering and intrinsic phonon–phonon scattering. k of graphene can be tuned over a large range by strain for sizes larger than 500 μm. These findings shed light on the nature of thermal transport in two-dimensional materials and may guide predicting and engineering k of graphene by varying strain and size.

  16. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  17. Finite size effects in lattice QCD with dynamical Wilson fermions

    Energy Technology Data Exchange (ETDEWEB)

    Orth, B.

    2004-06-01

    Due to limited computing resources, choosing the parameters for a full lattice QCD simulation always amounts to a compromise between the competing objectives of a lattice spacing as small, quarks as light, and a volume as large as possible. Aiming at pushing unquenched simulations with the standard Wilson action towards the computationally expensive regime of small quark masses, the GRAL project addresses the question whether computing time can be saved by sticking to lattices with rather modest numbers of grid sites and extrapolating the finite-volume results to the infinite volume (prior to the usual chiral and continuum extrapolations). In this context we investigate in this work finite-size effects in simulated light hadron masses. Understanding their systematic volume dependence may not only help save computer time in light quark simulations with the Wilson action, but also guide future simulations with dynamical chiral fermions which for a foreseeable time will be restricted to rather small lattices. We analyze data from hybrid Monte Carlo simulations with the N_f = 2 Wilson action at two values of the coupling parameter, β = 5.6 (lattice spacing a ≈ 0.08 fm) and β = 5.32144 (a ≈ 0.13 fm). The larger β corresponds to the coupling used previously by SESAM/TχL. The considered hopping parameters κ = 0.1575, 0.158 (at the larger β) and κ = 0.1665 (at the smaller β) correspond to quark masses of 85, 50 and 36% of the strange quark mass, respectively. At each quark mass we study at least three different lattice extents in the range from L = 10 to L = 24 (0.85-2.04 fm). Estimates of autocorrelation times in the stochastic updating process and of the computational cost of every run are given. For each simulated sea quark mass we calculate quark propagators and hadronic correlation functions in order to extract the pion, rho and nucleon masses as well as the pion decay constant and the quark mass.

  18. Finite-size effects in the spectrum of the OSp (3 | 2) superspin chain

    Science.gov (United States)

    Frahm, Holger; Martins, Márcio J.

    2015-05-01

    The low energy spectrum of a spin chain with OSp (3 | 2) supergroup symmetry is studied based on the Bethe ansatz solution of the related vertex model. This model is a lattice realization of intersecting loops in two dimensions with loop fugacity z = 1 which provides a framework to study the critical properties of the unusual low temperature Goldstone phase of the O (N) sigma model for N = 1 in the context of an integrable model. Our finite-size analysis provides strong evidence for the existence of continua of scaling dimensions, the lowest of them starting at the ground state. Based on our data we conjecture that the so-called watermelon correlation functions decay logarithmically with exponents related to the quadratic Casimir operator of OSp (3 | 2). The presence of a continuous spectrum is not affected by a change to the boundary conditions although the density of states in the continua appears to be modified.

  19. Finite-size effects in the spectrum of the OSp(3|2) superspin chain

    Directory of Open Access Journals (Sweden)

    Holger Frahm

    2015-05-01

    Full Text Available The low energy spectrum of a spin chain with OSp(3|2) supergroup symmetry is studied based on the Bethe ansatz solution of the related vertex model. This model is a lattice realization of intersecting loops in two dimensions with loop fugacity z=1 which provides a framework to study the critical properties of the unusual low temperature Goldstone phase of the O(N) sigma model for N=1 in the context of an integrable model. Our finite-size analysis provides strong evidence for the existence of continua of scaling dimensions, the lowest of them starting at the ground state. Based on our data we conjecture that the so-called watermelon correlation functions decay logarithmically with exponents related to the quadratic Casimir operator of OSp(3|2). The presence of a continuous spectrum is not affected by a change to the boundary conditions although the density of states in the continua appears to be modified.

  20. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
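    For planning purposes, the efficiency loss from varying cluster sizes is often approximated with a first-order design effect rather than the second-order PQL machinery used in the paper. The sketch below uses the common Eldridge-type correction DEFF = 1 + ((CV^2 + 1) * m_bar - 1) * ICC for a continuous outcome; all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy import stats

def clusters_needed(m_bar, icc, cv, delta, sd, alpha=0.05, power=0.80):
    """Clusters per arm from the common design-effect approximation
    DEFF = 1 + ((cv^2 + 1) * m_bar - 1) * icc (Eldridge-type correction)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_indiv = 2 * (z * sd / delta) ** 2              # unadjusted n per arm
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc
    return int(np.ceil(n_indiv * deff / m_bar))

k_equal = clusters_needed(m_bar=20, icc=0.05, cv=0.0, delta=0.3, sd=1.0)
k_vary  = clusters_needed(m_bar=20, icc=0.05, cv=0.7, delta=0.3, sd=1.0)
print(k_equal, k_vary, f"extra clusters: {100 * (k_vary / k_equal - 1):.0f}%")
```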

  1. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
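    The beta-binomial risk calculation at the heart of C-LQAS can be sketched as follows; the decision rule, thresholds and intracluster correlation below are invented for illustration, and the paper's full design (which also weighs per-cluster versus per-individual costs) is not reproduced.

```python
from scipy import stats

def bb_params(p, rho):
    """Beta-binomial parameters for mean p and intracluster correlation rho."""
    return p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho

n, d = 50, 40             # sample size; decision rule: accept if >= d successes
p_hi, p_lo = 0.90, 0.70   # acceptable and unacceptable coverage thresholds
rho = 0.1                 # clustering inflates variance relative to the binomial

for label, p in [("alpha (reject good lot)", p_hi),
                 ("beta  (accept bad lot) ", p_lo)]:
    a, b = bb_params(p, rho)
    accept = 1 - stats.betabinom.cdf(d - 1, n, a, b)   # P(X >= d)
    risk = 1 - accept if p == p_hi else accept
    print(label, f"= {risk:.3f}")
```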

  2. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
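    The resampling logic of the study is simple to imitate. The sketch below draws bootstrap samples of increasing size from a synthetic "population" in which lesion load explains roughly 5% of the variance in a deficit score (an assumed figure, not the paper's), and shows how the estimated proportion of variance explained both scatters and inflates at small n.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population: lesion load explains ~5% of variance in a deficit score.
N = 360
lesion = rng.normal(size=N)
deficit = 0.23 * lesion + rng.normal(size=N)     # r ~ 0.22, R^2 ~ 0.05

for n in [30, 60, 90, 180, 360]:
    r2 = []
    for _ in range(5000):
        idx = rng.choice(N, size=n, replace=True)          # bootstrap resample
        r2.append(np.corrcoef(lesion[idx], deficit[idx])[0, 1] ** 2)
    r2 = np.array(r2)
    print(f"n = {n:3d}: median R^2 = {np.median(r2):.3f}, "
          f"95% range = [{np.quantile(r2, 0.025):.3f}, {np.quantile(r2, 0.975):.3f}]")
```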

  3. Finite element simulation of the T-shaped ECAP processing of round samples

    Science.gov (United States)

    Shaban Ghazani, Mehdi; Fardi-Ilkhchy, Ali; Binesh, Behzad

    2018-05-01

    Grain refinement is the only mechanism that increases the yield strength and toughness of materials simultaneously. Severe plastic deformation is one of the promising methods to refine the microstructure of materials. Among the different severe plastic deformation processes, T-shaped equal channel angular pressing (T-ECAP) is a relatively new technique. In the present study, finite element analysis was conducted to evaluate the deformation behavior of metals during the T-ECAP process. The study focused mainly on flow characteristics, plastic strain distribution and its homogeneity, damage development, and pressing force, which are among the most important factors governing the sound and successful processing of nanostructured materials by severe plastic deformation techniques. The results showed that plastic strain is localized on the bottom side of the sample and that uniform deformation is not possible using T-ECAP processing. The friction coefficient between the sample and the die channel wall has little effect on the strain distributions in the mirror plane and transverse plane of the deformed sample. Damage analysis showed that superficial cracks may initiate from the bottom side of the sample and that their propagation will be limited due to the compressive state of stress. It was demonstrated that a V-shaped deformation zone exists in the T-ECAP process and that the pressing load needed for execution of the deformation process increases with friction.

  4. Importance of elastic finite-size effects: Neutral defects in ionic compounds

    Science.gov (United States)

    Burr, P. A.; Cooper, M. W. D.

    2017-09-01

    Small system sizes are a well-known source of error in density functional theory (DFT) calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterized, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behavior and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behavior predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768, and 1500 atoms), and careful analysis determines that elastic, not electrostatic, effects are responsible. The spurious self-interaction was also observed in nonoxide ionic compounds irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g., hybrid functionals) or when modeling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.

  5. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both the samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Transition to collective oscillations in finite Kuramoto ensembles

    Science.gov (United States)

    Peter, Franziska; Pikovsky, Arkady

    2018-03-01

    We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.
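    A minimal simulation makes the proposed indicator concrete: integrate a finite Kuramoto ensemble and track the minimal amplitude of the complex order parameter after the transient. The parameters below (Lorentzian frequencies, coupling slightly above the infinite-N critical value, simple Euler stepping) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, dt, steps = 200, 1.2, 0.05, 4000
omega = rng.standard_cauchy(N) * 0.5      # natural frequencies (one finite sample)
theta = rng.uniform(0, 2 * np.pi, N)

r_min = np.inf
for t in range(steps):
    z = np.mean(np.exp(1j * theta))        # complex Kuramoto order parameter
    # Mean-field form of the Kuramoto equations, integrated with Euler steps.
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    if t > steps // 2:                     # discard the transient
        r_min = min(r_min, np.abs(z))

print(f"minimal |z| after transient: {r_min:.3f}")
```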

  7. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  8. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, although certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.

  9. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    Science.gov (United States)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.

  10. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) have established a programme of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) whose porosity both supports the sampling and provides size separation of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, indicating that maximum likelihood estimation is an asymptotically unbiased estimator. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
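    For readers who want to reproduce the modelling step, the sketch below fits a two-component Gaussian mixture by maximum likelihood (via scikit-learn's EM implementation) to synthetic data; the rubber-price and exchange-rate series analysed in the paper are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic stand-in for a returns-type series with two regimes.
x = np.concatenate([rng.normal(-0.5, 0.3, 400), rng.normal(0.8, 0.6, 600)])

# Two-component Gaussian mixture fitted by maximum likelihood (EM algorithm).
gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight = {w:.2f}, mean = {mu:+.2f}, sd = {np.sqrt(var):.2f}")
```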

  12. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes" involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications, are clearly described, and the process is put in a form that allows systematic generalization.

  13. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
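    The multinomial-weights formulation described above translates directly into array code. The following NumPy sketch (rather than the paper's R) computes all bootstrap replications of Pearson's correlation with a handful of matrix products; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
n, B = 100, 10_000
x, y = rng.normal(size=n), rng.normal(size=n)
y += 0.5 * x                                   # induce some correlation

# Multinomial formulation: each bootstrap replicate is a weight vector over
# the observed points rather than a resampled data set.
W = rng.multinomial(n, np.full(n, 1 / n), size=B) / n    # (B, n) weight matrix

# All B weighted Pearson correlations via matrix products.
mx, my = W @ x, W @ y
cov = W @ (x * y) - mx * my
vx = W @ (x * x) - mx ** 2
vy = W @ (y * y) - my ** 2
r_boot = cov / np.sqrt(vx * vy)

print(f"bootstrap 95% CI for r: "
      f"[{np.quantile(r_boot, 0.025):.3f}, {np.quantile(r_boot, 0.975):.3f}]")
```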

  14. Analytical realization of finite-size scaling for Anderson localization. Does the band of critical states exist for d > 2?

    International Nuclear Information System (INIS)

    Suslov, I. M.

    2006-01-01

    An analytical realization is suggested for the finite-size scaling algorithm based on the consideration of auxiliary quasi-1D systems. Comparison of the obtained analytical results with the results of numerical calculations indicates that the Anderson transition point splits into the band of critical states. This conclusion is supported by direct numerical evidence (Edwards, Thouless, 1972; Last, Thouless, 1974; Schreiber, 1985). The possibility of restoring the conventional picture still exists but requires a radical reinterpretation of the raw numerical data

  15. Finite size giant magnons in the SU(2) x SU(2) sector of AdS4 x CP3

    International Nuclear Information System (INIS)

    Lukowski, Tomasz; Sax, Olof Ohlsson

    2008-01-01

    We use the algebraic curve and Lüscher's μ-term to calculate the leading order finite size corrections to the dispersion relation of giant magnons in the SU(2) x SU(2) sector of AdS4 x CP3. We consider a single magnon as well as one magnon in each SU(2). In addition the algebraic curve computation is generalized to give the leading order correction for an arbitrary multi-magnon state in the SU(2) x SU(2) sector.

  16. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...

  17. Finite size effects on the experimental observables of the Glauber model: a theoretical and experimental investigation

    International Nuclear Information System (INIS)

    Vindigni, A.; Bogani, L.; Gatteschi, D.; Sessoli, R.; Rettori, A.; Novak, M.A.

    2004-01-01

    We investigate the relaxation time, τ, of a dilute Glauber kinetic Ising chain obtained by ac susceptibility and SQUID magnetometry on a Co(II)-organic radical Ising 1D ferrimagnet doped with Zn(II). Theoretically we predicted a crossover in the temperature-dependence of τ when the average segment length is of the same order as the correlation length. Comparing the experimental results with theory we conclude that in the investigated temperature range the correlation length exceeds the finite length also in the pure sample.

  18. Finite size effects on the experimental observables of the Glauber model: a theoretical and experimental investigation

    Energy Technology Data Exchange (ETDEWEB)

    Vindigni, A. E-mail: alessandro.vindigni@unifi.it; Bogani, L.; Gatteschi, D.; Sessoli, R.; Rettori, A.; Novak, M.A

    2004-05-01

    We investigate the relaxation time, τ, of a dilute Glauber kinetic Ising chain obtained by ac susceptibility and SQUID magnetometry on a Co(II)-organic radical Ising 1D ferrimagnet doped with Zn(II). Theoretically we predicted a crossover in the temperature-dependence of τ when the average segment length is of the same order as the correlation length. Comparing the experimental results with theory we conclude that in the investigated temperature range the correlation length exceeds the finite length also in the pure sample.

  19. Finite size effects on the experimental observables of the Glauber model: a theoretical and experimental investigation

    Science.gov (United States)

    Vindigni, A.; Bogani, L.; Gatteschi, D.; Sessoli, R.; Rettori, A.; Novak, M. A.

    2004-05-01

    We investigate the relaxation time, τ, of a dilute Glauber kinetic Ising chain obtained by ac susceptibility and SQUID magnetometry on a Co(II)-organic radical Ising 1D ferrimagnet doped with Zn(II). Theoretically we predicted a crossover in the temperature-dependence of τ when the average segment length is of the same order as the correlation length. Comparing the experimental results with theory we conclude that in the investigated temperature range the correlation length exceeds the finite length also in the pure sample.

  20. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
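    As a concrete companion to the formulas discussed, here is one common normal-approximation sample size formula for a non-inferiority comparison of two proportions; the article's own formulas and SAS realization may differ in detail, and the rates and margin below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
    """Per-group sample size for a non-inferiority test of two proportions
    (one-sided alpha), using the usual normal approximation."""
    z = stats.norm.ppf(1 - alpha) + stats.norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return int(np.ceil(z ** 2 * var / (p_t - p_c + margin) ** 2))

# Example: both drugs assumed 80% effective, non-inferiority margin 10%.
print(n_noninferiority(p_t=0.80, p_c=0.80, margin=0.10))   # ~252 per group
```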

  1. The finite-size effect in thin liquid crystal systems

    Science.gov (United States)

    Śliwa, I.

    2018-05-01

    Effects of surface ordering in liquid crystal systems confined between cell plates are of great theoretical and experimental interest. Liquid crystals introduced in thin cells are known to be strongly stabilized and ordered by cell plates. We introduce a new theoretical method for analyzing the effect of surfaces on local molecular ordering in thin liquid crystal systems with planar geometry of the smectic layers. Our results show that, due to the interplay between pair long-range intermolecular forces and nonlocal, relatively short-range, surface interactions, both orientational and translational orders of liquid crystal molecules across confining cells are very complex. In particular, it is demonstrated that the SmA, nematic, and isotropic phases can coexist. The phase transitions from SmA to nematic, as well as from nematic to isotropic phases, occur not simultaneously in the whole volume of the system but begin to appear locally in some regions of the LC sample. Phase transition temperatures are demonstrated to be strongly affected by the thickness of the LC system. The dependence of the corresponding shifts of phase transition temperatures on the layer number is shown to exhibit a power law character. This new type of scaling behavior is concerned with the coexistence of local phases in finite systems. The influence of a specific character of interactions of molecules with surfaces and other molecules on values of the resulting critical exponents is also analyzed.

  2. Dependence of exponents on text length versus finite-size scaling for word-frequency distributions

    Science.gov (United States)

    Corral, Álvaro; Font-Clos, Francesc

    2017-08-01

    Some authors have recently argued that a finite-size scaling law for the text-length dependence of word-frequency distributions cannot be conceptually valid. Here we give solid quantitative evidence for the validity of this scaling law, using both careful statistical tests and analytical arguments based on the generalized central-limit theorem applied to the moments of the distribution (and obtaining a novel derivation of Heaps' law as a by-product). We also find that the picture of word-frequency distributions with power-law exponents that decrease with text length [X. Yan and P. Minnhagen, Physica A 444, 828 (2016), 10.1016/j.physa.2015.10.082] does not stand up to rigorous statistical analysis. Instead, we show that the distributions are perfectly described by power-law tails with stable exponents, whose values are close to 2, in agreement with the classical Zipf's law. Some misconceptions about scaling are also clarified.

  3. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
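    The extrapolation idea can be illustrated with a simpler stand-in for the paper's procedure: fit a saturating accumulation curve to richness estimates obtained at increasing library sizes and read off the asymptote. The richness values below are invented for illustration, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical richness estimates obtained at increasing clone-library sizes
# (values invented; the paper analyses 13,001 real 16S rRNA clones).
lib_size = np.array([500, 1000, 2000, 4000, 8000, 13000])
richness = np.array([3200, 5400, 8100, 11000, 13600, 15000])

def saturating(n, s_max, k):
    """Michaelis-Menten-type accumulation: estimate -> s_max as n -> infinity."""
    return s_max * n / (k + n)

(s_max, k), _ = curve_fit(saturating, lib_size, richness, p0=(20000, 5000))
print(f"sample-size-unbiased richness estimate ~ {s_max:.0f} species")
```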

  4. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is discussed controversially in public. Thus, from a biometrical point of view, an optimal sample size should be aimed at in these projects. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, the required information is often not valid or only becomes available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  5. Monte Carlo Simulation Of The Portfolio-Balance Model Of Exchange Rates: Finite Sample Properties Of The GMM Estimator

    OpenAIRE

    Hong-Ghi Min

    2011-01-01

    Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.

  6. Atomistic origin of size effects in fatigue behavior of metallic glasses

    Science.gov (United States)

    Sha, Zhendong; Wong, Wei Hin; Pei, Qingxiang; Branicio, Paulo Sergio; Liu, Zishun; Wang, Tiejun; Guo, Tianfu; Gao, Huajian

    2017-07-01

    While many experiments and simulations on metallic glasses (MGs) have focused on their tensile ductility under monotonic loading, the fatigue mechanisms of MGs under cyclic loading still remain largely elusive. Here we perform molecular dynamics (MD) and finite element simulations of tension-compression fatigue tests in MGs to elucidate their fatigue mechanisms with focus on the sample size effect. Shear band (SB) thickening is found to be the inherent fatigue mechanism for nanoscale MGs. The difference in fatigue mechanisms between macroscopic and nanoscale MGs originates from whether the SB forms partially or fully through the cross-section of the specimen. Furthermore, a qualitative investigation of the sample size effect suggests that small sample size increases the fatigue life while large sample size promotes cyclic softening and necking. Our observations on the size-dependent fatigue behavior can be rationalized by the Gurson model and the concept of surface tension of the nanovoids. The present study sheds light on the fatigue mechanisms of MGs and can be useful in interpreting previous experimental results.

  7. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
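
    The abstract does not spell the procedure out; one plausible reading — treating each pair of terminal digits as a cluster and sampling clusters at random — can be sketched as follows (the roster and all names are hypothetical):

    ```python
    import random

    def ssn_cluster_sample(records, n_clusters, seed=None):
        """Keep every record whose SSN ends in one of n_clusters randomly
        chosen two-digit endings (one illustrative reading of the method)."""
        rng = random.Random(seed)
        chosen = set(rng.sample(range(100), n_clusters))
        return [r for r in records if int(r["ssn"][-2:]) in chosen]

    # Hypothetical roster; expected sample size ~ len(roster) * n_clusters / 100.
    rng = random.Random(1)
    roster = [{"id": i, "ssn": f"{rng.randrange(10**9):09d}"} for i in range(5000)]
    print(len(ssn_cluster_sample(roster, n_clusters=10, seed=42)))
    ```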

  8. Radiative nonrecoil nuclear finite size corrections of order $\\alpha(Z \\alpha)^5$ to the Lamb shift in light muonic atoms

    OpenAIRE

    Faustov, R. N.; Martynenko, A. P.; Martynenko, F. A.; Sorokin, V. V.

    2017-01-01

    On the basis of quasipotential method in quantum electrodynamics we calculate nuclear finite size radiative corrections of order α(Zα)5 to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of particles, which gives the necessary contributions to the energy spectrum, we use the method of projection operators to states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator and the amplitude...

  9. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: 'how capable are such stability interpretations in drawing conclusions?' There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset consists of an appropriate sample size. (ii) Smaller subsets were created with similar
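
    Step (i) — deriving a sample size from the complete dataset's mean difference and standard deviation at a chosen significance level and power — follows the standard normal-approximation formula; a sketch with illustrative numbers only:

    ```python
    from scipy.stats import norm

    def n_per_group(delta, sd, alpha=0.05, power=0.8):
        """Observations per group to detect a mean stability difference
        `delta` given standard deviation `sd` (normal approximation)."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) * sd / delta) ** 2

    # Hypothetical stability-score difference between two aspect classes.
    print(f"{n_per_group(delta=0.5, sd=1.2):.0f} tests per aspect class")
    ```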

  10. Finite element method calculations of GMI in thin films and sandwiched structures: Size and edge effects

    International Nuclear Information System (INIS)

    Garcia-Arribas, A.; Barandiaran, J.M.; Cos, D. de

    2008-01-01

    The impedance values of magnetic thin films and magnetic/conductor/magnetic sandwiched structures with different widths are computed using the finite element method (FEM). The giant magneto-impedance (GMI) is calculated from the difference between the impedance values obtained with high and low permeability of the magnetic material. The results depend considerably on the width of the sample, demonstrating that edge effects are decisive for GMI performance. It is shown that, besides the usual skin effect that is responsible for GMI, an 'unexpected' increase of the current density takes place at the lateral edge of the sample. In magnetic thin films this effect is dominant when the permeability is low. In the trilayers, it is combined with the lack of shielding of the central conductor at the edge. The resulting effects on GMI are shown to be large for both kinds of samples. The conclusions of this study are of great importance for the successful design of miniaturized GMI devices.

  11. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ε-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. Also, the performance of SVR depends on both kernel function
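
    A minimal scikit-learn sketch of the comparison on synthetic data; the study's well-log features, kernel choices, and loss functions are not reproduced here:

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                    # stand-in for log features
    y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.1 * rng.normal(size=500)
    X_test, y_test = X[200:], y[200:]

    for n_train in (20, 50, 200):                    # small to moderate samples
        svr = SVR(kernel="rbf", epsilon=0.05).fit(X[:n_train], y[:n_train])
        mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0).fit(X[:n_train], y[:n_train])
        print(n_train,
              round(mean_squared_error(y_test, svr.predict(X_test)), 4),
              round(mean_squared_error(y_test, mlp.predict(X_test)), 4))
    ```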

  12. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  13. The exact solution and the finite-size behaviour of the Osp(1|2)-invariant spin chain

    International Nuclear Information System (INIS)

    Martins, M.J.

    1995-01-01

    We have solved exactly the Osp(1|2) spin chain by the Bethe ansatz approach. Our solution is based on an equivalence between the Osp(1|2) chain and a certain special limit of the Izergin-Korepin vertex model. The completeness of the Bethe ansatz equations is discussed for a system with four sites and the appearance of special string structures is noted. The Bethe ansatz presents an important phase factor which distinguishes the even and odd sectors of the theory. The finite-size properties are governed by a conformal field theory with central charge c=1. (orig.)

  14. Finite-size fluctuations and photon statistics near the polariton condensation transition in a single-mode microcavity

    International Nuclear Information System (INIS)

    Eastham, P. R.; Littlewood, P. B.

    2006-01-01

    We consider polariton condensation in a generalized Dicke model, describing a single-mode cavity containing quantum dots, and extend our previous mean-field theory to allow for finite-size fluctuations. Within the fluctuation-dominated regime the correlation functions differ from their (trivial) mean-field values. We argue that the low-energy physics of the model, which determines the photon statistics in this fluctuation-dominated crossover regime, is that of the (quantum) anharmonic oscillator. The photon statistics at the crossover are different in the high-temperature and low-temperature limits. When the temperature is high enough for quantum effects to be neglected we recover behavior similar to that of a conventional laser. At low enough temperatures, however, we find qualitatively different behavior due to quantum effects

  15. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of the calculation, as well as on the accuracy of the sample size calculation. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR −4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  16. A parametric study of laser spot size and coverage on the laser shock peening induced residual stress in thin aluminium samples

    Directory of Open Access Journals (Sweden)

    M. Sticchi

    2015-07-01

    Full Text Available Laser Shock Peening is a fatigue enhancement treatment that uses laser energy to induce compressive Residual Stresses (RS) in the outer layers of metallic components. This work describes the variation of the introduced RS field with peen size and coverage for thin metal samples treated with under-water LSP. The specimens under investigation were of aluminium alloys AA2024-T351, AA2139-T3, AA7050-T76 and AA7075-T6, with thickness 1.9 mm. The RS were measured using Hole Drilling with Electronic Speckle Pattern Interferometry and X-ray Diffraction. Of particular interest are the effects of the above-mentioned parameters on the zero-depth value, which gives an indication of the amount of RS through the thickness, and on the value of the surface compressive stresses, which indicates the magnitude of the induced stresses. A 2D-axisymmetric Finite Element model was created for a preliminary estimation of the stress field trend. From the experimental results, correlated with numerical and analytical analyses, the following conclusions can be drawn: increasing the spot size increases the zero-depth value with no significant change in the maximum compressive stress; increasing the coverage leads to a significant increase of the compressive stress; and thin samples of Al alloys with a low Hugoniot Elastic Limit (HEL) reveal a deeper compression field than alloys with higher HEL values.

  17. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
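
    The distributional comparison can be reproduced in outline with a two-sample Kolmogorov-Smirnov test on the aspect ratios; the data below are synthetic stand-ins, and the choice of this particular non-parametric test is an assumption:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    # Hypothetical aspect ratios (length/width) measured from TEM images
    # of two nanorod samples that differ mainly in distribution width.
    aspect_a = rng.normal(loc=3.5, scale=0.30, size=400)
    aspect_b = rng.normal(loc=3.5, scale=0.45, size=400)

    stat, p = ks_2samp(aspect_a, aspect_b)
    print(f"KS statistic = {stat:.3f}, p = {p:.2g}")
    ```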

  18. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  19. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution and an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspections, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore, game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) approximate sample allocation with a correctly applied standard binomial approximation, (2) approximate sample allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
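
    The practical difference between the two distributions can be illustrated by computing the smallest sample size that detects at least one defect with a target probability; the population figures below are hypothetical:

    ```python
    from scipy.stats import hypergeom, binom

    def min_sample_size(N, D, p_detect=0.95, with_replacement=False):
        """Smallest n such that P(draw >= 1 of the D defectives among N)
        reaches p_detect, with or without replacement."""
        for n in range(1, N + 1):
            if with_replacement:                     # binomial approximation
                p_miss = binom.pmf(0, n, D / N)
            else:                                    # exact hypergeometric
                p_miss = hypergeom.pmf(0, N, D, n)
            if 1 - p_miss >= p_detect:
                return n
        return N

    print(min_sample_size(200, 10))                         # without replacement
    print(min_sample_size(200, 10, with_replacement=True))  # overestimates n
    ```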

  20. 3D finite element modeling of epiretinal stimulation: Impact of prosthetic electrode size and distance from the retina.

    Science.gov (United States)

    Sui, Xiaohong; Huang, Yu; Feng, Fuchen; Huang, Chenhui; Chan, Leanne Lai Hang; Wang, Guoxing

    2015-05-01

    A novel 3-dimensional (3D) finite element model was established to systematically investigate the impact of the diameter (Φ) of disc electrodes and of the electrode-to-retina distance on the effectiveness of stimulation. The 3D finite element model was based on a disc platinum stimulating electrode and a 6-layered retinal structure. The ground electrode was placed in the extraocular space in direct attachment with the sclera and treated as a distant return electrode. An established electric-field strength criterion of 1000 V m-1 was adopted as the activation threshold for RGCs. The threshold current (TC) increased linearly with increasing Φ and electrode-to-retina distance and remained almost unchanged with further increases in diameter. However, the threshold charge density (TCD) increased dramatically with decreasing electrode diameter. TCD exceeded the electrode safety limit for an electrode diameter of 50 μm at electrode-to-retina distances of 50 to 200 μm. The electric field distributions illustrated that smaller electrode diameters and shorter electrode-to-retina distances are preferred due to the more localized excitation of the RGC area under stimulation at the respective threshold currents for varied electrode sizes and electrode-to-retina distances. Under stimulation with a current of the same amplitude, a large electrode exhibited improved spatial selectivity of the potential at large electrode-to-retina distances. Modeling results were consistent with those reported in animal electrophysiological experiments and clinical trials, validating the 3D finite element model of epiretinal stimulation. The computational model proved useful in optimizing the design of an epiretinal stimulating electrode for prosthesis.

  1. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject the null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES^, ES = ES^L, and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study ES^ estimates. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
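
    The post hoc sample sizes above come from inverting a one-sample t-test power calculation at the estimated effect size and at its CI limits; a sketch with hypothetical effect sizes:

    ```python
    import math
    from statsmodels.stats.power import TTestPower

    def n_for_effect(es, alpha=0.05, power=0.8):
        """Patients needed for a one-sample t-test of H0: ES = 0."""
        return math.ceil(TTestPower().solve_power(effect_size=es,
                                                  alpha=alpha, power=power))

    # Hypothetical ES^ with 95% CI (ES^L, ES^U); larger effects need fewer patients.
    es_hat, es_lo, es_hi = 0.62, 0.18, 1.05
    print(n_for_effect(es_hat))  # point estimate of the sample size
    print(n_for_effect(es_hi))   # lower CI bound on the sample size
    print(n_for_effect(es_lo))   # upper CI bound on the sample size
    ```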

  2. Relativistic finite-temperature Thomas-Fermi model

    Science.gov (United States)

    Faussurier, Gérald

    2017-11-01

    We investigate the relativistic finite-temperature Thomas-Fermi model, which has been proposed recently in an astrophysical context. Assuming a constant distribution of protons inside the nucleus of finite size avoids severe divergence of the electron density with respect to a point-like nucleus. A formula for the nuclear radius is chosen to treat any element. The relativistic finite-temperature Thomas-Fermi model matches the two asymptotic regimes, i.e., the non-relativistic and the ultra-relativistic finite-temperature Thomas-Fermi models. The equation of state is considered in detail. For each version of the finite-temperature Thomas-Fermi model, the pressure, the kinetic energy, and the entropy are calculated. The internal energy and free energy are also considered. The thermodynamic consistency of the three models is considered by working from the free energy. The virial question is also studied in the three cases as well as the relationship with the density functional theory. The relativistic finite-temperature Thomas-Fermi model is far more involved than the non-relativistic and ultra-relativistic finite-temperature Thomas-Fermi models that are very close to each other from a mathematical point of view.

  3. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced on the same microarray platform and, using this data set, the effects of varying sample size on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that sub-type-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample size generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  4. Pore-scale simulations of drainage in granular materials: Finite size effects and the representative elementary volume

    Science.gov (United States)

    Yuan, Chao; Chareyre, Bruno; Darve, Félix

    2016-09-01

    A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes yield evolutions of water content that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the

  5. Taking account of sample finite dimensions in processing measurements of double differential cross sections of slow neutron scattering

    International Nuclear Information System (INIS)

    Lisichkin, Yu.V.; Dovbenko, A.G.; Efimenko, B.A.; Novikov, A.G.; Smirenkina, L.D.; Tikhonova, S.I.

    1979-01-01

    Described is a method of taking account of finite sample dimensions in processing measurement results for double differential cross sections (DDCS) of slow neutron scattering. The necessity of a corrective approach to accounting for the effect of finite sample dimensions is shown, and, in particular, the necessity of preliminary processing of DDCS taking into account the attenuation coefficients of singly scattered neutrons (SSN) for measurements on the sample with a container, and on the container alone. The correction for multiple scattering (MS), calculated on the basis of the dynamic model, should be obtained with resolution effects taken into account. To minimize the effect of the dynamic model used in the calculations, it is preferable to make absolute measurements of DDCS and to use the subtraction method. The above method was realized in a set of programs for the BESM-5 computer. The FISC program computes the coefficients of SSN attenuation and the correction for MS. The DDS program computes a model DDCS averaged over the resolution function of the instrument. The SCATL program prepares the initial information necessary for the FISC program, and permits computation of the scattering law for all materials. Presented are the results of using the above method in processing experimental data on DDCS of water measured with the DIN-1M spectrometer

  6. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentrations of the main elements in sample 75081 do not change with grain-size, with the exceptions of Fe and Ti, which decrease slightly, and Al, which increases slightly, with decreasing grain-size. These changes in main-element composition suggest a decrease in ilmenite and an increase in anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  7. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
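
    For orientation, one widely quoted normal-approximation form of the Hsieh-style formula for a single standard-normal covariate is sketched below; the formula choice is an assumption on my part, and the proposed Schouten-based modification is not reproduced here:

    ```python
    import math
    from scipy.stats import norm

    def simple_logistic_n(p1, odds_ratio, alpha=0.05, power=0.8):
        """Approximate n for simple logistic regression with one standard
        normal covariate; p1 = event prevalence at the covariate mean,
        odds_ratio = effect of a one-SD increase (Hsieh-style formula)."""
        beta = math.log(odds_ratio)
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return (z_a + z_b) ** 2 / (p1 * (1 - p1) * beta ** 2)

    print(round(simple_logistic_n(p1=0.2, odds_ratio=1.5)))  # hypothetical inputs
    ```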

  8. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. On the role of the grain size in the magnetic behavior of sintered permanent magnets

    Science.gov (United States)

    Efthimiadis, K. G.; Ntallis, N.

    2018-02-01

    In this work the finite element method is used to simulate, by micromagnetic modeling, the magnetic behavior of sintered anisotropic magnets. Hysteresis loops were simulated for different grain sizes in an oriented multigrain sample. Excluding other parameters that contribute to the magnetic microstructure, such as the sample size, the grain morphology and the grain-boundary mismatch, it has been found that the grain size affects the magnetic properties only if the grains are exchange-decoupled. In this case, as the grain size decreases, a decrease in the nucleation field of a reverse magnetic domain is observed, together with an increase in the coercive field due to the pinning of the magnetic domain walls at the grain boundaries.

  10. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
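
    The incidence approximation that the sample size work builds on — the early-stage count over the uninfected count, scaled by the mean duration of the early stage — can be sketched as follows, with all counts hypothetical and the duration uncertainty (which the authors stress matters) ignored for brevity:

    ```python
    import math

    n_early = 60          # respondents in the biomarker-defined early stage
    n_uninfected = 8000   # uninfected respondents
    mu_years = 0.5        # assumed mean duration of the early stage (years)

    incidence = (n_early / n_uninfected) / mu_years  # infections per person-year

    # Rough 95% CI via the delta method on log incidence (ignores the
    # uncertainty in mu_years, so it understates the true interval).
    se_log = math.sqrt(1 / n_early + 1 / n_uninfected)
    lo = incidence * math.exp(-1.96 * se_log)
    hi = incidence * math.exp(+1.96 * se_log)
    print(f"incidence = {incidence:.4f}/person-year, 95% CI ({lo:.4f}, {hi:.4f})")
    ```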

  11. Alternating-time temporal logic with finite-memory strategies

    DEFF Research Database (Denmark)

    Vester, Steen

    2013-01-01

    on finite-memory strategies. One where the memory size allowed is bounded and one where the memory size is unbounded (but must be finite). This is motivated by the high complexity of model-checking with perfect recall semantics and the severe limitations of memoryless strategies. We show that both types...... of semantics introduced are different from perfect recall and memoryless semantics and next focus on the decidability and complexity of model-checking in both complete and incomplete information games for ATL/ATL*. In particular, we show that the complexity of model-checking with bounded-memory semantics...... is Delta_2p-complete for ATL and PSPACE-complete for ATL* in incomplete information games just as in the memoryless case. We also present a proof that ATL and ATL* model-checking is undecidable for n >= 3 players with finite-memory semantics in incomplete information games....

  12. φφ Back-to-Back Correlations in Finite Expanding Systems

    International Nuclear Information System (INIS)

    Padula, S. S.; Krein, G.; Hama, Y.; Panda, P. K.; Csoergo, T.

    2006-01-01

    Back-to-Back Correlations (BBC) of particle-antiparticle pairs are predicted to appear if hot and dense hadronic matter is formed in high energy nucleus-nucleus collisions. The BBC are related to in-medium mass-modification and squeezing of the quanta involved. Although the suppression due to finite emission times was already known, the effects of finite system sizes and of collective phenomena had not been studied yet. Thus, to test the survival and magnitude of the effect in more realistic situations, we study the BBC when mass-modification occurs in a finite-sized, thermalized medium, considering a non-relativistically expanding fireball with finite emission time, and evaluating the width of the back-to-back correlation function. We show that the BBC signal indeed survives the expansion and flow effects, with sufficient magnitude to be observed at RHIC.

  13. Radiative nonrecoil nuclear finite size corrections of order α(Zα)5 to the Lamb shift in light muonic atoms

    Science.gov (United States)

    Faustov, R. N.; Martynenko, A. P.; Martynenko, F. A.; Sorokin, V. V.

    2017-12-01

    On the basis of quasipotential method in quantum electrodynamics we calculate nuclear finite size radiative corrections of order α(Zα)5 to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of particles, which gives the necessary contributions to the energy spectrum, we use the method of projection operators to states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator and the amplitude...

  14. Finite element analysis of the three different posterior malleolus fixation strategies in relation to different fracture sizes.

    Science.gov (United States)

    Anwar, Adeel; Lv, Decheng; Zhao, Zhi; Zhang, Zhen; Lu, Ming; Nazir, Muhammad Umar; Qasim, Wasim

    2017-04-01

    The appropriate fixation method for posterior malleolar fractures (PMF) of different sizes is still unclear. The aim of this study was to evaluate the outcomes of the different fixation methods used for PMF by finite element analysis (FEA) and to compare the effect of the fixation constructs across fracture sizes computationally. A three-dimensional model of the tibia was reconstructed from computed tomography (CT) images. PMF of 30%, 40% and 50% fragment sizes were simulated through computational processing. Two antero-posterior (AP) lag screws, two postero-anterior (PA) lag screws and a posterior buttress plate were analysed for the three fracture volumes. Simulated loads of 350 N and 700 N were applied to the proximal tibial end. Models were fixed distally in all degrees of freedom. In the single-limb standing condition, the posterior plate group produced the lowest relative displacement (RD) among all the groups (0.01, 0.03 and 0.06 mm). Further nodal analysis of the fracture group with the highest RD showed higher mean displacements of 4.77 mm and 4.23 mm in the AP and PA lag screw models (p=0.000). The stresses on these implants, 134.36 MPa and 140.75 MPa, were also significantly lower (p=0.000). There was a negative correlation (p=0.021) between implant stress and displacement, which signifies a less stable fixation using AP and PA lag screws. A progressively increasing fracture size demands a more stable fixation construct because RD increases significantly. The posterior buttress plate produces superior stability and the lowest RD in PMF models irrespective of the fragment size. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
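
    The reduction relative to the parallel-group cluster design can be illustrated with design effects; the CRXO form used below, 1 + (m - 1)*rho - m*rho*eta with eta the cluster autocorrelation across periods, is one published approximation and should be treated as an assumption rather than the authors' exact formula:

    ```python
    def design_effect_parallel_crt(m, rho):
        """Classic parallel-group cluster-randomised design effect."""
        return 1 + (m - 1) * rho

    def design_effect_crxo(m, rho, eta):
        """Assumed two-period cross-sectional CRXO design effect; eta = 1
        recovers 1 - rho (cluster effects cancel), eta = 0 the parallel CRT."""
        return 1 + (m - 1) * rho - m * rho * eta

    m, rho, eta = 100, 0.03, 0.8      # hypothetical ICU cluster size and ICCs
    n_ind = 2500                      # n per arm, individually randomised design
    for label, de in [("parallel CRT", design_effect_parallel_crt(m, rho)),
                      ("CRXO", design_effect_crxo(m, rho, eta))]:
        print(f"{label}: design effect {de:.2f} -> n per arm {de * n_ind:.0f}")
    ```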

  16. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the

  17. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  18. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  19. Radiative nonrecoil nuclear finite size corrections of order α(Zα)5 to the hyperfine splitting of S-states in muonic hydrogen

    International Nuclear Information System (INIS)

    Faustov, R.N.; Martynenko, A.P.; Martynenko, G.A.; Sorokin, V.V.

    2014-01-01

    On the basis of quasipotential method in quantum electrodynamics we calculate nuclear finite size radiative corrections of order α(Zα) 5 to the hyperfine structure of S-wave energy levels in muonic hydrogen and muonic deuterium. For the construction of the particle interaction operator we employ the projection operators on the particle bound states with definite spins. The calculation is performed in the infrared safe Fried–Yennie gauge. Modern experimental data on the electromagnetic form factors of the proton and deuteron are used.

  20. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  1. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to study the comparative sensitivity of the Mantel Haenszel and Rasch Model methods for detecting differential item functioning (DIF), viewed from the sample size. These two DIF methods were compared using simulated binary item response data sets of varying sample sizes: 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  2. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  3. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  4. Extreme value statistics and finite-size scaling at the ecological extinction/laminar-turbulence transition

    Science.gov (United States)

    Shih, Hong-Yan; Goldenfeld, Nigel

    Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.

  5. Evaluation of cavity size, kind, and filling technique of composite shrinkage by finite element.

    Science.gov (United States)

    Jafari, Toloo; Alaghehmad, Homayoon; Moodi, Ehsan

    2018-01-01

    Cavity preparation reduces the rigidity of a tooth and its resistance to deformation. The purpose of this study was to evaluate the dimensional changes of repaired teeth using two types of light-cure composite and two filling methods, incremental and bulk, by means of the finite element method. In this computerized in vitro experimental study, an intact maxillary premolar was scanned using a cone beam computed tomography instrument (SCANORA, Switzerland); each image section of the tooth was then transferred via AUTOCAD to Ansys software. Eight cavity preparation sizes and two restoration methods (bulk and incremental) using two different composite resin materials (Heliomolar, Brilliant) were modeled in the software, and the analysis was completed with Ansys. Dimensional change increased with the widening and deepening of the cavities. It also increased when using the Brilliant composite resin and the incremental filling technique. Cavity depth and filling technique play the greatest role in dimensional change after curing, whereas the type of composite resin does not have a significant role.

  6. Transient queue-size distribution in a finite-capacity queueing system with server breakdowns and Bernoulli feedback

    Science.gov (United States)

    Kempa, Wojciech M.

    2017-12-01

    A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer may either rejoin the queue (feedback) with probability q, or definitely leave the system with probability 1 - q. A system of integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system written for Laplace transforms is found using a linear algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be interpreted as the typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
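
    The model's behaviour can be cross-checked by simulation; the sketch below estimates the long-run (stationary, not transient) queue-size distribution for a finite-capacity queue with exponential breakdowns, repairs, and Bernoulli feedback, with all rates hypothetical:

    ```python
    import random

    def queue_size_distribution(lam=1.0, mu=1.5, xi=0.1, theta=1.0, q=0.2,
                                capacity=10, t_end=2e5, seed=7):
        """Time-average queue-size distribution: Poisson arrivals (lam),
        exponential service (mu), failures (xi), repairs (theta), and
        Bernoulli feedback with probability q; arrivals finding a full
        buffer are lost."""
        rng = random.Random(seed)
        n, up, t = 0, True, 0.0
        time_in_state = [0.0] * (capacity + 1)
        while t < t_end:
            rates = {"arrival": lam if n < capacity else 0.0,
                     "service": mu if up and n > 0 else 0.0,
                     "failure": xi if up else 0.0,
                     "repair": theta if not up else 0.0}
            dt = rng.expovariate(sum(rates.values()))
            time_in_state[n] += dt
            t += dt
            event = rng.choices(list(rates), weights=list(rates.values()))[0]
            if event == "arrival":
                n += 1
            elif event == "service":
                if rng.random() >= q:   # leaves; with probability q it rejoins
                    n -= 1
            else:                       # failure or repair toggles the server
                up = (event == "repair")
        return [x / t for x in time_in_state]

    print([round(p, 3) for p in queue_size_distribution()])
    ```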

  7. Finite-size effects on the vortex-glass transition in thin YBa2Cu3O7-δ films

    International Nuclear Information System (INIS)

    Woeltgens, P.J.M.; Dekker, C.; Koch, R.H.; Hussey, B.W.; Gupta, A.

    1995-01-01

    Nonlinear current-voltage characteristics have been measured at high magnetic fields in YBa2Cu3O7-δ films of a thickness t ranging from 3000 down to 16 Å. Critical-scaling analyses of the data for the thinner films (t ≤ 400 Å) reveal deviations from the vortex-glass critical scaling appropriate for three-dimensional (3D) systems. This is argued to be a finite-size effect. At large current densities J, the vortices are probed at length scales smaller than the film thickness, i.e., 3D vortex-glass behavior is observed. At low J, by contrast, the vortex excitations involve typical length scales exceeding the film thickness, resulting in 2D behavior. Further evidence for this picture is found directly from the 3D vortex-glass correlation length, which, upon approach of the glass transition temperature, appears to level off at the film thickness. The results indicate that a vortex-glass phase transition does occur at finite temperature in 3D systems, but not in 2D systems. In the latter, an onset of 2D correlations occurs towards zero temperature. This is demonstrated in our thinnest film (16 Å), which, in a magnetic field, displays a 2D vortex-glass correlation length that critically diverges at zero temperature.

  8. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  9. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution can be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under this 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the proton particles travel in the same direction before entering the water medium, and that the various scattering prior to entrance into the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite-sized beamlet. Since a finite-sized beamlet is the superposition of infinitesimal pencil beams, the result for the maximum acceptable grid size obtained with infinitesimal pencil beams also applies to finite-sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth-dependent Gaussian distribution. The model included the spreads of the Bragg peak and of the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
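    The core of the argument is standard signal processing: the depth-dose curve is nearly band-limited, so by Shannon-Nyquist it can be reconstructed from samples taken at grid size 1/(2 f_max), where f_max is the highest frequency with appreciable spectral content. A rough numerical sketch of that reasoning, with a crude Gaussian-smoothed stand-in for the Bortfeld Bragg curve and a tolerance loosely tied to the 2% criterion (both are assumptions, not the paper's rigorous bound):

    ```python
    import numpy as np

    dz = 0.01                                    # fine reference grid [mm]
    z = np.arange(0.0, 200.0, dz)                # depth [mm]
    # Crude Bragg-like depth dose: plateau plus a straggling-smoothed peak.
    dose = 0.3 * (z < 150.0) + np.exp(-0.5 * ((z - 150.0) / 3.0) ** 2)

    spectrum = np.abs(np.fft.rfft(dose))
    freqs = np.fft.rfftfreq(dose.size, d=dz)     # cycles per mm

    # Highest frequency whose amplitude is still above the tolerance.
    f_max = freqs[spectrum > 0.02 * spectrum.max()].max()
    print(f"maximum acceptable grid size ~ {1.0 / (2.0 * f_max):.2f} mm")
    ```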

  10. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
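    The mechanism behind the inflation is easy to reproduce even for a single treatment-control comparison: if the final analysis naively pools both stages, enlarging the second stage after an unpromising interim down-weights the unpromising first-stage data, and the naive z-test rejects too often. A minimal simulation sketch (an illustrative adaptation rule, not the paper's worst-case search over all rules):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n1, alpha, reps = 50, 0.025, 200_000
    crit = stats.norm.ppf(1 - alpha)

    rejections = 0
    for _ in range(reps):
        x1 = rng.standard_normal(n1)            # stage-1 data under H0
        z1 = x1.mean() * np.sqrt(n1)
        # Enlarge stage 2 when the interim z-value is unpromising; for the
        # naive pooled test this choice increases the conditional error.
        n2 = 150 if z1 < 1.0 else 50
        x2 = rng.standard_normal(n2)
        # Naive final test: pooled z as if n1 + n2 had been fixed in advance.
        z = (x1.sum() + x2.sum()) / np.sqrt(n1 + n2)
        rejections += z > crit
    print(f"empirical type 1 error: {rejections / reps:.4f} (nominal {alpha})")
    ```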

  11. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for an anticipated sensitivity or specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, with the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing designs and applies only when both the diagnostic test and the gold standard results are dichotomous.
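    The formula referred to is presumably the standard normal-approximation sample size for estimating a proportion, applied within the diseased (sensitivity) or disease-free (specificity) subgroup. A small sketch under that assumption, with arbitrary example numbers:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_for_sensitivity(sens, prevalence, precision, confidence=0.95):
        """Subjects needed so the estimated sensitivity lies within
        +/- precision of the anticipated value; for specificity, pass the
        anticipated specificity and use (1 - prevalence) instead."""
        z = norm.ppf(1 - (1 - confidence) / 2)
        return ceil(z ** 2 * sens * (1 - sens) / (precision ** 2 * prevalence))

    print(n_for_sensitivity(0.90, 0.20, 0.05))   # e.g. 692 subjects
    # The 0.70 and 1.75 multipliers quoted above are just (z90 / z95)^2
    # and (z99 / z95)^2, since n scales with z^2:
    print((norm.ppf(0.95) / norm.ppf(0.975)) ** 2,
          (norm.ppf(0.995) / norm.ppf(0.975)) ** 2)
    ```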

  12. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  13. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  14. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread over about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and found it to be perfectly adequate.

  15. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactors (Neutron Activation Analysis (NAA) Section), size-fractionated particulate matter collection for NAA was started in 2002 at two sites in Japan. The two monitoring sites, "Tokyo" and "Sakata", were classified as "urban" and "rural", respectively. At each site, two size fractions, namely PM2-10 and PM2 particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m³ in Tokyo and 0.022 mg/m³ in Sakata. (author)

  16. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for such a study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed; 95% CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but with smaller gains for each additional GP. Likewise, the analyses showed how the number of participants required decreases if more measurements per participant are taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique thus depends on the design and aim of the study; in this paper, we showed how the precision of the estimates depends on both the number of participants and the measurement frequency.
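    The key point, that the CI must carry both between-GP and within-GP (measurement) fluctuation, can be written down directly with a two-level variance model. A sketch under assumed variance components (the values below are invented, not those of the study):

    ```python
    import numpy as np
    from scipy.stats import norm

    def ci_halfwidth(n_gps, m_per_gp, sd_between=8.0, sd_within=15.0, conf=0.95):
        """Half-width of the CI for mean weekly working hours when both the
        sample of GPs (between-person variance) and the finite number of
        time-sampled measurements per GP (within-person variance) count."""
        var = sd_between ** 2 / n_gps + sd_within ** 2 / (n_gps * m_per_gp)
        return norm.ppf(1 - (1 - conf) / 2) * np.sqrt(var)

    for n_gps in (50, 100, 300):
        for m in (56, 168):   # one SMS per 3-h slot vs. one per hour, for a week
            print(n_gps, m, round(ci_halfwidth(n_gps, m), 2))
    ```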

  17. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, adjusting the sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (mean = 224) for radiotracking data and 16-130 km² (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise for these bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.

  19. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump, which generates negative pressure and creates a steady flow by drawing air from the receiving conical flask (i.e. acting as a vacuum pump), transferring plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small and the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with a syringe pump and Field of View (FOV) flowcells which can image all particles passing through the flow field, these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis with conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM imaging system after ground truthing.

  20. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)]

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges on [0,1] has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
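    The interval arithmetic referred to is compact enough to state in a few lines. For events with probabilities known only as intervals, the Fréchet inequalities give best-possible bounds on conjunctions and disjunctions without any dependence assumption; a sketch:

    ```python
    def and_(p, q):
        """Fréchet bounds for P(E and F); p and q are (lo, hi) intervals."""
        (pl, ph), (ql, qh) = p, q
        return (max(0.0, pl + ql - 1.0), min(ph, qh))

    def or_(p, q):
        """Fréchet bounds for P(E or F)."""
        (pl, ph), (ql, qh) = p, q
        return (max(pl, ql), min(1.0, ph + qh))

    def not_(p):
        pl, ph = p
        return (1.0 - ph, 1.0 - pl)

    # Two subevents whose probabilities are known only imprecisely:
    F, G = (0.2, 0.4), (0.5, 0.7)
    print(and_(F, G))          # (0.0, 0.4)
    print(or_(F, G))           # (0.5, 1.0)
    print(not_(and_(F, G)))    # (0.6, 1.0)
    ```

    The confidence-structure refinement described above then repeats this level-wise, once per level of the nested stack of confidence intervals.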

  1. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).

  2. Chiral crossover transition in a finite volume

    Science.gov (United States)

    Shi, Chao; Jia, Wenbao; Sun, An; Zhang, Liping; Zong, Hongshi

    2018-02-01

    Finite volume effects on the chiral crossover transition of strong interactions at finite temperature are studied by solving the quark gap equation within a cubic volume of finite size L. With the anti-periodic boundary condition, our calculation shows that the chiral quark condensate, which characterizes the strength of dynamical chiral symmetry breaking, decreases as L decreases below 2.5 fm. We further study the finite volume effects on the pseudo-transition temperature T_c of the crossover, showing a significant decrease in T_c as L decreases below 3 fm. Supported by National Natural Science Foundation of China (11475085, 11535005, 11690030, 51405027), the Fundamental Research Funds for the Central Universities (020414380074), China Postdoctoral Science Foundation (2016M591808) and Open Research Foundation of State Key Lab. of Digital Manufacturing Equipment & Technology in Huazhong University of Science & Technology (DMETKF2015015)

  3. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure of cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT: The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in their global topology.

  4. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    The growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique in an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size comparisons are made between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector sampling method after a dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements. TEM image analysis of the two sampling methods showed good agreement with the SMPS measurements. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism which reduces the effect of coagulation. However, if the flame temperature is not high enough to sinter the aggregated particles, then coagulation is the dominant particle growth mechanism. Under certain flame conditions a secondary particle formation is observed, which results in a bimodal particle size distribution.

  5. Finite volume spectrum of 2D field theories from Hirota dynamics

    International Nuclear Information System (INIS)

    Gromov, Nikolay; Kazakov, Vladimir; Vieira, Pedro; Univ. do Porto

    2008-12-01

    We propose, using the example of the O(4) sigma model, a general method for solving integrable two dimensional relativistic sigma models in a finite size periodic box. Our starting point is the so-called Y-system, which is equivalent to the thermodynamic Bethe ansatz equations of Yang and Yang. It is derived from the Zamolodchikov scattering theory in the cross channel, for virtual particles along the non-compact direction of the space-time cylinder. The method is based on the integrable Hirota dynamics that follows from the Y-system. The outcome is a nonlinear integral equation for a single complex function, valid for an arbitrary quantum state and accompanied by the finite size analogue of Bethe equations. It is close in spirit to the Destri-deVega (DdV) equation. We present the numerical data for the energy of various states as a function of the size, and derive the general Luescher-type formulas for the finite size corrections. We also re-derive by our method the DdV equation for the SU(2) chiral Gross-Neveu model. (orig.)

  6. The Influence of Sample Size on the Accuracy of Remote Sensing Image Classification

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. In this study, the size of the reference sample was defined by approximation with a binomial function, without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Sizing based on a pilot sample (the theoretically correct procedure) is justified when no estimate of accuracy is available for the study area, given the utility of the remote sensing product.

  7. A Class of Estimators for Finite Population Mean in Double Sampling under Nonresponse Using Fractional Raw Moments

    Directory of Open Access Journals (Sweden)

    Manzoor Khan

    2014-01-01

    This paper presents new classes of estimators for the finite population mean under double sampling in the presence of nonresponse, using information on fractional raw moments. The expressions for the mean square error of the proposed classes of estimators are derived up to the first degree of approximation. It is shown that the proposed class of estimators performs better than the usual mean estimator, ratio-type estimators, and the Singh and Kumar (2009) estimator. An empirical study is carried out to demonstrate the performance of the proposed class of estimators.

  8. Olbers' Paradox Revisited in a Static and Finite Universe

    Science.gov (United States)

    Couture, Gilles

    2012-01-01

    Building a Universe populated by stars identical to our Sun and taking into consideration the wave-particle duality of light, the biological limits of the human eye, the finite size of stars and the finiteness of our Universe, we conclude that the sky could very well be dark at night. Besides the human eye, the dominant parameter is the finite…

  9. Minimizing cell size dependence in micromagnetics simulations with thermal noise

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, E. [Departamento de Ingeniería Electromecánica, Universidad de Burgos, Plaza Misael Bañuelos, s/n, E-09001, Burgos (Spain)]; López-Díaz, L. [Departamento de Física Aplicada, Universidad de Salamanca, Plaza de la Merced s/n, Salamanca E-37008 (Spain)]; Torres, L. [Departamento de Física Aplicada, Universidad de Salamanca, Plaza de la Merced s/n, Salamanca E-37008 (Spain)]; García-Cervera, C.J. [Department of Mathematics, University of California, Santa Barbara, CA 93106 (United States)]

    2007-02-21

    Langevin dynamics treats finite-temperature effects in a micromagnetics framework by adding a thermal fluctuation field to the effective field. Several works have addressed the dependence of numerical results on the cell size used to discretize ferromagnetic samples in the nanoscale regime. In this paper, some earlier problems concerning the dependence on the spatial discretization at finite temperature are revisited. We focus our attention on the stability of the numerical schemes used to integrate the Langevin equation. In particular, a detailed analysis of the results was carried out as a function of the time step. It is confirmed that the mentioned dependence can be minimized if an unconditionally stable integration method is used to numerically solve the Langevin equation.

  10. Minimizing cell size dependence in micromagnetics simulations with thermal noise

    International Nuclear Information System (INIS)

    Martínez, E.; López-Díaz, L.; Torres, L.; García-Cervera, C.J.

    2007-01-01

    Langevin dynamics treats finite-temperature effects in a micromagnetics framework by adding a thermal fluctuation field to the effective field. Several works have addressed the dependence of numerical results on the cell size used to discretize ferromagnetic samples in the nanoscale regime. In this paper, some earlier problems concerning the dependence on the spatial discretization at finite temperature are revisited. We focus our attention on the stability of the numerical schemes used to integrate the Langevin equation. In particular, a detailed analysis of the results was carried out as a function of the time step. It is confirmed that the mentioned dependence can be minimized if an unconditionally stable integration method is used to numerically solve the Langevin equation.

  11. Effects of heater location and heater size on the natural convection heat transfer in a square cavity using finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Ich Long; Byon, Chan [Yeungnam University, Gyeongsan (Korea, Republic of)

    2015-07-15

    The finite element method was used to investigate the effects of heater location and heater size on natural convection heat transfer in a 2D square cavity heated partially or fully from below and cooled from above. Rayleigh numbers (5 × 10² ≤ Ra ≤ 5 × 10⁵), heater sizes (0.1 ≤ D/L ≤ 1.0), and heater locations (0.1 ≤ x_h/L ≤ 0.5) were considered. Numerical results indicated that the average Nusselt number (Nu_m) increases as the heater size decreases. In addition, when x_h/L is less than 0.4, Nu_m increases as x_h/L increases, and Nu_m decreases again for larger values of x_h/L. However, this trend changes when Ra is less than 10⁴, suggesting that Nu_m attains its maximum value in the region close to the bottom surface center. This study aims to gain insight into the behavior of natural convection in order to potentially improve internal natural convection heat transfer.

  12. Electronic states in crystals of finite size: quantum confinement of Bloch waves

    CERN Document Server

    Ren, Shang Yuan

    2017-01-01

    This book presents an analytical theory of the electronic states in ideal low dimensional systems and finite crystals based on a differential equation theory approach. It provides precise and fundamental understandings on the electronic states in ideal low-dimensional systems and finite crystals, and offers new insights into some of the basic problems in low-dimensional systems, such as the surface states and quantum confinement effects, etc., some of which are quite different from what is traditionally believed in the solid state physics community. Many previous predictions have been confirmed in subsequent investigations by other authors on various relevant problems. In this new edition, the theory is further extended to one-dimensional photonic crystals and phononic crystals, and a general theoretical formalism for investigating the existence and properties of surface states/modes in semi-infinite one-dimensional crystals is developed. In addition, there are various revisions and improvements, including us...

  13. Radiative nonrecoil nuclear finite size corrections of order α(Zα)⁵ to the hyperfine splitting of S-states in muonic hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Faustov, R.N. [Dorodnicyn Computing Centre, Russian Academy of Science, Vavilov Str. 40, 119991 Moscow (Russian Federation); Martynenko, A.P. [Samara State University, Pavlov Str. 1, 443011 Samara (Russian Federation); Samara State Aerospace University named after S.P. Korolyov, Moskovskoye Shosse 34, 443086 Samara (Russian Federation); Martynenko, G.A.; Sorokin, V.V. [Samara State University, Pavlov Str. 1, 443011 Samara (Russian Federation)

    2014-06-02

    On the basis of the quasipotential method in quantum electrodynamics we calculate nuclear finite-size radiative corrections of order α(Zα)⁵ to the hyperfine structure of S-wave energy levels in muonic hydrogen and muonic deuterium. For the construction of the particle interaction operator we employ the projection operators onto the particle bound states with definite spins. The calculation is performed in the infrared-safe Fried–Yennie gauge. Modern experimental data on the electromagnetic form factors of the proton and deuteron are used.

  14. An exact solution to the extended Hubbard model in 2D for finite size system

    Science.gov (United States)

    Harir, S.; Bennai, M.; Boughaleb, Y.

    2008-08-01

    An exact analytical diagonalization is used to solve the two-dimensional extended Hubbard model (EHM) for a system of finite size. We consider an EHM including on-site and off-site interactions with interaction energies U and V, respectively, for a square lattice containing 4×4 sites at one-eighth filling with periodic boundary conditions, recently treated by Kovacs and Gulacsi (2006 Phil. Mag. 86 2073). Taking into account the symmetry properties of this square lattice and using a translation operator, we construct an r-space basis with only 85 state-vectors which describe all possible distributions of four electrons on the 4×4 square lattice. The diagonalization of the 85×85 energy matrix allows us to study the local properties of the above system as a function of the on-site and off-site interaction energies, and we show that the off-site interaction favours double occupancies in the first excited state and induces a supplementary conductivity of the system.

  15. Finite-size behaviour of generalized susceptibilities in the whole phase plane of the Potts model

    Science.gov (United States)

    Pan, Xue; Zhang, Yanhua; Chen, Lizhu; Xu, Mingmei; Wu, Yuanfang

    2018-01-01

    We study the sign distribution of generalized magnetic susceptibilities in the temperature-external magnetic field plane using the three-dimensional three-state Potts model. We find that the sign of odd-order susceptibility is opposite in the symmetric (disorder) and broken (order) phases, but that of the even-order one remains positive when it is far away from the phase boundary. When the critical point is approached from the crossover side, negative fourth-order magnetic susceptibility is observable. It is also demonstrated that non-monotonic behavior occurs in the temperature dependence of the generalized susceptibilities of the energy. The finite-size scaling behavior of the specific heat in this model is mainly controlled by the critical exponent of the magnetic susceptibility in the three-dimensional Ising universality class. Supported by Fund Project of National Natural Science Foundation of China (11647093, 11405088, 11521064), Fund Project of Sichuan Provincial Department of Education (16ZB0339), Fund Project of Chengdu Technological University (2016RC004) and the Major State Basic Research Development Program of China (2014CB845402)

  16. Impact of high-frequency pumping on anomalous finite-size effects in three-dimensional topological insulators

    Science.gov (United States)

    Pervishko, Anastasiia A.; Yudin, Dmitry; Shelykh, Ivan A.

    2018-02-01

    Lowering of the thickness of a thin-film three-dimensional topological insulator down to a few nanometers results in the gap opening in the spectrum of topologically protected two-dimensional surface states. This phenomenon, which is referred to as the anomalous finite-size effect, originates from hybridization between the states propagating along the opposite boundaries. In this work, we consider a bismuth-based topological insulator and show how the coupling to an intense high-frequency linearly polarized pumping can further be used to manipulate the value of a gap. We address this effect within recently proposed Brillouin-Wigner perturbation theory that allows us to map a time-dependent problem into a stationary one. Our analysis reveals that both the gap and the components of the group velocity of the surface states can be tuned in a controllable fashion by adjusting the intensity of the driving field within an experimentally accessible range and demonstrate the effect of light-induced band inversion in the spectrum of the surface states for high enough values of the pump.

  17. Elastodynamic models for extending GTD to penumbra and finite size flaws

    International Nuclear Information System (INIS)

    Djakou, A Kamta; Darmon, M; Potel, C

    2016-01-01

    The scattering of elastic waves from an obstacle is of great interest in ultrasonic Non Destructive Evaluation (NDE). There exist two main scattering phenomena: specular reflection and diffraction. This paper focuses on possible improvements of the Geometrical Theory of Diffraction (GTD), a classical method used for modelling diffraction from scatterer edges. GTD notably presents two important drawbacks: it is theoretically valid for a canonical infinite edge, not for a finite one, and it presents discontinuities around the direction of specular reflection. In order to address the first drawback, a 3D hybrid method using both GTD and Huygens secondary sources has been developed to deal with finite flaws. ITD (Incremental Theory of Diffraction), a method developed in electromagnetism, has also been adapted to elastodynamics to deal with small flaws. Experimental validation of these methods has been performed. As to the second drawback, a uniform correction of GTD, the UTD (Uniform Theory of Diffraction), has been developed with the aim of designing a generic model able to correctly simulate both specular reflection and diffraction. A comparison has been made between UTD numerical results and UAT (Uniform Asymptotic Theory of Diffraction), another uniform solution of GTD. (paper)

  18. Stretching and jamming of finite automata

    NARCIS (Netherlands)

    Beijer, de N.; Kourie, D.G.; Watson, B.W.; Cleophas, L.G.W.A.; Watson, B.W.

    2004-01-01

    In this paper we present two transformations on automata, called stretching and jamming. These transformations will, under certain conditions, reduce the size of the transition table, and under other conditions reduce the string processing time. Given a finite automaton, we can stretch it by

  19. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as the example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) in an initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the underlying population with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved to be very important for the cost efficiency of the research. [Projekat Ministarstva nauke Republike Srbije, br. OI-173011, br. TR-37002 i br. III-43007]
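    The calculation behind such a result is the classic n = (t·s/E)² rule for estimating a mean to within a chosen error margin, iterated because the t quantile itself depends on n. A sketch with invented pilot values and error margin (the study itself used 19 terpene characteristics and would take the largest n over all of them):

    ```python
    from math import ceil
    import numpy as np
    from scipy.stats import t as t_dist

    def sample_size_for_mean(pilot, rel_error=0.10, conf=0.95):
        """Trees needed so the sample mean lies within rel_error of the true
        mean with the given confidence, from pilot-sample variability."""
        s, E = np.std(pilot, ddof=1), rel_error * np.mean(pilot)
        n = len(pilot)
        for _ in range(100):                     # fixed-point iteration on n
            t = t_dist.ppf(1 - (1 - conf) / 2, df=n - 1)
            n_new = max(2, ceil((t * s / E) ** 2))
            if n_new == n:
                return n
            n = n_new
        return n

    pilot = np.array([3.1, 2.4, 4.0, 2.9, 3.6, 2.2, 3.3, 4.4, 2.7, 3.0])
    print(sample_size_for_mean(pilot))   # 21 trees for these made-up data
    ```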

  20. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscopy (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form first need to be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information and save considerable time and effort in the selection of a suitable substrate for particles of different properties to be attracted to and kept on the surface without further aggregation. This simple, low-cost methodology can be generally applied to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  1. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample size (5-50), inter-eye correlation (0-0.75) and effect size (0-0.8). Simulated data were analyzed using paired t-test, two sample t-test, Wald test and score test using the generalized estimating equations (GEE) and F-test using linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated analysis approaches through analyzing two real datasets. In design 1, paired t-test and LMM perform better than GEE, with nominal type 1 error rate and higher statistical power. In design 2, no test performs uniformly well: two sample t-test (average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates type I error rate and GEE score test has lower power. When sample size is small, some commonly used statistical methods do not perform well. Paired t-test and LMM perform best when two eyes of a subject are in two different comparison groups, and t-test using the average of two eyes performs best when the two eyes are in the same comparison group. When selecting the appropriate analysis approach the study design should be considered.
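    The design-1 result is easy to check by simulation: with fellow eyes randomized to different groups, the paired t-test models the inter-eye correlation directly and holds its nominal size even for a handful of subjects. A minimal sketch (made-up correlation and sample size):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def empirical_type1(n_subjects=10, rho=0.5, reps=20_000):
        """Type I error of the paired t-test for two eyes of each subject
        assigned to different groups, under inter-eye correlation rho."""
        cov = [[1.0, rho], [rho, 1.0]]
        hits = 0
        for _ in range(reps):
            eyes = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)
            hits += stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue < 0.05
        return hits / reps

    print(empirical_type1())   # close to the nominal 0.05
    ```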

  2. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) belonging to a small data set comprising 55 samples (stations) from which water samples were collected. Because data sets in ecology and environmental sciences are invariably small, owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
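    The bootstrap scheme itself is a few lines of linear algebra: resample stations, run a correlation-matrix PCA, and watch how the spread of the leading eigenvalues shrinks with N. A sketch with a random stand-in for the 55 × 22 water-quality matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.standard_normal((55, 22))    # stand-in for the real 55 x 22 data

    def pca_eigenvalues(data):
        data = (data - data.mean(0)) / data.std(0, ddof=1)  # correlation PCA
        return np.linalg.eigvalsh(np.cov(data.T))[::-1]     # descending order

    for n in (20, 30, 40, 50):
        boots = np.array([pca_eigenvalues(X[rng.integers(0, len(X), size=n)])
                          for _ in range(100)])
        # Mean and bootstrap spread of the first three eigenvalues:
        print(n, boots[:, :3].mean(0).round(2), boots[:, :3].std(0).round(2))
    ```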

  3. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

    The problem of sampled-data-based vibration control for structural systems with a finite-time state constraint and sensor outage is investigated in this paper. The objective of the controller design is to guarantee the stability and anti-disturbance performance of the closed-loop systems when some sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes and strong winds act within a very short time, and that it is often the peak responses that damage structures, finite-time stability analysis is introduced to constrain the state responses in a given time interval, and H-infinity stability is adopted in the controller design to ensure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can easily be checked using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Finite dipolar hexagonal columns on piled layers of triangular lattice

    International Nuclear Information System (INIS)

    Matsushita, Katsuyoshi; Sugano, Ryoko; Kuroda, Akiyoshi; Tomita, Yusuke; Takayama, Hajime

    2007-01-01

    We have investigated, by Monte Carlo simulation, spin systems representing the moments of arrayed magnetic nanoparticles interacting with each other only through the dipole-dipole interaction. In the present paper we aim at understanding finite-size effects in magnetic nanoparticles arrayed in hexagonal columns cut out from the close-packing structures or from those with uniaxial compression. In columns with the genuine close-packing structures, we observe a single-vortex state which was also observed previously in finite two-dimensional systems. On the other hand, in the system with the inter-layer distance set to 1/2 of the close-packing one, we find ground states which depend on the number of layers. This dependence is induced by a finite-size effect and is related to an orientation transition in the corresponding bulk system.

  5. Diffusion to finite-size traps

    International Nuclear Information System (INIS)

    Richards, P.M.

    1986-01-01

    The survival probability of a random-walking particle is derived for hopping in a random distribution of traps of arbitrary radius and concentration. The single-center approximation is shown to be valid for times of physical interest even when the fraction of volume occupied by traps approaches unity. The theory is based on computation of the number of different potential trap regions sampled in a random walk and is confirmed by simulations on a simple-cubic lattice
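    A minimal lattice Monte Carlo of the kind mentioned makes the quantity concrete: distribute finite-radius traps at random on a simple-cubic lattice, release random walkers, and record the fraction surviving at each step. All parameters below are made up:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    L, n_traps, R, steps, walkers = 64, 260, 2, 2000, 500

    # Mark all sites within radius R of each randomly placed trap center
    # (periodic boundaries), giving a random distribution of finite-size traps.
    centers = rng.integers(0, L, size=(n_traps, 3))
    trapped = np.zeros((L, L, L), dtype=bool)
    for dx in range(-R, R + 1):
        for dy in range(-R, R + 1):
            for dz in range(-R, R + 1):
                if dx * dx + dy * dy + dz * dz <= R * R:
                    s = (centers + [dx, dy, dz]) % L
                    trapped[s[:, 0], s[:, 1], s[:, 2]] = True

    moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    pos = rng.integers(0, L, size=(walkers, 3))
    alive = ~trapped[pos[:, 0], pos[:, 1], pos[:, 2]]
    for t in range(1, steps + 1):
        pos = (pos + moves[rng.integers(0, 6, size=walkers)]) % L
        alive &= ~trapped[pos[:, 0], pos[:, 1], pos[:, 2]]
        if t % 400 == 0:
            print(t, alive.mean())    # survival probability estimate
    ```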

  6. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  7. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  8. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated with the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  9. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for the quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves from Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated with the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  10. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
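    For intuition about how FDR control enters such a calculation, a back-of-envelope normal-approximation version (in the spirit of Jung's 2005 formula, not the voom-based procedure of the paper or its ssizeRNA implementation) is sketched below; all inputs are invented:

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_per_group(m=10_000, m1=200, delta=1.0, power=0.8, fdr=0.05):
        """Sample size per group for two-group comparisons across m genes,
        m1 of them truly DE with standardized effect delta, targeting the
        given average power while controlling FDR at the stated level."""
        # Per-gene significance level alpha* that yields the target FDR when
        # roughly m1 * power true positives are expected:
        alpha_star = fdr * m1 * power / ((1 - fdr) * (m - m1))
        z_a, z_b = norm.ppf(1 - alpha_star), norm.ppf(power)
        return ceil(2 * (z_a + z_b) ** 2 / delta ** 2)

    print(n_per_group())   # ~32 per group under these assumptions
    ```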

  11. Finite-volume cumulant expansion in QCD-colorless plasma

    Energy Technology Data Exchange (ETDEWEB)

    Ladrem, M. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Physics Department, Algiers (Algeria); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ahmed, M.A.A. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Taiz University in Turba, Physics Department, Taiz (Yemen); Alfull, Z.Z. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Cherif, S. [ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ghardaia University, Sciences and Technologies Department, Ghardaia (Algeria)

    2015-09-15

    Due to finite-size effects, the localization of the phase transition in finite systems and the determination of its order become an extremely difficult task, even in the simplest known cases. In order to identify and locate the finite-volume transition point T_0(V) of the QCD deconfinement phase transition to a colorless QGP, we have developed a new approach using the finite-size cumulant expansion of the order parameter and the L_mn-method. The first six cumulants C_1,...,C_6 with the corresponding under-normalized ratios (skewness Σ, kurtosis κ, pentosis Π_±, and hexosis H_1,2,3) and three unnormalized combinations of them (O = σ²κΣ⁻¹, U = σ⁻²Σ⁻¹, N = σ²κ) are calculated and studied as functions of (T, V). A new approach, unifying in a clear and consistent way the definitions of cumulant ratios, is proposed. A numerical FSS analysis of the obtained results has allowed us to locate accurately the finite-volume transition point. The extracted transition temperature value T_0(V) agrees, to within about 2%, with the value T_0^N(V) expected from the order parameter and the thermal susceptibility χ_T(T, V) according to the standard localization procedure. In addition, a very good correlation factor is obtained, proving the validity of our cumulants method. The agreement of our results with those obtained by means of other models is remarkable. (orig.)
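
    For readers who want to reproduce such cumulant ratios from sampled data, the first six cumulants follow from the central moments through standard textbook relations; the sketch below uses only those relations (the paper's specific pentosis and hexosis normalizations are not reproduced here).

```python
import numpy as np

def cumulants_1_to_6(x):
    """First six cumulants from the central moments of a sample
    (naive large-sample estimators)."""
    x = np.asarray(x, dtype=float)
    c1 = x.mean()
    mu = [np.mean((x - c1) ** k) for k in range(7)]  # mu[0]..mu[6]
    c2 = mu[2]
    c3 = mu[3]
    c4 = mu[4] - 3 * mu[2] ** 2
    c5 = mu[5] - 10 * mu[3] * mu[2]
    c6 = mu[6] - 15 * mu[4] * mu[2] - 10 * mu[3] ** 2 + 30 * mu[2] ** 3
    return c1, c2, c3, c4, c5, c6

x = np.random.default_rng(0).normal(size=100_000)
c1, c2, c3, c4, c5, c6 = cumulants_1_to_6(x)
skewness = c3 / c2 ** 1.5   # Sigma
kurtosis = c4 / c2 ** 2     # kappa (excess)
print(skewness, kurtosis)   # both close to 0 for Gaussian data
```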

  12. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses reveal that increasing the recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
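
    The CI-overlap style of power analysis can be illustrated with a deliberately oversimplified sketch in which annual survival is observed directly as a binomial outcome with perfect detection; the Burnham joint model used in the study additionally estimates capture and recovery probabilities, so all numbers here are purely illustrative.

```python
import numpy as np

def wald_ci(p_hat, n, z=1.96):
    """Wald 95% confidence interval for a binomial proportion."""
    se = np.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

def detection_power(n_per_year, phi1, phi2, n_sim=1000, seed=0):
    """Proportion of simulations in which the 95% CIs of two annual
    survival estimates do not overlap, i.e. the decline is detected.
    Survival is observed directly (perfect detection), a gross
    simplification of a Burnham joint live/dead-encounter model."""
    rng = np.random.default_rng(seed)
    detected = 0
    for _ in range(n_sim):
        s1 = rng.binomial(n_per_year, phi1) / n_per_year
        s2 = rng.binomial(n_per_year, phi2) / n_per_year
        lo1, _ = wald_ci(s1, n_per_year)
        _, hi2 = wald_ci(s2, n_per_year)
        if hi2 < lo1:              # later-year CI falls entirely below
            detected += 1
    return detected / n_sim

# 10% decline in annual survival (0.70 -> 0.63)
for n in (1_000, 10_000, 50_000):
    print(n, detection_power(n, 0.70, 0.63))
```

    Because detection is assumed perfect, this sketch reaches high power at far smaller sample sizes than the 50,000 per year reported above; the gap is a measure of how strongly imperfect capture and recovery probabilities inflate real sample size requirements.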

  13. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous; they may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Misusing a Poisson model for negative binomial data may therefore easily cost up to 20% of power, depending on the value of the dispersion parameter.
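
    The dispersion effect is easy to demonstrate by simulation. The sketch below uses a plain two-arm Wald test of a rate difference rather than the paper's three-arm equivalence design, so it only illustrates why a sample size that is adequate under a Poisson assumption becomes underpowered for negative binomial data; all parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

def wald_power(n, mu_t, mu_r, k=None, n_sim=2000, seed=0):
    """Simulated power of a two-sided Wald test comparing two mean
    counts. k=None draws Poisson counts; a finite k draws negative
    binomial counts with variance mu + mu**2 / k."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(0.975)
    hits = 0
    for _ in range(n_sim):
        if k is None:
            x_t = rng.poisson(mu_t, n)
            x_r = rng.poisson(mu_r, n)
        else:
            x_t = rng.negative_binomial(k, k / (k + mu_t), n)
            x_r = rng.negative_binomial(k, k / (k + mu_r), n)
        se = np.sqrt(x_t.var(ddof=1) / n + x_r.var(ddof=1) / n)
        if abs(x_t.mean() - x_r.mean()) / se > z_crit:
            hits += 1
    return hits / n_sim

n = 60  # a size that looks adequate under a Poisson model
print("Poisson :", wald_power(n, 3.0, 2.2))
print("NB, k=2 :", wald_power(n, 3.0, 2.2, k=2.0))  # power drops
```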

  14. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  15. Optimization of thermal systems based on finite-time thermodynamics and thermoeconomics

    Energy Technology Data Exchange (ETDEWEB)

    Durmayaz, A. [Istanbul Technical University (Turkey). Department of Mechanical Engineering; Sogut, O.S. [Istanbul Technical University, Maslak (Turkey). Department of Naval Architecture and Ocean Engineering; Sahin, B. [Yildiz Technical University, Besiktas, Istanbul (Turkey). Department of Naval Architecture; Yavuz, H. [Istanbul Technical University, Maslak (Turkey). Institute of Energy

    2004-07-01

    The irreversibilities originating from finite-time and finite-size constraints are important in real thermal system optimization. Since classical thermodynamic analysis based on thermodynamic equilibrium does not consider these constraints directly, it is necessary to consider the energy transfer between the system and its surroundings in rate form. Finite-time thermodynamics provides a fundamental starting point for the optimization of real thermal systems by adding the fundamental concepts of heat transfer and fluid mechanics to classical thermodynamics. In this study, optimization studies of thermal systems that consider various objective functions, based on finite-time thermodynamics and thermoeconomics, are reviewed. (author)

  16. Crystallite size variation of TiO₂ samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO₂) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given heat-treatment temperature on the physical properties of TiO₂ powder was studied. After synthesis, the powder was divided into samples that were heat treated at 650 °C, with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
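
    The record does not state how the average crystallite size was extracted from the diffraction patterns; the standard estimator for this kind of analysis, quoted here as general background rather than as the authors' method, is the Scherrer equation:

```latex
% Scherrer equation: mean crystallite size from X-ray peak broadening
\tau = \frac{K \lambda}{\beta \cos\theta}
```

    Here τ is the mean crystallite size, K a shape factor (commonly ≈ 0.9), λ the X-ray wavelength, β the peak width (FWHM, in radians, after instrumental correction) and θ the Bragg angle.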

  17. Statistical sampling techniques as applied to OSE inspections

    International Nuclear Information System (INIS)

    Davis, J.J.; Cote, R.W.

    1987-01-01

    The need has been recognized for statistically valid methods for gathering information during OSE inspections and for the interpretation of results, both from performance testing and from records reviews, interviews, etc. Battelle Columbus Division, under contract to DOE OSE, has performed and is continuing to perform work in the area of statistical methodology for OSE inspections. This paper presents some of the sampling methodology currently being developed for use during OSE inspections. Topics include population definition, sample size requirements, level of confidence, and the practical logistical constraints associated with the conduct of an inspection based on random sampling. Sequential sampling schemes and sampling from finite populations are also discussed. The methods described are applicable to various data-gathering activities, ranging from the sampling and examination of classified documents to the sampling of Protective Force security inspectors for skill testing.
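
    As one concrete instance of the sample size requirements mentioned above, the classical calculation for estimating a proportion from a finite population combines the infinite-population formula with a finite population correction. The sketch below is a generic illustration of that calculation, not Battelle's actual procedure.

```python
import math

def sample_size(N, p=0.5, margin=0.05, z=1.96):
    """Sample size for estimating a proportion to within `margin` at
    ~95% confidence, with finite population correction for size N."""
    n0 = z**2 * p * (1.0 - p) / margin**2   # infinite-population size
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))

# e.g., auditing a finite set of 500 classified documents
print(sample_size(N=500))   # 218, versus 385 without the correction
```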

  18. Finite spatial-volume effect for π-N sigma term in lattice QCD

    International Nuclear Information System (INIS)

    Fukushima, M.; Chiba, S.; Tanigawa, T.

    2003-01-01

    We report on a finite spatial-volume effect for the pion-nucleon sigma term σ_πN for quenched Wilson fermions on 8³ × 20 and 16³ × 20 lattices at β = 5.7, with spatial lattice sizes of La ≈ 1.12 fm and La ≈ 2.24 fm, respectively. It is found that the spatial size dependence of the connected part σ_πN^con is quite small. We observed that the magnitude of the finite size effect is much larger for the disconnected part σ_πN^dis than for the connected one, with a drastic decrease of σ_πN^dis, amounting to 50%, from La ≈ 2.24 fm down to the smaller lattice size of La ≈ 1.12 fm. (author)

  19. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…

  20. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of …

  1. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful for studying the effects of aerosols on both human health and the environment. Sampling with multi-stage cascade impactors (e.g., the Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and the consequent implications for the retrieved aerosol modal structure have been evidenced.

  2. Evaluation of Concrete Cylinder Tests Using Finite Elements

    DEFF Research Database (Denmark)

    Saabye Ottosen, Niels

    1984-01-01

    Nonlinear axisymmetric finite element analyses are performed on the uniaxial compressive test of concrete cylinders. The models include thick steel loading plates, and cylinders with height-to-diameter ratios (h/d) ranging from 1-3 are treated. A simple constitutive model of the concrete is employed. … For determination of the uniaxial strength, the use of geometrically matched loading plates seems to be advantageous. Finally, it is observed that for variations of the element size within limits otherwise required to obtain a realistic analysis, the results are insensitive to the element size.

  3. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM) spectroscopy. … but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical opportunities for elucidating the origins and biogeochemical properties of FDOM.

  4. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    Science.gov (United States)

    2016-01-01

    … human-sized scene in 0.048 s to 0.101 s. Index Terms—Microwave imaging, multistatic radar, Fast Fourier Transform (FFT). I. INTRODUCTION: Near-field … configuration, but its computational demands are extreme. Fast Fourier Transform (FFT) imaging has long been used to efficiently construct images sampled … with the block diagram depicted in Fig. 4. It is noted that the multistatic-to-monostatic correction is valid over a finite imaging domain. However, as …

  5. Finite-size effects on the dynamic susceptibility of CoPhOMe single-chain molecular magnets in presence of a static magnetic field

    Science.gov (United States)

    Pini, M. G.; Rettori, A.; Bogani, L.; Lascialfari, A.; Mariani, M.; Caneschi, A.; Sessoli, R.

    2011-09-01

    The static and dynamic properties of the single-chain molecular magnet Co(hfac)₂NITPhOMe (CoPhOMe) (hfac = hexafluoroacetylacetonate, NITPhOMe = 4'-methoxy-phenyl-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide) are investigated in the framework of the Ising model with Glauber dynamics, in order to take into account both the effect of an applied magnetic field and the finite size of the chains. For static fields of moderate intensity and short chain lengths, the approximation of a monoexponential decay of the magnetization fluctuations is found to be valid at low temperatures; for strong fields and long chains, a multiexponential decay should rather be assumed. The effect of an oscillating magnetic field, with intensity much smaller than that of the static one, is included in the theory in order to obtain the dynamic susceptibility χ(ω). We find that, for an open chain with N spins, χ(ω) can be written as a weighted sum of N frequency contributions, with a sum rule relating the frequency weights to the static susceptibility of the chain. Very good agreement is found between the theoretical dynamic susceptibility and the ac susceptibility measured in moderate static fields (H_dc ≤ 2 kOe), where the approximation of a single dominating frequency for each segment length turns out to be valid. For static fields in this range, data for the relaxation time τ versus H_dc of the magnetization of CoPhOMe at low temperature are also qualitatively reproduced by theory, provided that finite-size effects are included.

  6. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool for identifying different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained very promising, low process blanks for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during passive sampling with the depth samplers.

  7. Finite nucleus Dirac mean field theory and random phase approximation using finite B splines

    International Nuclear Information System (INIS)

    McNeil, J.A.; Furnstahl, R.J.; Rost, E.; Shepard, J.R.; Department of Physics, University of Maryland, College Park, Maryland 20742; Department of Physics, University of Colorado, Boulder, Colorado 80309)

    1989-01-01

    We calculate the finite nucleus Dirac mean field spectrum in a Galerkin approach using finite basis splines. We review the method and present results for the relativistic σ-ω model for the closed-shell nuclei ¹⁶O and ⁴⁰Ca. We study the convergence of the method as a function of the size of the basis and the closure properties of the spectrum using an energy-weighted dipole sum rule. We apply the method to the Dirac random-phase-approximation response and present results for the isoscalar 1⁻ and 3⁻ longitudinal form factors of ¹⁶O and ⁴⁰Ca. We also use a B-spline spectral representation of the positive-energy projector to evaluate partial energy-weighted sum rules and compare with nonrelativistic sum rule results

  8. Mechanical and chemical spinodal instabilities in finite quantum systems

    International Nuclear Information System (INIS)

    Colonna, M.; Chomaz, Ph.; Ayik, S.

    2001-01-01

    Self-consistent quantum approaches are used to study the instabilities of finite nuclear systems. The frequencies of multipole density fluctuations are determined as a function of dilution and temperature, for several isotopes. The spinodal region of the phase diagrams is determined, and it appears reduced by finite size effects. The role of surface and volume instabilities is discussed. Important chemical effects are associated with mechanical disruption and may lead to isospin fractionation. (authors)

  9. Avalanching Systems with Longer Range Connectivity: Occurrence of a Crossover Phenomenon and Multifractal Finite Size Scaling

    Directory of Open Access Journals (Sweden)

    Simone Benella

    2017-07-01

    Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sand pile cellular automata. However, a common feature of these models is the assumption of local connectivity, while in many real systems, we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems, by analyzing the results of a sand pile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links and the breaking of simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by Rank-Ordered Multifractal Analysis (ROMA).

  10. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    … smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
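
    The central observation, that large samples make normality tests reject almost any real data set, is easy to reproduce. In this sketch (our illustration, using an arbitrary mildly contaminated distribution), the same population routinely passes at small n and almost never at large n:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def draw(n):
    """A 95/5 mixture of two Gaussians: nearly, but not exactly, normal."""
    return np.where(rng.random(n) < 0.95,
                    rng.normal(0.0, 1.0, n),
                    rng.normal(1.5, 2.0, n))

for n in (100, 500, 5_000, 50_000):
    p = stats.normaltest(draw(n)).pvalue   # D'Agostino-Pearson test
    print(f"n={n:6d}  p={p:.3g}")
```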

  11. Statistics of stationary points of random finite polynomial potentials

    International Nuclear Information System (INIS)

    Mehta, Dhagash; Niemerg, Matthew; Sun, Chuang

    2015-01-01

    The stationary points (SPs) of the potential energy landscapes (PELs) of multivariate random potentials (RPs) have found many applications in many areas of Physics, Chemistry and Mathematical Biology. However, there are few reliable methods available which can find all the SPs accurately. Hence, one has to rely on indirect methods such as Random Matrix theory. With a combination of the numerical polynomial homotopy continuation method and a certification method, we obtain all the certified SPs of the most general polynomial RP for each sample chosen from the Gaussian distribution with mean 0 and variance 1. While obtaining many novel results for the finite size case of the RP, we also discuss the implications of our results on mathematics of random systems and string theory landscapes. (paper)

  12. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG)-induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.

  13. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
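
    The kind of numerical simulation gridsampler performs can be sketched in a few lines if attributes are assumed to fall into categories independently with fixed probabilities; the probabilities and thresholds below are hypothetical, and the actual software exposes much richer settings.

```python
import numpy as np

def required_sample_size(category_probs, attrs_per_grid=7,
                         min_per_category=2, target_prob=0.95,
                         n_sim=2000, n_max=200, seed=0):
    """Smallest number of grids such that, with probability at least
    `target_prob`, every category receives `min_per_category`
    attributes, assuming attributes fall into categories i.i.d."""
    rng = np.random.default_rng(seed)
    p = np.asarray(category_probs, dtype=float)
    p /= p.sum()
    for n in range(1, n_max + 1):
        counts = rng.multinomial(n * attrs_per_grid, p, size=n_sim)
        ok = np.mean((counts >= min_per_category).all(axis=1))
        if ok >= target_prob:
            return n
    return None

# Hypothetical field with 8 categories, two of them rare
probs = [0.25, 0.20, 0.15, 0.12, 0.10, 0.10, 0.05, 0.03]
print(required_sample_size(probs))
```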

  14. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes in performance, sales, markets, risks, social relations, or public opinions constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected, and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  15. On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
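
    One simple reading of a pilot-based, completely nonparametric power calculation is sketched below: future groups are resampled from the pilot data and the rejection rate of the Kruskal-Wallis test is recorded. This bootstrap reading is our simplification, not necessarily the estimator proposed in the article, and the pilot data are synthetic.

```python
import numpy as np
from scipy import stats

def kw_power(pilot, n_per_group, n_sim=2000, alpha=0.05, seed=0):
    """Simulated power of the Kruskal-Wallis test when future groups
    are drawn (bootstrap-style) from pilot samples."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        groups = [rng.choice(s, size=n_per_group, replace=True)
                  for s in pilot]
        if stats.kruskal(*groups).pvalue < alpha:
            hits += 1
    return hits / n_sim

# Synthetic pilot data for three treatment groups
rng = np.random.default_rng(1)
pilot = [rng.normal(0.0, 1.0, 25),
         rng.normal(0.5, 1.0, 25),
         rng.normal(1.0, 1.0, 25)]
for n in (10, 20, 40):
    print(n, kw_power(pilot, n))
```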

  16. Probabilistic finite elements

    Science.gov (United States)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods have been efficiently combined with second-order perturbation techniques to provide an effective method for informing the designer of the range of response which is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than with simply a single response curve. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probabilistic density functions possess decaying tails. By incorporating the computational techniques we have developed in the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainties. Sample results for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loadings with the yield stress on the random field are given.

  17. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
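
    Of the two classes of summary statistics used by PopSizeABC, the folded allele frequency spectrum is straightforward to compute from unphased, unpolarized genotype data; the sketch below is our illustration of that computation (the linkage disequilibrium statistics are omitted).

```python
import numpy as np

def folded_afs(genotypes):
    """Folded allele frequency spectrum from an (n_snps, n_individuals)
    matrix of diploid genotypes coded 0/1/2. Entry i counts SNPs whose
    minor-allele count equals i + 1; monomorphic sites are dropped."""
    g = np.asarray(genotypes)
    n_alleles = 2 * g.shape[1]
    counts = g.sum(axis=1)                      # alternate-allele counts
    minor = np.minimum(counts, n_alleles - counts)
    return np.bincount(minor, minlength=n_alleles // 2 + 1)[1:]

# Toy data: 1,000 SNPs for 25 diploid genomes
rng = np.random.default_rng(0)
freqs = rng.beta(0.2, 0.8, size=1_000)          # arbitrary frequencies
g = rng.binomial(2, freqs[:, None], size=(1_000, 25))
print(folded_afs(g))
```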

  18. Finite size effects in the thermodynamics of a free neutral scalar field

    Science.gov (United States)

    Parvan, A. S.

    2018-04-01

    The exact analytical lattice results for the partition function of the free neutral scalar field in one spatial dimension in both configuration and momentum space were obtained in the framework of the path integral method. The symmetric square matrices of the bilinear forms on the vector space of fields in both configuration space and momentum space were found explicitly. The exact lattice results for the partition function were generalized to the three-dimensional spatial momentum space, and the main thermodynamic quantities were derived both on the lattice and in the continuum limit. The thermodynamic properties and the finite volume corrections to the thermodynamic quantities of the free real scalar field were studied. We found that on the finite lattice the exact lattice results for the free massive neutral scalar field agree with the continuum limit only in the region of small values of temperature and volume. However, at these temperatures and volumes the continuum physical quantities for both the massive and the massless scalar field deviate essentially from their thermodynamic-limit values and recover them only at high temperatures and/or large volumes in the thermodynamic limit.
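
    For orientation, the continuum expression against which such lattice results are compared is the standard free-field result, quoted here as textbook background rather than from the paper itself:

```latex
% Free neutral scalar field of mass m, volume V, inverse temperature beta
\ln Z = -V \int \frac{d^3 k}{(2\pi)^3}
        \left[ \frac{\beta\,\omega_k}{2}
             + \ln\!\left( 1 - e^{-\beta\,\omega_k} \right) \right],
\qquad \omega_k = \sqrt{k^2 + m^2}
```

    Energy and pressure follow by the usual derivatives of ln Z, and finite-size corrections arise when the momentum integral is replaced by a sum over the discrete modes k_n = 2πn/L of a finite box.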

  19. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  20. A combined finite volume-nonconforming finite element scheme for compressible two phase flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed; Saad, Mazen Naufal B M

    2014-01-01

    We propose and analyze a combined finite volume-nonconforming finite element scheme on general meshes to simulate two-phase compressible flow in porous media. The diffusion term, which can be anisotropic and heterogeneous, is discretized by piecewise linear nonconforming triangular finite elements. The other terms are discretized by means of a cell-centered finite volume scheme on a dual mesh, where the dual volumes are constructed around the sides of the original mesh. The relative permeability of each phase is decentred according to the sign of the velocity at the dual interface. This technique also ensures the validity of the discrete maximum principle for the saturation under a non-restrictive shape regularity of the space mesh and the positiveness of all transmissibilities. Next, a priori estimates on the pressures and on a function of the saturation that denotes capillary terms are established. These stability results lead to compactness arguments based on the use of the Kolmogorov compactness theorem, and allow us to derive the convergence of a subsequence of the sequence of approximate solutions to a weak solution of the continuous equations, provided the mesh size tends to zero. The proof is given for the complete system when the density of each phase depends on its own pressure. © 2014 Springer-Verlag Berlin Heidelberg.

  1. A combined finite volume-nonconforming finite element scheme for compressible two phase flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2014-06-28

    We propose and analyze a combined finite volume-nonconforming finite element scheme on general meshes to simulate two-phase compressible flow in porous media. The diffusion term, which can be anisotropic and heterogeneous, is discretized by piecewise linear nonconforming triangular finite elements. The other terms are discretized by means of a cell-centered finite volume scheme on a dual mesh, where the dual volumes are constructed around the sides of the original mesh. The relative permeability of each phase is decentred according to the sign of the velocity at the dual interface. This technique also ensures the validity of the discrete maximum principle for the saturation under a non-restrictive shape regularity of the space mesh and the positiveness of all transmissibilities. Next, a priori estimates on the pressures and on a function of the saturation that denotes capillary terms are established. These stability results lead to compactness arguments based on the use of the Kolmogorov compactness theorem, and allow us to derive the convergence of a subsequence of the sequence of approximate solutions to a weak solution of the continuous equations, provided the mesh size tends to zero. The proof is given for the complete system when the density of each phase depends on its own pressure. © 2014 Springer-Verlag Berlin Heidelberg.

  2. Investigation of faulted tunnel models by combined photoelasticity and finite element analysis

    International Nuclear Information System (INIS)

    Ladkany, S.G.; Huang, Yuping

    1994-01-01

    Models of square and circular tunnels with short faults cutting through their surfaces are investigated by photoelasticity. These models, when duplicated by finite element analysis, can adequately predict the stress states of square or circular faulted tunnels. Finite element analysis using gap elements may be used to investigate full-size faulted tunnel systems

  3. Generalized prolate spheroidal wave functions for optical finite fractional Fourier and linear canonical transforms.

    Science.gov (United States)

    Pei, Soo-Chang; Ding, Jian-Jiun

    2005-03-01

    Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.

  4. An improved Landauer principle with finite-size corrections

    International Nuclear Information System (INIS)

    Reeb, David; Wolf, Michael M

    2014-01-01

    Landauer's principle relates entropy decrease and heat dissipation during logically irreversible processes. Most theoretical justifications of Landauer's principle either use thermodynamic reasoning or rely on specific models based on arguable assumptions. Here, we aim at a general and minimal setup to formulate Landauer's principle in precise terms. We provide a simple and rigorous proof of an improved version of the principle, which is formulated in terms of an equality rather than an inequality. The proof is based on quantum statistical mechanics concepts rather than on thermodynamic argumentation. From this equality version, we obtain explicit improvements of Landauer's bound that depend on the effective size of the thermal reservoir and reduce to Landauer's bound only for infinite-sized reservoirs. (paper)
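
    For orientation, the classical statement that this equality version sharpens is the inequality form of Landauer's principle, quoted here as standard background (the paper's explicit finite-size correction term is not reproduced):

```latex
% Heat Q dissipated into a reservoir at inverse temperature beta,
% versus the entropy decrease Delta S of the system (in nats):
\beta Q \;\geq\; \Delta S
```

    For the erasure of one bit, ΔS = ln 2 and the bound becomes Q ≥ k_B T ln 2; the improved version adds to the right-hand side a positive correction that vanishes only in the limit of an infinite-dimensional reservoir.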

  5. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August 2003 in Hungary. The sampling was performed simultaneously at two sites: in Budapest (urban site) and at K-puszta (remote area). Two PIXE International 7-stage cascade impactors, with a 24-hour collection time, were used for aerosol sampling. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of the elements S, Si, Ca, W, Zn, Pb and Fe were investigated in K-puszta and in Budapest. Average rates (shown in Table 1) of the elemental concentrations were calculated for each stage (in %) from the obtained distributions. The elements can be divided into two groups on the basis of these data. The majority of the particles containing Fe, Si, Ca and Ti are in the 2-8 μm size range (first group). These soil-derived elements were usually found in higher concentrations in Budapest than in K-puszta (Fig. 1). The second group consists of S, Pb and W. The majority of these elements was found in the 0.25-1 μm size range, at much higher concentrations in Budapest than in K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to either of the above groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  6. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. Graphs compare the volume distribution with the number distribution for naturally occurring dust, jet-mill-ground dust, and ball-mill-ground dust.

  7. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits, and from 10 and 100 g samples using a bead-beating method (SARDI), was used as template for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities, retrieving optimal diversity while still capturing rarer taxa and decreasing replicate variation.

  8. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when the sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, the average intraspecific distance tended to stabilize. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  9. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…

  11. Twisted finite-volume corrections to K{sub l3} decays with partially-quenched and rooted-staggered quarks

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, Claude [Department of Physics, Washington University,One Brookings Drive, Saint Louis (United States); Bijnens, Johan [Department of Astronomy and Theoretical Physics, Lund University,Sölvegatan 14A, SE 223-62 Lund (Sweden); Gámiz, Elvira [CAFPE and Departamento de Física Teórica y del Cosmos, Universidad de Granada,Campus de Fuente Nueva, E-18002 Granada (Spain); Relefors, Johan [Department of Astronomy and Theoretical Physics, Lund University,Sölvegatan 14A, SE 223-62 Lund (Sweden)

    2017-03-23

    The determination of |V_us| from kaon semileptonic decays requires the value of the form factor f_+(q² = 0), which can be calculated precisely on the lattice. We provide the one-loop partially quenched chiral perturbation theory expressions, both with and without the effects of staggered quarks, for all form factors at finite volume and with partially twisted boundary conditions, for both the vector current and scalar density matrix elements at all q². We point out that at finite volume there are more form factors than just f_+ and f_- for the vector current matrix element, but that the Ward identity is fully satisfied. The size of the finite-volume corrections at present lattice sizes is small. This will help improve the lattice determination of f_+(q² = 0), since the finite-volume error is the dominant error source for some calculations. The size of the finite-volume corrections may be estimated on a single lattice ensemble by comparing results for various twist choices.

  12. Radiative nonrecoil nuclear finite size corrections of order α(Zα)⁵ to the Lamb shift in light muonic atoms

    Directory of Open Access Journals (Sweden)

    R.N. Faustov

    2017-12-01

    On the basis of the quasipotential method in quantum electrodynamics, we calculate nuclear finite size radiative corrections of order α(Zα)⁵ to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of the particles that gives the necessary contributions to the energy spectrum, we use the method of projection operators onto states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator and the amplitude with a spanning photon are obtained. We also present numerical results for these contributions using modern experimental data on the electromagnetic form factors of light nuclei. Keywords: Lamb shift, Muonic atoms, Quantum electrodynamics

  13. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco in spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were lifted within the first meter only (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high amount represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distribution and relative particle load, although the dust devils differed in their dimensions and intensities. A general trend of decreasing grain size with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  14. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric methods (or parametric methods after Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
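
    The study's design is easy to mirror in miniature. In the sketch below, with Gaussian taken as the "positive" class, sensitivity is the fraction of truly Gaussian samples that Shapiro-Wilk lets pass and specificity is the fraction of lognormal samples it flags; the population choices and replicate counts are arbitrary stand-ins for the simulated populations of the study.

```python
import numpy as np
from scipy import stats

def sensitivity(n, n_rep=1000, alpha=0.05, seed=0):
    """Fraction of truly Gaussian samples of size n that Shapiro-Wilk
    correctly identifies as Gaussian (p > alpha)."""
    rng = np.random.default_rng(seed)
    hits = sum(stats.shapiro(rng.normal(size=n)).pvalue > alpha
               for _ in range(n_rep))
    return hits / n_rep

def specificity(n, n_rep=1000, alpha=0.05, seed=0):
    """Fraction of lognormal (non-Gaussian) samples of size n that the
    test correctly flags as non-Gaussian (p <= alpha)."""
    rng = np.random.default_rng(seed)
    hits = sum(stats.shapiro(rng.lognormal(size=n)).pvalue <= alpha
               for _ in range(n_rep))
    return hits / n_rep

for n in (30, 60):
    print(f"n={n}: sensitivity={sensitivity(n):.2f}, "
          f"specificity={specificity(n):.2f}")
```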

  15. Migration of finite sized particles in a laminar square channel flow from low to high Reynolds numbers

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, M., E-mail: micheline.abbas@ensiacet.fr [Laboratoire de Génie Chimique, Université de Toulouse INPT-UPS, 31030, Toulouse (France); CNRS, Fédération de recherche FERMaT, CNRS, 31400, Toulouse (France); Magaud, P. [CNRS, Fédération de recherche FERMaT, CNRS, 31400, Toulouse (France); Institut Clément Ader, Université de Toulouse UPS-INSA-ISAE-Mines Albi, 31400, Toulouse (France); Gao, Y. [Institut Clément Ader, Université de Toulouse UPS-INSA-ISAE-Mines Albi, 31400, Toulouse (France); Geoffroy, S. [CNRS, Fédération de recherche FERMaT, CNRS, 31400, Toulouse (France); Laboratoire Matériaux et Durabilité des Constructions, Université de Toulouse (France); UPS, INSA, 31077, Toulouse (France)

    2014-12-15

    The migration of neutrally buoyant finite sized particles in a Newtonian square channel flow is investigated in the limit of very low solid volumetric concentration, within a wide range of channel Reynolds numbers Re = [0.07-120]. In situ microscope measurements of particle distributions, taken far from the channel inlet (at a distance several thousand times the channel height), revealed that particles are preferentially located near the channel walls at Re > 10 and near the channel center at Re < 1. Whereas the cross-streamline particle motion is governed by inertia-induced lift forces at high inertia, it seems to be controlled by shear-induced particle interactions at low (but finite) Reynolds numbers, despite the low solid volume fraction (<1%). The transition between both regimes is observed in the range Re = [1-10]. In order to exclude the effect of multi-body interactions, the trajectories of single freely moving particles are calculated by means of numerical simulations based on the force coupling method. With the deployed numerical tool, the complete particle trajectories are accessible within a reasonable computational time only in the inertial regime (Re > 10). In this regime, we show that (i) the particle undergoes cross-streamline migration followed by a cross-lateral migration (parallel to the wall) in agreement with previous observations, and (ii) the stable equilibrium positions are located at the midline of the channel faces while the diagonal equilibrium positions are unstable. At low flow inertia, the first instants of the numerical simulations (carried out at Re = O(1)) reveal that the cross-streamline migration of a single particle is oriented towards the channel wall, suggesting that the particle preferential positions around the channel center, observed in the experiments, are rather due to multi-body interactions.

  17. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of using different grain sizes on species richness estimation. Several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly, whereas estimations developed using the smaller grain sizes (pairs of traps, traps, records and individuals) presented similar results.

  18. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods, but in biological SAXS of monodisperse systems, sample preparation is routinely performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation, as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate, using example proteins, that SEC purification does not always provide ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive, and I outline a simple additive screening method for sample preparation.

  19. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach using the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► Effects of sample size on the galvanomagnetic properties of Bi nanowires are explained via the Parrott theorem by solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios have been calculated within the specular reflection approximation. ► The temperature and radius dependence of the TMR of cylindrical bismuth nanowires is given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.

  20. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between protocols and publications. RESULTS: The method of handling missing data was described in 16 protocols and 49 publications. The statistical test used to analyse primary outcome measures was reported in 39/49 protocols and 42/43 publications. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials).

  1. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying the sample size per group, the number of sacrifices, the number of sacrificed animals at each interval, if any, and the scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface, passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
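
    The tool's core idea, estimating power as the rejection rate over simulated experiments when no closed-form solution exists, can be sketched generically. The sketch below is a deliberately simplified stand-in that compares two tumor incidence proportions with a chi-square test; the actual simulator models occult tumors, competing risks and sacrifice schedules, so all names and numbers here are illustrative assumptions.

        import numpy as np
        from scipy import stats

        def mc_power(n_per_group, p_control, p_treated, alpha=0.05, n_sim=5000, seed=1):
            """Estimate power as the fraction of simulated experiments that reject H0.

            Simplified stand-in: two binomial tumor incidences compared with a
            chi-square test, instead of the occult-tumor/competing-risk model."""
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_sim):
                x1 = rng.binomial(n_per_group, p_control)
                x2 = rng.binomial(n_per_group, p_treated)
                table = np.array([[x1, n_per_group - x1], [x2, n_per_group - x2]])
                _, p_value, _, _ = stats.chi2_contingency(table)
                if p_value < alpha:
                    rejections += 1
            return rejections / n_sim

        print(mc_power(50, 0.10, 0.30))  # increase n_per_group until power is adequate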

  2. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    KAUST Repository

    Chu, Chunlei

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations. © 2011 Elsevier B.V.
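
    The implicit operators of the paper are beyond a short sketch, but the basic mechanics of building finite difference operators on nonuniform grids can be shown: explicit weights for the m-th derivative on an arbitrary stencil follow from matching Taylor terms, which reduces to a small linear solve. The function below is a generic textbook construction, not the authors' implicit scheme.

        import numpy as np
        from math import factorial

        def fd_weights(x, x0, m):
            """Explicit finite difference weights for the m-th derivative at x0 on an
            arbitrary (possibly nonuniform) stencil x, from the Taylor conditions
            sum_j w_j (x_j - x0)**k / k! = delta(k, m) for k = 0, ..., len(x) - 1."""
            n = len(x)
            A = np.array([[(xj - x0) ** k / factorial(k) for xj in x] for k in range(n)])
            b = np.zeros(n)
            b[m] = 1.0
            return np.linalg.solve(A, b)

        # First derivative of sin(x) at x0 = 0.3 on a nonuniform 5-point stencil
        x = np.array([0.0, 0.1, 0.25, 0.4, 0.6])
        w = fd_weights(x, 0.3, 1)
        print(w @ np.sin(x), np.cos(0.3))  # the two values agree closely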

  3. Role of the surface in the critical behavior of finite systems

    Energy Technology Data Exchange (ETDEWEB)

    Duflot, V.; Chomaz, Ph. [Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (France); Gulminelli, F. [Laboratoire de Physique Corpusculaire, LPC-ISMRa, CNRS-IN2P3, 14 - Caen (France)

    2000-07-01

    The role of surfaces in a finite system undergoing a critical phenomenon is discussed in a canonical lattice-gas model. Surfaces are constrained by a mean volume defined via a Lagrange multiplier. We show that critical fragment size distributions are conserved even in very small systems with surfaces. This implies that critical signals are still relevant in the study of phase transitions in finite systems. (authors)

  4. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
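
    As a reminder of the estimation machinery the abstract refers to, a Horvitz–Thompson estimator weights each sampled unit by the inverse of its inclusion probability; the graph-specific part of the paper lies in deriving those probabilities for T-stage snowball samples, which is not reproduced here. A minimal generic sketch, with illustrative numbers:

        def horvitz_thompson_total(values, inclusion_probs):
            """Horvitz-Thompson estimator of a population total: each observed
            value is weighted by the inverse of its inclusion probability."""
            return sum(y / p for y, p in zip(values, inclusion_probs))

        # Three sampled nodes with known inclusion probabilities (hypothetical)
        print(horvitz_thompson_total([4.0, 7.0, 2.0], [0.2, 0.5, 0.1]))  # 54.0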

  5. Stochastic delocalization of finite populations

    International Nuclear Information System (INIS)

    Geyrhofer, Lukas; Hallatschek, Oskar

    2013-01-01

    The localization of populations of replicating bacteria, viruses or autocatalytic chemicals arises in various contexts, such as ecology, evolution, medicine or chemistry. Several deterministic mathematical models have been used to characterize the conditions under which localized states can form, and how they break down due to convective driving forces. It has been repeatedly found that populations remain localized unless the bias exceeds a critical threshold value, and that close to the transition the population is characterized by a diverging length scale. These results, however, have been obtained upon ignoring number fluctuations (‘genetic drift’), which are inevitable given the discreteness of the replicating entities. Here, we study the localization/delocalization of a finite population in the presence of genetic drift. The population is modeled by a linear chain of subpopulations, or demes, which exchange migrants at a constant rate. Individuals in one particular deme, called ‘oasis’, receive a growth rate benefit, and the total population is regulated to have constant size N. In this ecological setting, we find that any finite population delocalizes on sufficiently long time scales. Depending on parameters, however, populations may remain localized for a very long time. The typical waiting time to delocalization increases exponentially with both population size and distance to the critical wind speed of the deterministic approximation. We augment these simulation results by a mathematical analysis that treats the reproduction and migration of individuals as branching random walks subject to global constraints. For a particular constraint, different from a fixed population size constraint, this model yields a solvable first moment equation. We find that this solvable model approximates very well the fixed population size model for large populations, but starts to deviate as population sizes are small. Nevertheless, the qualitative behavior of the

  6. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures
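
    The report's generalized procedures are not reproduced here, but the flavor of such sample size calculations can be conveyed by the classical attribute sampling relation: to detect at least one of D defective items among N with non-detection probability at most β, the sample size satisfies approximately (1 - n/N)^D ≤ β. A sketch under that textbook approximation:

        from math import ceil

        def attribute_sample_size(N, D, beta):
            """Smallest n such that a random sample of n from N items misses all
            D defective items with probability at most beta, using the classical
            approximation (1 - n/N)**D <= beta, i.e. n >= N * (1 - beta**(1/D))."""
            return ceil(N * (1.0 - beta ** (1.0 / D)))

        print(attribute_sample_size(N=300, D=10, beta=0.05))  # 78 items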

  7. Tearing mode saturation with finite pressure

    International Nuclear Information System (INIS)

    Lee, J.K.

    1988-01-01

    With finite pressure, the saturation of the current-driven tearing mode is obtained in three-dimensional nonlinear resistive magnetohydrodynamic simulations for Tokamak plasmas. To effectively focus on the tearing modes, the perturbed pressure effects are excluded while the finite equilibrium pressure effects are retained. With this model, the linear growth rates of the tearing modes are found to be very insensitive to the equilibrium pressure increase. The nonlinear aspects of the tearing modes, however, are found to be very sensitive to the pressure increase in that the saturation level of the nonlinear harmonics of the tearing modes increases monotonically with the pressure rise. The increased level is associated with enhanced tearing island sizes or increased stochastic magnetic field region. (author)

  8. Dense QCD in a Finite Volume

    International Nuclear Information System (INIS)

    Yamamoto, Naoki; Kanazawa, Takuya

    2009-01-01

    We study the properties of QCD at high baryon density in a finite volume where color superconductivity occurs. We derive exact sum rules for complex eigenvalues of the Dirac operator at a finite chemical potential, and show that the Dirac spectrum is directly related to the color superconducting gap Δ. Also, we find a characteristic signature of color superconductivity: an X-shaped spectrum of partition function zeros in the complex quark mass plane near the origin, reflecting the Z(2)_L × Z(2)_R symmetry of the diquark pairing. Our results are universal in the domain Δ⁻¹ ≪ L ≪ m_π⁻¹, where L is the linear size of the system and m_π is the pion mass at high density.

  9. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
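
    The simulation logic can be condensed to a few lines for the "random chance" scenario: draw information sources until every code in the hypothetical population has been observed at least once, and record how many draws that took. The code probabilities below are illustrative assumptions, not values from the paper.

        import numpy as np

        def steps_to_saturation(code_probs, rng):
            """Number of sampling steps until every code has been observed once
            ('random chance' scenario); each step reveals one code drawn with
            the given probabilities."""
            seen, steps = set(), 0
            while len(seen) < len(code_probs):
                seen.add(int(rng.choice(len(code_probs), p=code_probs)))
                steps += 1
            return steps

        rng = np.random.default_rng(0)
        probs = np.array([0.4, 0.3, 0.2, 0.05, 0.05])  # rare codes dominate the waiting time
        draws = [steps_to_saturation(probs, rng) for _ in range(2000)]
        print(np.mean(draws), np.percentile(draws, 95))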

  10. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin-gel it remained constant at around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area to image area and of maximum air-cell height to image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
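
    The convergence criterion is simple to operationalize: compute the coefficient of variation of the apparent modulus at each specimen size and look for the size at which it levels off. The sketch below uses synthetic measurements whose scatter shrinks with specimen size; this is a purely illustrative model, not the bread data.

        import numpy as np

        rng = np.random.default_rng(42)
        sizes_mm = [5, 10, 20, 45, 60]  # hypothetical specimen edge lengths
        for L in sizes_mm:
            # Synthetic apparent Young's moduli: 25 replicates per size, with
            # scatter that decreases as specimens grow large relative to the
            # air-cell structure (illustrative assumption only).
            E = rng.normal(1.0, 0.5 / np.sqrt(L), size=25)
            cv = 100 * E.std(ddof=1) / E.mean()
            print(f"size {L:>2} mm: CV = {cv:4.1f}%")
        # The RVE is the smallest size at which the CV settles onto a plateau.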

  11. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report

  12. Preconditioning for Mixed Finite Element Formulations of Elliptic Problems

    KAUST Repository

    Wildey, Tim; Xue, Guangri

    2013-01-01

    In this paper, we discuss a preconditioning technique for mixed finite element discretizations of elliptic equations. The technique is based on a block-diagonal approximation of the mass matrix which maintains the sparsity and positive definiteness of the corresponding Schur complement. This preconditioner arises from the multipoint flux mixed finite element method and is robust with respect to mesh size and is better conditioned for full permeability tensors than a preconditioner based on a diagonal approximation of the mass matrix. © Springer-Verlag Berlin Heidelberg 2013.
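
    The idea can be illustrated numerically with toy matrices: approximate an SPD mass matrix M by its diagonal blocks, so that the Schur complement B Mb^-1 B^T stays sparse and positive definite, and check how well it preconditions the exact Schur complement. This is a generic saddle-point sketch, not the multipoint flux construction of the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, bs = 60, 30, 2  # velocity dofs, pressure dofs, block size

        # Toy SPD "mass matrix" with banded coupling, and a divergence-like B
        M = np.eye(n) + 0.3 * np.diag(rng.random(n - 1), 1)
        M = 0.5 * (M + M.T)
        B = rng.standard_normal((m, n))

        def block_diag_approx(M, bs):
            """Keep only the bs x bs diagonal blocks of M (SPD if M is SPD)."""
            Mb = np.zeros_like(M)
            for i in range(0, M.shape[0], bs):
                Mb[i:i + bs, i:i + bs] = M[i:i + bs, i:i + bs]
            return Mb

        S_exact = B @ np.linalg.solve(M, B.T)  # exact (dense) Schur complement
        S_approx = B @ np.linalg.solve(block_diag_approx(M, bs), B.T)
        # A preconditioned system with condition number near 1 converges fast
        print(np.linalg.cond(np.linalg.solve(S_approx, S_exact)))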

  13. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  14. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. The new software allows the use of containers of different size and composition for standards and unknowns, an enormous savings considering the expense of the multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.
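
    For context, segment-by-segment corrections of this kind build on the standard far-field attenuation correction for a homogeneous segment, recoverable from the measured gamma-ray transmission T. A sketch of that textbook factor (not the container-size formalism introduced above):

        import math

        def attenuation_cf(T):
            """Far-field attenuation correction factor for a homogeneous segment
            with measured transmission T (0 < T < 1): CF = ln(T) / (T - 1).
            CF tends to 1 as the segment becomes transparent (T -> 1)."""
            return math.log(T) / (T - 1.0)

        for T in (0.9, 0.5, 0.1):
            print(T, round(attenuation_cf(T), 3))  # 1.054, 1.386, 2.558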

  15. Domain decomposition based iterative methods for nonlinear elliptic finite element problems

    Energy Technology Data Exchange (ETDEWEB)

    Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)

    1994-12-31

    The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
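
    In its simplest linear form, an overlapping Schwarz sweep repeatedly solves local problems on overlapping subdomains, driven by the current global residual. A minimal sketch on a 1D Poisson finite difference problem follows; the talk's nonlinear, inexact-Newton setting is richer.

        import numpy as np

        # 1D Poisson: -u'' = 1 on (0, 1), u(0) = u(1) = 0, second-order differences
        n = 99
        h = 1.0 / (n + 1)
        f = np.ones(n)
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

        # Two overlapping index blocks (generous overlap)
        subdomains = [np.arange(0, 60), np.arange(40, n)]
        u = np.zeros(n)
        for sweep in range(30):                  # multiplicative Schwarz sweeps
            for idx in subdomains:
                r = f - A @ u                    # current global residual
                A_local = A[np.ix_(idx, idx)]    # restricted subdomain operator
                u[idx] += np.linalg.solve(A_local, r[idx])
        print(np.max(np.abs(A @ u - f)))         # residual is tiny after the sweeps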

  16. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
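
    The paper's sharp bounds are not reproduced here, but a first-order approximation conveys the structure: with equal follow-up t, dispersion k (variance mu + k*mu^2) and per-group event rates, the variance of the log rate ratio is roughly [1/(rate0*t) + 1/(rate1*t) + 2k]/n per comparison. A hedged sketch under those simplifying assumptions:

        from math import ceil, log
        from scipy.stats import norm

        def nb_sample_size(rate0, rate1, t, k, alpha=0.05, power=0.8):
            """Approximate per-group sample size for testing the rate ratio in
            negative binomial regression with equal follow-up t and dispersion k.
            A textbook-style first-order formula, not the paper's sharp bounds."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            var_term = 1.0 / (rate0 * t) + 1.0 / (rate1 * t) + 2.0 * k
            return ceil(z**2 * var_term / log(rate1 / rate0) ** 2)

        print(nb_sample_size(rate0=1.0, rate1=0.7, t=2.0, k=0.5))  # subjects per group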

  17. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size.
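
    The rule of thumb reduces to a one-liner; the numbers below are hypothetical:

        from math import ceil

        def adjust_for_dropouts(n_complete, dropout_rate):
            """Add to the dropout-free sample size the number of subjects expected
            to drop from a sample of that original size (the article's rule)."""
            return n_complete + ceil(n_complete * dropout_rate)

        print(adjust_for_dropouts(64, 0.20))  # 64 + 13 = 77 subjects per group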

  18. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on the determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out either by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  19. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the field size is small relative to the sensor spatial resolution. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provides the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  20. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
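
    The small-sample bias that motivates the Firth correction is easy to see by simulation: the plain maximum likelihood estimate of a logistic (or Cox) coefficient is biased away from zero when events are few. The sketch below demonstrates the effect for logistic regression with statsmodels; it does not reproduce the paper's Cox-model analysis or its penalized implementation.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        beta_true, n, reps = 1.0, 30, 500
        estimates = []
        for _ in range(reps):
            x = rng.standard_normal(n)
            p = 1.0 / (1.0 + np.exp(-beta_true * x))
            y = rng.binomial(1, p)
            if y.min() == y.max():
                continue                      # no variation in the outcome
            try:
                fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
                estimates.append(fit.params[1])
            except Exception:
                continue                      # skip (quasi-)separated samples
        # The median estimate typically exceeds the true value 1.0 at n = 30,
        # which is the bias the Firth penalty is designed to shrink.
        print(np.median(estimates))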

  1. Size effect on local magnetic moments in ferrimagnetic molecular complexes: an XMCD investigation

    International Nuclear Information System (INIS)

    Champion, G.; Villain, F.; Cartier dit Moulin, C.; Arrio, M.-A.; Sainctavit, P.; Zacchigna, M.; Zangrando, M.; Finazzi, M.; Parmigiani, F.; Mathoniere, C.

    2003-01-01

    Molecular chemistry makes it possible to synthesize new magnetic systems with controlled properties such as size, magnetization or anisotropy. The theoretical study of the magnetic properties of small molecules (from 2 to 10 metallic cations per molecule) predicts that the magnetization at saturation of each ion does not reach the value expected for uncoupled ions when the magnetic interaction is antiferromagnetic. The quantum origin of this effect is the linear combination of several spin states building the wave function of the ground state; clusters of finite size and finite spin value exhibit this property. When single crystals are available, spin densities on each atom can be obtained experimentally by polarized neutron diffraction (PND) experiments. In the case of bimetallic MnCu powdered samples, we will show that x-ray magnetic circular dichroism (XMCD) spectroscopy can be used to follow the evolution of the spin distribution on the Mn II and Cu II sites when passing from a dinuclear MnCu unit to a one-dimensional (MnCu)n compound. (author)

  2. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent derived prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is an efficient technique, with the advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and a small computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
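
    A bare-bones version of the rolling mechanism: refit an AR model on the current data window at every step, forecast one step ahead, then roll the window forward by appending the new prediction and dropping the oldest value. The sketch below uses a plain least-squares AR fit; the paper's AR formulation for nonstationary series is more elaborate, and the input series is hypothetical.

        import numpy as np

        def ar_fit_predict(window, p):
            """Least-squares AR(p) fit on the window, then a 1-step-ahead forecast."""
            X = np.column_stack([window[i:len(window) - p + i] for i in range(p)])
            y = window[p:]
            coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)
            return coef[0] + coef[1:] @ window[-p:]

        def rolling_forecast(history, p=2, horizon=3):
            """Rolling mechanism: append each prediction, drop the oldest value."""
            window = list(history)
            forecasts = []
            for _ in range(horizon):
                yhat = ar_fit_predict(np.asarray(window, dtype=float), p)
                forecasts.append(float(yhat))
                window = window[1:] + [float(yhat)]   # roll the data window forward
            return forecasts

        print(rolling_forecast([2.0, 2.3, 2.9, 3.4, 4.1, 4.8, 5.2, 6.0]))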

  3. A Riemann-Hilbert formulation for the finite temperature Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Cavaglià, Andrea [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); Cornagliotto, Martina [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); DESY Hamburg, Theory Group,Notkestrasse 85, D-22607 Hamburg (Germany); Mattelliano, Massimo; Tateo, Roberto [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy)

    2015-06-03

    Inspired by recent results in the context of AdS/CFT integrability, we reconsider the Thermodynamic Bethe Ansatz equations describing the 1D fermionic Hubbard model at finite temperature. We prove that the infinite set of TBA equations are equivalent to a simple nonlinear Riemann-Hilbert problem for a finite number of unknown functions. The latter can be transformed into a set of three coupled nonlinear integral equations defined over a finite support, which can be easily solved numerically. We discuss the emergence of an exact Bethe Ansatz and the link between the TBA approach and the results by Jüttner, Klümper and Suzuki based on the Quantum Transfer Matrix method. We also comment on the analytic continuation mechanism leading to excited states and on the mirror equations describing the finite-size Hubbard model with twisted boundary conditions.

  4. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
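
    At the heart of such power simulations is the PERMANOVA test itself, which is compact enough to sketch: a pseudo-F statistic computed from the distance matrix, with significance assessed by permuting group labels (one-way design, following Anderson's partitioning). The paper's distance-matrix simulator and omega-squared effect sizes are not reproduced here; the toy data are illustrative.

        import numpy as np

        def pseudo_f(D, labels):
            """One-way PERMANOVA pseudo-F from a pairwise distance matrix D."""
            labels = np.asarray(labels)
            N = len(labels)
            D2 = D ** 2
            ss_total = D2[np.triu_indices(N, 1)].sum() / N
            ss_within = 0.0
            for g in np.unique(labels):
                idx = np.where(labels == g)[0]
                sub = D2[np.ix_(idx, idx)]
                ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
            a = len(np.unique(labels))
            ss_between = ss_total - ss_within
            return (ss_between / (a - 1)) / (ss_within / (N - a))

        def permanova_pvalue(D, labels, n_perm=999, seed=0):
            """Permutation p-value: reshuffle group labels, recompute pseudo-F."""
            rng = np.random.default_rng(seed)
            f_obs = pseudo_f(D, labels)
            labels = np.asarray(labels)
            hits = sum(pseudo_f(D, rng.permutation(labels)) >= f_obs
                       for _ in range(n_perm))
            return (hits + 1) / (n_perm + 1)

        # Toy usage: Euclidean distances between two slightly shifted groups
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0.0, 1.0, (10, 5)), rng.normal(0.8, 1.0, (10, 5))])
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        print(permanova_pvalue(D, [0] * 10 + [1] * 10))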

  5. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of all possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1 - 2θ)⁴.

  6. Discrete and mesoscopic regimes of finite-size wave turbulence

    International Nuclear Information System (INIS)

    L'vov, V. S.; Nazarenko, S.

    2010-01-01

    Bounding volume results in discreteness of eigenmodes in wave systems. This leads to a depletion or complete loss of wave resonances (three-wave, four-wave, etc.), which has a strong effect on wave turbulence (WT), i.e., on the statistical behavior of broadband sets of weakly nonlinear waves. This paper describes three different regimes of WT realizable for different levels of the wave excitations: discrete, mesoscopic and kinetic WT. Discrete WT comprises chaotic dynamics of interacting wave 'clusters' consisting of a discrete (often finite) number of connected resonant wave triads (or quartets). Kinetic WT refers to the infinite-box theory, described by well-known wave-kinetic equations. Mesoscopic WT is a regime in which either the discrete and the kinetic evolutions alternate or neither of these two types is purely realized. We argue that in mesoscopic systems the wave spectrum experiences a sandpile behavior. Importantly, the mesoscopic regime is realized for a broad range of wave amplitudes which typically spans several orders of magnitude, and not just for a particular intermediate level.

  7. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  8. Structural weights analysis of advanced aerospace vehicles using finite element analysis

    Science.gov (United States)

    Bush, Lance B.; Lentz, Christopher A.; Rehder, John J.; Naftel, J. Chris; Cerro, Jeffrey A.

    1989-01-01

    A conceptual/preliminary level structural design system has been developed for structural integrity analysis and weight estimation of advanced space transportation vehicles. The system includes a three-dimensional interactive geometry modeler, a finite element pre- and post-processor, a finite element analyzer, and a structural sizing program. Inputs to the system include the geometry, surface temperature, material constants, construction methods, and aerodynamic and inertial loads. The results are a sized vehicle structure capable of withstanding the static loads incurred during assembly, transportation, operations, and missions, and a corresponding structural weight. An analysis of the Space Shuttle external tank is included in this paper as a validation and benchmark case of the system.

  9. Finite element analysis-based design of a fluid-flow control nano-valve

    International Nuclear Information System (INIS)

    Grujicic, M.; Cao, G.; Pandurangan, B.; Roy, W.N.

    2005-01-01

    A finite element method-based procedure is developed for the design of molecularly functionalized nano-size devices. The procedure is aimed at the single-walled carbon nano-tubes (SWCNTs) used in the construction of such nano-devices and utilizes spatially varying nodal forces to represent electrostatic interactions between the charged groups of the functionalizing molecules. The procedure is next applied to the design of a fluid-flow control nano-valve. The results obtained suggest that the finite element-based procedure yields results very similar to their molecular modeling counterparts for small-size nano-valves, for which both types of analyses are feasible. The procedure is finally applied to optimize the design of a larger-size nano-valve, for which the molecular modeling approach is not practical

  10. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster
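
    For background, cluster randomised trial sizing typically inflates an individually randomised sample size by the design effect DE = 1 + (m - 1) * ICC, where m is the cluster size and ICC the intracluster correlation; the paper's re-estimation procedure refines this standard calculation, which is sketched below with hypothetical inputs.

        from math import ceil

        def clusters_needed(n_individual, m, icc):
            """Inflate an individually randomised sample size by the design effect
            DE = 1 + (m - 1) * icc, then convert to clusters of size m."""
            design_effect = 1 + (m - 1) * icc
            return ceil(n_individual * design_effect / m)

        print(clusters_needed(n_individual=128, m=20, icc=0.05))  # 13 clusters per arm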

  11. Customer-oriented finite perturbation analysis for queueing networks

    NARCIS (Netherlands)

    Heidergott, B.F.

    2000-01-01

    We consider queueing networks for which the performance measure J(θ) depends on a parameter θ, which can be a service time parameter or a buffer size, and we are interested in sensitivity analysis of J(θ) with respect to θ. We introduce a new method, called customer-oriented finite perturbation analysis

  12. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
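
    Reading the abstract literally, the recipe can be sketched as follows: the two pseudo-groups differ on the logit scale by delta = slope * 2 * SD(x), centred so that the overall event probability is preserved, after which a standard two-proportion formula applies. The sketch below is one plausible implementation of that reading, with illustrative inputs, not the authors' code.

        from math import ceil, exp
        from scipy.stats import norm

        def expit(z):
            return 1.0 / (1.0 + exp(-z))

        def equivalent_two_sample_n(beta, sd_x, p_overall, alpha=0.05, power=0.8):
            """Equivalent two-sample size for logistic regression: group logits
            differ by delta = beta * 2 * sd_x, centred (by bisection) so that
            the overall event probability matches p_overall."""
            delta = beta * 2.0 * sd_x
            lo, hi = -20.0, 20.0
            for _ in range(100):                     # bisection on the centre c
                c = 0.5 * (lo + hi)
                mean_p = 0.5 * (expit(c - delta / 2) + expit(c + delta / 2))
                lo, hi = (c, hi) if mean_p < p_overall else (lo, c)
            p1, p2 = expit(c - delta / 2), expit(c + delta / 2)
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            n = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
            return ceil(n)                           # per group

        print(equivalent_two_sample_n(beta=0.4, sd_x=1.0, p_overall=0.3))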

  13. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  15. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. in the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made based on the use of left-overs of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)

  16. Meson spectral functions at finite temperature

    International Nuclear Information System (INIS)

    Wetzorke, I.; Karsch, F.; Laermann, E.; Petreczky, P.; Stickan, S.

    2001-10-01

    The Maximum Entropy Method provides a Bayesian approach to reconstruct the spectral functions from discrete points in Euclidean time. The applicability of the approach at finite temperature is probed with the thermal meson correlation function. Furthermore, the influence of fuzzing/smearing techniques on the spectral shape is investigated. We present first results for meson spectral functions at several temperatures below and above T_c. The correlation functions were obtained from quenched calculations with Clover fermions on large isotropic lattices of size (24-64)³ × 16. We compare the resulting pole masses with the ones obtained from standard 2-exponential fits of spatial and temporal correlation functions at finite temperature and in the vacuum. The deviation of the meson spectral functions from free spectral functions is examined above the critical temperature. (orig.)

  20. Technical note: Alternatives to reduce adipose tissue sampling bias.

    Science.gov (United States)

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined by a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques when the number of samples increased from 1 to 15, with the acceptance ratio of both techniques increasing from approximately 3 to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.