Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
Exact Group Sequential Methods for Estimating a Binomial Proportion
Directory of Open Access Journals (Sweden)
Zhengjia Chen
2013-01-01
We first review existing sequential methods for estimating a binomial proportion. Afterward, we propose a new family of group sequential sampling schemes for estimating a binomial proportion with prescribed margin of error and confidence level. In particular, we establish the uniform controllability of coverage probability and the asymptotic optimality for such a family of sampling schemes. Our theoretical results establish the possibility that the parameters of this family of sampling schemes can be determined so that the prescribed level of confidence is guaranteed with little waste of samples. Analytic bounds for the cumulative distribution functions and expectations of sample numbers are derived. Moreover, we discuss the inherent connection of various sampling schemes. Numerical issues are addressed for improving the accuracy and efficiency of computation. Computational experiments are conducted for comparing sampling schemes. Illustrative examples are given for applications in clinical trials.
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
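The Wald SPRT boundaries referred to above have a simple closed form. The sketch below is a minimal illustration using the parameter values reported in the abstract (lower boundary p0 = 0.05, upper boundary p1 = 0.15, alpha = beta = 0.1); the per-plant infestation record is made up for the example:

```python
import math

def sprt_decision(n, d, p0=0.05, p1=0.15, alpha=0.1, beta=0.1):
    """Wald SPRT for a binomial proportion after n presence/absence
    samples with d 'infested' outcomes.  Returns 'treat', 'no_treat',
    or 'continue'.  p0/p1 are the lower/upper decision boundaries."""
    # Log-likelihood ratio of H1 (p = p1) against H0 (p = p0)
    llr = d * math.log(p1 / p0) + (n - d) * math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1 (treat)
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0 (no treat)
    if llr >= upper:
        return "treat"
    if llr <= lower:
        return "no_treat"
    return "continue"

# Walking through a field: stop as soon as a decision boundary is crossed.
counts = [1, 0, 0, 1, 1, 1, 1]  # hypothetical per-plant infestation record
decision, n, d = "continue", 0, 0
for x in counts:
    n += 1
    d += x
    if (decision := sprt_decision(n, d)) != "continue":
        break
print(decision, n)  # → treat 5
```

With all-clean samples the same function reaches "no_treat" after 20 plants, matching the intuition that sequential plans stop early when the evidence is one-sided.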
Forward selection two sample binomial test
Wong, Kam-Fai; Wong, Weng-Kee; Lin, Miao-Shan
2016-01-01
Fisher’s exact test (FET) is a conditional method frequently used to analyze data in a 2 × 2 table for small samples. The test is conservative, and attempts have been made to modify it to make it less so. For example, Crans and Shuster (2008) proposed adding more points to the rejection region to make the test more powerful. We provide another way to make the test less conservative by using two independent binomial distributions as the reference distribution for the test statistic. We compare our new test with several methods and show that it has advantages over existing methods in terms of control of the Type I and Type II errors. We reanalyze results from an oncology trial using our proposed method and our software, which is freely available to the reader. PMID:27335577
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
Section 303(d) of the United States Clean Water Act stipulates that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs are then developed. Decision-making procedures for listing, or delisting, water bodies as impaired differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as the SPRT to current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
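The fixed-sample side of this comparison, an exact binomial tail test, can be computed directly from the binomial pmf. A minimal sketch follows; the null exceedance rate of 10% and the sample counts are illustrative values, not the regulatory thresholds:

```python
from math import comb

def exact_binomial_pvalue(k, n, p0):
    """One-sided exact binomial test: P(X >= k) when X ~ Binomial(n, p0).
    A small p-value suggests the true exceedance rate is above p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# 3 exceedances in 10 samples against a null exceedance rate of 10%
p = exact_binomial_pvalue(3, 10, 0.10)
print(round(p, 4))  # → 0.0702, not significant at the 5% level
```

A sequential procedure such as the SPRT evaluates the same evidence sample by sample and can stop before all n samples are collected, which is the efficiency gain the abstract reports.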
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
Negative binomial model has been increasingly used to model the count data in recent clinical trials. It is frequently chosen over Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
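The paper's exact formulae are not reproduced in the abstract. The sketch below uses a common textbook-style approximation for the per-group size when comparing two negative binomial rates on the log scale, in which the dispersion parameter k enters the variance of the log rate ratio as an additive term; the function name, parameterization, and formula are assumptions for illustration, not the authors' derivation:

```python
import math
from statistics import NormalDist

def nb_sample_size(mu1, mu2, k, t=1.0, alpha=0.05, power=0.8):
    """Approximate per-group sample size for testing the rate ratio
    mu1/mu2 under a negative binomial model with common dispersion k
    and common follow-up time t (normal approximation, log scale)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    # Per-subject variance of the log rate ratio: Poisson part + overdispersion
    var = 1 / (t * mu1) + 1 / (t * mu2) + 2 * k
    return math.ceil((z_a + z_b) ** 2 * var / math.log(mu1 / mu2) ** 2)

# Halving an event rate of 1.0/year with dispersion 0.5 and 1 year follow-up
print(nb_sample_size(1.0, 0.5, k=0.5))  # → 66 per group under these assumptions
```

Note how the required size grows with k: setting k = 0 recovers the Poisson case, which is why fitting a Poisson model to overdispersed data underestimates the necessary sample size.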
An algorithm for sequential tail value at risk for path-independent payoffs in a binomial tree
Roorda, Berend
2010-01-01
We present an algorithm that determines Sequential Tail Value at Risk (STVaR) for path-independent payoffs in a binomial tree. STVaR is a dynamic version of Tail-Value-at-Risk (TVaR) characterized by the property that risk levels at any moment must be in the range of risk levels later on. The
Lara, Jesus R; Hoddle, Mark S
2015-08-01
Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making about O. perseae in California. An initial set of sequential binomial sampling models was developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with leaf-infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for the sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model, with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a sample of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Tudor DRUGAN
2003-08-01
The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables, and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the key medical parameters reported in the medical literature and expressing them in terms of contingency table cells, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different kinds of information from the computed confidence interval for a specified method (confidence interval boundaries, percentage of experimental errors, standard deviation of the experimental errors, and deviation relative to the significance level) was solved by implementing original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. Graphically representing an expression of two binomial variables in which the variation domain of one variable depends on the other was a real problem, because most software uses interpolation in graphical representation and produces quadratic rather than triangular surface maps. Based on an original algorithm, a PHP module was implemented to draw triangular surface plots. All of the implementations described above were used in computing confidence intervals and estimating their performance for binomial distributions across sample sizes and variables.
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r, p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory
2012-09-11
We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
Tollerup, Kris E; Marcum, Daniel; Wilson, Rob; Godfrey, Larry
2013-08-01
The two-spotted spider mite, Tetranychus urticae Koch, is an economic pest on peppermint [Mentha x piperita (L.), 'Black Mitcham'] grown in California. A sampling plan for T. urticae was developed under Pacific Northwest conditions in the early 1980s and has been used by California growers since approximately 1998. This sampling plan, however, is cumbersome and a poor predictor of T. urticae densities in California. Between June and August, the numbers of immature and adult T. urticae were counted on leaves at three commercial peppermint fields (sites) in 2010 and a single field in 2011. In each of seven locations per site, 45 leaves were sampled, that is, nine leaves on each of five stems. Leaf samples were stratified by collecting three leaves from the top, middle, and bottom strata per stem. The on-plant distribution of T. urticae did not significantly differ among the stem strata through the growing season. Binomial and enumerative sampling plans were developed using generic Taylor's power law coefficient values. The best fit of our data for binomial sampling occurred using a tally threshold of T = 0. The optimum number of leaves required for T. urticae at the critical density of five mites per leaf was 20 for the binomial and 23 for the enumerative sampling plan. Sampling models were validated using Resampling for Validation of Sampling Plan software.
Dating phylogenies with sequentially sampled tips.
Stadler, Tanja; Yang, Ziheng
2013-09-01
We develop a Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating divergence times using sequentially sampled molecular sequences. This type of data is commonly collected during viral epidemics and is sometimes available from different species in ancient DNA studies. We derive the distribution of ages of nodes in the tree under a birth-death-sequential-sampling (BDSS) model and use it as the prior for divergence times in the dating analysis. We implement the prior in the MCMCtree program in the PAML package for divergence dating. The BDSS prior is very flexible and, with different parameters, can generate trees of very different shapes, suitable for examining the sensitivity of posterior time estimates. We apply the method to a data set of SIV/HIV-2 genes in comparison with a likelihood-based dating method, and to a data set of influenza H1 genes from different hosts in comparison with the Bayesian program BEAST. We examined the impact of tree topology on time estimates and suggest that multifurcating consensus trees should be avoided in dating analysis. We found posterior time estimates for old nodes to be sensitive to the priors on times and rates and suggest that previous Bayesian dating studies may have produced overconfident estimates.
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose up to 20% power, depending on the value of the dispersion parameter.
Directory of Open Access Journals (Sweden)
Andrei ACHIMAŞ CADARIU
2004-08-01
Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence interval, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute-risk-reduction-like functions. Computer implementations of the methods use the PHP language. Method comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute-risk-reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
Chain binomial models and binomial autoregressive processes.
Weiss, Christian H; Pollett, Philip K
2012-09-01
We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation. © 2011, The International Biometric Society.
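The binomial AR(1) processes referred to above are usually built from binomial thinning. The sketch below simulates one common parameterization of the extinction-colonization type: N sites, each occupied site surviving with probability a and each empty site being colonized with probability b. The parameter names are chosen here for illustration:

```python
import random

def simulate_binomial_ar1(N, a, b, steps, seed=0):
    """Simulate X_t = a ∘ X_{t-1} + b ∘ (N - X_{t-1}), where '∘' is
    binomial thinning: each of the X_{t-1} occupied sites stays occupied
    with probability a, and each of the N - X_{t-1} empty sites becomes
    occupied with probability b.  X_t always stays in {0, ..., N}."""
    rng = random.Random(seed)
    thin = lambda count, p: sum(rng.random() < p for _ in range(count))
    x = N // 2
    path = [x]
    for _ in range(steps):
        x = thin(x, a) + thin(N - x, b)
        path.append(x)
    return path

path = simulate_binomial_ar1(N=20, a=0.7, b=0.3, steps=20000)
print(round(sum(path) / len(path), 2))  # stationary mean is b*N/(1 - a + b) = 10 here
```

The chain's lag-1 autocorrelation under this construction is a - b, which is one way the connection to standard AR(1) models mentioned in the abstract shows up.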
Sequential determination of important ecotoxic radionuclides in nuclear waste samples
International Nuclear Information System (INIS)
Bilohuscin, J.
2016-01-01
In the dissertation thesis we focused on the development and optimization of a method for the sequential determination of the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA® Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to testing the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for separating 90Sr, 239Pu and 241Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig® series and the extraction chromatography sorbent DGA® Resin. In these experiments, four different sorbents were used in series for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126Sn separation using the TEVA® Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while the activities of 126Sn were below the minimum detectable activities (MDA). The activity of 126Sn was determined after ingrowth of the daughter nuclide 126mSb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (Df) higher than 1400 for 60Co and 47000 for 137Cs. Based on the experience and results of these separation procedures, a complex method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the TEVA® Resin and Anion Exchange Resin sorbents to real radioactive waste samples provided satisfactory results and an economical, time-sparing, efficient method. (author)
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
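The 'basic' two-stage idea, sample half, stop if the estimate is clearly on one side of the threshold, otherwise sample the rest, is easy to simulate. The sketch below is an illustration with made-up parameters (stage sizes, threshold, and margin), not the Welfare Quality scheme itself:

```python
import random

def two_stage_classify(prevalence, n1=25, n2=25, threshold=0.20,
                       margin=0.10, rng=None):
    """Classify a herd as 'pass' (prevalence below threshold) or 'fail'.
    Stage 1 samples n1 cows; if the estimate is at least `margin` away
    from the threshold we stop, otherwise we sample n2 more and decide
    on the pooled estimate.  Returns (decision, cows_sampled)."""
    rng = rng or random.Random()
    lame1 = sum(rng.random() < prevalence for _ in range(n1))
    p_hat = lame1 / n1
    if abs(p_hat - threshold) >= margin:
        return ("fail" if p_hat >= threshold else "pass", n1)
    lame2 = sum(rng.random() < prevalence for _ in range(n2))
    p_hat = (lame1 + lame2) / (n1 + n2)
    return ("fail" if p_hat >= threshold else "pass", n1 + n2)

rng = random.Random(42)
runs = [two_stage_classify(0.05, rng=rng) for _ in range(2000)]
accuracy = sum(d == "pass" for d, _ in runs) / len(runs)
avg_n = sum(n for _, n in runs) / len(runs)
print(round(accuracy, 3), round(avg_n, 1))  # clearly healthy herds mostly stop at stage 1
```

For herds with prevalence far from the threshold, most simulations stop after the first stage, which is exactly the average sample size saving the abstract reports relative to the fixed-size scheme.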
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
POLYP: an automatic device for drawing sequential samples of gas
International Nuclear Information System (INIS)
Gaglione, P.; Koechler, C.; Stanchi, L.
1974-12-01
POLYP is an automatic device consisting of electronic equipment that sequentially drives 8 small pumps for drawing gas samples. The electronic circuit is driven by a quartz oscillator and allows preselection of a waiting time, so that a set of similar instruments placed at suitable positions in open country will start simultaneously. At that time the first pump of each instrument will inflate a plastic bag for a preset time. Thereafter the other seven pumps will sequentially inflate the other bags. The instrument is powered by rechargeable batteries and realized with CMOS integrated circuits for nearly negligible consumption. As it is intended for field operation, it is waterproof
Tang, Yongqiang
2015-01-01
A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
Tang, Yongqiang
2017-05-25
We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions.
Forstmann, B U; Ratcliff, R; Wagenmakers, E-J
2016-01-01
Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model, the diffusion decision model, is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
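The accumulate-to-threshold process behind these models can be sketched with a simple Euler simulation of a diffusion between two boundaries. The parameter values below are illustrative, not fitted to any data set:

```python
import math
import random

def diffusion_trial(drift=1.0, bound=1.0, start=0.5, noise=1.0,
                    dt=0.001, rng=None, max_time=10.0):
    """Simulate one diffusion-decision trial: evidence x drifts from
    `start` until it hits `bound` (upper response) or 0 (lower response).
    Returns (response, decision_time)."""
    rng = rng or random.Random()
    x, t = start, 0.0
    while 0.0 < x < bound and t < max_time:
        # Euler step: deterministic drift plus Gaussian accumulation noise
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return ("upper" if x >= bound else "lower", t)

rng = random.Random(1)
trials = [diffusion_trial(rng=rng) for _ in range(1000)]
p_upper = sum(r == "upper" for r, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(round(p_upper, 3), round(mean_rt, 3))
```

With these unbiased-start settings the upper-boundary probability is (1 - e^{-2vz/s²})/(1 - e^{-2va/s²}) ≈ 0.73, so positive drift produces mostly, but not exclusively, upper responses, with the lower responses playing the role of errors.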
Directory of Open Access Journals (Sweden)
Adele eDiederich
2014-09-01
A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time, including deterministic, Poisson, binomial, geometric, and uniform with different variances, are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability $p_0 > 0$ of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions also hold for the widely applied Wiener process.
Sequential sampling of visual objects during sustained attention.
Directory of Open Access Journals (Sweden)
Jianrong Jia
2017-06-01
Full Text Available In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether this dynamic mechanism mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence on whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanisms in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou
2010-01-01
Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... priors to be used. We demonstrate how sequential simulation can be seen as an application of the Gibbs sampler, and how such a Gibbs sampler assisted by sequential simulation can be used to perform a random walk generating realizations of a relatively complex random function. We propose to combine...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
Luo, Maoyi; Xing, Shan; Yang, Yonggang; Song, Lijuan; Ma, Yan; Wang, Yadong; Dai, Xiongxin; Happel, Steffen
2018-07-01
There is a growing demand for the determination of actinides in soil and sediment samples for environmental monitoring and tracing, radiological protection, and nuclear forensic reasons. A total sample dissolution method based on lithium metaborate fusion, followed by sequential column chromatography separation, was developed for simultaneous determination of Pu, Am and Cm isotopes in large-size environmental samples by alpha spectrometry and mass spectrometric techniques. The overall recoveries of both Pu and Am for the entire procedure were higher than 70% for large-size soil samples. The method was validated using 20 g of soil samples spiked with known amounts of 239Pu and 241Am as well as the certified reference materials IAEA-384 (Fangataufa Lagoon sediment) and IAEA-385 (Irish Sea sediment). All the measured results agreed very well with the expected values. Copyright © 2018 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Qiao, Jixin; Hou, Xiaolin
2010-01-01
Fractionation of plutonium isotopes (238Pu, 239,240Pu) in environmental samples (i.e. soil and sediment) and bio-shielding concrete from decommissioning of nuclear reactor were carried out by dynamic sequential extraction using an on-line sequential injection (SI) system combined with a specially...... to the treatment and disposal of nuclear waste from decommissioning....
DEFF Research Database (Denmark)
Elmasry, Amr; Jensen, Claus; Katajainen, Jyrki
2017-01-01
the (total) number of elements stored in the data structure(s) prior to the operation. As the resulting data structure consists of two components that are different variants of binomial heaps, we call it a bipartite binomial heap. Compared to its counterpart, a multipartite binomial heap, the new structure...
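The abstract above is truncated, but the underlying data structure can be illustrated. Below is a minimal sketch of an ordinary binomial heap (not the bipartite variant the paper proposes): a collection of heap-ordered binomial trees, one per set bit in the binary representation of the element count, with insertion working like binary addition with carries.

```python
class BinomialTree:
    """A heap-ordered binomial tree; a tree of rank r has 2**r nodes."""
    def __init__(self, key):
        self.key = key
        self.rank = 0
        self.children = []

def link(a, b):
    """Link two trees of equal rank; the smaller key stays the root."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    a.rank += 1
    return a

def insert(heap, key):
    """Insert into a heap kept as {rank: tree}, carrying like binary addition."""
    carry = BinomialTree(key)
    while carry.rank in heap:
        carry = link(carry, heap.pop(carry.rank))
    heap[carry.rank] = carry

def find_min(heap):
    """The minimum is always at the root of one of the trees."""
    return min(t.key for t in heap.values())

heap = {}
for k in [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]:
    insert(heap, k)
```

After 10 insertions the heap holds trees of ranks 1 and 3, matching the binary representation of 10 (0b1010), which is the structural invariant the paper's bipartite variant refines.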
Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia
2017-08-31
As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its costs; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions, and in this way the input control samples are also properly taken care of. In addition, different from the DRME approach, which relies on the input control sample only for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which could largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
CUMBIN - CUMULATIVE BINOMIAL PROGRAMS
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
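The core quantity CUMBIN computes, the reliability of a k-out-of-n system of independent components with common reliability p, is a cumulative binomial tail and can be sketched directly (an independent re-implementation of the formula, not CUMBIN's actual C code):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n independent components,
    each working with probability p, are operating: the upper tail
    of a Binomial(n, p) distribution."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
```

For example, a 2-out-of-3 system with component reliability 0.9 has system reliability 3(0.9)^2(0.1) + (0.9)^3 = 0.972. The same tail sum equals the incomplete beta function I_p(k, n-k+1), which is the identity CUMBIN exploits.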
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation. Copyright © 2014 John Wiley & Sons, Ltd.
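The information-based idea can be illustrated with the standard two-arm comparison of means: the required information depends only on the effect size, error rate, and power, and the sample size follows from the variance, so re-estimating the variance at an interim look updates the sample size while holding the information target (and hence the power) fixed. The numbers below are illustrative, not taken from the paper:

```python
import math
from statistics import NormalDist

def required_information(delta, alpha=0.05, power=0.9):
    """Maximum statistical information needed to detect effect delta
    with a two-sided level-alpha test at the given power."""
    z = NormalDist().inv_cdf
    return ((z(1 - alpha / 2) + z(power)) / delta) ** 2

def per_arm_sample_size(delta, sigma, alpha=0.05, power=0.9):
    """Two-arm mean comparison: information = 1/se^2 with
    se^2 = 2*sigma^2/n, so n = 2*sigma^2*I_max per arm.  Re-running
    this with an interim variance estimate re-estimates the sample
    size without changing the information target."""
    return math.ceil(2 * sigma**2 * required_information(delta, alpha, power))
```

With delta = 0.5, sigma = 1, alpha = 0.05 and 90% power this gives 85 per arm; if an interim analysis reveals sigma is really 1.2, the same information target implies a larger n, which is exactly the correction for inappropriate variance assumptions described above.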
Extending the CLAST sequential rule to one-way ANOVA under group sampling.
Ximénez, Carmen; Revuelta, Javier
2007-02-01
Several studies have demonstrated that the fixed-sample stopping rule (FSR), in which the sample size is determined in advance, is less practical and efficient than are sequential-stopping rules. The composite limited adaptive sequential test (CLAST) is one such sequential-stopping rule. Previous research has shown that CLAST is more efficient in terms of sample size and power than are the FSR and other sequential rules and that it reflects more realistically the practice of experimental psychology researchers. The CLAST rule has been applied only to the t test of mean differences with two matched samples and to the chi-square independence test for twofold contingency tables. The present work extends previous research on the efficiency of CLAST to multiple group statistical tests. Simulation studies were conducted to test the efficiency of the CLAST rule for the one-way ANOVA for fixed effects models. The ANOVA general test and two linear contrasts of multiple comparisons among treatment means are considered. The article also introduces four rules for allocating N observations to J groups under the general null hypothesis and three allocation rules for the linear contrasts. Results show that the CLAST rule is generally more efficient than the FSR in terms of sample size and power for one-way ANOVA tests. However, the allocation rules vary in their optimality and have a differential impact on sample size and power. Thus, selecting an allocation rule depends on the cost of sampling and the intended precision.
DEFF Research Database (Denmark)
Huber, Martin; Lechner, Michael; Mellace, Giovanni
2016-01-01
Using a comprehensive simulation study based on empirical data, this paper investigates the finite sample properties of different classes of parametric and semi-parametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence...
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Uranium and thorium sequential separation from norm samples by using a SIA system.
Mola, M; Nieto, A; Peñalver, A; Borrull, F; Aguilar, C
2014-01-01
This study presents a sequential radiochemical separation method for uranium and thorium isotopes using a novel Sequential Injection Analysis (SIA) system with an extraction chromatographic resin (UTEVA). After the separation, uranium and thorium isotopes were quantified by alpha-particle spectrometry. The developed method was tested by analyzing an intercomparison sample (phosphogypsum) from the International Atomic Energy Agency (IAEA), with better recoveries for uranium and thorium than those obtained with a classical method (93% for uranium with the new methodology versus 82% with the classical method; for thorium, recoveries were 70% for the semi-automated method and 60% for the classical strategy). Afterwards, the method was successfully applied to different Naturally Occurring Radioactive Material (NORM) samples, in particular sludge samples taken from a drinking water treatment plant (DWTP) and sediment samples taken from the area of influence of the dicalcium phosphate (DCP) factory located close to the Ebro river reservoir in Flix (Catalonia). The results were also compared with those obtained by the classical method, and the comparison demonstrated that the presented strategy is a good alternative to existing methods, offering advantages such as minimization of sample handling, reduction of solvent volumes, and an important reduction of the time per analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Phosphorus Concentrations in Sequentially Fractionated Soil Samples as Affected by Digestion Methods
do Nascimento, Carlos A. C.; Pagliari, Paulo H.; Schmitt, Djalma; He, Zhongqi; Waldrip, Heidi
2015-01-01
Sequential fractionation has helped improve our understanding of the lability and bioavailability of P in soil. Nevertheless, there have been no reports on how manipulation of the different fractions prior to analysis affects the total P (TP) concentrations measured. This study investigated the effects of sample digestion, filtration, and acidification on the TP concentrations determined by ICP-OES in 20 soil samples. Total P in extracts was determined by ICP-OES either without digestion, following block digestion, or following autoclave digestion. The effects of sample filtration and acidification of undigested alkaline extracts prior to ICP-OES were also evaluated. Results showed that TP concentrations were greatest in the block-digested extracts, though the variability introduced by the block digestion was the highest. Acidification of NaHCO3 extracts resulted in lower TP concentrations, while acidification of NaOH extracts randomly increased or decreased TP concentrations. The precision observed with ICP-OES of undigested extracts suggests this should be the preferred method for TP determination in sequentially extracted samples. Thus, the observations reported in this work should be helpful for appropriate sample handling for P determination, thereby improving the precision of P determination. The results are also useful for literature data comparison and discussion when there are differences in sample treatments. PMID:26647644
2014-09-24
Stereo under Sequential Optimal Sampling: A Statistical Analysis Framework for Search Space Reduction
Yilin Wang, Ke Wang, Enrique Dunn, Jan-Michael...
International Nuclear Information System (INIS)
Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng
2016-01-01
The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions that are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining a Kriging model with Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelization. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that their accuracy has negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the previous iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified through several challenging examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
DEFF Research Database (Denmark)
Chomchoei, R.; Hansen, Elo Harald; Shiowatana, J.
2007-01-01
This communication presents a novel approach to perform sequential extraction of elements in solid samples by using a sequential injection (SI) system incorporating a specially designed extraction microcolumn. Based on the operation of the syringe pump, different modes of extraction are potentially...... that the system entails many advantages such as being fully automated, and besides being characterised by rapidity, ease of operation and robustness, it is less prone to risks of contamination and personal errors as encountered in traditional batch systems. Moreover, improvement of the precision and accuracy...... of the chemical fractionation of metal in solids as compared with previous reports are obtained. The system ensures that extraction is performed at designated pH values. Variation of sample weight to column volume ratios do not affect the amounts of extractable metals, nor do extraction flow rates ranging from 50...
Negative binomial multiplicity distribution from binomial cluster production
International Nuclear Information System (INIS)
Iso, C.; Mori, K.
1990-01-01
Two-step interpretation of negative binomial multiplicity distribution as a compound of binomial cluster production and negative binomial like cluster decay distribution is proposed. In this model we can expect the average multiplicity for the cluster production increases with increasing energy, different from a compound Poisson-Logarithmic distribution. (orig.)
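For contrast with the model proposed above, the classical compound construction it departs from is easy to check by simulation: Poisson-distributed cluster production, with each cluster decaying into a logarithmically distributed number of particles, yields a negative binomial total with r = -lambda/ln(1-p). A sketch with arbitrary illustrative parameters:

```python
import math
import random

def sample_poisson(lam, rng):
    """Poisson sampler (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def sample_logarithmic(p, rng):
    """Log-series sampler via inverse CDF: pmf(k) = -p**k / (k*ln(1-p))."""
    norm = -math.log(1 - p)
    u, k, cdf = rng.random(), 1, 0.0
    while True:
        cdf += p**k / (k * norm)
        if u <= cdf or k > 1000:  # guard against floating-point stalls
            return k
        k += 1

rng = random.Random(0)
lam, p = 2.0, 0.5
# Poisson(lam) clusters, each decaying into a logarithmic number of
# particles: the total is negative binomial with r = -lam/ln(1-p).
totals = [sum(sample_logarithmic(p, rng) for _ in range(sample_poisson(lam, rng)))
          for _ in range(50_000)]
sim_mean = sum(totals) / len(totals)
r = -lam / math.log(1 - p)
theo_mean = r * p / (1 - p)
```

The simulated mean matches the negative binomial prediction; the model proposed in the abstract replaces the Poisson cluster stage with a binomial one precisely so that the average cluster multiplicity can grow with energy.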
Binomial collisions and near collisions
Blokhuis, Aart; Brouwer, Andries; de Weger, Benne
2017-01-01
We describe efficient algorithms to search for cases in which binomial coefficients are equal or almost equal, give a conjecturally complete list of all cases where two binomial coefficients differ by 1, and give some identities for binomial coefficients that seem to be new.
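A brute-force version of such a search is straightforward for small n. The sketch below looks for nontrivial coincidences C(n, k) = C(m, j), excluding the trivial families C(n, 1) = n and C(n, k) = C(n, n - k); the classic small example it finds is C(16, 2) = C(10, 3) = 120. This illustrates the search idea only, not the authors' efficient algorithms:

```python
from collections import defaultdict
from math import comb

def binomial_collisions(max_n):
    """Map each value hit by more than one 'nontrivial' binomial
    coefficient C(n, k), 2 <= k <= n // 2, to the (n, k) pairs that
    produce it.  Restricting k to the left half of the row removes
    the trivial symmetry C(n, k) = C(n, n - k)."""
    seen = defaultdict(list)
    for n in range(4, max_n + 1):
        for k in range(2, n // 2 + 1):
            seen[comb(n, k)].append((n, k))
    return {value: pairs for value, pairs in seen.items() if len(pairs) > 1}

collisions = binomial_collisions(30)
```

Even up to n = 30 this already recovers 120 = C(10, 3) = C(16, 2) and 210 = C(10, 4) = C(21, 2); near-collisions (values differing by 1) can be found the same way by also comparing adjacent sorted values.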
Sahlstedt, Elina; Arppe, Laura
2017-04-01
Stable isotope compositions of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carry important climatological and ecological information and are therefore widely used in paleontological and archaeological research. For the analysis of the stable isotope compositions, both phases, hydroxyapatite and collagen, have their more or less well-established separation and analytical techniques. Recent developments in IRMS and wet-chemical extraction methods have facilitated the analysis of very small bone fractions (500 μg or less starting material) for PO4³⁻ oxygen isotope composition. However, the uniqueness and (pre-)historical value of each archaeological and paleontological find leave precious little material available for stable isotope analyses, encouraging further development of microanalytical methods. Here we present the first results in developing extraction methods for combining collagen C- and N-isotope analyses with PO4³⁻ O-isotope analyses from a single bone sample fraction. We tested sequential extraction starting with dilute acid demineralization and collection of both the collagen and PO4³⁻ fractions, followed by a further purification step with H2O2 (PO4³⁻ fraction). First results show that bone sample separates as small as 2 mg may be analysed for their δ15N, δ13C and δ18OPO4 values. The method may be incorporated into detailed investigation of sequentially developing skeletal material such as teeth, potentially allowing for the investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.
Chattopadhyay, Bhargab; Kelley, Ken
2016-01-01
The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The procedure we develop is the first sequential sampling procedure for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
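A stripped-down version of such a sequential procedure might look as follows. The stopping rule here is a simple tolerance on the approximate large-sample standard error of the CV estimate, standing in for the paper's risk-function rule that also weighs sampling cost; the distribution, pilot size, and tolerance are all arbitrary illustrative choices:

```python
import math
import random

def estimate_cv(data):
    """Sample coefficient of variation: s / x-bar."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return sd / mean

def sequential_cv(draw, pilot_n=30, tol=0.02, max_n=10_000):
    """Collect a pilot sample, then add one observation at a time
    until the approximate standard error of the CV estimate falls
    below tol (illustrative rule only)."""
    data = [draw() for _ in range(pilot_n)]
    while True:
        cv = estimate_cv(data)
        # large-sample se of the CV for roughly normal data
        se = cv * math.sqrt((0.5 + cv ** 2) / len(data))
        if se < tol or len(data) >= max_n:
            return cv, len(data)
        data.append(draw())

rng = random.Random(7)
cv_hat, n_final = sequential_cv(lambda: rng.gauss(10.0, 2.0))
```

For data with a true CV of 0.2, the procedure keeps sampling past the pilot stage until the precision target is met, mirroring the pilot-then-check-then-extend cycle described in the abstract.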
Directory of Open Access Journals (Sweden)
A. Arbab
2016-04-01
Full Text Available The distribution of adult and larval Bactrocera oleae (Diptera: Tephritidae), a key pest of olive, was studied in olive orchards. The first objective was to analyze the dispersion of this insect on olive, and the second was to develop sampling plans based on fixed levels of precision for estimating B. oleae populations. Taylor's power law and Iwao's patchiness regression models were used to analyze the data. Our results document that Iwao's patchiness provided a better description of the relationship between variance and mean density. Taylor's b and Iwao's β were both significantly greater than 1, indicating that adults and larvae had an aggregated spatial distribution. This result was further supported by the calculated common k of 2.17 and 4.76 for adults and larvae, respectively. Iwao's α for larvae was significantly less than 0, indicating that the basic distribution component of B. oleae is the individual insect. Optimal sample sizes for fixed precision levels of 0.10 and 0.25 were estimated with Iwao's patchiness coefficients. The optimum sample size for adults and larvae fluctuated throughout the seasons and depended upon the fly density and the desired level of precision. For adults, this generally ranged from 2 to 11 and 7 to 15 traps to achieve precision levels of 0.25 and 0.10, respectively. With respect to optimum sample size, the developed fixed-precision sequential sampling plans were suitable for estimating fly density at a precision level of D=0.25. The sampling plans presented here should be a tool for research on pest management decisions for B. oleae.
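Fixed-precision sample sizes of the kind reported above follow from Iwao's patchiness regression m* = alpha + beta*m via the standard formula n = ((alpha + 1)/m + beta - 1)/D^2, where m is the mean density and D the precision (SE/mean). The coefficients below are hypothetical, chosen only to illustrate how the required number of samples grows as D tightens:

```python
def iwao_sample_size(mean_density, alpha, beta, precision):
    """Optimal sample size for fixed precision D (SE/mean) from
    Iwao's patchiness regression m* = alpha + beta * m."""
    return ((alpha + 1) / mean_density + beta - 1) / precision ** 2

# hypothetical regression coefficients, for illustration only
alpha, beta = 0.4, 1.8
n_10 = iwao_sample_size(2.0, alpha, beta, 0.10)  # research precision
n_25 = iwao_sample_size(2.0, alpha, beta, 0.25)  # IPM precision
```

Tightening the precision from 0.25 to 0.10 multiplies the required sample size by (0.25/0.10)^2 = 6.25, which is why the abstract's trap counts jump from the 2-11 range to the 7-15 range.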
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Adaptive estimation of binomial probabilities under misclassification
Albers, Willem/Wim; Veldman, H.J.
1984-01-01
If misclassification occurs the standard binomial estimator is usually seriously biased. It is known that an improvement can be achieved by using more than one observer in classifying the sample elements. Here it will be investigated which number of observers is optimal given the total number of
Adaptive bayesian analysis for binomial proportions
CSIR Research Space (South Africa)
Das, Sonali
2008-10-01
Full Text Available The authors consider the problem of statistical inference of binomial proportions for non-matched, correlated samples, under the Bayesian framework. Such inference can arise when the same group is observed at a different number of times with the aim...
CROSSER - CUMULATIVE BINOMIAL PROGRAMS
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
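The fixed point CROSSER finds, where the system reliability R(p) of a k-out-of-n system equals the common component reliability p, can be sketched with bisection instead of CROSSER's Newton iteration (bisection is slower but needs no derivative; for 1 < k < n the system curve crosses the diagonal at a single interior point):

```python
from math import comb

def system_reliability(k, n, p):
    """Reliability of a k-out-of-n system of independent components."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def crossing_point(k, n, tol=1e-9):
    """Solve R(p) = p by bisection.  For small p the tail sum is
    O(p**k) < p, while near p = 1 the system beats a single
    component, so the bracket below always holds for 1 < k < n."""
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if system_reliability(k, n, mid) < mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For a 2-out-of-3 system, R(p) = 3p^2 - 2p^3 = p factors as -p(2p - 1)(p - 1), so the interior crossing is exactly p = 1/2; for 3-out-of-4 it is (1 + sqrt(13))/6.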
Michel, H; Levent, D; Barci, V; Barci-Funel, G; Hurel, C
2008-02-15
A new sequential method for the determination of both natural (U, Th) and anthropogenic (Sr, Cs, Pu, Am) radionuclides has been developed for application to soil and sediment samples. The procedure was optimised using a reference sediment (IAEA-368) and reference soils (IAEA-375 and IAEA-326). Reference materials were first digested using acid leaching, 'total' acid digestion on a hot plate, and acid digestion in a microwave, in order to compare the different digestion techniques. Then, separation and purification were carried out by anion exchange resin and selective extraction chromatography: transuranic (TRU) and strontium (SR) resins. Natural and anthropogenic alpha-emitting radionuclides were separated by uranium and tetravalent actinide (UTEVA) resin, considering different acid elution media. Finally, alpha and gamma semiconductor spectrometers and a liquid scintillation spectrometer were used to measure radionuclide activities. The results obtained by the proposed method for strontium-90, cesium-137, thorium-232, uranium-238, plutonium-239+240 and americium-241 isotopes in the reference materials showed excellent agreement with the recommended values and good chemical recoveries. Plutonium isotopes in alpha spectrometry planchet deposits could also be analysed by ICP-MS.
Host mediators in endodontic exudates. II. Changes in concentration with sequential sampling.
Kuo, M L; Lamster, I B; Hasselgren, G
1998-10-01
Exudate is often found in the root canal of teeth requiring endodontic therapy. The aim of this study was to evaluate the relationship of sequential changes of different host mediators in endodontic exudates to clinical and radiographic findings. Thirty-two nonvital teeth with periapical symptoms were evaluated. Exudates were collected with filter paper strips every 3 min after opening of the pulp chamber. The concentrations of beta-glucuronidase, IgG, IgA, IgM, and interleukin-1 beta in the exudates were analyzed. In general, the concentration of the mediators in exudates from less involved lesions did not change over time. The exception was an increase in the IgM concentration when patients presented with percussion or palpation sensitivity. In contrast, in the more involved lesions, the concentrations of IgA and IgM increased as sampling progressed. The concentrations of beta-glucuronidase and interleukin-1 beta decreased over time in the more involved lesions. These data suggest that the amount of proinflammatory mediators in the canal and periapical lesion is limited. Furthermore, IgM seemed to be a marker for the severity of periapical lesions. This may relate to vascular permeability that allows passage of this larger molecule into the extravascular environment.
DEFF Research Database (Denmark)
Wang, Jianhua; Hansen, Elo Harald
2005-01-01
Flow injection (FI) analysis, the first generation of this technique, was supplemented in the 1990s by its second generation, sequential injection (SI), and most recently by the third generation (i.e., Lab-on-Valve). The dominant role played by FI in automatic, on-line sample pretreatments in ...
Distinguishing between Binomial, Hypergeometric and Negative Binomial Distributions
Wroughton, Jacqueline; Cole, Tarah
2013-01-01
Recognizing the differences between three discrete distributions (Binomial, Hypergeometric and Negative Binomial) can be challenging for students. We present an activity designed to help students differentiate among these distributions. In addition, we present assessment results in the form of pre- and post-tests that were designed to assess the…
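The distinction the activity targets comes down to three pmfs: sampling with replacement (binomial), sampling without replacement (hypergeometric), and counting failures until the r-th success (negative binomial). A side-by-side sketch, assuming the "failures before the r-th success" parameterization of the negative binomial:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Successes in n independent trials (draws WITH replacement)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def hypergeometric_pmf(k, N, K, n):
    """Successes in n draws WITHOUT replacement from a population
    of N items containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def neg_binomial_pmf(k, r, p):
    """Failures observed before the r-th success, success prob p:
    the number of trials is random, not fixed in advance."""
    return comb(k + r - 1, k) * p**r * (1 - p) ** k
```

The decision rule students need is visible in the signatures: fixed trials and constant success probability give the binomial; fixed draws from a finite population give the hypergeometric; a fixed number of successes with a random number of trials gives the negative binomial.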
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance, by contrast, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited to post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
NEWTONP - CUMULATIVE BINOMIAL PROGRAMS
Bowerman, P. N.
1994-01-01
The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
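The core computation described above (solving for the component probability p that yields a given k-out-of-n system reliability V via Newton's method) can be sketched as follows. This is an illustrative reimplementation in Python, not the NASA program's actual source; it uses the closed-form derivative of the cumulative binomial tail.

```python
from math import comb

def system_reliability(p, k, n):
    # A k-out-of-n system works if at least k of its n components work.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def solve_component_p(V, k, n, tol=1e-12):
    # Newton's method for the component reliability p giving system
    # reliability V; d/dp of the binomial tail is k*C(n,k)*p^(k-1)*(1-p)^(n-k).
    p = 0.5
    for _ in range(100):
        f = system_reliability(p, k, n) - V
        df = k * comb(n, k) * p**(k - 1) * (1 - p)**(n - k)
        step = f / df
        p -= step
        p = min(max(p, 1e-12), 1 - 1e-12)  # keep p inside (0, 1)
        if abs(step) < tol:
            break
    return p

p = solve_component_p(0.99, 2, 3)  # 2-out-of-3 system, target reliability 0.99
print(round(system_reliability(p, 2, 3), 6))  # → 0.99
```

The same tail function also gives Clopper-Pearson-style limits by root-finding on p, which is essentially what the abstract describes.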
Binomial Rings: Axiomatisation, Transfer and Classification
Xantcha, Qimh Richey
2011-01-01
Hall's binomial rings, rings with binomial coefficients, are given an axiomatisation and proved identical to the numerical rings studied by Ekedahl. The Binomial Transfer Principle is established, enabling combinatorial proofs of algebraic identities. The finitely generated binomial rings are completely classified. An application to modules over binomial rings is given.
Directory of Open Access Journals (Sweden)
Jaron T Colas
Full Text Available In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed to be more feasible as architectures implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes "winner-take-all" processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans' value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light.
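As a rough illustration of the sequential-sampling idea in the abstract above, the sketch below simulates a bare-bones two-accumulator race model. All parameters and names here are my own illustrative choices, and this is far simpler than the paper's six-parameter connectionist model: evidence for each option accumulates noisily until one accumulator reaches a threshold, jointly producing a choice and a reaction time.

```python
import random

def race_trial(v1, v2, threshold=1.0, dt=0.001, noise=0.1, seed=None):
    # Minimal two-accumulator race: each accumulator integrates its drift
    # plus Gaussian noise (Euler-Maruyama step); the first to reach the
    # threshold determines the choice, and the elapsed time is the RT.
    rng = random.Random(seed)
    x1 = x2 = 0.0
    t = 0.0
    while True:
        t += dt
        x1 += v1 * dt + noise * rng.gauss(0, 1) * dt**0.5
        x2 += v2 * dt + noise * rng.gauss(0, 1) * dt**0.5
        if x1 >= threshold or x2 >= threshold:
            return (1 if x1 >= x2 else 2), t

# The higher-valued option (larger drift rate) should win most trials:
wins = sum(1 for i in range(200) if race_trial(1.5, 0.5, seed=i)[0] == 1)
print(wins > 150)  # → True
```

Extensions toward the paper's models would add mutual inhibition between the accumulators and an attentional weighting term, but the choice/RT bookkeeping stays the same.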
On a Fractional Binomial Process
Cahoy, Dexter O.; Polito, Federico
2012-02-01
The classical binomial process has been studied by Jakeman (J. Phys. A 23:2815-2825, 1990) (and the references therein) and has been used to characterize a series of radiation states in quantum optics. In particular, he studied a classical birth-death process where the chance of birth is proportional to the difference between a larger fixed number and the number of individuals present. It is shown that at large times, an equilibrium is reached which follows a binomial process. In this paper, the classical binomial process is generalized using the techniques of fractional calculus and is called the fractional binomial process. The fractional binomial process is shown to preserve the binomial limit at large times while expanding the class of models that include non-binomial fluctuations (non-Markovian) at regular and small times. As a direct consequence, the generality of the fractional binomial model makes the proposed model more desirable than its classical counterpart in describing real physical processes. More statistical properties are also derived.
Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus
2012-01-01
for applying the sequential Gibbs sampler and illustrate how it works. Through two case studies, we demonstrate the application of the method to a linear image restoration problem and to a non-linear cross-borehole inversion problem. We demonstrate how prior information can reduce the complexity of an inverse...
The Binomial Distribution in Shooting
Chalikias, Miltiadis S.
2009-01-01
The binomial distribution is used to predict the winner of the 49th International Shooting Sport Federation World Championship in double trap shooting held in 2006 in Zagreb, Croatia. The outcome of the competition was definitely unexpected.
del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar
2010-06-15
A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). This system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample, reagents and time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol) and generates 3.333 ml of waste. The frequency of analysis is high (12 samples h(-1) including all steps, i.e., cleaning, preparing and analysing). The reagents used are in common laboratory use, and it is not necessary to know their concentrations exactly. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time consuming and uses large amounts of organic solvents.
Directory of Open Access Journals (Sweden)
Ke Tang
2014-04-01
Full Text Available Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain-growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DISGRO). With an energy function designed specifically for loops, our method can efficiently generate high-quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is 1.53 Å, with a lowest-energy RMSD of 2.99 Å and an average ensemble RMSD of 5.23 Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about 10 cpu minutes for 12-residue loops, compared to ca. 180 cpu minutes using the FALCm method. Test results on benchmark datasets show that DISGRO performs comparably or better than previous successful methods, while requiring far less computing time. DISGRO is especially effective in modeling longer loops (10-17 residues).
Application of binomial-edited CPMG to shale characterization.
Washburn, Kathryn E; Birdwell, Justin E
2014-09-01
Unconventional shale resources may contain a significant amount of hydrogen in organic solids such as kerogen, but it is not possible to directly detect these solids with many NMR systems. Binomial-edited pulse sequences capitalize on magnetization transfer between solids, semi-solids, and liquids to provide an indirect method of detecting solid organic materials in shales. When the organic solids can be directly measured, binomial-editing helps distinguish between different phases. We applied a binomial-edited CPMG pulse sequence to a range of natural and experimentally-altered shale samples. The most substantial signal loss is seen in shales rich in organic solids while fluids associated with inorganic pores seem essentially unaffected. This suggests that binomial-editing is a potential method for determining fluid locations, solid organic content, and kerogen-bitumen discrimination. Copyright © 2014 Elsevier Inc. All rights reserved.
A method for multiple sequential analyses of macrophage functions using a small single cell sample
Directory of Open Access Journals (Sweden)
F.R.F. Nascimento
2003-09-01
Full Text Available Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II molecules (MHC II). Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 × 10^5 cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as little as ≤2 × 10^5 macrophages per well, we determined sequentially the oxidative burst (H2O2), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H2O2 release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.
Integer Solutions of Binomial Coefficients
Gilbertson, Nicholas J.
2016-01-01
A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)^n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…
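The coefficients of (x + y)^n discussed above can be generated with Pascal's rule; the short sketch below (my own illustration, not the article's formula) builds each row of coefficients from the previous one.

```python
def binomial_coefficients(n):
    # Coefficients of (x + y)**n, built row by row with Pascal's rule:
    # C(n, k) = C(n-1, k-1) + C(n-1, k)
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

print(binomial_coefficients(4))  # → [1, 4, 6, 4, 1]
```

Each zip pairs the previous row shifted one place against itself, which is exactly the adjacent-sum structure of Pascal's triangle.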
Bayesian analysis of a correlated binomial model
Diniz, Carlos A. R.; Tutia, Marcelo H.; Leite, Jose G.
2010-01-01
In this paper a Bayesian approach is applied to the correlated binomial model, CB(n, p, ρ), proposed by Luceño (Comput. Statist. Data Anal. 20 (1995) 511–520). The data augmentation scheme is used in order to overcome the complexity of the mixture likelihood. MCMC methods, including Gibbs sampling and Metropolis within Gibbs, are applied to estimate the posterior marginal for the probability of success p and for the correlation coefficient ρ. The sensitivity of the posterior is studied taking...
Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas
2014-12-05
A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for the analysis of aroma compounds in aqueous samples was developed. The MVM consists of three different DHS method parameter sets, including the choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa), with good linearity (r(2)>0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even in MS scan mode. The feasibility and benefit of the method was demonstrated with the analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL(-1) (RSD<10%, n=5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Boonjob, Warunya; Miró, Manuel; Kolev, Spas D
2013-12-15
A proof of concept of a novel pervaporation sequential injection (PSI) analysis method for automatic non-chromatographic speciation analysis of inorganic arsenic in complex aqueous samples is presented. The method is based on hydride generation of arsine followed by its on-line pervaporation-based membrane separation and CCD spectrophotometric detection. The concentrations of arsenite (As(III)) and arsenate (As(V)) are determined sequentially in a single sample zone. The leading section of the sample zone merges with a citric acid/citrate buffer solution (pH 4.5) for the selective reduction of As(III) to arsine, while the trailing section of the sample zone merges with hydrochloric acid solution to allow the reduction of both As(III) and As(V) to arsine at pH lower than 1. Virtually identical analytical sensitivity is obtained for both As(III) and As(V) at this high acidity. The flow analyzer also accommodates an in-line pH detector for monitoring the acidity throughout the sample zone prior to hydride generation. Under optimal conditions the proposed PSI method is characterized by a limit of detection, linear calibration range and repeatability for As(III) of 22 μg L(-1) (3s blank-level criterion), 50-1000 μg L(-1) and 3.0% at the 500 μg L(-1) level, and for As(V) of 51 μg L(-1), 100-2000 μg L(-1) and 2.6% at the 500 μg L(-1) level, respectively. The method was validated with mixed As(III)/As(V) standard aqueous solutions and successfully applied to the determination of As(III) and As(V) in river water samples with elevated content of dissolved organic carbon and suspended particulate matter with no prior sample pretreatment. Excellent relative recoveries ranging from 98% to 104% were obtained for both As(III) and As(V). © 2013 Elsevier B.V. All rights reserved.
Efficient Sequential Monte Carlo Sampling for Continuous Monitoring of a Radiation Situation
Czech Academy of Sciences Publication Activity Database
Šmídl, Václav; Hofman, Radek
2014-01-01
Roč. 56, č. 4 (2014), s. 514-527 ISSN 0040-1706 R&D Projects: GA MV VG20102013018 Institutional support: RVO:67985556 Keywords: radiation protection * atmospheric dispersion model * importance sampling Subject RIV: BD - Theory of Information Impact factor: 1.814, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/smidl-0433631.pdf
Smoothness in Binomial Edge Ideals
Directory of Open Access Journals (Sweden)
Hamid Damadi
2016-06-01
Full Text Available In this paper we study some geometric properties of the algebraic set associated to the binomial edge ideal of a graph. We study the singularity and smoothness of the algebraic set associated to the binomial edge ideal of a graph. Some of these algebraic sets are irreducible and some of them are reducible. If every irreducible component of the algebraic set is smooth we call the graph an edge smooth graph, otherwise it is called an edge singular graph. We show that complete graphs are edge smooth and introduce two conditions such that the graph G is edge singular if and only if it satisfies these conditions. Then, it is shown that cycles and most of trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.
Sequential separation method for the determination of Plutonium and Americium in fecal samples
International Nuclear Information System (INIS)
Raveendran, Nanda; Rao, D.D.; Yadav, J.R.; Baburajan, A.
2014-01-01
The estimation of internal contamination due to plutonium and americium in radiation workers of the Advanced Fuel Fabrication Facility (AFFF) at Tarapur was carried out by bioassay (fecal samples) of the workers. Conventionally, the separation of Pu and Am was carried out by alkali fusion followed by anion exchange separation for Pu and cation exchange separation for Am. This paper deals with an alternative method in which the entire ash of the sample, spiked with 236Pu tracer (3-11 mBq) and 243Am tracer (2.8-14.5 mBq), was acid leached; Pu was separated by anion exchange as per the standard analytical procedure and Am by using TRU resin. In this work the extraction chromatography method used TRU resin procured from Eichrom, U.K., which contains N,N-diisobutyl carbamoyl methyl phosphine oxide (CMPO) as extractant and tri-n-butyl phosphate (TBP) as diluent, absorbed on an inert polymeric support, for the separation of Am from the fecal sample. The 8 N HNO3 effluent from the Pu separation step was dried and the residue was dissolved in 10 ml of 1 M Al(NO3)3 in 3 M HNO3; a pinch of ascorbic acid was added and the solution was loaded on a TRU resin column (diameter ~4 mm, height 60 mm) preconditioned with 30 ml of 1 M Al(NO3)3 in 3 M HNO3. The column was washed with 5 ml of 3 M HNO3 and 5 ml of 2 M HNO3. The nitrate concentration was lowered by adding 10 ml of 0.05 M HNO3. Am was eluted with 3 ml of 9 M HCl and 20 ml of 2 M HCl. The eluate was dried and electrodeposited on a SS planchet in NH4(SO4)2 solution at pH 2.2 for two hours. Pu and Am activity was estimated by counting in a passivated ion-implanted planar silicon (PIPS) detector coupled to an 8K-channel alpha spectrometer. Each sample was counted for a duration of 3-4 lakh (3-4 × 10^5) seconds. In this study the number of samples analyzed was 25. The paper gives details of analytical recoveries: the Pu tracer recovery varies from 55-90% with a mean of 70% and standard deviation of 9.9%. The Am tracer recovery was in the range of 20-89.3% with a mean of
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
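To make the construction concrete, here is a hedged sketch of an add-one adjusted Wald interval for the difference of paired proportions, where the discordant-pair counts f12 and f21 drive the estimate. The exact adjustment proposed by Bonett and Price may differ in detail; this only illustrates the general adjusted-Wald idea of shrinking the estimate before applying the normal approximation.

```python
from math import sqrt

def adjusted_wald_paired(f12, f21, n, z=1.96):
    # Illustrative "add one to each discordant cell, two to n" adjustment
    # for d = p12 - p21 from a paired 2x2 table (assumption: not necessarily
    # the exact Bonett-Price formula).
    n_adj = n + 2
    p12 = (f12 + 1) / n_adj
    p21 = (f21 + 1) / n_adj
    d = p12 - p21
    se = sqrt((p12 + p21 - d**2) / n_adj)
    return d - z * se, d + z * se

lo, hi = adjusted_wald_paired(f12=12, f21=5, n=100)
print(lo < (12 - 5) / 100 < hi)  # interval covers the raw estimate → True
```

The appeal noted in the abstract is visible here: the interval is a one-line closed form, easy enough for an introductory course, yet the small-sample adjustment keeps it from collapsing when the discordant counts are tiny.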
Directory of Open Access Journals (Sweden)
Maria Cristina Carvalho do Espírito-Santo
2012-10-01
Full Text Available Schistosomiasis constitutes a major public health problem, with an estimated 200 million individuals infected worldwide and 700 million people living in risk areas. In Brazil there are areas of high, medium and low endemicity. Studies have shown that in endemic areas with a low prevalence of Schistosoma infection the sensitivity of parasitological methods is clearly reduced. Consequently, diagnosis is often impeded due to the presence of false-negative results. The aim of this study is to present the PCR reamplification (Re-PCR) protocol for the detection of Schistosoma mansoni in samples with low parasite load (fewer than 100 eggs per gram (epg) of feces). Three methods were used for the lysis of the envelopes of the S. mansoni eggs and two techniques of DNA extraction were carried out. Extracted DNA was quantified, and the results suggested that the extraction technique, which mixed glass beads with a guanidine isothiocyanate/phenol/chloroform (GT) solution, produced good results. PCR reamplification was conducted and detection sensitivity was found to be five eggs per 500 mg of artificially marked feces. The results achieved using these methods suggest that they are potentially viable for the detection of Schistosoma infection with low parasite load.
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan with precision levels of 0.10 and 0.25 (SE/mean), respectively. For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
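Green's fixed-precision stop line used in the study above follows from Taylor's power law (variance s^2 = a·m^b): sampling stops once the cumulative count crosses a line that depends on the sample number n and the desired precision D = SE/mean. The sketch below uses hypothetical Taylor coefficients, since the fitted values for C. lesbia eggs are not given in this summary.

```python
def green_stop_line(n, D, a, b):
    # Green's (1970) stop line: from D^2 = a*m^(b-2)/n, the mean density at
    # which precision D is reached after n samples is m = (D^2*n/a)^(1/(b-2)),
    # so stop once the cumulative count T_n reaches n*m.
    return n * (D**2 * n / a) ** (1 / (b - 2))

def samples_needed(counts, D, a, b):
    # Accumulate counts sequentially until the stop line is crossed.
    total = 0
    for n, c in enumerate(counts, start=1):
        total += c
        if total >= green_stop_line(n, D, a, b):
            return n, total
    return len(counts), total  # desired precision not yet reached

# Hypothetical Taylor coefficients (a, b) and a repeating stream of egg counts:
n, total = samples_needed([2, 0, 1, 3, 0, 2, 1, 4, 0, 2] * 5, D=0.25, a=2.0, b=1.4)
print(n, total)
```

Tightening D from 0.25 to 0.10 pushes the stop line up sharply, which mirrors the jump from ~27 to ~161 sample units reported in the abstract.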
Estimation of Log-Linear-Binomial Distribution with Applications
Directory of Open Access Journals (Sweden)
Elsayed Ali Habib
2010-01-01
Full Text Available The log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by moment and maximum likelihood methods. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.
Beta-binomial regression and bimodal utilization.
Liu, Chuan-Fen; Burgess, James F; Manning, Willard G; Maciejewski, Matthew L
2013-10-01
To illustrate how the analysis of bimodal, U-shaped distributed utilization can be modeled with beta-binomial regression, which is rarely used in health services research. Veterans Affairs (VA) administrative data and Medicare claims in 2001-2004 for 11,123 Medicare-eligible VA primary care users in 2000. We compared means and distributions of VA reliance (the proportion of all VA/Medicare primary care visits occurring in VA) predicted from beta-binomial, binomial, and ordinary least-squares (OLS) models. The beta-binomial model fits the bimodal distribution of VA reliance better than the binomial and OLS models because it does not depend on normality and has greater flexibility in its shape parameters. Increased awareness of beta-binomial regression may help analysts apply appropriate methods to outcomes with bimodal or U-shaped distributions. © Health Research and Educational Trust.
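The beta-binomial's suitability for U-shaped outcomes can be seen directly from its probability mass function. The sketch below (illustrative only, not the paper's regression model) shows that with both shape parameters below one, probability mass piles up at the extremes, a pattern a plain binomial cannot produce.

```python
from math import comb, lgamma, exp

def beta_binomial_pmf(k, n, a, b):
    # Beta-binomial PMF via log-gamma: C(n,k) * B(k+a, n-k+b) / B(a, b)
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# With shape parameters a, b < 1 the distribution is U-shaped (bimodal at
# the extremes), matching all-or-nothing utilization patterns:
pmf = [beta_binomial_pmf(k, 10, 0.4, 0.4) for k in range(11)]
print(pmf[0] > pmf[5] and pmf[10] > pmf[5])  # mass piles up at 0 and n → True
```

In a regression setting the mean and dispersion of this PMF are linked to covariates, but the U-shape above is the essential reason the model outperforms binomial and OLS fits for reliance data.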
Zero-truncated negative binomial - Erlang distribution
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real data, the number of methamphetamine in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
DEFF Research Database (Denmark)
Qiao, Jixin; Hou, Xiaolin; Roos, Per
2009-01-01
This article presents an automated method for the rapid determination of 239Pu and 240Pu in various environmental samples. The analytical method involves the in-line separation of Pu isotopes using extraction chromatography (TEVA) implemented in a sequential injection (SI) network followed by ... of the in-line extraction chromatographic run was ...
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
Applications of Bayesian decision theory to sequential mastery testing
Vos, Hendrik J.
1999-01-01
The purpose of this paper is to formulate optimal sequential rules for mastery tests. The framework for the approach is derived from Bayesian sequential decision theory. Both a threshold and linear loss structure are considered. The binomial probability distribution is adopted as the psychometric
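A minimal numerical sketch of such a sequential mastery rule is given below. It uses an illustrative posterior-probability cutoff on a Beta-binomial posterior rather than the paper's explicit threshold and linear loss structures, and all cutoffs and names are my own assumptions.

```python
def posterior_prob_mastery(successes, trials, theta0, a=1, b=1, grid=10001):
    # P(theta > theta0 | data) under a Beta(a, b) prior with a binomial
    # likelihood, via simple numerical integration of the Beta posterior.
    a_post, b_post = a + successes, b + trials - successes
    xs = [i / (grid - 1) for i in range(grid)]
    dens = [x**(a_post - 1) * (1 - x)**(b_post - 1) for x in xs]
    total = sum(dens)
    return sum(d for x, d in zip(xs, dens) if x > theta0) / total

def sequential_mastery_decision(successes, trials, theta0=0.7,
                                pass_cut=0.95, fail_cut=0.05):
    # Illustrative sequential rule: declare mastery or nonmastery only when
    # the posterior is decisive; otherwise administer another item.
    p = posterior_prob_mastery(successes, trials, theta0)
    if p >= pass_cut:
        return "master"
    if p <= fail_cut:
        return "nonmaster"
    return "continue testing"

print(sequential_mastery_decision(18, 20))  # strong evidence above theta0 → master
```

The Bayesian sequential framing in the abstract additionally weighs the cost of a wrong classification against the cost of administering another item; the continue region above is the crude analogue of that trade-off.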
Ruggeri, Paolo; Irving, James; Holliger, Klaus
2015-08-01
We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal is found to strongly influence MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
System-Reliability Cumulative-Binomial Program
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, NEWTONP, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), used independently of one another. Program finds probability required to yield given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
A class of orthogonal nonrecursive binomial filters.
Haddad, R. A.
1971-01-01
The time- and frequency-domain properties of the orthogonal binomial sequences are presented. It is shown that these sequences, or digital filters based on them, can be generated using adders and delay elements only. The frequency-domain behavior of these nonrecursive binomial filters suggests a number of applications as low-pass Gaussian filters or as inexpensive bandpass filters.
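A minimal sketch of how binomial filter coefficients arise from adders and delays alone: repeated convolution with [1, 1] generates the rows of Pascal's triangle. The filter order and normalization below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def binomial_coeffs(n):
    """n-th row of Pascal's triangle built by repeated convolution with
    [1, 1]; in hardware this structure needs only adders and delay elements."""
    h = np.array([1.0])
    for _ in range(n):
        h = np.convolve(h, [1.0, 1.0])
    return h

# Normalised low-pass binomial filter; by the de Moivre-Laplace theorem the
# coefficients approach a sampled Gaussian as the order grows.
h = binomial_coeffs(8) / 2**8
```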
Common-Reliability Cumulative-Binomial Program
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, CROSSER, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), used independently of one another. Point of equality between reliability of system and common reliability of components found. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
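The crossing point CROSSER computes can be illustrated as below: the p at which the k-out-of-n system reliability equals the common component reliability. Bisection and the 2-out-of-3 example are assumptions of this sketch, not details of the NASA program.

```python
from math import comb

def kofn_reliability(p, k, n):
    """Upper-tail cumulative binomial: P(at least k of n components work)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def crossing_point(k, n, tol=1e-10):
    """Bisection for the interior p where system reliability equals the
    common component reliability, i.e. kofn_reliability(p, k, n) = p.
    For 1 < k < n the difference changes sign exactly once in (0, 1)."""
    lo, hi = 1e-6, 1 - 1e-6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kofn_reliability(mid, k, n) - mid < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_cross = crossing_point(2, 3)  # for 2-out-of-3 the crossing is at p = 0.5
```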
Problems on Divisibility of Binomial Coefficients
Osler, Thomas J.; Smoak, James
2004-01-01
Twelve unusual problems involving divisibility of the binomial coefficients are presented in this article. The problems are listed in "The Problems" section. All twelve problems have short solutions which are listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…
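One classical tool for divisibility questions of this kind (not necessarily among the article's twelve problems) is Kummer's theorem: the exponent of a prime p in C(m+n, m) equals the number of carries when adding m and n in base p. A sketch:

```python
from math import comb

def carries_base_p(m, n, p):
    """Count the carries when adding m and n in base p. By Kummer's
    theorem this equals the exponent of the prime p in C(m+n, m)."""
    carries = carry = 0
    while m or n or carry:
        s = m % p + n % p + carry
        carry = 1 if s >= p else 0
        carries += carry
        m //= p
        n //= p
    return carries

def p_adic_valuation(x, p):
    """Exponent of p in the integer x > 0 (direct check of the theorem)."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v
```

For example, C(10, 5) = 252 = 4 · 63, and adding 5 + 5 in binary (101 + 101) produces exactly two carries.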
Morrison, H M; Kramps, J A; Dijkman, J H; Stockley, R A
1986-01-01
Bronchoalveolar lavage is used to obtain cells and proteins from the lower respiratory tract for diagnosis and research. Uncertainty exists about which site in the lung is sampled by the lavage fluid and what effect different lavage volumes have on recovery of the constituents of lavage fluid. Dilution of alveolar lining fluid by lavage fluid is variable and results are usually expressed as protein ratios to surmount this problem. We have compared cell profiles and the concentrations of two proteinase inhibitors--the low molecular weight bronchial protease inhibitor antileucoprotease and alpha 1 proteinase inhibitor--together with alpha 1 proteinase inhibitor function and its relationship to the cell profile, in sequential bronchoalveolar lavage fluid samples from patients undergoing bronchoscopy. There was no difference in total or differential cell counts or albumin or alpha 1 proteinase inhibitor concentrations between the first and second halves of the lavage. Both the concentration of antileucoprotease and the ratio of antileucoprotease to albumin were, however, lower in the second half of the lavage (2p less than 0.01 and 2p less than 0.05 respectively). There was no difference in the function of alpha 1 proteinase inhibitor (assessed by inhibition of porcine pancreatic elastase--PPE) between aliquots (0.28 mol PPE inhibited/mol alpha 1 proteinase inhibitor, range 0-1.19, for the first half and 0.37 mol PPE inhibited/mol alpha 1 proteinase inhibitor, range 0.10-0.80, for the second half). About 60-70% of alpha 1 proteinase inhibitor in each half of the lavage fluid was inactive as an inhibitor. The function of alpha 1 proteinase inhibitor did not differ between bronchitic smokers and ex-smokers. Alpha 1 proteinase inhibitor function was not related to the number of total white cells, macrophages, or neutrophils in the lavage fluid. Contamination of lavage by red blood cells was found to alter the concentration of alpha 1 proteinase inhibitor but not its…
Estimating negative binomial parameters from occurrence data with detection times.
Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub
2016-11-01
The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that, under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DEFF Research Database (Denmark)
Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald
2005-01-01
to conventional batch methods, this fully automated approach furthermore offers the potentials of a variety of operational extraction protocols. Employing the three-step sequential extraction BCR scheme to a certified homogeneous soil reference material (NIST, SRM 2710), this communication investigates four...
Energy Technology Data Exchange (ETDEWEB)
Chang-Kyu Kim (IAEA, Physics, Chemistry and Instrumentation Lab., Agency's Laboratories, Seibersdorf (Austria))
2010-03-15
A sequential injection system was developed, which can be widely used for the separation and preconcentration of analytes from diverse environmental samples. The system enables the separation time to be shortened by maintaining a constant flow rate of solution and by avoiding clogging or bubbling in a chromatographic column. The SI system was successfully applied to the separation of 237Np and Pu isotopes in IAEA reference materials and environmental samples, and to the sequential separation of 210Po and 210Pb in a phosphogypsum candidate reference material. The replicate analysis results of 237Np, 239+240Pu, 210Po and 210Pb in some IAEA reference materials using the SI system associated with HR-ICP-MS, alpha-spectrometry and LSC are in good agreement with the recommended value within 5% of standard deviation. The SI system enabled a halving of the separation time required for radionuclides. (author)
International Nuclear Information System (INIS)
Kartal, Senol; Aydin, Zeki; Tokalioglu, Serife
2006-01-01
The concentrations of metals (Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) in street sediment samples were determined by flame atomic absorption spectrometry (FAAS) using the modified BCR (the European Community Bureau of Reference) sequential extraction procedure. According to the BCR protocol for extracting the metals from the relevant target phases, a 1.0 g specimen of the sample was treated with 0.11 M acetic acid (exchangeable and bound to carbonates), 0.5 M hydroxylamine hydrochloride (bound to iron- and manganese-oxides), and 8.8 M hydrogen peroxide plus 1 M ammonium acetate (bound to sulphides and organics), sequentially. The residue was treated with aqua regia solution for recovery studies, although this step is not part of the BCR procedure. The mobility sequence based on the sum of the BCR sequential extraction stages was: Cd ∼ Zn (∼90%) > Pb (∼84%) > Cu (∼75%) > Mn (∼70%) > Co (∼57%) > Ni (∼43%) > Cr (∼40%) > Fe (∼17%). Enrichment factors, as the criteria for examining the impact of the anthropogenic emission sources of heavy metals, were calculated, and the most enriched elements in the dust samples were Cd, Pb, and Zn, with averages of 190, 111, and 20, respectively. Correlation analysis (CA) and principal component analysis (PCA) were applied to the data matrix to evaluate the analytical results and to identify the possible pollution sources of metals. PCA revealed that the sampling area was mainly influenced by three pollution sources, namely traffic, industrial, and natural sources. The results show that chemical sequential extraction is a valuable operational tool. Validation of the analytical results was checked by both recovery studies and analysis of the standard reference material (NIST SRM 2711 Montana Soil).
Predicting Cumulative Incidence Probability by Direct Binomial Regression
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard
Correlated binomial models and correlation structures
International Nuclear Information System (INIS)
Hisakado, Masato; Kitsukawa, Kenji; Mori, Shintaro
2006-01-01
We discuss a general method to construct correlated binomial distributions by imposing several consistent relations on the joint probability function. We obtain self-consistency relations for the conditional correlations and conditional probabilities. The beta-binomial distribution is derived by a strong symmetric assumption on the conditional correlations. Our derivation clarifies the 'correlation' structure of the beta-binomial distribution. It is also possible to study the correlation structures of other probability distributions of exchangeable (homogeneous) correlated Bernoulli random variables. We study some distribution functions and discuss their behaviours in terms of their correlation structures
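The exchangeable-Bernoulli construction behind the beta-binomial described here can be sketched numerically: a shared success probability q ~ Beta(a, b) induces the standard pairwise correlation 1/(a+b+1) among the Bernoulli trials. Parameter values and sample sizes below are illustrative assumptions.

```python
import random

def beta_binomial(n, a, b, rng):
    """Sum of n exchangeable correlated Bernoulli trials: one shared
    q ~ Beta(a, b) gives pairwise correlation rho = 1/(a + b + 1)."""
    q = rng.betavariate(a, b)
    return sum(rng.random() < q for _ in range(n))

rng = random.Random(1)
xs = [beta_binomial(10, 1.0, 1.0, rng) for _ in range(20000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# Theory: mean = n*a/(a+b) = 5 and variance = n*mu*(1-mu)*(1+(n-1)*rho) = 10
# with rho = 1/3 -- four times the independent-binomial variance of 2.5.
```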
Directory of Open Access Journals (Sweden)
N. Ozcan
2013-05-01
Full Text Available This paper reports the concentrations of heavy metals (Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn) in 20 dust samples collected from the streets of the Organized Industrial District in Sakarya, Turkey, determined by ICP-OES after a sequential extraction procedure. The three-step BCR sequential extraction procedure was used in order to evaluate the mobility, availability and persistence of heavy elements in the street dust samples. The three operationally defined fractions isolated using the BCR procedure were: acid extractable, reducible, and oxidizable. The mobility sequence based on the sum of the BCR sequential extraction stages was: Cd (82.3%) > Mn (80.0%) > Zn (78.8%) > Cu (70.2%) > Ni (65.9%) > Pb (63.8%) > Cr (47.3%) > Co (32.6%). Validation of the analytical results was checked by analysis of the BCR-701 certified reference material. The concentrations of metals in the street dust samples showed a decrease after each extraction stage.
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized by using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps has shown consistent agreement (95-96%) with certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To safeguard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notorious acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
Newton Binomial Formulas in Schubert Calculus
Cordovez, Jorge; Gatto, Letterio; Santiago, Taise
2008-01-01
We prove Newton's binomial formulas for Schubert Calculus to determine numbers of base point free linear series on the projective line with prescribed ramification divisor supported at given distinct points.
Calculating Cumulative Binomial-Distribution Probabilities
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
Log-binomial models: exploring failed convergence.
Williamson, Tyler; Eliasziw, Misha; Fick, Gordon Hilton
2013-12-13
Relative risk is a summary metric that is commonly used in epidemiological investigations. Increasingly, epidemiologists are using log-binomial models to study the impact of a set of predictor variables on a single binary outcome, as they naturally offer relative risks. However, standard statistical software may report failed convergence when attempting to fit log-binomial models in certain settings. The methods that have been proposed in the literature for dealing with failed convergence use approximate solutions to avoid the issue. This research looks directly at the log-likelihood function for the simplest log-binomial model where failed convergence has been observed, a model with a single linear predictor with three levels. The possible causes of failed convergence are explored and potential solutions are presented for some cases. Among the principal causes is a failure of the fitting algorithm to converge despite the log-likelihood function having a single finite maximum. Despite these limitations, log-binomial models are a viable option for epidemiologists wishing to describe the relationship between a set of predictors and a binary outcome where relative risk is the desired summary measure. Epidemiologists are encouraged to continue to use log-binomial models and advocate for improvements to the fitting algorithms to promote the widespread use of log-binomial models.
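The boundary problem described here can be made concrete with a toy sketch: for a single predictor with three levels, the log-binomial log-likelihood is only defined while every fitted probability stays below 1, and Newton-type fitters can fail near that boundary. The grouped data and the crude grid-search fitter below are illustrative assumptions, not the paper's method.

```python
from math import log, exp

# Toy grouped data: predictor level x -> (successes, trials). Illustrative only.
data = {0: (10, 100), 1: (20, 100), 2: (35, 100)}

def loglik(b0, b1):
    """Log-binomial log-likelihood for log P(Y=1|x) = b0 + b1*x; treated as
    -inf whenever a fitted probability reaches 1 -- the boundary where
    standard fitting algorithms can report failed convergence."""
    ll = 0.0
    for x, (s, n) in data.items():
        p = exp(b0 + b1 * x)
        if p >= 1.0:
            return float("-inf")
        ll += s * log(p) + (n - s) * log(1 - p)
    return ll

# A grid search stands in for the Newton-type iterations of real GLM software.
grid = ((loglik(b0 / 1000, b1 / 1000), b0 / 1000, b1 / 1000)
        for b0 in range(-3000, -1000, 10) for b1 in range(0, 1500, 10))
_, b0_hat, b1_hat = max(grid)
relative_risk = exp(b1_hat)  # relative risk per unit increase in x
```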
The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.
Kim, Jongphil; Lee, Ji-Hyun
2017-01-01
The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated, binomial random variable, X. However, the model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, the model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape by the maximum likelihood estimator (MLE) based on a beta-binomial model approach and the method of moments estimator (MME) in order to gauge when and how much the MLE is biased.
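A hedged sketch of the method of moments estimator for the beta-binomial, written here in terms of the first two moments (in practice the sample mean and variance are plugged in); it assumes genuine overdispersion, i.e. an intra-class correlation rho strictly between 0 and 1.

```python
def beta_binomial_mme(mean, var, n):
    """Invert the first two moments of Beta-Binomial(n, a, b):
    mean = n*mu and var = n*mu*(1-mu)*(1 + (n-1)*rho), with mu = a/(a+b)
    and rho = 1/(a+b+1). Valid only when var exceeds the binomial
    variance n*mu*(1-mu), i.e. 0 < rho < 1."""
    mu = mean / n
    rho = (var / (n * mu * (1 - mu)) - 1) / (n - 1)  # intra-class correlation
    a = mu * (1 - rho) / rho
    b = (1 - mu) * (1 - rho) / rho
    return a, b
```

For example, Beta-Binomial(10, 2, 3) has mean 4 and variance 6, so the inversion should recover (a, b) = (2, 3) exactly.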
DEFF Research Database (Denmark)
Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald
2005-01-01
An automated sequential injection (SI) system incorporating a dual-conical microcolumn is proposed as a versatile approach for the accommodation of both single and sequential extraction schemes for metal fractionation of solid samples of environmental concern. Coupled to flame atomic absorption...
Zero inflated negative binomial-generalized exponential distributionand its applications
Directory of Open Access Journals (Sweden)
Sirinapa Aryuyuen
2014-08-01
Full Text Available In this paper, we propose a new zero inflated distribution, namely, the zero inflated negative binomial-generalized exponential (ZINB-GE) distribution. The new distribution is used for count data with extra zeros and is an alternative for data analysis with over-dispersed count data. Some characteristics of the distribution are given, such as mean, variance, skewness, and kurtosis. Parameter estimation of the ZINB-GE distribution uses the maximum likelihood estimation (MLE) method. Simulated and observed data are employed to examine this distribution. The results show that the MLE method seems to have high efficiency for large sample sizes. Moreover, the mean square error of parameter estimation increases when the zero proportion is higher. For the real data sets, this new zero inflated distribution provides a better fit than the zero inflated Poisson and zero inflated negative binomial distributions.
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
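One well-known consequence of the binomial lower confidence bound underlying 90/95 POD demonstrations is the classical "29 of 29" rule. The sketch below derives it from the binomial arithmetic alone; it illustrates the confidence bound, not DOEPOD's sequential procedure.

```python
from math import ceil, log

def zero_miss_sample_size(pod=0.90, conf=0.95):
    """Smallest n such that n detections in n trials demonstrates
    POD >= pod at confidence conf. With all hits, the Clopper-Pearson
    lower bound exceeds pod exactly when pod**n <= 1 - conf."""
    return ceil(log(1 - conf) / log(pod))

n = zero_miss_sample_size()  # 29: the classical 29-of-29 rule for 90/95 POD
```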
Bowden, J; Mander, A
2014-01-01
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277-287) for two-stage trials with mid-trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be put on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re-estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.
DEFF Research Database (Denmark)
Qiao, Jixin; Hou, Xiaolin; Roos, Per
2010-01-01
This paper reports an automated analytical method for rapid and simultaneous determination of plutonium isotopes (239Pu and 240Pu) and neptunium (237Np) in environmental samples. An extraction chromatographic column packed with TrisKem TEVA® resin was incorporated in a sequential injection (SI...... procedures were investigated and compared for the adjustment of oxidation states of plutonium and neptunium to Pu(IV) and Np(IV), respectively. A two-step protocol using sulfite and concentrated nitric acid as redox reagents was proven to be the most effective method. The analytical results for both...
Gómez-Nieto, Beatriz; Gismera, Ma Jesús; Sevilla, Ma Teresa; Procopio, Jesús R
2015-01-07
The fast sequential multi-element determination of 11 elements present at different concentration levels in environmental samples and drinking waters has been investigated using high-resolution continuum source flame atomic absorption spectrometry. The main lines for Cu (324.754 nm), Zn (213.857 nm), Cd (228.802 nm), Ni (232.003 nm) and Pb (217.001 nm), main and secondary absorption lines for Mn (279.482 and 279.827 nm), Fe (248.327, 248.514 and 302.064 nm) and Ca (422.673 and 239.856 nm), secondary lines with different sensitivities for Na (589.592 and 330.237 nm) and K (769.897 and 404.414 nm) and a secondary line for Mg (202.582 nm) have been chosen to perform the analysis. A flow injection system has been used for sample introduction so sample consumption has been reduced up to less than 1 mL per element, measured in triplicate. Furthermore, the use of multiplets for Fe and the side pixel registration approach for Mg have been studied in order to reduce sensitivity and extend the linear working range. The figures of merit have been calculated and the proposed method was applied to determine these elements in a pine needles reference material (SRM 1575a), drinking and natural waters and soil extracts. Recoveries of analytes added at different concentration levels to water samples and extracts of soils were within 88-115% interval. In this way, the fast sequential multi-element determination of major and minor elements can be carried out, in triplicate, with successful results without requiring additional dilutions of samples or several different strategies for sample preparation using about 8-9 mL of sample. Copyright © 2014 Elsevier B.V. All rights reserved.
Beesley, Luke; Moreno-Jiménez, Eduardo; Clemente, Rafael; Lepp, Nicholas; Dickinson, Nicholas
2010-01-01
Three methods for predicting element mobility in soils have been applied to an iron-rich soil, contaminated with arsenic, cadmium and zinc. Soils were collected from 0 to 30 cm, 30 to 70 cm and 70 to 100 cm depths in the field and soil pore water was collected at different depths from an adjacent 100 cm deep trench. Sequential extraction and a column leaching test in the laboratory were compared to element concentrations in pore water sampled directly from the field. Arsenic showed low extractability, low leachability and occurred at low concentrations in pore water samples. Cadmium and zinc were more labile and present in higher concentrations in pore water, increasing with soil depth. Pore water sampling gave the best indication of short term element mobility when field conditions were taken into account, but further extraction and leaching procedures produced a fuller picture of element dynamics, revealing highly labile Cd deep in the soil profile.
DEFF Research Database (Denmark)
Hansen, Elo Harald
by bio- and chemiluminescence. In recent years, FIA has been supplemented by Sequential Injection Analysis (SIA), which, although it inherently entails some limitations as compared to FIA, offers specific advantages, notably in terms of significantly reduced sample and reagent(s) consumption and hence...... of applications. Particular focus will be placed on its use as a vehicle for pretreatment of complex sample matrices for determination of trace-level concentrations of metal ions by electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) via exploitation...... of the renewable microcolumn concept. Despite their excellent analytical chemical capabilities, ETAAS as well as ICPMS often require that the samples are subjected to suitable pretreatment in order to obtain the necessary sensitivity and selectivity. Either in order to separate the analyte from potentially...
DEFF Research Database (Denmark)
Hansen, Elo Harald
In the last decade, Flow Injection Analysis (FIA) became supplemented by Sequential Injection Analysis (SIA), which, although it inherently entails some limitations as compared to FIA, offers specific advantages, notably in terms of significantly reduced sample and reagent(s) consumption and hence...... will be directed towards its use as a vehicle for on-line pretreatment of complex sample matrices for determination of trace-level concentrations of metal ions by electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) via exploitation of the renewable solid......-phase microcolumn concept utilising hydrophobic as well as hydrophilic bead materials. Although ETAAS and ICPMS both are characterised by excellent analytical chemical capabilities, they nevertheless often require that the samples be subjected to suitable pretreatment in order to obtain the necessary sensitivity...
Binomial distribution for the charge asymmetry parameter
International Nuclear Information System (INIS)
Chou, T.T.; Yang, C.N.
1984-01-01
It is suggested that for high energy collisions the distribution with respect to the charge asymmetry z = n_F - n_B is binomial, where n_F and n_B are the forward and backward charge multiplicities. (orig.)
Binomial test models and item difficulty
van der Linden, Willem J.
1979-01-01
In choosing a binomial test model, it is important to know exactly what conditions are imposed on item difficulty. In this paper these conditions are examined for both a deterministic and a stochastic conception of item responses. It appears that they are more restrictive than is generally…
Binomial vs poisson statistics in radiation studies
International Nuclear Information System (INIS)
Foster, J.; Kouris, K.; Spyrou, N.M.; Matthews, I.P.; Welsh National School of Medicine, Cardiff
1983-01-01
The processes of radioactive decay, decay and growth of radioactive species in a radioactive chain, prompt emission(s) from nuclear reactions, conventional activation and cyclic activation are discussed with respect to their underlying statistical density function. By considering the transformation(s) that each nucleus may undergo it is shown that all these processes are fundamentally binomial. Formally, when the number of experiments N is large and the probability of success p is close to zero, the binomial is closely approximated by the Poisson density function. In radiation and nuclear physics, N is always large: each experiment can be conceived of as the observation of the fate of each of the N nuclei initially present. Whether p, the probability that a given nucleus undergoes a prescribed transformation, is close to zero depends on the process and nuclide(s) concerned. Hence, although a binomial description is always valid, the Poisson approximation is not always adequate. Therefore further clarification is provided as to when the binomial distribution must be used in the statistical treatment of detected events. (orig.)
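The adequacy condition stated here (N large and p close to zero) can be checked numerically: the Poisson approximation to the binomial is excellent in that regime and visibly poor otherwise. The parameter choices below are illustrative.

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Exact binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability mass function with mean lam."""
    return exp(-lam) * lam**k / factorial(k)

# Same mean lambda = 5 in both regimes. With N = 10000 and p = 0.0005 the
# Poisson approximation is excellent; with N = 10 and p = 0.5 it is not.
err_small_p = max(abs(binom_pmf(k, 10000, 0.0005) - poisson_pmf(k, 5.0))
                  for k in range(20))
err_large_p = max(abs(binom_pmf(k, 10, 0.5) - poisson_pmf(k, 5.0))
                  for k in range(11))
```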
The Normal Distribution From Binomial to Normal
Indian Academy of Sciences (India)
S Ramasubramanian. Resonance – Journal of Science Education, Volume 2, Issue 6, June 1997, pp. 15-24. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/06/0015-0024
Wald, Abraham
2013-01-01
In 1943, while in charge of Columbia University's Statistical Research Group, Abraham Wald devised sequential analysis, an innovative statistical inference system. Because the decision to terminate an experiment is not predetermined, sequential analysis can arrive at a decision much sooner and with substantially fewer observations than equally reliable test procedures based on a predetermined number of observations. The system's immense value was immediately recognized, and its use was restricted to wartime research and procedures. In 1945, it was released to the public and has since revolutionized…
International Nuclear Information System (INIS)
Kim, C.-K.; Sansone, U.; Martin, P.; Kim, C.-S.
2007-02-01
The Chemistry Unit of the Physics, Chemistry and Instrumentation Laboratory in the IAEA's Seibersdorf Laboratory in Austria has the programmatic responsibility to provide assistance to Member State laboratories in maintaining and improving the reliability of analytical measurement results, both in trace element and radionuclide determinations. This is accomplished through the provision of reference materials of terrestrial origin, validated analytical procedures, training in the implementation of internal quality control, and through the evaluation of measurement performance by organization of worldwide and regional interlaboratory comparison exercises. In this framework an on-line sequential injection (SI) system was developed, which can be widely used for the separation and preconcentration of target analytes from diverse environmental samples. The system enables the separation time to be shortened by maintaining a constant flow rate of solution and by avoiding clogging or bubbling in a chromatographic column. The SI system was successfully applied to the separation of Pu in IAEA reference material (IAEA Soil-6) and to the sequential separation of 210Po and 210Pb in phosphogypsum candidate reference material. The replicate analysis results of Pu in IAEA reference material (Soil-6) obtained with the SI system are in good agreement with the recommended value within 5% of standard deviation. The SI system enabled a halving of the separation time required for radionuclides.
Longitudinal beta-binomial modeling using GEE for overdispersed binomial data.
Wu, Hongqian; Zhang, Ying; Long, Jeffrey D
2017-03-15
Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings for which the binomial data are often overdispersed. The standard generalized linear mixed effects model may result in severe underestimation of standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm of the Fisher scoring and the method of moments is implemented for computing the method. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
Simulation on Poisson and negative binomial models of count road accident modeling
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion, and the data may also contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the dependent variable of the generated data assumed to follow a given distribution, namely the Poisson or negative binomial distribution, for sample sizes from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model validation was compared across fits, and the simulation results show that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
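A minimal sketch of the kind of simulation described: negative binomial counts generated through the Poisson-gamma mixture show variance clearly exceeding the mean (overdispersion), unlike pure Poisson counts. The parameters, seed and sample size are illustrative assumptions.

```python
import random
from math import exp

def poisson(lam, rng):
    """Knuth's multiplication method; adequate for the small means here."""
    limit, k, prod = exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def neg_binomial(r, p, rng):
    """Poisson-gamma mixture: lambda ~ Gamma(shape=r, scale=(1-p)/p),
    X | lambda ~ Poisson(lambda). Mean r(1-p)/p, variance r(1-p)/p**2."""
    return poisson(rng.gammavariate(r, (1 - p) / p), rng)

rng = random.Random(42)
nb = [neg_binomial(2, 0.5, rng) for _ in range(4000)]
mean = sum(nb) / len(nb)
var = sum((x - mean) ** 2 for x in nb) / len(nb)
# Overdispersion: the variance (theoretically 4) exceeds the mean (2).
```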
Expansion around half-integer values, binomial sums, and inverse binomial sums
International Nuclear Information System (INIS)
Weinzierl, Stefan
2004-01-01
I consider the expansion of transcendental functions in a small parameter around rational numbers. This includes in particular the expansion around half-integer values. I present algorithms which are suitable for an implementation within a symbolic computer algebra system. The method is an extension of the technique of nested sums. The algorithms allow in addition the evaluation of binomial sums, inverse binomial sums and generalizations thereof.
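A small numeric illustration of an inverse binomial sum (not one of the paper's algorithms, just a classic closed form for the inverse central binomial coefficients) can be checked directly:

```python
from math import comb, pi, sqrt

# Partial sum of the inverse central binomial series: sum_{n>=1} 1/C(2n, n)
s = sum(1 / comb(2 * n, n) for n in range(1, 60))

# Known closed form: 1/3 + 2*sqrt(3)*pi/27
closed = 1 / 3 + 2 * sqrt(3) * pi / 27
print(abs(s - closed) < 1e-12)  # the series converges rapidly (terms ~ 4**-n)
```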
Tomography of binomial states of the radiation field
Bazrafkan, MR; Man'ko
2004-01-01
The symplectic, optical, and photon-number tomographic symbols of binomial states of the radiation field are studied. Explicit relations for all tomograms of the binomial states are obtained. Two measures for nonclassical properties of these states are discussed.
International Nuclear Information System (INIS)
2014-01-01
Since 2004, IAEA activities related to the terrestrial environment have aimed at the development of a set of procedures to determine radionuclides in environmental samples. Reliable, comparable and ‘fit for purpose’ results are an essential requirement for any decision based on analytical measurements. For the analyst, tested and validated analytical procedures are extremely important tools for the production of analytical data. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available for reference to both the analyst and the customer. This publication describes a combined procedure for the sequential determination of 90Sr, 241Am and Pu radioisotopes in environmental samples. The method is based on the chemical separation of strontium, americium and plutonium using ion exchange chromatography, extraction chromatography and precipitation, followed by alpha spectrometric and liquid scintillation counting detection. The method was tested and validated in terms of repeatability and trueness in accordance with International Organization for Standardization (ISO) guidelines using reference materials and proficiency test samples. Reproducibility tests were performed later at the IAEA Terrestrial Environment Laboratory. The calculations of the massic activity, uncertainty budget, decision threshold and detection limit are also described in this publication. The procedure is introduced for the determination of 90Sr, 241Am and Pu radioisotopes in environmental samples such as soil, sediment, air filter and vegetation samples. It is expected to be of general use to a wide range of laboratories, including the Analytical Laboratories for the Measurement of Environmental Radioactivity (ALMERA) network, for routine environmental monitoring purposes.
Energy Technology Data Exchange (ETDEWEB)
Gratch, J. [Univ. of Southern California, Marina del Rey, CA (United States)
1996-12-31
This article advocates a new model for inductive learning. Called sequential induction, it helps bridge classical fixed-sample learning techniques (which are efficient but difficult to formally characterize) and worst-case approaches (which provide strong statistical guarantees but are too inefficient for practical use). Learning proceeds as a sequence of decisions which are informed by training data. By analyzing induction at the level of these decisions, and by utilizing only enough data to make each decision, sequential induction provides statistical guarantees but with substantially less data than worst-case methods require. The sequential inductive model is also useful as a method for determining a sufficient sample size for inductive learning and, as such, is relevant to learning problems where the preponderance of data or the cost of gathering data precludes the use of traditional methods.
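A toy sketch of the "only enough data per decision" idea, using a Hoeffding confidence bound as the stopping rule (the bound and all parameters are illustrative, not the paper's method):

```python
import math
import random

def sequential_decide(sample, threshold=0.5, delta=0.05, max_n=100000):
    """Draw examples one at a time and stop as soon as a Hoeffding
    confidence interval separates the running mean from the threshold."""
    total, n = 0.0, 0
    while n < max_n:
        total += sample()
        n += 1
        eps = math.sqrt(math.log(2 / delta) / (2 * n))  # Hoeffding half-width
        mean = total / n
        if mean - eps > threshold:
            return True, n   # confidently above the threshold
        if mean + eps < threshold:
            return False, n  # confidently below the threshold
    return None, n           # budget exhausted without a decision

random.seed(0)
decision, n_used = sequential_decide(lambda: float(random.random() < 0.8))
print(decision, n_used)  # decides True using only a few dozen samples
```

The further the true mean is from the threshold, the sooner the interval separates and the fewer samples are spent, which is the efficiency argument the abstract makes.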
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.
Bartley, David; Slaven, James; Harper, Martin
2017-03-01
The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
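For the Poisson limiting case mentioned above, the traditional exact confidence limits for a single count follow from the chi-square relationship. A brief sketch of these standard Garwood limits (a common benchmark in fiber counting, not the paper's negative binomial approximation):

```python
from scipy import stats

# Exact (Garwood) Poisson confidence limits for a single observed count,
# derived from the chi-square distribution
def poisson_ci(count, conf=0.95):
    alpha = 1 - conf
    lower = 0.0 if count == 0 else stats.chi2.ppf(alpha / 2, 2 * count) / 2
    upper = stats.chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lower, upper

lo, hi = poisson_ci(100)
print(lo, hi)  # approximately (81.4, 121.6) for an observed count of 100
```

The interval is asymmetric around the count, reflecting the skew of the Poisson distribution; a normal approximation would give the symmetric 100 ± 19.6.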
Directory of Open Access Journals (Sweden)
Sunil Kumar C
2014-01-01
Full Text Available With the number of students growing each year, there is a strong need to automate systems capable of evaluating descriptive answers. Unfortunately, there aren't many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish auto evaluation and scoring of descriptive answers. Our experiments are designed to cater to our primary goal of identifying the optimum training sample size so as to get optimum auto scoring. Besides the technical overview and the experiment design, the paper also covers the challenges and benefits of the system. We also discuss interdisciplinary areas for future research on this topic.
Awokunmi, E.E.; Asaolu, S.S.; Ajayi, O.O.; Adebayo, A.O.
2011-01-01
Ten heavy metals (Fe, Cu, Mn, Zn, Pb, Ni, Co, Cd, Cr and Sn) in fractioned and bulk soil samples collected from four dump sites located in Ado-Ekiti and Ikere-Ekiti, southwestern Nigeria, were analysed using a modified Tessier's procedure and acid digestion to obtain the distribution pattern of metals in this region. The metals were found to have been distributed in all phases, with Fe, Cr, and Sn dominating the residual fraction (90.12 - 94.88%); Co, Ni, Cu, and Zn were found in all the extrac...
Pooling overdispersed binomial data to estimate event rate.
Young-Xu, Yinong; Chan, K Arnold
2008-08-19
The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.
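A minimal sketch of such pooling (hypothetical arm data; maximum likelihood via scipy rather than the software discussed in the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

# Hypothetical arms: x events out of n patients, with varying rates
x = np.array([0, 2, 1, 5, 0, 3, 4, 1])
n = np.array([50, 60, 40, 80, 30, 70, 90, 45])

def neg_loglik(params):
    a, b = np.exp(params)  # optimize on the log scale to keep a, b > 0
    # Beta-binomial log-likelihood; the binomial coefficient is constant
    # in (a, b) and can be dropped
    return -np.sum(betaln(x + a, n - x + b) - betaln(a, b))

res = minimize(neg_loglik, x0=[0.0, 3.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
rate = a_hat / (a_hat + b_hat)  # pooled summary event rate
print(rate)                      # close to the crude rate sum(x)/sum(n)
```

The summary rate is the mean a/(a+b) of the fitted beta distribution; the magnitude of a+b reflects how much between-arm overdispersion the data show.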
International Nuclear Information System (INIS)
Amitabh, J.; Vaccaro, J.A.; Hill, K.E.
1998-01-01
We study the recently defined number-phase Wigner function S_NP(n,θ) for a single-mode field considered to be in binomial and negative binomial states. These states interpolate between Fock and coherent states and between coherent and quasi-thermal states, respectively, and thus provide a set of states with properties ranging from uncertain phase and sharp photon number to sharp phase and uncertain photon number. The distribution function S_NP(n,θ) gives a graphical representation of the complementary nature of the number and phase properties of these states. We highlight important differences between Wigner's quasiprobability function, which is associated with the position and momentum observables, and S_NP(n,θ), which is associated directly with the photon number and phase observables. We also discuss the number-phase entropic uncertainty relation for the binomial and negative binomial states, and we show that negative binomial states give a lower phase entropy than states which minimize the phase variance.
Sequential Methods and Their Applications
Mukhopadhyay, Nitis
2008-01-01
Illustrates the efficiency of sequential methodologies when dealing with contemporary statistical challenges in many areas. This book explores fixed sample size, sequential probability ratio, and nonparametric tests. It also presents multistage estimation methods for fixed-width confidence interval as well as minimum and bounded risk problems.
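As an illustration of the sequential probability ratio test the book covers, here is a sketch of Wald's SPRT for a Bernoulli parameter (thresholds from Wald's classical approximations; the parameters are illustrative):

```python
import math
import random

def sprt_bernoulli(p0, p1, draw, alpha=0.05, beta=0.05, max_n=100000):
    """Wald's sequential probability ratio test of H0: p = p0 vs H1: p = p1."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr, n = 0.0, 0
    while n < max_n:
        success = draw()
        n += 1
        llr += math.log(p1 / p0) if success else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, n

random.seed(1)
decision, n = sprt_bernoulli(0.2, 0.4, draw=lambda: random.random() < 0.4)
print(decision, n)  # usually accepts H1 after only a few dozen draws
```

The average sample number of the SPRT is typically far below the fixed sample size needed for the same error rates, which is the efficiency theme of the book.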
Canepari, Silvia; Perrino, Cinzia; Olivieri, Fabio; Astolfi, Maria Luisa
A study of the elemental composition and size distribution of atmospheric particulate matter and of its spatial and temporal variability has been conducted at two traffic sites and one urban background site in the area of Rome, Italy. Chemical analysis included the fractionation of 22 elements (Al, As, Ba, Ca, Cd, Co, Cr, Cu, Fe, Mg, Mn, Na, Ni, Pb, S, Sb, Si, Sn, Sr, Ti, Tl, V) into a water-extractable and a residual fraction. Size distribution analysis included measurements of aerosols in twelve size classes in the range 0.03-10 μm. The simultaneous determination of PM10 and PM2.5 at three sites during a 2-week study allowed the necessary evaluation of spatial and temporal concentration variations. The application of a chemical fractionation procedure to size-segregated samples proved to be a valuable approach for the characterisation of PM and for discriminating different emission sources. Extractable and residual fractions of the elements showed in fact different size distributions: for almost all elements the extractable fraction was mainly distributed in the fine particle size, while the residual fraction was in general predominant in the coarse size range. For some elements (As, Cd, Sb, Sn, V) the separation between the extractable fraction, almost quantitatively present in the fine mode particles, and the residual fraction, mainly distributed in the coarse mode particles, was almost quantitative. Under these conditions, the application of the chemical fractionation procedure to PM10 samples allows a clear distinction between contributions originating from fine and coarse particle emission sources. The results for PM(10-2.5) and PM2.5 daily samples confirmed that chemical fractionation analysis increases the selectivity of most elements as source tracers. Extractable and residual fractions of As, Mg, Ni, Pb, S, Sn, Tl, Sb, Cd and V showed different time patterns and different spatial and size distributions, clearly indicating that the two
Chu, Ning; Fan, Shihua
2009-12-01
A new analytical method was developed for the simultaneous kinetic spectrophotometric determination of a quaternary carbamate pesticide mixture consisting of carbofuran, propoxur, metolcarb and fenobucarb using sequential injection analysis (SIA). The procedure was based upon the different kinetic properties of the analytes reacting with the reagent in the flow system in the non-stopped-flow mode, in which their hydrolysis products coupled with diazotized p-nitroaniline in an alkaline medium to form the corresponding colored complexes. The absorbance data from the SIA peak time profile were recorded at 510 nm and resolved by the use of back-propagation artificial neural network (BP-ANN) algorithms for multivariate quantitative analysis. The experimental variables and main network parameters were optimized, and each of the pesticides could be determined in the concentration range of 0.5-10.0 μg mL-1, at a sampling frequency of 18 h-1. The proposed method was compared to other spectrophotometric methods for the simultaneous determination of mixtures of carbamate pesticides, and it proved to be adequately reliable; it was successfully applied to the simultaneous determination of the four pesticide residues in water and fruit samples, obtaining satisfactory results based on recovery studies (84.7-116.0%).
Thanasarakhan, Wish; Kruanetr, Senee; Deming, Richard L; Liawruangrath, Boonsom; Wangkarn, Sunantha; Liawruangrath, Saisunee
2011-06-15
A sequential injection analysis (SIA) spectrophotometric method for determining tetracycline (TC), chlortetracycline (CTC) and oxytetracycline (OTC) in different sample matrices is described. The method was based on the reaction between the tetracyclines and yttrium(III) in a weakly basic micellar medium, yielding light yellow complexes, which were monitored at 390, 392 and 395 nm, respectively. A cationic surfactant, cetyltrimethylammonium bromide (CTAB), was used to obtain the micellar system. The linear ranges of the calibration graphs were between 1.0 × 10(-5) and 4 × 10(-4) mol L(-1). The molar absorptivities were 5.24 × 10(5), 4.98 × 10(4) and 4.78 × 10(4) L mol(-1) cm(-1), respectively. The detection limits (3σ) were between 4.9 × 10(-6) and 7.8 × 10(-6) mol L(-1), whereas the limits of quantitation (10σ) were between 1.63 × 10(-5) and 2.60 × 10(-5) mol L(-1). The interday and intraday precisions within a week, expressed as relative standard deviations (R.S.D., n=11), were less than 4%. The method was rapid, with a sampling rate of over 60 samples h(-1) for the three drugs. The proposed method has been satisfactorily applied to the determination of tetracycline and its derivatives in pharmaceutical preparations, together with their residues in milk and honey samples collected in Chiang Mai Province. The accuracy was found to be high, as the Student's t-values were less than the theoretical ones. The results compared favorably with those obtained by a conventional spectrophotometric method. Copyright © 2011 Elsevier B.V. All rights reserved.
Anthemidis, Aristidis N; Ioannou, Kallirroy-Ioanna G
2009-06-30
A simple, sensitive and powerful on-line sequential injection (SI) dispersive liquid-liquid microextraction (DLLME) system was developed as an alternative approach for on-line metal preconcentration and separation, using an extraction solvent at microlitre volume. The potential of this novel scheme, coupled to flame atomic absorption spectrometry (FAAS), was demonstrated for trace copper and lead determination in water samples. The stream of methanol (disperser solvent) containing 2.0% (v/v) xylene (extraction solvent) and 0.3% (m/v) ammonium diethyldithiophosphate (chelating agent) was merged on-line with the stream of sample (aqueous phase), resulting in a cloudy mixture consisting of fine droplets of the extraction solvent dispersed throughout the aqueous phase. Through this continuous process, metal chelate complexes were formed and extracted into the fine droplets of the extraction solvent. The hydrophobic droplets of the organic phase were retained in a microcolumn packed with PTFE turnings. A portion of 300 microL of isobutyl methyl ketone was used for quantitative elution of the analytes, which were transported directly to the nebulizer of the FAAS. All the critical parameters of the system, such as the type of extraction solvent, the flow rates of disperser and sample, and the extraction time, as well as the chemical parameters, were studied. Under the optimum conditions the enhancement factors for copper and lead were 560 and 265, respectively. For copper, the detection limit and the precision (R.S.D.) were 0.04 microg L(-1) and 2.1% at 2.0 microg L(-1) Cu(II), respectively, while for lead they were 0.54 microg L(-1) and 1.9% at 30.0 microg L(-1) Pb(II), respectively. The developed method was evaluated by analyzing a certified reference material and was applied successfully to the analysis of environmental water samples.
On Using Selection Procedures with Binomial Models.
1983-10-01
Parameter estimation of the zero inflated negative binomial beta exponential distribution
Sirichantra, Chutima; Bodhisuwan, Winai
2017-11-01
The zero inflated negative binomial-beta exponential (ZINB-BE) distribution is developed as an alternative distribution for excessive zero counts with overdispersion. The ZINB-BE distribution is a mixture of two distributions: the Bernoulli and the negative binomial-beta exponential distributions. In this work, some characteristics of the proposed distribution, such as the mean and variance, are presented. Maximum likelihood estimation is applied to the parameters of the proposed distribution. Finally, results of a Monte Carlo simulation study suggest that the estimation is highly efficient when the sample size is large.
Negative binomial mixed models for analyzing microbiome count data.
Zhang, Xinyan; Mallick, Himel; Tang, Zaixiang; Zhang, Lei; Cui, Xiangqin; Benson, Andrew K; Yi, Nengjun
2017-01-03
Recent advances in next-generation sequencing (NGS) technology enable researchers to collect a large volume of metagenomic sequencing data. These data provide valuable resources for investigating interactions between the microbiome and host environmental/clinical factors. In addition to the well-known properties of microbiome count measurements, for example, varied total sequence reads across samples, over-dispersion and zero-inflation, microbiome studies usually collect samples with hierarchical structures, which introduce correlation among the samples and thus further complicate the analysis and interpretation of microbiome count data. In this article, we propose negative binomial mixed models (NBMMs) for detecting the association between the microbiome and host environmental/clinical factors for correlated microbiome count data. Although they do not address zero-inflation, the proposed mixed-effects models account for correlation among the samples by incorporating random effects into the commonly used fixed-effects negative binomial model, and can efficiently handle over-dispersion and varying total reads. We have developed a flexible and efficient IWLS (Iterative Weighted Least Squares) algorithm to fit the proposed NBMMs by taking advantage of the standard procedure for fitting linear mixed models. We evaluate and demonstrate the proposed method via extensive simulation studies and an application to mouse gut microbiome data. The results show that the proposed method has desirable properties and outperforms the previously used methods in terms of both empirical power and Type I error. The method has been incorporated into the freely available R package BhGLM ( http://www.ssg.uab.edu/bhglm/ and http://github.com/abbyyan3/BhGLM ), providing a useful tool for analyzing microbiome data.
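A small simulation (illustrative parameters, not the paper's NBMM fitting) shows how a shared per-sample random effect induces both overdispersion and within-cluster correlation in count data, which is what the random-effects term in an NBMM is meant to absorb:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each subject (cluster) shares a lognormal random effect on the mean,
# so replicate counts within a subject are correlated and the marginal
# distribution is overdispersed relative to Poisson
n_subjects, n_reps, base_mean = 200, 4, 10.0
u = rng.lognormal(mean=0.0, sigma=0.5, size=n_subjects)
mu = base_mean * u[:, None] * np.ones((n_subjects, n_reps))
y = rng.poisson(mu)

print(y.var() > y.mean())            # marginal overdispersion
r = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
print(r > 0)                         # within-subject correlation
```

Ignoring the cluster structure (e.g., fitting a plain fixed-effects model) would treat these correlated replicates as independent, which is the source of the underestimated standard errors discussed in the abstract.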
Horstkotte, Burkhard; Jarošová, Patrícia; Chocholouš, Petr; Sklenářová, Hana; Solich, Petr
2015-05-01
In this work, the applicability of Sequential Injection Chromatography to the determination of transition metals in water is evaluated for the separation of copper(II), zinc(II), and iron(II) cations. Separations were performed using a Dionex IonPAC™ guard column (50 mm × 2 mm i.d., 9 µm). The mobile phase composition and post-column reaction were optimized by the modified SIMPLEX method with a subsequent study of the concentration of each component. The mobile phase consisted of 2,6-pyridinedicarboxylic acid as the analyte-selective compound, sodium sulfate, and formic acid/sodium formate buffer. Post-column addition of 4-(2-pyridylazo)resorcinol was carried out for spectrophotometric detection of the analytes' complexes at 530 nm. Approaches to achieve higher robustness, baseline stability, and detection sensitivity by on-column stacking of the analytes and initial gradient implementation, as well as air-cushion pressure damping for post-column reagent addition, were studied. The method allowed the rapid separation of copper(II), zinc(II), and iron(II) within 6.5 min, including pump refilling and aspiration of sample and 1 mmol HNO3 for analyte stacking on the separation column. High sensitivity was achieved by applying an injection volume of up to 90 µL. A signal repeatability of <2% RSD of peak height was found. Analyte recovery, evaluated by spiking of different natural water samples, was well suited for routine analysis with sub-micromolar limits of detection. Copyright © 2015 Elsevier B.V. All rights reserved.
Zero inflated negative binomial-Sushila distribution and its application
Yamrubboon, Darika; Thongteeraparp, Ampai; Bodhisuwan, Winai; Jampachaisri, Katechan
2017-11-01
A new zero inflated distribution is proposed in this work, namely the zero inflated negative binomial-Sushila distribution. The new distribution, a mixture of the Bernoulli and negative binomial-Sushila distributions, is an alternative distribution for excess zero counts and overdispersion. Some characteristics of the proposed distribution are derived, including the probability mass function, mean and variance. Parameter estimation for the zero inflated negative binomial-Sushila distribution is implemented by the maximum likelihood method. In application, the proposed distribution can provide a better fit than the traditional zero inflated Poisson and zero inflated negative binomial distributions.
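A sketch of the zero-inflated mixture construction, substituting a plain negative binomial for the negative binomial-Sushila component for illustration (parameters are hypothetical):

```python
import numpy as np
from scipy import stats

# Zero-inflated NB pmf: with probability pi the count is a structural zero,
# otherwise it is drawn from NB(size, p)
def zinb_pmf(k, pi, size, p):
    pmf = (1 - pi) * stats.nbinom.pmf(k, size, p)
    return np.where(k == 0, pi + pmf, pmf)

k = np.arange(200)
pmf = zinb_pmf(k, pi=0.3, size=2.0, p=0.4)
print(pmf.sum())        # ~1.0: a valid probability distribution
mean = (k * pmf).sum()  # mixture mean is (1 - pi) times the NB mean of 3
print(mean)             # ~2.1
```

The same Bernoulli-mixture template applies when the count component is NB-Sushila: only the component pmf changes, while the inflated mass at zero and the (1 - pi) scaling of the mean carry over.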
Extending the Binomial Checkpointing Technique for Resilience
Energy Technology Data Exchange (ETDEWEB)
Walther, Andrea; Narayanan, Sri Hari Krishna
2016-10-10
In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss numerical results.
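The classical binomial checkpointing bound underlying this approach can be stated in a few lines (a standard result from Griewank's revolve analysis, not the paper's resilience extension):

```python
from math import comb

# Binomial checkpointing bound: with c checkpoints in memory and at most r
# repeated forward sweeps, comb(c + r, c) forward steps can be reversed
def max_steps(checkpoints, sweeps):
    return comb(checkpoints + sweeps, checkpoints)

print(max_steps(10, 3))  # 286 forward steps from only 10 checkpoints
```

Because the bound grows binomially, memory needs grow only logarithmically in the number of time steps for a fixed recomputation factor, which is why the binomial schedule is the usual starting point for extensions such as the resilience variant above.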
Fractional Sums and Differences with Binomial Coefficients
Directory of Open Access Journals (Sweden)
Thabet Abdeljawad
2013-01-01
Full Text Available In fractional calculus, there are two approaches to obtain fractional derivatives. The first approach is to iterate the integral and then define a fractional order by using the Cauchy formula, obtaining Riemann fractional integrals and derivatives. The second approach is to iterate the derivative and then define a fractional order by making use of the binomial theorem, obtaining Grünwald-Letnikov fractional derivatives. In this paper we formulate the delta and nabla discrete versions for left and right fractional integrals and derivatives representing the second approach. Then, we use the discrete version of the Q-operator and some discrete fractional dual identities to prove that the presented fractional differences and sums coincide with the discrete Riemann ones describing the first approach.
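A numeric sketch of the second (Grünwald-Letnikov) approach, building the fractional difference from generalized binomial coefficients (the step size and test function are illustrative):

```python
import math

# Grünwald-Letnikov fractional difference of order alpha with lower
# terminal 0: h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(x - k*h)
def gl_derivative(f, x, alpha, h=1e-3):
    n = int(x / h)
    total, c = 0.0, 1.0              # c tracks C(alpha, k), starting at 1
    for k in range(n + 1):
        total += (-1) ** k * c * f(x - k * h)
        c *= (alpha - k) / (k + 1)   # recurrence for generalized binomials
    return total / h ** alpha

# The half-derivative of f(x) = x is 2*sqrt(x/pi)
approx = gl_derivative(lambda t: t, 1.0, 0.5)
exact = 2 * math.sqrt(1.0 / math.pi)
print(abs(approx - exact) < 1e-2)
```

With integer alpha the generalized coefficients truncate to the usual binomial ones and the formula reduces to an ordinary finite difference, which is the sense in which the binomial theorem drives this approach.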
Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni
2017-12-01
Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia up to 2015. The Tetanus Neonatorum data exhibit overdispersion and a large proportion of zeros. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data containing both overdispersion and zero-inflation are more appropriately analyzed using Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, using NB and ZINB regression, and (2) to obtain the best model. The results indicate that ZINB is better than NB regression, with a smaller AIC.
Discovering Binomial Identities with PascGaloisJE
Evans, Tyler J.
2008-01-01
We describe exercises in which students use PascGaloisJE to formulate conjectures about certain binomial identities which hold when the binomial coefficients are interpreted as elements in the cyclic group Z[subscript p] of integers modulo a prime integer "p". In addition to having an appealing visual component, these exercises are open-ended and…
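One identity students can discover this way is Lucas' theorem, which reduces a binomial coefficient modulo a prime to its base-p digits and explains the Sierpinski-like patterns such visualizations produce; a short check:

```python
from math import comb

# Lucas' theorem: C(m, n) mod p equals the product of the binomial
# coefficients of the base-p digits of m and n
def binom_mod_p(m, n, p):
    result = 1
    while m or n:
        m, mi = divmod(m, p)
        n, ni = divmod(n, p)
        result = result * comb(mi, ni) % p  # comb(mi, ni) is 0 if ni > mi
    return result

p = 5
ok = all(binom_mod_p(m, n, p) == comb(m, n) % p
         for m in range(40) for n in range(m + 1))
print(ok)  # True
```

In particular, C(m, n) ≡ 0 (mod p) exactly when some base-p digit of n exceeds the corresponding digit of m, which is what produces the zero "holes" in Pascal's triangle mod p.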
Wigner Function of Density Operator for Negative Binomial Distribution
International Nuclear Information System (INIS)
Xu Xinglei; Li Hongqi
2008-01-01
By using the technique of integration within an ordered product (IWOP) of operators, we derive the Wigner function of the density operator for the negative binomial distribution of the radiation field in the mixed state case. We then derive the Wigner function of the squeezed number state, which yields the negative binomial distribution, by virtue of the entangled state representation and the entangled Wigner operator.
The Use of the Binomial Model in Employee Stock Option Pricing
Directory of Open Access Journals (Sweden)
Dara Puspita Anggraeni
2015-11-01
Full Text Available Binomial Model for Valuing Employee Stock Options. Employee stock options (ESOs) differ from standard exchange-traded options. There are three main differences in a valuation model for employee stock options: the vesting period, the exit rate and non-transferability. In this thesis, a model for valuing employee stock options is discussed. The model is implemented with a generalized binomial model.
Marginalized zero-inflated negative binomial regression with application to dental caries.
Preisser, John S; Das, Kalyan; Long, D Leann; Divaris, Kimon
2016-05-10
The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared with marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. Copyright © 2015 John Wiley & Sons, Ltd.
Validity of the negative binomial distribution in particle production
International Nuclear Information System (INIS)
Cugnon, J.; Harouna, O.
1987-01-01
Some aspects of the clan picture for particle production in nuclear and in high-energy processes are examined. In particular, it is shown that the requirement of having a logarithmic distribution for the number of particles within a clan in order to generate a negative binomial should not be taken strictly. Large departures are allowed without distorting the negative binomial too much. The question of the undetected particles is also studied. It is shown that, under reasonable circumstances, the latter do not affect the negative binomial character of the multiplicity distribution.
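The clan construction referred to above (a Poisson number of clans, each decaying into a logarithmically distributed number of particles, giving an exactly negative binomial total) can be checked by simulation. A sketch with illustrative parameters, using NumPy's `logseries` sampler:

```python
import numpy as np

# Clan-picture sketch (illustrative parameters): clans are produced
# independently (Poisson), each clan fragments into a log-series number of
# particles, and the per-event total is then NB with shape k and p = 1 - theta.
rng = np.random.default_rng(0)
theta, k = 0.6, 3.0
lam = -k * np.log(1.0 - theta)        # Poisson mean number of clans

n_events = 100_000
n_clans = rng.poisson(lam, n_events)
sizes = rng.logseries(theta, n_clans.sum())       # particles per clan
event_idx = np.repeat(np.arange(n_events), n_clans)
totals = np.bincount(event_idx, weights=sizes, minlength=n_events)

nb_mean = k * theta / (1.0 - theta)   # NB mean
nb_var = nb_mean / (1.0 - theta)      # NB variance = mean / p
print(round(totals.mean(), 2), nb_mean)   # sample vs. theoretical mean, close
```

The choice lam = -k·ln(1-θ) is exactly the rate for which the compound distribution's generating function matches the negative binomial's.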
Statistical Inference for a Class of Multivariate Negative Binomial Distributions
DEFF Research Database (Denmark)
Rubak, Ege H.; Møller, Jesper; McCullagh, Peter
This paper considers statistical inference procedures for a class of models for positively correlated count variables called α-permanental random fields, and which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been...
Multifractal structure of multiplicity distributions and negative binomials
International Nuclear Information System (INIS)
Malik, S.; Delhi, Univ.
1997-01-01
The paper presents experimental results of the multifractal structure analysis in proton-emulsion interactions at 800 GeV. The multiplicity moments have a power-law dependence on the mean multiplicity in varying bin sizes of pseudorapidity. The values of the generalised dimensions are calculated from the slope value. The multifractal characteristics are also examined in the light of negative binomials. The observed multiplicity moments and those derived from the negative-binomial fits agree well with each other. Also, the values of D_q, both observed and derived from the negative-binomial fits, not only decrease with q, typifying multifractality, but also agree well with each other, showing consistency with the negative-binomial form.
Lee, Bao-Shiang; Krishnanchettiar, Sangeeth; Lateef, Syed Salman; Li, Chun-Ting; Gupta, Shalini
2006-07-01
The phenomenon of proteins randomly associating among themselves during MALDI-TOF mass spectrometric measurements is manifest in all the proteins tested here. The magnitude of this random association seems to be protein-dependent. However, a detailed mathematical analysis of this process has not been reported so far. Here, binomial and multinomial equations are used to analyze the relative populations of multimer ions formed by random protein association during MALDI-TOF mass spectrometric measurements. Hemoglobin A (which consists of two α-globins and two β-globins) and biotinylated insulin (which contains intact, singly biotinylated, and doubly biotinylated insulin) are used as the test cases for two- and three-component protein systems, respectively. MALDI-TOF spectra are acquired using standard MALDI-TOF techniques and equipment. The binomial distribution matches the relative populations of multimer ions of Hb A perfectly. For the biotinylated insulin sample, taking into account the lower relative populations of doubly biotinylated and intact insulin compared with singly biotinylated insulin, the relative populations of multimer ions confirm the prediction of the multinomial equation. Pairs of unrelated proteins such as myoglobin, avidin, and lysozyme show lower intensities for heteromultimers than for homomultimers, indicating weaker propensities of different proteins to associate during MALDI-TOF mass spectrometric measurements. Contrary to the suggestion that multimer ions are formed in the solution phase, prior to MALDI-TOF mass spectrometric measurement, through multistage sequential reactions of aggregating protein molecules, we postulate that multimer ions of proteins are formed after the protein molecules have been vaporized into the gas phase with the assistance of the laser and matrix.
Minimal sequential Hausdorff spaces
Directory of Open Access Journals (Sweden)
Bhamini M. P. Nayar
2004-01-01
Full Text Available A sequential space (X,T) is called minimal sequential if no sequential topology on X is strictly weaker than T. This paper begins the study of minimal sequential Hausdorff spaces. Characterizations of minimal sequential Hausdorff spaces are obtained using filter bases, sequences, and functions satisfying certain graph conditions. Relationships between this class of spaces and other classes of spaces, for example, minimal Hausdorff spaces, countably compact spaces, H-closed spaces, SQ-closed spaces, and subspaces of minimal sequential spaces, are investigated. While the property of being sequential is not (in general) preserved by products, some information is provided on the question of when the product of minimal sequential spaces is minimal sequential.
Entanglement of Generalized Two-Mode Binomial States and Teleportation
International Nuclear Information System (INIS)
Wang Dongmei; Yu Youhong
2009-01-01
The entanglement of the generalized two-mode binomial states in the phase damping channel is studied by making use of the relative entropy of entanglement. It is shown that the factors q and p play crucial roles in controlling the relative entropy of entanglement. Furthermore, we propose a scheme for teleporting an unknown state via the generalized two-mode binomial states, and calculate the mean fidelity of the scheme. (general)
Detecting non-binomial sex allocation when developmental mortality operates.
Wilkinson, Richard D; Kapranas, Apostolos; Hardy, Ian C W
2016-11-07
Optimal sex allocation theory is one of the most intricately developed areas of evolutionary ecology. Under a range of conditions, particularly under population sub-division, selection favours sex being allocated to offspring non-randomly, generating non-binomial variances of offspring group sex ratios. Detecting non-binomial sex allocation is complicated by stochastic developmental mortality, as offspring sex can often only be identified on maturity with the sex of non-maturing offspring remaining unknown. We show that current approaches for detecting non-binomiality have limited ability to detect non-binomial sex allocation when developmental mortality has occurred. We present a new procedure using an explicit model of sex allocation and mortality and develop a Bayesian model selection approach (available as an R package). We use the double and multiplicative binomial distributions to model over- and under-dispersed sex allocation and show how to calculate Bayes factors for comparing these alternative models to the null hypothesis of binomial sex allocation. The ability to detect non-binomial sex allocation is greatly increased, particularly in cases where mortality is common. The use of Bayesian methods allows for the quantification of the evidence in favour of each hypothesis, and our modelling approach provides an improved descriptive capability over existing approaches. We use a simulation study to demonstrate substantial improvements in power for detecting non-binomial sex allocation in situations where current methods fail, and we illustrate the approach in real scenarios using empirically obtained datasets on the sexual composition of groups of gregarious parasitoid wasps. Copyright © 2016 Elsevier Ltd. All rights reserved.
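The dispersion idea underlying the abstract above can be sketched with the classical binomial chi-square check that such Bayesian procedures improve on. This is an illustrative sketch with hypothetical brood data and no developmental mortality; it is not the paper's model-selection method.

```python
def dispersion_ratio(males, sizes):
    """Classical over/under-dispersion check for brood sex counts (sketch):
    ratio of the binomial chi-square statistic to its degrees of freedom.
    R ~ 1 suggests binomial sex allocation, R > 1 over-dispersed
    (e.g. double binomial), R < 1 under-dispersed."""
    p = sum(males) / sum(sizes)
    chi2 = sum((m - n * p) ** 2 / (n * p * (1 - p))
               for m, n in zip(males, sizes))
    return chi2 / (len(males) - 1)

# Hypothetical broods of 4 offspring each: exact 2:2 allocation is
# under-dispersed; all-male or all-female broods are over-dispersed.
r_under = dispersion_ratio([2, 2, 2, 2, 2, 2], [4] * 6)
r_over = dispersion_ratio([0, 4, 0, 4, 0, 4], [4] * 6)
```

When offspring die before their sex can be recorded, this simple ratio becomes unreliable, which is the problem the abstract's explicit sex-allocation-plus-mortality model addresses.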
Negative Binomial Distribution and the multiplicity moments at the LHC
International Nuclear Information System (INIS)
Praszalowicz, Michal
2011-01-01
In this work we show that the latest LHC data on the multiplicity moments C_2–C_5 are well described by a two-step model in the form of a convolution of the Poisson distribution with an energy-dependent source function. For the source function we take the Γ distribution, which yields the Negative Binomial Distribution. No unexpected behavior of the Negative Binomial Distribution parameter k is found. We also give predictions for the higher energies of 10 and 14 TeV.
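The normalized moments C_q = ⟨n^q⟩/⟨n⟩^q of a negative binomial distribution can be computed by direct summation over the pmf. A sketch with illustrative parameter values (not the fitted LHC values):

```python
import math

def nb_normalized_moments(k, nbar, qmax=5, nmax=4000):
    """Normalized moments C_q = <n^q> / <n>^q of an NB distribution
    with shape parameter k and mean nbar, by direct summation."""
    p = k / (k + nbar)                         # NB "success" parameter
    pmf = [math.exp(math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
                    + k * math.log(p) + n * math.log(1.0 - p))
           for n in range(nmax)]
    raw = [sum(n ** q * w for n, w in enumerate(pmf)) for q in range(qmax + 1)]
    return [raw[q] / raw[1] ** q for q in range(2, qmax + 1)]

C2, C3, C4, C5 = nb_normalized_moments(k=2.0, nbar=25.0)
# For the NB distribution, C2 has the closed form 1 + 1/nbar + 1/k.
```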
Negative binomial properties and clan structure in multiplicity distributions
International Nuclear Information System (INIS)
Giovannini, A.; Van Hove, L.
1988-01-01
We review the negative binomial properties measured recently for many multiplicity distributions of high-energy hadronic and semi-leptonic reactions in selected rapidity intervals. We analyse them in terms of the "clan" structure which can be defined for any negative binomial distribution. By comparing reactions we exhibit a number of regularities for the average number N̄ of clans and the average charged multiplicity n̄_c per clan. 22 refs., 6 figs. (author)
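The clan variables of a negative binomial distribution follow directly from its parameters (Giovannini and Van Hove): the average number of clans is N̄ = k·ln(1 + n̄/k) and the average multiplicity per clan is n̄_c = n̄/N̄. A minimal sketch with illustrative values:

```python
import math

def clan_parameters(nbar, k):
    """Clan variables of an NB multiplicity distribution:
    average number of clans Nbar = k * ln(1 + nbar/k) and
    average charged multiplicity per clan nbar_c = nbar / Nbar."""
    Nbar = k * math.log(1.0 + nbar / k)
    return Nbar, nbar / Nbar

# Illustrative values: mean multiplicity nbar = 10, NB shape k = 2
Nbar, nbar_c = clan_parameters(10.0, 2.0)
```

In the Poisson limit k → ∞, N̄ → n̄ and n̄_c → 1, i.e. every clan contains a single particle.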
On some binomial [Formula: see text]-difference sequence spaces.
Meng, Jian; Song, Meimei
2017-01-01
In this paper, we introduce the binomial sequence spaces [Formula: see text], [Formula: see text] and [Formula: see text] by combining the binomial transformation and the difference operator. We prove the BK-property and some inclusion relations. Furthermore, we obtain Schauder bases and compute the α-, β- and γ-duals of these sequence spaces. Finally, we characterize matrix transformations on the sequence space [Formula: see text].
O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D
2015-01-01
Detectability of individual animals is highly variable and nearly always less than one; binomial mixture models can account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability.
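The standard binomial mixture (N-mixture) likelihood that the paper extends can be sketched for a single site: latent abundance is Poisson, repeat counts are binomial given abundance, and the latent abundance is marginalized by summation. This is a minimal illustrative sketch, not the paper's extended model with temporary emigration; all parameter values are assumptions.

```python
from math import comb, exp

def nmixture_likelihood(counts, lam, p, n_max=150):
    """Binomial N-mixture likelihood for one site (sketch): latent abundance
    N ~ Poisson(lam); each repeat count y_t ~ Binomial(N, p).
    N is marginalized out by direct summation up to n_max."""
    like = 0.0
    pois = exp(-lam)                  # Poisson pmf at N = 0, updated recursively
    for N in range(n_max + 1):
        if N > 0:
            pois *= lam / N
        if N >= max(counts):
            obs = 1.0
            for y in counts:
                obs *= comb(N, y) * p ** y * (1 - p) ** (N - y)
            like += pois * obs
    return like

# Hypothetical site: three repeat counts, detection probability 0.3
lik = nmixture_likelihood([3, 2, 4], lam=8.0, p=0.3)
```

As a sanity check, for a single zero count the likelihood reduces analytically to exp(-λp).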
Comparison of multinomial and binomial proportion methods for analysis of multinomial count data.
Galyean, M L; Wester, D B
2010-10-01
Simulation methods were used to generate 1,000 experiments, each with 3 treatments and 10 experimental units/treatment, in completely randomized (CRD) and randomized complete block designs. Data were counts in 3 ordered or 4 nominal categories from multinomial distributions. For the 3-category analyses, category probabilities were 0.6, 0.3, and 0.1, respectively, for 2 of the treatments, and 0.5, 0.35, and 0.15 for the third treatment. In the 4-category analysis (CRD only), probabilities were 0.3, 0.3, 0.2, and 0.2 for treatments 1 and 2 vs. 0.4, 0.4, 0.1, and 0.1 for treatment 3. The 3-category data were analyzed with generalized linear mixed models as an ordered multinomial distribution with a cumulative logit link or by regrouping the data (e.g., counts in 1 category/sum of counts in all categories), followed by analysis of single categories as binomial proportions. Similarly, the 4-category data were analyzed as a nominal multinomial distribution with a glogit link or by grouping data as binomial proportions. For the 3-category CRD analyses, empirically determined type I error rates based on pair-wise comparisons (F- and Wald χ² tests) did not differ between multinomial and individual binomial category analyses with 10 (P = 0.38 to 0.60) or 50 (P = 0.19 to 0.67) sampling units/experimental unit. When analyzed as binomial proportions, power estimates varied among categories, with analysis of the category with the greatest counts yielding power similar to the multinomial analysis. Agreement between methods (percentage of experiments with the same results for the overall test for treatment effects) varied considerably among categories analyzed and sampling unit scenarios for the 3-category CRD analyses. Power (F-test) was 24.3, 49.1, 66.9, 83.5, 86.8, and 99.7% for 10, 20, 30, 40, 50, and 100 sampling units/experimental unit for the 3-category multinomial CRD analyses. Results with randomized complete block design simulations were similar to those with the CRD.
Application of a binomial cusum control chart to monitor one drinking water indicator
Directory of Open Access Journals (Sweden)
Elisa Henning
2014-02-01
Full Text Available The aim of this study is to analyze the use of a binomial cumulative sum (CUSUM) chart to monitor the presence of total coliforms, biological indicators of quality of water supplies, in water treatment processes. The sample series were taken monthly from a water treatment plant and were analyzed from 2007 to 2009. The statistical treatment of the data was performed using GNU R, and routines were created for the approximation of the upper limit of the binomial CUSUM chart. Furthermore, a comparative study was conducted to investigate whether there is a significant difference in sensitivity between the use of CUSUM and the traditional Shewhart chart, the most commonly used chart in process monitoring. The results obtained demonstrate that this study was essential for making the right choice in selecting a chart for the statistical analysis of this process.
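A one-sided binomial CUSUM of the kind described above can be sketched with the standard log-likelihood-ratio scores. The data, rates, and decision limit below are illustrative assumptions, not the study's values:

```python
import math

def binomial_cusum(counts, n, p0, p1, h):
    """One-sided binomial CUSUM sketch: monitor X_t ~ Bin(n, p) for an
    upward shift from in-control p0 to out-of-control p1 > p0 using the
    log-likelihood-ratio score; signal when the statistic exceeds h."""
    w_pos = math.log(p1 * (1 - p0) / (p0 * (1 - p1)))  # score per positive sample
    w_per = n * math.log((1 - p1) / (1 - p0))          # score per period
    s, alarms = 0.0, []
    for t, x in enumerate(counts):
        s = max(0.0, s + x * w_pos + w_per)
        if s > h:
            alarms.append(t)
            s = 0.0                                    # restart after a signal
    return alarms

# Hypothetical data: n = 10 water samples per month, in-control coliform
# presence rate p0 = 0.05, shift of interest p1 = 0.2, decision limit h = 3.
in_control = binomial_cusum([0, 1, 0, 0, 1, 0], 10, 0.05, 0.2, 3.0)
shifted = binomial_cusum([3, 3, 4, 2], 10, 0.05, 0.2, 3.0)
```

The CUSUM accumulates evidence across periods, which is why it tends to detect small sustained shifts sooner than a Shewhart chart applied point by point.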
[Using log-binomial model for estimating the prevalence ratio].
Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue
2010-05-01
To estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitudes towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the presence of continuous covariates. We examined the association between individuals' attitudes towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when prevalence is high. SAS programs are provided to calculate prevalence ratios with or without continuous covariates in log-binomial regression analysis.
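The divergence between the prevalence ratio and the odds ratio at high outcome prevalence can be seen directly from a 2x2 table. A minimal sketch with hypothetical counts (the abstract's SAS-based regression machinery is not reproduced here):

```python
def pr_and_or(a, b, c, d):
    """Prevalence ratio and odds ratio from a 2x2 table:
    a/b = outcome present/absent among exposed, c/d among unexposed."""
    pr = (a / (a + b)) / (c / (c + d))
    oddsr = (a * d) / (b * c)
    return pr, oddsr

# Hypothetical high-prevalence outcome (80% vs 60%): the odds ratio is
# markedly larger than the prevalence ratio, as the abstract describes.
pr, oddsr = pr_and_or(80, 20, 60, 40)
print(round(pr, 2), round(oddsr, 2))   # 1.33 2.67
```

With a rare outcome the two measures nearly coincide, which is why the discrepancy appeared for men (common smoking) but not women in the study.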
Analysis of hypoglycemic events using negative binomial models.
Luo, Junxiang; Qu, Yongming
2013-01-01
Negative binomial regression is a standard model to analyze hypoglycemic events in diabetes clinical trials. Adjusting for baseline covariates could potentially increase the estimation efficiency of negative binomial regression. However, adjusting for covariates raises concerns about model misspecification, in which the negative binomial regression is not robust because of its requirement for strong model assumptions. In some literature, it was suggested to correct the standard error of the maximum likelihood estimator through introducing overdispersion, which can be estimated by the Deviance or Pearson Chi-square. We proposed to conduct the negative binomial regression using Sandwich estimation to calculate the covariance matrix of the parameter estimates together with Pearson overdispersion correction (denoted by NBSP). In this research, we compared several commonly used negative binomial model options with our proposed NBSP. Simulations and real data analyses showed that NBSP is the most robust to model misspecification, and the estimation efficiency will be improved by adjusting for baseline hypoglycemia. Copyright © 2013 John Wiley & Sons, Ltd.
Zero inflated Poisson and negative binomial regression models: application in education.
Salehi, Masoud; Roudbari, Masoud
2015-01-01
The numbers of failed courses and semesters of students are indicators of their performance. These counts have zero-inflated (ZI) distributions. Using ZI Poisson and negative binomial distributions, we can model these count data to find the associated factors and estimate the parameters. This study aims to investigate the important factors related to the educational performance of students. This cross-sectional study was performed in 2008-2009 at Iran University of Medical Sciences (IUMS), with a population of almost 6000 students; 670 students were selected using stratified random sampling. The educational and demographic data were collected from the University records. The study design was approved at IUMS and the students' data were kept confidential. Descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data. The data were analyzed using STATA. For the number of failed semesters, in both the ZI Poisson and ZI negative binomial models, students' total average and the quota system played the largest roles. For the number of failed courses, the total average and being at the undergraduate or master's level had the largest effects in both models. In all models the total average had the largest effect on the number of failed courses or semesters. The next most important factors were the quota system for failed semesters and the undergraduate and master's levels for failed courses. Therefore, the average has an important inverse effect on the numbers of failed courses and semesters.
Extra-binomial variation approach for analysis of pooled DNA sequencing data
Wallace, Chris
2012-01-01
Motivation: The invention of next-generation sequencing technology has made it possible to study the rare variants that are more likely to pinpoint causal disease genes. To make such experiments financially viable, DNA samples from several subjects are often pooled before sequencing. This induces large between-pool variation which, together with other sources of experimental error, creates over-dispersed data. Statistical analysis of pooled sequencing data needs to appropriately model this additional variance to avoid inflating the false-positive rate. Results: We propose a new statistical method based on an extra-binomial model to address the over-dispersion and apply it to pooled case-control data. We demonstrate that our model provides a better fit to the data than either a standard binomial model or a traditional extra-binomial model proposed by Williams and can analyse both rare and common variants with lower or more variable pool depths compared to the other methods. Availability: Package ‘extraBinomial’ is on http://cran.r-project.org/ Contact: chris.wallace@cimr.cam.ac.uk Supplementary information: Supplementary data are available at Bioinformatics Online. PMID:22976083
Chromosome aberration analysis based on a beta-binomial distribution
International Nuclear Information System (INIS)
Otake, Masanori; Prentice, R.L.
1983-10-01
The analyses carried out here generalize earlier studies of chromosomal aberrations in the populations of Hiroshima and Nagasaki by allowing extra-binomial variation in aberrant cell counts, corresponding to within-subject correlations in cell aberrations. Strong within-subject correlations were detected, with corresponding standard errors for the average number of aberrant cells that were often substantially larger than was previously assumed. The extra-binomial variation is accommodated in the analysis in the present report, as described in the section on dose-response models, by using a beta-binomial (B-B) variance structure. It is emphasized that we have generally satisfactory agreement between the observed and the B-B fitted frequencies by city-dose category. The chromosomal aberration data considered here are not extensive enough to allow a precise discrimination between competing dose-response models. A quadratic gamma-ray and linear neutron model, however, most closely fits the chromosome data. (author)
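The beta-binomial variance structure used above can be sketched directly: a binomial count whose success probability is itself Beta-distributed has its variance inflated by the factor 1 + (n-1)ρ relative to the binomial. All parameter values below are illustrative, not the study's estimates.

```python
import math

def beta_binomial_pmf(x, n, a, b):
    """Beta-binomial pmf: a Bin(n, p) count whose p is itself
    Beta(a, b)-distributed, giving extra-binomial (B-B) variation."""
    logbeta = lambda u, v: math.lgamma(u) + math.lgamma(v) - math.lgamma(u + v)
    return math.exp(math.log(math.comb(n, x))
                    + logbeta(x + a, n - x + b) - logbeta(a, b))

# Illustrative parameters: n = 100 cells scored per subject, mean aberrant
# fraction mu = 0.2, within-subject correlation rho = 1/11.
n, a, b = 100, 2.0, 8.0
mu, rho = a / (a + b), 1.0 / (a + b + 1.0)
mean = sum(x * beta_binomial_pmf(x, n, a, b) for x in range(n + 1))
var = sum((x - mean) ** 2 * beta_binomial_pmf(x, n, a, b) for x in range(n + 1))
# var equals n*mu*(1-mu) * (1 + (n-1)*rho): inflated over the binomial value
```

Here the binomial variance would be 16, while the beta-binomial variance is 160, illustrating why standard errors computed under a pure binomial model can be far too small.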
Hadronic multiplicity distributions: the negative binomial and its alternatives
International Nuclear Information System (INIS)
Carruthers, P.
1986-01-01
We review properties of the negative binomial distribution, along with its many possible statistical or dynamical origins. Considering the relation of the multiplicity distribution to the density matrix for boson systems, we re-introduce the partially coherent laser distribution, which allows for coherent as well as incoherent hadronic emission from the k fundamental cells, and provides equally good phenomenological fits to existing data. The broadening of non-single-diffractive hadron-hadron distributions can equally well be due to the decrease of coherence with increasing energy as to the large (and rapidly decreasing) values of k deduced from negative binomial fits. Similarly, the narrowness of the e⁺e⁻ multiplicity distribution is due to nearly coherent (therefore nearly Poissonian) emission from a small number of jets, in contrast to the negative binomial with enormous values of k. 31 refs
The k-Binomial Transforms and the Hankel Transform
Spivey, Michael Z.; Steil, Laura L.
2006-01-01
We give a new proof of the invariance of the Hankel transform under the binomial transform of a sequence. Our method of proof leads to three variations of the binomial transform; we call these the k-binomial transforms. We give a simple means of constructing these transforms via a triangle of numbers. We show how the exponential generating function of a sequence changes after our transforms are applied, and we use this to prove that several sequences in the On-Line Encyclopedia of Integer Sequences are related via our transforms. In the process, we prove three conjectures in the OEIS. Addressing a question of Layman, we then show that the Hankel transform of a sequence is invariant under one of our transforms, and we show how the Hankel transform changes after the other two transforms are applied. Finally, we use these results to determine the Hankel transforms of several integer sequences.
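The invariance result discussed above can be verified computationally. A sketch using the Catalan numbers, whose Hankel transform is the all-ones sequence; the determinants are computed exactly over the rationals:

```python
from fractions import Fraction
from math import comb

def binomial_transform(a):
    """Binomial transform: b_n = sum_j C(n, j) * a_j."""
    return [sum(comb(n, j) * a[j] for j in range(n + 1)) for n in range(len(a))]

def det(M):
    """Exact determinant over the rationals (Gaussian elimination)."""
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i]), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def hankel_transform(a):
    """h_k = det of the (k+1) x (k+1) Hankel matrix [a_{i+j}]."""
    kmax = len(a) // 2
    return [det([[a[i + j] for j in range(k + 1)] for i in range(k + 1)])
            for k in range(kmax + 1)]

# Catalan numbers: Hankel transform is all ones, and it is unchanged
# when the binomial transform is applied first.
catalan = [1, 1, 2, 5, 14, 42, 132, 429, 1430]
```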
Interpretations and implications of negative binomial distributions of multiparticle productions
International Nuclear Information System (INIS)
Arisawa, Tetsuo
2006-01-01
The number of particles produced in high-energy experiments is approximated by a negative binomial distribution. Deriving a representation of the distribution from a stochastic equation, conditions for the process to satisfy the distribution are clarified. Based on these, it is proposed that multiparticle production consists of spontaneous and induced production. The rate of induced production is proportional to the number of existing particles. The ratio of the two production rates remains constant during the process. The "NBD space" is also defined, where the number of particles produced in its subspaces follows negative binomial distributions with different parameters.
International Nuclear Information System (INIS)
Gopinath, T.; Mote, Kaustubh R.; Veglia, Gianluigi
2015-01-01
We present a new method called DAISY (Dual Acquisition orIented ssNMR spectroScopY) for the simultaneous acquisition of 2D and 3D oriented solid-state NMR experiments for membrane proteins reconstituted in mechanically or magnetically aligned lipid bilayers. DAISY utilizes dual acquisition of sine and cosine dipolar or chemical shift coherences and long-living ¹⁵N longitudinal polarization to obtain two multi-dimensional spectra simultaneously. In these new experiments, the first acquisition gives the polarization inversion spin exchange at the magic angle (PISEMA) or heteronuclear correlation (HETCOR) spectra, the second acquisition gives PISEMA-mixing or HETCOR-mixing spectra, where the mixing element enables inter-residue correlations through ¹⁵N–¹⁵N homonuclear polarization transfer. The analysis of the two 2D spectra (first and second acquisitions) enables one to distinguish ¹⁵N–¹⁵N inter-residue correlations for sequential assignment of membrane proteins. DAISY can be implemented in 3D experiments that include the polarization inversion spin exchange at magic angle via I spin coherence (PISEMAI) sequence, as we show for the simultaneous acquisition of 3D PISEMAI–HETCOR and 3D PISEMAI–HETCOR-mixing experiments
Directory of Open Access Journals (Sweden)
R Drew Carleton
Full Text Available Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
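The pre-sampling simulation idea can be sketched by bootstrapping a warm-up sample to find the smallest fixed sample size meeting a target relative precision (SE/mean), as in the enumerative plans discussed in these records. This is an illustrative sketch, not the authors' software; the data and parameters are hypothetical.

```python
import random
import statistics

def required_sample_size(presample, precision=0.25, n_boot=400, n_max=200, seed=1):
    """Pre-sampling simulation sketch: bootstrap the pre-sample counts to
    find the smallest sample size whose average relative precision
    (SE / mean) meets the target, as an alternative to sequential plans."""
    rng = random.Random(seed)
    for n in range(5, n_max + 1, 5):
        rels = []
        for _ in range(n_boot):
            s = [rng.choice(presample) for _ in range(n)]
            m = statistics.fmean(s)
            if m > 0:
                rels.append(statistics.stdev(s) / n ** 0.5 / m)
        if rels and statistics.fmean(rels) <= precision:
            return n
    return None

# Hypothetical pre-sample: galls per tree from one warm-up visit
galls = [0, 1, 2, 0, 3, 1, 0, 2, 4, 1, 0, 2, 1, 0, 3]
n_025 = required_sample_size(galls, precision=0.25)
n_015 = required_sample_size(galls, precision=0.15)
```

Tightening the precision target from 0.25 to 0.15 roughly scales the required sample size by (0.25/0.15)², mirroring the large gap between the research-grade and IPM-grade enumerative plans described above.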
Generation of the reciprocal-binomial state for optical fields
International Nuclear Information System (INIS)
Valverde, C.; Avelar, A.T.; Baseia, B.; Malbouisson, J.M.C.
2003-01-01
We compare the efficiencies of two interesting schemes to generate truncated states of the light field in running modes, namely the 'quantum scissors' and the 'beam-splitter array' schemes. The latter is applied to create the reciprocal-binomial state as a travelling wave, required to implement recent experimental proposals of phase-distribution determination and of quantum lithography
Improved binomial charts for monitoring high-quality processes
Albers, Willem/Wim
2009-01-01
For processes concerning attribute data with (very) small failure rate p, often negative binomial control charts are used. The decision whether to stop or continue is made each time r failures have occurred, for some r≥1. Finding the optimal r for detecting a given increase of p first requires
Calculation of generalized secant integral using binomial coefficients
International Nuclear Information System (INIS)
Guseinov, I.I.; Mamedov, B.A.
2004-01-01
A single series expansion relation is derived for the generalized secant (GS) integral in terms of binomial coefficients, exponential integrals and incomplete gamma functions. The convergence of the series is tested for concrete cases of the parameters. The formulas given in this study for the evaluation of the GS integral show a good rate of convergence and numerical stability.
Selecting Tools to Model Integer and Binomial Multiplication
Pratt, Sarah Smitherman; Eddy, Colleen M.
2017-01-01
Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
Currency lookback options and observation frequency: A binomial approach
T.H.F. Cheuk; A.C.F. Vorst (Ton)
1997-01-01
In the last decade, interest in exotic options has been growing, especially in the over-the-counter currency market. In this paper we consider lookback currency options, which are path-dependent. We show that a one-state-variable binomial model for currency lookback options can
Application of Negative Binomial Regression for Assessing Public ...
African Journals Online (AJOL)
Because the variance was nearly two times greater than the mean, the negative binomial regression model provided an improved fit to the data and accounted better for overdispersion than the Poisson regression model, which assumed that the mean and variance are the same. The level of education and race were found
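The variance-greater-than-mean diagnostic described here is easy to reproduce. Below is a minimal, self-contained Python sketch (not tied to the study's data; the parameters r and p are illustrative) that simulates negative binomial counts as a gamma-Poisson mixture and confirms the overdispersion signature:

```python
import math
import random

random.seed(1)

def sample_negative_binomial(r, p, n):
    """Draw n negative binomial counts as a gamma-Poisson mixture:
    lam ~ Gamma(shape=r, scale=(1-p)/p), then count ~ Poisson(lam)."""
    counts = []
    for _ in range(n):
        lam = random.gammavariate(r, (1 - p) / p)
        # Poisson draw by inversion of the CDF (adequate for moderate lam)
        k, term = 0, math.exp(-lam)
        cum, u = term, random.random()
        while u > cum:
            k += 1
            term *= lam / k
            cum += term
        counts.append(k)
    return counts

data = sample_negative_binomial(r=2.0, p=0.4, n=5000)
mean = sum(data) / len(data)
var = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
print(mean, var)  # sample variance well above the mean: overdispersion
```

A Poisson model forced onto such data would understate the variance by roughly the same factor the abstract reports.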
Cizdziel, James V.
2011-01-01
In this laboratory experiment, students quantitatively determine the concentration of an element (mercury) in an environmental or biological sample while comparing and contrasting the fundamental techniques of atomic absorption spectrometry (AAS) and atomic fluorescence spectrometry (AFS). A mercury analyzer based on sample combustion,…
Directory of Open Access Journals (Sweden)
David Shilane
2013-01-01
Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
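As a rough illustration of the zero-removal idea (an illustrative stand-in, not the authors' exact growth estimator; the data and the number of zeros removed are made up), the sketch below drops a small, predetermined number of zeros and then forms an ordinary normal-style confidence interval for the mean:

```python
import math

def growth_style_ci(data, zeros_removed=1, z=1.96):
    """Normal-style CI for the mean after dropping a small, predetermined
    number of zeros -- an illustrative stand-in for the paper's growth
    estimators, not their exact adjustment."""
    trimmed = list(data)
    for _ in range(zeros_removed):
        if 0 in trimmed:
            trimmed.remove(0)
    n = len(trimmed)
    mean = sum(trimmed) / n
    var = sum((x - mean) ** 2 for x in trimmed) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

# Highly dispersed toy counts with many zeros (made-up data)
lo, hi = growth_style_ci([0, 0, 0, 1, 3, 7, 0, 2, 0, 5], zeros_removed=2)
print(lo, hi)
```

Removing zeros shifts the interval upward and away from the boundary, which is the bias-for-coverage trade the abstract describes.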
Stochastic analysis of complex reaction networks using binomial moment equations.
Barzel, Baruch; Biham, Ofer
2012-09-01
The stochastic analysis of complex reaction networks is a difficult problem because the number of microscopic states in such systems increases exponentially with the number of reactive species. Direct integration of the master equation is thus infeasible and is most often replaced by Monte Carlo simulations. While Monte Carlo simulations are a highly effective tool, equation-based formulations are more amenable to analytical treatment and may provide deeper insight into the dynamics of the network. Here, we present a highly efficient equation-based method for the analysis of stochastic reaction networks. The method is based on the recently introduced binomial moment equations [Barzel and Biham, Phys. Rev. Lett. 106, 150602 (2011)]. The binomial moments are linear combinations of the ordinary moments of the probability distribution function of the population sizes of the interacting species. They capture the essential combinatorics of the reaction processes reflecting their stoichiometric structure. This leads to a simple and transparent form of the equations, and allows a highly efficient and surprisingly simple truncation scheme. Unlike ordinary moment equations, in which the inclusion of high order moments is prohibitively complicated, the binomial moment equations can be easily constructed up to any desired order. The result is a set of equations that enables the stochastic analysis of complex reaction networks under a broad range of conditions. The number of equations is dramatically reduced from the exponential proliferation of the master equation to a polynomial (and often quadratic) dependence on the number of reactive species in the binomial moment equations. The aim of this paper is twofold: to present a complete derivation of the binomial moment equations; to demonstrate the applicability of the moment equations for a representative set of example networks, in which stochastic effects play an important role.
Wagner, Brandie; Riggs, Paula; Mikulich-Gilbertson, Susan
2015-01-01
It is important to correctly understand the associations among addiction to multiple drugs and between co-occurring substance use and psychiatric disorders. Substance-specific outcomes (e.g. number of days used cannabis) have distributional characteristics which range widely depending on the substance and the sample being evaluated. We recommend a four-part strategy for determining the appropriate distribution for modeling substance use data. We demonstrate this strategy by comparing the model fit and resulting inferences from applying four different distributions to model use of substances that range greatly in the prevalence and frequency of their use. Using Timeline Followback (TLFB) data from a previously-published study, we used negative binomial, beta-binomial and their zero-inflated counterparts to model proportion of days during treatment of cannabis, cigarettes, alcohol, and opioid use. The fit for each distribution was evaluated with statistical model selection criteria, visual plots and a comparison of the resulting inferences. We demonstrate the feasibility and utility of modeling each substance individually and show that no single distribution provides the best fit for all substances. Inferences regarding use of each substance and associations with important clinical variables were not consistent across models and differed by substance. Thus, the distribution chosen for modeling substance use must be carefully selected and evaluated because it may impact the resulting conclusions. Furthermore, the common procedure of aggregating use across different substances may not be ideal.
Random sequential adsorption of cubes.
Cieśla, Michał; Kubala, Piotr
2018-01-14
Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
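A stripped-down 2D analog conveys the core of the algorithm: propose positions at random, accept only non-overlapping particles, and measure the covered fraction. The sketch below uses axis-aligned squares rather than the freely oriented 3D cubes of the study, and the side length and attempt budget are arbitrary choices with no saturation criterion:

```python
import random

random.seed(0)

SIDE = 0.05  # square side length (arbitrary)

def rsa_squares(side=SIDE, attempts=20000):
    """Random sequential adsorption of equal axis-aligned squares in the
    unit box: each proposal is kept only if it overlaps no earlier square."""
    placed = []
    for _ in range(attempts):
        x = random.uniform(0, 1 - side)
        y = random.uniform(0, 1 - side)
        # Two aligned squares overlap iff they overlap on both axes.
        if all(abs(x - px) >= side or abs(y - py) >= side
               for px, py in placed):
            placed.append((x, y))
    return placed

squares = rsa_squares()
fraction = len(squares) * SIDE ** 2  # covered area fraction
print(len(squares), fraction)
```

The covered fraction grows quickly at first and then stalls as acceptable gaps become rare, which is the packing-growth kinetics the paper quantifies.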
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to the increase of injury frequency. 2917 primary and secondary school students were selected from Hefei by the cluster random sampling method and surveyed by questionnaire. The data on count event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors behind increased unintentional injury frequency among juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion; compared with the negative binomial regression model, the modified Poisson regression model fitted better. Both showed that male gender, younger age, a father working outside of the hometown, a guardian educated above junior high school, and smoking were associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and this model could give a more accurate interpretation of relevant factors affecting the frequency of injury.
International Nuclear Information System (INIS)
Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao
2016-01-01
The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in the CMOS image sensor under low light. In this paper, a physical and statistical model of the random telegraph signal noise in the pixel source follower, based on the binomial distribution, is set up. The number of electrons captured or released by the oxide traps per unit time is described as a random variable that obeys the binomial distribution. As a result, the output states and the corresponding probabilities of the first and the second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has statistical characteristics similar to those of the existing models under the effect of the channel length and the density of the oxide trap. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
Generalization of Binomial Coefficients to Numbers on the Nodes of Graphs
Khmelnitskaya, A.; van der Laan, G.; Talman, Dolf
2016-01-01
The triangular array of binomial coefficients, or Pascal's triangle, is formed by starting with an apex of 1. Every row of Pascal's triangle can be seen as a line-graph, to each node of which the corresponding binomial coefficient is assigned. We show that the binomial coefficient of a node is equal
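The line-graph picture coincides with the familiar additive recurrence on rows: each node's number is the sum of its (at most two) parents in the row above. A short sketch of this standard construction (the paper's generalization to arbitrary graphs is not reproduced):

```python
import math

def pascal_row(n):
    """Row n of Pascal's triangle via the additive recurrence,
    mirroring the line-graph view of each row."""
    row = [1]
    for _ in range(n):
        # Pad with zeros on both sides and add neighbours.
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

row5 = pascal_row(5)
print(row5)  # [1, 5, 10, 10, 5, 1]
assert row5 == [math.comb(5, k) for k in range(6)]
```

The closing assertion checks the recurrence against the closed-form binomial coefficients.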
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and sample size is small. A number of more advanced methods are available, but they are often technically challenging and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions; namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; in addition, we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
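To see why clustered binary data distort binomial-based inference, the following sketch (with illustrative parameters, not the paper's simulation design) draws each subject's success probability from a beta distribution and compares the resulting count variance with the fixed-p binomial variance:

```python
import random

random.seed(7)

def beta_binomial(n_trials, alpha, beta, n_subjects):
    """Per-subject success counts: each subject draws its own success
    probability from Beta(alpha, beta), then runs n_trials Bernoulli trials."""
    counts = []
    for _ in range(n_subjects):
        p = random.betavariate(alpha, beta)
        counts.append(sum(random.random() < p for _ in range(n_trials)))
    return counts

n = 20
counts = beta_binomial(n, alpha=2, beta=2, n_subjects=4000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
binomial_var = n * 0.5 * 0.5  # variance if p were fixed at its mean, 0.5
print(mean, var, binomial_var)
```

The between-subject spread in p inflates the variance several-fold over the plain binomial value, which is exactly the extra-binomial variation that beta-binomial regression models and Poisson regression ignores.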
Hernando, M D; Agüera, A; Fernández-Alba, A R; Piedra, L; Contreras, M
2001-01-01
A selective and sensitive chromatographic method is described for the determination of nine organochlorine and organophosphorus pesticides in vegetable samples by gas chromatography-mass spectrometry. The proposed method combines the use of positive and negative chemical ionisation and tandem mass spectrometric fragmentation, resulting in a significant increase in selectivity and allowing the simultaneous confirmation and quantification of trace levels of pesticides in complex vegetable matrices. Parameters relative to ionisation and fragmentation processes were optimised to obtain maximum sensitivity. Repeatability and reproducibility studies yielded relative standard deviations lower than 25% in all cases. Identification criteria, such as retention time and relative abundance of characteristic product ions, were also evaluated in order to guarantee the correct identification of the target compounds. The method was applied to real vegetable samples to demonstrate its use in routine analysis.
Determination of finite-difference weights using scaled binomial windows
Chu, Chunlei
2012-05-01
The finite-difference method evaluates a derivative through a weighted summation of function values from neighboring grid nodes. Conventional finite-difference weights can be calculated either from Taylor series expansions or by Lagrange interpolation polynomials. The finite-difference method can be interpreted as a truncated convolutional counterpart of the pseudospectral method in the space domain. For this reason, we also can derive finite-difference operators by truncating the convolution series of the pseudospectral method. Various truncation windows can be employed for this purpose and they result in finite-difference operators with different dispersion properties. We found that there exist two families of scaled binomial windows that can be used to derive conventional finite-difference operators analytically. With a minor change, these scaled binomial windows can also be used to derive optimized finite-difference operators with enhanced dispersion properties. © 2012 Society of Exploration Geophysicists.
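For the shortest central stencil, the conventional Taylor-derived weights of an n-th derivative reduce to alternating binomial coefficients, which is the starting point for the windowed operators discussed here. A minimal numerical check (the paper's scaled and optimized windows are not reproduced):

```python
import math

def central_diff(f, x, h, n):
    """n-th derivative via the classic binomial stencil:
    f^(n)(x) ~ h**-n * sum_k (-1)**k * C(n, k) * f(x + (n/2 - k) * h)."""
    total = sum((-1) ** k * math.comb(n, k) * f(x + (n / 2 - k) * h)
                for k in range(n + 1))
    return total / h ** n

# Second derivative of x**3 is 6x; the 3-point stencil [1, -2, 1] is exact here.
print(central_diff(lambda x: x ** 3, 1.0, 0.5, 2))  # 6.0
```

For n = 2 the formula recovers the familiar weights (f(x+h) - 2f(x) + f(x-h)) / h².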
Fat suppression in MR imaging with binomial pulse sequences
International Nuclear Information System (INIS)
Baudovin, C.J.; Bryant, D.J.; Bydder, G.M.; Young, I.R.
1989-01-01
This paper reports on a study to develop pulse sequences allowing suppression of the fat signal on MR images without eliminating signal from other tissues with short T1. They have developed such a technique, involving selective excitation of protons in water, based on a binomial pulse sequence. Imaging is performed at 0.15 T. Careful shimming is performed to maximize separation of the fat and water peaks. A spin-echo 1,500/80 sequence is used, employing a 90-degree pulse with transmit frequency optimized for water and null excitation at 20 Hz offset, followed by a section-selective 180-degree pulse. With use of the binomial sequence for imaging, a reduction in fat signal is seen on images of the pelvis and legs of volunteers. Patient studies show dramatic improvement in visualization of prostatic carcinoma compared with standard sequences
On pricing futures options on random binomial tree
International Nuclear Information System (INIS)
Bayram, Kamola; Ganikhodjaev, Nasir
2013-01-01
The discrete-time approach to real option valuation has typically been implemented in the finance literature using a binomial tree framework. Instead we develop a new model by randomizing the environment, and call such a model a random binomial tree. Whereas the usual model has only one environment (u, d), where the price of the underlying asset can move up by a factor u or down by a factor d, with the pair (u, d) constant over the life of the underlying asset, in our new model the underlying security moves between two environments, (u1, d1) and (u2, d2). Thus we obtain two volatilities σ1 and σ2. This new approach enables calculations reflecting the real market, since it considers the two states of the market: normal and extraordinary. In this paper we define and study futures options for such models.
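For reference, the standard single-environment tree that this paper generalizes can be sketched as a Cox-Ross-Rubinstein European call (the two-environment randomization itself is not implemented; all parameter values are illustrative):

```python
import math

def crr_call(s0, strike, r, sigma, t, steps):
    """European call on a standard (single-environment) CRR binomial tree;
    the paper's random tree would switch between two (u, d) pairs."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs, then backward induction through the tree.
    values = [max(s0 * u ** j * d ** (steps - j) - strike, 0.0)
              for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

price = crr_call(100, 100, 0.05, 0.2, 1.0, 200)
print(round(price, 2))
```

With 200 steps the tree price is close to the continuous-time Black-Scholes value for the same inputs; randomizing (u, d) as in the paper would effectively mix two such volatilities.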
PENERAPAN REGRESI BINOMIAL NEGATIF UNTUK MENGATASI OVERDISPERSI PADA REGRESI POISSON
Directory of Open Access Journals (Sweden)
PUTU SUSAN PRADAWATI
2013-09-01
Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires equidispersion, in which the mean value of the response variable is equal to its variance. However, there are deviations in which the variance of the response variable is greater than the mean. This is called overdispersion. If overdispersion occurs and Poisson regression analysis is used anyway, then the standard errors will be underestimated. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. For simulated data exhibiting overdispersion under the Poisson regression model, it was found that negative binomial regression performed better than the Poisson regression model.
Data analysis using the Binomial Failure Rate common cause model
International Nuclear Information System (INIS)
Atwood, C.L.
1983-09-01
This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers
e+e- hadronic multiplicity distributions: negative binomial or Poisson
International Nuclear Information System (INIS)
Carruthers, P.; Shih, C.C.
1986-01-01
On the basis of fits to the multiplicity distributions for variable rapidity windows and the forward-backward correlation for the 2-jet subset of e+e- data, it is impossible to distinguish between a global negative binomial and its generalization, the partially coherent distribution. It is suggested that intensity interferometry, especially the Bose-Einstein correlation, gives information which will discriminate among dynamical models. 16 refs
Hits per trial: Basic analysis of binomial data
International Nuclear Information System (INIS)
Atwood, C.L.
1994-09-01
This report presents simple statistical methods for analyzing binomial data, such as the number of failures in some number of demands. It gives point estimates, confidence intervals, and Bayesian intervals for the failure probability. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model when the failure probability varies randomly. Examples and SAS programs are given
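For instance, a point estimate with a Wilson confidence interval, one common frequentist choice for failures-in-demands data, can be computed directly (the report itself also presents exact and Bayesian intervals, and its worked examples use SAS; the counts below are made up):

```python
import math

def wilson_interval(failures, demands, z=1.96):
    """Point estimate and approximate 95% Wilson CI for a
    failure-on-demand probability."""
    p = failures / demands
    denom = 1 + z ** 2 / demands
    center = (p + z ** 2 / (2 * demands)) / denom
    half = z * math.sqrt(p * (1 - p) / demands
                         + z ** 2 / (4 * demands ** 2)) / denom
    return p, center - half, center + half

# Example: 3 failures observed in 450 demands (hypothetical data)
p, lo, hi = wilson_interval(3, 450)
print(p, lo, hi)
```

Unlike the naive normal interval, the Wilson interval stays inside (0, 1) even when failures are rare, which is the typical regime for this kind of data.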
Standardized binomial models for risk or prevalence ratios and differences.
Richardson, David B; Kinlaw, Alan C; MacLehose, Richard F; Cole, Stephen R
2015-10-01
Epidemiologists often analyse binary outcomes in cohort and cross-sectional studies using multivariable logistic regression models, yielding estimates of adjusted odds ratios. It is widely known that the odds ratio closely approximates the risk or prevalence ratio when the outcome is rare, and it does not do so when the outcome is common. Consequently, investigators may decide to directly estimate the risk or prevalence ratio using a log binomial regression model. We describe the use of a marginal structural binomial regression model to estimate standardized risk or prevalence ratios and differences. We illustrate the proposed approach using data from a cohort study of coronary heart disease status in Evans County, Georgia, USA. The approach reduces problems with model convergence typical of log binomial regression by shifting all explanatory variables except the exposures of primary interest from the linear predictor of the outcome regression model to a model for the standardization weights. The approach also facilitates evaluation of departures from additivity in the joint effects of two exposures. Epidemiologists should consider reporting standardized risk or prevalence ratios and differences in cohort and cross-sectional studies. These are readily-obtained using the SAS, Stata and R statistical software packages. The proposed approach estimates the exposure effect in the total population. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
Abstract knowledge versus direct experience in processing of binomial expressions.
Morgan, Emily; Levy, Roger
2016-12-01
We ask whether word order preferences for binomial expressions of the form A and B (e.g. bread and butter) are driven by abstract linguistic knowledge of ordering constraints referencing the semantic, phonological, and lexical properties of the constituent words, or by prior direct experience with the specific items in question. Using forced-choice and self-paced reading tasks, we demonstrate that online processing of never-before-seen binomials is influenced by abstract knowledge of ordering constraints, which we estimate with a probabilistic model. In contrast, online processing of highly frequent binomials is primarily driven by direct experience, which we estimate from corpus frequency counts. We propose a trade-off wherein processing of novel expressions relies upon abstract knowledge, while reliance upon direct experience increases with increased exposure to an expression. Our findings support theories of language processing in which both compositional generation and direct, holistic reuse of multi-word expressions play crucial roles. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sequential charged particle reaction
International Nuclear Information System (INIS)
Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo
2004-01-01
The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated by using the EAF libraries and compared with the experimental ones. There were large discrepancies between estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. From the present study, it has been clarified that the sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)
Beta-binomial model for meta-analysis of odds ratios.
Bakbergenuly, Ilyas; Kulinskaya, Elena
2017-05-20
In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
International Nuclear Information System (INIS)
Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.
1981-12-01
Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching moments of the prior distribution to those of the data, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error), while for larger sample sizes the marginal maximum likelihood estimators appear to be best
Statistical properties of nonlinear intermediate states: binomial state
Energy Technology Data Exchange (ETDEWEB)
Abdalla, M Sebawe [Mathematics Department, College of Science, King Saud University, PO Box 2455, Riyadh 11451 (Saudi Arabia); Obada, A-S F [Department Mathematics, Faculty of Science, Al-Azhar University, Nasr City 11884, Cairo (Egypt); Darwish, M [Department of Physics, Faculty of Education, Suez Canal University, Al-Arish (Egypt)
2005-12-01
In the present paper we introduce a nonlinear binomial state (a state which interpolates between the nonlinear coherent and number states). The main investigation concentrates on the statistical properties of such a state, where we consider the squeezing phenomenon by examining the variation in the quadrature variances for both normal and amplitude-squared squeezing. Examinations of the quasi-probability distribution functions (the Wigner W function and the Q function) are also given for both diagonal and off-diagonal terms. The quadrature distribution and the phase distribution, as well as the phase variances, are discussed. Moreover, we give in detail a generation scheme for such a state.
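The ordinary (linear) binomial state underlying this construction has an elementary photon-number distribution, P(n) = C(M, n) η^n (1-η)^(M-n). The sketch below computes it along with its Mandel Q parameter, whose negative value signals sub-Poissonian statistics (the nonlinear deformation studied in the paper is not reproduced; M and η are illustrative):

```python
import math

def binomial_state_pn(M, eta):
    """Photon-number distribution of the linear binomial state:
    P(n) = C(M, n) * eta**n * (1 - eta)**(M - n)."""
    return [math.comb(M, n) * eta ** n * (1 - eta) ** (M - n)
            for n in range(M + 1)]

pn = binomial_state_pn(M=10, eta=0.3)
nbar = sum(n * p for n, p in enumerate(pn))          # mean photon number M*eta
var = sum((n - nbar) ** 2 * p for n, p in enumerate(pn))
q_mandel = var / nbar - 1  # negative -> sub-Poissonian statistics
print(nbar, q_mandel)
```

As η → 0 with Mη fixed, the distribution tends to a coherent (Poissonian) limit, while η → 1 recovers the number state, the interpolation the abstract refers to.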
Microbial comparative pan-genomics using binomial mixture models
DEFF Research Database (Denmark)
Ussery, David; Snipen, L; Almøy, T
2009-01-01
The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter...... occurring genes in the population. CONCLUSION: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find is always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes....
Correlation Structures of Correlated Binomial Models and Implied Default Distribution
Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato
2008-11-01
We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
Sequential stochastic optimization
Cairoli, Renzo
1996-01-01
Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet
Plowshare sequential device test
Energy Technology Data Exchange (ETDEWEB)
Ballou, L. B.
1971-08-02
For over a year we have been advocating the development of a hardened or ruggedized version of Diamond which will be suitable for sequential detonation of multiple explosives in one emplacement hole. A Plowshare-sponsored device development test, named `Yacht`, is proposed for execution in Area 15 at the Nevada Test Site [NTS] in late September 1972. The test is designed to evaluate the ability of a ruggedized Diamond-type explosive assembly to withstand the effects of an adjacent nuclear detonation in the same emplacement hole and then be sequentially fired. The objectives and experimental plan for this concept are provided.
Directory of Open Access Journals (Sweden)
Hyungsuk Tak
2017-06-01
Full Text Available Rgbp is an R package that provides estimates and verifiable confidence intervals for random effects in two-level conjugate hierarchical models for overdispersed Gaussian, Poisson, and binomial data. Rgbp models aggregate data from k independent groups summarized by observed sufficient statistics for each random effect, such as sample means, possibly with covariates. Rgbp uses approximate Bayesian machinery with unique improper priors for the hyper-parameters, which leads to good repeated sampling coverage properties for random effects. A special feature of Rgbp is an option that generates synthetic data sets to check whether the interval estimates for random effects actually meet the nominal confidence levels. Additionally, Rgbp provides inference statistics for the hyper-parameters, e.g., regression coefficients.
Sequential memory: Binding dynamics
Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail
2015-10-01
Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
Negative binomial models for abundance estimation of multiple closed populations
Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.
2001-01-01
Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
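As a rough illustration of the modeling step described in this record, the sketch below fits a negative binomial by maximum likelihood to a small hypothetical vector of sighting counts. The data, starting values, and (size, mean) parameterization are assumptions for illustration, not taken from the study.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical sighting counts (overdispersed: variance exceeds the mean).
counts = np.array([0, 1, 1, 2, 0, 3, 5, 1, 0, 2, 4, 1, 0, 0, 2])

def nb_negloglik(params, x):
    r, m = params                    # size (heterogeneity) and mean
    p = r / (r + m)                  # scipy's "success probability" parameter
    return -stats.nbinom.logpmf(x, r, p).sum()

res = optimize.minimize(nb_negloglik, x0=[1.0, counts.mean()],
                        args=(counts,), bounds=[(1e-6, None), (1e-6, None)])
r_hat, m_hat = res.x
aic = 2 * res.fun + 2 * 2            # AIC = 2k - 2 log L, with k = 2 parameters
```

For the negative binomial, the maximum-likelihood estimate of the mean coincides with the sample mean, which gives a quick sanity check on the fit.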
Low reheating temperatures in monomial and binomial inflationary models
International Nuclear Information System (INIS)
Rehagen, Thomas; Gelmini, Graciela B.
2015-01-01
We investigate the allowed range of reheating temperature values in light of the Planck 2015 results and the recent joint analysis of Cosmic Microwave Background (CMB) data from the BICEP2/Keck Array and Planck experiments, using monomial and binomial inflationary potentials. While the well studied φ^2 inflationary potential is no longer favored by current CMB data, as well as φ^p with p > 2, a φ^1 potential and canonical reheating (w_re = 0) provide a good fit to the CMB measurements. In this last case, we find that the Planck 2015 68% confidence limit upper bound on the spectral index, n_s, implies an upper bound on the reheating temperature of T_re ≲ 6×10^10 GeV, and excludes instantaneous reheating. The low reheating temperatures allowed by this model open the possibility that dark matter could be produced during the reheating period instead of when the Universe is radiation dominated, which could lead to very different predictions for the relic density and momentum distribution of WIMPs, sterile neutrinos, and axions. We also study binomial inflationary potentials and show the effects of a small departure from a φ^1 potential. We find that as a subdominant φ^2 term in the potential increases, first instantaneous reheating becomes allowed, and then the lowest possible reheating temperature of T_re = 4 MeV is excluded by the Planck 2015 68% confidence limit.
Estimation of adjusted rate differences using additive negative binomial regression.
Donoghoe, Mark W; Marschner, Ian C
2016-08-15
Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment …
Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.
2013-01-01
Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
Salmerón, Diego; Cano, Juan A; Chirlaque, María D
2015-08-30
In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations is inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
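For context on why the risk ratio is the target quantity here, a minimal frequentist sketch (hypothetical 2×2 cohort counts, not the paper's Bayesian method) computes a risk ratio with a Wald interval on the log scale:

```python
import numpy as np

# Hypothetical cohort: events / total in each exposure group.
a, n1 = 30, 100   # exposed
b, n0 = 15, 100   # unexposed

rr = (a / n1) / (b / n0)                          # risk ratio
se_log = np.sqrt(1/a - 1/n1 + 1/b - 1/n0)         # SE of log(RR)
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
```

With a common outcome (30% vs. 15% here), the odds ratio from logistic regression would overstate this risk ratio, which is the motivation for log-binomial modeling in the abstract.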
A sequential tree approach for incremental sequential pattern mining
Indian Academy of Sciences (India)
Data mining; STISPM; sequential tree; incremental mining; backward tracking. Abstract: "Sequential pattern mining" is a prominent and significant method to explore the knowledge and innovation from the large database. Common sequential pattern mining algorithms handle static databases. Pragmatically, looking into the functional and actual execution, the database grows exponentially, thereby leading to …
Sequential measurements of conjugate observables
Energy Technology Data Exchange (ETDEWEB)
Carmeli, Claudio [Dipartimento di Fisica, Università di Genova, Via Dodecaneso 33, 16146 Genova (Italy); Heinosaari, Teiko [Department of Physics and Astronomy, Turku Centre for Quantum Physics, University of Turku, 20014 Turku (Finland); Toigo, Alessandro, E-mail: claudio.carmeli@gmail.com, E-mail: teiko.heinosaari@utu.fi, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica 'Francesco Brioschi', Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)
2011-07-15
We present a unified treatment of sequential measurements of two conjugate observables. Our approach is to derive a mathematical structure theorem for all the relevant covariant instruments. As a consequence of this result, we show that every Weyl-Heisenberg covariant observable can be implemented as a sequential measurement of two conjugate observables. This method is applicable both in finite- and infinite-dimensional Hilbert spaces, therefore covering sequential spin component measurements as well as position-momentum sequential measurements.
Forced Sequence Sequential Decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis
… is possible as low as Eb/N0 = 0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability …
Lea, Amanda J.
2015-01-01
Identifying sources of variation in DNA methylation levels is important for understanding gene regulation. Recently, bisulfite sequencing has become a popular tool for investigating DNA methylation levels. However, modeling bisulfite sequencing data is complicated by dramatic variation in coverage across sites and individual samples, and because of the computational challenges of controlling for genetic covariance in count data. To address these challenges, we present a binomial mixed model and an efficient, sampling-based algorithm (MACAU: Mixed model association for count data via data augmentation) for approximate parameter estimation and p-value computation. This framework allows us to simultaneously account for both the over-dispersed, count-based nature of bisulfite sequencing data, as well as genetic relatedness among individuals. Using simulations and two real data sets (whole genome bisulfite sequencing (WGBS) data from Arabidopsis thaliana and reduced representation bisulfite sequencing (RRBS) data from baboons), we show that our method provides well-calibrated test statistics in the presence of population structure. Further, it improves power to detect differentially methylated sites: in the RRBS data set, MACAU detected 1.6-fold more age-associated CpG sites than a beta-binomial model (the next best approach). Changes in these sites are consistent with known age-related shifts in DNA methylation levels, and are enriched near genes that are differentially expressed with age in the same population. Taken together, our results indicate that MACAU is an efficient, effective tool for analyzing bisulfite sequencing data, with particular salience to analyses of structured populations. MACAU is freely available at www.xzlab.org/software.html. PMID:26599596
Covering Resilience: A Recent Development for Binomial Checkpointing
Energy Technology Data Exchange (ETDEWEB)
Walther, Andrea; Narayanan, Sri Hari Krishna
2016-09-12
In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massive parallel simulations and adjoint calculations where the mean time between failure of the large scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss initial numerical results.
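The "binomial approach" mentioned here has a well-known combinatorial reach: with s checkpoints and at most t forward sweeps over any step, revolve-style binomial schedules can reverse up to C(s+t, s) steps. A one-line sketch of that bound (this states the classical result, not this paper's resilience extension):

```python
from math import comb

def max_reversible_steps(s: int, t: int) -> int:
    """Longest computation reversible with s checkpoints and t sweeps:
    the binomial coefficient C(s + t, s)."""
    return comb(s + t, s)
```

Because the reach grows combinatorially in s and t, memory logarithmic in the number of steps suffices for adjoints of long evolutions, which is why the binomial approach is attractive to harden against failures.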
A Bayesian equivalency test for two independent binomial proportions.
Kawasaki, Yohei; Shimokawa, Asanao; Yamada, Hiroshi; Miyaoka, Etsuo
2016-01-01
In clinical trials, it is often necessary to perform an equivalence study. The equivalence study requires actively demonstrating equivalence between two different drugs or treatments. Since equivalence cannot be asserted merely because a superiority test fails to reject the null hypothesis, statistical methods known as equivalency tests have been suggested. These methods for equivalency tests are based on the frequentist framework; however, there are few such methods in the Bayesian framework. Hence, this article proposes a new index that suggests the equivalency of binomial proportions, which is constructed based on the Bayesian framework. In this study, we provide two methods for calculating the index and compare the probabilities that have been calculated by these two calculation methods. Moreover, we apply this index to the results of actual clinical trials to demonstrate the utility of the index.
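A minimal sketch of a Bayesian equivalence index of this general flavor uses conjugate Beta posteriors and Monte Carlo draws. The trial counts, the uniform Beta(1, 1) priors, and the margin δ = 0.15 are all assumptions for illustration; this is not the authors' exact index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-arm trial: successes out of n per arm.
x1, n1 = 42, 60
x2, n2 = 45, 60
delta = 0.15                                  # equivalence margin (assumption)

# Conjugate posteriors under Beta(1, 1) priors.
p1 = rng.beta(1 + x1, 1 + n1 - x1, 100_000)
p2 = rng.beta(1 + x2, 1 + n2 - x2, 100_000)

# Posterior probability that the two proportions are within the margin.
index = np.mean(np.abs(p1 - p2) < delta)
```

A value of the index near 1 supports equivalence at margin δ; a decision rule would compare it against a pre-specified threshold.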
Revealing Word Order: Using Serial Position in Binomials to Predict Properties of the Speaker
Iliev, Rumen; Smirnova, Anastasia
2016-01-01
Three studies test the link between word order in binomials and psychological and demographic characteristics of a speaker. While linguists have already suggested that psychological, cultural and societal factors are important in choosing word order in binomials, the vast majority of relevant research was focused on general factors and on broadly…
Modeling and Predistortion of Envelope Tracking Power Amplifiers using a Memory Binomial Model
DEFF Research Database (Denmark)
Tafuri, Felice Francesco; Sira, Daniel; Larsen, Torben
2013-01-01
The model definition is based on binomial series, hence the name memory binomial model (MBM). The MBM is here applied to measured data-sets acquired from an ET measurement set-up. When used as a PA model the MBM showed an NMSE (Normalized Mean Squared Error) as low as −40 dB and an ACEPR (Adjacent Channel …
Some normed binomial difference sequence spaces related to the [Formula: see text] spaces.
Song, Meimei; Meng, Jian
2017-01-01
The aim of this paper is to introduce the normed binomial sequence spaces [Formula: see text] by combining the binomial transformation and difference operator, where [Formula: see text]. We prove that these spaces are linearly isomorphic to the spaces [Formula: see text] and [Formula: see text], respectively. Furthermore, we compute Schauder bases and the α-, β-, and γ-duals of these sequence spaces.
An efficient binomial model-based measure for sequence comparison and its application.
Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong
2011-04-01
Sequence comparison is one of the major tasks in bioinformatics, which could serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a random Bernoulli variable, and the distribution of the sum of word occurrences is well known to be binomial. By using a recursive formula, we computed the binomial probability of the word count and proposed a binomial model-based measure based on the relative entropy. The proposed measure was tested by extensive experiments including classification of HEV genotypes and phylogenetic analysis, and further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed measure based on the binomial model is more efficient.
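The word-count idea can be sketched as follows: under an i.i.d. letter model, the count of a word over all starting positions is treated as binomial and scored directly. The toy sequence and the per-position probability model are assumptions; the paper's recursive formula additionally accounts for overlap dependence between positions, which this naive sketch ignores.

```python
import numpy as np
from scipy.stats import binom

def word_count_logprob(seq: str, word: str) -> float:
    """Log binomial probability of the observed count of `word` in `seq`,
    assuming independent positions (overlaps ignored)."""
    k = sum(seq[i:i + len(word)] == word
            for i in range(len(seq) - len(word) + 1))   # observed count
    n = len(seq) - len(word) + 1                        # possible start positions
    # Per-position occurrence probability from letter frequencies.
    p = np.prod([seq.count(c) / len(seq) for c in word])
    return binom.logpmf(k, n, p)

lp = word_count_logprob("ACGTACGT", "AC")
```

Scoring many words this way for two sequences yields probability vectors whose relative entropy can serve as a dissimilarity measure, in the spirit of the abstract.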
Majumdar, Arunabha; Witte, John S; Ghosh, Saurabh
2015-12-01
Binary phenotypes commonly arise due to multiple underlying quantitative precursors and genetic variants may impact multiple traits in a pleiotropic manner. Hence, simultaneously analyzing such correlated traits may be more powerful than analyzing individual traits. Various genotype-level methods, e.g., MultiPhen (O'Reilly et al. []), have been developed to identify genetic factors underlying a multivariate phenotype. For univariate phenotypes, the usefulness and applicability of allele-level tests have been investigated. The test of allele frequency difference among cases and controls is commonly used for mapping case-control association. However, allelic methods for multivariate association mapping have not been studied much. In this article, we explore two allelic tests of multivariate association: one using a Binomial regression model based on inverted regression of genotype on phenotype (Binomial regression-based Association of Multivariate Phenotypes [BAMP]), and the other employing the Mahalanobis distance between two sample means of the multivariate phenotype vector for two alleles at a single-nucleotide polymorphism (Distance-based Association of Multivariate Phenotypes [DAMP]). These methods can incorporate both discrete and continuous phenotypes. Some theoretical properties for BAMP are studied. Using simulations, the power of the methods for detecting multivariate association is compared with the genotype-level test MultiPhen's. The allelic tests yield marginally higher power than MultiPhen for multivariate phenotypes. For one/two binary traits under recessive mode of inheritance, allelic tests are found to be substantially more powerful. All three tests are applied to two different real data sets and the results offer some support for the simulation study. We propose a hybrid approach for testing multivariate association that implements MultiPhen when Hardy-Weinberg Equilibrium (HWE) is violated and BAMP otherwise, because the allelic approaches assume HWE.
International Nuclear Information System (INIS)
Wright, T.
1982-01-01
A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling.
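Plain inverse binomial sampling (without the Bernoulli-sampling combination proposed in this report) is easy to simulate: draw Bernoulli trials until r successes are seen, and use Haldane's classical unbiased estimator (r − 1)/(N − 1), where N is the total number of trials. The true proportion and r below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def inverse_binomial_estimate(p_true: float, r: int, rng) -> float:
    """Sample Bernoulli(p_true) trials until r successes;
    return Haldane's unbiased estimator (r - 1) / (N - 1)."""
    n, successes = 0, 0
    while successes < r:
        n += 1
        successes += rng.random() < p_true
    return (r - 1) / (n - 1)

# Average over many replications to check unbiasedness empirically.
est = np.mean([inverse_binomial_estimate(0.3, 5, rng) for _ in range(20_000)])
```

Averaging many replications recovers the true proportion, consistent with the unbiasedness property the abstract refers to.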
Synthetic Aperture Sequential Beamforming
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke
2008-01-01
A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data contrary to channel data. The objective … is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF …
Group-sequential analysis may allow for early trial termination
DEFF Research Database (Denmark)
Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich
2017-01-01
BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG … assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with the group-sequential analysis … and the final analysis. Other partitions did not suggest early stopping after adjustment for multiple testing due to one influential outlier and our small sample size. CONCLUSIONS: Group-sequential testing may enable early stopping of a trial, allowing for potential time and resource savings. The testing …
Microbial comparative pan-genomics using binomial mixture models
Directory of Open Access Journals (Sweden)
Ussery David W
2009-08-01
Background: The size of the core- and pan-genome of bacterial species is a topic of increasing interest due to the growing number of sequenced prokaryote genomes, many from the same species. Attempts to estimate these quantities have been made, using regression methods or mixture models. We extend the latter approach by using statistical ideas developed for capture-recapture problems in ecology and epidemiology. Results: We estimate core- and pan-genome sizes for 16 different bacterial species. The results reveal a complex dependency structure for most species, manifested as heterogeneous detection probabilities. Estimated pan-genome sizes range from small (around 2,600 gene families) in Buchnera aphidicola to large (around 43,000 gene families) in Escherichia coli. Results for Escherichia coli show that as more data become available, a larger diversity is estimated, indicating an extensive pool of rarely occurring genes in the population. Conclusion: Analyzing pan-genomics data with binomial mixture models is a way to handle dependencies between genomes, which we find are always present. A bottleneck in the estimation procedure is the annotation of rarely occurring genes.
Dynamic prediction of cumulative incidence functions by direct binomial regression.
Grand, Mia K; de Witte, Theo J M; Putter, Hein
2018-03-25
In recent years there have been a series of advances in the field of dynamic prediction. Among those is the development of methods for dynamic prediction of the cumulative incidence function in a competing risk setting. These models enable the predictions to be updated as time progresses and more information becomes available, for example when a patient comes back for a follow-up visit after completing a year of treatment, the risk of death, and adverse events may have changed since treatment initiation. One approach to model the cumulative incidence function in competing risks is by direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend the approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data and the performance is investigated in a simulation study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Confidence limits for parameters of Poisson and binomial distributions
International Nuclear Information System (INIS)
Arnett, L.M.
1976-04-01
The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a, b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type analysis is used. The intervals calculated are narrower than, and appreciably different from, results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
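As a related, standard exact construction (the classical Clopper-Pearson interval, which is the conservative approach this report improves upon, not its Bayesian a priori method), binomial confidence limits with at least the stated coverage can be computed from Beta quantiles:

```python
from scipy import stats

def clopper_pearson(k: int, n: int, conf: float = 0.95):
    """Exact (conservative) two-sided binomial confidence interval
    for k successes in n trials, via Beta distribution quantiles."""
    alpha = 1 - conf
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(4, 20)   # e.g., 4 successes in 20 trials
```

These are the "known to be conservative" limits of the abstract: actual coverage is at least, and typically above, the nominal level, which is why exact Bayesian intervals can be narrower.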
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2016-01-15
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
Sequential auctions and price anomalies
Directory of Open Access Journals (Sweden)
Trifunović Dejan
2014-01-01
In sequential auctions objects are sold one by one in separate auctions. These sequential auctions might be organized as sequential first-price, second-price, or English auctions. We will derive equilibrium bidding strategies for these auctions. Theoretical models suggest that prices in sequential auctions with private values or with randomly assigned heterogeneous objects should have no trend. However, empirical research contradicts this result and prices exhibit a declining or increasing trend, which is called declining and increasing price anomaly. We will present a review of these empirical results, as well as different theoretical explanations for these anomalies.
On the revival of the negative binomial distribution in multiparticle production
International Nuclear Information System (INIS)
Ekspong, G.
1990-01-01
This paper is based on published and some unpublished material pertaining to the revival of interest in and success of applying the negative binomial distribution to multiparticle production since 1983. After a historically oriented introduction going farther back in time, the main part of the paper is devoted to an unpublished derivation of the negative binomial distribution based on empirical observations of forward-backward multiplicity correlations. Some physical processes leading to the negative binomial distribution are mentioned and some comments made on published criticisms
International Nuclear Information System (INIS)
Arneodo, M.; Ferrero, M.I.; Peroni, C.; Bee, C.P.; Bird, I.; Coughlan, J.; Sloan, T.; Braun, H.; Brueck, H.; Drees, J.; Edwards, A.; Krueger, J.; Montgomery, H.E.; Peschel, H.; Pietrzyk, U.; Poetsch, M.; Schneider, A.; Dreyer, T.; Ernst, T.; Haas, J.; Kabuss, E.M.; Landgraf, U.; Mohr, W.; Rith, K.; Schlagboehmer, A.; Schroeder, T.; Stier, H.E.; Wallucks, W.
1987-01-01
The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre of mass energy W ranging from 4-20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by negative binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and the rapidity dependence of the parameters is presented, and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed. (orig.)
Lu, Hai-Xia; Wong, May Chun Mei; Lo, Edward Chin Man; McGrath, Colman
2013-08-19
Limited information is available on the oral health status of young adults aged 18 years, and no data exist for Hong Kong. The aims of this study were to investigate the oral health status and its risk indicators among young adults in Hong Kong using negative binomial regression. A survey was conducted in a representative sample of Hong Kong young adults aged 18 years. Clinical examinations were taken to assess oral health status using the DMFT index and Community Periodontal Index (CPI) according to WHO criteria. Negative binomial regressions for DMFT score and the number of sextants with healthy gums were performed to identify the risk indicators of oral health status. A total of 324 young adults were examined. Prevalence of dental caries experience among the subjects was 59% and the overall mean DMFT score was 1.4. Most subjects (95%) had a score of 2 as their highest CPI score. Negative binomial regression analyses revealed that subjects who had a dental visit within 3 years had significantly higher DMFT scores (IRR = 1.68, p < 0.001). Subjects who brushed their teeth more frequently (IRR = 1.93, p < 0.001) and those with better dental knowledge (IRR = 1.09, p = 0.002) had significantly more sextants with healthy gums. Dental caries experience of the young adults aged 18 years in Hong Kong was not high, but their periodontal condition was unsatisfactory. Their oral health status was related to their dental visit behavior, oral hygiene habits, and oral health knowledge.
Measured PET Data Characterization with the Negative Binomial Distribution Model.
Santarelli, Maria Filomena; Positano, Vincenzo; Landini, Luigi
2017-01-01
An accurate statistical model of PET measurements is a prerequisite for correct image reconstruction when using statistical image reconstruction algorithms, or when pre-filtering operations must be performed. Although radioactive decay follows a Poisson distribution, deviation from Poisson statistics occurs on projection data prior to reconstruction due to physical effects, measurement errors, and correction of scatter and random coincidences. Modelling projection data can aid in understanding the statistical nature of the data in order to develop efficient processing methods and to reduce noise. This paper outlines the statistical behaviour of measured emission data, evaluating the goodness of fit of the negative binomial (NB) distribution model to PET data for a wide range of emission activity values. An NB distribution model is characterized by the mean of the data and the dispersion parameter α that describes the deviation from Poisson statistics. Monte Carlo simulations were performed to evaluate: (a) the performance of the dispersion parameter α estimator, (b) the goodness of fit of the NB model for a wide range of activity values. We focused on the effect produced by correction for random and scatter events in the projection (sinogram) domain, due to their importance in quantitative analysis of PET data. The analysis developed herein allowed us to assess the accuracy of the NB distribution model in fitting corrected sinogram data, and to evaluate the sensitivity of the dispersion parameter α in quantifying deviation from Poisson statistics. Sinogram ROI-based analysis demonstrated that the deviation of the measured data from Poisson statistics can be quantitatively characterized by the dispersion parameter α, under any noise conditions and corrections.
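The mean/dispersion relation underlying the NB model above can be checked numerically. A minimal sketch, assuming the standard mean/dispersion parameterization (mean μ, dispersion α, i.e., r = 1/α and p = 1/(1 + αμ)); the parameter values are illustrative, not taken from the paper:

```python
import math

def nb_pmf(k, mu, alpha):
    """Negative binomial pmf with mean mu and dispersion alpha
    (r = 1/alpha, success probability p = 1/(1 + alpha*mu))."""
    r = 1.0 / alpha
    p = 1.0 / (1.0 + alpha * mu)
    # Gamma form of the binomial coefficient allows non-integer r.
    log_coef = math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
    return math.exp(log_coef + r * math.log(p) + k * math.log(1.0 - p))

def nb_moments(mu, alpha, kmax=2000):
    """Mean and variance computed directly from the (truncated) pmf."""
    mean = sum(k * nb_pmf(k, mu, alpha) for k in range(kmax))
    var = sum(k * k * nb_pmf(k, mu, alpha) for k in range(kmax)) - mean ** 2
    return mean, var

mean, var = nb_moments(mu=20.0, alpha=0.1)
# Recover the dispersion from the relation var = mu + alpha * mu**2;
# alpha -> 0 recovers Poisson statistics (var = mu).
alpha_hat = (var - mean) / mean ** 2
```

With α = 0 the variance collapses to the mean, so α directly quantifies the deviation from Poisson statistics, which is the role it plays in the paper.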
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
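The telescoping identity behind MLMC can be sketched on a toy problem. This is an illustrative sketch only: plain MLMC (not the SMC variant the paper introduces) for E[S_T] of a geometric Brownian motion discretized by Euler-Maruyama, with fine and coarse levels coupled through shared Brownian increments; all parameter values are assumptions:

```python
import math
import random

def mlmc_estimate(levels, n_samples, s0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Multilevel Monte Carlo estimate of E[S_T] for GBM, Euler-Maruyama
    with step h_l = T / 2**l.  Uses the telescoping identity
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for l, n in zip(levels, n_samples):
        acc = 0.0
        steps = 2 ** l
        h = T / steps
        for _ in range(n):
            s_fine, s_coarse = s0, s0
            dw_pair = 0.0
            for i in range(steps):
                dw = random.gauss(0.0, math.sqrt(h))
                s_fine += s_fine * (r * h + sigma * dw)
                dw_pair += dw
                if i % 2 == 1:  # coarse path takes one step of size 2h,
                    s_coarse += s_coarse * (r * 2 * h + sigma * dw_pair)
                    dw_pair = 0.0  # reusing the two fine increments
            if l == 0:
                acc += s_fine             # estimates E[P_0]
            else:
                acc += s_fine - s_coarse  # estimates E[P_l - P_{l-1}]
        total += acc / n
    return total

random.seed(0)
est = mlmc_estimate(levels=[0, 1, 2, 3, 4],
                    n_samples=[40000, 8000, 2000, 500, 200])
```

Because the coupled level differences have small variance, most samples are spent on the cheap coarse level, which is the source of the cost reduction described above.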
Hung, Tran Loc; Giang, Le Truong
2016-01-01
Using the Stein-Chen method, some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomially distributed random variables are established in this note.
Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.
He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L
2015-10-01
Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (total days in the period) and thus inherently follow a binomial or zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution, in the presence of structural zeros. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
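A zero-inflated binomial likelihood of the kind motivated above can be sketched as follows. This is a crude parametric toy (grid-search MLE on simulated data), purely illustrative; the paper's approach is semiparametric, not this:

```python
import math
import random
from collections import Counter

def zib_pmf(k, n, p, rho):
    """Zero-inflated binomial: with probability rho a structural zero,
    otherwise Binomial(n, p)."""
    binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
    return (rho if k == 0 else 0.0) + (1 - rho) * binom

def zib_sample(n, p, rho):
    if random.random() < rho:
        return 0
    return sum(random.random() < p for _ in range(n))

def fit_zib_grid(data, n, grid=50):
    """Crude maximum-likelihood fit of (p, rho) over a grid --
    illustrative only; a real analysis would use EM or a
    semiparametric method."""
    counts = Counter(data)
    best, best_ll = None, -math.inf
    for i in range(1, grid):
        for j in range(grid):
            p, rho = i / grid, j / grid
            ll = sum(c * math.log(zib_pmf(k, n, p, rho))
                     for k, c in counts.items())
            if ll > best_ll:
                best, best_ll = (p, rho), ll
    return best

random.seed(1)
data = [zib_sample(n=30, p=0.4, rho=0.3) for _ in range(2000)]
p_hat, rho_hat = fit_zib_grid(data, n=30)
```

The key structural feature, visible in `zib_pmf`, is that zero is reachable both as a structural zero (with probability ρ) and as a sampling zero from the binomial part.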
Difference of Sums Containing Products of Binomial Coefficients and Their Logarithms
National Research Council Canada - National Science Library
Miller, Allen R; Moskowitz, Ira S
2005-01-01
Properties of the difference of two sums containing products of binomial coefficients and their logarithms which arise in the application of Shannon's information theory to a certain class of covert channels are deduced...
Difference of Sums Containing Products of Binomial Coefficients and their Logarithms
National Research Council Canada - National Science Library
Miller, Allen R; Moskowitz, Ira S
2004-01-01
Properties of the difference of two sums containing products of binomial coefficients and their logarithms which arise in the application of Shannon's information theory to a certain class of covert channels are deduced...
Poage, J. L.
1975-01-01
A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
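The classical parametric SPRT that the estimated version above builds on can be sketched for Bernoulli observations. The thresholds come from Wald's standard approximations; p0, p1, alpha, and beta are illustrative choices, not values from the paper:

```python
import math

def sprt_bernoulli(stream, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for H0: p = p0 vs
    H1: p = p1.  Returns the decision and the number of observations
    consumed before a boundary was crossed."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # accumulate the log-likelihood ratio of the new observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", n   # data exhausted without a decision

decision, n = sprt_bernoulli([1] * 20, p0=0.3, p1=0.7)
```

The nonparametric procedure in the record replaces the known densities in the likelihood ratio with density estimates; the sequential stopping logic is the same.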
Zheng, Gang; Torres, Allan M.; Price, William S.
2008-09-01
Two phase-modulated binomial-like π pulses have been developed by simultaneously optimizing pulse durations and phases. In combination with excitation sculpting, both of the new binomial-like sequences outperform the well-known 3-9-19 sequence in selectivity and inversion width. The new sequences provide similar selectivity and inversion width to the W5 sequence but with significantly shorter sequence durations. When used in PGSTE-WATERGATE, they afford highly selective solvent suppression in diffusion experiments.
Perbandingan Metode Binomial dan Metode Black-Scholes Dalam Penentuan Harga Opsi
Directory of Open Access Journals (Sweden)
Surya Amami Pramuditya
2016-04-01
Full Text Available An option is a contract between the holder (buyer) and the writer (seller) in which the writer gives the holder the right (not the obligation) to buy or sell an asset at a specified price (the strike or exercise price) and at a specified time in the future (the expiry date or maturity time). There are several ways to determine the price of an option, including the Black-Scholes method and the binomial method. The binomial method models stock price movement by dividing the time interval [0, T] into n subintervals of equal length, while the Black-Scholes method models the stock price movement as a stochastic process. As the number of time partitions n in the binomial method increases, the option value converges to the Black-Scholes option value. Key words: options, Binomial, Black-Scholes.
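The convergence described above can be illustrated directly with a standard Cox-Ross-Rubinstein tree against the closed-form Black-Scholes price; the parameter values below are illustrative:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, t):
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def binomial_call(s, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein tree for a European call with n time steps."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    price = 0.0
    for j in range(n + 1):                 # terminal nodes only
        prob = math.comb(n, j) * q**j * (1 - q)**(n - j)
        price += prob * max(s * u**j * d**(n - j) - k, 0.0)
    return math.exp(-r * t) * price

bs = black_scholes_call(100, 100, 0.05, 0.2, 1.0)
crr_10 = binomial_call(100, 100, 0.05, 0.2, 1.0, n=10)
crr_1000 = binomial_call(100, 100, 0.05, 0.2, 1.0, n=1000)
```

As n grows, the discrete tree's terminal distribution approaches the lognormal of the Black-Scholes model, which is the convergence the abstract states.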
Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven
2014-01-01
Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
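A permutation test of classification accuracy, as recommended above, can be sketched as follows. This is a generic label-shuffling test, not the authors' exact pipeline; the toy labels are illustrative:

```python
import random

def permutation_pvalue(y_true, y_pred, n_perm=5000, seed=0):
    """Permutation test for classification accuracy: shuffle the true
    labels to build the null distribution of accuracy, then count how
    often a shuffled accuracy reaches the observed one."""
    rng = random.Random(seed)
    obs = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    hits = 0
    labels = list(y_true)
    for _ in range(n_perm):
        rng.shuffle(labels)
        acc = sum(t == p for t, p in zip(labels, y_pred)) / len(labels)
        if acc >= obs:
            hits += 1
    # add-one correction so the p-value is never exactly zero
    return (hits + 1) / (n_perm + 1)

y_true = [0] * 10 + [1] * 10
y_pred = list(y_true)          # a perfectly matching prediction
p = permutation_pvalue(y_true, y_pred)
```

Unlike the binomial test, this makes no independence assumption about the individual predictions, which is why it remains valid when accuracies come from cross-validation folds.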
Hepatobiliary sequential scintiscanning
International Nuclear Information System (INIS)
Eissner, D.
1985-01-01
The main criteria for interpreting hepatobiliary sequential scintiscanning (HBSS) data are given to be the following: (1) In young infants - without previous parenteral feeding - normal to slightly increased activity uptake in the liver, accompanied by a lack of activity excretion into the intestine (which requires a 24-hour scan for detection), is a clear indication of bile duct atresia. However, the same findings can be obtained in very young newborns (up to one week of age) in the case of hepatitis with defined cholestasis. (2) In the case of comparably high activity uptake in the liver together with activity excretion into the intestine, which may be detectable in due time or after a delay (24-hour scan required), bile duct atresia can be excluded, the diagnosis being hepatitis. In general, hepatitis will cause stronger liver cell damage, which excludes the criterion of activity excretion into the intestine. Similar findings can be obtained in infants with bile duct atresia after previous parenteral feeding. This is why the interpretation of HBSS data can only be carried out effectively in close cooperation with the pediatrician, and on the basis of profound knowledge of the overall clinical state of the infant. (orig.)
Sequential Design of Experiments
Energy Technology Data Exchange (ETDEWEB)
Anderson-Cook, Christine Michaela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-30
A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantage of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO2 capture efficiency as measured by the width of the confidence interval for the underlying response surface that is modeled as a function of 1) flue gas flowrate [1000-3000] kg/hr; 2) CO2 weight fraction [0.125-0.175]; 3) lean solvent loading [0.1-0.3], and 4) lean solvent flowrate [3000-12000] kg/hr.
Adaptive sequential controller
Energy Technology Data Exchange (ETDEWEB)
El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)
1994-01-01
An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
Learning about Sampling with Boxer.
Picciotto, Henri; Ploger, Don
1991-01-01
Described is an introductory probability and statistics class focused on teaching the concepts of sampling and binomial distributions through a strategy based on teacher- and student-generated simulations using the Boxer computer language. The value of integrating programming with teaching subject matter is demonstrated, and sample student work is…
[Evaluation of estimation of prevalence ratio using a Bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence with respect to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimation of the PR, and the convergence, of three models (model 1: not adjusting for covariates; model 2: adjusting for the duration of caregivers' education; model 3: adjusting for the distance between village and township and for child month-age, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression models, with good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems and has advantages in application compared with the conventional log-binomial regression model.
Mining Frequent Max and Closed Sequential Patterns
Afshar, Ramin
2002-01-01
Although frequent sequential pattern mining has an important role in many data mining tasks, it often generates a large number of sequential patterns, which reduces its efficiency and effectiveness. For many applications, mining all the frequent sequential patterns is not necessary, and mining frequent Max or Closed sequential patterns will provide the same amount of information. Compared to frequent sequential pattern mining, frequent Max or Closed sequential pattern mining g...
Jakaitiene, Audrone; Avino, Mariano; Guarracino, Mario Rosario
2017-04-01
Despite diminishing costs, next-generation sequencing (NGS) still remains expensive for studies with a large number of individuals. As a cost saving, sequencing genomes of pools containing multiple samples might be used. Currently, there are many software packages available for the detection of single-nucleotide polymorphisms (SNPs). Sensitivity and specificity depend on the model used and the data analyzed, indicating that all software have room for improvement. We use a beta-binomial model to detect rare mutations in untagged pooled NGS experiments. We propose a multireference framework for pooled data, with the ability to remain specific for pools with up to two patients affected by neuromuscular disorders (NMD). We assessed the results by comparison with The Genome Analysis Toolkit (GATK), CRISP, SNVer, and FreeBayes. Our results show that the multireference approach applying the beta-binomial model is accurate in predicting rare mutations at a fraction of 0.01. Finally, we explored the concordance of mutations between the model and the software, checking their involvement in any NMD-related gene. We detected seven novel SNPs, for which the functional analysis produced enriched terms related to locomotion and musculature.
A fast algorithm for computing binomial coefficients modulo powers of two.
Andreica, Mugurel Ionut
2013-01-01
I present a new algorithm for computing binomial coefficients modulo 2^N. The proposed method has an O(N^3·Multiplication(N)+N^4) preprocessing time, after which a binomial coefficient C(P, Q) with 0≤Q≤P≤2^N-1 can be computed modulo 2^N in O(N^2·log(N)·Multiplication(N)) time. Multiplication(N) denotes the time complexity of multiplying two N-bit numbers, which can range from O(N^2) to O(N·log(N)·log(log(N))) or better. Thus, the overall time complexity for evaluating M binomial coefficients C(P, Q) modulo 2^N with 0≤Q≤P≤2^N-1 is O((N^3+M·N^2·log(N))·Multiplication(N)+N^4). After preprocessing, we can actually compute binomial coefficients modulo any 2^R with R≤N. For larger values of P and Q, variations of Lucas' theorem must be used first in order to reduce the computation to the evaluation of multiple (O(log(P))) binomial coefficients C(P', Q') (or restricted types of factorials P'!) modulo 2^N with 0≤Q'≤P'≤2^N-1.
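A much simpler O(P) baseline for the same task (not the paper's preprocessed, polylogarithmic-per-query algorithm) separates the power of two from the odd part of each factorial, since odd numbers are invertible modulo 2^N:

```python
def binom_mod_pow2(p, q, n_bits):
    """Compute C(p, q) mod 2**n_bits by tracking the exponent of 2 and
    the odd part of each factorial separately; the odd part is
    invertible modulo a power of two.  Simple O(p) baseline."""
    mod = 1 << n_bits

    def exp2_and_odd(m):
        # m! = 2**e * odd, with odd returned modulo 2**n_bits
        e, odd = 0, 1
        for i in range(2, m + 1):
            x = i
            while x % 2 == 0:
                x //= 2
                e += 1
            odd = (odd * x) % mod
        return e, odd

    e_p, o_p = exp2_and_odd(p)
    e_q, o_q = exp2_and_odd(q)
    e_r, o_r = exp2_and_odd(p - q)
    e = e_p - e_q - e_r            # 2-adic valuation of C(p, q)
    if e >= n_bits:
        return 0
    inv = pow((o_q * o_r) % mod, -1, mod)  # exists: the argument is odd
    return (pow(2, e, mod) * o_p * inv) % mod
```

The modular inverse via `pow(x, -1, mod)` requires Python 3.8+; the exponent test `e >= n_bits` handles the case where the coefficient is divisible by 2^N.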
Water-selective excitation of short T2 species with binomial pulses.
Deligianni, Xeni; Bär, Peter; Scheffler, Klaus; Trattnig, Siegfried; Bieri, Oliver
2014-09-01
For imaging of fibrous musculoskeletal components, ultra-short echo time methods are often combined with fat suppression. Due to the increased chemical shift, spectral excitation of water might become a favorable option at ultra-high fields. Thus, this study aims to compare and explore short binomial excitation schemes for spectrally selective imaging of fibrous tissue components with short transverse relaxation time (T2). Water-selective 1-1 binomial excitation is compared with nonselective imaging using a sub-millisecond spoiled gradient echo technique for in vivo imaging of fibrous tissue at 3T and 7T. Simulations indicate a maximum signal loss from binomial excitation of approximately 30% in the limit of very short T2 (0.1 ms), as compared to nonselective imaging, decreasing rapidly with increasing field strength and increasing T2, e.g., to 19% at 3T and 10% at 7T for a T2 of 1 ms. In agreement with simulations, a binomial phase close to 90° yielded minimum signal loss: approximately 6% at 3T and close to 0% at 7T for menisci, and 9% and 13%, respectively, for ligaments. Overall, for imaging of short-lived T2 components, short 1-1 binomial excitation schemes offer only marginal signal loss, especially at ultra-high fields, with overall improved scanning efficiency. Copyright © 2013 Wiley Periodicals, Inc.
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero-hurdle model, estimates the zero elements of the dependent variable; the second part, a truncated negative binomial model, estimates the non-zero elements (non-negative integers). The dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Maximum Likelihood Estimation (MLE) is used for parameter estimation. The hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data containing zero values in some observations and various other values. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
Test sample handling apparatus
International Nuclear Information System (INIS)
1981-01-01
A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)
Development of sample size allocation program using hypergeometric distribution
International Nuclear Information System (INIS)
Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik
1996-01-01
The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspections, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in sample allocation to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore, game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximate distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with a correctly applied standard binomial approximation, 2. sample approximate-allocation with the improved binomial approximation, and 3. sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
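The gap between sampling with and without replacement can be illustrated by the probability of detecting at least one defective item in a sample. This is a generic attribute-sampling sketch; the population and defect counts are illustrative, not IAEA figures:

```python
import math

def detect_prob_hypergeom(pop, defects, n):
    """P(at least one defective in a sample of n drawn WITHOUT
    replacement from a population of size pop)."""
    if n > pop - defects:
        return 1.0                 # sample must contain a defective
    return 1.0 - math.comb(pop - defects, n) / math.comb(pop, n)

def detect_prob_binomial(pop, defects, n):
    """Binomial (with-replacement) approximation of the same quantity."""
    return 1.0 - (1.0 - defects / pop) ** n

pop, defects = 100, 5
hyper = [detect_prob_hypergeom(pop, defects, n) for n in range(1, 61)]
binom = [detect_prob_binomial(pop, defects, n) for n in range(1, 61)]

# Smallest sample size reaching 95% detection probability under each model
n_hyper = next(n for n in range(1, 101)
               if detect_prob_hypergeom(pop, defects, n) >= 0.95)
n_binom = next(n for n in range(1, 101)
               if detect_prob_binomial(pop, defects, n) >= 0.95)
```

Because sampling without replacement cannot re-draw the same clean item, the hypergeometric detection probability dominates the binomial one, so the binomial approximation overstates the sample size needed, which is the motivation for using the hypergeometric distribution directly.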
Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal
2016-09-30
Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Joshi, A.; Lawande, S.V.
1990-01-01
A systematic study is presented of the squeezing obtained from a k-photon anharmonic oscillator (with interaction Hamiltonian of the form (a†)^k, k ≥ 2) interacting with light whose statistics can be varied from sub-Poissonian to Poissonian via the binomial state of the field, and from super-Poissonian to Poissonian via the negative binomial state of the field. The authors predict that for all values of k there is a tendency toward increased squeezing with increased sub-Poissonian character of the field, while the reverse is true for a super-Poissonian field. They also present the non-classical behavior of the first-order coherence function explicitly for the k = 2 case (i.e., for the two-photon anharmonic oscillator model used for a Kerr-like medium) with variation in the statistics of the input light.
Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels
Energy Technology Data Exchange (ETDEWEB)
Chhaiba, Hassan, E-mail: chhaiba.hassan@gmail.com [Department of Mathematics, Faculty of Sciences, Ibn Tofail University, P.O. Box 133, Kénitra (Morocco); Demni, Nizar, E-mail: nizar.demni@univ-rennes1.fr [IRMAR, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex (France); Mouayn, Zouhair, E-mail: mouayn@fstbm.ac.ma [Department of Mathematics, Faculty of Sciences and Technics (M’Ghila), Sultan Moulay Slimane, P.O. Box 523, Béni Mellal (Morocco)
2016-07-15
To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.
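For reference, the lowest hyperbolic Landau level corresponds to the classical negative binomial distribution NB(r, p); its moment generating function is the standard identity below (textbook background, not the paper's generalized computation):

```latex
% Classical negative binomial NB(r, p):
%   P(X = k) = \binom{k + r - 1}{k}\, p^{r} (1-p)^{k}, \quad k = 0, 1, 2, \dots
% Moment generating function, valid for (1-p)\,e^{t} < 1:
M_X(t) = \mathbb{E}\!\left[e^{tX}\right]
       = \sum_{k=0}^{\infty} \binom{k+r-1}{k} p^{r} \left((1-p)\,e^{t}\right)^{k}
       = \left( \frac{p}{1 - (1-p)\,e^{t}} \right)^{r}.
```

The paper computes the analogous generating function for the generalized distributions attached to the higher Landau levels and expresses them as perturbations of this classical case.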
Possibility and Challenges of Conversion of Current Virus Species Names to Linnaean Binomials
Energy Technology Data Exchange (ETDEWEB)
Postler, Thomas S.; Clawson, Anna N.; Amarasinghe, Gaya K.; Basler, Christopher F.; Bavari, Sina; Benkő, Mária; Blasdell, Kim R.; Briese, Thomas; Buchmeier, Michael J.; Bukreyev, Alexander; Calisher, Charles H.; Chandran, Kartik; Charrel, Rémi; Clegg, Christopher S.; Collins, Peter L.; De La Torre, Juan Carlos; Derisi, Joseph L.; Dietzgen, Ralf G.; Dolnik, Olga; Dürrwald, Ralf; Dye, John M.; Easton, Andrew J.; Emonet, Sébastien; Formenty, Pierre; Fouchier, Ron A. M.; Ghedin, Elodie; Gonzalez, Jean-Paul; Harrach, Balázs; Hewson, Roger; Horie, Masayuki; Jiāng, Dàohóng; Kobinger, Gary; Kondo, Hideki; Kropinski, Andrew M.; Krupovic, Mart; Kurath, Gael; Lamb, Robert A.; Leroy, Eric M.; Lukashevich, Igor S.; Maisner, Andrea; Mushegian, Arcady R.; Netesov, Sergey V.; Nowotny, Norbert; Patterson, Jean L.; Payne, Susan L.; Paweska, Janusz T.; Peters, Clarence J.; Radoshitzky, Sheli R.; Rima, Bertus K.; Romanowski, Victor; Rubbenstroth, Dennis; Sabanadzovic, Sead; Sanfaçon, Hélène; Salvato, Maria S.; Schwemmle, Martin; Smither, Sophie J.; Stenglein, Mark D.; Stone, David M.; Takada, Ayato; Tesh, Robert B.; Tomonaga, Keizo; Tordo, Noël; Towner, Jonathan S.; Vasilakis, Nikos; Volchkov, Viktor E.; Wahl-Jensen, Victoria; Walker, Peter J.; Wang, Lin-Fa; Varsani, Arvind; Whitfield, Anna E.; Zerbini, F. Murilo; Kuhn, Jens H.
2016-10-22
Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment. [Arenaviridae; binomials; ICTV; International Committee on Taxonomy of Viruses; Mononegavirales; virus nomenclature; virus taxonomy.]
Tran, Phoebe; Waller, Lance
2015-01-01
Lyme disease has been the subject of many studies due to increasing incidence rates year after year and the severe complications that can arise in later stages of the disease. Negative binomial models have been used to model Lyme disease in the past with some success. However, there has been little focus on the reliability and consistency of these models when they are used to study Lyme disease at multiple spatial scales. This study seeks to explore how sensitive/consistent negative binomial models are when they are used to study Lyme disease at different spatial scales (at the regional and sub-regional levels). The study area includes the thirteen states in the Northeastern United States with the highest Lyme disease incidence during the 2002-2006 period. Lyme disease incidence at county level for the period of 2002-2006 was linked with several previously identified key landscape and climatic variables in a negative binomial regression model for the Northeastern region and two smaller sub-regions (the New England sub-region and the Mid-Atlantic sub-region). This study found that negative binomial models, indeed, were sensitive/inconsistent when used at different spatial scales. We discuss various plausible explanations for such behavior of negative binomial models. Further investigation of the inconsistency and sensitivity of negative binomial models when used at different spatial scales is important for not only future Lyme disease studies and Lyme disease risk assessment/management but any study that requires use of this model type in a spatial context. Copyright © 2014 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Liu Tang-Kun; Zhang Kang-Long; Tao Yu; Shan Chuan-Jia; Liu Ji-Bing
2016-01-01
The temporal evolution of the degree of entanglement between two atoms in a system of the binomial optical field interacting with two arbitrary entangled atoms is investigated. The influence of the strength of the dipole–dipole interaction between two atoms, probabilities of the Bernoulli trial, and particle number of the binomial optical field on the temporal evolution of the atomic entanglement are discussed. The result shows that the two atoms are always in the entanglement state. Moreover, if and only if the two atoms are initially in the maximally entangled state, the entanglement evolution is not affected by the parameters, and the degree of entanglement is always kept as 1. (paper)
On extinction time of a generalized endemic chain-binomial model.
Aydogmus, Ozgur
2016-09-01
We considered a chain-binomial epidemic model not conferring immunity after infection. Mean field dynamics of the model has been analyzed and conditions for the existence of a stable endemic equilibrium are determined. The behavior of the chain-binomial process is probabilistically linked to the mean field equation. As a result of this link, we were able to show that the mean extinction time of the epidemic increases at least exponentially as the population size grows. We also present simulation results for the process to validate our analytical findings. Copyright © 2016 Elsevier Inc. All rights reserved.
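The extinction-time behaviour described in this abstract can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions, not the authors' model: an SIS-type chain binomial in which each susceptible escapes infection by each infective independently with probability 1 − p, and infectives return to the susceptible pool after one time step (no immunity conferred).

```python
import random

def chain_binomial_sis(n, i0, p, t_max, rng):
    """Toy SIS-type chain binomial (no immunity): each of the S_t
    susceptibles escapes infection by each of the I_t infectives
    independently with probability 1 - p; infectives recover back
    into the susceptible pool after one time step."""
    i = i0
    for t in range(t_max):
        if i == 0:
            return t  # extinction time
        p_inf = 1.0 - (1.0 - p) ** i  # per-susceptible infection probability
        i = sum(rng.random() < p_inf for _ in range(n - i))
    return t_max  # censored: epidemic still ongoing at t_max

rng = random.Random(42)
times = [chain_binomial_sis(n=60, i0=5, p=0.03, t_max=400, rng=rng)
         for _ in range(200)]
mean_extinction = sum(times) / len(times)  # censored mean time to extinction
```

Repeating the experiment for increasing n (with a correspondingly scaled contact probability) gives a numerical feel for how quickly extinction times grow with population size.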
Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.
Pradhan, Vivek; Evans, John C; Banerjee, Tathagata
2016-08-01
In testing for non-inferiority or superiority in a single arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval. © The Author(s) 2013.
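As an illustration of how the choice of interval can matter, the following sketch (not from the paper) computes two of the commonly used intervals, Wald and Wilson, for the same data; in a non-inferiority test one checks whether the interval's lower limit clears the margin, so intervals with different lower limits can lead to different decisions.

```python
from math import sqrt

def wald_ci(x, n, z=1.96):
    """Wald interval for a binomial proportion, truncated to [0, 1]."""
    p = x / n
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, z=1.96):
    """Wilson (score) interval for a binomial proportion."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# 29 responders out of 40: the two intervals have different lower limits,
# so a non-inferiority margin falling between them would flip the decision.
lo_wald, hi_wald = wald_ci(29, 40)
lo_wilson, hi_wilson = wilson_ci(29, 40)
```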
Blocking for Sequential Political Experiments.
Moore, Ryan T; Moore, Sally A
2013-10-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects "trickle in" to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion.
Sequential logic analysis and synthesis
Cavanagh, Joseph
2007-01-01
Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinational logic and Boolean algebra.
Increasing efficiency of preclinical research by group sequential designs.
Directory of Open Access Journals (Sweden)
Konrad Neumann
2017-03-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to resource savings of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain.
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the cost of estimating expectations to a given accuracy. They are particularly relevant for computational problems in which approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretically exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even though samples at all resolutions are now correlated.
Merging particle filter for sequential data assimilation
Directory of Open Access Journals (Sweden)
S. Nakano
2007-07-01
A new filtering technique for sequential data assimilation, the merging particle filter (MPF), is proposed. The MPF is devised to avoid the degeneration problem, which is inevitable in the particle filter (PF), without prohibitive computational cost. In addition, it is applicable to cases in which a nonlinear relationship exists between a state and observed data, where the application of the ensemble Kalman filter (EnKF) is not effectual. In the MPF, the filtering procedure is performed based on sampling of a forecast ensemble, as in the PF. However, unlike the PF, each member of a filtered ensemble is generated by merging multiple samples from the forecast ensemble such that the mean and covariance of the filtered distribution are approximately preserved. This merging of multiple samples allows the degeneration problem to be avoided. In the present study, the newly proposed MPF technique is introduced, and its performance is demonstrated experimentally.
Attack Trees with Sequential Conjunction
Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando
2015-01-01
We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
A binomial random sum of present value models in investment analysis
Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος
1997-01-01
Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.
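A minimal sketch of the idea with illustrative parameters (not the paper's model): if each of n scheduled cash flows is realised independently with probability p, the total present value is a binomial random sum whose expectation is p times the full annuity value, which a Monte Carlo run confirms.

```python
import random

# Hypothetical investment: 12 annual cash flows of 100, each realised
# independently with probability 0.6, discounted at 5% per year.
n_flows, p = 12, 0.6
cashflow, rate = 100.0, 0.05

def present_value(t):
    """PV of one cash flow received at the end of period t."""
    return cashflow / (1 + rate) ** t

# Analytic expectation of the binomial random sum: E[S] = p * sum_t PV_t.
expected = p * sum(present_value(t) for t in range(1, n_flows + 1))

rng = random.Random(0)
sims = []
for _ in range(20000):
    sims.append(sum(present_value(t)
                    for t in range(1, n_flows + 1) if rng.random() < p))
mc = sum(sims) / len(sims)  # Monte Carlo estimate of E[S]
```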
Justin S. Crotteau; Martin W. Ritchie; J. Morgan. Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
Topology of unitary groups and the prime orders of binomial coefficients
Duan, HaiBao; Lin, XianZu
2017-09-01
Let $c:SU(n)\rightarrow PSU(n)=SU(n)/\mathbb{Z}_{n}$ be the quotient map of the special unitary group $SU(n)$ by its center subgroup $\mathbb{Z}_{n}$. We determine the induced homomorphism $c^{\ast}: H^{\ast}(PSU(n))\rightarrow H^{\ast}(SU(n))$ on cohomologies by computing with the prime orders of binomial coefficients.
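The prime order (p-adic valuation) of a binomial coefficient that such computations rely on can be found directly via Kummer's theorem: the exponent of p in C(n, k) equals the number of carries when adding k and n − k in base p. A short sketch:

```python
def prime_order_binomial(n, k, p):
    """Exponent of the prime p in C(n, k), via Kummer's theorem:
    count the carries when adding k and n - k in base p."""
    carries, carry, a, b = 0, 0, k, n - k
    while a > 0 or b > 0 or carry:
        s = a % p + b % p + carry
        carry = 1 if s >= p else 0
        carries += carry
        a //= p
        b //= p
    return carries
```

For example, C(10, 5) = 252 = 2² · 3² · 7, which the carry counts reproduce.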
A Bayesian Approach to Functional Mixed Effect Modeling for Longitudinal Data with Binomial Outcomes
Kliethermes, Stephanie; Oleson, Jacob
2014-01-01
Longitudinal growth patterns are routinely seen in medical studies where individual and population growth is followed over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear, quadratic); however, these relationships may not accurately capture growth over time. Functional mixed effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well-developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible and thus estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. PMID:24723495
A mixed-binomial model for Likert-type personality measures.
Allik, Jüri
2014-01-01
Personality measurement is based on the idea that values on an unobservable latent variable determine the distribution of answers on a manifest response scale. Typically, it is assumed in the Item Response Theory (IRT) that latent variables are related to the observed responses through continuous normal or logistic functions, determining the probability with which one of the ordered response alternatives on a Likert-scale item is chosen. Based on an analysis of 1731 self- and other-rated responses on the 240 NEO PI-3 questionnaire items, it was proposed that a viable alternative is a finite number of latent events which are related to manifest responses through a binomial function which has only one parameter-the probability with which a given statement is approved. For the majority of items, the best fit was obtained with a mixed-binomial distribution, which assumes two different subpopulations who endorse items with two different probabilities. It was shown that the fit of the binomial IRT model can be improved by assuming that about 10% of random noise is contained in the answers and by taking into account response biases toward one of the response categories. It was concluded that the binomial response model for the measurement of personality traits may be a workable alternative to the more habitual normal and logistic IRT models.
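A sketch of the response model described above, with illustrative (not fitted) parameters: an (m+1)-point Likert response is treated as a binomial count with a single endorsement probability, and a two-component mixture lets two subpopulations endorse the item with different probabilities.

```python
from math import comb

def mixed_binomial_pmf(k, m, p1, p2, w):
    """P(response = k) on an (m+1)-point Likert item under a two-component
    binomial mixture: a fraction w of respondents endorse with probability
    p1, the remainder with probability p2 (parameters illustrative)."""
    def binom(p):
        return comb(m, k) * p ** k * (1 - p) ** (m - k)
    return w * binom(p1) + (1 - w) * binom(p2)

# Response distribution over a 5-point scale (m = 4) for two subpopulations
# with low (0.2) and high (0.8) endorsement probabilities.
dist = [mixed_binomial_pmf(k, 4, 0.2, 0.8, 0.4) for k in range(5)]
```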
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Binomial Coefficients Modulo a Prime--A Visualization Approach to Undergraduate Research
Bardzell, Michael; Poimenidou, Eirini
2011-01-01
In this article we present, as a case study, results of undergraduate research involving binomial coefficients modulo a prime "p." We will discuss how undergraduates were involved in the project, even with a minimal mathematical background beforehand. There are two main avenues of exploration described to discover these binomial…
Computational results on the compound binomial risk model with nonhomogeneous claim occurrences
Tuncel, A.; Tank, F.
2013-01-01
The aim of this paper is to give a recursive formula for non-ruin (survival) probability when the claim occurrences are nonhomogeneous in the compound binomial risk model. We give recursive formulas for non-ruin (survival) probability and for distribution of the total number of claims under the
International Nuclear Information System (INIS)
Valor, Alma; Alfonso, Lester; Caleyo, Francisco; Vidal, Julio; Perez-Baruch, Eloy; Hallen, José M.
2015-01-01
Highlights:
• Observed external-corrosion defects in underground pipelines revealed a tendency to cluster.
• The Poisson distribution is unable to fit extensive count data for this type of defect.
• In contrast, the negative binomial distribution provides a suitable count model for them.
• Two spatial stochastic processes lead to the negative binomial distribution for defect counts: the Gamma-Poisson mixed process and the compound Poisson process.
• A Roger's process also arises as a plausible temporal stochastic process leading to corrosion defect clustering and to negative binomially distributed defect counts.
Abstract: The spatial distribution of external corrosion defects in buried pipelines is usually described as a Poisson process, which leads to corrosion defects being randomly distributed along the pipeline. However, in real operating conditions, the spatial distribution of defects considerably departs from Poisson statistics due to the aggregation of defects in groups or clusters. In this work, the statistical analysis of real corrosion data from underground pipelines operating in southern Mexico leads to the conclusion that the negative binomial distribution provides a better description for defect counts. The origin of this distribution from several processes is discussed. The analysed processes are: mixed Gamma-Poisson, compound Poisson and Roger's processes. The physical reasons behind them are discussed for the specific case of soil corrosion.
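The Gamma-Poisson route to the negative binomial mentioned above is easy to check numerically. This sketch (illustrative parameters, not the pipeline data) draws Poisson counts whose rates are Gamma-distributed and compares the sample variance with the negative binomial value μ + μ²/r.

```python
import math
import random

# Rate Lambda ~ Gamma(shape=r, scale=theta), so the marginal count is
# negative binomial with mean mu = r*theta and variance mu + mu^2/r.
r, theta = 2.0, 3.0
rng = random.Random(7)

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

counts = [poisson_draw(rng.gammavariate(r, theta), rng) for _ in range(50000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
nb_var = r * theta + (r * theta) ** 2 / r  # mu + mu^2/r = 24 here
```

The sample variance lands near 24 rather than the Poisson value of 6, reproducing the overdispersion that motivates the negative binomial count model.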
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
Determining order-up-to levels under periodic review for compound binomial (intermittent) demand
Teunter, R. H.; Syntetos, A. A.; Babai, M. Z.
2010-01-01
We propose a new method for determining order-up-to levels for intermittent demand items in a periodic review system. Contrary to existing methods, we exploit the intermittent character of demand by modelling lead time demand as a compound binomial process. In an extensive numerical study using
Negative binomial distribution for multiplicity distributions in e+e- annihilation
International Nuclear Information System (INIS)
Chew, C.K.; Lim, Y.K.
1986-01-01
The authors show that the negative binomial distribution gives an excellent fit to the available charged-particle multiplicity distributions of e+e- annihilation into hadrons at three different energies: √s = 14, 22 and 34 GeV.
Time evolution of negative binomial optical field in a diffusion channel
International Nuclear Information System (INIS)
Liu Tang-Kun; Wu Pan-Pan; Shan Chuan-Jia; Liu Ji-Bing; Fan Hong-Yi
2015-01-01
We find the time evolution law of a negative binomial optical field in a diffusion channel. We reveal that by adjusting the diffusion parameter, the photon number can be controlled. Therefore, the diffusion process can be considered a quantum controlling scheme through photon addition. (paper)
Joint Analysis of Binomial and Continuous Traits with a Recursive Model
DEFF Research Database (Denmark)
Varona, Louis; Sorensen, Daniel
2014-01-01
This work presents a model for the joint analysis of a binomial and a Gaussian trait using a recursive parametrization that leads to a computationally efficient implementation. The model is illustrated in an analysis of mortality and litter size in two breeds of Danish pigs, Landrace and Yorkshir...
International Nuclear Information System (INIS)
Lo Franco, R.; Compagno, G.; Messina, A.; Napoli, A.
2007-01-01
We introduce the N-photon quantum superposition of two orthogonal generalized binomial states of an electromagnetic field. We then propose, using resonant atom-cavity interactions, nonconditional schemes to generate and reveal such a quantum superposition for the two-photon case in a single-mode high-Q cavity. We finally discuss the implementation of the proposed schemes
Learning Binomial Probability Concepts with Simulation, Random Numbers and a Spreadsheet
Rochowicz, John A., Jr.
2005-01-01
This paper introduces the reader to the concepts of binomial probability and simulation. A spreadsheet is used to illustrate these concepts. Random number generators are great technological tools for demonstrating the concepts of probability. Ideas of approximation, estimation, and mathematical usefulness provide numerous ways of learning…
Raw and Central Moments of Binomial Random Variables via Stirling Numbers
Griffiths, Martin
2013-01-01
We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
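The article's recursive route goes through Stirling numbers of the first kind; a closely related, standard identity expresses raw moments through Stirling numbers of the second kind, E[X^m] = Σ_k S(m,k) (n)_k p^k with (n)_k the falling factorial. A sketch of that identity:

```python
def stirling2(m, k, _memo={}):
    """Stirling numbers of the second kind, S(m, k), via the usual
    recurrence S(m, k) = k*S(m-1, k) + S(m-1, k-1)."""
    if (m, k) in _memo:
        return _memo[(m, k)]
    if k == m:
        return 1
    if k == 0 or k > m:
        return 0
    val = k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)
    _memo[(m, k)] = val
    return val

def binomial_raw_moment(m, n, p):
    """E[X^m] for X ~ Binomial(n, p) using
    E[X^m] = sum_k S(m, k) * (n)_k * p^k, (n)_k = falling factorial."""
    total = 0.0
    for k in range(m + 1):
        falling = 1
        for j in range(k):
            falling *= (n - j)
        total += stirling2(m, k) * falling * p ** k
    return total
```

For m = 2 this reproduces E[X²] = np(1 − p) + (np)², as expected.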
Studying the Binomial Distribution Using LabVIEW
George, Danielle J.; Hammer, Nathan I.
2015-01-01
This undergraduate physical chemistry laboratory exercise introduces students to the study of probability distributions both experimentally and using computer simulations. Students perform the classic coin toss experiment individually and then pool all of their data together to study the effect of experimental sample size on the binomial…
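The pooled coin-toss experiment is easy to mimic in software (this is a generic sketch, not the LabVIEW exercise itself): toss 10 coins per trial many times and compare the empirical frequencies of k heads with the Binomial(10, 1/2) pmf.

```python
import random
from math import comb

rng = random.Random(3)
n_coins, trials = 10, 20000
counts = [0] * (n_coins + 1)
for _ in range(trials):
    heads = sum(rng.random() < 0.5 for _ in range(n_coins))
    counts[heads] += 1

empirical = [c / trials for c in counts]
theory = [comb(n_coins, k) * 0.5 ** n_coins for k in range(n_coins + 1)]
# Largest deviation between the simulated and exact distributions;
# it shrinks as the number of trials grows.
max_err = max(abs(e - t) for e, t in zip(empirical, theory))
```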
NBLDA: negative binomial linear discriminant analysis for RNA-Seq data.
Dong, Kai; Zhao, Hongyu; Tong, Tiejun; Wan, Xiang
2016-09-13
RNA-sequencing (RNA-Seq) has become a powerful technology to characterize gene expression profiles because it is more accurate and comprehensive than microarrays. Although statistical methods that have been developed for microarray data can be applied to RNA-Seq data, they are not ideal due to the discrete nature of RNA-Seq data. The Poisson distribution and negative binomial distribution are commonly used to model count data. Recently, Witten (Annals Appl Stat 5:2493-2518, 2011) proposed a Poisson linear discriminant analysis for RNA-Seq data. The Poisson assumption may not be as appropriate as the negative binomial distribution when biological replicates are available and in the presence of overdispersion (i.e., when the variance is larger than or equal to the mean). However, it is more complicated to model negative binomial variables because they involve a dispersion parameter that needs to be estimated. In this paper, we propose a negative binomial linear discriminant analysis for RNA-Seq data. By Bayes' rule, we construct the classifier by fitting a negative binomial model, and propose some plug-in rules to estimate the unknown parameters in the classifier. The relationship between the negative binomial classifier and the Poisson classifier is explored, with a numerical investigation of the impact of dispersion on the discriminant score. Simulation results show the superiority of our proposed method. We also analyze two real RNA-Seq data sets to demonstrate the advantages of our method in real-world applications. We have developed a new classifier using the negative binomial model for RNA-seq data classification. Our simulation results show that our proposed classifier performs better than existing methods. The proposed classifier can serve as an effective tool for classifying RNA-seq data. Based on the comparison results, we have provided some guidelines for scientists to decide which method should be used in the discriminant analysis of RNA-Seq data.
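As a hedged sketch of the Bayes classification rule described above (toy parameters, not fitted by the paper's plug-in rules): score a count vector under independent negative binomial models per gene and pick the class with the highest posterior log-score.

```python
from math import lgamma, log

def nb_loglik(x, mu, phi):
    """log NB(x; mean mu, dispersion phi), parameterized so that
    Var(X) = mu + phi * mu^2, with size r = 1/phi."""
    r = 1.0 / phi
    return (lgamma(x + r) - lgamma(r) - lgamma(x + 1)
            + r * log(r / (r + mu)) + x * log(mu / (r + mu)))

def classify(x_vec, class_means, phi, priors):
    """Toy NBLDA-style rule: assign x_vec (gene counts) to the class
    maximizing log prior + sum of per-gene NB log-likelihoods.
    Parameters here are illustrative, not estimated from data."""
    best, best_score = None, float("-inf")
    for k, mus in class_means.items():
        score = log(priors[k]) + sum(nb_loglik(x, mu, phi)
                                     for x, mu in zip(x_vec, mus))
        if score > best_score:
            best, best_score = k, score
    return best

class_means = {"A": [5.0, 20.0], "B": [20.0, 5.0]}  # toy per-gene means
priors = {"A": 0.5, "B": 0.5}
```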
Energy Technology Data Exchange (ETDEWEB)
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
A Mechanistic Beta-Binomial Probability Model for mRNA Sequencing Data.
Smith, Gregory R; Birtwistle, Marc R
2016-01-01
A main application for mRNA sequencing (mRNAseq) is determining lists of differentially-expressed genes (DEGs) between two or more conditions. Several software packages exist to produce DEGs from mRNAseq data, but they typically yield different DEGs, sometimes markedly so. The underlying probability model used to describe mRNAseq data is central to deriving DEGs, and not surprisingly most packages use different models and assumptions to analyze mRNAseq data. Here, we propose a mechanistic justification to model mRNAseq as a binomial process, with data from technical replicates given by a binomial distribution, and data from biological replicates well-described by a beta-binomial distribution. We demonstrate good agreement of this model with two large datasets. We show that an emergent feature of the beta-binomial distribution, given parameter regimes typical for mRNAseq experiments, is the well-known quadratic polynomial scaling of variance with the mean. The so-called dispersion parameter controls this scaling, and our analysis suggests that the dispersion parameter is a continually decreasing function of the mean, as opposed to current approaches that impose an asymptotic value to the dispersion parameter at moderate mean read counts. We show how this leads to current approaches overestimating variance for moderately to highly expressed genes, which inflates false negative rates. Describing mRNAseq data with a beta-binomial distribution thus may be preferred since its parameters are relatable to the mechanistic underpinnings of the technique and may improve the consistency of DEG analysis across software packages, particularly for moderately to highly expressed genes.
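The overdispersion underlying this beta-binomial description can be checked with a small simulation (illustrative parameters, not mRNAseq data): draw a per-replicate success probability from a Beta(a, b) and then a binomial count; the marginal is beta-binomial with variance n·p̄·(1−p̄)·(a+b+n)/(a+b+1), which exceeds the plain binomial variance.

```python
import random

a, b, n = 2.0, 8.0, 100  # Beta(a, b) success probability, n trials per draw
rng = random.Random(11)

def beta_binomial_draw():
    """One beta-binomial count: p ~ Beta(a, b), then Binomial(n, p)."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(n))

draws = [beta_binomial_draw() for _ in range(40000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)

pbar = a / (a + b)
theory_var = n * pbar * (1 - pbar) * (a + b + n) / (a + b + 1)
```

With these parameters the sample variance is roughly ten times the binomial value np̄(1−p̄), illustrating the extra replicate-to-replicate spread the abstract attributes to biological replicates.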
Sequential Analysis of Metals in Municipal Dumpsite Composts of ...
African Journals Online (AJOL)
... Ni) in Municipal dumpsite compost were determined by the sequential extraction method. Chemical parameters such as pH, conductivity, and organic carbon contents of the samples were also determined. Analysis of the extracts was carried out by atomic absorption spectrophotometer machine (Buck Scientific VPG 210).
Continuous sequential boundaries for vaccine safety surveillance.
Li, Rongxia; Stewart, Brock; Weintraub, Eric; McNeil, Michael M
2014-08-30
Various recently developed sequential methods have been used to detect signals for post-marketing surveillance in drug and vaccine safety. Among these, the maximized sequential probability ratio test (MaxSPRT) has been used to detect elevated risks of adverse events following vaccination using large healthcare databases. However, a limitation of MaxSPRT is that it only provides a time-invariant flat boundary. In this study, we propose the use of time-varying boundaries for controlling how type I error is distributed throughout the surveillance period. This is especially useful in two scenarios: (i) when we desire generally larger sample sizes before a signal is generated, for example, when early adopters are not representative of the larger population; and (ii) when it is desired for a signal to be generated as early as possible, for example, when the adverse event is considered rare but serious. We consider four specific time-varying boundaries (which we call critical value functions), and we study their statistical power and average time to signal detection. The methodology we present here can be viewed as a generalization or flexible extension of MaxSPRT. Published 2014. This article is a US Government work and is in the public domain in the USA.
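The idea of replacing MaxSPRT's flat boundary with a time-varying critical value function can be illustrated with a toy Poisson-LLR monitor. This is a schematic reconstruction rather than the authors' method; the boundaries, rates, and period structure below are invented for illustration:

```python
import math
import random

random.seed(7)

def llr(events, expected):
    """Poisson log-likelihood ratio used in MaxSPRT-style surveillance."""
    if events <= expected:
        return 0.0
    return events * math.log(events / expected) - (events - expected)

def monitor(rate, expected_per_period, periods, boundary):
    """Return the first period t at which the LLR crosses boundary(t), else None."""
    events, expected = 0, 0.0
    for t in range(1, periods + 1):
        expected += expected_per_period
        # observed adverse events this period among 1000 exposures
        events += sum(1 for _ in range(1000) if random.random() < rate)
        if llr(events, expected) > boundary(t):
            return t
    return None

flat = lambda t: 4.0                  # time-invariant boundary (MaxSPRT-like)
late = lambda t: 6.0 - 2.0 * t / 12   # stricter early, looser late (hypothetical)

signal_flat = monitor(0.005, 2.0, 12, flat)   # true rate elevated vs 2/period null
signal_late = monitor(0.005, 2.0, 12, late)
print(signal_flat, signal_late)
```

Choosing `boundary` shapes how the type I error is spent over the surveillance period, which is the central design freedom the paper exploits.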
Kozak, Joanna; Wójtowicz, Marzena; Gawenda, Nadzieja; Kościelniak, Paweł
2011-06-15
An automatic sequential injection system, combining monosegmented flow analysis, sequential injection analysis and sequential injection titration is proposed for acidity determination. The system enables controllable sample dilution and generation of standards of required concentration in a monosegmented sequential injection manner, sequential injection titration of the prepared solutions, data collecting, and handling. It has been tested on spectrophotometric determination of acetic, citric and phosphoric acids with sodium hydroxide used as a titrant and phenolphthalein or thymolphthalein (in the case of phosphoric acid determination) as indicators. Accuracy better than |4.4|% (RE) and repeatability better than 2.9% (RSD) have been obtained. It has been applied to the determination of total acidity in vinegars and various soft drinks. The system provides low sample (less than 0.3 mL) consumption. On average, analysis of a sample takes several minutes. Copyright © 2011 Elsevier B.V. All rights reserved.
Sequential analysis of RNA synthesis by microchip electrophoresis.
Umemoto, Yoshihiro; Kataoka, Masatoshi; Yatsushiro, Shouki; Watanabe, Masahiro; Kido, Jun-Ichi; Kakuhata, Rei; Yamamoto, Takenori; Shinohara, Yasuo; Baba, Yoshinobu
2009-05-01
We describe the potential of microchip electrophoresis with a Hitachi SV1100, which can be used to evaluate the integrity of total RNA, for the analysis of synthesized RNA. There was little interference by DNA and/or the components of the in vitro transcription system with the microchip electrophoresis. On sequential analysis, the fluorescence intensity corresponding to the synthesized RNA increased in a time-dependent manner as the RNA synthesis reaction proceeded. A result can be obtained in 160 s, and only one-tenth of the sample volume of the conventional method is required. These results indicate the potential of microchip electrophoresis for the sequential analysis of RNA synthesis.
The pursuit of balance in sequential randomized trials
Directory of Open Access Journals (Sweden)
Raymond P. Guiteras
2016-06-01
Full Text Available In many randomized trials, subjects enter the sample sequentially. Because the covariates for all units are not known in advance, standard methods of stratification do not apply. We describe and assess the method of DA-optimal sequential allocation (Atkinson, 1982) for balancing stratification covariates across treatment arms. We provide simulation evidence that the method can provide substantial improvements in precision over commonly employed alternatives. We also describe our experience implementing the method in a field trial of a clean water and handwashing intervention in Dhaka, Bangladesh, the first time the method has been used. We provide advice and software for future researchers.
Group-sequential clinical trials with multiple co-objectives
Hamasaki, Toshimitsu; Evans, Scott R; Ochiai, Toshimitsu
2016-01-01
This book focuses on group sequential methods for clinical trials with co-primary endpoints based on the decision-making frameworks for: (1) rejecting the null hypothesis (stopping for efficacy), (2) rejecting the alternative hypothesis (stopping for futility), and (3) rejecting the null or alternative hypothesis (stopping for either futility or efficacy), where the trial is designed to evaluate whether the intervention is superior to the control on all endpoints. For assessing futility, there are two fundamental approaches, i.e., the decision to stop for futility based on the conditional probability of rejecting the null hypothesis, and the other based on stopping boundaries using group sequential methods. In this book, the latter approach is discussed. The book also briefly deals with the group sequential methods for clinical trials designed to evaluate whether the intervention is superior to the control on at least one endpoint. In addition, the book describes sample size recalculation and the resulting ef...
Possibility and challenges of conversion of current virus species names to Linnaean binomials
Postler, Thomas; Clawson, Anna N.; Amarasinghe, Gaya K.; Basler, Christopher F.; Bavari, Sina; Benko, Maria; Blasdell, Kim R.; Briese, Thomas; Buchmeier, Michael J.; Bukreyev, Alexander; Calisher, Charles H.; Chandran, Kartik; Charrel, Remi; Clegg, Christopher S.; Collins, Peter L.; De la Torre, Juan Carlos; DeRisi, Joseph L.; Dietzgen, Ralf G.; Dolnik, Olga; Durrwald, Ralf; Dye, John M.; Easton, Andrew J.; Emonet, Sebastian; Formenty, Pierre; Fouchier, Ron A. M.; Ghedin, Elodie; Gonzalez, Jean-Paul; Harrach, Balazs; Hewson, Roger; Horie, Masayuki; Jiang, Daohong; Kobinger, Gary P.; Kondo, Hideki; Kropinski, Andrew; Krupovic, Mart; Kurath, Gael; Lamb, Robert A.; Leroy, Eric M.; Lukashevich, Igor S.; Maisner, Andrea; Mushegian, Arcady; Netesov, Sergey V.; Nowotny, Norbert; Patterson, Jean L.; Payne, Susan L.; Paweska, Janusz T.; Peters, C.J.; Radoshitzky, Sheli; Rima, Bertus K.; Romanowski, Victor; Rubbenstroth, Dennis; Sabanadzovic, Sead; Sanfacon, Helene; Salvato, Maria; Schwemmle, Martin; Smither, Sophie J.; Stenglein, Mark; Stone, D.M.; Takada, Ayato; Tesh, Robert B.; Tomonaga, Keizo; Tordo, N.; Towner, Jonathan S.; Vasilakis, Nikos; Volchkov, Victor E.; Jensen, Victoria; Walker, Peter J.; Wang, Lin-Fa; Varsani, Arvind; Whitfield, Anna E.; Zerbini, Francisco Murilo; Kuhn, Jens H.
2017-01-01
Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment.
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and negative binomial item count technique) which replace several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide closed form variance estimates and confidence intervals within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
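The Poisson item count idea can be sketched as follows: each respondent reports the sum of an innocuous Poisson count with known mean and the sensitive indicator, so the sensitive proportion is recovered as the mean response minus the known Poisson mean. This toy simulation (with an assumed known lambda; it omits the authors' variance formulas and full design) illustrates the estimator:

```python
import math
import random

random.seed(3)

def poisson(lam):
    """Poisson draw via Knuth's multiplication method (stdlib has no sampler)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

lam, true_pi, n = 2.0, 0.30, 20000   # known innocuous mean, sensitive prevalence
responses = [poisson(lam) + (1 if random.random() < true_pi else 0)
             for _ in range(n)]
pi_hat = sum(responses) / n - lam    # E[Y] = lam + pi  =>  pi = E[Y] - lam
print(pi_hat)                        # close to 0.30
```

Because each reported total is consistent with many innocuous counts, an individual response never reveals the sensitive indicator with certainty.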
The option to expand a project: its assessment with the binomial options pricing model
Directory of Open Access Journals (Sweden)
Salvador Cruz Rambaud
Full Text Available Traditional methods of investment appraisal, like the Net Present Value, are not able to include the value of the operational flexibility of the project. In this paper, real options, and more specifically the option to expand, are assumed to be included in the project information in addition to the expected cash flows. Thus, to calculate the total value of the project, we are going to apply the methodology of the Net Present Value to the different scenarios derived from the existence of the real option to expand. Taking into account the analogy between real and financial options, the value of including an option to expand is explored by using the binomial options pricing model. In this way, estimating the value of the option to expand is a tool which facilitates the control of the uncertainty element implicit in the project. Keywords: Real options, Option to expand, Binomial options pricing model, Investment project appraisal
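A minimal version of valuing the option to expand on a binomial lattice might look as follows. The up/down factors, expansion scale, and cost are hypothetical, and the rollback treats expansion as exercisable at every node, which is one common modeling choice rather than the paper's specific setup:

```python
import math

def expand_option_value(v0, up, down, r, steps, scale, cost):
    """Project value with an option to expand (pay `cost`, multiply the project
    value by `scale`), rolled back on a CRR-style binomial lattice."""
    q = (math.exp(r) - down) / (up - down)   # risk-neutral up probability per step
    disc = math.exp(-r)
    # exercise decision at maturity
    layer = [max(v, scale * v - cost)
             for v in (v0 * up**j * down**(steps - j) for j in range(steps + 1))]
    for n in range(steps - 1, -1, -1):
        node_vals = [v0 * up**j * down**(n - j) for j in range(n + 1)]
        layer = [max(max(v, scale * v - cost),                        # expand now
                     disc * (q * layer[j + 1] + (1 - q) * layer[j]))  # or wait
                 for j, v in enumerate(node_vals)]
    return layer[0]

base = 100.0   # static NPV of the project, without flexibility
with_option = expand_option_value(base, 1.2, 0.85, 0.03, 3, 1.5, 60.0)
print(with_option - base)   # positive: the expansion flexibility adds value
```

The difference between the rolled-back value and the static NPV is precisely the value of the real option that traditional NPV analysis misses.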
A Bayesian non-inferiority test for two independent binomial proportions.
Kawasaki, Yohei; Miyaoka, Etsuo
2013-01-01
In drug development, non-inferiority tests are often employed to determine the difference between two independent binomial proportions. Many test statistics for non-inferiority are based on the frequentist framework. However, research on non-inferiority in the Bayesian framework is limited. In this paper, we suggest a new Bayesian index τ = P(π₁ > π₂ - Δ₀|X₁, X₂), where X₁ and X₂ denote binomial random variables for trials of sizes n₁ and n₂ and parameters π₁ and π₂, respectively, and the non-inferiority margin is Δ₀ > 0. We show two calculation methods for τ, an approximate method that uses normal approximation and an exact method that uses an exact posterior PDF. We compare the approximate probability with the exact probability for τ. Finally, we present the results of actual clinical trials to show the utility of index τ. Copyright © 2013 John Wiley & Sons, Ltd.
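The normal-approximation route to the index τ can be sketched directly from the Beta posteriors, with a Monte Carlo check standing in for the exact posterior calculation. The priors, counts, and margin below are hypothetical:

```python
import math
import random

random.seed(0)

def tau_normal(x1, n1, x2, n2, delta0, a=1.0, b=1.0):
    """Normal approximation to tau = P(pi1 > pi2 - delta0 | X1, X2)
    under independent Beta(a, b) priors (a sketch, not the paper's code)."""
    a1, b1 = x1 + a, n1 - x1 + b
    a2, b2 = x2 + a, n2 - x2 + b
    m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)
    v1 = a1 * b1 / ((a1 + b1) ** 2 * (a1 + b1 + 1))
    v2 = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    z = (m1 - m2 + delta0) / math.sqrt(v1 + v2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tau_mc(x1, n1, x2, n2, delta0, a=1.0, b=1.0, draws=100000):
    """Monte Carlo stand-in for the exact posterior computation."""
    hits = sum(random.betavariate(x1 + a, n1 - x1 + b)
               > random.betavariate(x2 + a, n2 - x2 + b) - delta0
               for _ in range(draws))
    return hits / draws

t_approx = tau_normal(44, 60, 48, 60, 0.1)   # hypothetical trial counts, margin 0.1
t_mc = tau_mc(44, 60, 48, 60, 0.1)
print(round(t_approx, 3), round(t_mc, 3))
```

With moderate sample sizes the two answers agree closely, which is the comparison the paper formalizes.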
Nested (inverse) binomial sums and new iterated integrals for massive Feynman diagrams
International Nuclear Information System (INIS)
Ablinger, Jakob; Schneider, Carsten; Bluemlein, Johannes; Raab, Clemens G.
2014-07-01
Nested sums containing binomial coefficients occur in the computation of massive operator matrix elements. Their associated iterated integrals lead to alphabets including radicals, for which we determined a suitable basis. We discuss algorithms for converting between sum and integral representations, mainly relying on the Mellin transform. To aid the conversion we worked out dedicated rewrite rules, based on which also some general patterns emerging in the process can be obtained.
Study on Emission Measurement of Vehicle on Road Based on Binomial Logit Model
Aly, Sumarni Hamid; Selintung, Mary; Ramli, Muhammad Isran; Sumi, Tomonori
2011-01-01
This research attempts to evaluate emission measurement of on road vehicle. In this regard, the research develops failure probability model of vehicle emission test for passenger car which utilize binomial logit model. The model focuses on failure of CO and HC emission test for gasoline cars category and Opacity emission test for diesel-fuel cars category as dependent variables, while vehicle age, engine size, brand and type of the cars as independent variables. In order to imp...
Directory of Open Access Journals (Sweden)
Patricio Peña-Rehbein
Full Text Available This paper describes the frequency and number of Sphyrion laevigatum in the skin of Genypterus blacodes, an important economic resource in Chile. The analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host could be described by a negative binomial distribution. The maximum number of parasites observed per host was two.
Computation of Clebsch-Gordan and Gaunt coefficients using binomial coefficients
International Nuclear Information System (INIS)
Guseinov, I.I.; Oezmen, A.; Atav, Ue
1995-01-01
Using binomial coefficients the Clebsch-Gordan and Gaunt coefficients were calculated for extremely large quantum numbers. The main advantage of this approach is directly calculating these coefficients, instead of using recursion relations. Accuracy of the results is quite high for quantum numbers l₁ and l₂ up to 100. Despite direct calculation, the CPU times are found comparable with those given in the related literature. 11 refs., 1 fig., 2 tabs
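Direct evaluation of Clebsch-Gordan coefficients from the factorial (binomial-sum) formula, rather than recursion, can be sketched for integer angular momenta as below. This is the generic textbook formula in the Condon-Shortley convention, not the authors' optimized routine:

```python
from math import factorial, sqrt

def clebsch_gordan(j1, j2, J, m1, m2, M):
    """Clebsch-Gordan coefficient <j1 m1; j2 m2 | J M> for integer angular
    momenta, evaluated directly from the factorial sum formula."""
    if m1 + m2 != M:
        return 0.0
    pre = sqrt((2 * J + 1)
               * factorial(J + j1 - j2) * factorial(J - j1 + j2)
               * factorial(j1 + j2 - J) / factorial(j1 + j2 + J + 1))
    pre *= sqrt(factorial(J + M) * factorial(J - M)
                * factorial(j1 - m1) * factorial(j1 + m1)
                * factorial(j2 - m2) * factorial(j2 + m2))
    s = 0.0
    for k in range(0, j1 + j2 - J + 1):
        denoms = (k, j1 + j2 - J - k, j1 - m1 - k,
                  j2 + m2 - k, J - j2 + m1 + k, J - j1 - m2 + k)
        if min(denoms) < 0:
            continue   # factorial of a negative integer: term vanishes
        term = 1.0
        for d in denoms:
            term /= factorial(d)
        s += (-1) ** k * term
    return pre * s

print(clebsch_gordan(1, 1, 2, 0, 0, 0))   # sqrt(2/3) ~ 0.8165
```

Because Python integers are arbitrary precision, the factorials themselves remain exact even for large quantum numbers; only the final floating-point combination loses precision.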
Negative binomial multiplicity distributions, a new empirical law for high energy collisions
International Nuclear Information System (INIS)
Van Hove, L.; Giovannini, A.
1987-01-01
For a variety of high energy hadron production reactions, recent experiments have confirmed the findings of the UA5 Collaboration that charged particle multiplicities in central (pseudo)rapidity intervals and in full phase space obey negative binomial (NB) distributions. The authors discuss the meaning of this new empirical law on the basis of new data and show that the data support the interpretation of the NB distributions in terms of a cascading mechanism of hadron production
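The cascading interpretation is often formalized as a gamma-mixed Poisson process, which yields exactly a negative binomial counting distribution. A quick simulated check of the NB mean-variance relation Var = mu + mu**2/k (parameters hypothetical, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

k, mu, n = 2.0, 10.0, 200000                     # NB shape ("clan" count) and mean
lam = rng.gamma(shape=k, scale=mu / k, size=n)   # fluctuating cascade intensity
counts = rng.poisson(lam)                        # Poisson emission given intensity

print(counts.mean())   # close to mu
print(counts.var())    # close to mu + mu**2 / k, well above the Poisson value mu
```

The excess of the variance over the Poisson value is the signature of multiplicity fluctuations that the NB fits capture.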
Generalized harmonic, cyclotomic, and binomial sums, their polylogarithms and special numbers
Energy Technology Data Exchange (ETDEWEB)
Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Bluemlein, J. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2013-10-15
A survey is given on mathematical structures which emerge in multi-loop Feynman diagrams. These are multiply nested sums, and, associated to them by an inverse Mellin transform, specific iterated integrals. Both classes lead to sets of special numbers. Starting with harmonic sums and polylogarithms we discuss recent extensions of these quantities as cyclotomic, generalized (cyclotomic), and binomially weighted sums, associated iterated integrals and special constants and their relations.
Price Estimation of Multi-State European Call Options Using the Binomial Model
Directory of Open Access Journals (Sweden)
Mila Kurniawaty, Endah Rokhmati
2011-05-01
Full Text Available An option is a contract that gives its holder the right to buy (call option) or sell (put option) a certain amount of an underlying asset at a specified price (strike price) within a specified period (before or at the expiration date). Recent developments in options have produced many pricing models for estimating option prices; one widely used model is the Black-Scholes formula. A multi-state option is an option whose payoff is based on two or more underlying assets. Several methods can be used to estimate the price of a call option; in particular, the finance community often uses the binomial model to estimate a wide range of option models, such as multi-state call options. From the binomial-model estimates of the call option, the best formula was then determined by computing errors with the mean square error. This error computation gave the average error of each formula in the binomial model. The average errors show that estimation using the 5-point formula is better than estimation using the 4-point formula.
Discrimination of numerical proportions: A comparison of binomial and Gaussian models.
Raidvee, Aire; Lember, Jüri; Allik, Jüri
2017-01-01
Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles-internal noise vs. using only a fraction of available information-which are both plausible descriptions of visual perception.
Genome-enabled predictions for binomial traits in sugar beet populations.
Biscarini, Filippo; Stevanato, Piergiorgio; Broccanello, Chiara; Stella, Alessandra; Saccomani, Massimo
2014-07-22
Genomic information can be used to predict not only continuous but also categorical (e.g. binomial) traits. Several traits of interest in human medicine and agriculture present a discrete distribution of phenotypes (e.g. disease status). Root vigor in sugar beet (B. vulgaris) is an example of binomial trait of agronomic importance. In this paper, a panel of 192 SNPs (single nucleotide polymorphisms) was used to genotype 124 sugar beet individual plants from 18 lines, and to classify them as showing "high" or "low" root vigor. A threshold model was used to fit the relationship between binomial root vigor and SNP genotypes, through the matrix of genomic relationships between individuals in a genomic BLUP (G-BLUP) approach. From a 5-fold cross-validation scheme, 500 testing subsets were generated. The estimated average cross-validation error rate was 0.000731 (0.073%). Only 9 out of 12326 test observations (500 replicates for an average test set size of 24.65) were misclassified. The estimated prediction accuracy was quite high. Such accurate predictions may be related to the high estimated heritability for root vigor (0.783) and to the few genes with large effect underlying the trait. Despite the sparse SNP panel, there was sufficient within-scaffold LD where SNPs with large effect on root vigor were located to allow for genome-enabled predictions to work.
Analysis of railroad tank car releases using a generalized binomial model.
Liu, Xiang; Hong, Yili
2015-11-01
The United States is experiencing an unprecedented boom in shale oil production, leading to a dramatic growth in petroleum crude oil traffic by rail. In 2014, U.S. railroads carried over 500,000 tank carloads of petroleum crude oil, up from 9500 in 2008 (a 5300% increase). In light of continual growth in crude oil by rail, there is an urgent national need to manage this emerging risk. This need has been underscored in the wake of several recent crude oil release incidents. In contrast to highway transport, which usually involves a tank trailer, a crude oil train can carry a large number of tank cars, having the potential for a large, multiple-tank-car release incident. Previous studies exclusively assumed that railroad tank car releases in the same train accident are mutually independent, thereby estimating the number of tank cars releasing given the total number of tank cars derailed based on a binomial model. This paper specifically accounts for dependent tank car releases within a train accident. We estimate the number of tank cars releasing given the number of tank cars derailed based on a generalized binomial model. The generalized binomial model provides a significantly better description for the empirical tank car accident data through our numerical case study. This research aims to provide a new methodology and new insights regarding the further development of risk management strategies for improving railroad crude oil transportation safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
Random sequential adsorption on fractals.
Ciesla, Michal; Barbasz, Jakub
2012-07-28
Irreversible adsorption of spheres on flat collectors of dimension d is studied for fractals (1 < d < 2) and for a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on measuring fundamental properties of the coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as RSA kinetics. The results obtained allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, the simulations show that, in general, most known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions.
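The RSA algorithm itself is simple to state in one dimension, where the jamming coverage is Rényi's car-parking constant (about 0.7476). The sketch below is the d = 1 analogue of the paper's simulations, not its fractal-collector code:

```python
import random

random.seed(2)

def rsa_parking(length, attempts):
    """1-d random sequential adsorption of unit segments (car-parking model):
    attempt uniform placements, irreversibly keep only non-overlapping ones."""
    placed = []
    for _ in range(attempts):
        x = random.uniform(0.0, length - 1.0)
        if all(abs(x - y) >= 1.0 for y in placed):
            placed.append(x)
    return len(placed) / length   # coverage ratio

coverage = rsa_parking(200.0, 50000)
print(coverage)
```

After many attempts the coverage saturates near the jamming limit; generalizing the collector to a fractal support changes this limiting ratio, which is the dependence the paper quantifies.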
Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.
2018-03-01
A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.
The Bacterial Sequential Markov Coalescent.
De Maio, Nicola; Wilson, Daniel J
2017-05-01
Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is
Sample Size Determination for Estimation of Sensor Detection Probabilities Based on a Test Variable
National Research Council Canada - National Science Library
Oymak, Okan
2007-01-01
.... Army Yuma Proving Ground. Specifically, we evaluate the coverage probabilities and lengths of widely used confidence intervals for a binomial proportion and report the required sample sizes for some specified goals...
Sequential multi-channel OCT in the retina using high-speed fiber optic switches
Wartak, Andreas; Augustin, Marco; Beer, Florian; Haindl, Richard; Baumann, Bernhard; Pircher, Michael; Hitzenberger, Christoph K.
2017-07-01
A sequential multi-channel OCT prototype featuring high-speed fiber optical switches to enable inter A-scan (A-scan rate: 100 kHz) sample arm switching was developed and human retinal image data is presented.
Immediate Sequential Bilateral Cataract Surgery
DEFF Research Database (Denmark)
Kessel, Line; Andresen, Jens; Erngaard, Ditte
2015-01-01
The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Chen, Johnny; Zheng, Gang; Price, William S
2017-02-01
A new 8-pulse phase-modulated binomial-like selective inversion pulse sequence, dubbed '8PM', was developed by optimizing the nutation and phase angles of the constituent radio-frequency pulses so that the inversion profile resembled a target profile. Suppression profiles were obtained for both the 8PM and W5 based excitation sculpting sequences with equal inter-pulse delays. Significant distortions were observed in both profiles because of the offset effect of the radio frequency pulses. These distortions were successfully reduced by adjusting the inter-pulse delays. With adjusted inter-pulse delays, the 8PM and W5 based excitation sculpting sequences were tested on an aqueous lysozyme solution. The 8PM based sequence provided higher suppression selectivity than the W5 based sequence. Two-dimensional nuclear Overhauser effect spectroscopy experiments were also performed on the lysozyme sample with 8PM and W5 based water signal suppression. The 8PM based suppression provided a spectrum with significantly increased (~ doubled) cross-peak intensity around the suppressed water resonance compared to the W5 based suppression. Copyright © 2016 John Wiley & Sons, Ltd.
Sequential sampling, magnitude estimation, and the wisdom of crowds
DEFF Research Database (Denmark)
Nash, Ulrik W.
2017-01-01
Sir Francis Galton (Galton, 1907) conjectured the psychological process of magnitude estimation caused the curious distribution of judgments he observed at Plymouth in 1906. However, after he published Vox Populi, researchers narrowed their attention to the first moment of judgment distributions...... in the wisdom of crowds indicated by judgment distribution skewness. The present study reports findings from an experiment on magnitude estimation and supports these predictions. The study moreover demonstrates that systematic errors by groups of people can be corrected using information about the judgment...... distribution these people together form, before errors might cause damage to decision making. In concluding, we revisit Galton's data from the West of England Fat Stock and Poultry Exhibition in light of what we have discovered....
Metaprop: a Stata command to perform meta-analysis of binomial data.
Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc
2014-01-01
Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins where the normal approximation procedures often break down, by use of the binomial distribution to model the within-study variability or by allowing Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied on two published meta-analyses: 1) prevalence of HPV-infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV-prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study specific and pooled confidence intervals always were within admissible values, contrary to the original publication, where metan was used.
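The variance-stabilizing idea behind metaprop can be illustrated with the simple arcsine transform (metaprop's Freeman-Tukey double-arcsine variant and its back-transform are slightly more involved). The studies below are invented; note that the 0-event study poses no problem and need not be excluded:

```python
import math

def pool_arcsine(events, sizes):
    """Fixed-effect pooled proportion via the arcsine variance-stabilizing
    transform (simple arcsine; a sketch of the idea, not metaprop itself)."""
    ts = [math.asin(math.sqrt(x / n)) for x, n in zip(events, sizes)]
    ws = [4 * n for n in sizes]                       # inverse of var ~ 1/(4n)
    t_bar = sum(w * t for w, t in zip(ws, ts)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    back = lambda t: math.sin(max(0.0, min(math.pi / 2, t))) ** 2
    return (back(t_bar),
            back(t_bar - 1.96 * se),   # 95% CI, always inside [0, 1]
            back(t_bar + 1.96 * se))

# hypothetical prevalence studies, including one with zero events
p, lo, hi = pool_arcsine([0, 5, 12, 40], [25, 60, 150, 400])
print(round(p, 3), round(lo, 3), round(hi, 3))
```

Clamping before back-transforming guarantees admissible confidence limits, the property the abstract highlights over normal-approximation pooling.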
Binomial tree method for pricing a regime-switching volatility stock loans
Putri, Endah R. M.; Zamani, Muhammad S.; Utomo, Daryono B.
2018-03-01
A binomial model with regime switching may represent the price of a stock loan, which follows a stochastic process. A stock loan is an alternative that appeals to investors who want liquidity without selling their stock. The stock loan mechanism resembles that of an American call option, in that the borrower can exercise at any time during the contract period. Given this resemblance, the price of a stock loan can be derived from the model of an American call option. The simulation results show the behavior of the stock loan price under regime switching with respect to various interest rates and maturities.
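For readers unfamiliar with the mechanism, the American-call backbone that the stock-loan valuation builds on can be sketched as a single-regime CRR tree with an early-exercise check at every node (illustrative Python, not the authors' regime-switching code; parameter values below are assumptions):

```python
import math

def american_call_binomial(S0, K, r, sigma, T, steps):
    """Price an American-style call on a CRR binomial tree by backward
    induction, checking early exercise at every node.  Single-regime
    sketch: the paper's model additionally lets volatility switch
    between regimes."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)        # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # option values at maturity
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # roll back through the tree, comparing continuation with early exercise
    for i in range(steps - 1, -1, -1):
        values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                      S0 * u**j * d**(i - j) - K)
                  for j in range(i + 1)]
    return values[0]
```

The early-exercise comparison at every node is what distinguishes the American-style mechanism the abstract appeals to from a European valuation.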
When the homoscedasticity assumption fails for variables with a binomial distribution
Edison Ramiro Vásquez; Alberto Caballero Núñez
2011-01-01
Monte Carlo simulation was used to generate populations of random variables with a binomial distribution and homogeneous or heterogeneous variances, for five, 10 and 30 observations per experimental unit (n) and event success probabilities of 0.10, 0.20, ..., 0.90 (p). Experiments were arranged in a randomized block design with three, five and nine treatments (t) and four and eight replicates (r); for each t-r-n combination, 100 experiments were generated. In order to provide a refer...
Constructing Binomial Trees Via Random Maps for Analysis of Financial Assets
Directory of Open Access Journals (Sweden)
Antonio Airton Carneiro de Freitas
2010-04-01
Full Text Available Random maps can be constructed from a priori knowledge of the financial assets. The reverse problem is also addressed: from an empirical stationary probability density function we set up a random map that naturally leads to an implied binomial tree, allowing the adjustment of models, including the ability to incorporate jumps. An application related to the options market is presented. It is emphasized that the quality with which the model incorporates a priori knowledge of the financial asset may be affected, for example, by the skewed vision of the analyst.
Directory of Open Access Journals (Sweden)
M. Subbiah
2017-01-01
Full Text Available Extensive statistical practice has shown the importance and relevance of the inferential problem of estimating probability parameters in a binomial experiment, especially regarding the competing intervals from frequentist, Bayesian, and bootstrap approaches. The package, written in the free R environment and presented in this paper, addresses the issues just highlighted by pooling a number of widely available and well-performing methods and adding essential variations to them. A wide range of functions helps users with differing skills to estimate, evaluate, and summarize, numerically and graphically, various measures adopting either the frequentist or the Bayesian paradigm.
Trial Sequential Methods for Meta-Analysis
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
Sequential association rules in atonal music
Honingh, A.; Weyde, T.; Conklin, D.; Chew, E.; Childs, A.; Chuan, C.-H.
2009-01-01
This paper describes a preliminary study on the structure of atonal music. In the same way as sequential association rules of chords can be found in tonal music, sequential association rules of pitch class set categories can be found in atonal music. It has been noted before that certain pitch class
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.
2014-12-15
This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
Some sequential, distribution-free pattern classification procedures with applications
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with that of a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step to the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
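As background to the comparison above, the O'Brien-Fleming-type spending function itself is easy to evaluate; the sketch below (plain Python using the standard spending formula, not the authors' Bayesian algorithm) shows how little of the overall alpha it spends at early information fractions:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_quantile(p, iters=100):
    """Invert normal_cdf by bisection -- crude but adequate for a sketch."""
    lo, hi = -10.0, 10.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def obrien_fleming_spending(t, alpha=0.05):
    """Cumulative type I error spent by information fraction t (0 < t <= 1)
    under the O'Brien-Fleming-type spending function
    alpha*(t) = 2 - 2*Phi(z_{1-alpha/2} / sqrt(t))."""
    z = normal_quantile(1.0 - alpha / 2.0)
    return 2.0 - 2.0 * normal_cdf(z / math.sqrt(t))
```

At a quarter of the planned information, well under 0.1% of error probability has been spent, which is the conservatism at early interim analyses that the simulation results point to.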
Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo
2013-01-01
To compare the performance of newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR), in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open angle glaucoma patients (6.0 ± 1.5 [mean ± standard deviation] years). Using the total deviation of each point on the 2nd to 16th VFs (VF2-16), linear regression analysis was carried out. The numbers of VF test points with a significant trend at various probability levels were assessed with the one-sided binomial test, and a VF series was defined as "significant" if the median p-value from the binomial test fell below the significance threshold. The performance of the binomial PLR method (0.14 to 0.86) was significantly higher than that of MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than that of the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial PLR method gives more consistent results than MD trend analysis and PoPLR, hence it will be helpful as a tool to 'flag' possible VF deterioration.
Marginal likelihood estimation of negative binomial parameters with applications to RNA-seq data.
León-Novelo, Luis; Fuentes, Claudio; Emerson, Sarah
2017-10-01
RNA-Seq data characteristically exhibits large variances, which need to be appropriately accounted for in any proposed model. We first explore the effects of this variability on the maximum likelihood estimator (MLE) of the dispersion parameter of the negative binomial distribution, and propose instead to use an estimator obtained via maximization of the marginal likelihood in a conjugate Bayesian framework. We show, via simulation studies, that the marginal MLE can better control this variation and produce a more stable and reliable estimator. We then formulate a conjugate Bayesian hierarchical model, and use this new estimator to propose a Bayesian hypothesis test to detect differentially expressed genes in RNA-Seq data. We use numerical studies to show that our much simpler approach is competitive with other negative binomial based procedures, and we use a real data set to illustrate the implementation and flexibility of the procedure.
Bayesian analysis of overdispersed chromosome aberration data with the negative binomial model
International Nuclear Information System (INIS)
Brame, R.S.; Groer, P.G.
2002-01-01
The usual assumption of a Poisson model for the number of chromosome aberrations in controlled calibration experiments implies variance equal to the mean. However, it is known that chromosome aberration data from experiments involving high linear energy transfer radiations can be overdispersed, i.e. the variance is greater than the mean. Present methods for dealing with overdispersed chromosome data rely on frequentist statistical techniques. In this paper, the problem of overdispersion is considered from a Bayesian standpoint. The Bayes Factor is used to compare Poisson and negative binomial models for two previously published calibration data sets describing the induction of dicentric chromosome aberrations by high doses of neutrons. Posterior densities for the model parameters, which characterise dose response and overdispersion, are calculated and graphed. Calibrative densities are derived for unknown neutron doses from hypothetical radiation accident data to determine the impact of different model assumptions on dose estimates. The main conclusion is that an initial assumption of a negative binomial model is the conservative approach to chromosome dosimetry for high LET radiations. (author)
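As background to this model comparison, overdispersion can be screened for before any Bayesian machinery is applied. The sketch below is a generic Python illustration, not the paper's Bayes factor computation, and the mean/k parameterisation of the negative binomial is an assumed convention: it contrasts the Poisson's variance-equals-mean constraint with the extra negative binomial parameter.

```python
import math

def dispersion_index(counts):
    """Sample variance-to-mean ratio: about 1 under a Poisson model,
    clearly above 1 for overdispersed counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

def nb_log_pmf(x, mean, k):
    """Negative binomial log-probability parameterised by its mean and an
    overdispersion parameter k, so that variance = mean + mean**2 / k."""
    return (math.lgamma(x + k) - math.lgamma(k) - math.lgamma(x + 1)
            + k * math.log(k / (k + mean))
            + x * math.log(mean / (k + mean)))
```

As k grows the extra variance term mean**2/k vanishes and the negative binomial collapses back to the Poisson, which is why it is the natural alternative model here.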
Estimation of component failure probability from masked binomial system testing data
International Nuclear Information System (INIS)
Tan Zhibin
2005-01-01
The component failure probability estimates from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in the field, which is similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods that consider this type of uncertainty are usually computationally intensive and not practical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. Component coverage of test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user-defined structures. A case study illustrates an application in test case prioritization.
Ma, Zhuanglin; Zhang, Honglu; Chien, Steven I-Jy; Wang, Jin; Dong, Chunjiao
2017-01-01
To investigate the relationship between crash frequency and potential influence factors, the accident data for events occurring on a 50 km long expressway in China, including 567 crash records (2006-2008), were collected and analyzed. Both the fixed-length and the homogeneous longitudinal grade methods were applied to divide the study expressway section into segments. A negative binomial (NB) model and a random effect negative binomial (RENB) model were developed to predict crash frequency. The parameters of both models were determined using the maximum likelihood (ML) method, and the mixed stepwise procedure was applied to examine the significance of explanatory variables. Three explanatory variables, including longitudinal grade, road width, and ratio of longitudinal grade and curve radius (RGR), were found to significantly affect crash frequency. The marginal effects of significant explanatory variables on the crash frequency were analyzed. The model performance was determined by the relative prediction error and the cumulative standardized residual. The results show that the RENB model outperforms the NB model. It was also found that the model performance with the fixed-length segment method is superior to that with the homogeneous longitudinal grade segment method.
Simulation and sequential dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Mortveit, H.S.; Reidys, C.M.
1999-06-01
Computer simulations have a generic structure. Motivated by this, the authors present a new class of discrete dynamical systems that captures this structure in a mathematically precise way. This class of systems consists of (1) a loopfree graph Υ with vertex set {1,2,…,n} where each vertex has a binary state, (2) a vertex-labeled set of functions (F_{i,Υ}: F_2^n → F_2^n)_i and (3) a permutation π ∈ S_n. The function F_{i,Υ} updates the state of vertex i as a function of the states of vertex i and its Υ-neighbors and leaves the states of all other vertices fixed. The permutation π represents the update ordering, i.e., the order in which the functions F_{i,Υ} are applied. By composing the functions F_{i,Υ} in the order given by π one obtains the dynamical system (equation given in paper), which the authors refer to as a sequential dynamical system, or SDS for short. The authors present bounds for the number of functionally different systems and for the number of nonisomorphic digraphs Γ[F_Υ, π] that can be obtained by varying the update order, and applications of these to specific graphs and graph classes.
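The sequential update described above can be made concrete in a few lines; the graph, local function (NOR), and update orders below are illustrative choices, not taken from the paper:

```python
def sds_step(states, neighbors, local_fn, order):
    """One pass of a sequential dynamical system: vertex functions are
    applied in the order given by the permutation `order`, so later
    updates already see the results of earlier ones (unlike a parallel
    cellular automaton, where all vertices update simultaneously)."""
    s = list(states)
    for v in order:
        s[v] = local_fn(s[v], [s[u] for u in neighbors[v]])
    return s

# Circle graph on 4 vertices with NOR as the common local function --
# a small standard example; the paper allows arbitrary loopfree graphs.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
nor = lambda state, nbr_states: int(not (state or any(nbr_states)))

forward = sds_step([0, 0, 0, 0], neighbors, nor, order=[0, 1, 2, 3])   # [1, 0, 1, 0]
backward = sds_step([0, 0, 0, 0], neighbors, nor, order=[3, 2, 1, 0])  # [0, 1, 0, 1]
```

Running the two orders from the all-zero state yields different successor states, which is exactly the dependence on the update ordering π that the authors' bounds quantify.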
Sequential provisional implant prosthodontics therapy.
Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J
2012-01-01
The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.
Trial Sequential Analysis in systematic reviews with meta-analysis
Directory of Open Access Journals (Sweden)
Jørn Wetterslev
2017-03-01
Full Text Available Abstract Background Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, as well as the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
Sequential boundaries approach in clinical trials with unequal allocation ratios
Directory of Open Access Journals (Sweden)
Ayatollahi Seyyed
2006-01-01
Full Text Available Abstract Background In clinical trials, both unequal randomization designs and sequential analyses have ethical and economic advantages. In the single-stage design (SSD), however, if the sample size is not adjusted based on unequal randomization, the power of the trial will decrease, whereas with sequential analysis the power always remains constant. Our aim was to compare the sequential boundaries approach with the SSD when the allocation ratio (R) was not equal. Methods We evaluated the influence of R, the ratio of the patients in the experimental group to the standard group, on the statistical properties of two-sided tests, including the two-sided single triangular test (TT), double triangular test (DTT) and SSD, by multiple simulations. The average sample numbers (ASNs) and power (1-β) were evaluated for all tests. Results Our simulation study showed that choosing R = 2 instead of R = 1 increases the sample size of the SSD by 12% and the ASN of the TT and DTT by the same proportion. Moreover, when R = 2, compared to the adjusted SSD, using the TT or DTT allows one to retrieve the well-known reductions in ASN observed when R = 1 compared to the SSD. In addition, when R = 2, using the TT and DTT yields smaller reductions in ASN than when R = 1, but maintains the power of the test at its planned value. Conclusion This study indicates that when the allocation ratio is not equal among the treatment groups, sequential analysis could indeed serve as a compromise between ethicists, economists and statisticians.
Is ‘hit and run’ a single word? The processing of irreversible binomials in neglect dyslexia
Directory of Open Access Journals (Sweden)
Giorgio Arcara
2012-02-01
Full Text Available The present study is the first neuropsychological investigation into the problem of the mental representation and processing of irreversible binomials, i.e. word pairs linked by a conjunction (e.g. ‘hit and run’, ‘dead or alive’). In order to test their lexical status, the phenomenon of neglect dyslexia is explored. People with left-sided neglect dyslexia show a clear lexical effect: they can read irreversible binomials better (i.e., by dropping the leftmost words less frequently) when their components are presented in their correct order. This may be taken as an indication that they treat these constructions as lexical, not decomposable, elements. This finding therefore constitutes strong evidence that irreversible binomials tend to be stored in the mental lexicon as a whole and that this whole form is preferably addressed in the retrieval process.
The effect of the negative binomial distribution on the line-width of the micromaser cavity field
International Nuclear Information System (INIS)
Kremid, A. M.
2009-01-01
The influence of the negative binomial distribution (NBD) on the line-width of the micromaser is considered. The threshold of the micromaser is shifted towards higher values of the pumping parameter q. Moreover, the line-width exhibits sharp dips ('resonances') when the cavity temperature is reduced to a very low value. These dips are very clear evidence for the occurrence of the so-called trapping-states regime in the micromaser. The NBD statistics prevents the appearance of these trapping states: as the negative binomial parameter increases, the dips wash out and the line-width broadens. For small values of the negative binomial parameter, the line-width at large values of the pumping parameter oscillates randomly around its transition line; as the negative binomial parameter becomes large, this oscillatory behaviour occurs only rarely. (author)
Multiple imputation with sequential penalized regression.
Zahid, Faisal M; Heumann, Christian
2018-01-01
Missing data is a common issue that can cause problems in estimation and inference in biomedical, epidemiological and social research. Multiple imputation is an increasingly popular approach for handling missing data. In case of a large number of covariates with missing data, existing multiple imputation software packages may not work properly and often produce errors. We propose a multiple imputation algorithm called mispr based on sequential penalized regression models. Each variable with missing values is assumed to have a different distributional form and is imputed with its own imputation model using the ridge penalty. In the case of a large number of predictors with respect to the sample size, the use of a quadratic penalty guarantees unique estimates for the parameters and leads to better predictions than the usual Maximum Likelihood Estimation (MLE), with a good compromise between bias and variance. As a result, the proposed algorithm performs well and provides imputed values that are better even for a large number of covariates with small samples. The results are compared with the existing software packages mice, VIM and Amelia in simulation studies. The missing at random mechanism was the main assumption in the simulation study. The imputation performance of the proposed algorithm is evaluated with mean squared imputation error and mean absolute imputation error. The mean squared error ([Formula: see text]), parameter estimates with their standard errors and confidence intervals are also computed to compare the performance in the regression context. The proposed algorithm is observed to be a good competitor to the existing algorithms, with smaller mean squared imputation error, mean absolute imputation error and mean squared error. The algorithm's performance becomes considerably better than that of the existing algorithms with increasing number of covariates, especially when the number of predictors is close to or even greater than the sample size. Two
Sequential acquisition of mutations in myelodysplastic syndromes.
Makishima, Hideki
2017-01-01
Recent progress in next-generation sequencing technologies allows us to discover frequent mutations throughout the coding regions of myelodysplastic syndromes (MDS), potentially providing us with virtually a complete spectrum of driver mutations in this disease. As shown by many study groups these days, such driver mutations are acquired in a gene-specific fashion. For instance, DDX41 mutations are observed in germline cells long before MDS presentation. In blood samples from healthy elderly individuals, somatic DNMT3A and TET2 mutations are detected as age-related clonal hematopoiesis and are believed to be a risk factor for hematological neoplasms. In MDS, mutations of genes such as NRAS and FLT3, designated as Type-1 genes, may be significantly associated with leukemic evolution. Another type (Type-2) of genes, including RUNX1 and GATA2, are related to progression from low-risk to high-risk MDS. Overall, various driver mutations are sequentially acquired in MDS, at a specific time, in either germline cells, normal hematopoietic cells, or clonal MDS cells.
Sequential function approximation on arbitrarily distributed point sets
Wu, Kailiang; Xiu, Dongbin
2018-02-01
We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distributions of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm delivers satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
Stationary Anonymous Sequential Games with Undiscounted Rewards.
Więcek, Piotr; Altman, Eitan
Stationary anonymous sequential games with undiscounted rewards are a special class of games that combine features of population games (infinitely many players) and stochastic games. We extend the theory for these games to the cases of total expected reward as well as expected average reward. We show that in the anonymous sequential game, equilibria correspond to the limits of those of related finite-population games as the number of players grows to infinity. We provide examples to illustrate our results.
Assessing the Option to Abandon an Investment Project by the Binomial Options Pricing Model
Directory of Open Access Journals (Sweden)
Salvador Cruz Rambaud
2016-01-01
Full Text Available Usually, traditional methods for investment project appraisal, such as the net present value (hereinafter NPV), do not incorporate in their values the operational flexibility offered by a real option included in the project. In this paper, real options, and more specifically the option to abandon, are analysed as a complement to the cash flow sequence which quantifies the project. In this way, by considering the existing analogy with financial options, a mathematical expression is derived by using the binomial options pricing model. This methodology provides the value of the option to abandon the project within one, two, and in general n periods. Therefore, this paper aims to be a useful tool in determining the value of the option to abandon according to its residual value, thus making the control of the uncertainty element within the project easier.
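A minimal numerical sketch of the idea (hypothetical project parameters; the paper derives a closed-form expression rather than this direct backward induction):

```python
import math

def project_value(V0, salvage, r, u, d, periods):
    """Backward induction on a binomial tree of project values: at every
    node the owner either keeps the project or abandons it for its
    residual (salvage) value, as with an American put on the project.
    Passing salvage=0 recovers the project value without the option."""
    q = (math.exp(r) - d) / (u - d)        # risk-neutral probability per period
    disc = math.exp(-r)
    values = [max(V0 * u**j * d**(periods - j), salvage) for j in range(periods + 1)]
    for i in range(periods - 1, -1, -1):
        values = [max(disc * (q * values[j + 1] + (1 - q) * values[j]), salvage)
                  for j in range(i + 1)]
    return values[0]

# value of the option to abandon = project with the option - project without it
abandon_option = (project_value(100, 90, 0.05, 1.2, 0.8, 3)
                  - project_value(100, 0, 0.05, 1.2, 0.8, 3))
```

The difference between the two valuations isolates the worth of the abandonment flexibility that a plain NPV calculation ignores.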
Is "hit and run" a single word? The processing of irreversible binomials in neglect dyslexia.
Arcara, Giorgio; Lacaita, Graziano; Mattaloni, Elisa; Passarini, Laura; Mondini, Sara; Benincà, Paola; Semenza, Carlo
2012-01-01
The present study is the first neuropsychological investigation into the problem of the mental representation and processing of irreversible binomials (IBs), i.e., word pairs linked by a conjunction (e.g., "hit and run," "dead or alive"). In order to test their lexical status, the phenomenon of neglect dyslexia is explored. People with left-sided neglect dyslexia show a clear lexical effect: they can read IBs better (i.e., by dropping the leftmost words less frequently) when their components are presented in their correct order. This may be taken as an indication that they treat these constructions as lexical, not decomposable, elements. This finding therefore constitutes strong evidence that IBs tend to be stored in the mental lexicon as a whole and that this whole form is preferably addressed in the retrieval process.
Binomial moments of the distance distribution and the probability of undetected error
Energy Technology Data Exchange (ETDEWEB)
Barg, A. [Lucent Technologies, Murray Hill, NJ (United States). Bell Labs.; Ashikhmin, A. [Los Alamos National Lab., NM (United States)
1998-09-01
In [1] K.A.S. Abdel-Ghaffar derives a lower bound on the probability of undetected error for unrestricted codes. The proof relies implicitly on the binomial moments of the distance distribution of the code. The authors use the fact that these moments count the size of subcodes of the code to give a very simple proof of the bound in [1] by showing that it is essentially equivalent to the Singleton bound. They discuss some combinatorial connections revealed by this proof. They also discuss some improvements of this bound. Finally, they analyze asymptotics. They show that an upper bound on the undetected error exponent that corresponds to the bound of [1] improves known bounds on this function.
The multi-class binomial failure rate model for the treatment of common-cause failures
International Nuclear Information System (INIS)
Hauptmanns, U.
1995-01-01
The impact of common cause failures (CCF) on PSA results for NPPs is in sharp contrast with the limited quality which can be achieved in their assessment. This is due to the dearth of observations and cannot be remedied in the short run. Therefore the methods employed for calculating failure rates should be devised so as to make the best use of the few available observations on CCF. The Multi-Class Binomial Failure Rate (MCBFR) model achieves this by assigning observed failures to different classes according to their technical characteristics and applying the BFR formalism to each of these. The results are hence determined by a superposition of BFR-type expressions for each class, each with its own coupling factor. The model thus obtained flexibly reproduces the dependence of CCF rates on failure multiplicity suggested by the observed failure multiplicities. This is demonstrated by evaluating CCFs observed for combined impulse pilot valves in German NPPs. (orig.)
Xie, Wen-Jie; Han, Rui-Qi; Jiang, Zhi-Qiang; Wei, Lijian; Zhou, Wei-Xing
2017-08-01
Complex networks are not only a powerful tool for the analysis of complex systems, but also a promising way to analyze time series. The horizontal visibility graph (HVG) algorithm maps time series into graphs, whose degree distributions are numerically and analytically investigated for certain time series. We derive the degree distributions of HVGs through an iterative construction process. The degree distributions of the HVG and the directed HVG for random series are derived to be exponential, which confirms the analytical results from other methods. We also obtain analytical expressions for the degree distributions of HVGs, and the in-degree and out-degree distributions of directed HVGs, transformed from multifractal binomial measures, which agree excellently with numerical simulations.
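The HVG construction itself is simple: each observation becomes a node, and two nodes are linked when every intermediate value lies strictly below both endpoints. A brute-force sketch (quadratic in the series length, fine for short series; for i.i.d. random series the resulting degree distribution is the exponential law P(k) = (1/3)(2/3)^(k-2), k >= 2, of Luque et al.):

```python
# Horizontal visibility graph: nodes i < j are connected iff every value
# strictly between them is smaller than min(x[i], x[j]).
def hvg_edges(x):
    edges = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# Tiny example: in [3, 1, 2] the outer pair (0, 2) "sees" over the dip at 1.
example = hvg_edges([3, 1, 2])
```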
Directory of Open Access Journals (Sweden)
Xiong Wang
2013-01-01
Full Text Available Based on characteristics of non-life joint-stock insurance companies, this paper presents a compound binomial risk model that randomizes the premium income per unit time and sets a threshold for paying dividends to shareholders. In this model, the insurance company obtains an insurance policy in unit time with a given probability and pays dividends to shareholders with a given probability when the surplus is no less than the threshold. We derive recursive formulas for the expected discounted penalty function and an asymptotic estimate for it, as well as recursive formulas and asymptotic estimates for the ruin probability and the distribution function of the deficit at ruin. Numerical examples illustrate the accuracy of the asymptotic estimates.
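A Monte Carlo sketch of this kind of compound binomial surplus process may help fix ideas. All parameters below (premium probability, claim probability, fixed claim size, horizon) are illustrative; the paper's recursions are exact, whereas this only estimates a finite-horizon ruin frequency:

```python
import random

# Compound-binomial-style surplus: each period a premium of 1 arrives with
# probability p and a claim of size 2 occurs with probability q; ruin is
# the surplus dropping below zero within the horizon.
def ruin_probability(u0, p, q, horizon, trials, seed=1):
    rng = random.Random(seed)
    ruins = 0
    for _ in range(trials):
        u = u0
        for _ in range(horizon):
            u += (rng.random() < p) - 2 * (rng.random() < q)
            if u < 0:
                ruins += 1
                break
    return ruins / trials
```

With a positive safety loading (here drift p - 2q = +0.3 per period), the ruin frequency drops sharply as the initial surplus grows, the qualitative behavior the asymptotic estimates quantify.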
Connecting Binomial and Black-Scholes Option Pricing Models: A Spreadsheet-Based Illustration
Directory of Open Access Journals (Sweden)
Yi Feng
2012-07-01
Full Text Available The Black-Scholes option pricing model is part of the modern financial curriculum, even at the introductory level. However, as the derivation of the model, which requires advanced mathematical tools, is well beyond the scope of standard finance textbooks, the model has remained a great, but mysterious, recipe for many students. This paper illustrates, from a pedagogic perspective, how a simple binomial model, which converges to the Black-Scholes formula, can capture the economic insight of the original derivation. Microsoft Excel™ plays an important pedagogic role in connecting the two models. The interactivity provided by scroll bars, in conjunction with Excel's graphical features, allows students to visualize the impact of individual input parameters on option pricing.
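The convergence the paper visualizes in Excel can be reproduced in a few lines with a standard Cox-Ross-Rubinstein tree (the parameter values below are illustrative, not the paper's spreadsheet inputs):

```python
from math import exp, log, sqrt, erf

def black_scholes_call(S, K, r, sigma, T):
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal CDF
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def crr_binomial_call(S, K, r, sigma, T, n):
    dt = T / n
    u, d = exp(sigma * sqrt(dt)), exp(-sigma * sqrt(dt))
    p = (exp(r * dt) - d) / (u - d)               # risk-neutral up probability
    disc = exp(-r * dt)
    # terminal payoffs, then backward induction through the tree
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

As the number of steps n grows, the binomial price oscillates toward the closed-form Black-Scholes value, which is the pedagogic point of the spreadsheet.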
Organ procurement and the body donor-family binomial: instruments to subsidize nursing approach
Directory of Open Access Journals (Sweden)
Gisele da Cruz Ferreira
2013-05-01
Full Text Available We aimed to describe the design of instruments to subsidize the care for the body donor-family binomial from the perspective of the organ procurement process. The Activities of Living Model grounded the instruments for data collection. We identified 33 possible diagnoses, 14 associated with body preservation and 19 with the responses of family members coping with grief and with the decision on whether to authorize the donation. We selected 31 interventions to preserve the body for organ/tissue procurement, and 25 to meet the needs for information, coping and support for the family's decision. The nursing diagnoses, interventions, and outcomes were registered according to the North American Nursing Diagnosis Association, Nursing Intervention Classification, and Nursing Outcome Classification, respectively. The instruments follow the legislation of the Board of Nursing and on organ donation and procurement, and still need to be validated by field experts.
A comparison of LMC and SDL complexity measures on binomial distributions
Piqueira, José Roberto C.
2016-02-01
The concept of complexity has been widely discussed in the last forty years, with contributions from all areas of human knowledge, including philosophy, linguistics, history, biology, physics and chemistry, and with mathematicians trying to give it a rigorous formulation. In this sense, thermodynamics meets information theory and, by using the entropy definition, López-Ruiz, Mancini and Calbet proposed a definition of complexity that is referred to as the LMC measure. Shiner, Davison and Landsberg, by slightly changing the LMC definition, proposed the SDL measure, and both LMC and SDL are satisfactory measures of complexity for many problems. Here, the SDL and LMC measures are applied to the case of a binomial probability distribution, to clarify how the length of the data set affects complexity and how the success probability of the repeated trials determines how complex the whole set is.
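A minimal numerical sketch of the two measures applied to a binomial distribution, using the normalized Shannon entropy H and the disequilibrium D = sum (p_i - 1/N)^2. The exact normalizations vary across the literature, so treat these as one common convention rather than the paper's definitive forms:

```python
from math import comb, log

def binomial_pmf(n, p):
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def _entropy(probs):
    N = len(probs)
    return -sum(q * log(q) for q in probs if q > 0) / log(N)  # normalized H

def lmc_complexity(probs):
    N = len(probs)
    D = sum((q - 1.0 / N) ** 2 for q in probs)   # distance from equiprobability
    return _entropy(probs) * D                    # LMC: H x D

def sdl_complexity(probs):
    H = _entropy(probs)
    return H * (1 - H)                            # SDL: "order x disorder"
```

Both measures vanish for the two extremes (a uniform distribution and a degenerate one) and are positive in between, e.g. for Binomial(10, 0.5).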
Gomes, Marcos José Timbó Lima; Cunto, Flávio; da Silva, Alan Ricardo
2017-09-01
Generalized Linear Models (GLM) with a negative binomial distribution for errors have been widely used to estimate safety at the transportation planning level. The limited ability of this technique to take spatial effects into account can be overcome through local models from spatial regression techniques, such as Geographically Weighted Poisson Regression (GWPR). Although GWPR deals with spatial dependency and heterogeneity and has already been used in some road safety studies at the planning level, it fails to account for the overdispersion commonly found in observations of road-traffic crashes. Two approaches were adopted for the Geographically Weighted Negative Binomial Regression (GWNBR) model to allow discrete data to be modeled in a non-stationary form and to account for the overdispersion of the data: the first assumes a constant overdispersion for all traffic zones and the second includes a variable overdispersion for each spatial unit. This research conducts a comparative analysis between non-spatial global crash prediction models and spatial local GWPR and GWNBR models at the traffic-zone level in Fortaleza, Brazil. A geographic database of 126 traffic zones was compiled from the available data on exposure, network characteristics, socioeconomic factors and land use. The models were calibrated using the frequency of injury crashes as the dependent variable, and the results showed that GWPR and GWNBR achieved a better performance than GLM for the average residuals and likelihood, as well as reducing the spatial autocorrelation of the residuals; the GWNBR model was better able to capture the spatial heterogeneity of the crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
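The overdispersion that motivates swapping the Poisson for a negative binomial error model can be illustrated by simulation: a gamma-mixed Poisson count (which is exactly a negative binomial) has variance mu + mu^2/r, exceeding the Poisson's variance mu. All parameter values here are illustrative, not crash-data estimates:

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's multiplication method (fine for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def nb_draw(mu, r, rng):
    lam = rng.gammavariate(r, mu / r)   # gamma mixing with mean mu
    return poisson_draw(lam, rng)

rng = random.Random(7)
sample = [nb_draw(5.0, 2.0, rng) for _ in range(20000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
# theoretical variance: 5 + 25/2 = 17.5, far above the mean of 5
```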
Branching ratios in sequential statistical multifragmentation
International Nuclear Information System (INIS)
Moretto, L.G.; Phair, L.; Tso, K.; Jing, K.; Wozniak, G.J.
1995-01-01
The energy dependence of the probability of producing n fragments follows a characteristic statistical law. Experimental intermediate-mass-fragment multiplicity distributions are shown to be binomial at all excitation energies. From these distributions a single binary event probability can be extracted that has the thermal dependence p=exp[-B/T]. Thus, it is inferred that multifragmentation is a sequence of thermal binary events. The increase of p with excitation energy implies a corresponding contraction of the time-scale and explains recently observed fragment-fragment and fragment-spectator Coulomb correlations. (authors). 22 refs., 5 figs
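The binomial decomposition described above is easy to write down: the probability of emitting n fragments in m independent binary tries, with the elementary probability taking the thermal form p = exp(-B/T). The values of B, T and m below are illustrative, chosen only to show the shape of the distribution:

```python
from math import comb, exp

# P_n(m) = C(m, n) p^n (1-p)^(m-n), with p = exp(-B/T)
def fragment_probability(n, m, B, T):
    p = exp(-B / T)
    return comb(m, n) * p**n * (1 - p)**(m - n)

m, B, T = 10, 5.0, 4.0
probs = [fragment_probability(n, m, B, T) for n in range(m + 1)]
mean_n = sum(n * q for n, q in enumerate(probs))   # equals m * exp(-B/T)
```

Raising T raises p and hence the mean multiplicity, which is the inferred "contraction of the time-scale" at high excitation energy.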
Continuous versus group sequential analysis for post-market drug and vaccine safety surveillance.
Silva, I R; Kulldorff, M
2015-09-01
The use of sequential statistical analysis for post-market drug safety surveillance is quickly emerging. Both continuous and group sequential analysis have been used, but consensus is lacking as to when to use which approach. We compare the statistical performance of continuous and group sequential analysis in terms of type I error probability; statistical power; expected time to signal when the null hypothesis is rejected; and the sample size required to end surveillance without rejecting the null. We present a mathematical proposition to show that for any group sequential design there always exists a continuous sequential design that is uniformly better. As a consequence, it is shown that more frequent testing is always better. Additionally, for a Poisson based probability model and a flat rejection boundary in terms of the log likelihood ratio, we compare the performance of various continuous and group sequential designs. Using exact calculations, we found that, for the parameter settings used, there is always a continuous design with shorter expected time to signal than the best group design. The two key conclusions from this article are (i) that any post-market safety surveillance system should attempt to obtain data as frequently as possible, and (ii) that sequential testing should always be performed when new data arrives without deliberately waiting for additional data. © 2015, The International Biometric Society.
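The key conclusion, that a monitor which looks only every g-th observation can never signal earlier than one that looks continuously, can be illustrated with a toy flat-boundary monitor. This is a deliberately simplified sketch, not the paper's Poisson MaxSPRT:

```python
# Cumulative log-likelihood ratio monitored against a flat boundary.
# group=1 is "continuous" monitoring; group=g only tests every g-th
# observation, so a boundary crossing is detected at the next look, if ever.
def time_to_signal(llr_steps, boundary, group=1):
    total = 0.0
    for i, step in enumerate(llr_steps, start=1):
        total += step
        if i % group == 0 and total >= boundary:
            return i          # first look at which the boundary is crossed
    return None               # surveillance ends without a signal
```

With a deterministic drift of +1 per observation and a boundary of 3, continuous monitoring signals at observation 3 while a group size of 5 delays the signal to observation 5.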
Bennett, Bradley C; Husby, Chad E
2008-03-28
Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R² from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
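The per-family binomial screen works as follows: under the null model, a family with n species yields medicinal species as Binomial(n, pi), where pi is the flora-wide medicinal fraction, so a small upper-tail probability flags an over-represented family. The counts below are hypothetical, for illustration only, not the Shuar data:

```python
from math import comb

# Upper-tail probability P(X >= k) for X ~ Binomial(n, pi)
def binom_tail(n, k, pi):
    return sum(comb(n, j) * pi**j * (1 - pi)**(n - j) for j in range(k, n + 1))

pi = 0.17                        # hypothetical flora-wide medicinal fraction
p_over = binom_tail(80, 30, pi)  # hypothetical family: 30 medicinal of 80 species
```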
Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C
2014-07-30
Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth abbreviated as CIs) for the difference between two proportions in the paired binomial setting using the method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use a weighted likelihood that essentially adjusts the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performance of the proposed CIs with those of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and the Jeffreys interval (cf. Tang et al.) are superior in achieving the nominal level, and are competitive in terms of expected lengths. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.
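The cell-adjustment idea can be sketched with an adjusted Wald interval in the spirit of Agresti and Min: add 0.5 to each cell of the paired 2 x 2 table before forming the interval for the difference of the marginal proportions. This is a hedged illustration of the adjustment, not the paper's weighted profile likelihood interval, and the table below is hypothetical:

```python
from math import sqrt

# Paired 2x2 cells: a = (+,+), b = (+,-), c = (-,+), d = (-,-).
# The difference of marginal proportions is (b - c) / n.
def paired_diff_ci(a, b, c, d, z=1.96, adj=0.5):
    a, b, c, d = (x + adj for x in (a, b, c, d))
    n = a + b + c + d
    delta = (b - c) / n                           # estimate of p1 - p2
    se = sqrt((b + c) / n - delta**2) / sqrt(n)   # Wald SE for paired data
    return delta - z * se, delta + z * se

lo, hi = paired_diff_ci(40, 12, 5, 43)            # hypothetical study table
```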
Directory of Open Access Journals (Sweden)
Jianwei Zhou
2014-01-01
Full Text Available Explicit formulae for the spectral norms of circulant-type matrices are investigated; the matrices considered are the circulant, skew-circulant, and g-circulant matrices. The entries are products of binomial coefficients and harmonic numbers. Explicit identities for these spectral norms are obtained, and some numerical tests are listed to verify the results.
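Such numerical tests are easy to reproduce for the plain circulant case, since a circulant matrix is diagonalized by the Fourier matrix: its spectral norm is max_j |sum_k c_k w^(jk)| with w = exp(2*pi*i/n), and for nonnegative entries the maximum is attained at j = 0, i.e. the row sum. The sketch below checks this for binomial-times-harmonic entries (the abstract's explicit identities are not reproduced here):

```python
import cmath
from math import comb

def harmonic(k):                 # H_0 = 0, H_k = 1 + 1/2 + ... + 1/k
    return sum(1.0 / i for i in range(1, k + 1))

def circulant_spectral_norm(c):
    n = len(c)
    w = cmath.exp(2j * cmath.pi / n)
    # eigenvalues of the circulant with first row c: lambda_j = sum_k c_k w^(jk)
    return max(abs(sum(ck * w**(j * k) for k, ck in enumerate(c)))
               for j in range(n))

n = 6
c = [comb(n, k) * harmonic(k) for k in range(n)]   # binomial x harmonic entries
norm = circulant_spectral_norm(c)
```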
Lee, JuHee; Park, Chang Gi; Choi, Moonki
2016-05-01
This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
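The reason a zero-inflated model is used alongside the plain negative binomial can be shown analytically: under a negative binomial with mean mu and dispersion r, P(Y = 0) = (r/(r + mu))^r, and zero inflation mixes in structural zeros (here, patients who do not exercise at all) with probability pi_0. The parameter values below are illustrative, not estimates from the study:

```python
# Zero probabilities under NB and zero-inflated NB (ZINB) count models
def nb_zero(mu, r):
    return (r / (r + mu)) ** r

def zinb_zero(mu, r, pi0):
    return pi0 + (1 - pi0) * nb_zero(mu, r)

p_nb = nb_zero(mu=3.0, r=1.5)                 # zeros from the count process alone
p_zinb = zinb_zero(mu=3.0, r=1.5, pi0=0.2)    # extra structural non-exercisers
```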
Sequential extraction applied to Peruibe black mud, SP, Brazil
International Nuclear Information System (INIS)
Torrecilha, Jefferson Koyaishi
2014-01-01
The Peruibe black mud is used in therapeutic treatments for psoriasis, peripheral dermatitis, acne and seborrhoea, as well as in the treatment of myalgia, arthritis, rheumatism and non-articular processes. Like other medicinal clays, it may not be free from adverse health effects, since hazardous minerals and toxic elements can cause respiratory and other problems. Once used for therapeutic purposes, any given material should be fully characterized, and thus samples of Peruibe black mud were analyzed to determine physical and chemical properties: moisture content, organic matter and loss on ignition; pH, particle size, cation exchange capacity and swelling index. The elemental composition was determined by neutron activation analysis, graphite furnace atomic absorption and X-ray fluorescence; the mineralogical composition was determined by X-ray diffraction. Another tool widely used to evaluate the behavior of trace elements in various environmental matrices is sequential extraction. Thus, a sequential extraction procedure was applied to fractionate the mud into specific geochemical forms and to verify how, and how much of, the elements may be contained in it. Among the several sequential extraction procedures, the BCR-701 method (Community Bureau of Reference) was used, since it is considered the most reproducible of them. A simple extraction with artificial sweat was also applied in order to verify which components are potentially available for absorption through the patient's skin during topical treatment. The results indicated that the mud is basically composed of a silty-clay material, rich in organic matter and with good cation exchange capacity. There were no significant variations in mineralogy and elemental composition between the in natura and mature mud forms. The analyses by sequential extraction and by simple extraction indicated that the elements possibly available in larger
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
International Nuclear Information System (INIS)
Lee, Tae Hee; Kim, Ho Sung
2010-01-01
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. The leave-k-out cross-validation technique, despite its considerably high computational cost, cannot measure the fidelity of metamodels. Recently, the mean-0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean-0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used as a stopping criterion for the sequential sampling of metamodels.
Finding False Paths in Sequential Circuits
Matrosova, A. Yu.; Andreeva, V. V.; Chernyshov, S. V.; Rozhkova, S. V.; Kudin, D. V.
2018-02-01
A method of finding false paths in sequential circuits is developed. In contrast with the heuristic approaches currently in use, a precise method is suggested, based on applying operations on Reduced Ordered Binary Decision Diagrams (ROBDDs) extracted from the combinational part of a sequential controlling logic circuit. The method finds false paths when the transfer sequence length is not more than a given value, and obviates the need to investigate combinational circuit equivalents of the given lengths. The possibility of applying the developed method to more complicated circuits is discussed.
Passive Baited Sequential Filth Fly Trap.
Aldridge, Robert L; Britch, Seth C; Snelling, Melissa; Gutierez, Arturo; White, Gregory; Linthicum, Kenneth J
2015-09-01
Filth fly control measures may be optimized with a better understanding of fly population dynamics measured throughout the day. We describe the modification of a commercial motorized sequential mosquito trap to accept liquid odorous bait and leverage a classic inverted-cone design to passively confine flies in 8 modified collection bottles corresponding to 8 intervals. Efficacy trials in a hot-arid desert environment indicate no significant difference (P = 0.896) between the modified sequential trap and a Rid-Max® fly trap.
Asynchronous Operators of Sequential Logic Venjunction & Sequention
Vasyukevich, Vadim
2011-01-01
This book is dedicated to new mathematical instruments assigned for logical modeling of the memory of digital devices. The case in point is logic-dynamical operation named venjunction and venjunctive function as well as sequention and sequentional function. Venjunction and sequention operate within the framework of sequential logic. In a form of the corresponding equations, they organically fit analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous
Double sequential defibrillation for refractory ventricular fibrillation.
El Tawil, Chady; Mrad, Sandra; Khishfe, Basem F
2017-12-01
A 54-year-old suffered an out-of-hospital cardiac arrest. Compressions were started within minutes, and the patient remained in refractory ventricular fibrillation despite multiple unsynchronized shocks and maximal doses of antiarrhythmic agents. Double sequential defibrillation was attempted, with successful return of spontaneous circulation (ROSC) after a total of 61 minutes of cardiac arrest. The patient was discharged home neurologically intact. Double sequential defibrillation could be a simple, effective approach for patients with refractory ventricular fibrillation. Copyright © 2017 Elsevier Inc. All rights reserved.
Conversion from Excel into Aleph sequential
Renaville, François; Thirion, Paul
2009-01-01
Libraries must sometimes load records that are not available to them in a bibliographic format standard (Marc21, Unimarc...): integration of the book database of an academic research center, list of new e-journals bought by the library... This can make the conversion procedure of the data to the Aleph sequential format quite hard. Sometimes the records are only available in Excel. This poster explains how to convert easily in a few steps an Excel file into Aleph sequential in order to load re...
Design of Ultra-Wideband Tapered Slot Antenna by Using Binomial Transformer with Corrugation
Chareonsiri, Yosita; Thaiwirot, Wanwisa; Akkaraekthalin, Prayoot
2017-05-01
In this paper, a tapered slot antenna (TSA) with corrugation is proposed for UWB applications. A multi-section binomial transformer is used to design the taper profile of the proposed TSA without resorting to time-consuming optimization. A step-by-step procedure is presented for synthesizing the step impedance values, which correspond to the step slot widths of the taper profile. A smooth taper can be achieved by fitting a smoothing curve to the entire step slot profile. The TSA designed by this method yields a fairly flat gain and a wide impedance bandwidth covering the UWB spectrum from 3.1 GHz to 10.6 GHz. To further improve the radiation characteristics, corrugation is added on both edges of the proposed TSA. The effects of different corrugation shapes on the improvement of antenna gain and front-to-back ratio (F-to-B ratio) are investigated. To demonstrate the validity of the design, prototypes of the TSA without and with corrugation were fabricated and measured. The results show good agreement between simulation and measurement.
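The binomial (maximally flat) multisection transformer behind such a taper follows a textbook recursion: section impedances satisfy ln(Z_{n+1}/Z_n) = 2^(-N) C(N, n) ln(Z_L/Z_0) for n = 0..N. A sketch with illustrative values (a 50-ohm feed, 100-ohm load, N = 3 sections; these are not the antenna's actual slot impedances):

```python
from math import comb, exp, log

# Binomial multisection transformer: returns [Z0, Z1, ..., ZN, ZL],
# with each step sized by the binomial coefficient C(N, n).
def binomial_transformer(z0, zl, N):
    z = [z0]
    for n in range(N + 1):
        z.append(z[-1] * exp(2.0**-N * comb(N, n) * log(zl / z0)))
    return z

z = binomial_transformer(50.0, 100.0, 3)
```

Because the binomial coefficients sum to 2^N, the recursion lands exactly on the load impedance, and the monotone, centrally weighted steps are what make the match maximally flat at the design frequency.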
Hilpert, Markus; Rasmuson, Anna; Johnson, William
2017-04-01
Transport of colloids in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Near-surface fluid domain colloids experience relatively low fluid drag and relatively strong colloidal forces that slow their down-gradient translation relative to colloids in bulk fluid. Near surface fluid domain colloids may re-enter into the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via strong primary minimum interactions, or they may move along a grain-to-grain contact to the near surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization onto grain surfaces. Colloid movement is described by a sequence of trials in a series of unit cells, and the binomial distribution is used to calculate the probabilities of each possible sequence. Pore-scale simulations provide mechanistically-determined likelihoods and timescales associated with the above pore-scale colloid mass transfer processes, whereas the network-scale model employs pore and grain topology to determine probabilities of transfer from up-gradient bulk and near-surface fluid domains to down-gradient bulk and near-surface fluid domains. Inter-grain transport of colloids in the near surface fluid domain can cause extended tailing.
The binomial work-health in the transit of Curitiba city.
Tokars, Eunice; Moro, Antonio Renato Pereira; Cruz, Roberto Moraes
2012-01-01
Working in the traffic of large cities involves a complex interaction with an environment that is often unsafe and unhealthy, unbalancing the work-health binomial. The aim of this paper was to analyze the relationship between work and health of taxi drivers in Curitiba, Brazil. This cross-sectional observational study of 206 individuals used a questionnaire on the drivers' profile and their perception of the work organization and environment, together with direct observation of the work. It was found that the majority are male, aged between 26 and 49 years, with a high school degree. They are sedentary and typically work shifts of 8 to 12 hours. They consider the profession stressful, report low back pain, and are concerned about safety and accidents; 40% smoke and consume alcohol, and 65% do not have or do not use comfort devices. Risk factors present in daily taxi work impose physical, cognitive and organizational constraints that can affect performance. It is concluded that taxi drivers must change their unhealthy lifestyle, and that more efficient action by government authorities is required for this work to be healthy and safe for all involved.
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on sequential swallowing of liquids, characterizing the methodology of the studies and the most important findings in young and elderly adults. Review of the literature in English and Portuguese on the PubMed, LILACS, SciELO and MEDLINE databases from the past twenty years, available in full text, using the following terms: sequential swallowing, swallowing, dysphagia, cup, straw, in various combinations. We included research articles with a methodological approach to the characterization of sequential swallowing of liquids by young and/or elderly adults, regardless of health condition, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives; number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to sample characterization and methodology regarding volume intake, utensil used and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to the intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and were described in accordance with the objectives of each study. We found great variability in the methodology employed to characterize sequential swallowing. Some findings are not comparable, sequential swallowing is not examined in most swallowing protocols, and there is no consensus on the influence of the utensil.
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
For example, MacGibbon and Tomberlin (1989) have considered estimating small area rates and binomial parameters using empirical Bayes methods. Stroud (1991) used a hierarchical Bayes approach for univariate natural exponential families with quadratic variance functions in sample survey applications, while Chaubey ...
International Nuclear Information System (INIS)
Molloy, Janelle A.
2010-01-01
Purpose: Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Methods: Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. Results: The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than ±10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times.
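A toy numerical version of the analytic setup above: a Gaussian dose cloud sweeps the 183 cm patient length at constant speed while a blood voxel oscillates sinusoidally in the craniocaudal direction, and dose is accumulated by numerical integration. The oscillation amplitude, treatment time and dose-rate units are illustrative choices, not the paper's values:

```python
from math import exp, pi, sin

def blood_voxel_dose(period, phase, treat_time=600.0, length=183.0,
                     fwhm=10.0, amplitude=40.0, dt=0.05):
    sigma = fwhm / 2.355                  # Gaussian cloud width from its FWHM
    dose, t = 0.0, 0.0
    while t < treat_time:
        beam_z = length * t / treat_time                       # sweeping beam
        voxel_z = length / 2 + amplitude * sin(2 * pi * t / period + phase)
        dose += exp(-0.5 * ((voxel_z - beam_z) / sigma) ** 2) * dt
        t += dt
    return dose

# Dose heterogeneity across circulation phases for the ~78 s period above:
doses = [blood_voxel_dose(78.0, ph) for ph in (0.0, 1.0, 2.0, 3.0)]
```

The spread of doses across phases is the heterogeneity at issue: voxels whose oscillation happens to track the sweeping beam accumulate more dose than those moving against it.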
Molloy, Janelle A
2010-11-01
Improvements in delivery techniques for total body irradiation (TBI) using Tomotherapy and intensity modulated radiation therapy have been proven feasible. Despite the promise of improved dose conformality, the application of these "sequential" techniques has been hampered by concerns over dose heterogeneity to circulating blood. The present study was conducted to provide quantitative evidence regarding the potential clinical impact of this heterogeneity. Blood perfusion was modeled analytically as possessing linear, sinusoidal motion in the craniocaudal dimension. The average perfusion period for human circulation was estimated to be approximately 78 s. Sequential treatment delivery was modeled as a Gaussian-shaped dose cloud with a 10 cm length that traversed a 183 cm patient length at a uniform speed. Total dose to circulating blood voxels was calculated via numerical integration and normalized to 2 Gy per fraction. Dose statistics and equivalent uniform dose (EUD) were calculated for relevant treatment times, radiobiological parameters, blood perfusion rates, and fractionation schemes. The model was then refined to account for random dispersion superimposed onto the underlying periodic blood flow. Finally, a fully stochastic model was developed using binomial and trinomial probability distributions. These models allowed for the analysis of nonlinear sequential treatment modalities and treatment designs that incorporate deliberate organ sparing. The dose received by individual blood voxels exhibited asymmetric behavior that depended on the coherence among the blood velocity, circulation phase, and the spatiotemporal characteristics of the irradiation beam. Heterogeneity increased with the perfusion period and decreased with the treatment time. Notwithstanding, heterogeneity was less than +/- 10% for perfusion periods less than 150 s. The EUD was compromised for radiosensitive cells, long perfusion periods, and short treatment times. However, the EUD was
Weak Sequential Composition in Process Algebras
Rensink, Arend; Jonsson, B.; Parrow, J.; Wehrheim, H.
1994-01-01
In this paper we study a special operator for sequential composition, which is defined relative to a dependency relation over the actions of a given system. The idea is that actions which are not dependent (intuitively because they share no common resources) do not have to wait for one another to
Sequential Bayesian technique: An alternative approach for ...
Indian Academy of Sciences (India)
This paper proposes a sequential Bayesian approach, similar to the Kalman filter, for estimating the reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time, as new failure data become available. The usefulness of the method is demonstrated with ...
Sequential Bayesian technique: An alternative approach for ...
Indian Academy of Sciences (India)
MS received 8 October 2007; revised 15 July 2008. Abstract. This paper proposes a sequential Bayesian approach, similar to the Kalman filter, for estimating the reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time, as new failure data become ...
Fareed Zakaria's Democratic Sequentialism and Nigeria's ...
African Journals Online (AJOL)
This essay will attempt to analyse the prospects for political and economic liberalisation, and ultimately democracy, in Nigeria by examining Fareed Zakaria's prescription of democratic sequentialism, i.e., liberalism before democracy. Zakaria's argument was that due to cultural variation, different societies will require different ...
Early Astronomical Sequential Photography, 1873-1923
Bonifácio, Vitor
2011-11-01
In 1873 Jules Janssen conceived the first automatic sequential photographic apparatus to observe the eagerly anticipated 1874 transit of Venus. This device, the 'photographic revolver', is commonly considered today as the earliest cinema precursor. In the following years, in order to study the variability or the motion of celestial objects, several instruments, either manually or automatically actuated, were devised to obtain as many photographs as possible of astronomical events in a short time interval. In this paper we strive to identify from the available documents the attempts made between 1873 and 1923, and discuss the motivations behind them and the results obtained. During the period studied, astronomical sequential photography was employed to determine the instants of contact in transits and occultations, and to study total solar eclipses. The technique was seldom used, but apparently the invention of the modern film camera played no role in this situation. Astronomical sequential photographs were obtained both before and after 1895. We conclude that the development of astronomical sequential photography was constrained by the reduced number of subjects to which the technique could be applied.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.
CICIARELLI, V; LEONARD, JOSEPH
A sequential mathematics program beginning with the basic fundamentals on the fourth grade level is presented. Included are an understanding of our number system, and the basic operations of working with whole numbers--addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. A…
The curse of sequentiality in routing games
Correa, José; de Jong, Jasper; de Keijzer, Bart; Uetz, Marc Jochen; Markakis, Evangelos; Schäfer, Guido
2015-01-01
In "The curse of simultaneity", Paes Leme et al. show that there are interesting classes of games for which sequential decision making and corresponding subgame perfect equilibria avoid worst case Nash equilibria, resulting in substantial improvements for the price of anarchy. This is called the
A framework for sequential multiblock component methods
Smilde, A.K.; Westerhuis, J.A.; Jong, S.de
2003-01-01
Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework
STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY
Directory of Open Access Journals (Sweden)
Damián Fernández
2014-12-01
Full Text Available We review the motivation for, the current state of the art in convergence results for, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to the more general variational problems.
Adult Word Recognition and Visual Sequential Memory
Holmes, V. M.
2012-01-01
Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…
Interpretability degrees of finitely axiomatized sequential theories
Visser, Albert
In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed
Interpretability Degrees of Finitely Axiomatized Sequential Theories
Visser, Albert
2012-01-01
In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question
Sequential auctions for full truckload allocation
Mes, Martijn R.K.
2008-01-01
In this thesis we examine the use of sequential auctions for the dynamic allocation of transportation jobs. For all players, buyers and sellers, we develop strategies and examine their performance both in terms of individual benefits and with respect to the global logistical performance (resource
Media Exposure: How Models Simplify Sampling
DEFF Research Database (Denmark)
Mortensen, Peter Stendahl
1998-01-01
In media planning, the distribution of exposures to more ad spots in more media (print, TV, radio) is crucial to the evaluation of the campaign. If such information should be sampled, it would only be possible in expensive panel studies (e.g. TV-meter panels). Alternatively, the distribution of exposures may be modelled statistically, using the Beta distribution combined with the Binomial distribution. Examples are given.
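The Beta-Binomial mixture this abstract describes can be sketched in a few lines; the parameter values below are illustrative, not taken from the paper, and the pmf is built from the standard closed form C(n,k)·B(k+a, n−k+b)/B(a,b).

```python
import math

# Beta-Binomial sketch: each person's exposure probability p ~ Beta(a, b);
# exposures out of n ad spots ~ Binomial(n, p). Mixing p out gives the pmf below.
def betabinom_pmf(k, n, a, b):
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

n, a, b = 10, 2.0, 5.0  # illustrative campaign of 10 spots
pmf = [betabinom_pmf(k, n, a, b) for k in range(n + 1)]
mean = sum(k * p for k, p in enumerate(pmf))  # equals n * a / (a + b)
```

Unlike a plain Binomial with the same mean, this mixture is overdispersed, which is what makes it useful for heterogeneous audiences.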
Moghimbeigi, Abbas
2015-05-07
Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, the zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Adding genetic variables to the negative binomial part of the equation may also affect the extra-zero data. In this study, to overcome these challenges, I apply a two-part ZINB model. The EM algorithm with the Newton-Raphson method in the M-step is used for estimating parameters. An application of the two-part ZINB model for QTL mapping is considered to detect associations between the formation of gallstone and the genotype of markers. Copyright © 2015 Elsevier Ltd. All rights reserved.
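The ZINB density underlying such models is simple to write down: extra point mass π at zero plus (1 − π) times a negative binomial pmf. A stdlib-only sketch, with illustrative parameter values:

```python
import math

def nb_pmf(k, r, p):
    """NB pmf: k failures before the r-th success (r may be non-integer)."""
    log_coef = math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
    return math.exp(log_coef + r * math.log(p) + k * math.log(1.0 - p))

def zinb_pmf(k, pi, r, p):
    """Zero-inflated NB: mass pi at zero, (1 - pi) * NB elsewhere."""
    base = (1.0 - pi) * nb_pmf(k, r, p)
    return pi + base if k == 0 else base

pi, r, p = 0.3, 2.0, 0.5  # illustrative values only
total = sum(zinb_pmf(k, pi, r, p) for k in range(500))
mean = sum(k * zinb_pmf(k, pi, r, p) for k in range(500))  # (1-pi) * r*(1-p)/p
```

In a regression setting π and the NB mean would each be linked to covariates (e.g. marker genotypes); the sketch only shows the distributional building block.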
Directory of Open Access Journals (Sweden)
Lioba Simon-Schuhmacher
2015-11-01
Full Text Available The essay describes the use of the Night-Death binomial and tracks its evolution from the eighteenth century to Expressionism across Great Britain, Germany, Spain, and Austria, through poems such as Edward Young's Night Thoughts (1745), Novalis's Hymnen an die Nacht (1800), José Blanco White's sonnet "Night and Death" (1828), and Georg Trakl's "Verwandlung des Bösen" (1914). Romanticism brought along a preference for the nocturnal: night, moonlight, shades and shadows, mist and mystery, somber mood, morbidity, and death, as opposed to the Enlightenment's predilection for day, light, clarity, and life. The essay analyses how poets from different national contexts and ages employ images and symbols of the night to create an association with death. It furthermore shows how, with varying attitudes and results, they manage to convert this binomial into a poetic ploy.
Directory of Open Access Journals (Sweden)
Giselle Garcia-Sepulveda
2017-06-01
Full Text Available This paper describes the frequency and number of Trifur tortuosus in the skin of Merluccius gayi, an important economic resource in Chile. Analysis of a spatial distribution model indicated that the parasites tended to cluster. Variations in the number of parasites per host can be described by a negative binomial distribution. The maximum number of parasites observed per host was one; a similar pattern was described for other parasites in Chilean marine fishes.
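Aggregation of the kind this abstract fits with a negative binomial is commonly screened with the variance-to-mean ratio; a minimal sketch on hypothetical counts (not the study's data):

```python
# Hypothetical parasite counts per host, purely for illustration.
counts = [0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 3, 0, 1, 0, 0]
n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
dispersion = var / mean  # > 1 suggests aggregation (negative binomial-like)
```

A Poisson (random) pattern gives a ratio near 1; values well above 1 point toward clustered, NB-distributed counts.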
Najera-Zuloaga, Josu; Lee, Dae-Jin; Arostegui, Inmaculada
2017-01-01
Health-related quality of life has become an increasingly important indicator of health status in clinical trials and epidemiological research. Moreover, the study of the relationship of health-related quality of life with patient and disease characteristics has become one of the primary aims of many health-related quality of life studies. Health-related quality of life scores are usually assumed to be distributed as binomial random variables and are often highly skewed. The use of the beta-binomial distribution in the regression context has been proposed to model such data; however, beta-binomial regression has been performed by means of two different approaches in the literature: (i) the beta-binomial distribution with a logistic link; and (ii) hierarchical generalized linear models. None of the existing literature on the analysis of health-related quality of life survey data has compared both approaches in terms of adequacy and regression parameter interpretation. This paper is motivated by the analysis of a real data application of health-related quality of life outcomes in patients with Chronic Obstructive Pulmonary Disease, where the use of the two approaches yields contradictory results in terms of the significance of covariate effects and, consequently, the interpretation of the most relevant factors in health-related quality of life. We present an explanation of the results of both methodologies through a simulation study and address the need to apply the proper approach in the analysis of health-related quality of life survey data for practitioners, providing an R package.
Criterios sobre el uso de la distribución normal para aproximar la distribución binomial
Ortiz Pinilla, Jorge; Castro, Amparo; Neira, Tito; Torres, Pedro; Castañeda, Javier
2012-01-01
The two best-known empirical rules for accepting the normal approximation to the binomial distribution lack regularity in controlling the margin of error incurred when the normal approximation is used. A criterion and some formulas are proposed to control it near selected values of the maximum error.
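The approximation error the abstract refers to can be measured directly by comparing the exact binomial CDF with the continuity-corrected normal CDF; a stdlib-only sketch with illustrative n and p:

```python
import math

def binom_cdf(k, n, p):
    """Exact binomial CDF by direct summation."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, p = 30, 0.4  # illustrative case, not tied to any particular empirical rule
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
# Maximum error of the continuity-corrected normal approximation over all k.
err = max(abs(binom_cdf(k, n, p) - normal_cdf((k + 0.5 - mu) / sigma))
          for k in range(n + 1))
```

Scanning this maximum error over a grid of (n, p) pairs is precisely the kind of check that exposes the irregularity of rules like "np > 5".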
Speciation of heavy metals in street dust samples from Sakarya I ...
African Journals Online (AJOL)
The concentrations of heavy metals (..., Pb and Zn) in 20 dust samples collected from the streets of the Organized Industrial District in Sakarya, Turkey, were determined by ICP-OES using a sequential extraction procedure. The three-step BCR sequential extraction procedure was ...
Robust inference in the negative binomial regression model with an application to falls data.
Aeberhard, William H; Cantoni, Eva; Heritier, Stephane
2014-12-01
A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimating methods are well known to be sensitive to model misspecifications, taking the form of patients falling much more than expected in such intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function on the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease to illustrate the diagnostic use of such robust procedures and the need for reliable inference. © 2014, The International Biometric Society.
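The first approach (bounding Pearson residuals) can be illustrated with a Huber-type bounding function; the NB variance form, tuning constant, and data values below are a hedged sketch, not the authors' exact estimating equations:

```python
import math

def huber_psi(r, c=1.345):
    """Huber bounding function: identity on [-c, c], clipped outside."""
    return max(-c, min(c, r))

def pearson_residual(y, mu, alpha):
    """Pearson residual under an NB model with variance mu + alpha * mu^2."""
    return (y - mu) / math.sqrt(mu + alpha * mu * mu)

# An extreme count (e.g. a patient falling far more than expected) gets the
# same bounded contribution as a merely large one; a moderate count passes through.
mu, alpha = 2.0, 0.5
r_big = pearson_residual(40, mu, alpha)  # outlying observation -> residual 19
r_mod = pearson_residual(3, mu, alpha)   # moderate observation -> residual 0.5
bounded = [huber_psi(r) for r in (r_big, r_mod)]
```

Summing such bounded residuals (times suitable weights) in place of the raw ones is what caps the influence of contaminated observations on the fitted coefficients.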
Hilpert, Markus; Rasmuson, Anna; Johnson, William P.
2017-07-01
Colloid transport in saturated porous media is significantly influenced by colloidal interactions with grain surfaces. Near-surface fluid domain colloids experience relatively low fluid drag and relatively strong colloidal forces that slow their downgradient translation relative to colloids in bulk fluid. Near-surface fluid domain colloids may reenter into the bulk fluid via diffusion (nanoparticles) or expulsion at rear flow stagnation zones, they may immobilize (attach) via primary minimum interactions, or they may move along a grain-to-grain contact to the near-surface fluid domain of an adjacent grain. We introduce a simple model that accounts for all possible permutations of mass transfer within a dual pore and grain network. The primary phenomena thereby represented in the model are mass transfer of colloids between the bulk and near-surface fluid domains and immobilization. Colloid movement is described by a Markov chain, i.e., a sequence of trials in a 1-D network of unit cells, which contain a pore and a grain. Using combinatorial analysis, which utilizes the binomial coefficient, we derive the residence time distribution, i.e., an inventory of the discrete colloid travel times through the network and of their probabilities to occur. To parameterize the network model, we performed mechanistic pore-scale simulations in a single unit cell that determined the likelihoods and timescales associated with the above colloid mass transfer processes. We found that intergrain transport of colloids in the near-surface fluid domain can cause extended tailing, which has traditionally been attributed to hydrodynamic dispersion emanating from flow tortuosity of solute trajectories.
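The binomial-coefficient residence-time construction described above can be sketched for the simplest case: a hypothetical 1-D chain of N unit cells in which a colloid independently either stays in bulk fluid (fast transit) or detours through the near-surface domain (slow transit). All timescales and probabilities are illustrative, not the paper's pore-scale values:

```python
import math

N, q = 50, 0.1               # cells in the network; per-cell detour probability
t_fast, t_slow = 1.0, 20.0   # illustrative bulk vs near-surface transit times

def travel_time_dist(N, q, t_fast, t_slow):
    """Residence-time distribution: k slow steps out of N is Binomial(N, q)."""
    return {(N - k) * t_fast + k * t_slow:
            math.comb(N, k) * q**k * (1 - q)**(N - k)
            for k in range(N + 1)}

dist = travel_time_dist(N, q, t_fast, t_slow)
mean_time = sum(t * p for t, p in dist.items())  # N * ((1-q)*t_fast + q*t_slow)
```

The long upper tail of this distribution (rare colloids drawing many slow steps) is the combinatorial analogue of the extended tailing the authors attribute to near-surface intergrain transport.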
Geedipally, Srinivas Reddy; Lord, Dominique; Dhavala, Soma Sekhar
2012-03-01
There has been a considerable amount of work devoted by transportation safety analysts to the development and application of new and innovative models for analyzing crash data. One important characteristic about crash data that has been documented in the literature is related to datasets that contain a large number of zeros and a long or heavy tail (which creates highly dispersed data). For such datasets, the number of sites where no crash is observed is so large that traditional distributions and regression models, such as the Poisson and Poisson-gamma or negative binomial (NB) models, cannot be used efficiently. To overcome this problem, the NB-Lindley (NB-L) distribution has recently been introduced for analyzing count data that are characterized by excess zeros. The objective of this paper is to document the application of a NB generalized linear model with Lindley mixed effects (NB-L GLM) for analyzing traffic crash data. The study objective was accomplished using simulated and observed datasets. The simulated dataset was used to show the general performance of the model. The model was then applied to two datasets based on observed data. One of the datasets was characterized by a large number of zeros. The NB-L GLM was compared with the NB and zero-inflated models. Overall, the research study shows that the NB-L GLM not only offers superior performance over the NB and zero-inflated models when datasets are characterized by a large number of zeros and a long tail, but also when the crash dataset is highly dispersed. Published by Elsevier Ltd.
Group sequential designs for stepped-wedge cluster randomised trials.
Grayling, Michael J; Wason, James Ms; Mander, Adrian P
2017-10-01
The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into
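The error spending approach the authors adapt allocates the type-I error rate gradually across interim analyses. A minimal stdlib-only sketch of a Lan-DeMets O'Brien-Fleming-type spending function (the information fractions and two-sided alpha below are illustrative, not the trial's design values):

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def normal_ppf(u, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (stdlib-only sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if normal_cdf(mid) < u else (lo, mid)
    return (lo + hi) / 2

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending: f(t) = 2 - 2*Phi(z_{a/2}/sqrt(t))."""
    z = normal_ppf(1 - alpha / 2)
    return min(alpha, 2 * (1 - normal_cdf(z / math.sqrt(t))))

# Cumulative alpha spent at four equally spaced interim looks.
spent = [obf_spending(t) for t in (0.25, 0.5, 0.75, 1.0)]
```

Very little alpha is spent early, making early stopping for efficacy conservative while preserving the overall error rate at the final analysis; in the stepped-wedge setting the information fractions would come from the assumed mixed model rather than equal spacing.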
Negative-binomial multiplicity distributions in the interaction of light ions with 12C at 4.2 GeV/c
International Nuclear Information System (INIS)
Tucholski, A.; Bogdanowicz, J.; Moroz, Z.; Wojtkowska, J.
1989-01-01
Multiplicity distribution of single-charged particles in the interaction of p, d, α and 12 C projectiles with C target nuclei at 4.2 GeV/c were analysed in terms of the negative binomial distribution. The experimental data obtained by the Dubna Propane Bubble Chamber Group were used. It is shown that the experimental distributions are satisfactorily described by the negative-binomial distribution. Values of the parameters of these distributions are discussed. (orig.)
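The negative binomial law used in such multiplicity analyses is conventionally parameterized by the mean multiplicity n̄ and the parameter k; a sketch with illustrative values (not the fitted Dubna parameters):

```python
import math

def nb_multiplicity(n, nbar, k):
    """NB multiplicity distribution P(n; nbar, k), the parameterization
    standard in multiparticle production studies."""
    log_coef = math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
    return math.exp(log_coef + n * math.log(nbar / (nbar + k))
                    + k * math.log(k / (nbar + k)))

nbar, k = 3.0, 2.0  # illustrative mean multiplicity and shape parameter
pmf = [nb_multiplicity(n, nbar, k) for n in range(200)]
mean = sum(n * p for n, p in enumerate(pmf))
```

The variance is n̄ + n̄²/k, so smaller k means broader-than-Poisson multiplicity fluctuations, which is why the fitted k values are physically interesting.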
International Nuclear Information System (INIS)
Ghosh, D.; Deb, A.; Haldar, P.K.; Sahoo, S.R.; Maity, D.
2004-01-01
This work studies the validity of the negative binomial distribution in the multiplicity distribution of charged secondaries in 16 O and 32 S interactions with AgBr at 60 GeV/c per nucleon and 200 GeV/c per nucleon, respectively. The validity of negative binomial distribution (NBD) is studied in different azimuthal phase spaces. It is observed that the data can be well parameterized in terms of the NBD law for different azimuthal phase spaces. (authors)
Manzini, Paola; Mariotti, Marco
2004-01-01
A sequentially rationalizable choice function is a choice function which can be obtained by applying sequentially a fixed set of asymmetric binary relations (rationales). A Rational Shortlist Method (RSM) is a choice function which is sequentially rationalizable by two rationales. These concepts translate into economic language some human choice heuristics studied in psychology. We provide a full characterization of RSMs and study some properties of sequential rationalizability. These properti...
Sequential shrink photolithography for plastic microlens arrays
Dyer, David; Shreim, Samir; Jayadev, Shreshta; Lew, Valerie; Botvinick, Elliot; Khine, Michelle
2011-01-01
Endeavoring to push the boundaries of microfabrication with shrinkable polymers, we have developed a sequential shrink photolithography process. We demonstrate the utility of this approach by rapidly fabricating plastic microlens arrays. First, we create a mask out of the children’s toy Shrinky Dinks by simply printing dots using a standard desktop printer. Upon retraction of this pre-stressed thermoplastic sheet, the dots shrink to a fraction of their original size, which we then lithographically transfer onto photoresist-coated commodity shrink wrap film. This shrink film reduces in area by 95% when briefly heated, creating smooth convex photoresist bumps down to 30 µm. Taken together, this sequential shrink process provides a complete process to create microlenses, with an almost 99% reduction in area from the original pattern size. Finally, with a lithography molding step, we emboss these bumps into optical grade plastics such as cyclic olefin copolymer for functional microlens arrays. PMID:21863126
Robust Sequential Analysis for Special Capacities
高橋, 一
1985-01-01
Huber type robustness will be considered for some extensions of Wald's Sequential Probability Ratio Test, including Wald's three decision problem and the Kiefer-Weiss formulation. The results of Huber (1965, 1968), Huber and Strassen (1973), Rieder (1977) and Osterreicher (1978) will be extended to derive a least favorable tuple in the multiple decision problem. And then the asymptotically least favorable Kiefer-Weiss procedure together with its asymptotic relative efficiency for the s-contami...
Sleep memory processing: the sequential hypothesis
Giuditta, Antonio
2014-01-01
According to the sequential hypothesis (SH) memories acquired during wakefulness are processed during sleep in two serial steps respectively occurring during slow wave sleep (SWS) and rapid eye movement (REM) sleep. During SWS memories to be retained are distinguished from irrelevant or competing traces that undergo downgrading or elimination. Processed memories are stored again during REM sleep which integrates them with preexisting memories. The hypothesis received support from a wealth of ...
Sequential neural models with stochastic layers
DEFF Research Database (Denmark)
Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich
2016-01-01
How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...
On Locally Most Powerful Sequential Rank Tests
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985807 Keywords : nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016
Sequential pattern recognition by maximum conditional informativity
Czech Academy of Sciences Publication Activity Database
Grim, Jiří
2014-01-01
Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords : Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf
Sequential tests for near-real-time accounting
International Nuclear Information System (INIS)
Cobb, D.D.
1981-01-01
Statistical hypothesis testing is used in the analysis of nuclear materials accounting data for evidence of diversion. Sequential hypothesis testing is particularly well suited for analyzing data that arise sequentially in time from near-real-time accounting systems. The properties of selected sequential tests adapted for this application are described. 10 figures, 12 tables
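Wald's sequential probability ratio test (SPRT), the basic building block behind such near-real-time tests, can be sketched for Gaussian observations; the hypothesis means, variance, and error rates below are illustrative, not those of any accounting system:

```python
import math

def sprt_llr(x, mu0, mu1, sigma):
    """Log-likelihood ratio increment for one Gaussian observation."""
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def sprt(observations, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate the LLR and stop at Wald's approximate boundaries."""
    upper = math.log((1 - beta) / alpha)  # cross -> decide H1 (e.g. diversion)
    lower = math.log(beta / (1 - alpha))  # cross -> decide H0 (no diversion)
    llr = 0.0
    for i, x in enumerate(observations, start=1):
        llr += sprt_llr(x, mu0, mu1, sigma)
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "continue", len(observations)
```

The appeal for near-real-time accounting is visible in the return value: the test reaches a decision as soon as the evidence permits, rather than after a fixed number of material balances.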
Solid reactors in sequential injection analysis: Recent trends in the environmental field
DEFF Research Database (Denmark)
Miró, Manuel; Hansen, Elo Harald
2006-01-01
The second generation of flow injection analysis (FIA), so-called sequential injection (SI), has already been consolidated as an attractive flowing-stream approach in several analytical fields, with advantages over the first generation of FIA in terms of automation, miniaturization, and sample/re...
Sequential Analysis of Conflict and Accord in Distressed and Nondistressed Marital Partners.
Margolin, Gayla; Wampold, Bruce E.
1981-01-01
Compared the interactional patterns of distressed (N=22) and nondistressed (N=17) couples through base rate and sequential analyses of communication samples that were coded with the Marital Interactional Coding System. Nondistressed couples emitted higher rates of problem-solving, verbal and nonverbal positive, and neutral behaviors. (Author)
van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I
2002-09-01
An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.
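The reported linear relationship between peak width and the logarithm of acid concentration amounts to a simple calibration fit; the peak-width values below are hypothetical, only the 1-9 g/100 mL range comes from the abstract:

```python
import math

# Hypothetical calibration data: peak width (s) vs acetic acid concentration
# within the reported linear range of 1-9 g/100 mL.
conc = [1, 3, 5, 7, 9]
width = [10.0, 14.8, 17.0, 18.5, 19.5]  # illustrative peak widths

# Ordinary least squares of width against log10(concentration).
x = [math.log10(c) for c in conc]
n = len(x)
xbar, ybar = sum(x) / n, sum(width) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, width))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
```

Inverting the fitted line (concentration = 10**((width - intercept) / slope)) is how an unknown vinegar sample's acidity would be read off such a calibration.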
A MEMS lamination technology based on sequential multilayer electrodeposition
International Nuclear Information System (INIS)
Kim, Minsoo; Kim, Jooncheol; Herrault, Florian; Schafer, Richard; Allen, Mark G
2013-01-01
A MEMS lamination technology based on sequential multilayer electrodeposition is presented. The process comprises three main steps: (1) automated sequential electrodeposition of permalloy (Ni 80 Fe 20 ) structural and copper sacrificial layers to form multilayer structures of significant total thickness; (2) fabrication of polymeric anchor structures through the thickness of the multilayer structures and (3) selective removal of copper. The resulting structure is a set of air-insulated permalloy laminations, the separation of which is sustained by insulating polymeric anchor structures. Individual laminations have precisely controllable thicknesses ranging from 500 nm to 5 µm, and each lamination layer is electrically isolated from adjacent layers by narrow air gaps of similar scale. In addition to air, interlamination insulators based on polymers are investigated. Interlamination air gaps with very high aspect ratio (>1:100) can be filled with polyvinylalcohol and polydimethylsiloxane. The laminated structures are characterized using scanning electron microscopy and atomic force microscopy to directly examine properties such as the roughness and the thickness uniformity of the layers. In addition, the quality of the electrical insulation between the laminations is evaluated by quantifying the eddy current within the sample as a function of frequency. Fabricated laminations are comprised of uniform, smooth (surface roughness <100 nm) layers with effective electrical insulation for all layer thicknesses and insulator approaches studied. Such highly laminated structures have potential uses ranging from energy conversion to applications where composite materials with highly anisotropic mechanical or thermal properties are required. (paper)
Bootstrap Sequential Determination of the Co-integration Rank in VAR Models
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert
with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice....
Group-Sequential Strategies in Clinical Trials with Multiple Co-Primary Outcomes
Hamasaki, Toshimitsu; Asakura, Koko; Evans, Scott R; Sugimoto, Tomoyuki; Sozu, Takashi
2015-01-01
We discuss decision-making frameworks for clinical trials with multiple co-primary endpoints in a group-sequential setting. The frameworks can account for flexibilities such as a varying number of analyses, equally or unequally spaced increments of information, and fixed or adaptive Type I error allocation among endpoints. They can be more efficient than fixed sample size designs, potentially requiring fewer trial participants. We investigate the operating characteristics of the decision-making frameworks and provide guidance on constructing efficient group-sequential strategies in clinical trials with multiple co-primary endpoints. PMID:25844122
Honma, Toshimitsu; Ohba, Hirotomo; Makino, Tomoyuki; Ohyama, Takuji
2015-01-01
The method for the sequential extraction of cadmium from soil was adapted to investigate the relationship between different chemical forms of cadmium in soils and the soil properties of Cd-contaminated and uncontaminated paddy soils. Air-dried soil samples from each field site were sequentially fractionated into five forms: exchangeable Cd, inorganically bound Cd, organically bound Cd, oxide-occluded fraction, and residual Cd. The average and range of soil properties such as pH, total C, tota...
Consistency of self-reported alcohol consumption on randomized and sequential alcohol purchase tasks
Directory of Open Access Journals (Sweden)
Michael eAmlung
2012-07-01
Behavioral economic demand for addictive substances is commonly assessed via purchase tasks that measure estimated drug consumption at a range of prices. Purchase tasks typically use escalating prices in sequential order, which may influence performance by providing explicit price reference points. This study investigated the consistency of value preferences on two alcohol purchase tasks (APTs) that used either a randomized or a sequential price order (price range: free to $30 per drink) in a sample of ninety-one young adult monthly drinkers. Randomization of prices significantly reduced relative response consistency (p < .01), although absolute consistency was high for both versions (>95%). Self-reported alcohol consumption across prices and indices of demand were highly similar across versions, although a few notable exceptions were found. These results suggest generally high consistency and overlapping performance between randomized and sequential price assessment. Implications for the behavioral economics literature and priorities for future research are discussed.
Directory of Open Access Journals (Sweden)
Gisela Lundberg
2008-08-01
Amplification of the oncogene MYCN in double minutes (DMs) is a common finding in neuroblastoma (NB). Because DMs lack centromeric sequences, it has been unclear how NB cells retain and amplify extrachromosomal MYCN copies during tumour development. We show that MYCN-carrying DMs in NB cells translocate from the nuclear interior to the periphery of the condensing chromatin at the transition from interphase to prophase and are preferentially located adjacent to the telomere repeat sequences of the chromosomes throughout cell division. However, DM segregation was not affected by disruption of the telosome nucleoprotein complex, and DMs readily migrated from human to murine chromatin in human/mouse cell hybrids, indicating that they do not bind to specific positional elements in human chromosomes. Scoring DM copy numbers in ana/telophase cells revealed that DM segregation could be closely approximated by a binomial random distribution. A colony-forming assay demonstrated a strong growth advantage for NB cells with high DM (MYCN) copy numbers compared to NB cells with lower copy numbers. In fact, the overall distribution of DMs in growing NB cell populations could be readily reproduced by a mathematical model assuming binomial segregation at cell division combined with a proliferative advantage for cells with high DM copy numbers. Binomial segregation at cell division explains the high degree of MYCN copy-number variability in NB. Our findings also provide a proof-of-principle for oncogene amplification through creation of genetic diversity by random events followed by Darwinian selection.
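The binomial-segregation mechanism described above is straightforward to simulate: each DM is replicated before division and the copies split between the daughters by independent coin flips. A minimal sketch (the starting copy number, population size and number of divisions are illustrative, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def divide(dm_counts, rng):
    """Each DM is replicated (2n copies per cell) and the copies then
    segregate binomially (p = 1/2) between the two daughter cells."""
    to_daughter_a = rng.binomial(2 * dm_counts, 0.5)
    to_daughter_b = 2 * dm_counts - to_daughter_a
    return np.concatenate([to_daughter_a, to_daughter_b])

# Start from a small clonal population with 50 DMs per cell.
cells = np.full(10, 50)
for _ in range(8):          # eight rounds of cell division
    cells = divide(cells, rng)

# Copy numbers diversify around the initial value purely by chance.
print(cells.mean(), cells.std())
```

Because the two daughters always share exactly 2n copies, the population mean copy number is preserved while the cell-to-cell variance grows with every division, reproducing the copy-number heterogeneity noted above; adding a proliferative advantage for high-copy cells, as in the authors' model, would then shift the distribution upward by selection.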
Directory of Open Access Journals (Sweden)
Marcelo Angelo Cirillo
2010-07-01
Statistical inference in contaminated binomial populations is subject to gross estimation errors, since the samples are not identically distributed. Given this problem, this work aims to determine the affinity constant (c1) that provides the best performance for an estimator belonging to the class of E-estimators. To that end, a Monte Carlo simulation study was conducted in which different configurations, described by combinations of parameter values, contamination levels and sample sizes, were evaluated. It was concluded that, for a high contamination probability (γ = 0.40), c1 = 0.1 is recommended for large samples (n = 50 and n = 80).
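The contamination setting can be mimicked with a small Monte Carlo experiment. The sketch below is illustrative only: it uses a crude median-based alternative rather than the paper's E-estimator class, and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, p_cont, gamma = 20, 0.3, 0.9, 0.40   # hypothetical values

# Each observation is contaminated (drawn with p_cont) with prob. gamma.
m = 5000
contaminated = rng.random(m) < gamma
x = np.where(contaminated,
             rng.binomial(n, p_cont, m),
             rng.binomial(n, p, m))

p_hat_naive = x.mean() / n           # plain moment estimator
p_hat_median = np.median(x) / n      # a crude robust alternative

print(p_hat_naive, p_hat_median)
```

With 40% of observations drawn from the contaminating component, the plain moment estimator is pulled far above p = 0.3 (toward 0.54, the mixture mean), which is exactly the kind of gross estimation error that robust estimators are designed to limit.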
International Nuclear Information System (INIS)
Ghosh, D.; Mukhopadhyay, A.; Ghosh, A.; Roy, J.
1989-01-01
This letter presents new data on the multiplicity distribution of charged secondaries in 24 Mg interactions with AgBr at 4.5 GeV/c per nucleon. The validity of the negative binomial distribution (NBD) is studied. It is observed that the data can be well parametrized in terms of the NBD law for the whole phase space and also for different pseudo-rapidity bins. A comparison of different parameters with those in the case of h-h interactions reveals some interesting results, the implications of which are discussed. (orig.)
Directory of Open Access Journals (Sweden)
Robert Kurniawan
2017-11-01
The incidence rates of dengue hemorrhagic fever (DHF) in Jakarta from 2010 to 2014 were consistently higher than the national rates. This study therefore aims to quantify the effect of weather on DHF cases. Weather was chosen because it can be observed daily and predicted, so it can serve as an early-detection signal for estimating the number of DHF cases. The data comprise daily DHF case counts and weather data including lowest and highest temperatures and rainfall. The analysis uses zero-truncated negative binomial regression at the 10% significance level. Based on daily data for the selected variables from January 1st to May 31st, 2015, the study revealed that the weather factors consisting of highest temperature, lowest temperature, and rainfall were significant predictors of the number of DHF patients in DKI Jakarta. All three variables had positive effects on the number of DHF patients in the same period. However, weather cannot be controlled, so appropriate prevention measures are required whenever weather forecasts indicate an increasing number of DHF cases in DKI Jakarta. Keywords: dengue hemorrhagic fever, zero-truncated negative binomial, early warning.
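A zero-truncated negative binomial distribution is an ordinary negative binomial conditioned on a strictly positive count, which is appropriate when only days with at least one reported case enter the data. A minimal sketch of its probability mass function using SciPy (the parameter values are arbitrary):

```python
import numpy as np
from scipy import stats

def ztnb_pmf(k, r, p):
    """Zero-truncated negative binomial: the NB pmf renormalised over
    k >= 1 by dividing out P(K > 0) = 1 - P(K = 0)."""
    k = np.asarray(k)
    base = stats.nbinom.pmf(k, r, p)
    p0 = stats.nbinom.pmf(0, r, p)
    return np.where(k >= 1, base / (1.0 - p0), 0.0)

k = np.arange(1, 200)
probs = ztnb_pmf(k, r=2.0, p=0.4)
print(probs.sum())   # sums to ~1 over k = 1, 2, ...
```

Regression then proceeds as usual by letting the NB mean depend on covariates (here, the weather variables) through a log link, with the likelihood built from this truncated pmf.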
Non-binomial distribution of palladium Kα Ln X-ray satellites emitted after excitation by 16O ions
International Nuclear Information System (INIS)
Rymuza, P.; Sujkowski, Z.; Carlen, M.; Dousse, J.C.; Gasser, M.; Kern, J.; Perny, B.; Rheme, C.
1988-02-01
The palladium Kα Ln X-ray satellite spectrum emitted after excitation by 5.4 MeV/u 16O ions has been measured. The distribution of the satellite yields is found to be significantly narrower than the binomial one. The deviations can be accounted for by assuming that the L-shell ionization is due to two uncorrelated processes: direct ionization by impact and electron capture to the K-shell of the projectile. 12 refs., 1 fig., 1 tab. (author)
Directory of Open Access Journals (Sweden)
Paweł Mielcarz
2007-06-01
The article presents a case study of the valuation of real options included in an investment project. The main goal of the article is to present the calculational and methodological issues of applying real option valuation, using the binomial model and the Market Asset Disclaimer approach. The project presented in the article concerns the introduction of a radio station to a new market. It includes two valuable real options: to abandon the project and to expand.
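The two options mentioned (abandon and expand) can be valued on a binomial lattice by backward induction. The sketch below is a generic Cox-Ross-Rubinstein-style tree, not the article's actual case study; every number is illustrative:

```python
import numpy as np

# Illustrative parameters: project value follows a binomial lattice.
V0, u, d = 100.0, 1.25, 0.80    # initial value, up/down factors
r, T = 0.05, 3                  # risk-free rate per step, number of steps
salvage = 85.0                  # value recovered if the project is abandoned
expand_cost, expand_factor = 30.0, 1.4

p = (np.exp(r) - d) / (u - d)   # risk-neutral up-move probability
disc = np.exp(-r)

def lattice(step):
    """Underlying project values at a given step of the binomial tree."""
    return V0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)

# Option value at expiry, then backward induction; at every node the
# holder picks the best of continuing, abandoning, or expanding.
values = np.maximum.reduce([lattice(T), np.full(T + 1, salvage),
                            expand_factor * lattice(T) - expand_cost])
for step in range(T - 1, -1, -1):
    cont = disc * (p * values[:-1] + (1 - p) * values[1:])
    values = np.maximum.reduce([cont, np.full(step + 1, salvage),
                                expand_factor * lattice(step) - expand_cost])

print(values[0])    # project value including both real options (> V0)
```

The difference between the resulting value and V0 measures the value added by the two real options; dropping either option from the `maximum.reduce` calls prices the project with the remaining option alone.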
Sequential tool use in great apes.
Directory of Open Access Journals (Sweden)
Gema Martin-Ordas
Sequential tool use is defined as using a tool to obtain another non-food object which subsequently will itself serve as a tool to act upon a further (sub)goal. Previous studies have shown that birds and great apes succeed in such tasks. However, the inclusion of a training phase for each of the sequential steps and the low cost associated with retrieving the longest tools limit the scope of the conclusions. The goal of the experiments presented here was, first, to replicate a previous study on sequential tool use conducted on New Caledonian crows and, second, to extend this work by increasing the cost of retrieving a tool in order to test the tool selectivity of apes. In Experiment 1, we presented chimpanzees, orangutans and bonobos with an out-of-reach reward, two tools that were available but too short to reach the food, and four out-of-reach tools differing in functionality. Similar to crows, apes spontaneously used up to 3 tools in sequence to get the reward and also showed a strong preference for the longest out-of-reach tool independently of the distance of the food. In Experiment 2, we increased the cost of reaching for the longest out-of-reach tool. Now apes used up to 5 tools in sequence to get the reward and became more selective in their choice of the longest tool as the cost of its retrieval increased. The findings of the studies presented here contribute to the growing body of comparative research on tool use.
Reaction probability for sequential separatrix crossings
International Nuclear Information System (INIS)
Cary, J.R.; Skodje, R.T.
1988-01-01
The change of the crossing parameter (essentially the phase) between sequential slow separatrix crossings is calculated for Hamiltonian systems with one degree of freedom. Combined with the previous separatrix crossing analysis, these results reduce the dynamics of adiabatic systems with separatrices to a map. This map determines whether a trajectory leaving a given separatrix lobe is ultimately captured by the other lobe. Averaging these results over initial phase yields the reaction probability, which does not asymptote to the fully phase-mixed result even for arbitrarily long times between separatrix crossings
A sequential/parallel track selector
Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A
1980-01-01
A medium speed (approximately 1 µs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).
THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED CHARGES
Directory of Open Access Journals (Sweden)
Stanislav LICHOROBIEC
2016-06-01
This article documents the development of the noninvasive use of explosives during the destruction of ice mass in river flows. The system of special sequentially-timed charges utilizes the increase in efficiency of cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the entire system of timed charges. Timing, spatial combinations during placement, and the linking of these charges results in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.
Pass-transistor asynchronous sequential circuits
Whitaker, Sterling R.; Maki, Gary K.
1989-01-01
Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realization for each next-state variable and output variable is identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.
From sequential to parallel programming with patterns
CERN. Geneva
2018-01-01
To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming, such as loops, branches or recursion, are the pillars of almost every code base and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the programming language used, and in a generic way.
Decoding restricted participation in sequential electricity markets
Energy Technology Data Exchange (ETDEWEB)
Knaut, Andreas; Paschmann, Martin
2017-06-15
Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction, which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling as the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may reduce quarter-hourly price volatility by a factor of close to four.
Boundary conditions in random sequential adsorption
Cieśla, Michał; Ziff, Robert M.
2018-04-01
The influence of different boundary conditions on the density of random packings of disks is studied. Packings are generated using the random sequential adsorption algorithm with three different types of boundary conditions: periodic, open, and wall. It is found that the finite size effects are smallest for periodic boundary conditions, as expected. On the other hand, in the case of open and wall boundaries it is possible to introduce an effective packing size and a constant correction term to significantly improve the packing densities.
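A bare-bones version of the algorithm with periodic boundaries fits in a dozen lines. This is a generic illustration (disk radius, box size and attempt count are arbitrary), not the parameter regime of the study:

```python
import numpy as np

def rsa_disks(radius, box=1.0, attempts=5000, seed=1):
    """Random sequential adsorption of equal disks in a square box
    with periodic boundary conditions (minimum-image distances)."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    for _ in range(attempts):
        trial = rng.random(2) * box
        d = np.abs(centers - trial)
        d = np.minimum(d, box - d)              # periodic minimum image
        if not np.any((d ** 2).sum(axis=1) < (2 * radius) ** 2):
            centers = np.vstack([centers, trial])
    return centers

centers = rsa_disks(radius=0.05)
coverage = len(centers) * np.pi * 0.05 ** 2
print(len(centers), coverage)   # stays below the ~0.547 jamming density
```

Because the minimum-image convention wraps distances across the box edges, disks near one wall correctly exclude positions near the opposite wall; replacing that line with a plain Euclidean distance gives the open-boundary variant, which exhibits the larger finite-size effects reported above.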
On Locally Most Powerful Sequential Rank Tests
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985556 Keywords : nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf
The use of sequential indicator simulation to characterize geostatistical uncertainty
International Nuclear Information System (INIS)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds
Group-sequential analysis may allow for early trial termination
DEFF Research Database (Denmark)
Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich
2017-01-01
-PET/CT measurements, illuminating the possibility of early trial termination which implicates significant potential time and resource savings. METHODS: Primary lesion maximum standardised uptake value (SUVmax) was determined twice from preoperative FDG-PET/CTs in 45 ovarian cancer patients. Differences in SUVmax were...... strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of patients, sufficient agreement could be concluded after the first interim analysis...... strategy must, though, be defined at the planning stage, and sample sizes must be reasonably large at interim analysis to ensure robustness against single outliers. Group-sequential testing may have a place in accuracy and agreement studies....
Determination of isoxsuprine hydrochloride by sequential injection visible spectrophotometry.
Beyene, Negussie W; Van Staden, Jacobus F; Stefan, Raluca-Ioana; Aboul-Enein, Hassan Y
2005-01-01
An automated sequential injection (SI) spectrophotometric method for the determination of isoxsuprine hydrochloride is described. The method is based on the condensation of aminoantipyrine with phenols (isoxsuprine hydrochloride) in the presence of an alkaline oxidizing agent (potassium hexacyanoferrate) to yield a pink colored product, the absorbance of which is monitored at 507 nm. Chemical as well as physical SI parameters that affect the signal response have been optimized in order to obtain better sensitivity, a higher sampling rate and better reagent economy. Using the optimized parameters, a linear relationship between the relative peak height and concentration was obtained in the range 1-60 mg l(-1). The detection limit (as the 3σ value) was 0.3 mg l(-1), and the precision was 1.4% and 1.6% at 5 and 10 mg l(-1), respectively. Compared to previous reports, the wide linear range, low detection limit, and highly economical reagent consumption are the advantages of this automated method.
Sequential evidence accumulation in decision making
Directory of Open Access Journals (Sweden)
Daniel Hausmann
2008-03-01
Judgments and decisions under uncertainty are frequently linked to a prior sequential search for relevant information. In such cases, the subject has to decide when to stop the search for information. Evidence accumulation models from social and cognitive psychology assume an active and sequential information search until enough evidence has been accumulated to pass a decision threshold. In line with such theories, we conceptualize the evidence threshold as the "desired level of confidence" (DLC) of a person. This model is tested against a fixed stopping rule (one-reason decision making) and against the class of multi-attribute information-integrating models. A series of experiments using an information board for horse race betting demonstrates an advantage of the proposed model by measuring the individual DLC of each subject and confirming its correctness in two separate stages. In addition to a better understanding of the stopping rule (within the narrow framework of simple heuristics), the results indicate that individual aspiration levels might be a relevant factor when modelling decision making by task analysis of statistical environments.
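The DLC idea (accumulate cue evidence until confidence crosses a personal threshold) can be written down as a short stopping rule. A sketch assuming log-odds accumulation over binary cues; the cue validities and threshold value below are invented for illustration, not taken from the experiments:

```python
import numpy as np

def sequential_decision(cue_values, cue_validities, dlc=0.8):
    """Inspect cues one by one and stop as soon as the accumulated
    probability that option A beats option B reaches the DLC."""
    log_odds, confidence = 0.0, 0.5
    for value, validity in zip(cue_values, cue_validities):
        if value > 0:        # cue favours A
            log_odds += np.log(validity / (1 - validity))
        elif value < 0:      # cue favours B
            log_odds -= np.log(validity / (1 - validity))
        confidence = 1 / (1 + np.exp(-abs(log_odds)))
        if confidence >= dlc:
            return ('A' if log_odds > 0 else 'B'), confidence
    return ('A' if log_odds >= 0 else 'B'), confidence

decision, confidence = sequential_decision(
    cue_values=[+1, +1, -1], cue_validities=[0.70, 0.65, 0.60], dlc=0.8)
print(decision, confidence)   # -> 'A' 0.8125, stopping after two cues
```

A higher DLC forces the searcher to open more cues before committing, while a low DLC reproduces fast one-reason decision making, which is the contrast the study tests.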
Unsupervised Sequential Outlier Detection With Deep Architectures.
Lu, Weining; Cheng, Yu; Xiao, Cao; Chang, Shiyu; Huang, Shuai; Liang, Bin; Huang, Thomas
2017-09-01
Unsupervised outlier detection is a vital task with high impact on a wide variety of application domains, such as image analysis and video surveillance. It has gained long-standing attention and has been extensively studied in multiple research areas. Detecting and taking action on outliers as quickly as possible is imperative in order to protect networks and related stakeholders and to maintain the reliability of critical systems. However, outlier detection is difficult due to its one-class nature and challenges in feature construction. Sequential anomaly detection is even harder, with additional challenges from temporal correlation in data, as well as the presence of noise and high dimensionality. In this paper, we introduce a novel deep structured framework to solve the challenging sequential outlier detection problem. We use autoencoder models to capture the intrinsic difference between outliers and normal instances and integrate the models into recurrent neural networks that allow the learning to make use of previous context and make the learners more robust to warping along the time axis. Furthermore, we propose a layerwise training procedure, which significantly simplifies training and hence helps achieve efficient and scalable training. In addition, we investigate a fine-tuning step to update the full parameter set by incorporating the temporal correlation in the sequence. We further apply our proposed models to conduct systematic experiments on five real-world benchmark data sets. Experimental results demonstrate the effectiveness of our model compared with other state-of-the-art approaches.
Noncommutative Biology: Sequential Regulation of Complex Networks.
Directory of Open Access Journals (Sweden)
William Letsou
2016-08-01
Single-cell variability in gene expression is important for generating distinct cell types, but it is unclear how cells use the same set of regulatory molecules to specifically control similarly regulated genes. While combinatorial binding of transcription factors at promoters has been proposed as a solution for cell-type specific gene expression, we found that such models resulted in substantial information bottlenecks. We sought to understand the consequences of adopting sequential logic wherein the time-ordering of factors informs the final outcome. We showed that with noncommutative control, it is possible to independently control targets that would otherwise be activated simultaneously using combinatorial logic. Consequently, sequential logic overcomes the information bottleneck inherent in complex networks. We derived scaling laws for two noncommutative models of regulation, motivated by phosphorylation/neural networks and chromosome folding, respectively, and showed that they scale super-exponentially in the number of regulators. We also showed that specificity in control is robust to the loss of a regulator. Lastly, we connected these theoretical results to real biological networks that demonstrate specificity in the context of promiscuity. These results show that achieving a desired outcome often necessitates roundabout steps.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
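For the special case of a single binary exposure, the two regressions agree and the risk ratio has a closed form, which allows a compact illustration without any modelling library. The sandwich-variance expression below is the standard delta-method form that the robust Poisson estimator reduces to in this case; all data are simulated with hypothetical risks:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000
exposed = rng.random(n) < 0.5
# Hypothetical risks: 0.10 unexposed, 0.15 exposed (true RR = 1.5).
risk = np.where(exposed, 0.15, 0.10)
y = rng.random(n) < risk

# With one binary covariate, the log-binomial MLE and the Poisson
# working-model MLE of the log risk ratio coincide:
rr_hat = y[exposed].mean() / y[~exposed].mean()

# Robust (sandwich) SE of the log RR, written in delta-method form.
se_log_rr = np.sqrt((1 - y[exposed].mean()) / y[exposed].sum()
                    + (1 - y[~exposed].mean()) / y[~exposed].sum())

ci = np.exp(np.log(rr_hat) + np.array([-1.96, 1.96]) * se_log_rr)
print(rr_hat, ci)
```

The differences the simulation study describes only emerge with continuous covariates and misspecified (e.g. higher-order) risk functions, where the fitted models no longer collapse to a shared closed form.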
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M
2018-01-01
A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
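The data-generating process used in the simulations (baseline and follow-up counts from a mixed Poisson with a shared random subject effect) can be sketched directly. A gamma-distributed frailty shared by both periods induces negative binomial margins and the baseline/follow-up correlation that motivates adjusting for the baseline count; all parameter values here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
# A shared gamma frailty (mean 1) makes both counts overdispersed
# (NB-like margins) and positively correlated within subject.
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)
treat = np.repeat([0, 1], n // 2)
rate_ratio = 0.7                       # hypothetical intervention effect

baseline = rng.poisson(3.0 * frailty)
followup = rng.poisson(3.0 * frailty * np.where(treat, rate_ratio, 1.0))

# Marginal variance exceeds the mean -> overdispersion.
print(baseline.mean(), baseline.var())

# Crude marginal check of the rate ratio the NB regression targets:
rr_hat = followup[treat == 1].mean() / followup[treat == 0].mean()
print(rr_hat)
```

In the paper's setting, an NB regression of `followup` on treatment with the log-transformed baseline count as a regressor or offset exploits the shared-frailty correlation to sharpen this estimate; the snippet above only generates the data and recovers the marginal rate ratio.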
Generazio, Edward R.
2014-01-01
Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.
Directory of Open Access Journals (Sweden)
Lei Wang
2015-01-01
We propose the weighted expected sample size (WESS) to evaluate overall performance on the indifference zones for a problem of testing three composite hypotheses. Based on minimizing the WESS to control the expected sample sizes, a new sequential test is developed by utilizing two double sequential weighted probability ratio tests (2-SWPRTs) simultaneously. It is proven that the proposed test has a finite stopping time and is asymptotically optimal, in the sense of asymptotically minimizing not only the expected sample size but also any positive moment of the stopping time on the indifference zones, under some mild conditions. Simulation studies illustrate that the proposed test has the smallest WESS and relative mean index (RMI) compared with the Sobel-Wald and Whitehead-Brunier tests.
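The building block of such procedures is Wald's sequential probability ratio test, in which a cumulative log-likelihood ratio is compared against two boundaries after every observation. A minimal Bernoulli SPRT sketch (the 2-SWPRT above combines weighted variants of this idea; the hypotheses and error rates below are arbitrary):

```python
import numpy as np

def sprt_bernoulli(xs, p0=0.1, p1=0.3, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 over a stream of
    0/1 observations, using Wald's approximate boundaries."""
    a = np.log(beta / (1 - alpha))        # lower boundary: accept H0
    b = np.log((1 - beta) / alpha)        # upper boundary: accept H1
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += np.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= b:
            return 'accept H1', n
        if llr <= a:
            return 'accept H0', n
    return 'continue', len(xs)

print(sprt_bernoulli([0] * 60))   # -> ('accept H0', 12)
print(sprt_bernoulli([1] * 20))   # -> ('accept H1', 3)
```

The stopping time n is random, and its expectation is exactly the quantity that criteria such as the WESS average over the indifference zones.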
Ectopic pregnancy after sequential embryo transfer: review of 22 cases
Nadkarni Purnima K, Nadkarni Kishore, Singh Pooja P, Singh Prabhakar, Nadkarni Aditi A, Agarwal Neha R
2015-01-01
Objective: To assess the prevalence of ectopic pregnancy among women who conceived with assisted reproductive technology and to determine whether the risk is increased after sequential embryo transfer. Methods: The ectopic pregnancy rate for ART pregnancies was calculated among women who conceived and had an ectopic pregnancy after ICSI followed by sequential embryo transfer at an ART centre. Variation in ectopic risk by patient and ART treatment factors was assessed, including sequential transfer, risk...
Villalobos-Rodelo, Juan J; Medina-Solís, Carlo E; Verdugo-Barraza, Lourdes; Islas-Granillo, Horacio; García-Jau, Rosa A; Escoffié-Ramírez, Mauricio; Maupomé, Gerardo
2013-01-01
Dental caries is one of the most common chronic childhood diseases worldwide, and in Mexico it is a public health problem. To identify variables associated with caries occurrence (non-reversible and reversible lesions) in a sample of Mexican schoolchildren, we performed a cross-sectional study in 640 schoolchildren of 11 and 12 years of age. The dependent variable was the D1+2MFT index, comprising reversible and irreversible carious lesions (dental caries) according to the Pitts D1/D2 classification. Clinical examinations were performed by trained and standardized examiners. Using structured questionnaires, we collected socio-demographic, socio-economic and health-related oral behavior data. Negative binomial regression was used for the analysis. The D1+2MFT index was 5.68±3.47. The schoolchildren's characteristics associated with an increase in the expected average rate of dental caries were: being female (27.1%), being 12 years of age (23.2%), consuming larger amounts of sugar (13.9%), and having mediocre (31.3%) or poor/very poor oral hygiene (62.3%). Conversely, when the family owned a car, the expected mean D1+2MFT decreased by 13.5%. When dental caries occurrence (about 6 decayed teeth) is estimated taking into consideration not only cavities (lesions in need of restorative dental treatment) but also incipient carious lesions, the character of this disease as a common clinical problem and as a public health problem is further emphasized. The results revealed the need to establish preventive and curative strategies in the sample.
Tailored sequential drug release from bilayered calcium sulfate composites
International Nuclear Information System (INIS)
Orellana, Bryan R.; Puleo, David A.
2014-01-01
The current standard for treating infected bony defects, such as those caused by periodontal disease, requires multiple time-consuming steps and often multiple procedures to fight the infection and recover lost tissue. Releasing an antibiotic followed by an osteogenic agent from a synthetic bone graft substitute could allow for a streamlined treatment, reducing the need for multiple surgeries and thereby shortening recovery time. Tailorable bilayered calcium sulfate (CS) bone graft substitutes were developed with the ability to sequentially release multiple therapeutic agents. Bilayered composite samples having a shell and core geometry were fabricated with varying amounts (1 or 10 wt.%) of metronidazole-loaded poly(lactic-co-glycolic acid) (PLGA) particles embedded in the shell and simvastatin directly loaded into either the shell, core, or both. Microcomputed tomography showed the overall layered geometry as well as the uniform distribution of PLGA within the shells. Dissolution studies demonstrated that the amount of PLGA particles (i.e., 1 vs. 10 wt.%) had a small but significant effect on the erosion rate (3% vs. 3.4%/d). Mechanical testing determined that introducing a layered geometry had a significant effect on the compressive strength, with an average reduction of 35%, but properties were comparable to those of mandibular trabecular bone. Sustained release of simvastatin directly loaded into CS demonstrated that changing the shell to core volume ratio dictates the duration of drug release from each layer. When loaded together in the shell or in separate layers, sequential release of metronidazole and simvastatin was achieved. By introducing a tunable, layered geometry capable of releasing multiple drugs, CS-based bone graft substitutes could be tailored in order to help streamline the multiple steps needed to regenerate tissue in infected defects.
Adaptive Learning in Extensive Form Games and Sequential Equilibrium
DEFF Research Database (Denmark)
Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1999-01-01
This paper studies adaptive learning in extensive form games and provides conditions for convergence points of adaptive learning to be sequential equilibria. Precisely, we present a set of conditions on learning sequences such that an assessment is a sequential equilibrium if and only if there is...
Turi, Christina E; Murch, Susan J
2013-07-09
Ethnobotanical research and the study of plants used for rituals, ceremonies and to connect with the spirit world have led to the discovery of many novel psychoactive compounds such as nicotine, caffeine, and cocaine. In North America, spiritual and ceremonial uses of plants are well documented and can be accessed online via the University of Michigan's Native American Ethnobotany Database. The objective of the study was to compare Residual, Bayesian, Binomial and Imprecise Dirichlet Model (IDM) analyses of ritual, ceremonial and spiritual plants in Moerman's ethnobotanical database and to identify genera that may be good candidates for the discovery of novel psychoactive compounds. The database was queried with the following format "Family Name AND Ceremonial OR Spiritual" for 263 North American botanical families. The spiritual and ceremonial flora consisted of 86 families with 517 species belonging to 292 genera. Spiritual taxa were then grouped further into ceremonial medicines and items categories. Residual, Bayesian, Binomial and IDM analyses were performed to identify over- and under-utilized families. The four statistical approaches were in good agreement when identifying under-utilized families, but large families (>393 species) were underemphasized by the Binomial, Bayesian and IDM approaches for over-utilization. Residual, Binomial, and IDM analyses identified similar families as over-utilized in the medium (92-392 species) and small (Binomial analysis when separated into small, medium and large families. The Bayesian, Binomial and IDM approaches identified different genera as potentially important. Species belonging to the genera Artemisia and Ligusticum were most consistently identified and may be valuable in future ethnopharmacological studies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
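The binomial approach to detecting over-utilization can be sketched as an exact tail-probability test. Only the 263-family query and the 517 ceremonial species come from the study; the flora size and per-family counts below are hypothetical, so this shows the shape of the calculation rather than the paper's actual numbers.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 517 ceremonial species drawn from a flora of, say,
# 18,000 species; a family of 400 species is "expected" to contribute
# 517 * 400/18000 ~ 11.5 of them. Observing 25 suggests over-utilization.
total_species, ceremonial_total = 18000, 517
family_size, family_ceremonial = 400, 25
p = family_size / total_species          # chance a ceremonial pick lands in this family
p_over = binom_sf(family_ceremonial, ceremonial_total, p)
```

A small `p_over` flags the family as over-utilized relative to its share of the flora; the Bayesian and IDM approaches in the paper refine this same comparison.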
Dancing Twins: Stellar Hierarchies That Formed Sequentially?
Tokovinin, Andrei
2018-04-01
This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).
Sequential scintigraphic staging of small cell carcinoma
International Nuclear Information System (INIS)
Bitran, J.D.; Bekerman, C.; Pinsky, S.
1981-01-01
Thirty patients with small cell carcinoma (SCC) of the lung were sequentially staged, following a history and physical exam, with liver, brain, bone, and gallium-67 citrate scans. Scintigraphic evaluation disclosed 7 of 30 patients (23%) with advanced disease, stage IIIM1. When gallium-67 scans were used as the sole criterion for staging, they proved to be accurate and identified six of the seven patients with occult metastatic disease. Gallium-67 scans proved to be accurate in detecting thoracic and extrathoracic metastases in the 30 patients with SCC, especially within the liver and lymph node-bearing areas. The diagnostic accuracy of gallium-67 fell in regions such as bone or brain. Despite the limitations of gallium-67 scanning, the authors conclude that these scans are useful in staging patients with SCC and should be the initial scans used in staging such patients
Gleason-Busch theorem for sequential measurements
Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah
2017-12-01
Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
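The Kraus-operator state-update rule that these axioms recover can be illustrated with a minimal two-level example in plain Python (a sketch; a projector is the simplest Kraus operator, and density matrices are represented as 2x2 nested lists).

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def measure(rho, kraus):
    """Born-rule probability p = Tr(K rho K^dag) and the updated
    (renormalized) state K rho K^dag / p for one Kraus operator K."""
    KrK = mat_mul(mat_mul(kraus, rho), dagger(kraus))
    p = trace(KrK).real
    rho_new = [[KrK[i][j] / p for j in range(2)] for i in range(2)]
    return p, rho_new

# Projective measurement of |0><0| on the state |+> = (|0> + |1>)/sqrt(2).
plus = [[0.5, 0.5], [0.5, 0.5]]
P0 = [[1.0, 0.0], [0.0, 0.0]]
p0, rho_after = measure(plus, P0)       # first measurement: p = 1/2
p0_again, _ = measure(rho_after, P0)    # repeated measurement on updated state
```

Repeating the same projective measurement on the updated state succeeds with certainty, which is exactly the sequential-measurement structure the state update rule encodes.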
Sequential Therapy in Metastatic Renal Cell Carcinoma
Directory of Open Access Journals (Sweden)
Bradford R Hirsch
2016-04-01
Full Text Available The treatment of metastatic renal cell carcinoma (mRCC has changed dramatically in the past decade. As the number of available agents, and related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.
Prosody and alignment: a sequential perspective
Szczepek Reed, Beatrice
2010-12-01
In their analysis of a corpus of classroom interactions in an inner city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching, and specific prosodic patterns are interpreted as signs of, and contributions to successful interactional outcomes and positive emotions. Lack of prosodic matching, and other specific prosodic patterns are interpreted as features of unsuccessful interactions, and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of `positive', or aligning actions.
Sequential Stereotype Priming: A Meta-Analysis.
Kidder, Ciara K; White, Katherine R; Hinojos, Michelle R; Sandoval, Mayra; Crites, Stephen L
2017-08-01
Psychological interest in stereotype measurement has spanned nearly a century, with researchers adopting implicit measures in the 1980s to complement explicit measures. One of the most frequently used implicit measures of stereotypes is the sequential priming paradigm. The current meta-analysis examines stereotype priming, focusing specifically on this paradigm. To contribute to ongoing discussions regarding methodological rigor in social psychology, one primary goal was to identify methodological moderators of the stereotype priming effect: whether priming is due to a relation between the prime and target stimuli, the prime and target response, participant task, stereotype dimension, stimulus onset asynchrony (SOA), or stimuli type. Data from 39 studies yielded 87 individual effect sizes from 5,497 participants. Analyses revealed that stereotype priming is significantly moderated by the presence of prime-response relations, participant task, stereotype dimension, target stimulus type, SOA, and prime repetition. These results carry both practical and theoretical implications for future research on stereotype priming.
Directory of Open Access Journals (Sweden)
Gely J. P.
2006-11-01
(Lorenz et al., 1987) make this borehole a first-rate stratigraphic reference concerning the Lower and Middle Jurassic series in the southern part of the Paris Basin. The lithostratigraphic analysis has already been the subject of a publication (Lorenz et al., 1992), and a description of the sequences has been sketched out (Gely and Lorenz, 1991). At the same time, the implementation of lithostratigraphic data and the description of the well logs provide greater accuracy of interpretation in terms of deposit sequences, while the stratigraphic calibration of the well-log signatures gives a reference on the scale of the Paris Basin. The sequences defined in the Couy borehole are compared to the ones already published elsewhere on a global scale (Haq et al., 1988; Vail et al., 1987) and on a regional scale (Rioult et al., 1991; Gonnin et al., 1992, 1993; Bessereau and Guillocheau, 1994) (Figs. 2 and 3). Discontinuities corresponding to a sedimentation gap are usually represented by traces of bioturbation or by perforations in the top surface of a bed, or else by a surface of gullying. In other cases, the sequence boundaries do not seem to be so clearly expressed but correspond to highly bioturbated bands, a single limestone bed in the midst of marls, or perhaps an abrupt lithological change. For a limestone bed situated at the upper boundary of a sequence, we can see that the latter is often perforated or bioturbated. This enables such beds to be distinguished from other limestone levels situated at the transgressive maximum, which contain pelagic fossils or authigenic minerals. On well logs, these two types of beds often have a comparable signature. The sequence boundaries are clearly shown by the well logs, which show clear-cut curve breaks and are often capable of orienting or confirming the choice of identification criteria for the boundaries in the core samples.
However, it can be seen that there is no direct relationship between the visible size of the discontinuity and
Discriminative predation: Simultaneous and sequential encounter experiments
Directory of Open Access Journals (Sweden)
C. D. Beatty, D. W. Franks
2012-08-01
Full Text Available There are many situations in which the ability of animals to distinguish between two similar-looking objects can have significant selective consequences. For example, the objects that require discrimination may be edible versus defended prey, predators versus non-predators, or mates of varying quality. Working from the premise that there are situations in which discrimination may be more or less successful, we hypothesized that individuals find it more difficult to distinguish between stimuli when they encounter them sequentially rather than simultaneously. Our study has wide biological and psychological implications from the perspective of signal perception, signal evolution, and discrimination, and could apply to any system where individuals are making relative judgments or choices between two or more stimuli or signals. While this is a general principle that might seem intuitive, it has not been experimentally tested in this context, and is often not considered in the design of models or experiments, or in the interpretation of a wide range of studies. Our study differs from previous studies in psychology in that (a) the level of similarity of the stimuli is gradually varied to obtain selection gradients, and (b) we discuss the implications of our study for specific areas in ecology, such as the level of perfection of mimicry in predator-prey systems. Our experiments provide evidence that it is indeed more difficult to distinguish between stimuli, and to learn to distinguish between stimuli, when they are encountered sequentially rather than simultaneously, even if the intervening time interval is short [Current Zoology 58 (4): 649–657, 2012].
Exploring the potential of sequential simulation.
Powell, Polly; Sorefan, Zinah; Hamilton, Sara; Kneebone, Roger; Bello, Fernando
2016-04-01
Several recent papers have highlighted the need for better integrated care to improve health care for children and families. Our team spent a year exploring the potential of 'Sequential Simulation' (SqS) as a teaching tool to address this need with young people and multidisciplinary teams. SqS allows the simulation of a series of key events or 'crunch points' that come together to represent the patient journey, and highlights the impact of individuals on this journey. The pilot SqS was based on an adolescent with asthma - a common condition that requires excellent multidisciplinary care with the patient at the centre. The SqS was designed using transportable sets and audio-visual equipment to create realism. Actors were employed to play the roles of the young person and mother and health professionals played themselves. The SqS was run at different events with varied audiences, including young people, health professionals and teachers. It was used to explore the difficulties that can arise during a patient journey, the importance of communication throughout, and to highlight the significance of each individual in the patient experience. The SqS was met with enthusiasm and felt to be an innovative and effective way of promoting better teamwork and communication. It was well received at a school asthma education event for pupils and community teams, demonstrating its varied potential. The year was the first step in the introduction of this exciting new concept that has the potential to help promote better integrated care for paediatric patients and their families. Our team spent a year exploring the potential of 'Sequential Simulation' as a teaching tool [to provide better integrated care]. © 2015 John Wiley & Sons Ltd.
Directory of Open Access Journals (Sweden)
Patricio Peña-Rehbein
2012-03-01
Full Text Available Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine per host. The environmental and zoonotic aspects of the study are also discussed.
Shirazi, Mohammadali; Dhavala, Soma Sekhar; Lord, Dominique; Geedipally, Srinivas Reddy
2017-10-01
Safety analysts usually use post-modeling methods, such as Goodness-of-Fit statistics or the Likelihood Ratio Test, to decide between two or more competitive distributions or models. Such metrics require all competitive distributions to be fitted to the data before any comparisons can be made. Given the continuous growth in newly introduced statistical distributions, choosing the best one using such post-modeling methods is not a trivial task, in addition to all the theoretical or numerical issues the analyst may face during the analysis. Furthermore, and most importantly, these measures or tests do not provide any intuition into why a specific distribution (or model) is preferred over another (Goodness-of-Logic). This paper addresses these issues by proposing a methodology to design heuristics for model selection based on the characteristics of the data, in terms of descriptive summary statistics, before fitting the models. The proposed methodology employs two analytic tools: (1) Monte Carlo simulations and (2) machine learning classifiers, to design easy heuristics to predict the label of the 'most-likely-true' distribution for analyzing data. The proposed methodology was applied to investigate when the recently introduced Negative Binomial Lindley (NB-L) distribution is preferred over the Negative Binomial (NB) distribution. Heuristics were designed to select the 'most-likely-true' distribution between these two distributions, given a set of prescribed summary statistics of the data. The proposed heuristics were successfully compared against classical tests for several real or observed datasets. Not only are they easy to use and free of any post-modeling inputs, but with these heuristics the analyst can also attain useful information about why the NB-L is preferred over the NB, or vice versa, when modeling data. Copyright © 2017 Elsevier Ltd. All rights reserved.
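The kind of pre-modeling summary statistic such heuristics rely on can be sketched by simulating negative binomial counts and computing a cheap dispersion feature. This is a sketch only; the paper's actual features and classifiers are richer, and the parameters below are arbitrary.

```python
import math
import random
import statistics

def neg_binomial(r, p):
    """One draw from NB(r, p), the number of failures before the r-th
    success (integer r), built as a sum of r geometric draws."""
    # floor(ln U / ln(1-p)) is geometric on {0, 1, ...} with success prob p.
    return sum(int(math.log(random.random()) / math.log(1.0 - p))
               for _ in range(r))

random.seed(42)
sample = [neg_binomial(3, 0.4) for _ in range(5000)]
m = statistics.mean(sample)
v = statistics.variance(sample)
vmr = v / m   # variance-to-mean ratio: a simple overdispersion feature
# For NB(3, 0.4): mean = r(1-p)/p = 4.5, variance = r(1-p)/p^2 = 11.25,
# so the VMR should sit near 2.5, well above the Poisson value of 1.
```

A classifier trained on Monte Carlo runs would consume features like `m` and `vmr` to predict which candidate distribution is "most likely true" before any model is fitted.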
Noel, James D; Biswas, Pratim; Giammar, Daniel E
2007-07-01
Leaching of mercury from coal combustion byproducts is a concern because of the toxicity of mercury. Leachability of mercury can be assessed by using sequential extraction procedures. Sequential extraction procedures are commonly used to determine the speciation and mobility of trace metals in solid samples and are designed to differentiate among metals bound by different mechanisms and to different solid phases. This study evaluated the selectivity and effectiveness of a sequential extraction process used to determine mercury binding mechanisms to various materials. A six-step sequential extraction process was applied to laboratory-synthesized materials with known mercury concentrations and binding mechanisms. These materials were calcite, hematite, goethite, and titanium dioxide. Fly ash from a full-scale power plant was also investigated. The concentrations of mercury were measured using inductively coupled plasma (ICP) mass spectrometry, whereas the major elements were measured by ICP atomic emission spectrometry. The materials were characterized by X-ray powder diffraction and scanning electron microscopy with energy dispersive spectroscopy. The sequential extraction procedure provided information about the solid phases with which mercury was associated in the solid sample. The procedure effectively extracted mercury from the target phases. The procedure was generally selective in extracting mercury. However, some steps in the procedure extracted mercury from nontarget phases, and others resulted in mercury redistribution. Iron from hematite and goethite was only leached in the reducible and residual extraction steps. Some mercury associated with goethite was extracted in the ion exchangeable step, whereas mercury associated with hematite was extracted almost entirely in the residual step. Calcium in calcite and mercury associated with calcite were primarily removed in the acid-soluble extraction step. Titanium in titanium dioxide and mercury adsorbed onto
Two-step sequential pretreatment for the enhanced enzymatic hydrolysis of coffee spent waste.
Ravindran, Rajeev; Jaiswal, Swarna; Abu-Ghannam, Nissreen; Jaiswal, Amit K
2017-09-01
In the present study, eight different pretreatments of varying nature (physical, chemical and physico-chemical), followed by a sequential, combinatorial pretreatment strategy, were applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars, and generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF), which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in hemicellulose content. Galactose was found to be the predominant sugar in spent coffee waste. Results showed that sequential pretreatment yielded 350.12 mg of reducing sugar/g of substrate, which was 1.7-fold higher than native spent coffee waste (203.4 mg/g of substrate). Furthermore, extensive delignification was achieved using the sequential pretreatment strategy. XRD, FTIR, and DSC profiles of the pretreated substrates were studied to analyse the various changes incurred in sequentially pretreated spent coffee waste as opposed to native spent coffee waste. Copyright © 2017 Elsevier Ltd. All rights reserved.
Extended moment series and the parameters of the negative binomial distribution
International Nuclear Information System (INIS)
Bowman, K.O.
1984-01-01
Recent studies indicate that, for finite sample sizes, moment estimators may be superior to maximum likelihood estimators in some regions of parameter space. In this paper, a statistic based on the central moments of the sample is expanded in a Taylor series using 24 derivatives and many more terms than previous expansions. A summary algorithm is required to find meaningful approximants using the higher-order coefficients. An example is presented, and a comparison between theoretical assessment and simulation results is made
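For context, the first-order method-of-moments estimators that such series expansions refine are simple: with sample mean m and variance s^2, the negative binomial size k and probability p satisfy p = m/s^2 and k = m^2/(s^2 - m). A minimal sketch with hypothetical counts:

```python
import statistics

def nb_moment_estimates(sample):
    """Method-of-moments (k, p) for a negative binomial with mean k(1-p)/p
    and variance k(1-p)/p**2; requires the sample to be overdispersed."""
    m = statistics.mean(sample)
    s2 = statistics.variance(sample)
    if s2 <= m:
        raise ValueError("sample not overdispersed; NB moment fit undefined")
    return m * m / (s2 - m), m / s2

# Hypothetical overdispersed count data.
counts = [0, 1, 1, 2, 2, 3, 3, 4, 5, 8, 10, 15]
k_hat, p_hat = nb_moment_estimates(counts)
```

When s^2 is close to m the denominator s^2 - m blows up, which is one reason higher-order corrections to these estimators matter for finite samples.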
Ali, Asad; Zaidi, Farrah; Fatima, Syeda Hira; Adnan, Muhammad; Ullah, Saleem
2018-03-24
In this study, we propose a geostatistical computational framework to model the distribution of rat-bite infestation of epidemic proportions in the Peshawar valley, Pakistan. Two species, Rattus norvegicus and Rattus rattus, are suspected to spread the infestation. The framework combines the strengths of the maximum entropy algorithm and binomial kriging with logistic regression to spatially model the distribution of infestation and to determine the individual role of environmental predictors in modeling the distribution trends. Our results demonstrate the significance of a number of social and environmental factors in rat infestations, such as (I) high human population density; (II) greater dispersal ability of rodents due to the availability of better connectivity routes such as roads; and (III) temperature and precipitation influencing rodent fecundity and life cycle.
Is “Hit and Run” a Single Word? The Processing of Irreversible Binomials in Neglect Dyslexia
Arcara, Giorgio; Lacaita, Graziano; Mattaloni, Elisa; Passarini, Laura; Mondini, Sara; Benincà, Paola; Semenza, Carlo
2012-01-01
The present study is the first neuropsychological investigation into the problem of the mental representation and processing of irreversible binomials (IBs), i.e., word pairs linked by a conjunction (e.g., “hit and run,” “dead or alive”). In order to test their lexical status, the phenomenon of neglect dyslexia is explored. People with left-sided neglect dyslexia show a clear lexical effect: they can read IBs better (i.e., by dropping the leftmost words less frequently) when their components are presented in their correct order. This may be taken as an indication that they treat these constructions as lexical, not decomposable, elements. This finding therefore constitutes strong evidence that IBs tend to be stored in the mental lexicon as a whole and that this whole form is preferably addressed in the retrieval process. PMID:22347199
von Gunten, Konstantin; Alam, Md Samrat; Hubmann, Magdalena; Ok, Yong Sik; Konhauser, Kurt O; Alessi, Daniel S
2017-07-01
A modified Community Bureau of Reference (CBR) sequential extraction method was tested to assess the composition of untreated pyrogenic carbon (biochar) and oil sands petroleum coke. Wood biochar samples were found to contain lower concentrations of metals, but had higher fractions of easily mobilized alkaline earth and transition metals. Sewage sludge biochar was determined to be less recalcitrant and had higher total metal concentrations, with most of the metals found in the more resilient extraction fractions (oxidizable, residual). Petroleum coke was the most stable material, with a similar metal distribution pattern as the sewage sludge biochar. The applied sequential extraction method represents a suitable technique to recover metals from these materials, and is a valuable tool in understanding the metal retaining and leaching capability of various biochar types and carbonaceous petroleum coke samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Derivation of sequential, real-time, process-control programs
Marzullo, Keith; Schneider, Fred B.; Budhiraja, Navin
1991-01-01
The use of weakest-precondition predicate transformers in the derivation of sequential, process-control software is discussed. Only one extension to Dijkstra's calculus for deriving ordinary sequential programs was found to be necessary: function-valued auxiliary variables. These auxiliary variables are needed for reasoning about states of a physical process that exists during program transitions.
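As a minimal illustration of the weakest-precondition calculus referred to above (the toy assignment statement is ours, not from the paper), the assignment rule substitutes the right-hand side into the postcondition, and sequencing composes the transformers:

```latex
wp(x := x + 1,\; x > 0) \;=\; (x + 1 > 0) \;\equiv\; (x > -1),
\qquad
wp(S_1; S_2,\; Q) \;=\; wp\bigl(S_1,\; wp(S_2,\; Q)\bigr)
```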
Factor screening for simulation with multiple responses : Sequential bifurcation
Shi, W.; Kleijnen, J.P.C.; Liu, Z.
2014-01-01
The goal of factor screening is to find the really important inputs (factors) among the many inputs that may be changed in a realistic simulation experiment. A specific method is sequential bifurcation (SB), which is a sequential method that changes groups of inputs simultaneously. SB is most
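The bisection logic of SB can be sketched as follows. The linear toy simulator, the threshold, and the effect values are illustrative assumptions, not from the paper; a real study would replace `simulate` with calls to the actual simulation model.

```python
# Illustrative sketch of sequential bifurcation (SB) for factor screening.
# SB assumes factors are coded so that switching a factor "high" can only
# increase the response; groups whose aggregated effect is small are dropped.

def simulate(levels, effects):
    """Toy simulator: response is the sum of main effects of 'high' factors."""
    return sum(e for lv, e in zip(levels, effects) if lv == 1)

def sb_screen(lo, hi, effects, threshold=1.0):
    """Recursively bisect the factor group [lo, hi) and keep subgroups
    whose aggregated effect exceeds the threshold."""
    k = len(effects)
    base = simulate([1 if i < lo else 0 for i in range(k)], effects)
    top = simulate([1 if i < hi else 0 for i in range(k)], effects)
    if top - base < threshold:           # whole group unimportant: drop it
        return []
    if hi - lo == 1:                     # single important factor isolated
        return [lo]
    mid = (lo + hi) // 2
    return (sb_screen(lo, mid, effects, threshold)
            + sb_screen(mid, hi, effects, threshold))

# Factors 2 and 5 are the important ones in this toy example.
effects = [0.0, 0.1, 5.0, 0.2, 0.0, 3.0, 0.1, 0.0]
important = sb_screen(0, len(effects), effects)
```

Because unimportant groups are discarded wholesale, the number of simulator calls grows with the number of important factors rather than the total number of factors.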
Factor Screening for Simulation with Multiple Responses : Sequential Bifurcation
Shi, W.; Kleijnen, Jack P.C.; Liu, Zhixue
2012-01-01
Abstract: Factor screening searches for the really important inputs (factors) among the many inputs that are changed in a realistic simulation experiment. Sequential bifurcation (or SB) is a sequential method that changes groups of inputs simultaneously. SB is the most efficient and effective method
Factor Screening For Simulation With Multiple Responses : Sequential Bifurcation
Shi, W.; Kleijnen, Jack P.C.; Liu, Zhixue
2013-01-01
Abstract: Factor screening searches for the really important inputs (factors) among the many inputs that are changed in a realistic simulation experiment. Sequential bifurcation (SB) is a sequential method that changes groups of inputs simultaneously. SB is the most efficient and effective method if
A Survey of Multi-Objective Sequential Decision-Making
Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.
2013-01-01
Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential
Accounting for Heterogeneous Returns in Sequential Schooling Decisions
Zamarro, G.
2006-01-01
This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

The sequential price of anarchy for atomic congestion games
de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu
2014-01-01
In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential
Quantum Probability Zero-One Law for Sequential Terminal Events
Rehder, Wulf
1980-07-01
On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.
The Clinical effectiveness of sequential treatment of skeletal class III ...
African Journals Online (AJOL)
Aim: To assess the dentofacial changes induced by the sequential treatment in the skeletal class III malocclusion with maxillary retrognathism. Study design: Controlled clinical trial assessing the effectiveness of sequential treatment of skeletal class III malocclusion. Materials and Methods: The treated group consisted of 30 ...
Lag Sequential Analysis: Taking Consultation Communication Research to the Movies.
Benes, Kathryn M.; Gutkin, Terry B.; Kramer, Jack J.
1995-01-01
Describes lag-sequential analysis and its unique contributions to research literature, addressing communication processes in school-based consultation. For purposes of demonstrating the application and potential utility of lag-sequential analysis, article analyzes the communication behaviors of two consultants. Considers directions for future…
Sequential injection spectrophotometric determination of V(V) in ...
African Journals Online (AJOL)
Sequential injection spectrophotometric determination of V(V) in environmental polluted waters. ES Silva, PCAG Pinto, JLFC Lima, MLMFS Saraiva. Abstract. A fast and robust sequential injection analysis (SIA) methodology for routine determination of V(V) in environmental polluted waters is presented. The determination ...
Directory of Open Access Journals (Sweden)
Manu Batra
2016-01-01
Full Text Available Context: Dental caries among children has been described as a pandemic disease with a multifactorial nature. Various sociodemographic factors and oral hygiene practices are commonly tested for their influence on dental caries. In recent years, a statistical model that allows for covariate adjustment has been developed, commonly referred to as the zero-inflated negative binomial (ZINB) model. Aim: To compare the fit of two models, the conventional linear regression (LR) model and the ZINB model, in assessing the risk factors associated with dental caries. Materials and Methods: A cross-sectional survey was conducted on 1138 12-year-old school children in Moradabad Town, Uttar Pradesh, during the months of February-August 2014. Selected participants were interviewed using a questionnaire. Dental caries was assessed by recording the decayed, missing, or filled teeth (DMFT) index. Statistical Analysis Used: To assess the risk factors associated with dental caries in children, two approaches were applied - the LR model and the ZINB model. Results: The prevalence of caries-free subjects was 24.1%, and mean DMFT was 3.4 ± 1.8. In the LR model, all the variables were statistically significant. In the ZINB model, by contrast, the negative binomial part showed place of residence, father's education level, tooth brushing frequency, and dental visits to be statistically significant, implying that the probability of being caries-free (DMFT = 0) increases for children who live in urban areas, whose father is a university graduate, who brush twice a day, and who have ever visited a dentist. Conclusion: The current study reports that the LR model is poorly fitted and may lead to spurious conclusions, whereas the ZINB model showed better goodness of fit (Akaike information criterion values - LR: 3.94; ZINB: 2.39) and can be preferred when high variance and an excess of zeros are present.
Batra, Manu; Shah, Aasim Farooq; Rajput, Prashant; Shah, Ishrat Aasim
2016-01-01
Dental caries among children has been described as a pandemic disease with a multifactorial nature. Various sociodemographic factors and oral hygiene practices are commonly tested for their influence on dental caries. In recent years, a statistical model that allows for covariate adjustment has been developed, commonly referred to as the zero-inflated negative binomial (ZINB) model. To compare the fit of two models, the conventional linear regression (LR) model and the ZINB model, in assessing the risk factors associated with dental caries. A cross-sectional survey was conducted on 1138 12-year-old school children in Moradabad Town, Uttar Pradesh, during the months of February-August 2014. Selected participants were interviewed using a questionnaire. Dental caries was assessed by recording the decayed, missing, or filled teeth (DMFT) index. To assess the risk factors associated with dental caries in children, two approaches were applied - the LR model and the ZINB model. The prevalence of caries-free subjects was 24.1%, and mean DMFT was 3.4 ± 1.8. In the LR model, all the variables were statistically significant. In the ZINB model, by contrast, the negative binomial part showed place of residence, father's education level, tooth brushing frequency, and dental visits to be statistically significant, implying that the probability of being caries-free (DMFT = 0) increases for children who live in urban areas, whose father is a university graduate, who brush twice a day, and who have ever visited a dentist. The current study reports that the LR model is poorly fitted and may lead to spurious conclusions, whereas the ZINB model showed better goodness of fit (Akaike information criterion values - LR: 3.94; ZINB: 2.39) and can be preferred when high variance and an excess of zeros are present.
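The mixture structure of the ZINB model compared in this abstract can be sketched directly: structural zeros (here, children who are caries-free for reasons not captured by the count process) are mixed with ordinary negative binomial counts. The parameter values below are illustrative, not estimates from the study.

```python
from math import comb

# Minimal sketch of the zero-inflated negative binomial (ZINB) pmf.
# pi is the structural-zero probability; (r, p) parameterize the NB counts.

def nb_pmf(y, r, p):
    """Negative binomial pmf: probability of y failures before r successes."""
    return comb(y + r - 1, y) * p**r * (1 - p)**y

def zinb_pmf(y, pi, r, p):
    """Mixture: a structural zero with probability pi, NB count otherwise."""
    base = pi if y == 0 else 0.0
    return base + (1 - pi) * nb_pmf(y, r, p)

# A ZINB with 24% structural zeros (cf. the 24.1% caries-free prevalence).
pi, r, p = 0.24, 3, 0.5
total = sum(zinb_pmf(y, pi, r, p) for y in range(200))
p_zero = zinb_pmf(0, pi, r, p)
```

Note how the zero class receives probability both from the structural-zero component and from the NB count process, which is exactly the excess-zero behavior an ordinary LR or plain count model cannot capture.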
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
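A simpler single-question relative of the models discussed here is the forced-response randomized response estimator, sketched below. The design probability, sample size, and seed are illustrative assumptions, not values from the paper.

```python
import random

# Sketch of a forced-response randomized response (RR) design: with
# probability p_truth the respondent answers the sensitive question
# truthfully; otherwise the randomizing device forces a "yes" answer.

def rr_estimate(answers, p_truth):
    """Moment estimator of the sensitive-trait prevalence pi.
    Observed 'yes' rate: lam = p_truth * pi + (1 - p_truth) * 1,
    hence pi = (lam - (1 - p_truth)) / p_truth."""
    lam = sum(answers) / len(answers)
    return (lam - (1 - p_truth)) / p_truth

random.seed(1)
pi_true, p_truth = 0.30, 0.75
answers = [
    (random.random() < pi_true) if random.random() < p_truth else True
    for _ in range(20000)
]
pi_hat = rr_estimate(answers, p_truth)
```

The individual answers are masked (a "yes" may be forced), yet the aggregate prevalence is recoverable; the beta-binomial and Dirichlet-multinomial models of the abstract put a prior on such masked response rates.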
Assessing Trauma, Substance Abuse, and Mental Health in a Sample of Homeless Men
Kim, Mimi M.; Ford, Julian D.; Howard, Daniel L.; Bradford, Daniel W.
2010-01-01
This study examined the impact of physical and sexual trauma on a sample of 239 homeless men. Study participants completed a self-administered survey that collected data on demographics, exposure to psychological trauma, physical health and mental health problems, and substance use or misuse. Binomial logistic regression analyses were used to…
Sequential multiple assignment randomization trials with enrichment design.
Liu, Ying; Wang, Yuanjia; Zeng, Donglin
2017-06-01
Sequential multiple assignment randomization trial (SMART) is a powerful design to study Dynamic Treatment Regimes (DTRs) and allows causal comparisons of DTRs. To handle practical challenges of SMART, we propose a SMART with Enrichment (SMARTER) design, which performs stage-wise enrichment for SMART. SMARTER can improve design efficiency, shorten the recruitment period, and partially reduce trial duration to make SMART more practical with limited time and resource. Specifically, at each subsequent stage of a SMART, we enrich the study sample with new patients who have received previous stages' treatments in a naturalistic fashion without randomization, and only randomize them among the current stage treatment options. One extreme case of the SMARTER is to synthesize separate independent single-stage randomized trials with patients who have received previous stage treatments. We show data from SMARTER allows for unbiased estimation of DTRs as SMART does under certain assumptions. Furthermore, we show analytically that the efficiency gain of the new design over SMART can be significant especially when the dropout rate is high. Lastly, extensive simulation studies are performed to demonstrate performance of SMARTER design, and sample size estimation in a scenario informed by real data from a SMART study is presented.
An exploratory sequential design to validate measures of moral emotions.
Márquez, Margarita G; Delgado, Ana R
2017-05-01
This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
Simulation based sequential Monte Carlo methods for discretely observed Markov processes
Neal, Peter
2014-01-01
Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
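The ease of simulation the authors exploit can be illustrated with the Gillespie algorithm on a simple immigration-death process; the rates, horizon, and seed below are illustrative assumptions, not from the paper.

```python
import random

# Gillespie's exact stochastic simulation of an immigration-death process:
# individuals arrive at constant rate `birth` and each dies at rate `death`.

def gillespie_birth_death(x0, birth, death, t_end, rng):
    """Simulate the Markov jump process exactly and return the state at t_end."""
    t, x = 0.0, x0
    while True:
        rates = [birth, death * x]        # event propensities
        total = sum(rates)
        if total == 0.0:
            return x
        t += rng.expovariate(total)       # exponential time to next event
        if t > t_end:
            return x
        if rng.random() * total < rates[0]:
            x += 1                        # immigration event
        else:
            x -= 1                        # death event

rng = random.Random(42)
samples = [gillespie_birth_death(0, 2.0, 0.1, 50.0, rng) for _ in range(200)]
mean_x = sum(samples) / len(samples)
```

An SMC scheme of the kind described would embed such forward simulations inside a particle filter, weighting simulated trajectories by their agreement with the discrete observations; the stationary mean here is birth/death = 20.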
Two-Step Sequential Pretreatment for the Enhanced Enzymatic Hydrolysis of Coffee Spent Waste
Ravindran, Rajeev; Jaiswal, Swarna; Abu-ghannam, Nissreen; Jaiswal, Amit
2017-01-01
In the present study, eight different pretreatments of varying nature (physical, chemical and physico-chemical) followed by a sequential, combinatorial pretreatment strategy were applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars and generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF) which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in...
Beam geometry selection using sequential beam addition.
Popple, Richard A; Brezovich, Ivan A; Fiveash, John B
2014-05-01
The selection of optimal beam geometry has been of interest since the inception of conformal radiotherapy. The authors report on sequential beam addition, a simple beam geometry selection method, for intensity modulated radiation therapy. The sequential beam addition algorithm (SBA) requires definition of an objective function (score) and a set of candidate beam geometries (pool). In the first iteration, the optimal score is determined for each beam in the pool and the beam with the best score is selected. In the next iteration, the optimal score is calculated for each beam remaining in the pool combined with the beam selected in the first iteration, and the best scoring beam is selected. The process is repeated until the desired number of beams is reached. The authors selected three treatment sites, breast, lung, and brain, and determined beam arrangements for up to 11 beams from a pool comprised of 25 equiangular transverse beams. For the brain, arrangements were additionally selected from a pool of 22 noncoplanar beams. Scores were determined for geometries comprised of equiangular transverse beams (EQA), as well as two tangential beams for the breast case. In all cases, SBA resulted in scores superior to EQA. The breast case had the strongest dependence on beam geometry, for which only the 7-beam EQA geometry had a score better than the two tangential beams, whereas all SBA geometries with more than two beams were superior. In the lung case, EQA and SBA scores monotonically improved with increasing number of beams; however, SBA required fewer beams to achieve scores equivalent to EQA. For the brain case, SBA with a coplanar pool was equivalent to EQA, while the noncoplanar pool resulted in slightly better scores; however, the dose-volume histograms demonstrated that the differences were not clinically significant. For situations in which beam geometry has a significant effect on the objective function, SBA can identify arrangements equivalent to equiangular
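The greedy loop of the SBA algorithm can be sketched as follows. The score function here is a toy surrogate that rewards angularly spread beams; a real implementation would score each candidate set by running the inverse dose optimization, as the abstract describes.

```python
# Sketch of sequential beam addition (SBA): greedily grow a beam set by
# always adding the candidate that best improves the plan score.

def plan_score(beams):
    """Toy objective: the minimum pairwise angular separation (degrees),
    so higher scores mean the selected beam angles are more spread out."""
    if len(beams) < 2:
        return 0.0
    return min(
        min(abs(a - b) % 360, 360 - abs(a - b) % 360)
        for i, a in enumerate(beams) for b in beams[i + 1:]
    )

def sequential_beam_addition(pool, n_beams):
    """Greedy SBA loop: at each iteration, score every remaining candidate
    combined with the beams already chosen, and keep the best scorer."""
    selected = []
    for _ in range(n_beams):
        best = max(pool, key=lambda c: plan_score(selected + [c]))
        selected.append(best)
        pool = [c for c in pool if c != best]
    return selected

pool = list(range(0, 360, 15))   # 24 equiangular transverse candidates
beams = sequential_beam_addition(pool, 4)
```

With this spread-rewarding toy score, the greedy loop recovers an equiangular four-beam arrangement; with a dose-based score, the same loop can instead favor non-equiangular geometries, which is the behavior the study reports.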