WorldWideScience

Sample records for sequential likelihood detection

  1. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  2. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

    The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This approximation enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Sequential Analysis: Hypothesis Testing and Changepoint Detection

    Science.gov (United States)

    2014-07-11

    maintains the flexibility of deciding sooner than the fixed sample size procedure at the price of some lower power [13, 514]. The sequential probability... markets, detection of signals with unknown arrival time in seismology, navigation, radar and sonar signal processing, speech segmentation, and the... skimming cruise missile can yield a significant increase in the probability of raid annihilation. Furthermore, usually detection systems are

  4. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

    Full Text Available Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
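
    A minimal sketch of the kind of detector described above, for the simplest case of a Gaussian mean shift with known variance: the unknown post-change mean is replaced by a non-anticipating online-gradient estimate (online mirror descent with a Euclidean mirror map), and the cumulative likelihood ratio is thresholded CUSUM-style. The function name, step size and threshold are illustrative choices, not the authors'.

    ```python
    import numpy as np

    def sequential_lr_detector(x, theta0=0.0, sigma=1.0, eta=0.1, threshold=10.0):
        """CUSUM-style sequential likelihood-ratio statistic for a mean shift in
        i.i.d. Gaussian data, with the unknown post-change mean replaced by a
        non-anticipating online-gradient estimate (online mirror descent with a
        Euclidean mirror map reduces to this update)."""
        theta_hat = theta0            # estimate built only from past samples
        stat = 0.0
        for t, xt in enumerate(x, start=1):
            # log-likelihood ratio of x_t under the current estimate vs. theta0
            llr = ((xt - theta0) ** 2 - (xt - theta_hat) ** 2) / (2.0 * sigma ** 2)
            stat = max(0.0, stat + llr)
            if stat > threshold:
                return t, stat         # alarm time and statistic value
            # online gradient step on the negative log-likelihood of x_t
            theta_hat += eta * (xt - theta_hat) / sigma ** 2
        return None, stat              # no alarm raised

    # Example: the mean shifts from 0 to 1 at t = 200
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
    print(sequential_lr_detector(data))
    ```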

  5. Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.

    Science.gov (United States)

    Kobert, K; Stamatakis, A; Flouri, T

    2017-03-01

    The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
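
    The core idea, computing repeated likelihood entries only once, is easiest to see in the simpler whole-column case: identical alignment sites yield identical conditional likelihoods, so they can be collapsed into unique patterns with multiplicities before any likelihood computation. The sketch below shows only this global special case, not the per-node subtree repeats the paper detects; the function name is hypothetical.

    ```python
    from collections import Counter

    def compress_site_patterns(alignment):
        """Collapse identical alignment columns (site patterns) so the likelihood
        of each unique pattern is computed once and weighted by its multiplicity.
        This is the simple whole-column special case of the per-node repeat
        detection described in the abstract."""
        n_sites = len(alignment[0])
        columns = ["".join(seq[i] for seq in alignment) for i in range(n_sites)]
        counts = Counter(columns)
        patterns = list(counts.keys())
        weights = [counts[p] for p in patterns]
        return patterns, weights

    aln = ["ACGTACGA",
           "ACGTACGA",
           "ACGAACGT"]
    patterns, weights = compress_site_patterns(aln)
    print(patterns, weights)   # unique site patterns and their repeat counts
    ```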

  6. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
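
    A minimal sketch of the sequential likelihood ratio test step described in the abstract, reduced to Wald's classical SPRT over an event-mode sequence: each photon event updates a cumulative log-likelihood ratio until one of two thresholds is crossed. The energy-channel discrimination and Bayesian density updating of the actual system are omitted, and the toy interarrival-time likelihoods are purely illustrative.

    ```python
    import math

    def sprt_radionuclide(events, loglike_target, loglike_background,
                          alpha=0.05, beta=0.05):
        """Wald sequential probability ratio test over an event-mode sequence.
        `events` yields (energy, dt) photon measurements; the two log-likelihood
        functions score each event under the target-present and background-only
        hypotheses. Thresholds follow Wald's approximations for a false-alarm
        rate alpha and a missed-detection rate beta."""
        upper = math.log((1 - beta) / alpha)      # decide "target present"
        lower = math.log(beta / (1 - alpha))      # decide "background only"
        llr = 0.0
        for k, (energy, dt) in enumerate(events, start=1):
            llr += loglike_target(energy, dt) - loglike_background(energy, dt)
            if llr >= upper:
                return "target", k
            if llr <= lower:
                return "background", k
        return "undecided", len(events)

    # Toy example: discriminate two Poisson arrival rates from interarrival times
    lam_t, lam_b = 5.0, 1.0                        # hypothetical count rates (1/s)
    lt = lambda e, dt: math.log(lam_t) - lam_t * dt
    lb = lambda e, dt: math.log(lam_b) - lam_b * dt
    events = [(60.0, 0.21), (122.0, 0.18), (60.0, 0.25), (122.0, 0.15)]
    print(sprt_radionuclide(events, lt, lb))       # -> ('target', 4)
    ```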

  7. Sequential Detection of Thermophilic Lipase and Protease by Zymography.

    Science.gov (United States)

    Kurz, Liliana; Hernández, Zully; Contreras, Lellys M; Wilkesman, Jeff

    2017-01-01

    Lipase and protease present in cell-free fractions of thermophilic Bacillus sp. cultures were analyzed by polyacrylamide gel (PAG) electrophoresis. After the run, the gel was electrotransferred to another PAG copolymerized with glycerol tributyrate, olive oil, and gelatin. This multi-substrate gel was incubated first for lipase detection, until bands appeared, and then stained with Coomassie for protease detection. Advantages of this sequential procedure are the detection of two different enzyme activities on a single PAG, besides savings in time and resources.

  8. Oestrus Detection in Dairy Cows Using Likelihood Ratio Tests

    DEFF Research Database (Denmark)

    Jónsson, Ragnar Ingi; Björgvinssin, Trausti; Blanke, Mogens

    2008-01-01

    This paper addresses detection of oestrus in dairy cows using methods from statistical change detection. The activity of the cows was measured by a necklace attached sensor. Statistical properties of the activity measure were investigated. Using data sets from 17 cows, diurnal activity variations...

  9. Sequential Detection of Fission Processes for Harbor Defense

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Walston, S E; Chambers, D H

    2015-02-12

    With the large increase in terrorist activities throughout the world, the timely and accurate detection of special nuclear material (SNM) has become an extremely high priority for many countries concerned with national security. The detection of radionuclide contraband based on their γ-ray emissions has been attacked vigorously with some interesting and feasible results; however, the fission process of SNM has not received as much attention due to its inherent complexity and required predictive nature. In this paper, on-line, sequential Bayesian detection and estimation (parameter) techniques to rapidly and reliably detect unknown fissioning sources with high statistical confidence are developed.

  10. FLEAD: online frequency likelihood estimation anomaly detection for mobile sensing

    NARCIS (Netherlands)

    Le Viet Duc, L Duc; Scholten, Johan; Havinga, Paul J.M.

    With the rise of smartphone platforms, adaptive sensing has become a key approach to overcoming intricate constraints such as the smartphone's capabilities and dynamic data. One way to do this is to estimate the event probability based on anomaly detection in order to invoke heavy processes, such as switching on

  11. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error ...
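
    For orientation, the sketch below shows the error-free baseline that such estimators build on: an approximate maximum-likelihood estimate of the tag count from a single framed slotted-ALOHA read round, treating slots as independent. The detection-error and multiple-session modelling that is the subject of the paper is not included, and all names and numbers are illustrative.

    ```python
    import math

    def ml_tag_count(L, c0, c1, ck, n_max=1000):
        """Approximate ML estimate of the number of tags n from one framed
        slotted-ALOHA read round with L slots, given the observed numbers of
        empty (c0), singleton (c1) and collision (ck) slots. Slots are treated
        as independent, the usual approximation in this setting; detection
        errors (the focus of the paper) are not modelled here."""
        best_n, best_ll = None, -math.inf
        for n in range(max(1, c1 + 2 * ck), n_max + 1):   # at least this many tags
            p0 = (1.0 - 1.0 / L) ** n
            p1 = n / L * (1.0 - 1.0 / L) ** (n - 1)
            pk = max(1.0 - p0 - p1, 1e-12)
            ll = c0 * math.log(p0) + c1 * math.log(p1) + ck * math.log(pk)
            if ll > best_ll:
                best_n, best_ll = n, ll
        return best_n

    print(ml_tag_count(L=64, c0=20, c1=28, ck=16))   # toy read-round outcome
    ```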

  12. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Vol. 10, No. 1 (2013), pp. 73-84, ISSN 1742-2876. Institutional support: RVO:67985556. Keywords: Image forensic * Copy–paste forgery * Local block matching. Subject RIV: IN - Informatics, Computer Science. Impact factor: 0.986, year: 2013. http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  13. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    Science.gov (United States)

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur, and are distinguished from the remaining zero counts, which are modeled zeros and simply indicate that the drug-adverse event pairs have not occurred or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
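
    A rough sketch of the zero-inflated Poisson likelihood-ratio idea: each cell count is either a structural zero (with probability pi) or Poisson with mean equal to the expected count scaled by a relative risk, and the signal statistic compares the maximized likelihood against the null of relative risk 1. A coarse grid search stands in for the EM fitting used in the paper, so this is illustrative only.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def zip_loglik(counts, expected, rel_risk, pi):
        """Zero-inflated Poisson log-likelihood: each count has mean
        rel_risk * expected with probability (1 - pi), and is a structural
        zero with probability pi."""
        mu = rel_risk * expected
        p_zero = pi + (1 - pi) * np.exp(-mu)
        p_pos = (1 - pi) * poisson.pmf(counts, mu)
        return np.sum(np.where(counts == 0, np.log(p_zero), np.log(p_pos)))

    def zip_lrt(counts, expected, grid_r=np.linspace(1, 20, 191),
                grid_pi=np.linspace(0, 0.9, 91)):
        """Likelihood-ratio statistic for elevated reporting (relative risk > 1)
        under a zero-inflated Poisson model; grid search replaces EM here."""
        ll_alt = max(zip_loglik(counts, expected, r, p)
                     for r in grid_r for p in grid_pi)
        ll_null = max(zip_loglik(counts, expected, 1.0, p) for p in grid_pi)
        return 2 * (ll_alt - ll_null)

    # Toy data: counts for one drug across adverse events, with many zeros
    counts = np.array([0, 0, 0, 1, 0, 7, 0, 2, 0, 0])
    expected = np.full(10, 0.8)
    print(zip_lrt(counts, expected))
    ```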

  14. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    Science.gov (United States)

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  15. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    Directory of Open Access Journals (Sweden)

    Closas Pau

    2012-10-01

    Full Text Available Background Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate for the decaying factor λ is calculated. The sequential detection algorithm updates the parameter as new data becomes available. Binary epidemic detection of weekly incidence rates is assessed by a Kolmogorov-Smirnov test on the absolute difference between the empirical distribution function and the cumulative distribution function of the estimated exponential distribution with significance level 0 ≤ α ≤ 1. Results The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617 estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season) and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: weeks 50−10 (2008-2009 season), weeks 38−50 (2009-2010 season), weeks 50−9 (2010-2011 season) and weeks 3 to 12 for the current 2011-2012 season. Conclusions Real medical data was used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could
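
    A minimal sketch of one detection step under the stated model, assuming weekly non-epidemic incidence rates are exponentially distributed: the rate parameter is estimated by maximum likelihood (the reciprocal of the sample mean) and a Kolmogorov-Smirnov test flags recent weeks that depart from the fitted distribution. Names, the window handling and the significance level are illustrative, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy import stats

    def epidemic_alert(train_rates, recent_rates, alpha=0.05):
        """Fit an exponential model to non-epidemic weekly incidence rates by
        maximum likelihood (rate = 1 / sample mean) and run a Kolmogorov-Smirnov
        test of the most recent weeks against that fitted distribution; a
        rejection flags epidemic activity."""
        lam = 1.0 / np.mean(train_rates)               # MLE of the decay factor
        stat, p_value = stats.kstest(recent_rates, "expon", args=(0, 1.0 / lam))
        return p_value < alpha, lam, p_value

    rng = np.random.default_rng(1)
    non_epidemic = rng.exponential(scale=1 / 3.86, size=40)   # training weeks
    quiet_weeks = rng.exponential(scale=1 / 3.86, size=4)
    surge_weeks = np.array([1.5, 2.1, 2.8, 3.3])               # rapidly rising rates
    print(epidemic_alert(non_epidemic, quiet_weeks))           # typically not flagged
    print(epidemic_alert(non_epidemic, surge_weeks))           # flagged as epidemic
    ```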

  16. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.

  17. Assessing Compatibility of Direct Detection Data: Halo-Independent Global Likelihood Analyses

    CERN Document Server

    Gelmini, Graciela B.

    2016-10-18

    We present two different halo-independent methods utilizing a global maximum likelihood that can assess the compatibility of dark matter direct detection data given a particular dark matter model. The global likelihood we use comprises at least one extended likelihood and an arbitrary number of Poisson or Gaussian likelihoods. In the first method we find the global best fit halo function and construct a two-sided pointwise confidence band, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose $p$-value we then use to define a "plausibility region" (e.g. where $p \geq 10\%$). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. $p < 10\%$). As an example we apply these methods to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic s...

  18. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    Science.gov (United States)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.

  19. Age-specific incidence of A/H1N1 2009 influenza infection in England from sequential antibody prevalence data using likelihood-based estimation.

    Directory of Open Access Journals (Sweden)

    Marc Baguelin

    2011-02-01

    Full Text Available Estimating the age-specific incidence of an emerging pathogen is essential for understanding its severity and transmission dynamics. This paper describes a statistical method that uses likelihoods to estimate incidence from sequential serological data. The method requires information on seroconversion intervals and allows integration of information on the temporal distribution of cases from clinical surveillance. Among a family of candidate incidences, a likelihood function is derived by reconstructing the change in seroprevalence from seroconversion following infection and comparing it with the observed sequence of positivity among the samples. This method is applied to derive the cumulative and weekly incidence of A/H1N1 pandemic influenza in England during the second wave using sera taken between September 2009 and February 2010 in four age groups (1-4, 5-14, 15-24, 25-44 years). The highest cumulative incidence was in 5-14 year olds (59%, 95% credible interval (CI): 52%, 68%), followed by 1-4 year olds (49%, 95% CI: 38%, 61%), rates 20 and 40 times higher respectively than estimated from clinical surveillance. The method provides a more accurate and continuous measure of incidence than achieved by comparing prevalence in samples grouped by time period.

  20. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gelmini, Graciela B. [Department of Physics and Astronomy, UCLA, 475 Portola Plaza, Los Angeles, CA 90095 (United States); Huh, Ji-Haeng [CERN Theory Division, CH-1211, Geneva 23 (Switzerland); Witte, Samuel J. [Department of Physics and Astronomy, UCLA, 475 Portola Plaza, Los Angeles, CA 90095 (United States)

    2016-10-18

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  1. A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy.

    Science.gov (United States)

    Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw

    2014-07-01

    This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Advanced colorectal neoplasia was detected in 2544 of the 35,918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p ... advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7-8. The developed and internally validated score, consisting of simple clinical factors, successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. Published by the BMJ Publishing Group Limited.

  2. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    Science.gov (United States)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  3. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    Science.gov (United States)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
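
    For context, the sketch below implements the conventional baseline the abstract refers to, a matched filter (spatial correlator) under white Gaussian noise, computed in the Fourier domain via FFTs and thresholded per pixel; it is not the paper's joint-PDF likelihood ratio test. The PSF, noise level and false-alarm rate are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def matched_filter_detect(image, psf, noise_sigma, p_fa=1e-4):
        """Correlate the image with the PSF via FFTs (circular cross-correlation)
        and threshold each output pixel. Under white Gaussian noise the correlator
        output at a noise-only pixel is Gaussian with standard deviation
        noise_sigma * ||psf||, so the threshold follows from the desired
        per-pixel false-alarm probability."""
        score = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(psf))))
        tau = norm.isf(p_fa) * noise_sigma * np.sqrt(np.sum(psf ** 2))
        return np.argwhere(score > tau)

    rng = np.random.default_rng(2)
    image = rng.normal(0.0, 1.0, (64, 64))            # background noise only
    psf = np.zeros((64, 64))
    psf[:3, :3] = 1.0 / 9.0                           # crude 3x3 blur kernel
    image[31:33, 31:33] += 8.0                        # faint 2x2 object
    print(matched_filter_detect(image, psf, noise_sigma=1.0))
    ```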

  4. Brief Communication: Likelihood of societal preparedness for global change: trend detection

    Directory of Open Access Journals (Sweden)

    R. M. Vogel

    2013-07-01

    Full Text Available Anthropogenic influences on earth system processes are now pervasive, resulting in trends in river discharge, pollution levels, ocean levels, precipitation, temperature, wind, landslides, bird and plant populations and a myriad of other important natural hazards relating to earth system state variables. Thousands of trend detection studies have been published which report the statistical significance of observed trends. Unfortunately, such studies only concentrate on the null hypothesis of "no trend". Little or no attention is given to the power of such statistical trend tests, which would quantify the likelihood that we might ignore a trend if it really existed. The probability of missing the trend, if it exists, known as the type II error, informs us about the likelihood of whether or not society is prepared to accommodate and respond to such trends. We describe how the power, or probability of detecting a trend if it exists, depends critically on our ability to develop improved multivariate deterministic and statistical methods for predicting future trends in earth system processes. Several other research and policy implications for improving our understanding of trend detection and our societal response to those trends are discussed.
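
    The type II error argument can be made concrete with a small Monte Carlo experiment: simulate series with a known trend, apply a trend test, and record how often the trend is detected. The linear-regression test and trend model below are illustrative stand-ins, not methods advocated in the brief.

    ```python
    import numpy as np
    from scipy import stats

    def trend_test_power(n_years=50, slope=0.02, sigma=1.0, alpha=0.05,
                         n_sim=2000, seed=0):
        """Monte Carlo estimate of the power of a linear-regression trend test,
        i.e. the probability of detecting a trend of the given size if it exists
        (one minus the type II error discussed in the abstract)."""
        rng = np.random.default_rng(seed)
        t = np.arange(n_years)
        rejections = 0
        for _ in range(n_sim):
            y = slope * t + rng.normal(0, sigma, n_years)
            if stats.linregress(t, y).pvalue < alpha:
                rejections += 1
        return rejections / n_sim

    print(trend_test_power())              # weak trend: moderate power
    print(trend_test_power(slope=0.05))    # stronger trend: power near 1
    ```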

  5. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  6. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  7. Gravitational wave chirp search: no-signal cumulative distribution of the maximum likelihood detection statistic

    International Nuclear Information System (INIS)

    Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M

    2003-01-01

    The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality

  8. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu

    2017-02-16

    Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to consider the multivariate and multi-scale nature of process dynamics, a MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied using the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent variable based fault detection methods even further. Application to simulated distillation column data is used to evaluate the proposed MSPLS-GLR algorithm.
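
    A minimal sketch of the GLR step applied to model residuals, assuming they are approximately i.i.d. Gaussian with zero mean under fault-free operation: the unknown fault magnitude (a mean shift) is replaced by its maximum-likelihood estimate over a moving window. The MSPLS modelling stage that produces the residuals is assumed to have been run already; the window length and noise level are illustrative.

    ```python
    import numpy as np

    def glr_mean_shift(residuals, sigma, window=50):
        """Generalized likelihood ratio statistic for a shift in the mean of
        nominally zero-mean Gaussian model residuals. With the unknown shift
        replaced by its ML estimate (the window mean), the statistic is
        n * mean(r)^2 / (2 * sigma^2) over the last n = window samples."""
        r = np.asarray(residuals)[-window:]
        mu_hat = r.mean()                        # ML estimate of the shift
        return len(r) * mu_hat ** 2 / (2.0 * sigma ** 2)

    rng = np.random.default_rng(3)
    nominal = rng.normal(0.0, 0.5, 200)           # fault-free residuals
    faulty = rng.normal(0.4, 0.5, 200)            # residuals after a sensor bias
    print(glr_mean_shift(nominal, sigma=0.5), glr_mean_shift(faulty, sigma=0.5))
    ```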

  9. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu; Harrou, Fouzi; Sun, Ying

    2017-01-01

    Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to consider the multivariate and multi-scale nature of process dynamics, a MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied using the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent variable based fault detection methods even further. Application to simulated distillation column data is used to evaluate the proposed MSPLS-GLR algorithm.

  10. Penalized likelihood and multi-objective spatial scans for the detection and inference of irregular clusters

    Directory of Open Access Journals (Sweden)

    Fonseca Carlos M

    2010-10-01

    Full Text Available Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistic and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compared different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also built multi-objective scans using those three functions and compared them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance, regarding power, sensitivity and positive predicted value. The multi-objective non-connectivity scan is faster and better suited for the

  11. Efficient Mining and Detection of Sequential Intrusion Patterns for Network Intrusion Detection Systems

    Science.gov (United States)

    Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli

    In recent years, pervasive computing infrastructures have greatly improved the interaction between human and system. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and/or any new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, which is closely connected with homeland security, business transactions, and people's daily life. Accurate and efficient intrusion detection technologies are required to safeguard the network systems and the critical information transmitted in the network systems. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and the traffic data set generated by a private LAN testbed show promising results with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion detections.

  12. Parametric Roll Resonance Detection using Phase Correlation and Log-likelihood Testing Techniques

    DEFF Research Database (Denmark)

    Galeazzi, Roberto; Blanke, Mogens; Poulsen, Niels Kjølstad

    2009-01-01

    Real-time detection of parametric roll is still an open issue that is gathering increasing attention. First generation warning systems, based on guidelines and polar diagrams, showed their potential to face issues like long-term prediction and risk assessment. This paper presents a second generation warning system, the purpose of which is to provide the master with an onboard system able to trigger an alarm when parametric roll is likely to happen within the immediate future. A detection scheme is introduced which is able to issue a warning within five roll periods after a resonant motion has started. After having determined statistical properties of the signals at hand, a detector based on the generalised log-likelihood ratio test (GLRT) is designed to look for variation in signal power. The ability of the detector to trigger alarms when parametric roll is going to onset is evaluated on two...
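
    In the spirit of "looking for variation in signal power", the sketch below computes a generalized log-likelihood ratio for an increase in the variance of a zero-mean signal over a sliding window, with the nominal variance assumed known. This is a generic variance-change GLRT, not the detector derived in the paper, and all parameters and the toy data are illustrative.

    ```python
    import numpy as np

    def glrt_power_increase(x, sigma0, window=256):
        """Generalized log-likelihood ratio for a change in the power (variance)
        of a zero-mean Gaussian signal over the most recent window. With s2 the
        ML variance estimate, GLR = n/2 * (s2/sigma0**2 - 1 - ln(s2/sigma0**2));
        only power increases are kept, as in resonance onset."""
        w = np.asarray(x)[-window:]
        n = len(w)
        s2 = np.mean(w ** 2)                      # ML estimate of the variance
        ratio = s2 / sigma0 ** 2
        glr = 0.5 * n * (ratio - 1.0 - np.log(ratio))
        return glr if ratio > 1.0 else 0.0        # ignore power decreases

    rng = np.random.default_rng(4)
    calm = rng.normal(0, 1.0, 2000)               # roll signal before resonance
    resonant = rng.normal(0, 2.5, 2000)           # amplified roll oscillations
    print(glrt_power_increase(calm, sigma0=1.0),
          glrt_power_increase(resonant, sigma0=1.0))
    ```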

  13. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    Science.gov (United States)

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify some factors such as age and gender that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are also given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Maximum-Likelihood Sequence Detection of Multiple Antenna Systems over Dispersive Channels via Sphere Decoding

    Directory of Open Access Journals (Sweden)

    Hassibi Babak

    2002-01-01

    Full Text Available Multiple antenna systems are capable of providing high data rate transmissions over wireless channels. When the channels are dispersive, the signal at each receive antenna is a combination of both the current and past symbols sent from all transmit antennas, corrupted by noise. The optimal receiver is a maximum-likelihood sequence detector and is often considered to be practically infeasible due to high computational complexity (exponential in the number of antennas and the channel memory). Therefore, in practice, one often settles for a less complex suboptimal receiver structure, typically with an equalizer meant to suppress both the intersymbol and interuser interference, followed by the decoder. We propose sphere decoding for sequence detection in multiple antenna communication systems over dispersive channels. Sphere decoding provides the maximum-likelihood estimate with computational complexity comparable to the standard space-time decision-feedback equalizing (DFE) algorithms. The performance and complexity of sphere decoding are compared with the DFE algorithm by means of simulations.

  15. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. A flexible spatial scan statistic with a restricted likelihood ratio for detecting disease clusters.

    Science.gov (United States)

    Tango, Toshiro; Takahashi, Kunihiko

    2012-12-30

    Spatial scan statistics are widely used tools for detection of disease clusters. Especially, the circular spatial scan statistic proposed by Kulldorff (1997) has been utilized in a wide variety of epidemiological studies and disease surveillance. However, as it cannot detect noncircular, irregularly shaped clusters, many authors have proposed different spatial scan statistics, including the elliptic version of Kulldorff's scan statistic. The flexible spatial scan statistic proposed by Tango and Takahashi (2005) has also been used for detecting irregularly shaped clusters. However, this method sets a feasible limitation of a maximum of 30 nearest neighbors for searching candidate clusters because of heavy computational load. In this paper, we show a flexible spatial scan statistic implemented with a restricted likelihood ratio proposed by Tango (2008) to (1) eliminate the limitation of 30 nearest neighbors and (2) to have surprisingly much less computational time than the original flexible spatial scan statistic. As a side effect, it is shown to be able to detect clusters with any shape reasonably well as the relative risk of the cluster becomes large via Monte Carlo simulation. We illustrate the proposed spatial scan statistic with data on mortality from cerebrovascular disease in the Tokyo Metropolitan area, Japan. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    Science.gov (United States)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  18. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Grace J. [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (Canada); Stayman, J. Webster; Zbijewski, Wojciech [Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (United States); Siewerdsen, Jeffrey H., E-mail: jeff.siewerdsen@jhu.edu [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP

  19. Parameter-free bearing fault detection based on maximum likelihood estimation and differentiation

    International Nuclear Information System (INIS)

    Bozchalooi, I Soltani; Liang, Ming

    2009-01-01

    Bearing faults can lead to malfunction and ultimately complete stall of many machines. The conventional high-frequency resonance (HFR) method has been commonly used for bearing fault detection. However, it is often very difficult to obtain and calibrate bandpass filter parameters, i.e. the center frequency and bandwidth, the key to the success of the HFR method. This inevitably undermines the usefulness of the conventional HFR technique. To avoid such difficulties, we propose parameter-free, versatile yet straightforward techniques to detect bearing faults. We focus on two types of measured signals frequently encountered in practice: (1) a mixture of impulsive faulty bearing vibrations and intrinsic background noise and (2) impulsive faulty bearing vibrations blended with intrinsic background noise and vibration interferences. To design a proper signal processing technique for each case, we analyze the effects of intrinsic background noise and vibration interferences on amplitude demodulation. For the first case, a maximum likelihood-based fault detection method is proposed to accommodate the Rician distribution of the amplitude-demodulated signal mixture. For the second case, we first illustrate that the high-amplitude low-frequency vibration interferences can make the amplitude demodulation ineffective. Then we propose a differentiation method to enhance the fault detectability. It is shown that the iterative application of a differentiation step can boost the relative strength of the impulsive faulty bearing signal component with respect to the vibration interferences. This preserves the effectiveness of amplitude demodulation and hence leads to more accurate fault detection. The proposed approaches are evaluated on simulated signals and experimental data acquired from faulty bearings
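
    A small sketch of the differentiation idea from the abstract: repeated differentiation attenuates high-amplitude low-frequency vibration interferences far more than the sharp fault impulses, after which the Hilbert transform gives the amplitude-demodulated envelope. The Rician maximum-likelihood detection stage of the paper is not reproduced; the toy signal and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def enhanced_envelope(signal, n_diff=1):
        """Repeatedly differentiate the raw vibration signal, which attenuates
        slow, high-amplitude interferences far more than sharp fault impulses,
        then return the Hilbert (amplitude-demodulated) envelope."""
        x = np.asarray(signal, dtype=float)
        for _ in range(n_diff):
            x = np.diff(x)
        return np.abs(hilbert(x))

    # Toy signal: slow interference + periodic fault impulses + noise
    fs = 10_000
    t = np.arange(0, 1, 1 / fs)
    interference = 2.0 * np.sin(2 * np.pi * 30 * t)        # low-frequency masking
    impulses = np.zeros_like(t)
    impulses[::500] = 1.0                                   # 20 Hz fault impacts
    noise = 0.1 * np.random.default_rng(5).normal(size=t.size)
    env = enhanced_envelope(interference + impulses + noise, n_diff=2)
    print(env.max() / np.median(env))                       # impulse-to-floor ratio
    ```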

  20. Fault detection in multiply-redundant measurement systems via sequential testing

    International Nuclear Information System (INIS)

    Ray, A.

    1988-01-01

    The theory and application of a sequential test procedure for fault detection and isolation are presented. The test procedure is suited for development of intelligent instrumentation in strategic processes like aircraft and nuclear plants where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references

  1. Unsupervised land cover change detection: meaningful sequential time series analysis

    CSIR Research Space (South Africa)

    Salmon, BP

    2011-06-01

    Full Text Available An automated land cover change detection method is proposed that uses coarse spatial resolution hyper-temporal earth observation satellite time series data. The study compared three different unsupervised clustering approaches that operate on short...

  2. A preliminary evaluation of the generalized likelihood ratio for detecting and identifying control element failures in a transport aircraft

    Science.gov (United States)

    Bundick, W. T.

    1985-01-01

    The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.

  3. Enhancement of sequential zymography technique for the detection of thermophilic lipases and proteases.

    Science.gov (United States)

    Wilkesman, Jeff; Hernández, Zully; Fernández, Marleny; Contreras, Lellys M; Kurz, Liliana

    2014-05-01

    Analysis of lipases and proteases present in cell-free fractions of thermophilic Bacillus sp. cultures was performed with an enhanced sequential zymography method. After the PAGE run, the gel was electrotransferred to another polyacrylamide gel containing a mixture of glycerol tributyrate, olive oil and gelatin. After transfer, this substrate-mix gel was incubated for lipase detection, until bands appeared, and later stained with CBB for protease detection. Advantages are, besides the detection of two enzymes on a single gel, savings in time and material.

  4. Speech detection in noise and spatial unmasking in children with simultaneous versus sequential bilateral cochlear implants.

    Science.gov (United States)

    Chadha, Neil K; Papsin, Blake C; Jiwani, Salima; Gordon, Karen A

    2011-09-01

    To measure speech detection in noise performance for children with bilateral cochlear implants (BiCI), to compare performance in children with simultaneous implants versus those with sequential implants, and to compare performance to normal-hearing children. Prospective cohort study. Tertiary academic pediatric center. Children with early-onset bilateral deafness and 2-year BiCI experience, comprising the "sequential" group (>2 yr interimplantation delay, n = 12) and "simultaneous" group (no interimplantation delay, n = 10), and normal-hearing controls (n = 8). Thresholds to speech detection (at 0-degree azimuth) were measured with noise at 0-degree azimuth or ± 90-degree azimuth. Spatial unmasking (SU) as the noise condition changed from 0-degree azimuth to ± 90-degree azimuth and binaural summation advantage (BSA) of 2 over 1 CI. Speech detection in noise was significantly poorer than controls for both BiCI groups (p ... simultaneous group approached levels found in normal controls (7.2 ± 0.6 versus 8.6 ± 0.6 dB, p > 0.05) and was significantly better than that in the sequential group (3.9 ± 0.4 dB, p ... simultaneous group but, in the sequential group, was significantly better when noise was moved to the second rather than the first implanted ear (4.8 ± 0.5 versus 3.0 ± 0.4 dB, p ... sequential group's second rather than first CI.

  5. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

    This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. Applicability of this methodology is illustrated with a redundant sensor data set, and its performance is evaluated. (authors)
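
    The abstract does not give implementation details; the sketch below illustrates the general idea under stated assumptions: a kernel density estimate of nominal data serves as the null model, a single hypothesised abnormal mode serves as the alternative, and log-likelihood ratios are accumulated in a CUSUM-style sequential detector. The threshold and the shifted alternative are illustrative only and are not taken from the paper.

    ```python
    # Minimal sketch of KDE-based sequential anomaly detection (not the authors' code).
    # Assumptions: 1-D sensor data, one hypothesised abnormal mode, CUSUM accumulation.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    normal_data = rng.normal(0.0, 1.0, size=5000)      # empirical "null" data
    null_pdf = gaussian_kde(normal_data)                # instance-based null model
    alt_pdf = gaussian_kde(normal_data + 1.5)           # hypothesised abnormal mode

    def cusum_detect(stream, threshold=10.0):
        """Return the index at which the CUSUM of log-likelihood ratios crosses
        the threshold, or None if no anomaly is declared."""
        score = 0.0
        for i, x in enumerate(stream):
            llr = np.log(alt_pdf(x)[0] + 1e-12) - np.log(null_pdf(x)[0] + 1e-12)
            score = max(0.0, score + llr)               # reset at zero (CUSUM)
            if score > threshold:
                return i
        return None

    stream = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])
    print("alarm at sample:", cusum_detect(stream))     # alarm shortly after sample 200
    ```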

  6. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    Science.gov (United States)

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and the L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrated the feasibility and reliability of integrating OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Eyewitness decisions in simultaneous and sequential lineups: a dual-process signal detection theory analysis.

    Science.gov (United States)

    Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H

    2005-07-01

    Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
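
    For readers unfamiliar with the signal detection measures this study relies on, the standard equal-variance Gaussian SDT computation of discrimination (d') and criterion (c) from hit and false-alarm rates is sketched below. The rates are invented for illustration, and the authors' own analysis may include additional corrections (e.g., for rates of 0 or 1).

    ```python
    # Equal-variance Gaussian signal detection measures from hit/false-alarm rates.
    from scipy.stats import norm

    def sdt_measures(hit_rate, fa_rate):
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa                  # discrimination accuracy
        criterion = -0.5 * (z_hit + z_fa)       # response bias (c > 0 = conservative)
        return d_prime, criterion

    # Example: a sequential lineup often yields fewer hits *and* fewer false alarms,
    # which shows up as a more conservative criterion rather than a change in d'.
    print(sdt_measures(0.75, 0.30))   # simultaneous-like rates
    print(sdt_measures(0.60, 0.15))   # sequential-like rates (larger c)
    ```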

  8. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
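
    The abstract is truncated, but the core issue (multiple local maxima of the likelihood or posterior under a logistic IRT model) can be illustrated with a simple grid scan of the log-posterior. The item parameters and response pattern below are invented for illustration and are not from the article.

    ```python
    # Illustrative scan of the log-posterior for a two-parameter logistic (2PL) IRT
    # model, showing how multiple local maxima of the (penalised) likelihood can be
    # located. Item parameters and the response pattern are hypothetical.
    import numpy as np

    a = np.array([2.0, 1.8, 2.2, 1.5])        # discriminations (hypothetical)
    b = np.array([-2.0, -1.5, 1.5, 2.0])      # difficulties (hypothetical)
    u = np.array([0, 0, 1, 1])                # aberrant response pattern

    def log_posterior(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        loglik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
        return loglik - 0.5 * theta**2         # standard normal prior (MAP)

    grid = np.linspace(-4, 4, 801)
    values = np.array([log_posterior(t) for t in grid])
    # A point is a local maximum if it is higher than both of its neighbours.
    is_max = (values[1:-1] > values[:-2]) & (values[1:-1] > values[2:])
    print("local maxima near theta =", grid[1:-1][is_max])
    ```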

  9. The Maximum Cross-Correlation approach to detecting translational motions from sequential remote-sensing images

    Science.gov (United States)

    Gao, J.; Lythe, M. B.

    1996-06-01

    This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
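
    The article's C program is not reproduced here; the following numpy sketch illustrates the MCC idea of searching for the displacement that maximises the normalised cross-correlation of a template window between two sequential images. Window and search sizes are arbitrary choices for the example.

    ```python
    # Sketch of the Maximum Cross-Correlation (MCC) idea: for a template window in
    # image 1, find the displacement in image 2 with the highest normalised
    # cross-correlation. Not the original C implementation.
    import numpy as np

    def ncc(a, b):
        a = a - a.mean(); b = b - b.mean()
        denom = np.sqrt((a**2).sum() * (b**2).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def mcc_displacement(img1, img2, row, col, win=16, search=8):
        """Displacement (drow, dcol) of the window centred at (row, col)."""
        half = win // 2
        tpl = img1[row-half:row+half, col-half:col+half]
        best, best_d = -np.inf, (0, 0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                cand = img2[row-half+dr:row+half+dr, col-half+dc:col+half+dc]
                score = ncc(tpl, cand)
                if score > best:
                    best, best_d = score, (dr, dc)
        return best_d, best

    # Synthetic check: image 2 is image 1 shifted by (3, -2) pixels.
    rng = np.random.default_rng(1)
    img1 = rng.random((128, 128))
    img2 = np.roll(img1, shift=(3, -2), axis=(0, 1))
    print(mcc_displacement(img1, img2, 64, 64))   # expect ((3, -2), ~1.0)
    ```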

  10. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    Science.gov (United States)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
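
    The thesis' full GLR formulation for Kalman-filter innovations is not reproduced here; the sketch below shows a simplified GLR statistic for a mean jump of unknown size and onset in unit-variance white residuals, which conveys the essential maximisation over the hypothesised failure time.

    ```python
    # Simplified GLR statistic for a mean jump of unknown size and onset in
    # zero-mean, unit-variance residuals (e.g. Kalman filter innovations).
    # For onset k, the MLE of the jump is the post-k sample mean, and twice the
    # log-likelihood ratio reduces to (sum of residuals after k)^2 / (N - k).
    import numpy as np

    def glr_statistic(residuals):
        r = np.asarray(residuals, dtype=float)
        N = len(r)
        best_stat, best_onset = 0.0, None
        for k in range(N - 1):
            s = r[k:].sum()
            stat = s * s / (N - k)
            if stat > best_stat:
                best_stat, best_onset = stat, k
        return best_stat, best_onset

    rng = np.random.default_rng(2)
    clean = rng.normal(0, 1, 300)
    faulty = clean.copy(); faulty[200:] += 1.0     # simulated actuator/sensor bias
    print(glr_statistic(clean))    # small statistic, no clear onset
    print(glr_statistic(faulty))   # large statistic, onset near sample 200
    ```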

  11. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  12. Temporal characteristics of radiologists’ and novices’ lesion detection in viewing medical images presented rapidly and sequentially

    Directory of Open Access Journals (Sweden)

    Ryoichi Nakashima

    2016-10-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. Particularly, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for target detection that observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of the apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have other mechanisms, which novices do not have, to detect lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.

  13. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Rotating machinery is the most common machinery in industry. The root of faults in rotating machinery is often faulty rolling element bearings. This paper presents a technique using an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values), which are derived from the vibration signals of test data. The results show that the performance of the proposed optimized system is better than in most previous studies, even though it uses only two features. The effectiveness of the above method is illustrated using the obtained bearing vibration data.

  14. Is sequential cranial ultrasound reliable for detection of white matter injury in very preterm infants?

    International Nuclear Information System (INIS)

    Leijser, Lara M.; Steggerda, Sylke J.; Walther, Frans J.; Wezel-Meijler, Gerda van; Bruine, Francisca T. de; Grond, Jeroen van der

    2010-01-01

    Cranial ultrasound (cUS) may not be reliable for detection of diffuse white matter (WM) injury. Our aim was to assess in very preterm infants the reliability of a classification system for WM injury on sequential cUS throughout the neonatal period, using magnetic resonance imaging (MRI) as reference standard. In 110 very preterm infants (gestational age <32 weeks), serial cUS during admission (median 8, range 4-22) and again around term equivalent age (TEA) and a single MRI around TEA were performed. cUS during admission were assessed for presence of WM changes, and contemporaneous cUS and MRI around TEA additionally for abnormality of lateral ventricles. Sequential cUS (from birth up to TEA) and MRI were classified as normal/mildly abnormal, moderately abnormal, or severely abnormal, based on a combination of findings of the WM and lateral ventricles. Predictive values of the cUS classification were calculated. Sequential cUS were classified as normal/mildly abnormal, moderately abnormal, and severely abnormal in, respectively, 22%, 65%, and 13% of infants and MRI in, respectively, 30%, 52%, and 18%. The positive predictive value of the cUS classification for the MRI classification was high for severely abnormal WM (0.79) but lower for normal/mildly abnormal (0.67) and moderately abnormal (0.64) WM. Sequential cUS during the neonatal period detects severely abnormal WM in very preterm infants but is less reliable for mildly and moderately abnormal WM. MRI around TEA seems needed to reliably detect WM injury in very preterm infants. (orig.)

  15. Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.

    Science.gov (United States)

    Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W

    2016-12-01

    We have developed a rapid magnetic microparticle-based detection strategy for malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII as well as detection antibodies with distinct enzymes for each biomarker are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µl, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Anatomy of a defective barrier: sequential glove leak detection in a surgical and dental environment.

    Science.gov (United States)

    Albin, M S; Bunegin, L; Duke, E S; Ritter, R R; Page, C P

    1992-02-01

    a) To determine the frequency of perforations in latex surgical gloves before, during, and after surgical and dental procedures; b) to evaluate the topographical distribution of perforations in latex surgical gloves after surgical and dental procedures; and c) to validate methods of testing for latex surgical glove patency. Multitrial tests under in vitro conditions and a prospective sequential patient study using consecutive testing. An outpatient dental clinic at a university dental school, the operating suite in a medical school affiliated with the Veteran's Hospital, and a biomechanics laboratory. Surgeons, scrub nurses, and dental technicians participating in 50 surgical and 50 dental procedures. We collected 679 latex surgical gloves after surgical procedures and tested them for patency by using a water pressure test. We also employed an electronic glove leak detector before donning, after sequential time intervals, and upon termination of 47 surgical (sequential surgical), 50 dental (sequential dental), and in three orthopedic cases where double gloving was used. The electronic glove leak detector was validated by using electronic point-by-point surface probing, fluorescein dye diffusion, as well as detecting glove punctures made with a 27-gauge needle. The random study indicated a leak rate of 33.0% (224 out of 679) in latex surgical gloves; the sequential surgical study demonstrated patency in 203 out of 347 gloves (58.5%); the sequential dental study showed 34 leaks in the 106 gloves used (32.1%); and with double gloving, the leak rate decreased to 25.0% (13 of 52 gloves tested). While the allowable FDA defect rate for unused latex surgical gloves is 1.5%, we noted defect rates in unused gloves of 5.5% in the sequential surgical, 1.9% in the sequential dental, and 4.0% in our electronic glove leak detector validating study. In the sequential surgical study, 52% of the leaks had occurred by 75 mins, and in the sequential dental study, 75% of the leaks

  17. Use of (D, MUF) and maximum-likelihood methods for detecting falsification and diversion in data-verification problems

    International Nuclear Information System (INIS)

    Goldman, A.S.; Beedgen, R.

    1982-01-01

    The investigation of data falsification and/or diversion is of major concern in nuclear materials accounting procedures used in international safeguards. In this paper, two procedures, denoted by (D,MUF) and LR (Likelihood Ratio), are discussed and compared when testing the hypothesis that neither diversion nor falsification has taken place versus the one-sided alternative that at least one of these parameters is positive. Critical regions and detection probabilities are given for both tests. It is shown that the LR method outperforms (D,MUF) when diversion and falsification take place

  18. An omnibus likelihood test statistic and its factorization for change detection in time series of polarimetric SAR data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data ... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...

  19. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    Science.gov (United States)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, the fault data are detected by the classification method, which separates the fault data from the normal data. A deep belief network (DBN), a technique for deep learning, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.

  20. Anomaly Detection in Gas Turbine Fuel Systems Using a Sequential Symbolic Method

    Directory of Open Access Journals (Sweden)

    Fei Li

    2017-05-01

    Anomaly detection plays a significant role in helping gas turbines run reliably and economically. Considering collective anomalous data and both the sensitivity and robustness of the anomaly detection model, a sequential symbolic anomaly detection method is proposed and applied to the gas turbine fuel system. A structural Finite State Machine is used to evaluate the posterior probabilities of observing symbolic sequences and the most probable state sequences underlying them. Hence an estimation-based model and a decoding-based model are used to identify anomalies in two different ways. Experimental results indicate that both models perform well overall, but the estimation-based model is more robust, whereas the decoding-based model is more accurate, particularly in a certain range of sequence lengths. Therefore, the proposed method can complement existing symbolic dynamic analysis-based anomaly detection methods well, especially in the gas turbine domain.

  1. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.

  2. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
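
    As a concrete illustration of the simplest case treated in the book (a confidence statement for a univariate mean under IID sampling), the following sketch computes the profile empirical likelihood ratio by solving the Lagrange-multiplier equation numerically. This is the standard textbook construction, not code from the book.

    ```python
    # Profile empirical likelihood ratio for a univariate mean (textbook sketch).
    # Weights are w_i = 1/(n*(1 + lam*(x_i - mu))) with lam solving the estimating
    # equation below; -2*log R(mu) is asymptotically chi-square with 1 d.o.f.
    import numpy as np
    from scipy.optimize import brentq

    def neg2_log_elr(x, mu):
        x = np.asarray(x, dtype=float)
        d = x - mu
        if d.min() >= 0 or d.max() <= 0:
            return np.inf                      # mu outside the convex hull of the data
        eps = 1e-10
        lo = -1.0 / d.max() + eps              # keep all 1 + lam*d_i > 0
        hi = -1.0 / d.min() - eps
        g = lambda lam: np.sum(d / (1.0 + lam * d))
        lam = brentq(g, lo, hi)                # g is monotone, so one root exists
        return 2.0 * np.sum(np.log1p(lam * d))

    rng = np.random.default_rng(3)
    x = rng.exponential(scale=2.0, size=50)
    print(neg2_log_elr(x, x.mean()))           # ~0 at the sample mean
    print(neg2_log_elr(x, x.mean() + 0.8))     # larger; compare with a chi2(1) quantile
    ```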

  3. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  4. The phylogenetic likelihood library.

    Science.gov (United States)

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  5. Analytical maximum-likelihood method to detect patterns in real networks

    International Nuclear Information System (INIS)

    Squartini, Tiziano; Garlaschelli, Diego

    2011-01-01

    In order to detect patterns in real networks, randomized graph ensembles that preserve only part of the topology of an observed network are systematically used as fundamental null models. However, the generation of them is still problematic. Existing approaches are either computationally demanding and beyond analytic control or analytically accessible but highly approximate. Here, we propose a solution to this long-standing problem by introducing a fast method that allows one to obtain expectation values and standard deviations of any topological property analytically, for any binary, weighted, directed or undirected network. Remarkably, the time required to obtain the expectation value of any property analytically across the entire graph ensemble is as short as that required to compute the same property using the adjacency matrix of the single original network. Our method reveals that the null behavior of various correlation properties is different from what was believed previously, and is highly sensitive to the particular network considered. Moreover, our approach shows that important structural properties (such as the modularity used in community detection problems) are currently based on incorrect expressions, and provides the exact quantities that should replace them.

  6. On the likelihood of detecting gravitational waves from Population III compact object binaries

    Science.gov (United States)

    Belczynski, Krzysztof; Ryu, Taeho; Perna, Rosalba; Berti, Emanuele; Tanaka, Takamitsu L.; Bulik, Tomasz

    2017-11-01

    We study the contribution of binary black hole (BH-BH) mergers from the first, metal-free stars in the Universe (Pop III) to gravitational wave detection rates. Our study combines initial conditions for the formation of Pop III stars based on N-body simulations of binary formation (including rates, binary fraction, initial mass function, orbital separation and eccentricity distributions) with an updated model of stellar evolution specific for Pop III stars. We find that the merger rate of these Pop III BH-BH systems is relatively small (≲ 0.1 Gpc-3 yr-1) at low redshifts (z 1 per cent) contribution of these stars to low-redshift BH-BH mergers. However, it remains to be tested whether (and at what level) rapidly spinning Pop III stars in the homogeneous evolution scenario can contribute to BH-BH mergers in the local Universe.

  7. Novel maximum likelihood approach for passive detection and localisation of multiple emitters

    Science.gov (United States)

    Hernandez, Marcel

    2017-12-01

    In this paper, a novel target acquisition and localisation algorithm (TALA) is introduced that offers a capability for detecting and localising multiple targets using the intermittent "signals-of-opportunity" (e.g. acoustic impulses or radio frequency transmissions) they generate. The TALA is a batch estimator that addresses the complex multi-sensor/multi-target data association problem in order to estimate the locations of an unknown number of targets. The TALA is unique in that it does not require measurements to be of a specific type, and can be implemented for systems composed of either homogeneous or heterogeneous sensors. The performance of the TALA is demonstrated in simulated scenarios with a network of 20 sensors and up to 10 targets. The sensors generate angle-of-arrival (AOA), time-of-arrival (TOA), or hybrid AOA/TOA measurements. It is shown that the TALA is able to successfully detect 83-99% of the targets, with a negligible number of false targets declared. Furthermore, the localisation errors of the TALA are typically within 10% of the errors generated by a "genie" algorithm that is given the correct measurement-to-target associations. The TALA also performs well in comparison with an optimistic Cramér-Rao lower bound, with typical differences in performance of 10-20%, and differences in performance of 40-50% in the most difficult scenarios considered. The computational expense of the TALA is also controllable, which allows the TALA to maintain computational feasibility even in the most challenging scenarios considered. This allows the approach to be implemented in time-critical scenarios, such as in the localisation of artillery firing events. It is concluded that the TALA provides a powerful situational awareness aid for passive surveillance operations.

  8. Sequential chromatin immunoprecipitation to detect SUMOylated MeCP2 in neurons

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2016-03-01

    The small ubiquitin-like modifier (SUMO) is a short peptide that can be covalently linked to proteins, altering their function. SUMOylation is an essential post-translational modification (PTM). Because of its dynamic nature, low abundance levels, and technical limitations, the occupation of endogenous SUMOylated transcription factors at genomic loci is challenging to detect. The chromatin regulator Methyl CpG binding protein 2 (MeCP2) is subjected to PTMs including SUMO. Mutations in MeCP2 lead to Rett syndrome, a severe neurodevelopmental disorder. Here, we present an efficient method to perform sequential chromatin immunoprecipitation (Seq-ChIP) for detecting SUMOylated MeCP2 in neurons. This Seq-ChIP technique is a useful tool to determine the occupancy of SUMOylated transcription and chromatin factors at specific genomic regions.

  9. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses the shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous computational burden reduction. The technique is based on sequential projection pursuit where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.

  10. Ultrahigh Responsivity and Detectivity Graphene-Perovskite Hybrid Phototransistors by Sequential Vapor Deposition

    Science.gov (United States)

    Chang, Po-Han; Liu, Shang-Yi; Lan, Yu-Bing; Tsai, Yi-Chen; You, Xue-Qian; Li, Chia-Shuo; Huang, Kuo-You; Chou, Ang-Sheng; Cheng, Tsung-Chin; Wang, Juen-Kai; Wu, Chih-I.

    2017-04-01

    In this work, graphene-methylammonium lead iodide (MAPbI3) perovskite hybrid phototransistors fabricated by sequential vapor deposition are demonstrated. Ultrahigh responsivity of 1.73 × 10^7 A W^-1 and detectivity of 2 × 10^15 Jones are achieved, with extremely high effective quantum efficiencies of about 10^8% in the visible range (450-700 nm). This excellent performance is attributed to the ultra-flat perovskite films grown by vapor deposition on the graphene sheets. The hybrid structure of graphene covered with uniform perovskite has high exciton separation ability under light exposure, and thus efficiently generates photocurrents. This paper presents photoluminescence (PL) images along with statistical analysis used to study the photo-induced exciton behavior. Both uniform and dramatic PL intensity quenching has been observed over entire measured regions, consistently demonstrating excellent exciton separation in the devices.

  11. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is an approximation useful to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
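
    For reference, the moment-based ("inverse normalised variance") estimator mentioned in the abstract has a closed form, while the MLE can be obtained numerically; the sketch below contrasts the two on simulated data. The paper's specific first/second-order and Greenwood approximations are not reproduced here.

    ```python
    # Moment-based estimate of the Nakagami m parameter versus a numerical
    # maximum-likelihood fit (illustrative only; not the paper's approximations).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    true_m, omega = 0.8, 1.0
    r = stats.nakagami.rvs(true_m, scale=np.sqrt(omega), size=2000, random_state=rng)

    # Moment-based estimator (MBE): m = (E[R^2])^2 / Var(R^2), Omega = E[R^2]
    p2 = np.mean(r**2)
    m_mbe = p2**2 / (np.mean(r**4) - p2**2)

    # Numerical MLE via scipy (location fixed at zero)
    m_mle, _, scale = stats.nakagami.fit(r, floc=0)

    print(f"MBE m = {m_mbe:.3f}, MLE m = {m_mle:.3f}, Omega ~ {scale**2:.3f}")
    ```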

  12. Visual detection and sequential injection determination of aluminium using a cinnamoyl derivative.

    Science.gov (United States)

    Elečková, Lenka; Alexovič, Michal; Kuchár, Juraj; Balogh, Ioseph S; Andruch, Vasil

    2015-02-01

    A cinnamoyl derivative, 3-[4-(dimethylamino)cinnamoyl]-4-hydroxy-6-methyl-3,4-2H-pyran-2-one, was used as a ligand for the determination of aluminium. Upon the addition of an acetonitrile solution of the ligand to an aqueous solution containing Al(III) and a buffer solution at pH 8, a marked change in colour from yellow to orange is observed. The colour intensity is proportional to the concentration of Al(III); thus, the 'naked-eye' detection of aluminium is possible. The reaction is also applied for sequential injection determination of aluminium. Beer's law is obeyed in the range from 0.055 to 0.66 mg L^-1 of Al(III). The limit of detection, calculated as three times the standard deviation of the blank test (n=10), was found to be 4 μg L^-1 for Al(III). The method was applied for the determination of aluminium in spiked water samples and pharmaceutical preparations. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO

    Directory of Open Access Journals (Sweden)

    Lixin Yan

    2016-07-01

    The ability to identify hazardous traffic events is already considered one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous studies, which were mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of prediction accuracy. The conclusions can provide reference evidence for the development of dangerous situation warning products and the design of intelligent vehicles.

  14. Detection of sequential activation of left atrium and coronary sinus musculature in the general population

    Directory of Open Access Journals (Sweden)

    Masaki Ota, MD

    2016-12-01

    Conclusions: Far-field LA potentials are often recorded in the CS during sequential LA and CSM activation in the general population. The timing of LA potentials in CS recordings reflected the direction of conduction across the CSM.

  15. Inhomogeneities detection in annual precipitation time series in Portugal using direct sequential simulation

    Science.gov (United States)

    Caineta, Júlio; Ribeiro, Sara; Costa, Ana Cristina; Henriques, Roberto; Soares, Amílcar

    2014-05-01

    Climate data homogenisation is of major importance in monitoring climate change, the validation of weather forecasting, general circulation and regional atmospheric models, modelling of erosion, drought monitoring, among other studies of hydrological and environmental impacts. This happens because non-climate factors can cause time series discontinuities which may hide the true climatic signal and patterns, thus potentially biasing the conclusions of those studies. In the last two decades, many methods have been developed to identify and remove these inhomogeneities. One of those is based on geostatistical simulation (DSS - direct sequential simulation), where local probability density functions (pdf) are calculated at candidate monitoring stations, using spatial and temporal neighbouring observations, and then are used for detection of inhomogeneities. This approach has been previously applied to detect inhomogeneities in four precipitation series (wet day count) from a network with 66 monitoring stations located in the southern region of Portugal (1980-2001). This study revealed promising results and the potential advantages of geostatistical techniques for inhomogeneities detection in climate time series. This work extends the case study presented before and investigates the application of the geostatistical stochastic approach to ten precipitation series that were previously classified as inhomogeneous by one of six absolute homogeneity tests (Mann-Kendall test, Wald-Wolfowitz runs test, Von Neumann ratio test, Standard normal homogeneity test (SNHT) for a single break, Pettitt test, and Buishand range test). Moreover, a sensitivity analysis is implemented to investigate the number of simulated realisations that should be used to accurately infer the local pdfs. Accordingly, the number of simulations per iteration is increased from 50 to 500, which resulted in a more representative local pdf. A set of default and recommended settings is provided, which will help
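
    As an illustration of one of the absolute homogeneity tests the study uses for comparison, the following sketch implements the standard normal homogeneity test (SNHT) for a single break on a synthetic annual series. Critical values, which depend on the series length, are omitted.

    ```python
    # Sketch of the Standard Normal Homogeneity Test (SNHT) for a single break.
    import numpy as np

    def snht_single_break(y):
        y = np.asarray(y, dtype=float)
        n = len(y)
        z = (y - y.mean()) / y.std(ddof=1)
        t = np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                      for k in range(1, n)])
        k_hat = int(np.argmax(t)) + 1
        return t.max(), k_hat          # test statistic T0 and most likely break point

    rng = np.random.default_rng(5)
    series = rng.normal(500, 60, 40)
    series[25:] += 80                  # artificial inhomogeneity (e.g. station move)
    print(snht_single_break(series))   # large T0, break detected near index 25
    ```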

  16. Sequential strand displacement beacon for detection of DNA coverage on functionalized gold nanoparticles.

    Science.gov (United States)

    Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris

    2014-06-17

    Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.

  17. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
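
    Although the authors' exact model specification is not given in the abstract, the roles of the Criterion and Threshold parameters can be illustrated with a minimal biased random-walk accumulator; the parameter values below are hypothetical and not fitted to the study's data.

    ```python
    # Minimal random-walk accumulator: a response criterion biases the sampled
    # evidence increments, and a decision threshold trades speed for accuracy.
    import numpy as np

    def simulate_trial(drift, criterion, threshold, noise=1.0,
                       max_steps=10_000, rng=None):
        rng = rng or np.random.default_rng()
        evidence = 0.0
        for step in range(1, max_steps + 1):
            # criterion shifts each increment toward "no conflict" responses
            evidence += (drift - criterion) + rng.normal(0.0, noise)
            if evidence >= threshold:
                return "conflict", step
            if evidence <= -threshold:
                return "no conflict", step
        return "no decision", max_steps

    rng = np.random.default_rng(6)
    # Higher threshold -> slower but more consistent; larger criterion -> fewer "conflict" calls.
    for thr in (5.0, 15.0):
        outcomes = [simulate_trial(drift=0.3, criterion=0.0, threshold=thr, rng=rng)
                    for _ in range(500)]
        p_conflict = np.mean([o == "conflict" for o, _ in outcomes])
        mean_rt = np.mean([t for _, t in outcomes])
        print(f"threshold={thr}: P('conflict')={p_conflict:.2f}, mean steps={mean_rt:.0f}")
    ```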

  18. Sequential analysis as a tool for detection of amikacin ototoxicity in the treatment of multidrug-resistant tuberculosis.

    Science.gov (United States)

    Vasconcelos, Karla Anacleto de; Frota, Silvana Maria Monte Coelho; Ruffino-Netto, Antonio; Kritski, Afrânio Lineu

    2018-04-01

    To investigate early detection of amikacin-induced ototoxicity in a population treated for multidrug-resistant tuberculosis (MDR-TB), by means of three different tests: pure-tone audiometry (PTA); high-frequency audiometry (HFA); and distortion-product otoacoustic emission (DPOAE) testing. This was a longitudinal prospective cohort study involving patients aged 18-69 years with a diagnosis of MDR-TB who had to receive amikacin for six months as part of their antituberculosis drug regimen for the first time. Hearing was assessed before treatment initiation and at two and six months after treatment initiation. Sequential statistics were used to analyze the results. We included 61 patients, but the final population consisted of 10 patients (7 men and 3 women) because of sequential analysis. Comparison of the test results obtained at two and six months after treatment initiation with those obtained at baseline revealed that HFA at two months and PTA at six months detected hearing threshold shifts consistent with ototoxicity. However, DPOAE testing did not detect such shifts. The statistical method used in this study makes it possible to conclude that, over the six-month period, amikacin-associated hearing threshold shifts were detected by HFA and PTA, and that DPOAE testing was not efficient in detecting such shifts.

  19. Detecting the contagion effect in mass killings; a constructive example of the statistical advantages of unbinned likelihood methods.

    Science.gov (United States)

    Towers, Sherry; Mubayi, Anuj; Castillo-Chavez, Carlos

    2018-01-01

    When attempting to statistically distinguish between a null and an alternative hypothesis, many researchers in the life and social sciences turn to binned statistical analysis methods, or methods that are simply based on the moments of a distribution (such as the mean, and variance). These methods have the advantage of simplicity of implementation, and simplicity of explanation. However, when null and alternative hypotheses manifest themselves in subtle differences in patterns in the data, binned analysis methods may be insensitive to these differences, and researchers may erroneously fail to reject the null hypothesis when in fact more sensitive statistical analysis methods might produce a different result when the null hypothesis is actually false. Here, with a focus on two recent conflicting studies of contagion in mass killings as instructive examples, we discuss how the use of unbinned likelihood methods makes optimal use of the information in the data; a fact that has been long known in statistical theory, but perhaps is not as widely appreciated amongst general researchers in the life and social sciences. In 2015, Towers et al published a paper that quantified the long-suspected contagion effect in mass killings. However, in 2017, Lankford & Tomek subsequently published a paper, based upon the same data, that claimed to contradict the results of the earlier study. The former used unbinned likelihood methods, and the latter used binned methods, and comparison of distribution moments. Using these analyses, we also discuss how visualization of the data can aid in determination of the most appropriate statistical analysis methods to distinguish between a null and alternate hypothesis. We also discuss the importance of assessment of the robustness of analysis results to methodological assumptions made (for example, arbitrary choices of number of bins and bin widths when using binned methods); an issue that is widely overlooked in the literature, but is critical
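
    The contrast the authors draw can be illustrated with a toy example: fitting an exponential rate to a small sample of inter-event times either by an unbinned maximum-likelihood fit or by a binned least-squares fit to histogram counts. This sketch illustrates the general point only and does not reproduce either study's analysis.

    ```python
    # Toy contrast between an unbinned maximum-likelihood fit and a binned fit of
    # an exponential rate for inter-event times.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import expon

    rng = np.random.default_rng(7)
    true_rate = 0.05                                    # events per day (hypothetical)
    waits = rng.exponential(1.0 / true_rate, size=60)   # small sample

    # Unbinned MLE: minimise the exact negative log-likelihood over all data points.
    nll = lambda rate: -np.sum(np.log(rate) - rate * waits)
    unbinned = minimize_scalar(nll, bounds=(1e-4, 1.0), method="bounded").x

    # Binned fit: least squares between histogram counts and the model's bin expectations.
    counts, edges = np.histogram(waits, bins=8)
    def binned_loss(rate):
        probs = expon.cdf(edges[1:], scale=1/rate) - expon.cdf(edges[:-1], scale=1/rate)
        return np.sum((counts - len(waits) * probs) ** 2)
    binned = minimize_scalar(binned_loss, bounds=(1e-4, 1.0), method="bounded").x

    print(f"unbinned MLE rate = {unbinned:.4f}, binned fit rate = {binned:.4f}")
    ```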

  20. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    Science.gov (United States)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on a sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
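
    The sequential hypotheses testing framework underlying the adaptive design follows the logic of Wald's sequential probability ratio test; a minimal Gaussian two-hypothesis SPRT is sketched below. The measurement model, sources/views and thresholds of the X-ray testbed are not reproduced here.

    ```python
    # Minimal Wald sequential probability ratio test (SPRT) between two Gaussian
    # hypotheses (benign vs. threat), stopping as soon as a boundary is crossed.
    import numpy as np
    from scipy.stats import norm

    def sprt(measurements, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        upper = np.log((1 - beta) / alpha)      # accept H1 (threat)
        lower = np.log(beta / (1 - alpha))      # accept H0 (benign)
        llr = 0.0
        for n, x in enumerate(measurements, start=1):
            llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", len(measurements)

    rng = np.random.default_rng(8)
    print(sprt(rng.normal(1.0, 1.0, 50)))   # typically decides H1 after a few samples
    print(sprt(rng.normal(0.0, 1.0, 50)))   # typically decides H0 after a few samples
    ```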

  1. Green offal inspection of cattle, small ruminants and pigs in the United Kingdom: Impact assessment of changes in the inspection protocol on likelihood of detection of selected hazards.

    Science.gov (United States)

    Blagojevic, Bojan; Dadios, Nikolaos; Reinmann, Karin; Guitian, Javier; Stärk, Katharina D C

    2015-06-01

    The changes in detection of selected public and animal health as well as welfare hazards due to the change in current inspection of green offal in cattle, small ruminants and pigs were assessed. With respect to public health and animal health, the conditional likelihood of detection with the current green offal inspection was found to be low for eleven out of the twenty-four selected hazard-species pairings and very low for the remaining thirteen pairings. This strongly suggests that the contribution of current green offal inspection to risk mitigation is very limited for public and animal health hazards. The removal of green offal inspection would reduce the detection of some selected animal welfare conditions. For all selected public and animal health as well as welfare hazards, the reduced detection could be compensated with other pre-harvest, harvest and/or post-harvest control measures including existing meat inspection tasks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machines failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing faults detection based on bearing faults characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and fault indicator has been derived for fault severity measurement. The proposed bearing faults detection approach is assessed using simulated stator currents data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Evaluation of an Improved Branch-Site Likelihood Method for Detecting Positive Selection at the Molecular Level

    DEFF Research Database (Denmark)

    Zhang, Jianzhi; Nielsen, Rasmus; Yang, Ziheng

    2005-01-01

    Detecting positive Darwinian selection at the DNA sequence level has been a subject of considerable interest. However, positive selection is difficult to detect because it often operates episodically on a few amino acid sites, and the signal may be masked by negative selection. Several methods have been developed to test positive selection that acts on given branches (branch methods) or on a subset of sites (site methods). Recently, Yang, Z., and R. Nielsen (2002. Codon-substitution models for detecting molecular adaptation at individual sites along specific lineages. Mol. Biol. Evol. 19) ... of interest, while test 2 had acceptable false-positive rates and appeared robust against violations of model assumptions. As test 2 is a direct test of positive selection on the lineages of interest, it is referred to as the branch-site test of positive selection and is recommended for use in real data...

  4. Likelihood of Brine and CO2 Leak Detection using Magnetotellurics and Electrical Resistivity Tomography Methods

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Buscheck, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-11

    The US DOE National Risk Assessment Partnership (NRAP), funded through the Office of Fossil Energy and NETL, is developing methods to evaluate the effectiveness of monitoring techniques to detect brine and CO2 leakage from legacy wells into underground sources of drinking water (USDW) overlying a CO2 storage reservoir. As part of the NRAP Strategic Monitoring group, we have generated 140 simulations of aquifer impact data based on the Kimberlina site in California’s southern San Joaquin Basin, Kimberlina Rev. 1.1. CO2 buoyancy allows some of the stored CO2 to reach shallower permeable zones and is detectable with surface geophysical sensors. We are using this simulated data set to evaluate effectiveness of electrical resistivity tomography (ERT) and magnetotellurics (MT) for leak detection. The evaluation of additional monitoring methods such as pressure, seismic and gravity is underway through a multi-lab collaboration.

  5. Iterative User Interface Design for Automated Sequential Organ Failure Assessment Score Calculator in Sepsis Detection.

    Science.gov (United States)

    Aakre, Christopher Ansel; Kitson, Jaben E; Li, Man; Herasevich, Vitaly

    2017-05-18

    The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation, and the clerical burden of information retrieval makes this score ideal for automated calculation. The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR)-integrated, automated SOFA score calculator with a subsequent usability evaluation study. First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. The overall mean (standard deviation, SD) time to complete a manual SOFA score calculation was 61.6 (33) s. Among usability survey respondents (24% response rate, 12/50), our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians' needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as "apps." A user-centered design process and usability evaluation should be considered during creation of these tools. ©Christopher Ansel Aakre, Jaben E Kitson, Man Li, Vitaly Herasevich. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 18.05.2017.

  6. Vehicle speed detection based on gaussian mixture model using sequential of images

    Science.gov (United States)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent Transportation Systems are one of the important components in the development of smart cities. Detection of vehicle speed on the highway supports traffic engineering and management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows: the inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). First, we separate foreground and background using a Gaussian Mixture Model (GMM) in each frame. Then, in each frame, we calculate the location of the object and its centroid. Next, we determine the speed by computing the movement of the centroid across the sequence of frames. In the speed calculation, we only consider frames in which the centroid lies inside the predefined ROI. Finally, we convert the pixel displacement per unit time into km/hour. Validation of the system is done by comparing the speed calculated manually with that obtained by the system. In software testing, the system detects vehicle speed with accuracies ranging from 77.41% to 97.52%; detection results on real road footage are also reported together with the vehicles' true speeds.
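
    A minimal sketch of the final speed-conversion step, assuming the centroid track inside the ROI is already available and that a pixel-to-metre calibration factor for the road surface is known (both function and parameter names below are illustrative, not from the paper):

      import numpy as np

      def speed_kmh(centroids, frame_rate, metres_per_pixel):
          """Estimate speed from a sequence of object centroids inside the ROI.
          centroids        -- list of (x, y) pixel positions, one per frame
          frame_rate       -- frames per second of the video
          metres_per_pixel -- road-surface calibration factor (assumption)"""
          c = np.asarray(centroids, dtype=float)
          d_pixels = np.linalg.norm(np.diff(c, axis=0), axis=1)   # per-frame displacement
          d_metres = d_pixels.sum() * metres_per_pixel            # total distance in the ROI
          t_seconds = (len(c) - 1) / frame_rate                   # elapsed time in the ROI
          return (d_metres / t_seconds) * 3.6                     # m/s -> km/h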

  7. Evaluation of a novel PTFE material for separation and preconcentration of trace levels of metal ions in sequential injection (SI) and sequential injection lab-on-valve (SI-LOV) systems interfaced with detection by ETAAS

    DEFF Research Database (Denmark)

    Long, Xiangbao; Chomchoei, Roongrat; Gała, Piotr

    The operational characteristics of a novel PTFE bead material, granular Algoflon®, used for separation and preconcentration of metal ions via adsorption of on-line generated non-charged metal complexes, were evaluated in a sequential injection (SI) system furnished with an external packed column...... and in a sequential injection lab-on-valve (SI-LOV) system. Employed for the determination of cadmium(II), complexed with diethyldithiophosphate (DDPA), and detection by electrothermal atomic absorption spectrometry (ETAAS), its performance was compared to that of a previously used material, Aldrich PTFE, which had...... demonstrated that PTFE was the most promising for solid-state pretreatments. By comparing the two materials, the Algoflon® beads exhibited much higher sensitivity (1.6107 versus 0.2956 μg l-1 per integrated absorbance (s)), and better retention efficiency (82% versus 74%) and enrichment factor (20.8 versus 17...

  8. Sequential extraction of heavy metals in river sediments of an abandoned pyrite mining area: pollution detection and affinity series

    International Nuclear Information System (INIS)

    Pagnanelli, F.; Moscardini, E.; Giuliano, V.; Toro, L.

    2004-01-01

    In this paper heavy metal pollution at an abandoned Italian pyrite mine has been investigated by comparing the total concentrations and speciation of heavy metals (Fe, Cu, Mn, Zn, Pb and As) in a red mud sample and a river sediment. Acid digestions show that all the investigated heavy metals present larger concentrations in the sediment than in the tailing. A modified Tessier's procedure has been used to discriminate heavy metals bound to the organic fraction from those originally present in the mineral sulphide matrix and to detect a possible trend of metal mobilisation from red mud to river sediment. Sequential extractions on bulk and size-fractionated samples show that the sediment samples present larger percentage concentrations of the investigated heavy metals in the first extraction steps (I-IV), especially in the smaller size fractions, suggesting that heavy metals in the sediment are significantly bound by surface adsorption mechanisms. - Capsule: A modified Tessier's procedure, discriminating organic- and sulphide-bound metals, was used to detect pollutant mobilisation from red mud to river sediment in an abandoned pyrite mine

  9. CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-04

    Chirp signals have evolved primarily from radar/sonar signal processing applications specifically attempting to estimate the location of a target in surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its correlation approximates an impulse function. It is well-known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replicant of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system, that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and of course the ever present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
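
    A minimal matched-filter sketch of the classical detection/ranging step described above, using a linear chirp replica and cross-correlation; the sampling rate, chirp parameters, true delay and noise level are illustrative assumptions, and this is not the model-based processor developed in the report.

      import numpy as np
      from scipy.signal import chirp, correlate

      fs = 1.0e4                                     # sampling rate (Hz), assumed
      t = np.arange(0, 0.1, 1 / fs)
      replica = chirp(t, f0=500.0, f1=2000.0, t1=t[-1], method="linear")

      # Simulated measurement: the chirp delayed by 25 ms, buried in noise.
      delay_true = 0.025
      rx = np.zeros(int(0.2 * fs))
      i0 = int(delay_true * fs)
      rx[i0:i0 + len(replica)] += replica
      rx += 0.5 * np.random.randn(len(rx))

      # Cross-correlate with the replica; the peak lag estimates the echo time.
      xc = correlate(rx, replica, mode="valid")
      delay_hat = np.argmax(xc) / fs
      print(f"estimated delay: {delay_hat * 1e3:.1f} ms")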

  10. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    Science.gov (United States)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term, sustained time-series record. Because the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify seismic events in the time series through change-point detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event changes because of the seismic event occurrence. This means that SDAR aims to find statistical irregularities in the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models are automatically updated by SDAR for on-line processing. 3. Discounting property: SDAR introduces a discounting parameter to decrease the influence of present statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term records.
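
    A minimal sketch of a sequentially discounting AR(1) learner in the spirit described above: each new sample is scored by its negative log-likelihood under the current model, and the model parameters are then updated with exponential forgetting. The discounting value, the AR order of one, and the scoring details are assumptions for illustration, not the paper's implementation; in practice the score would be smoothed and thresholded to declare a change point (event onset).

      import numpy as np

      class SDAR1:
          """Sequentially discounting AR(1) model; the score of a new sample is
          its negative log-likelihood under the current (discounted) model."""

          def __init__(self, discount=0.01):
              self.r = discount          # discounting parameter (assumption)
              self.mu = 0.0              # running mean
              self.c0 = 1.0              # lag-0 covariance estimate
              self.c1 = 0.0              # lag-1 covariance estimate
              self.prev = 0.0

          def update(self, x):
              # Score the new sample before updating the model.
              a = self.c1 / self.c0 if self.c0 > 0 else 0.0      # AR(1) coefficient
              pred = self.mu + a * (self.prev - self.mu)
              var = max(self.c0 * (1 - a * a), 1e-12)            # innovation variance
              score = 0.5 * (np.log(2 * np.pi * var) + (x - pred) ** 2 / var)
              # Discounted (exponentially forgetting) parameter updates.
              self.mu = (1 - self.r) * self.mu + self.r * x
              self.c0 = (1 - self.r) * self.c0 + self.r * (x - self.mu) ** 2
              self.c1 = (1 - self.r) * self.c1 + self.r * (x - self.mu) * (self.prev - self.mu)
              self.prev = x
              return score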

  11. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods.

    Science.gov (United States)

    May, Michael R; Moore, Brian R

    2016-11-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified [Formula: see text] of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers-in order to clarify whether these methods can make reliable inferences from empirical datasets-and to theoretical biologists-in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
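
    The AIC comparison step itself is straightforward; a minimal sketch follows, assuming each candidate diversification model has already been fitted and reports its maximised log-likelihood and number of free parameters. The numbers below are purely hypothetical, and MEDUSA's search over shift placements is omitted.

      # Minimal AIC comparison over already-fitted candidate models.
      def aic(log_lik, n_params):
          return 2 * n_params - 2 * log_lik

      candidates = {                       # hypothetical fits, for illustration only
          "single rate":     {"log_lik": -410.2, "n_params": 2},
          "one rate shift":  {"log_lik": -404.9, "n_params": 5},
          "two rate shifts": {"log_lik": -403.8, "n_params": 8},
      }
      scores = {name: aic(**fit) for name, fit in candidates.items()}
      best = min(scores, key=scores.get)
      print(best, scores)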

  12. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    Science.gov (United States)

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type 1 error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type 1 error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type 1 error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level=0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type 1 error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests
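
    For reference, the Box-Cox transformation used above maps y to (y^lambda - 1)/lambda for lambda != 0 and to log(y) for lambda = 0, with lambda chosen by maximum likelihood. A minimal sketch follows; the chi-square phenotype is an illustrative stand-in for the simulated sib-pair data, not the study's actual simulation code.

      import numpy as np
      from scipy import stats

      # Skewed phenotypes, e.g. chi-square with 2 df as in the simulations above.
      y = np.random.chisquare(df=2, size=1000)

      # scipy picks lambda by maximising the profile log-likelihood.
      y_bc, lam_hat = stats.boxcox(y)
      print(f"lambda = {lam_hat:.3f}, skewness {stats.skew(y):.2f} -> {stats.skew(y_bc):.2f}")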

  13. Logic of likelihood

    International Nuclear Information System (INIS)

    Wall, M.J.W.

    1992-01-01

    The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory which predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset, with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs

  14. A failure detection and isolation system simulator

    International Nuclear Information System (INIS)

    Assumpcao Filho, E.O.; Nakata, H.

    1990-04-01

    A failure detection and isolation (FDI) system simulation program has been developed for IBM-PC microcomputers. The program, based on the sequential likelihood ratio testing method developed by A. Wald, was implemented with the Monte Carlo technique. The calculated failure detection rate compared favorably with wind-tunnel experimental data from redundant temperature sensors. (author)
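
    A minimal sketch of the Wald sequential probability ratio test underlying such a simulator, here for detecting a shift in the mean of Gaussian sensor residuals; the hypothesised means, noise level and error probabilities are illustrative assumptions, not the program's configuration.

      import numpy as np

      def wald_sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
          """Return ('H1', n) when a failure (mean shift to mu1) is declared,
          ('H0', n) when normal operation is accepted, or ('undecided', n)."""
          lower = np.log(beta / (1 - alpha))        # accept-H0 threshold
          upper = np.log((1 - beta) / alpha)        # accept-H1 threshold
          llr, n = 0.0, 0
          for n, x in enumerate(samples, start=1):
              # Gaussian log-likelihood ratio increment for mean mu1 versus mu0.
              llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
              if llr >= upper:
                  return "H1", n
              if llr <= lower:
                  return "H0", n
          return "undecided", n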

  15. Extraction and preconcentration of trace levels of cobalt using functionalized magnetic nanoparticles in a sequential injection lab-on-valve system with detection by electrothermal atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Wang Yang; Luo Xiaoyu; Tang Jie; Hu Xiaoya; Xu Qin; Yang Chun

    2012-01-01

    Graphical abstract: An approach to performing extraction and preconcentration employing functionalized magnetic particles for the determination of cobalt in the sequential injection lab-on-valve system using detection by electrothermal atomic absorption spectrometry. Highlights: ► New SPE method for cobalt separation/preconcentration was reported. ► Functionalized magnetic nanoparticles were used as adsorbent. ► Extraction, elution, and detection procedures were performed in the LOV system. ► This automatic extraction technique provided a good platform for metal analysis. - Abstract: A new approach to performing extraction and preconcentration employing functionalized magnetic nanoparticles for the determination of trace metals is presented. Alumina-coated iron oxide nanoparticles were synthesized and used as the solid support. The nanoparticles were functionalized with sodium dodecyl sulfate and used as adsorbents for solid phase extraction of the analyte. Extraction, elution, and detection procedures were performed sequentially in the sequential injection lab-on-valve (SI-LOV) system followed by electrothermal atomic absorption spectrometry (ETAAS). Mixtures of hydrophobic analytes were successfully extracted from solution using the synthesized magnetic adsorbents. The potential use of the established scheme was demonstrated by taking cobalt as a model analyte. Under the optimal conditions, the calibration curve showed excellent linearity in the concentration range of 0.01-5 μg L-1, and the relative standard deviation was 2.8% at the 0.5 μg L-1 level (n = 11). The limit of detection was 6 ng L-1 with a sampling frequency of 18 h-1. The present method has been successfully applied to cobalt determination in water samples and two certified reference materials.

  16. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  17. Longer duration of homelessness is associated with a lower likelihood of non-detectable plasma HIV-1 RNA viral load among people who use illicit drugs in a Canadian setting.

    Science.gov (United States)

    Loh, Jane; Kennedy, Mary Clare; Wood, Evan; Kerr, Thomas; Marshall, Brandon; Parashar, Surita; Montaner, Julio; Milloy, M-J

    2016-11-01

    Homelessness is common among people who use drugs (PWUD) and, for those living with HIV/AIDS, an important contributor to sub-optimal HIV treatment outcomes. This study aims to investigate the relationship between the duration of homelessness and the likelihood of plasma HIV-1 RNA viral load (VL) non-detectability among a cohort of HIV-positive PWUD. We used data from the ACCESS study, a long-running prospective cohort study of HIV-positive PWUD linked to comprehensive HIV clinical records including systematic plasma HIV-1 RNA VL monitoring. We estimated the longitudinal relationship between the duration of homelessness and the likelihood of exhibiting a non-detectable VL (i.e., below the assay detection limit) using mixed-effects modelling. Between May 1996 and June 2014, 922 highly active antiretroviral therapy-exposed participants were recruited and contributed 8188 observations. Of these, 4800 (59%) were characterized by non-detectable VL. Participants reported being homeless in 910 (11%) interviews (median: six months, interquartile range: 6-12 months). A longer duration of homelessness was associated with lower odds of VL non-detectability (adjusted odds ratio = 0.71 per six-month period of homelessness, 95% confidence interval: 0.60-0.83) after adjustment for age, ancestry, drug use patterns, engagement in addiction treatment, and other potential confounders. Longer durations of episodes of homelessness in this cohort of HIV-positive illicit drug users were associated with a lower likelihood of plasma VL non-detectability. Our findings suggest that interventions that seek to promptly house homeless individuals, such as Housing First approaches, might assist in maximizing the clinical and public health benefits of antiretroviral therapy among people living with HIV/AIDS.

  18. Maximum likelihood of phylogenetic networks.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

  19. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a

  20. Comparing a recursive digital filter with the moving-average and sequential probability-ratio detection methods for SNM portal monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.

    1993-01-01

    The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Nine test subjects each repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation sources or radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal
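
    A minimal sketch of the sequential probability-ratio alternative mentioned above, applied to Poisson counts from successive monitoring intervals; the background and source rates, the error probabilities, and the restart-at-zero behaviour after accepting the background hypothesis are illustrative assumptions rather than the monitor's actual parameters.

      import numpy as np

      def poisson_sprt_alarm(counts, bkg=50.0, src=15.0, alpha=1e-3, beta=0.05):
          """Scan successive counting intervals; return the interval index at which
          the source hypothesis is accepted, or None if no alarm is raised."""
          lam0, lam1 = bkg, bkg + src                      # counts/interval under H0 / H1
          upper = np.log((1 - beta) / alpha)               # alarm threshold
          lower = np.log(beta / (1 - alpha))               # reset (background) threshold
          llr = 0.0
          for i, k in enumerate(counts):
              llr += k * np.log(lam1 / lam0) - (lam1 - lam0)   # Poisson log-likelihood ratio
              if llr >= upper:
                  return i                                  # alarm: source present
              if llr <= lower:
                  llr = 0.0                                 # accept background and restart
          return None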

  1. Rapid Determination of Plutonium Isotopes in Environmental Samples Using Sequential Injection Extraction Chromatography and Detection by Inductively Coupled Plasma Mass Spectrometry

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2009-01-01

    This article presents an automated method for the rapid determination of 239Pu and 240Pu in various environmental samples. The analytical method involves the in-line separation of Pu isotopes using extraction chromatography (TEVA) implemented in a sequential injection (SI) network, followed by detection of the isolated analytes with inductively coupled plasma mass spectrometry (ICP-MS). The method has been devised for the determination of Pu isotopes at environmentally relevant concentrations, whereby it has been successfully applied to the analysis of large volumes/amounts of samples, for example, 100-200 g of soil and sediment, 20 g of seaweed, and 200 L of seawater following analyte preconcentration. The investigation of the separation capability of the assembled SI system revealed that up to 200 g of soil or sediment can be treated using a column containing about 0.70 g of TEVA resin...

  2. Sequential use of indium-111 labeled monoclonal antibodies 96.5 and ZME-018 does not increase detection sensitivity for metastatic melanoma

    International Nuclear Information System (INIS)

    Frontiera, M.; Murray, J.L.; Lamki, L.

    1989-01-01

    Two indium-111 labeled anti-melanoma murine monoclonal antibodies (MAb), 96.5 and ZME-018, each recognizing separate antigens on melanoma cells, were administered intravenously to 17 patients with melanoma in a sequential fashion to determine whether: (1) additional tumor sites could be imaged with the combination compared to a single MAb; (2) the first MAb influenced the biodistribution and tumor localization of the second; and (3) significant toxicity occurred with the combination. Patients were randomized to receive either 96.5 followed by ZME-018, ZME-018 followed by 96.5, or each MAb followed by itself (controls). Infusions of the second MAb occurred 10 days after the first infusion. Gamma camera images were obtained 72 hours after each antibody infusion. There were 139 known metastatic sites, of which 72 lesions were localized by either MAb, for an overall sensitivity of 52%. The detection rate was higher when only lesions greater than 1.5 cm were considered. Imaging results were independent of MAb administration sequence: sensitivity did not differ significantly whether ZME-018 was given as the first or the second infusion (p = NS). Likewise, mean sensitivities using 96.5 as the first or second infusion were 48% and 66%, respectively (p = NS). There was not a significant number of sites detected by MAb 2 that were missed by MAb 1. A human anti-murine antibody (HAMA) response occurred in seven of eight patients studied; two patients who experienced toxicity had HAMA levels greater than 2000 ng/ml. We conclude that the use of these two murine anti-melanoma monoclonal antibodies given in sequential fashion did not significantly change the imaging sensitivity from that seen with each individual antibody

  3. Evaluation of a novel PTFE material for use as a means for separation and preconcentration of trace levels of metal ions in sequential injection (SI) and sequential injection lab-on-valve (SI-LOV) systems. Determination of cadmium (II) with detection by electrothermal atomic absorption spectrometry

    DEFF Research Database (Denmark)

    Long, Xiangbao; Chomchoei, Roongrat; Hansen, Elo Harald

    2004-01-01

    with an external packed column and in a sequential injection lab-on-valve (SI-LOV) system. Employed for the determination of cadmium(II), complexed with diethyldithiophosphate (DDPA), and detection by electrothermal atomic absorption spectrometry (ETAAS), its performance was compared to that of a previously used...

  4. Likelihood of early detection of breast cancer in relation to false-positive risk in life-time mammographic screening: population-based cohort study.

    Science.gov (United States)

    Otten, J D M; Fracheboud, J; den Heeten, G J; Otto, S J; Holland, R; de Koning, H J; Broeders, M J M; Verbeek, A L M

    2013-10-01

    Women require balanced, high-quality information on screening benefits and harms when making an informed decision about attending biennial mammographic screening. The cumulative risk of a false-positive recall and/or a (small) screen-detected or interval cancer over 13 consecutive screening examinations, for women aged 50 at the start of screening, was estimated using data from the Nijmegen programme, the Netherlands. Women who underwent 13 successive screens starting in 1975-1976 had a 5.3% cumulative chance of a screen-detected cancer, with a 4.2% risk of at least one false-positive recall. The risk of being diagnosed with an interval cancer was 3.7%. Two decades later, these estimates were 6.9%, 7.3% and 2.9%, respectively. The chance of detection of a small, favourable invasive breast cancer, anticipating a normal life expectancy, rose from 2.3% to 3.7%. Extrapolation to digital screening mammography indicates that the proportion of false-positive results will rise to 16%. Dutch women about to participate in the screening programme can be reassured that the chance of false-positive recall in the Netherlands is relatively low. A new screening policy and improved mammography have increased the detection of early screening carcinomas and lowered the risk of interval carcinoma.

  5. Nano-mole scale sequential signal assignment by 1 H-detected protein solid-state NMR

    KAUST Repository

    Wang, Songlin; Parthasarathy, Sudhakar; Xiao, Yiling; Nishiyama, Yusuke; Long, Fei; Matsuda, Isamu; Endo, Yuki; Nemoto, Takahiro; Yamauchi, Kazuo; Asakura, Tetsuo; Takeda, Mitsuhiro; Terauchi, Tsutomu; Kainosho, Masatsune; Ishii, Yoshitaka

    2015-01-01

    We present a 3D 1H-detected solid-state NMR (SSNMR) approach for main-chain signal assignments of 10-100 nmol of fully protonated proteins using ultra-fast magic-angle spinning (MAS) at ∼80 kHz by a novel spectral-editing method, which permits drastic spectral simplification. The approach offers ∼110 fold time saving over a traditional 3D 13C-detected SSNMR approach. This journal is © The Royal Society of Chemistry 2015.

  6. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

    Science.gov (United States)

    Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

    2008-01-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

  7. Likelihood-ratio-based biometric verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    2002-01-01

    This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.

  8. Likelihood Ratio-Based Biometric Verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal.
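
    A minimal sketch of likelihood-ratio verification for fixed-length feature vectors, assuming Gaussian user and population (background) densities; the density choice, the 2-D toy feature vector and the threshold are illustrative assumptions, not the densities used in the paper.

      import numpy as np
      from scipy.stats import multivariate_normal

      def verify(x, user_mean, user_cov, pop_mean, pop_cov, threshold=1.0):
          """Accept the claimed identity when p(x | user) / p(x | population) >= threshold."""
          lr = (multivariate_normal.pdf(x, mean=user_mean, cov=user_cov)
                / multivariate_normal.pdf(x, mean=pop_mean, cov=pop_cov))
          return lr >= threshold, lr

      # Illustrative 2-D feature vector and densities.
      accepted, lr = verify(np.array([0.4, 1.1]),
                            user_mean=np.array([0.5, 1.0]), user_cov=0.1 * np.eye(2),
                            pop_mean=np.zeros(2), pop_cov=np.eye(2))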

  9. Exploiting Sequential Injection on-line Solvent Extraction/Back Extraction with Detection by ETAAS or ICPMS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    Electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) are highly sensitive techniques for trace metal analyses. Nevertheless, separation/preconcentration procedures are often called for in order to overcome their inherent low matrix tolerances. With detection by ETAAS, separation/preconcentration by solvent extraction has enjoyed much use. However, this approach is not necessarily the optimal one since introduction of organic eluates directly into the graphite tube might lead to deteriorated reproducibility and lower sensitivity... This work presents an on-line SI-solvent extraction/back extraction procedure used in connection with detection by either ETAAS or ICPMS. Incorporating two newly designed dual-conical gravitational phase separators, its performance is demonstrated for the determination of various metals in reference materials.

  10. Exploiting sequential injection on-line solvent extraction/back extraction with detection by ETAAS and ICPMS

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    Electrothermal atomic absorption spectrometry (ETAAS) and inductively coupled plasma mass spectrometry (ICPMS) are highly sensitive techniques for trace metal analyses. Nevertheless, separation/preconcentration procedures are often called for in order to overcome their inherent low matrix tolerance. With detection by ETAAS, separation/preconcentration by solvent extraction has enjoyed much use. However, this approach is not necessarily the optimal one since introduction of organic eluates directly into the graphite tube might lead to deteriorated reproducibility and lower sensitivity. And for ICPMS... This work presents an on-line SI-solvent extraction/back extraction procedure used in connection with detection by either ETAAS or ICPMS. Incorporating two newly designed dual-conical gravitational phase separators, its performance is demonstrated for the determination of various metals in reference materials.

  11. Microchemical Plant in a Liquid Droplet: Plasmonic Liquid Marble for Sequential Reactions and Attomole Detection of Toxin at Microliter Scale.

    Science.gov (United States)

    Han, Xuemei; Koh, Charlynn Sher Lin; Lee, Hiang Kwee; Chew, Wee Shern; Ling, Xing Yi

    2017-11-15

    Miniaturizing the continuous multistep operations of a factory into a microchemical plant offers a safe and cost-effective approach to promote high-throughput screening in drug development and enforcement of industrial/environmental safety. While particle-assembled microdroplets in the form of liquid marble are ideal as microchemical plant, these platforms are mainly restricted to single-step reactions and limited to ex situ reaction monitoring. Herein, we utilize plasmonic liquid marble (PLM), formed by encapsulating liquid droplet with Ag nanocubes, to address these issues and demonstrate it as an ideal microchemical plant to conduct reaction-and-detection sequences on-demand in a nondisruptive manner. Utilizing a two-step azo-dye formation as our model reaction, our microchemical plant allows rapid and efficient diazotization of nitroaniline to form diazonium nitrobenzene, followed by the azo coupling of this intermediate with target aromatic compound to yield azo-dye. These molecular events are tracked in situ via SERS measurement through the plasmonic shell and further verified with in silico investigation. Furthermore, we apply our microchemical plant for ultrasensitive SERS detection and quantification of bisphenol A (BPA) with detection limit down to 10 amol, which is 50 000-fold lower than the BPA safety limit. Together with the protections offered by plasmonic shell against external environments, these collective advantages empower PLM as a multifunctional microchemical plant to facilitate small-volume testing and optimization of processes relevant in industrial and research contexts.

  12. Sex Differences in Detecting Sexual Infidelity : Results of a Maximum Likelihood Method for Analyzing the Sensitivity of Sex Differences to Underreporting.

    Science.gov (United States)

    Andrews, Paul W; Gangestad, Steven W; Miller, Geoffrey F; Haselton, Martie G; Thornhill, Randy; Neale, Michael C

    2008-12-01

    Despite the importance of extrapair copulation (EPC) in human evolution, almost nothing is known about the design features of EPC detection mechanisms. We tested for sex differences in EPC inference-making mechanisms in a sample of 203 young couples. Men made more accurate inferences (φmen = 0.66, φwomen = 0.46), and the ratio of positive errors to negative errors was higher for men than for women (1.22 vs. 0.18). Since some may have been reluctant to admit EPC behavior, we modeled how underreporting could have influenced these results. These analyses indicated that it would take highly sex-differentiated levels of underreporting by subjects with trusting partners for there to be no real sex difference. Further analyses indicated that men may be less willing to harbor unresolved suspicions about their partners' EPC behavior, which may explain the sex difference in accuracy. Finally, we estimated that women underreported their own EPC behavior (10%) more than men (0%).

  13. Likelihood devices in spatial statistics

    NARCIS (Netherlands)

    Zwet, E.W. van

    1999-01-01

    One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments

  14. Extended likelihood inference in reliability

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.

    1978-10-01

    Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist

  15. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods

    Directory of Open Access Journals (Sweden)

    Anthony Hoak

    2017-03-01

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  16. Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.

    Science.gov (United States)

    Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J

    2017-03-03

    We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.

  17. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...

  18. Exploiting flow injection and sequential injection for trace metal determinations in conjunction with detection by electrothermal atomic absorption spectrometry and inductively coupled plasma mass spectrometry

    DEFF Research Database (Denmark)

    Hansen, Elo Harald

    and selectivity. Either in order to separate/preconcentrate the analyte material, or because of the presence of potentially interfering matrix constituents. Such pretreatments are advantageously performed in flow injection (FI) or sequential injection (SI) manifolds, where all appropriate unit operations can...

  19. Development of an automated sequential injection on-line solvent extraction-back extraction procedure as demonstrated for the determination of cadmium with detection by electrothermal atomic absorption spectrometry

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2002-01-01

    An automated sequential injection (SI) on-line solvent extraction-back extraction separation/preconcentration procedure is described. Demonstrated for the assay of cadmium by electrothermal atomic absorption spectrometry (ETAAS), the analyte is initially complexed with ammonium pyrrolidinedithiocarbamate... .../preconcentration process of the ensuing sample. An enrichment factor of 21.4, a detection limit of 2.7 ng/l, along with a sampling frequency of 13 samples/h were obtained at a sample flow rate of 6.0 ml min-1. The precision (R.S.D.) at the 0.4 μg/l level was 1.8% as compared to 3.2% when quantifying the organic extractant...

  20. Unbinned likelihood analysis of EGRET observations

    International Nuclear Information System (INIS)

    Digel, Seth W.

    2000-01-01

    We present a newly-developed likelihood analysis method for EGRET data that defines the likelihood function without binning the photon data or averaging the instrumental response functions. The standard likelihood analysis applied to EGRET data requires the photons to be binned spatially and in energy, and the point-spread functions to be averaged over energy and inclination angle. The full-width half maximum of the point-spread function increases by about 40% from on-axis to 30° inclination, and depending on the binning in energy can vary by more than that in a single energy bin. The new unbinned method avoids the loss of information that binning and averaging cause and can properly analyze regions where EGRET viewing periods overlap and photons with different inclination angles would otherwise be combined in the same bin. In the poster, we describe the unbinned analysis method and compare its sensitivity with binned analysis for detecting point sources in EGRET data

  1. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why

  2. Ego involvement increases doping likelihood.

    Science.gov (United States)

    Ring, Christopher; Kavussanu, Maria

    2018-08-01

    Achievement goal theory provides a framework to help understand how individuals behave in achievement contexts, such as sport. Evidence concerning the role of motivation in the decision to use banned performance enhancing substances (i.e., doping) is equivocal on this issue. The extant literature shows that dispositional goal orientation has been weakly and inconsistently associated with doping intention and use. It is possible that goal involvement, which describes the situational motivational state, is a stronger determinant of doping intention. Accordingly, the current study used an experimental design to examine the effects of goal involvement, manipulated using direct instructions and reflective writing, on doping likelihood in hypothetical situations in college athletes. The ego-involving goal increased doping likelihood compared to no goal and a task-involving goal. The present findings provide the first evidence that ego involvement can sway the decision to use doping to improve athletic performance.

  3. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  4. Adaptive sequential controller

    Energy Technology Data Exchange (ETDEWEB)

    El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.

  5. Adaptive sequential controller

    Science.gov (United States)

    El-Sharkawi, Mohamed A.; Xing, Jian; Butler, Nicholas G.; Rodriguez, Alonso

    1994-01-01

    An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
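
    A minimal sketch of the adaptive timing idea described above: the switching command is issued ahead of the target zero crossing by an estimated breaker delay, and that estimate is corrected after each operation from the measured error between the target crossing and the actual switching instant. The nominal delay and the correction gain are illustrative assumptions, not values from the patent.

      class AdaptiveSwitchTimer:
          """Track the breaker's effective response delay so the contacts close at a
          voltage zero crossing (or open at a current zero crossing)."""

          def __init__(self, nominal_delay_ms=30.0, gain=0.5):
              self.delay_ms = nominal_delay_ms   # current estimate of the breaker delay
              self.gain = gain                   # fraction of the error corrected per operation

          def command_time(self, next_zero_ms):
              # Issue the close/open command this long before the target zero crossing.
              return next_zero_ms - self.delay_ms

          def record_result(self, target_zero_ms, actual_switch_ms):
              # If the contacts moved late, the delay was underestimated; increase it.
              error = actual_switch_ms - target_zero_ms
              self.delay_ms += self.gain * error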

  6. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaë l; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  7. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  8. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles

  9. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with respect to 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; there were large discrepancies between the estimated and experimental values. Additionally, we showed the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. The present study clarifies that sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)

  10. Outlier Detection in Nonlinear Regression with the Likelihood Displacement Statistical Method

    Directory of Open Access Journals (Sweden)

    Siti Tabi'atul Hasanah

    2012-11-01

    Full Text Available An outlier is an observation that differs markedly (is extreme) from the other observations, or data that do not follow the general pattern of the model. Outliers sometimes carry information that cannot be provided by the other data, which is why they should not simply be discarded; an outlier can also be an influential observation. Many methods can be used to detect outliers. Previous studies addressed outlier detection in linear regression; here, detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the likelihood displacement (LD) statistic, a method that assesses a suspected outlier by removing it and refitting the model. The parameters are estimated by the maximum likelihood method, yielding the maximum likelihood estimates. Applying the LD method identifies the observations suspected to be outliers. The accuracy of the LD method in detecting outliers is then shown by comparing the MSE obtained with LD to the MSE of the ordinary regression fit. The test statistic used is Λ; the initial hypothesis is rejected when the observation is shown to be an outlier.
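
    The abstract describes the likelihood displacement (LD) statistic only verbally. A minimal sketch for a multiplicative nonlinear model y = a·x^b·ε with lognormal error, where the MLE is obtained on the log scale and LD_i = 2[l(θ̂) − l(θ̂_(i))] is computed by deleting each case in turn, might look like this; the model, the data and the use of the largest LD as the flag are assumptions for illustration.

```python
import numpy as np

def fit_loglinear(x, y):
    """MLE of (log a, b) and sigma^2 for y = a * x**b * eps with lognormal eps."""
    X = np.column_stack([np.ones_like(x), np.log(x)])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    resid = np.log(y) - X @ coef
    return coef, np.mean(resid**2)

def loglik(x, y, coef, sigma2):
    """Gaussian log-likelihood of log y under the fitted parameters."""
    X = np.column_stack([np.ones_like(x), np.log(x)])
    resid = np.log(y) - X @ coef
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(resid**2) / sigma2

def likelihood_displacement(x, y):
    """LD_i = 2 * [ l(theta_hat) - l(theta_hat_(i)) ], both evaluated on the full data."""
    coef_full, s2_full = fit_loglinear(x, y)
    l_full = loglik(x, y, coef_full, s2_full)
    ld = np.empty(len(y))
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        coef_i, s2_i = fit_loglinear(x[keep], y[keep])   # case-deleted MLE
        ld[i] = 2.0 * (l_full - loglik(x, y, coef_i, s2_i))
    return ld

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(1, 10, 30)
    y = 2.0 * x**1.5 * np.exp(rng.normal(0, 0.1, 30))
    y[12] *= 3.0                                    # plant an outlier
    # The planted outlier (index 12) should have the largest LD.
    print(np.argmax(likelihood_displacement(x, y)))
```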

  11. Unbinned likelihood maximisation framework for neutrino clustering in Python

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)

    2016-07-01

    Although an astrophysical neutrino flux has been detected with IceCube, the sources of astrophysical neutrinos remain hidden up to now. Detection of a neutrino point source would be a smoking gun for hadronic processes and the acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient or point-like versus extended sources. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. By implementing source scenarios in a modular way, likelihood searches of various kinds can be set up in a user-friendly manner, without sacrificing speed or memory management.
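
    The framework itself is not reproduced in the record. As a sketch of the kind of unbinned likelihood such point-source searches maximise, the standard mixture form L(ns) = Π_i [ (ns/N)·S_i + (1 − ns/N)·B_i ] can be coded directly; the Gaussian point-spread function, flat background and flat-sky toy geometry below are simplifying assumptions, not details of the actual framework.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(ns, sig_pdf, bkg_pdf):
    """-log L(ns) for the mixture (ns/N)*S_i + (1 - ns/N)*B_i."""
    f = ns / len(sig_pdf)
    return -np.sum(np.log(f * sig_pdf + (1.0 - f) * bkg_pdf))

def fit_point_source(events, src_pos, sigma=1.0, bkg_area=90.0 * 90.0):
    """Fit the number of signal events ns for a single point-source hypothesis.

    events  -- (N, 2) array of reconstructed directions (degrees, flat-sky toy model)
    src_pos -- tested source position (degrees)
    """
    d2 = np.sum((events - src_pos) ** 2, axis=1)
    sig_pdf = np.exp(-0.5 * d2 / sigma**2) / (2.0 * np.pi * sigma**2)   # Gaussian PSF
    bkg_pdf = np.full(len(events), 1.0 / bkg_area)                      # flat background
    res = minimize_scalar(neg_log_likelihood, bounds=(0.0, len(events)),
                          args=(sig_pdf, bkg_pdf), method="bounded")
    # Test statistic: twice the log-likelihood ratio against the background-only fit.
    ts = 2.0 * (neg_log_likelihood(0.0, sig_pdf, bkg_pdf) - res.fun)
    return res.x, ts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bkg = rng.uniform(0.0, 90.0, size=(200, 2))          # toy background events
    sig = rng.normal([45.0, 45.0], 1.0, size=(10, 2))    # injected point source
    ns_hat, ts = fit_point_source(np.vstack([bkg, sig]), np.array([45.0, 45.0]))
    print(round(float(ns_hat), 1), round(float(ts), 1))
```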

  12. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
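
    As a toy numerical illustration (not taken from the paper) of how a likelihood ratio decision axis produces the mirror effect, consider an equal-variance Gaussian strength model in which the observer responds "old" whenever the log likelihood ratio exceeds a fixed criterion; strengthening the targets then raises the hit rate and lowers the false-alarm rate at the same time.

```python
import numpy as np

def rates(d_prime, criterion=0.0, n=200_000, seed=0):
    """Hit and false-alarm rates when responding 'old' iff log LR(x) > criterion.

    Familiarity x ~ N(d', 1) for targets and N(0, 1) for lures (equal variance),
    so log LR(x) = d' * x - d'**2 / 2.
    """
    rng = np.random.default_rng(seed)
    targets = rng.normal(d_prime, 1.0, n)
    lures = rng.normal(0.0, 1.0, n)
    log_lr = lambda x: d_prime * x - 0.5 * d_prime**2
    return (log_lr(targets) > criterion).mean(), (log_lr(lures) > criterion).mean()

if __name__ == "__main__":
    hr_weak, far_weak = rates(d_prime=1.0)       # weak-memory condition
    hr_strong, far_strong = rates(d_prime=2.0)   # strong-memory condition
    # Mirror effect: strengthening raises hits AND lowers false alarms.
    print(f"weak:   HR={hr_weak:.3f}  FAR={far_weak:.3f}")
    print(f"strong: HR={hr_strong:.3f}  FAR={far_strong:.3f}")
```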

  13. Constraint likelihood analysis for a network of gravitational wave detectors

    International Nuclear Information System (INIS)

    Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.

    2005-01-01

    We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.

  14. Speciation Analysis of 129I and 127I in Aerosols Using Sequential Extraction and Mass Spectrometry Detection

    DEFF Research Database (Denmark)

    Zhang, Luyuan; Hou, Xiaolin; Xu, Sheng

    2015-01-01

    A new analytical method has been developed for speciation analysis of 127I and 129I in aerosols collected on polypropylene (PP) filter paper. Iodide, iodate, NaOH-soluble iodine, and insoluble iodine were separated from aerosols using sequential extraction, chromatographic separation, and alkaline ashing, and measured using inductively coupled plasma mass spectrometry (ICP-MS) for 127I and accelerator mass spectrometry (AMS) for 129I. Parameters affecting the leaching efficiency and stability of iodine species, such as leaching time and temperature, amount of alkaline reagent for ashing, ashing temperature and time, and iodine protective agent, were investigated and optimized. It was observed that long-time water leaching would change inorganic iodine species due to photochemical oxidation of iodide on the PP filter surface. NaOH leaching can only extract less than 60% of iodine from the studied...

  15. Multiple and sequential data acquisition method: an improved method for fragmentation and detection of cross-linked peptides on a hybrid linear trap quadrupole Orbitrap Velos mass spectrometer.

    Science.gov (United States)

    Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L

    2013-02-05

    The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS(3)). On the basis of these results, MSDAM was created and assessed on the BS(3)-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides.

  16. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    International Nuclear Information System (INIS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-01-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)

  17. Subtracting and Fitting Histograms using Profile Likelihood

    CERN Document Server

    D'Almeida, F M L

    2008-01-01

    It is known that many interesting signals expected at the LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated by Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% confidence level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show a good performance and avoid the problem of negative values when subtracting histograms.

  18. Rapid isolation of plutonium in environmental solid samples using sequential injection anion exchange chromatography followed by detection with inductively coupled plasma mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Qiao Jixin, E-mail: jixin.qiao@risoe.d [Radiation Research Division, Riso National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Hou Xiaolin; Roos, Per [Radiation Research Division, Riso National Laboratory for Sustainable Energy, Technical University of Denmark, DK-4000 Roskilde (Denmark); Miro, Manuel [Department of Chemistry, Faculty of Sciences, University of the Balearic Islands, Carretera de Valldemossa km. 7.5, E-07122 Palma de Mallorca, Illes Balears (Spain)

    2011-01-31

    This paper reports an automated analytical method for rapid determination of plutonium isotopes ({sup 239}Pu and {sup 240}Pu) in environmental solid extracts. Anion exchange chromatographic columns were incorporated in a sequential injection (SI) system to undertake the automated separation of plutonium from matrix and interfering elements. The analytical results most distinctly demonstrated that the crosslinkage of the anion exchanger is a key parameter controlling the separation efficiency. AG 1-x4 type resin was selected as the most suitable sorbent material for analyte separation. Investigation of column size effect upon the separation efficiency revealed that small-sized (2 mL) columns sufficed to handle up to 50 g of environmental soil samples. Under the optimum conditions, chemical yields of plutonium exceeded 90% and the decontamination factors for uranium, thorium and lead ranged from 10{sup 3} to 10{sup 4}. The determination of plutonium isotopes in three standard/certified reference materials (IAEA-375 soil, IAEA-135 sediment and NIST-4359 seaweed) and two reference samples (Irish Sea sediment and Danish soil) revealed a good agreement with reference/certified values. The SI column-separation method is straightforward and less labor intensive as compared with batch-wise anion exchange chromatographic procedures. Besides, the automated method features low consumption of ion-exchanger and reagents for column washing and elution, with the consequent decrease in the generation of acidic waste, thus bearing green chemical credentials.

  19. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  20. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection, staying in a small neighborhood of it. We also show that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.

  1. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  2. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  3. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
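
    The paper's exact formulation is not given in the record; a minimal two-group version of a Laplace likelihood ratio test for equal scale, with group medians as location MLEs and a chi-square(1) reference distribution, might look like the sketch below (the grouping and the asymptotic reference distribution are the usual choices, assumed here).

```python
import numpy as np
from scipy.stats import chi2

def laplace_loglik(x, loc, scale):
    """Log-likelihood of a Laplace(loc, scale) sample."""
    return -len(x) * np.log(2.0 * scale) - np.sum(np.abs(x - loc)) / scale

def laplace_lrt_equal_scale(x1, x2):
    """LRT of H0: both groups share one Laplace scale vs H1: separate scales."""
    m1, m2 = np.median(x1), np.median(x2)                  # location MLEs
    b1 = np.mean(np.abs(x1 - m1))                           # per-group scale MLEs
    b2 = np.mean(np.abs(x2 - m2))
    b0 = (np.sum(np.abs(x1 - m1)) + np.sum(np.abs(x2 - m2))) / (len(x1) + len(x2))
    l1 = laplace_loglik(x1, m1, b1) + laplace_loglik(x2, m2, b2)
    l0 = laplace_loglik(x1, m1, b0) + laplace_loglik(x2, m2, b0)
    stat = 2.0 * (l1 - l0)
    return stat, chi2.sf(stat, df=1)                        # asymptotic p-value

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    g1 = rng.laplace(0.0, 1.0, 150)
    g2 = rng.laplace(0.0, 2.5, 150)       # heteroscedastic group
    print(laplace_lrt_equal_scale(g1, g2))
```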

  4. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
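
    MXLKID itself is an LRLTRAN code and is not reproduced here. The underlying idea, choosing the parameters of a dynamic model that maximise the likelihood of noisy measurements, can be sketched in Python for an assumed nonlinear system (logistic growth) with Gaussian measurement noise, in which case maximising the likelihood reduces to nonlinear least squares.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def simulate(params, t, x0=0.1):
    """Integrate the assumed nonlinear system dx/dt = r*x*(1 - x/K)."""
    r, K = params
    sol = solve_ivp(lambda _, x: r * x * (1.0 - x / K), (t[0], t[-1]),
                    [x0], t_eval=t, rtol=1e-8)
    return sol.y[0]

def neg_log_likelihood(params, t, y, sigma=0.05):
    """Gaussian measurement noise => NLL is a scaled sum of squared residuals."""
    resid = y - simulate(params, t)
    return 0.5 * np.sum(resid**2) / sigma**2 + len(y) * np.log(sigma)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 10.0, 50)
    y = simulate([0.8, 2.0], t) + rng.normal(0.0, 0.05, t.size)   # synthetic data
    fit = minimize(neg_log_likelihood, x0=[0.5, 1.0], args=(t, y),
                   method="Nelder-Mead")
    print(fit.x)    # should be close to the true (r, K) = (0.8, 2.0)
```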

  5. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  6. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  7. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  8. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  9. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  10. Case of small hepatocellular carcinoma detected only by dynamic sequential computed tomography with table incrementation during arterial portography (DSCTI-AP)

    Energy Technology Data Exchange (ETDEWEB)

    Nishijima, Hiroshi; Ida, Masahiro; Takayama, Shigeru; Yanaghi, Sekiya; Kuroda, Yuzuru [Fukui Saiseikai Hospital (Japan)

    1984-05-01

    A 62-year-old woman was admitted for liver cirrhosis with esophageal varices. Radionuclide liver scintigram, ultrasonogram, computed tomography and infusion hepatic angiography showed no signs of hepatocellular carcinoma (HCC). But DSCTI-AP detected a small solid mass lesion about 1.5 cm in diameter in the right lobe of the liver. It was difficult to diagnose this lesion as HCC because of the low titer of serum α1-fetoprotein. After esophageal transection for the esophageal varices, she was followed as an outpatient. Six months later, serum α1-fetoprotein was slightly elevated (130 ng/ml), and all imaging methods were carried out for the detection of HCC. Ultrasonogram, computed tomography, infusion hepatic angiography and DSCTI-AP all showed hepatocellular carcinoma, about 4.5 cm in diameter, at the same site where a small nodule had been visualized by DSCTI-AP performed 5 months earlier. This case suggests that DSCTI-AP is a sensitive and useful method for the detection of some types of small liver cancer.

  11. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  12. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  13. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
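
    The optimal maximum likelihood window derived in the paper is not given in the record. As a baseline sketch of time-delay estimation from the peak of the cross-correlation of two leak-noise signals, the following uses an ordinary Hann window before correlation as a stand-in for the paper's ML weighting; the sampling rate, delay and noise level are made-up values.

```python
import numpy as np

def estimate_delay(x, y, fs, window=np.hanning):
    """Estimate the delay of y relative to x (seconds) from the peak of the
    windowed cross-correlation, computed via the FFT with zero padding."""
    n = len(x)
    w = window(n)
    nfft = 2 * n
    X = np.fft.rfft(x * w, nfft)
    Y = np.fft.rfft(y * w, nfft)
    cc = np.fft.irfft(np.conj(X) * Y, nfft)
    cc = np.concatenate((cc[-(n - 1):], cc[:n]))   # reorder to lags -(n-1) .. n-1
    return (np.argmax(cc) - (n - 1)) / fs

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    fs, n, true_delay = 1000.0, 4096, 25                  # delay in samples
    s = rng.normal(size=n + 200)                          # broadband "leak noise"
    x = s[200:200 + n] + 0.2 * rng.normal(size=n)         # sensor 1
    y = s[200 - true_delay:200 - true_delay + n] + 0.2 * rng.normal(size=n)  # sensor 2
    print(estimate_delay(x, y, fs) * fs)                  # ~ 25 samples
```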

  14. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ{sup 0}{sub 1}, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m{sub χ{sup 0}{sub 1}} 0) but the scalar mass m{sub 0} is poorly constrained. In the wino-LSP case, m{sub 3/2} is constrained to about 900 TeV and m{sub χ{sup 0}{sub 1}} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m{sub 3/2} has just a lower limit >or similar 650 TeV (>or similar 480 TeV) and m{sub χ{sup 0}{sub 1}} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2){sub μ}, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ{sup 0}{sub 1} contributes only a fraction of the cold DM density, future LHC E{sub T}-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B{sub s,d} → μ{sup +}μ{sup -}) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  15. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  16. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  17. Sequential decay of Reggeons

    International Nuclear Information System (INIS)

    Yoshida, Toshihiro

    1981-01-01

    Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of the creation probabilities of anti s s to anti u u (anti d d) are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)

  18. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and 2 of them are constrained and correlated.

  19. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB), suitable for 2D and 3D imaging, is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF

  20. Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.

    Science.gov (United States)

    O'Connor, B P

    1999-11-01

    This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
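
    The SAS and SPSS sources are not included in the record. A compact Python sketch of the core lag-1 computations named in the abstract (transitional frequencies, expected transitional frequencies, transitional probabilities and adjusted residuals) might look like the following; the code categories, the toy code stream and the adjusted-residual form are assumptions for illustration.

```python
import numpy as np

def lag1_stats(codes, categories):
    """Lag-1 transitional frequencies, expected frequencies, transitional
    probabilities and adjusted residuals for a stream of behavioural codes."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    freq = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):
        freq[index[a], index[b]] += 1.0
    n = freq.sum()
    row = freq.sum(axis=1, keepdims=True)
    col = freq.sum(axis=0, keepdims=True)
    expected = row * col / n
    trans_prob = freq / row
    # Adjusted residuals in the usual (Allison & Liker style) form, assumed here.
    var = expected * (1.0 - row / n) * (1.0 - col / n)
    adj_resid = (freq - expected) / np.sqrt(var)
    return freq, expected, trans_prob, adj_resid

if __name__ == "__main__":
    stream = list("ABABBACABBABCAABAB")            # toy coded interaction stream
    for name, m in zip(["freq", "expected", "p", "z"],
                       lag1_stats(stream, ["A", "B", "C"])):
        print(name, "\n", np.round(m, 2))
```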

  1. Comparing two Poisson populations sequentially: an application

    International Nuclear Information System (INIS)

    Halteman, E.J.

    1986-01-01

    Rocky Flats Plant in Golden, Colorado, monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed-sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to possess large potential savings in average time spent by employees in the monitoring process.

  2. Sequential test procedures for inventory differences

    International Nuclear Information System (INIS)

    Goldman, A.S.; Kern, E.A.; Emeigh, C.W.

    1985-01-01

    By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF6-to-U3O8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs
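
    The simulation code from the study is not included. The Page (CUSUM) statistic it evaluates on a sequence of inventory differences can be sketched as below, with the reference value k and decision threshold h treated as assumed tuning parameters.

```python
def page_cusum(diffs, k=0.5, h=5.0):
    """One-sided Page/CUSUM test for a positive shift in the mean of a sequence
    of (standardised) inventory differences.  Returns the index at which the
    statistic first crosses the threshold h, or None if it never does."""
    s = 0.0
    for i, d in enumerate(diffs):
        s = max(0.0, s + d - k)       # reset at zero; accumulate excess over k
        if s > h:
            return i
    return None

if __name__ == "__main__":
    import random
    random.seed(0)
    clean = [random.gauss(0.0, 1.0) for _ in range(30)]
    loss = [random.gauss(1.0, 1.0) for _ in range(30)]   # protracted-loss scenario
    # Alarm expected some 10-15 observations after the shift at index 30.
    print(page_cusum(clean + loss))
```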

  3. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the methods' variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  4. Detection of anorectal and cervicovaginal Chlamydia trachomatis infections following azithromycin treatment: prospective cohort study with multiple time-sequential measures of rRNA, DNA, quantitative load and symptoms.

    Directory of Open Access Journals (Sweden)

    Nicole H T M Dukers-Muijrers

    Full Text Available Determination of Chlamydia trachomatis (Ct) treatment success is hampered by current assessment methods, which involve a single post-treatment measurement only. Therefore, we evaluated Ct detection by applying multiple laboratory measures on time-sequential post-treatment samples. A prospective cohort study was established with azithromycin-treated (1000 mg) Ct patients (44 cervicovaginal and 15 anorectal cases). Each patient provided 18 self-taken samples pre-treatment and for 8 weeks post-treatment (response: 96%; 1,016 samples). Samples were tested for 16S rRNA (TMA), bacterial load (quantitative PCR; Chlamydia plasmid DNA) and type (serovar and multilocus sequence typing). Covariates (including behavior, pre-treatment load, anatomic site, symptoms, age, and menstruation) were tested for their potential association with positivity and load at 3-8 weeks using regression analyses controlling for repeated measures. By day 9, Ct positivity decreased to 20% and the median load to 0.3 inclusion-forming units (IFU) per ml (pre-treatment: 170 IFU/ml). Of the 35 cases who reported no sex, sex with a treated partner or safe sex with a new partner, 40% had detection, i.e. one or more positive samples from 3-8 weeks (same Ct type over time), indicating possible antimicrobial treatment failure. Cases showed intermittent positive detection, and the number of positive samples was higher in anorectal cases than in cervicovaginal cases. The highest observed bacterial load between 3-8 weeks post-treatment was 313 IFU/ml, yet the majority (65%) of positive samples showed a load of ≤ 2 IFU/ml. Pre-treatment load was found to be associated with later load in anorectal cases. A single test at 3-8 weeks post-treatment frequently misses Ct. Detection reveals intermittent low loads, with an unknown risk of later complications or transmission. These findings warrant critical re-evaluation of the clinical management of single-dose azithromycin-treated Ct patients and fuel the debate

  5. The composite sequential clustering technique for analysis of multispectral scanner data

    Science.gov (United States)

    Su, M. Y.

    1972-01-01

    The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
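
    The original program is described only at a high level. Its second stage, a generalized K-means pass that iteratively refines a set of initial clusters produced by the sequential stage, can be sketched as follows; here the initial cluster centres are simply assumed to be given.

```python
import numpy as np

def kmeans_refine(X, init_centers, max_iter=100, tol=1e-6):
    """Iteratively refine initial cluster centres (e.g. from a sequential
    variance-analysis pass) with standard K-means updates."""
    centers = np.asarray(init_centers, dtype=float).copy()
    for _ in range(max_iter):
        # Assign every sample to its nearest centre.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(len(centers))])
        if np.max(np.abs(new_centers - centers)) < tol:
            break
        centers = new_centers
    return labels, centers

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(m, 0.3, size=(50, 4))          # 4 "spectral bands"
                   for m in (0.0, 1.5, 3.0)])
    labels, centers = kmeans_refine(X, init_centers=X[[0, 60, 120]])
    print(np.bincount(labels))        # roughly 50 samples per cluster
```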

  6. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    Energy Technology Data Exchange (ETDEWEB)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
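
    The COSI analysis is far more involved (instrument response, non-ideal detector effects), but the core idea of an unbinned MLE on azimuthal scattering angles can be illustrated with the idealised modulation pdf p(φ) = [1 + μ cos 2(φ − η)]/(2π), where μ and η stand in for the modulation amplitude and polarization angle; all numbers below are made up.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, phi):
    """Unbinned -log L for p(phi) = (1 + mu*cos(2*(phi - eta))) / (2*pi)."""
    mu, eta = params
    p = (1.0 + mu * np.cos(2.0 * (phi - eta))) / (2.0 * np.pi)
    if np.any(p <= 0.0):              # keep the pdf valid (|mu| <= 1)
        return np.inf
    return -np.sum(np.log(p))

def sample_azimuths(n, mu, eta, rng):
    """Draw azimuthal scattering angles by rejection sampling."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
        keep = rng.uniform(0.0, 1.0 + mu, 2 * n) < 1.0 + mu * np.cos(2.0 * (phi - eta))
        out = np.concatenate([out, phi[keep]])
    return out[:n]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    phi = sample_azimuths(5000, mu=0.4, eta=0.6, rng=rng)
    fit = minimize(neg_log_likelihood, x0=[0.2, 0.0], args=(phi,),
                   method="Nelder-Mead")
    # Amplitude and angle, ~ (0.4, 0.6) up to the pi-periodicity in eta.
    print(fit.x)
```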

  7. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  8. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  9. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby improve precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today...
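
    The estimation context in the paper is a simulated likelihood for mixed (random-parameter) models. The antithetic idea itself is easy to show in isolation: for every uniform draw u used to generate a mixed parameter, also use 1 − u, so the implied normal draws come in ± pairs and the simulated distribution stays symmetric. The integrand below is an arbitrary stand-in for a simulated choice probability.

```python
import numpy as np
from scipy.stats import norm

def simulated_expectation(g, n_draws, antithetic=True, seed=0):
    """Monte Carlo estimate of E[g(Z)], Z ~ N(0,1), optionally with antithetic pairs."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n_draws // 2 if antithetic else n_draws)
    if antithetic:
        u = np.concatenate([u, 1.0 - u])       # each draw paired with its mirror
    z = norm.ppf(u)                             # symmetric +/- pairs of normal draws
    return np.mean(g(z))

if __name__ == "__main__":
    # Stand-in for a simulated probability that depends on a mixed parameter.
    g = lambda z: 1.0 / (1.0 + np.exp(-(0.5 + z)))
    for anti in (False, True):
        est = [simulated_expectation(g, 200, antithetic=anti, seed=s)
               for s in range(500)]
        print("antithetic" if anti else "plain     ",
              round(float(np.std(est)), 5))     # smaller spread with antithetic draws
```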

  10. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  11. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable

  12. Highly selective micro-sequential injection lab-on-valve (muSI-LOV) method for the determination of ultra-trace concentrations of nickel in saline matrices using detection by electrothermal atomic absorption spectrometry.

    Science.gov (United States)

    Long, Xiangbao; Miró, Manuel; Jensen, Rikard; Hansen, Elo Harald

    2006-10-01

    A highly selective procedure is proposed for the determination of ultra-trace level concentrations of nickel in saline aqueous matrices exploiting a micro-sequential injection Lab-On-Valve (muSI-LOV) sample pretreatment protocol comprising bead injection separation/pre-concentration and detection by electrothermal atomic absorption spectrometry (ETAAS). Based on the dimethylglyoxime (DMG) reaction used for nickel analysis, the sample, as contained in a pH 9.0 buffer, is, after on-line merging with the chelating reagent, transported to a reaction coil attached to one of the external ports of the LOV to assure sufficient reaction time for the formation of the Ni(DMG)(2) chelate. The non-ionic coordination compound is then collected in a renewable micro-column packed with a reversed-phase copolymeric sorbent [namely, poly(divinylbenzene-co-N-vinylpyrrolidone)] containing a balanced ratio of hydrophilic and lipophilic monomers. Following elution by a 50-muL methanol plug in an air-segmented modality, the nickel is finally quantified by ETAAS. Under the optimized conditions and for a sample volume of 1.8 mL, a retention efficiency of 70 % and an enrichment factor of 25 were obtained. The proposed methodology showed a high tolerance to the commonly encountered alkaline earth matrix elements in environmental waters, that is, calcium and magnesium, and was successfully applied for the determination of nickel in an NIST standard reference material (NIST 1640-Trace elements in natural water), household tap water of high hardness and local seawater. Satisfactory recoveries were achieved for all spiked environmental water samples with maximum deviations of 6 %. The experimental results for the standard reference material were not statistically different from the certified value at a significance level of 0.05.

  13. Efficient Bit-to-Symbol Likelihood Mappings

    Science.gov (United States)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.
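
    The hardware algorithm itself is not described in the record, but the mappings it accelerates are standard. Below is a plain, unoptimised reference in Python using log-sum-exp for the symbol-to-bit direction and a per-bit sum of log-probabilities for the bit-to-symbol direction; the 8-point constellation, noise level and MSB-first bit labelling are arbitrary choices for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def symbol_to_bit_llrs(symbol_loglik):
    """Map per-symbol log-likelihoods to per-bit LLRs:
    LLR(b_k) = log sum_{s: bit_k(s)=0} p(r|s) - log sum_{s: bit_k(s)=1} p(r|s)."""
    m = int(np.log2(len(symbol_loglik)))            # bits per symbol
    symbols = np.arange(len(symbol_loglik))
    llrs = np.empty(m)
    for k in range(m):
        bit = (symbols >> (m - 1 - k)) & 1           # MSB-first bit labelling (assumed)
        llrs[k] = (logsumexp(symbol_loglik[bit == 0])
                   - logsumexp(symbol_loglik[bit == 1]))
    return llrs

def bit_to_symbol_logliks(bit_llrs):
    """Map per-bit LLRs back to per-symbol log-likelihoods (up to a constant),
    assuming the bits are independent a priori."""
    m = len(bit_llrs)
    llrs = np.asarray(bit_llrs, dtype=float)
    lp0 = -np.logaddexp(0.0, -llrs)                  # log P(bit = 0)
    lp1 = -np.logaddexp(0.0, llrs)                   # log P(bit = 1)
    symbols = np.arange(2 ** m)
    bits = (symbols[:, None] >> (m - 1 - np.arange(m))) & 1
    return np.where(bits == 0, lp0, lp1).sum(axis=1)

if __name__ == "__main__":
    # Toy 8-ary example: Gaussian channel, received value r, unit-spaced constellation.
    const = np.arange(8, dtype=float)
    r, sigma = 2.3, 0.8
    sym_ll = -0.5 * ((r - const) / sigma) ** 2
    llrs = symbol_to_bit_llrs(sym_ll)
    print(np.round(llrs, 2))                         # positive LLR => bit more likely 0
    print(np.round(bit_to_symbol_logliks(llrs), 2))
```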

  14. Sequential Scintigraphy in Renal Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)

    1969-05-15

    Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 {mu}Ci {sup 131}I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)

  15. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

    The applicability of the classical sequential probability ratio testing (SPRT) for early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this time for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied for both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied for mean values. (author).
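
    For reference, a minimal Python sketch of the classical SPRT that the OSST re-parametrizes is given below, here applied to testing the mean of a Gaussian signal with known variance; the thresholds follow Wald's approximations, and all numerical values are illustrative assumptions rather than parameters from the paper.

      import numpy as np

      def sprt_gaussian_mean(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
          """Classical Wald SPRT: H0 mean = mu0 vs H1 mean = mu1, known sigma.

          Returns ('H0' or 'H1' or None, number of samples used).
          """
          upper = np.log((1 - beta) / alpha)   # accept H1 above this threshold
          lower = np.log(beta / (1 - alpha))   # accept H0 below this threshold
          llr = 0.0
          for n, x in enumerate(samples, start=1):
              # log-likelihood ratio increment for one Gaussian observation
              llr += (x - mu0) ** 2 / (2 * sigma**2) - (x - mu1) ** 2 / (2 * sigma**2)
              if llr >= upper:
                  return "H1", n
              if llr <= lower:
                  return "H0", n
          return None, len(samples)            # no decision within the available data

      rng = np.random.default_rng(0)
      print(sprt_gaussian_mean(rng.normal(0.5, 1.0, size=200), mu0=0.0, mu1=0.5, sigma=1.0))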

  16. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  17. Improved Likelihood Function in Particle-based IR Eye Tracking

    DEFF Research Database (Denmark)

    Satria, R.; Sorensen, J.; Hammoud, R.

    2005-01-01

    In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using cascaded classifier trained using adaboost and adapted to IR eye images. Experiments show good...

  18. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.

  19. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were

  20. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
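
    Since the abstract is truncated, the sketch below is only a generic illustration of a sequential approximate optimization loop in Python (it does not reproduce the framework's actual classes or interfaces): fit a cheap local surrogate, minimize it within a move limit, re-evaluate the true function at the candidate, and repeat; the objective and all settings are made up.

      import numpy as np

      def expensive_objective(x):          # stand-in for a costly simulation
          return (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)

      def sao_minimize(f, x0, move_limit=0.5, n_cycles=10, h=1e-3):
          """Generic sequential approximate optimization loop (illustrative only)."""
          x = x0
          for _ in range(n_cycles):
              # Build a local quadratic surrogate from finite-difference derivatives
              f0 = f(x)
              g = (f(x + h) - f(x - h)) / (2 * h)
              H = (f(x + h) - 2 * f0 + f(x - h)) / h**2
              # Minimize the surrogate within the move limit (a simple trust region)
              step = -g / H if H > 0 else -np.sign(g) * move_limit
              step = np.clip(step, -move_limit, move_limit)
              x = x + step
          return x

      print(sao_minimize(expensive_objective, x0=-2.0))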

  1. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  2. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  3. Review of Elaboration Likelihood Model of persuasion

    OpenAIRE

    藤原, 武弘; 神山, 貴弥

    1989-01-01

    This article mainly introduces the Elaboration Likelihood Model (ELM), proposed by Petty & Cacioppo, that is, a general attitude change theory. ELM postulates two routes to persuasion: central and peripheral. Attitude change via the central route is viewed as resulting from a diligent consideration of the issue-relevant information presented. On the other hand, attitude change via the peripheral route is viewed as resulting from peripheral cues in the persuasion context. Secondly we compare these tw...

  4. The impact of eyewitness identifications from simultaneous and sequential lineups.

    Science.gov (United States)

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  5. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
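
    As a loose illustration of the sequential idea described above (not the authors' physics or signal-processing models), the Python sketch below updates the posterior probability of a source-present hypothesis after every time-bin count for Poisson-distributed data, and declares a detection as soon as the posterior crosses a threshold; the rates, prior, and threshold are assumptions.

      import numpy as np
      from scipy.stats import poisson

      def sequential_detection(counts, bkg_rate, src_rate, prior=0.5, threshold=0.999):
          """Update P(source | data) after each time-bin count; stop when justified.

          counts   : iterable of counts per (fixed-length) time bin
          bkg_rate : expected background counts per bin under H0
          src_rate : additional expected counts per bin when a source is present (H1)
          """
          log_odds = np.log(prior / (1 - prior))
          posterior = prior
          for n, k in enumerate(counts, start=1):
              # add this bin's log-likelihood ratio to the running posterior odds
              log_odds += poisson.logpmf(k, bkg_rate + src_rate) - poisson.logpmf(k, bkg_rate)
              posterior = 1.0 / (1.0 + np.exp(-log_odds))
              if posterior >= threshold:
                  return n, posterior        # detection declared after n bins
          return None, posterior             # never statistically justified

      rng = np.random.default_rng(1)
      print(sequential_detection(rng.poisson(7.5, size=100), bkg_rate=5.0, src_rate=2.5))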

  6. Determination of trace metal ions via on-line separation and preconcentration by means of chelating Sepharose beads in a sequential injection lab-on-valve (SI-LOV) system coupled to electrothermal atomic absorption spectrometric detection

    DEFF Research Database (Denmark)

    Long, Xiangbao; Hansen, Elo Harald; Miró, Manuel

    2005-01-01

    The analytical performance of an on-line sequential injection lab-on-valve (SI-LOV) system using chelating Sepharose beads as sorbent material for the determination of ultra trace levels of Cd(II), Pb(II) and Ni(II) by electrothermal atomic absorption spectrometry (ETAAS) is described and discussed...

  7. A sequential/parallel track selector

    CERN Document Server

    Bertolino, F; Bressani, Tullio; Chiavassa, E; Costa, S; Dellacasa, G; Gallio, M; Musso, A

    1980-01-01

    A medium-speed (approximately 1 µs) hardware pre-analyzer for the selection of events detected in four planes of drift chambers in the magnetic field of the Omicron Spectrometer at the CERN SC is described. Specific geometrical criteria determine patterns of hits in the four planes of vertical wires that have to be recognized and that are stored as patterns of '1's in random access memories. Pairs of good hits are found sequentially, then the RAMs are used as look-up tables. (6 refs).

  8. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  9. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  10. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-01-07

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  11. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  12. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of ''statistical inference'' are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats concerning sequential designs are considered, especially in relation to utilitarianism

  13. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation.

  14. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean a

  15. The likelihood of Latino women to seek help in response to interpersonal victimization: An examination of individual, interpersonal and sociocultural influences

    Directory of Open Access Journals (Sweden)

    Chiara Sabina

    2014-07-01

    Full Text Available Help-seeking is a process that is influenced by individual, interpersonal, and sociocultural factors. The current study examined these influences on the likelihood of seeking help (police, pressing charges, medical services, social services, and informal help) for interpersonal violence among a national sample of Latino women. Women living in high-density Latino neighborhoods in the USA were interviewed by phone in their preferred language. Women reported being, on average, between "somewhat likely" and "very likely" to seek help should they experience interpersonal victimization. Sequential linear regression results indicated that individual (age, depression), interpersonal (having children, past victimization), and sociocultural factors (immigrant status, acculturation) were associated with the self-reported likelihood of seeking help for interpersonal violence. Having children was consistently related to a greater likelihood to seek all forms of help. Overall, women appear to respond to violence in ways that reflect their ecological context. Help-seeking is best understood within a multi-layered and dynamic context.

  16. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  17. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  18. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four different models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated using historical data covering the domestic economy and the foreign economy, which is represented by countries of the Eurozone. Because the forecast accuracies of the individual models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
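
    As a simplified sketch of the predictive-likelihood weighting scheme mentioned above (the models and data of the paper are not reproduced), the Python snippet below weights competing Gaussian density forecasts by their log predictive likelihoods accumulated over a hold-out window and contrasts the result with equal weighting; all numbers are illustrative.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical one-step-ahead Gaussian density forecasts from three models
      # over a hold-out window, together with the realized outcomes.
      means  = np.array([[1.9, 2.1, 2.0, 2.3],    # model A
                         [1.5, 1.6, 1.8, 1.7],    # model B
                         [2.4, 2.6, 2.2, 2.5]])   # model C
      stds   = np.array([[0.4] * 4, [0.6] * 4, [0.5] * 4])
      actual = np.array([2.0, 2.2, 2.1, 2.4])

      # Accumulate the log predictive likelihood of each model over the window
      log_pred_lik = norm.logpdf(actual, loc=means, scale=stds).sum(axis=1)

      # Combination weights proportional to exp(log predictive likelihood)
      weights = np.exp(log_pred_lik - log_pred_lik.max())
      weights /= weights.sum()
      print(dict(zip("ABC", np.round(weights, 3))))

      # Combined point forecast for the next period (equal weighting shown for contrast)
      next_means = np.array([2.1, 1.7, 2.5])
      print("weighted:", weights @ next_means, "equal:", next_means.mean())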

  19. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  20. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  1. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ, and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)]·exp[−(1/λ + 1/θ)t] with t > 0. Also, the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
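
    A minimal Python sketch of the resulting point estimate is given below: under the exponential models, with λ and θ read as the mean time-to-failure and mean time-to-repair (as the quoted formula implies), their maximum likelihood estimates are the sample means, which are plugged into A(t); the failure/repair data are made up, and the paper's simulation-interval machinery is not reproduced.

      import numpy as np

      def availability_mle(x_fail, y_repair, t):
          """Plug-in ML estimate of instantaneous availability A(t).

          Interprets lambda and theta as mean time-to-failure and mean time-to-repair,
          consistent with the quoted formula; their MLEs are then the sample means.
          """
          lam = np.mean(x_fail)      # estimated mean time-to-failure
          theta = np.mean(y_repair)  # estimated mean time-to-repair
          steady = lam / (lam + theta)
          return steady + (theta / (lam + theta)) * np.exp(-(1 / lam + 1 / theta) * t)

      # Made-up failure/repair history of n cycles (hours)
      x = np.array([310.0, 250.0, 400.0, 290.0, 350.0])
      y = np.array([12.0, 8.0, 15.0, 10.0, 9.0])
      print("A(24 h) =", availability_mle(x, y, 24.0),
            " A(inf) =", np.mean(x) / (np.mean(x) + np.mean(y)))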

  2. A generally applicable sequential alkaline phosphatase immunohistochemical double staining

    NARCIS (Netherlands)

    van der Loos, Chris M.; Teeling, Peter

    2008-01-01

    A universal type of sequential double alkaline phosphatase immunohistochemical staining is described that can be used for formalin-fixed, paraffin-embedded and cryostat tissue sections from human and mouse origin. It consists of two alkaline phosphatase detection systems including enzymatic

  3. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  4. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    Science.gov (United States)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.

  5. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  6. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    Science.gov (United States)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, a technique also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, scan distortion unavoidably exists to varying extents. To compensate for the scan distortion caused by the motion of the UGV, we integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Moreover, to reduce as much as possible the effect of dynamic objects, such as walking pedestrians, that often exist in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
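
    To make the map-learning step concrete, here is a heavily simplified Python sketch of the occupancy-update idea described above: the log-odds of grid cells at beam endpoints are increased and cells along each beam are decreased, so that transient objects such as pedestrians are gradually discounted; the grid size, increments, and crude ray-tracing are assumptions, and the multi-scale scan matching itself is omitted.

      import numpy as np

      class OccupancyLikelihoodMap:
          """Simplified log-odds occupancy grid (illustrative, 2D, axis-aligned)."""

          def __init__(self, size=200, resolution=0.05, hit=0.9, miss=-0.4):
              self.log_odds = np.zeros((size, size))
              self.res = resolution            # metres per cell
              self.hit, self.miss = hit, miss

          def _cell(self, xy):
              return tuple(np.clip((xy / self.res).astype(int), 0, self.log_odds.shape[0] - 1))

          def update(self, sensor_xy, endpoints_xy):
              for end in endpoints_xy:
                  # crude ray trace: sample points between sensor and endpoint as "free"
                  for s in np.linspace(0.0, 0.95, 20):
                      self.log_odds[self._cell(sensor_xy + s * (end - sensor_xy))] += self.miss
                  self.log_odds[self._cell(end)] += self.hit   # endpoint cell is "occupied"

          def probability(self):
              return 1.0 / (1.0 + np.exp(-self.log_odds))      # per-cell occupancy likelihood

      grid = OccupancyLikelihoodMap()
      hits = np.array([[3.0, 4.0], [3.1, 4.0], [2.9, 4.1]])
      for _ in range(5):                       # repeated scans reinforce static structure
          grid.update(np.array([1.0, 1.0]), hits)
      print(grid.probability().max())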

  7. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.

  8. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  9. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Alan, Brown; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed, causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; a final tiebreaker set, as currently used in the US Open, reduces the length of matches; and a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.

  10. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef M.

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
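
    The DILI construction itself is involved; as a much simpler, hedged point of comparison, the Python sketch below implements the preconditioned Crank-Nicolson (pCN) proposal, a related function-space sampler whose acceptance rate is likewise robust to the discretization dimension. It is not the authors' algorithm, and the Gaussian prior, toy likelihood, and step size are illustrative assumptions.

      import numpy as np

      def pcn_sampler(neg_log_lik, dim, n_steps=5000, beta=0.2, rng=None):
          """Preconditioned Crank-Nicolson MCMC for a Gaussian N(0, I) prior on R^dim.

          neg_log_lik : callable u -> Phi(u), the negative log-likelihood (misfit)
          beta        : step-size parameter in (0, 1]
          """
          rng = rng or np.random.default_rng(0)
          u = rng.standard_normal(dim)          # draw the initial state from the prior
          phi = neg_log_lik(u)
          samples = []
          for _ in range(n_steps):
              # pCN proposal: prior-preserving autoregressive move
              v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(dim)
              phi_v = neg_log_lik(v)
              # Acceptance probability depends only on the likelihood, not the prior
              if np.log(rng.uniform()) < phi - phi_v:
                  u, phi = v, phi_v
              samples.append(u.copy())
          return np.array(samples)

      # Toy "inverse problem": observe the mean of u with Gaussian noise
      data, noise = 0.7, 0.1
      neg_log_lik = lambda u: 0.5 * ((u.mean() - data) / noise) ** 2
      chain = pcn_sampler(neg_log_lik, dim=500)
      print(chain.mean(), chain[:, 0].std())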

  11. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  12. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output

  13. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
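
    A much-reduced sketch of the random sequential adsorption idea in Python is given below, using axis-aligned squares in a 2D periodic cell instead of the rotated 3D cubes studied in the paper; the side length, attempt budget, and absence of true saturation detection are simplifying assumptions.

      import numpy as np

      def rsa_aligned_squares(side=0.05, box=1.0, max_attempts=200_000, rng=None):
          """Random sequential adsorption of axis-aligned squares in a periodic box."""
          rng = rng or np.random.default_rng(0)
          centers = []
          for _ in range(max_attempts):
              candidate = rng.uniform(0.0, box, size=2)
              ok = True
              for c in centers:
                  # periodic separation; aligned squares overlap iff both |dx|, |dy| < side
                  d = np.abs(candidate - c)
                  d = np.minimum(d, box - d)
                  if d[0] < side and d[1] < side:
                      ok = False
                      break
              if ok:
                  centers.append(candidate)    # adsorbed particles are never removed
          return np.array(centers)

      packing = rsa_aligned_squares()
      fraction = len(packing) * 0.05**2 / 1.0**2
      print(f"{len(packing)} squares placed, packing fraction approx {fraction:.3f}")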

  14. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  15. Sequential Acral Lentiginous Melanomas of the Foot

    Directory of Open Access Journals (Sweden)

    Jiro Uehara

    2010-12-01

    Full Text Available A 64-year-old Japanese woman had a lightly brown-blackish pigmented macule (1.2 cm in diameter on the left sole of her foot. She received surgical excision following a diagnosis of acral lentiginous melanoma (ALM, which was confirmed histopathologically. One month after the operation, a second melanoma lesion was noticed adjacent to the grafted site. Histopathologically, the two lesions had no continuity, but HMB-45 and cyclin D1 double-positive cells were detected not only on aggregates of atypical melanocytes but also on single cells near the cutting edge of the first lesion. The unique occurrence of a sequential lesion of a primary melanoma might be caused by stimulated subclinical field cells during the wound healing process following the initial operation. This case warrants further investigation to establish the appropriate surgical margin of ALM lesions.

  16. Maximum likelihood estimation of the position of a radiating source in a waveguide

    International Nuclear Information System (INIS)

    Hinich, M.J.

    1979-01-01

    An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array, regardless of the array size

  17. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    While existing eye detection and tracking algorithms can work reasonably well in a controlled environment, they tend to perform poorly under real world imaging conditions where the lighting produces shadows and the person's eyes can be occluded by e.g. glasses or makeup. As a result, pixel clusters...... associated with the eyes tend to be grouped together with background-features. This problem occurs both for eye detection and eye tracking. Problems that especially plague eye tracking include head movement, eye blinking and light changes, all of which can cause the eyes to suddenly disappear. The usual...... approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to missed data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even...

  18. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
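
    The chirp model and the statistical transition probabilities are specific to the paper; purely as generic background, the Python sketch below shows maximum likelihood sequence estimation via the Viterbi algorithm for a toy two-tap intersymbol-interference channel with PAM-4 symbols, i.e., the same search principle applied to a much simpler memory model with assumed taps and noise.

      import numpy as np
      from itertools import product

      SYMBOLS = np.array([-3.0, -1.0, 1.0, 3.0])    # PAM-4 alphabet
      TAPS = np.array([1.0, 0.45])                   # toy 2-tap ISI channel (memory 1)

      def mlse_viterbi(received, sigma=0.3, pilot_state=0):
          """MLSE via the Viterbi algorithm; the state is the previous symbol index."""
          n = len(SYMBOLS)
          cost = np.full(n, np.inf)
          cost[pilot_state] = 0.0                    # a known pilot symbol precedes the data
          back = []
          for r in received:
              new_cost = np.full(n, np.inf)
              ptr = np.zeros(n, dtype=int)
              for prev, cur in product(range(n), range(n)):
                  expected = TAPS[0] * SYMBOLS[cur] + TAPS[1] * SYMBOLS[prev]
                  c = cost[prev] + (r - expected) ** 2 / (2 * sigma**2)  # negative log-likelihood
                  if c < new_cost[cur]:
                      new_cost[cur], ptr[cur] = c, prev
              back.append(ptr)
              cost = new_cost
          state = int(np.argmin(cost))               # most likely final symbol
          seq = [state]
          for ptr in reversed(back):
              state = int(ptr[state])
              seq.append(state)
          return SYMBOLS[np.array(seq[-2::-1])]      # decoded data symbols, pilot dropped

      rng = np.random.default_rng(2)
      tx = rng.choice(SYMBOLS, size=200)
      full = np.concatenate(([SYMBOLS[0]], tx))      # prepend the pilot
      rx = TAPS[0] * full[1:] + TAPS[1] * full[:-1] + rng.normal(0, 0.3, size=len(tx))
      print("symbol accuracy:", np.mean(mlse_viterbi(rx) == tx))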

  19. The behavior of the likelihood ratio test for testing missingness

    OpenAIRE

    Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert

    2003-01-01

    To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can contrast the different missingness mechanisms with each other, e.g., by likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missing mechanisms

  20. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
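
    To illustrate the flavor of the approach (the precise penalty used in the paper may differ), here is a small Python sketch of EM for a univariate two-component normal mixture in which the variance update includes an inverted-gamma penalty term, so the variances cannot collapse to zero; the hyperparameters and data are arbitrary assumptions.

      import numpy as np
      from scipy.stats import norm

      def penalized_em(x, k=2, a=2.0, b=0.05, n_iter=200, rng=None):
          """EM for a univariate normal mixture with an inverted-gamma variance penalty.

          The M-step update sigma^2 = (S + 2b) / (N_k + 2a + 2) maximizes the complete-data
          likelihood times an IG(a, b) prior, which keeps the variances strictly positive.
          """
          rng = rng or np.random.default_rng(0)
          w = np.full(k, 1.0 / k)
          mu = rng.choice(x, size=k, replace=False)
          var = np.full(k, np.var(x))
          for _ in range(n_iter):
              # E-step: responsibilities of each component for each observation
              dens = w * norm.pdf(x[:, None], mu, np.sqrt(var))
              resp = dens / dens.sum(axis=1, keepdims=True)
              nk = resp.sum(axis=0)
              # M-step with penalized variance update
              w = nk / len(x)
              mu = (resp * x[:, None]).sum(axis=0) / nk
              s = (resp * (x[:, None] - mu) ** 2).sum(axis=0)
              var = (s + 2 * b) / (nk + 2 * a + 2)
          return w, mu, var

      rng = np.random.default_rng(3)
      data = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 200)])
      print(penalized_em(data))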

  1. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a sequential determination method for the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA (R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to validating the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for separating 90Sr, 239Pu and 241Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig (R) series and the extraction chromatography sorbent DGA (R) Resin. In these experiments, four different sorbents were used in sequence for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126Sn separation using the TEVA (R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while the values for 126Sn were below the minimum detectable activities (MDA). The activity of 126Sn was determined after ingrowth of the daughter nuclide 126mSb on an HPGe gamma detector, with minimal contamination by gamma-interfering radionuclides and decontamination factors (Df) higher than 1400 for 60Co and 47000 for 137Cs. Based on the experience and results of these separation procedures, a complex method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in previous parts of the dissertation work. Application of the sequential separation method with the sorbents TEVA (R) Resin and Anion Exchange Resin to real samples of radioactive waste provided satisfactory results and an economical, time-saving, efficient method. (author)

  2. Separation and preconcentration of ultratrace levels of cadmium(II) in a sequential injection (SI) system with a PTFE packed column as a mimic sequential injection lab-on-valve (SI-LOV) system with renewable column employing detection by electrothermal atomic absorption spectrometry (ETAAS)

    DEFF Research Database (Denmark)

    Long, Xiangbao; Chomchoei, Roongrat; Gała, Piotr

    of cadmium(II) by detection with electrothermal atomic absorption spectrometry (ETAAS). The non-charged complex formed between the analyte and the chelating reagent diethyldithiophosphate (DDPA) was selectively adsorbed on the surface of the PTFE beads and eluted by ethanol before being directed...

  3. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

    A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_{nμ} and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of the resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17 - 10^-16 sec for medium and heavy nuclei and about 10^-20 sec for the intermediate resonances. (auth.)

  4. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  5. Planck intermediate results: XVI. Profile likelihoods for cosmological parameters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.

    2014-01-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...

  6. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  7. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio...... statistic r is of order √n. © 1992 Biometrika Trust....

  8. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  9. Likelihood analysis of parity violation in the compound nucleus

    International Nuclear Information System (INIS)

    Bowman, D.; Sharapov, E.

    1993-01-01

    We discuss the determination of the root mean-squared matrix element of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element and the errors assigned to the matrix element is stressed. We treat the situations where the spins of the p-wave resonances are not known and known, using experimental data and Monte Carlo techniques. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function

  10. The Bacterial Sequential Markov Coalescent.

    Science.gov (United States)

    De Maio, Nicola; Wilson, Daniel J

    2017-05-01

    Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e. , using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC)-an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is

  11. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  12. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing, and analysis of bite-wing radiographs acquired sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions over time. A radiographic positioner (XCP®) is modified with a rigid acrylic guide to achieve proper positioning of the X-ray equipment relative to the XCP® ring and its reorientation during the sequential radiographic process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 registries. Two radiographs of the same block of teeth were taken sequentially for each subject, with a minimum interval of 30 minutes between them, before placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software, and the measurements along the X and Y axes of both radiographs were compared. The intraclass correlation index (ICI) showed that the proposed method yields statistically related measurements (mm) on the X and Y axes for both sequential series of radiographs (p=0.01). The measures of central tendency and dispersion showed that the usual difference between the two measurements is negligible (mode 0.000; S = 0.083 and 0.109) and that the probability of occurrence of differing values is lower than expected. (author)

  13. The Origin of Sequential Chromospheric Brightenings

    Science.gov (United States)

    Kirk, M. S.; Balasubramaniam, K. S.; Jackiewicz, J.; Gilbert, H. R.

    2017-06-01

    Sequential chromospheric brightenings (SCBs) are often observed in the immediate vicinity of erupting flares and are associated with coronal mass ejections. Since their initial discovery in 2005, there have been several subsequent investigations of SCBs. These studies have used differing detection and analysis techniques, making it difficult to compare results between studies. This work employs the automated detection algorithm of Kirk et al. (Solar Phys. 283, 97, 2013) to extract the physical characteristics of SCBs in 11 flares of varying size and intensity. We demonstrate that the magnetic substructure within the SCB appears to have a significantly smaller area than the corresponding Hα emission. We conclude that SCBs originate in the lower corona around 0.1 R_{⊙} above the photosphere, propagate away from the flare center at speeds of 35 - 85 km s^{-1}, and have peak photosphere magnetic intensities of 148±2.9 G. In light of these measurements, we infer SCBs to be distinctive chromospheric signatures of erupting coronal mass ejections.

  14. Dark matter CMB constraints and likelihoods for poor particle physicists

    Energy Technology Data Exchange (ETDEWEB)

    Cline, James M.; Scott, Pat, E-mail: jcline@physics.mcgill.ca, E-mail: patscott@physics.mcgill.ca [Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 (Canada)

    2013-03-01

    The cosmic microwave background provides constraints on the annihilation and decay of light dark matter at redshifts between 100 and 1000, the strength of which depends upon the fraction of energy ending up in the form of electrons and photons. The resulting constraints are usually presented for a limited selection of annihilation and decay channels. Here we provide constraints on the annihilation cross section and decay rate, at discrete values of the dark matter mass m_χ, for all the annihilation and decay channels whose secondary spectra have been computed using PYTHIA in arXiv:1012.4515 (''PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection''), namely e, μ, τ, V → e, V → μ, V → τ, u, d, s, c, b, t, γ, g, W, Z and h. By interpolating in mass, these can be used to find the CMB constraints and likelihood functions from WMAP7 and Planck for a wide range of dark matter models, including those with annihilation or decay into a linear combination of different channels.
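
    The record above obtains limits at arbitrary masses by interpolating the tabulated constraints. A minimal sketch of that step, using an invented table of upper limits (log-log interpolation is a common convention and an assumption here, not something stated in the abstract):

    ```python
    import numpy as np

    # Hypothetical tabulated 95% CL upper limits on <sigma v> [cm^3/s]
    # at discrete dark matter masses [GeV]; real values would come from
    # the likelihood tables described in the record above.
    mass_grid = np.array([5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
    sigmav_limit = np.array([3e-27, 6e-27, 3e-26, 6e-26, 3e-25, 6e-25])

    def limit_at_mass(m_chi):
        """Interpolate the tabulated limit in log-log space."""
        return 10 ** np.interp(np.log10(m_chi),
                               np.log10(mass_grid),
                               np.log10(sigmav_limit))

    print(limit_at_mass(200.0))  # limit for a 200 GeV candidate
    ```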

  15. Dark matter CMB constraints and likelihoods for poor particle physicists

    International Nuclear Information System (INIS)

    Cline, James M.; Scott, Pat

    2013-01-01

    The cosmic microwave background provides constraints on the annihilation and decay of light dark matter at redshifts between 100 and 1000, the strength of which depends upon the fraction of energy ending up in the form of electrons and photons. The resulting constraints are usually presented for a limited selection of annihilation and decay channels. Here we provide constraints on the annihilation cross section and decay rate, at discrete values of the dark matter mass m_χ, for all the annihilation and decay channels whose secondary spectra have been computed using PYTHIA in arXiv:1012.4515 (''PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection''), namely e, μ, τ, V → e, V → μ, V → τ, u, d, s, c, b, t, γ, g, W, Z and h. By interpolating in mass, these can be used to find the CMB constraints and likelihood functions from WMAP7 and Planck for a wide range of dark matter models, including those with annihilation or decay into a linear combination of different channels.

  16. Group sequential designs for stepped-wedge cluster randomised trials.

    Science.gov (United States)

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into

  17. Immediate Sequential Bilateral Cataract Surgery

    DEFF Research Database (Denmark)

    Kessel, Line; Andresen, Jens; Erngaard, Ditte

    2015-01-01

    The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...

  18. Random and cooperative sequential adsorption

    Science.gov (United States)

    Evans, J. W.

    1993-10-01

    Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.

  19. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Background: There are several well-established scores for assessing the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages based on the information that is available at a particular time. Study design: In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power were calculated. Results: A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P+A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P+A+B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, an abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P+A+B1+B2 = 0.90). The explanatory power - a tool for assessing the relative impact of each segment on mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma ...
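
    The phase-wise models described above are ordinary logistic regressions whose discrimination is summarised by the area under the ROC curve. A minimal sketch of that workflow on synthetic data (the variables and coefficients below are invented stand-ins, not the registry's actual fields):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 2354
    # Synthetic stand-ins for phase P (age) and phase A (blood pressure, GCS)
    age = rng.uniform(16, 90, n)
    sbp = rng.normal(120, 25, n)
    gcs = rng.integers(3, 16, n)
    logit = -4 + 0.03 * age - 0.01 * sbp - 0.1 * gcs
    died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_P = age.reshape(-1, 1)                 # basic data only
    X_PA = np.column_stack([age, sbp, gcs])  # basic + prehospital data

    for label, X in [("model P", X_P), ("model P+A", X_PA)]:
        fit = LogisticRegression(max_iter=1000).fit(X, died)
        auc = roc_auc_score(died, fit.predict_proba(X)[:, 1])
        print(f"{label}: AUC = {auc:.2f}")
    ```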

  20. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  1. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  2. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.

  3. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.

  4. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Background: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods: We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results: The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis.
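
    The diversity-adjusted required information size mentioned above follows the usual sample-size formula for a two-group comparison, inflated for heterogeneity. A rough sketch under common conventions (two-sided α, inflation factor 1/(1 - D²)); this is an assumption-laden approximation, and the exact formulas in the Trial Sequential Analysis software may differ:

    ```python
    from scipy.stats import norm

    def required_information_size(delta, sd, alpha=0.05, beta=0.10, D2=0.0):
        """Total participants needed to detect a mean difference `delta`
        with standard deviation `sd`, inflated by the diversity D^2."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(1 - beta)
        fixed = 4 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2  # fixed-effect RIS
        return fixed / (1 - D2)                              # diversity-adjusted

    print(required_information_size(delta=5.0, sd=20.0, D2=0.25))
    ```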

  5. Automated sequential injection-microcolumn approach with on-line flame atomic absorption spectrometric detection for implementing metal fractionation schemes of homogeneous and non-homogeneous solid samples of environmental interest

    DEFF Research Database (Denmark)

    Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald

    2005-01-01

    spectrometric detection and used for the determination of Cu as a model analyte, the potentials of this novel hyphenated approach are demonstrated by the ability of handling up to 300 mg sample of a nonhomogeneous sewage amended soil (viz., CRM 483). The three steps of the endorsed Standards, Measurements...

  6. MUF residuals tested by a sequential test with power one

    International Nuclear Information System (INIS)

    Sellinschegg, D.; Bicking, U.

    1983-01-01

    Near-real-time material accountancy is an ongoing safeguards development to extend the current capability of IAEA safeguards. The evaluation of the observed ''Material Unaccounted For'' (MUF) time series is an important part in a near-real-time material accountancy regime. The maximum capability of a sequential data evaluation procedure is demonstrated by applying this procedure to the material balance area of the chemical separation process of a reference reprocessing facility with a throughput of 1000 tonnes heavy metal per year, as an example. It is shown that, compared to a conventional material accountancy approach, the detection time as well as the detection probability is significantly improved. (author)
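
    The record evaluates the MUF time series with a sequential test. As a purely illustrative stand-in (a one-sided CUSUM, not the power-one test used in the study), the sketch below flags an alarm once the cumulative statistic on standardized MUF residuals crosses a threshold:

    ```python
    import numpy as np

    def cusum_alarm(residuals, k=0.5, h=5.0):
        """One-sided CUSUM on standardized MUF residuals.
        k: reference value (drift allowance), h: decision threshold.
        Returns the index of the first alarm, or None."""
        s = 0.0
        for t, r in enumerate(residuals):
            s = max(0.0, s + r - k)
            if s > h:
                return t
        return None

    rng = np.random.default_rng(1)
    clean = rng.normal(0, 1, 50)        # in-control balance periods
    diverted = rng.normal(1.0, 1, 50)   # a persistent loss appears
    print(cusum_alarm(np.concatenate([clean, diverted])))
    ```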

  7. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
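
    The idea above, retaining parameter uncertainty in the likelihood ratio, can be illustrated by pushing posterior draws of the parameters through the LR and summarising the result by its median and a credible interval. A toy sketch with normal models; all numbers, sample sizes, and the known-variance assumption are invented for illustration:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    # Reference measurements under the two competing hypotheses
    x_p = rng.normal(180.0, 5.0, 30)   # e.g. heights consistent with the suspect
    x_d = rng.normal(172.0, 5.0, 200)  # heights in the relevant population
    e = 178.0                          # the evidential measurement
    sigma = 5.0                        # treated as known for simplicity

    # Posterior for each mean (flat prior, known sigma): Normal(xbar, sigma/sqrt(n))
    draws = 10_000
    mu_p = rng.normal(x_p.mean(), sigma / np.sqrt(len(x_p)), draws)
    mu_d = rng.normal(x_d.mean(), sigma / np.sqrt(len(x_d)), draws)

    # Likelihood ratio evaluated at each posterior draw
    lr = norm.pdf(e, mu_p, sigma) / norm.pdf(e, mu_d, sigma)
    print(np.median(lr), np.quantile(lr, [0.025, 0.975]))  # median and 95% CrI
    ```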

  8. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  9. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  10. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived.

  11. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
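
    Maximum likelihood fitting of a two-component normal mixture, as used above, is typically carried out with the EM algorithm. A self-contained sketch on synthetic data (the economic series themselves are not reproduced here, so the mixture below is invented):

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

    # Initial guesses for weight, means, and standard deviations
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()

    for _ in range(200):                       # EM iterations
        # E-step: responsibility of component 1 for each observation
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))

    print(w, mu1, mu2, s1, s2)   # estimated mixture parameters
    ```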

  12. Attitude towards, and likelihood of, complaining in the banking ...

    African Journals Online (AJOL)

    aims to determine customers' attitudes towards complaining as well as their likelihood of voicing a .... is particularly powerful and impacts greatly on customer satisfaction and retention. ...... 'Cross-national analysis of hotel customers' attitudes ...

  13. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  14. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.; Ma, Y.; Sang, H.

    2011-01-01

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  15. Incorporating Nuisance Parameters in Likelihoods for Multisource Spectra

    CERN Document Server

    Conway, J.S.

    2011-01-01

    We describe here the general mathematical approach to constructing likelihoods for fitting observed spectra in one or more dimensions with multiple sources, including the effects of systematic uncertainties represented as nuisance parameters, when the likelihood is to be maximized with respect to these parameters. We consider three types of nuisance parameters: simple multiplicative factors, source spectra "morphing" parameters, and parameters representing statistical uncertainties in the predicted source spectra.

  16. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  17. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  18. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  19. Sequential provisional implant prosthodontics therapy.

    Science.gov (United States)

    Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J

    2012-01-01

    The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.

  20. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  1. Dissociating response conflict and error likelihood in anterior cingulate cortex.

    Science.gov (United States)

    Yeung, Nick; Nieuwenhuis, Sander

    2009-11-18

    Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.

  2. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan; Genton, Marc G.

    2014-01-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  3. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
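
    The procedure above compares likelihood statistics sequentially across several hypotheses. As a simplified, generic illustration (not the conditionally optimum rule of the paper), the sketch below runs a multi-hypothesis sequential test on Gaussian observations that stops once one hypothesis's likelihood dominates the average of the alternatives by a threshold; the means, noise level, and threshold are invented:

    ```python
    import numpy as np
    from scipy.stats import norm

    def sequential_multitest(stream, means, sigma=1.0, threshold=1e3):
        """Stop when one hypothesis's likelihood exceeds the mean likelihood
        of the others by `threshold`; return (decision index, samples used)."""
        loglik = np.zeros(len(means))
        for n, x in enumerate(stream, start=1):
            loglik += norm.logpdf(x, means, sigma)
            lik = np.exp(loglik - loglik.max())      # normalised for stability
            for k in range(len(means)):
                others = np.delete(lik, k).mean() + 1e-300
                if lik[k] / others > threshold:
                    return k, n
        return None, len(stream)

    rng = np.random.default_rng(4)
    data = rng.normal(1.0, 1.0, 10_000)              # true mean is 1.0
    print(sequential_multitest(data, means=np.array([0.0, 1.0, 2.0])))
    ```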

  4. Sequential acquisition of mutations in myelodysplastic syndromes.

    Science.gov (United States)

    Makishima, Hideki

    2017-01-01

    Recent progress in next-generation sequencing technologies allows us to discover frequent mutations throughout the coding regions of myelodysplastic syndromes (MDS), potentially providing us with virtually a complete spectrum of driver mutations in this disease. As shown by many study groups these days, such driver mutations are acquired in a gene-specific fashion. For instance, DDX41 mutations are observed in germline cells long before MDS presentation. In blood samples from healthy elderly individuals, somatic DNMT3A and TET2 mutations are detected as age-related clonal hematopoiesis and are believed to be a risk factor for hematological neoplasms. In MDS, mutations of genes such as NRAS and FLT3, designated as Type-1 genes, may be significantly associated with leukemic evolution. Another type (Type-2) of genes, including RUNX1 and GATA2, are related to progression from low-risk to high-risk MDS. Overall, various driver mutations are sequentially acquired in MDS, at a specific time, in either germline cells, normal hematopoietic cells, or clonal MDS cells.

  5. Programme for test generation for combinatorial and sequential systems

    International Nuclear Information System (INIS)

    Tran Huy Hoan

    1973-01-01

    This research thesis reports the computer-assisted search for tests aimed at failure detection in combinatorial and sequential logic circuits. As he wants to deal with complex circuits with many modules such as those met in large scale integrated circuits (LSI), the author used propagation paths. He reports the development of a method which is valid for combinatorial systems and for several sequential circuits comprising elementary logic modules and JK and RS flip-flops. This method is developed on an IBM 360/91 computer in PL/1 language. The used memory space is limited and adjustable with respect to circuit dimension. Computing time is short when compared to that needed by other programmes. The solution is practical and efficient for failure test and localisation

  6. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  7. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    We define two sequential transforms on a function space C_{a,b}[0,T] induced by generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  8. The unfolding of NaI(Tl) γ-ray spectrum based on maximum likelihood method

    International Nuclear Information System (INIS)

    Zhang Qingxian; Ge Liangquan; Gu Yi; Zeng Guoqiang; Lin Yanchang; Wang Guangxi

    2011-01-01

    NaI(Tl) detectors, having a good detection efficiency, are used to detect gamma rays in field surveys, but their poor energy resolution hinders their applications, despite the use of traditional methods to resolve the overlapped gamma-ray peaks. In this paper, the maximum likelihood (ML) solution is used to resolve the spectrum. The ML method, which is capable of decomposing peaks separated by more than 2/3 FWHM in energy, is applied to scale the NaI(Tl) spectrometer. The result shows that the net area is in proportion to the content of the isotopes and that the precision of scaling is better than with the stripping ratio method. (authors)
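
    A standard way to obtain a maximum likelihood solution for overlapping peaks with Poisson counts is the iterative EM (Richardson-Lucy-type) update, sketched below with a made-up Gaussian response matrix; the paper's own algorithm may differ in detail:

    ```python
    import numpy as np

    def ml_unfold(counts, response, n_iter=500):
        """EM/ML unfolding: counts ~ Poisson(response @ x). Returns x >= 0."""
        x = np.full(response.shape[1], counts.sum() / response.shape[1])
        norm = response.sum(axis=0)               # detection efficiency per line
        for _ in range(n_iter):
            expected = response @ x
            x *= (response.T @ (counts / np.maximum(expected, 1e-12))) / norm
        return x

    # Toy response: each true line is smeared by the detector resolution
    channels = np.arange(200)
    lines = np.array([60, 80, 140])               # true peak positions (channels)
    response = np.exp(-0.5 * ((channels[:, None] - lines[None, :]) / 12.0) ** 2)
    true_intensity = np.array([1000.0, 800.0, 500.0])
    observed = np.random.default_rng(5).poisson(response @ true_intensity)
    print(ml_unfold(observed, response))          # recovered line intensities
    ```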

  9. Efficacy of premixed versus sequential administration of ...

    African Journals Online (AJOL)

    sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...

  10. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  11. Image properties of list mode likelihood reconstruction for a rectangular positron emission mammography with DOI measurements

    International Nuclear Information System (INIS)

    Qi, Jinyi; Klein, Gregory J.; Huesman, Ronald H.

    2000-01-01

    A positron emission mammography scanner is under development at our Laboratory. The tomograph has a rectangular geometry consisting of four banks of detector modules. For each detector, the system can measure the depth of interaction information inside the crystal. The rectangular geometry leads to irregular radial and angular sampling and spatially variant sensitivity that are different from conventional PET systems. Therefore, it is of importance to study the image properties of the reconstructions. We adapted the theoretical analysis that we had developed for conventional PET systems to the list mode likelihood reconstruction for this tomograph. The local impulse response and covariance of the reconstruction can be easily computed using FFT. These theoretical results are also used with computer observer models to compute the signal-to-noise ratio for lesion detection. The analysis reveals the spatially variant resolution and noise properties of the list mode likelihood reconstruction. The theoretical predictions are in good agreement with Monte Carlo results

  12. Sequential lineup presentation promotes less-biased criterion setting but does not improve discriminability.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil

    2012-06-01

    When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.

  13. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
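
    A profile-likelihood interval scans one parameter, re-maximizes the log-likelihood over the remaining parameters at each scan value, and keeps the values whose profile log-likelihood lies within χ²₁(0.95)/2 ≈ 1.92 of the maximum. A compact sketch for a normal mean with the variance profiled out (not an IRT model, but the same mechanics, on invented data):

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(6)
    x = rng.normal(10.0, 2.0, 40)
    n = len(x)

    def profile_loglik(mu):
        """Log-likelihood at mu with sigma^2 profiled out (its MLE given mu)."""
        s2 = np.mean((x - mu) ** 2)
        return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

    grid = np.linspace(x.mean() - 3, x.mean() + 3, 2001)
    prof = np.array([profile_loglik(m) for m in grid])
    cut = prof.max() - chi2.ppf(0.95, df=1) / 2
    inside = grid[prof >= cut]
    print(inside.min(), inside.max())   # 95% profile-likelihood CI for the mean
    ```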

  14. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  15. Likelihood based inference for partially observed renewal processes

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2016-01-01

    This paper is concerned with inference for renewal processes on the real line that are observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point process theory to propose a Monte Carlo maximum

  16. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.

  17. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

    The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed
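
    For a concrete feel for the construction examined above, the sketch below takes a single Poisson observation and reads off the interval where the log-likelihood is within 1/2 of its maximum (the observed count is an arbitrary example):

    ```python
    import numpy as np

    n_obs = 9                                   # observed Poisson count
    mu = np.linspace(0.01, 30, 10_000)
    loglik = n_obs * np.log(mu) - mu            # Poisson log-likelihood (up to a constant)
    inside = mu[loglik >= loglik.max() - 0.5]   # Delta lnL = 1/2 rule
    print(inside.min(), inside.max())           # roughly n_obs -/+ sqrt(n_obs)
    ```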

  18. Nearly Efficient Likelihood Ratio Tests for Seasonal Unit Roots

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    In an important generalization of zero frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e. that their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require the choice of a GLS detrending parameter, which our likelihood ratio tests do not.

  19. LDR: A Package for Likelihood-Based Sufficient Dimension Reduction

    Directory of Open Access Journals (Sweden)

    R. Dennis Cook

    2011-03-01

    We introduce a new Matlab software package that implements several recently proposed likelihood-based methods for sufficient dimension reduction. Current capabilities include estimation of reduced subspaces with a fixed dimension d, as well as estimation of d by use of likelihood-ratio testing, permutation testing and information criteria. The methods are suitable for preprocessing data for both regression and classification. Implementations of related estimators are also available. Although the software is more oriented to command-line operation, a graphical user interface is also provided for prototype computations.

  20. Statistical analysis of COMPTEL maximum likelihood-ratio distributions: evidence for a signal from previously undetected AGN

    International Nuclear Information System (INIS)

    Williams, O. R.; Bennett, K.; Much, R.; Schoenfelder, V.; Blom, J. J.; Ryan, J.

    1997-01-01

    The maximum likelihood-ratio method is frequently used in COMPTEL analysis to determine the significance of a point source at a given location. In this paper we do not consider whether the likelihood-ratio at a particular location indicates a detection, but rather whether distributions of likelihood-ratios derived from many locations depart from that expected for source free data. We have constructed distributions of likelihood-ratios by reading values from standard COMPTEL maximum-likelihood ratio maps at positions corresponding to the locations of different categories of AGN. Distributions derived from the locations of Seyfert galaxies are indistinguishable, according to a Kolmogorov-Smirnov test, from those obtained from ''random'' locations, but differ slightly from those obtained from the locations of flat spectrum radio loud quasars, OVVs, and BL Lac objects. This difference is not due to known COMPTEL sources, since regions near these sources are excluded from the analysis. We suggest that it might arise from a number of sources with fluxes below the COMPTEL detection threshold
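
    The comparison described above amounts to a two-sample Kolmogorov-Smirnov test between likelihood-ratio values read off the maps at source locations and at random locations. A minimal sketch with synthetic values (the distributions below are invented placeholders, not COMPTEL data):

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(7)
    # Hypothetical likelihood-ratio values read from the maps
    lr_at_random = rng.chisquare(df=1, size=500)                      # source-free expectation
    lr_at_blazars = rng.chisquare(df=1, size=80) + rng.exponential(0.3, 80)

    stat, p_value = ks_2samp(lr_at_blazars, lr_at_random)
    print(stat, p_value)    # a small p-value suggests a departure from source-free data
    ```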

  1. Trial Sequential Analysis in systematic reviews with meta-analysis

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-01-01

    BACKGROUND: Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size...... from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated...

  2. Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
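
    For reference, the quantities discussed above follow directly from sensitivity, specificity, and the pre-test probability; a small worked sketch (all numbers invented):

    ```python
    def likelihood_ratios(sensitivity, specificity):
        lr_pos = sensitivity / (1 - specificity)   # LR for a positive result
        lr_neg = (1 - sensitivity) / specificity   # LR for a negative result
        return lr_pos, lr_neg

    def post_test_probability(pre_test_prob, lr):
        odds = pre_test_prob / (1 - pre_test_prob)
        post_odds = odds * lr
        return post_odds / (1 + post_odds)

    lr_pos, lr_neg = likelihood_ratios(sensitivity=0.90, specificity=0.80)
    print(lr_pos, lr_neg)
    print(post_test_probability(0.20, lr_pos))   # disease probability after a positive test
    ```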

  3. Comparison of likelihood testing procedures for parallel systems with covariances

    International Nuclear Information System (INIS)

    Ayman Baklizi; Isa Daud; Noor Akma Ibrahim

    1998-01-01

    In this paper we investigate and compare the behavior of the likelihood ratio, Rao's, and Wald's statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994). However, their relative performances in finite samples are generally unknown. A Monte Carlo experiment is conducted to simulate the sizes and the powers of these statistics for complete samples and in the presence of time censoring. Comparisons of the statistics are made according to the attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic appears to have the best performance in terms of the attainment of the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be more appropriate for the model under study, especially for small sample sizes
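
    The three statistics compared above have simple closed forms in an exponential-rate example, and a Monte Carlo loop over such samples is how empirical sizes like those in the record are obtained. A sketch under that simplified model (the paper's regression model with censoring is more involved):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def test_statistics(x, lam0):
        """Wald, score (Rao), and likelihood ratio statistics for H0: rate = lam0
        in an exponential model; each is asymptotically chi-square(1)."""
        n, lam_hat = len(x), len(x) / x.sum()
        wald = n * (lam_hat - lam0) ** 2 / lam_hat ** 2
        score = (n / lam0 - x.sum()) ** 2 * lam0 ** 2 / n
        lr = 2 * (n * np.log(lam_hat / lam0) - (lam_hat - lam0) * x.sum())
        return wald, score, lr

    rng = np.random.default_rng(8)
    crit = chi2.ppf(0.95, df=1)
    rejections = np.zeros(3)
    for _ in range(5000):                     # empirical size under H0 (rate = 1)
        stats = test_statistics(rng.exponential(1.0, size=20), lam0=1.0)
        rejections += np.array(stats) > crit
    print(rejections / 5000)                  # each entry should be near 0.05
    ```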

  4. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  5. Planck 2013 results. XV. CMB power spectra and likelihood

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; 
Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in multipole l, covering 2 <= l <= 2500. For l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...

  6. Robust Gaussian Process Regression with a Student-t Likelihood

    NARCIS (Netherlands)

    Jylänki, P.P.; Vanhatalo, J.; Vehtari, A.

    2011-01-01

    This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference which is why several approximative methods have

  7. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  8. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypothesis about goodness of fit of multinomial probabilities in one, two and multi – dimensional contingency table was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...

  9. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...
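
    A rough sketch of the covariance-matching idea mentioned in this record is given below, using a scalar linear Kalman filter as a stand-in for the UKF; the window length, noise levels and random-walk model are illustrative assumptions rather than the authors' setup.

```python
# Covariance-matching sketch for adaptive measurement-noise estimation.
# Simplification: a scalar *linear* Kalman filter stands in for the UKF of the
# record; the window length and true noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, q_true, r_true = 400, 0.01, 0.5        # steps, true process/measurement variances
x_true = np.cumsum(rng.normal(0, np.sqrt(q_true), T))
y = x_true + rng.normal(0, np.sqrt(r_true), T)

x_hat, p, r_hat, window = 0.0, 1.0, 1.0, []
for t in range(T):
    # predict (random-walk state model, known process noise)
    p_pred = p + q_true
    # innovation and its empirical covariance over a sliding window
    innov = y[t] - x_hat
    window = (window + [innov])[-50:]
    c_hat = np.mean(np.square(window))
    # covariance matching: C = H P_pred H' + R  ->  R_hat = C_hat - P_pred
    r_hat = max(c_hat - p_pred, 1e-6)
    # update
    k = p_pred / (p_pred + r_hat)
    x_hat = x_hat + k * innov
    p = (1 - k) * p_pred

print(f"estimated measurement variance ~ {r_hat:.3f} (true {r_true})")
```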

  10. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2004-01-01

    The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field....

  11. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  12. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to

  13. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  14. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  15. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    Science.gov (United States)

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  16. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    Science.gov (United States)

    Heesacker, Martin

    1986-01-01

    Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…

  17. Cases in which ancestral maximum likelihood will be confusingly misleading.

    Science.gov (United States)

    Handelman, Tomer; Chor, Benny

    2017-05-07

    Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel et al. (2009) have shown that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple, four-taxon (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero, and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.

  18. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information; High dimension; Hierarchical maximum likelihood; Nested parameter spaces; Spectral diagonal covariance model; Sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016

  19. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, (2) we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
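
    The copula construction described here can be caricatured in a few lines: transform residuals to pseudo-observations, map them to normal scores, and score a dependence parameter with the Gaussian copula density. The sketch below pairs consecutive residuals (an AR(1)-style simplification) and uses a purely empirical marginal; both are assumptions for illustration only, not the authors' exact recipe.

```python
# Sketch of a Gaussian copula used as a likelihood for autocorrelated model errors.
# The AR(1)-style pairing of consecutive residuals and the purely empirical
# marginal are simplifying assumptions, not the authors' method.
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_loglik(errors, rho):
    """Pairwise log copula density of consecutive residuals (dependence part only)."""
    u = rankdata(errors) / (len(errors) + 1.0)     # pseudo-observations in (0, 1)
    z = norm.ppf(u)                                # normal scores
    z1, z2 = z[:-1], z[1:]
    # log of the bivariate normal density divided by the product of standard normals
    log_c = (-0.5 * np.log(1 - rho**2)
             - (rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2) / (2 * (1 - rho**2)))
    return log_c.sum()

rng = np.random.default_rng(2)
e = rng.standard_normal(500)
e = np.convolve(e, [1.0, 0.6], mode="valid")       # fake autocorrelated residuals

rhos = np.linspace(-0.9, 0.9, 181)
best = rhos[np.argmax([gaussian_copula_loglik(e, r) for r in rhos])]
print(f"copula dependence parameter maximising the likelihood: rho ~ {best:.2f}")
```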

  20. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  1. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  2. DarkBit. A GAMBIT module for computing dark matter observables and likelihoods

    Energy Technology Data Exchange (ETDEWEB)

    Bringmann, Torsten; Dal, Lars A. [University of Oslo, Department of Physics, Oslo (Norway); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Kahlhoefer, Felix; Wild, Sebastian [DESY, Hamburg (Germany); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Scott, Pat [Blackett Laboratory, Imperial College London, Department of Physics, London (United Kingdom); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); White, Martin [University of Adelaide, Department of Physics, Adelaide, SA (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale, Parkville (Australia); Collaboration: The GAMBIT Dark Matter Workgroup

    2017-12-15

    We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments (gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments (DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool (GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes (DarkSUSY and micrOMEGAs), and application of DarkBit's advanced direct and indirect detection routines to a simple effective dark matter model. (orig.)

  3. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, A. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Blake, W.H., E-mail: wblake@plymouth.ac.uk [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Keith-Roach, M.J. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Kemakta Konsult, Stockholm (Sweden)

    2012-03-30

    Graphical abstract: Shows the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were below the Minimum Detectable Activity. Highlights: • Knowledge of ⁷Be geochemical behaviour is required to support tracer studies. • Sequential extraction with natural ⁷Be returns high analytical uncertainties. • Preconcentrating extracts from a large sample mass improved analytical uncertainty. • This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic ⁷Be as a sediment tracer at the catchment-scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of ⁷Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout ⁷Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of ⁷Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying ⁷Be extracted using the optimised BCR three-step scheme have been evaluated and compared with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the ⁷Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural ⁷Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (~10% (2

  4. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

    Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial

  5. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
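
    The model-selection and model-averaging machinery mentioned in this record (AIC, AICc, BIC and Akaike weights) reduces to a few formulas once the maximised log-likelihoods are in hand. The sketch below uses invented log-likelihoods for three hypothetical candidate models, not the paper's actual clustering models.

```python
# Akaike/Bayesian information criteria and Akaike weights from maximised
# log-likelihoods; the candidate "clustering models" and their values below
# are illustrative placeholders.
import numpy as np

def information_criteria(loglik, k, n):
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

def akaike_weights(aic_values):
    delta = aic_values - np.min(aic_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

n = 120                                          # number of sites (hypothetical)
candidates = {"1 cluster": (-310.2, 1), "2 clusters": (-301.7, 3), "3 clusters": (-300.9, 5)}

aics = np.array([information_criteria(ll, k, n)[0] for ll, k in candidates.values()])
for (name, (ll, k)), w in zip(candidates.items(), akaike_weights(aics)):
    print(f"{name:>10}: logL={ll:.1f}, k={k}, AIC={-2 * ll + 2 * k:.1f}, weight={w:.2f}")
```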

  6. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    International Nuclear Information System (INIS)

    Prix, Reinhard; Krishnan, Badri

    2009-01-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.

  7. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our......, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

  8. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments assigned for logical modeling of the memory of digital devices. The case in point is logic-dynamical operation named venjunction and venjunctive function as well as sequention and sequentional function. Venjunction and sequention operate within the framework of sequential logic. In a form of the corresponding equations, they organically fit analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  9. A composite likelihood approach for spatially correlated survival data

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
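
    The pairwise composite-likelihood idea can be illustrated with the FGM copula itself, stripped of the spatial-distance regression: assume known unit-exponential margins, form pairs, and maximise the sum of pairwise log copula densities over the dependence parameter. Everything in the sketch below (margins, sample size, true theta) is an illustrative assumption, not the paper's spatial survival model.

```python
# Pairwise composite likelihood for the FGM copula dependence parameter theta.
# Margins are taken as known unit exponentials and pairs are pre-formed, a
# deliberate simplification of the spatial survival model in the record.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

def sample_fgm_pairs(theta, n):
    # conditional-inversion sampler for the FGM copula, then exponential margins
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = theta * (1 - 2 * u)
    v = np.where(np.abs(a) < 1e-9, w,
                 ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a))
    return -np.log(1 - u), -np.log(1 - v)

def neg_pairwise_loglik(theta, x, y):
    u, v = 1 - np.exp(-x), 1 - np.exp(-y)          # known exponential CDFs
    c = 1 + theta * (1 - 2 * u) * (1 - 2 * v)      # FGM copula density
    return -np.sum(np.log(c))                      # marginal terms drop out of the argmax

x, y = sample_fgm_pairs(theta=0.7, n=4000)
fit = minimize_scalar(neg_pairwise_loglik, bounds=(-1, 1), args=(x, y), method="bounded")
print(f"composite-likelihood estimate of theta: {fit.x:.2f} (true 0.7)")
```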

  10. A composite likelihood approach for spatially correlated survival data.

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.

  11. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  12. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It visibly looks like light and dark spots and renders the pixel intensity murky. Looking at fetal ultrasound images, the impact of edge and local fine details is more palpable for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter has to be contrived to proficiently suppress speckle noise and simultaneously preserve the features. The proposed filter is a generalization of the Rayleigh maximum likelihood filter that exploits statistical tools as tuning parameters and uses different shapes of quadrilateral kernels to estimate the noise-free pixel from the neighborhood. The performance of various filters, namely the Median, Kuwahara, Frost, Homogeneous mask filter and Rayleigh maximum likelihood filter, is compared with the proposed filter in terms of PSNR and image profile. Comparatively, the proposed filter surpasses the conventional filters.

  13. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including

  14. Physical constraints on the likelihood of life on exoplanets

    Science.gov (United States)

    Lingam, Manasvi; Loeb, Abraham

    2018-04-01

    One of the most fundamental questions in exoplanetology is to determine whether a given planet is habitable. We estimate the relative likelihood of a planet's propensity towards habitability by considering key physical characteristics such as the role of temperature on ecological and evolutionary processes, and atmospheric losses via hydrodynamic escape and stellar wind erosion. From our analysis, we demonstrate that Earth-sized exoplanets in the habitable zone around M-dwarfs seemingly display much lower prospects of being habitable relative to Earth, owing to the higher incident ultraviolet fluxes and closer distances to the host star. We illustrate our results by specifically computing the likelihood (of supporting life) for the recently discovered exoplanets, Proxima b and TRAPPIST-1e, which we find to be several orders of magnitude smaller than that of Earth.

  15. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  16. Deformation of log-likelihood loss function for multiclass boosting.

    Science.gov (United States)

    Kanamori, Takafumi

    2010-09-01

    The purpose of this paper is to study loss functions in multiclass classification. In classification problems, the decision function is estimated by minimizing an empirical loss function, and then, the output label is predicted by using the estimated decision function. We propose a class of loss functions which is obtained by a deformation of the log-likelihood loss function. There are four main reasons why we focus on the deformed log-likelihood loss function: (1) this is a class of loss functions which has not been deeply investigated so far, (2) in terms of computation, a boosting algorithm with a pseudo-loss is available to minimize the proposed loss function, (3) the proposed loss functions provide a clear correspondence between the decision functions and conditional probabilities of output labels, (4) the proposed loss functions satisfy the statistical consistency of the classification error rate which is a desirable property in classification problems. Based on (3), we show that the deformed log-likelihood loss provides a model of mislabeling which is useful as a statistical model of medical diagnostics. We also propose a robust loss function against outliers in multiclass classification based on our approach. The robust loss function is a natural extension of the existing robust loss function for binary classification. A model of mislabeling and a robust loss function are useful to cope with noisy data. Some numerical studies are presented to show the robustness of the proposed loss function. A mathematical characterization of the deformed log-likelihood loss function is also presented. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  18. Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika

    OpenAIRE

    Yudi Perbawaningsih

    2012-01-01

    Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series intended to persuade students in choosing their concentration of study. The result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of ...

  19. Corporate governance effect on financial distress likelihood: Evidence from Spain

    Directory of Open Access Journals (Sweden)

    Montserrat Manzaneque

    2016-01-01

    Full Text Available The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half of them classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with other previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and the proportion of independent directors on the likelihood of business failure is similar to that exerted in more extreme situations. These results go one step further and offer a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a form of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on financial distress likelihood in the Spanish context. It is argued that large shareholders are passive as regards enhanced monitoring of management and, alternatively, that they do not have enough incentives to hold back financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements have been carried out with respect to corporate governance, and where there is no empirical evidence in this respect.

  20. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  1. Multiple Improvements of Multiple Imputation Likelihood Ratio Tests

    OpenAIRE

    Chan, Kin Wai; Meng, Xiao-Li

    2017-01-01

    Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...

  2. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  3. Menyoal Elaboration Likelihood Model (ELM) Dan Teori Retorika

    OpenAIRE

    Perbawaningsih, Yudi

    2012-01-01

    Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series intended to persuade students in choosing their concentration of study. The result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of the mess...

  4. Penggunaan Elaboration Likelihood Model dalam Menganalisis Penerimaan Teknologi Informasi

    OpenAIRE

    vitrian, vitrian2

    2010-01-01

    This article discusses some technology acceptance models in an organization. A thorough analysis of how technology is accepted helps managers plan the implementation of new technology and make sure that the new technology can enhance the organization's performance. The Elaboration Likelihood Model (ELM) is one which sheds light on some behavioral factors in the acceptance of information technology. The basic tenet of ELM states that human behavior in principle can be influenced through central r...

  5. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    Science.gov (United States)

    1982-04-01

    The report examines the bias in maximum likelihood estimators of item parameters; the remainder of the scanned abstract is illegible.

  6. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
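
    At the core of such estimating-equation approaches sits the profile empirical likelihood computation. The sketch below shows the one-dimensional case, the empirical likelihood ratio for a mean in the sense of Owen (1988), solved via a bracketed root find for the Lagrange multiplier; it is a simplification for illustration, not the two-working-model estimator of the paper.

```python
# Profile empirical likelihood for a mean: the one-dimensional core of the
# estimating-equation machinery used in such papers. The bracketed root finder
# for the Lagrange multiplier is an illustrative implementation choice.
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen, 1988)."""
    z = x - mu
    if z.min() * z.max() >= 0:          # mu outside the convex hull of the data
        return np.inf
    # the multiplier must keep all weights positive: 1 + lambda * z_i > 0
    lo = -1.0 / z.max() + 1e-10
    hi = -1.0 / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=80)            # skewed data, true mean 2.0
for mu in (1.5, 2.0, 2.5):
    print(f"mu = {mu}: -2 log ELR = {el_log_ratio(x, mu):.2f}  (chi2_1 95% cut-off 3.84)")
```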

  7. Democracy, Autocracy and the Likelihood of International Conflict

    OpenAIRE

    Tangerås, Thomas

    2008-01-01

    This is a game-theoretic analysis of the link between regime type and international conflict. The democratic electorate can credibly punish the leader for bad conflict outcomes, whereas the autocratic selectorate cannot. For the fear of being thrown out of office, democratic leaders are (i) more selective about the wars they initiate and (ii) on average win more of the wars they start. Foreign policy behaviour is found to display strategic complementarities. The likelihood of interstate war, ...

  8. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining as tablets, mobile applications and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions in which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained by all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.

  9. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
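
    A toy version of the simulated-gradient idea is easy to write down: the model is only available as a simulator, the fit is measured by the squared distance between simulated and observed summary statistics, and a Kiefer-Wolfowitz-style finite-difference stochastic approximation moves the parameter. The simulator, summaries and gain sequences below are illustrative choices, not the authors' algorithm.

```python
# Toy maximum-likelihood-by-simulation via stochastic approximation: the model
# can only be simulated, and the parameter moves along a finite-difference
# estimate of the gradient of a summary-statistic match.
# The simulator, summary statistics and step-size schedule are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def simulate(theta, n=200):
    return rng.normal(loc=theta, scale=1.0, size=n)      # stand-in "intractable" model

def summary(x):
    return np.array([x.mean(), np.median(x)])

obs = summary(simulate(theta=1.3))                        # pretend these are the data
theta = 0.0
for t in range(1, 501):
    a_t, c_t = 0.5 / t, 0.5 / t ** (1 / 3)                # Kiefer-Wolfowitz gains
    loss = lambda th: np.sum((summary(simulate(th)) - obs) ** 2)
    grad = (loss(theta + c_t) - loss(theta - c_t)) / (2 * c_t)
    theta -= a_t * grad                                   # descend on the mismatch
print(f"stochastic-approximation estimate: {theta:.2f} (target ~1.3)")
```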

  10. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine, E-mail: adam.bouland@aya.yale.edu, E-mail: richard.easther@yale.edu, E-mail: krosenfeld@cfa.harvard.edu [Department of Physics, Yale University, New Haven CT 06520 (United States)

    2011-05-01

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
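
    A one-parameter caricature of this caching-and-interpolation idea is sketched below: exact log-likelihood evaluations gathered during an initial Metropolis phase are used to fit a polynomial surrogate, which then answers later likelihood queries. The toy Gaussian likelihood, chain lengths and polynomial degree are illustrative assumptions; InterpMC itself operates on multi-dimensional cosmological likelihoods inside CosmoMC.

```python
# One-parameter caricature of the InterpMC idea: cache early (theta, logL)
# evaluations from a Metropolis chain, fit a polynomial surrogate to them, and
# serve later log-likelihood queries from the surrogate. All settings are
# illustrative choices, not the paper's setup.
import numpy as np

rng = np.random.default_rng(6)

def expensive_loglike(theta):                 # stand-in for a costly likelihood code
    return -0.5 * ((theta - 0.7) / 0.3) ** 2

# Phase 1: ordinary Metropolis, caching every exact evaluation as training data.
cache, theta, ll = [], 0.0, expensive_loglike(0.0)
for _ in range(300):
    prop = theta + rng.normal(0, 0.2)
    ll_prop = expensive_loglike(prop)
    cache.append((prop, ll_prop))
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop

# Phase 2: fit a polynomial surrogate and use it for subsequent proposals.
pts = np.array(cache)
surrogate = np.polynomial.Polynomial.fit(pts[:, 0], pts[:, 1], deg=4)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < surrogate(prop) - surrogate(theta):
        theta = prop
    samples.append(theta)
print(f"surrogate-phase posterior mean ~ {np.mean(samples):.2f} (target 0.7)")
```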

  11. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    International Nuclear Information System (INIS)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine

    2011-01-01

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.

  12. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)

  13. Interpretability degrees of finitely axiomatized sequential theories

    NARCIS (Netherlands)

    Visser, Albert

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed

  14. Interpretability Degrees of Finitely Axiomatized Sequential Theories

    NARCIS (Netherlands)

    Visser, Albert

    2012-01-01

    In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question

  15. S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.

    Science.gov (United States)

    CICIARELLI, V; LEONARD, JOSEPH

    A SEQUENTIAL MATHEMATICS PROGRAM BEGINNING WITH THE BASIC FUNDAMENTALS ON THE FOURTH GRADE LEVEL IS PRESENTED. INCLUDED ARE AN UNDERSTANDING OF OUR NUMBER SYSTEM, AND THE BASIC OPERATIONS OF WORKING WITH WHOLE NUMBERS--ADDITION, SUBTRACTION, MULTIPLICATION, AND DIVISION. COMMON FRACTIONS ARE TAUGHT IN THE FIFTH, SIXTH, AND SEVENTH GRADES. A…

  16. Sequential and Simultaneous Logit: A Nested Model.

    NARCIS (Netherlands)

    van Ophem, J.C.M.; Schram, A.J.H.C.

    1997-01-01

    A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is

  17. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
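
    A minimal sketch of the multivariate idea described above, under the assumption that an exact MDP solver is available: draw parameter sets from their joint distribution, re-solve the MDP for each draw, and report the fraction of draws in which the base-case policy remains optimal (one point on a policy acceptability curve). The helpers sample_params and solve_mdp are hypothetical placeholders.

```python
import numpy as np

def policy_acceptability(sample_params, solve_mdp, base_policy, n_draws=1000, rng=None):
    """Estimate confidence in base_policy under joint parameter uncertainty."""
    rng = rng or np.random.default_rng()
    agree = 0
    for _ in range(n_draws):
        params = sample_params(rng)          # joint draw of transition/reward parameters
        optimal_policy = solve_mdp(params)   # e.g. value iteration, then the greedy policy
        agree += int(np.array_equal(optimal_policy, base_policy))
    return agree / n_draws                   # fraction of draws where the base case stays optimal
```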

  18. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we

  19. Sequential motor skill: cognition, perception and action

    NARCIS (Netherlands)

    Ruitenberg, M.F.L.

    2013-01-01

    Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role

  20. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

  1. A framework for sequential multiblock component methods

    NARCIS (Netherlands)

    Smilde, A.K.; Westerhuis, J.A.; Jong, S.de

    2003-01-01

    Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework

  2. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  3. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects

  4. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Capasso, V.

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video

  5. STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY

    Directory of Open Access Journals (Sweden)

    Damián Fernández

    2014-12-01

    We review the motivation for, the current state-of-the-art in convergence results, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.

  6. Truly costly sequential search and oligopolistic pricing

    NARCIS (Netherlands)

    Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.

    We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique

  7. Zips : mining compressing sequential patterns in streams

    NARCIS (Netherlands)

    Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.

    2013-01-01

    We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be

  8. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborated and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e. g. how 4.02 fits into the series of remarks surrounding it) and the global level (e. g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: The role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  9. Adult Word Recognition and Visual Sequential Memory

    Science.gov (United States)

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  10. Terminating Sequential Delphi Survey Data Collection

    Science.gov (United States)

    Kalaian, Sema A.; Kasim, Rafa M.

    2012-01-01

    The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through a well-designed and systematic multiple sequential rounds of survey administrations. Each of the multiple rounds of the Delphi survey…

  11. Communicating likelihoods and probabilities in forecasts of volcanic eruptions

    Science.gov (United States)

    Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas

    2014-02-01

    The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear as to how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being of less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in

  12. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
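
    A hedged sketch of the likelihood-based assignment test referred to above: a multilocus genotype is assigned to the baseline population whose allele frequencies give it the highest log-likelihood, assuming Hardy-Weinberg and linkage equilibrium. Data structures and the small-frequency floor eps are illustrative assumptions.

```python
import numpy as np

def assign(genotype, baseline_freqs, eps=1e-6):
    """genotype: list of (allele1, allele2) tuples, one per locus.
    baseline_freqs: {population: [ {allele: frequency} for each locus ]}."""
    scores = {}
    for pop, loci in baseline_freqs.items():
        loglike = 0.0
        for (a1, a2), freqs in zip(genotype, loci):
            p, q = freqs.get(a1, eps), freqs.get(a2, eps)
            loglike += np.log(p) + np.log(q) + (np.log(2.0) if a1 != a2 else 0.0)
        scores[pop] = loglike
    return max(scores, key=scores.get), scores   # assigned population and all log-likelihoods
```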

  13. Sequential injection lab-on-valve: the third generation of flow injection analysis

    DEFF Research Database (Denmark)

    Wang, Jianhua; Hansen, Elo Harald

    2003-01-01

    Termed the third generation of flow injection analysis, sequential injection (SI)-lab-on-valve (LOV) has specific advantages and allows novel, unique applications - not least as a versatile front end to a variety of detection techniques. This review presents and discusses progress to date of the ...

  14. The effects of sequential attention shifts within visual working memory

    Directory of Open Access Journals (Sweden)

    Qi eLi

    2014-09-01

    Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location, and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed benefits of both the first and second cues in sequential cueing, indicating that participants were able to reorient attention from one location to another within VWM, and the reallocation of attention did not attenuate memory at the first cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance at the first-cued location was still better than the neutral baseline or performance at the uncued locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance.

  15. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-inhabited by humans. Reliability of localization is highly dependent upon the developer's experience because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.

  16. Observation Likelihood Model Design and Failure Recovery Scheme Toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2010-12-01

    Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment co-inhabited by humans. Reliability of localization is highly dependent upon the developer's experience because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
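
    A hedged sketch, under simplified assumptions, of the kind of observation likelihood model and failure check these two records discuss: a per-beam mixture of a Gaussian around the expected range plus a uniform floor for a range-sensor particle filter, and a failure test that flags localization loss when the best particle's likelihood stays tiny for several scans (e.g., after abrupt wheel slippage). The thresholds and parameters are illustrative, not the paper's design guidelines.

```python
import numpy as np

def scan_likelihood(expected_ranges, measured_ranges, sigma=0.2, z_rand=0.05, z_max=10.0):
    """Per-beam Gaussian-plus-uniform mixture, multiplied over all beams."""
    gauss = np.exp(-0.5 * ((measured_ranges - expected_ranges) / sigma) ** 2) \
            / (np.sqrt(2 * np.pi) * sigma)
    per_beam = (1 - z_rand) * gauss + z_rand / z_max
    return float(np.prod(np.clip(per_beam, 1e-12, None)))

def localization_failed(best_likelihood_history, threshold=1e-8, window=5):
    """Declare failure (trigger semi-global re-localization) after a run of tiny likelihoods."""
    recent = best_likelihood_history[-window:]
    return len(recent) == window and all(l < threshold for l in recent)
```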

  17. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimate using scikit-learn's KDTree, modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files are backed up every iteration, user defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
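
    A minimal sketch of the "likelihood free" idea the abstract describes: propose parameters, forward-simulate data, and keep proposals whose summary-statistic distance to the observation is below a shrinking tolerance. This toy loop omits the importance weights, adaptive perturbation kernels and MPI parallelism of a full ABC SMC sampler such as astroABC; all names and the fixed perturbation scale are assumptions.

```python
import numpy as np

def abc_smc(observed, simulate, sample_prior, distance, tolerances, n_particles=500, rng=None):
    """Accept/reject ABC over a decreasing tolerance schedule."""
    rng = rng or np.random.default_rng()
    particles = None
    for eps in tolerances:                       # e.g. [2.0, 1.0, 0.5, 0.25]
        accepted = []
        while len(accepted) < n_particles:
            if particles is None:
                theta = sample_prior(rng)        # first generation: draw from the prior
            else:                                # later generations: perturb a previous particle
                base = particles[rng.integers(len(particles))]
                theta = base + 0.1 * rng.standard_normal(np.shape(base))
            x = simulate(theta, rng)             # forward model, may include systematics
            if distance(x, observed) < eps:
                accepted.append(theta)
        particles = np.array(accepted)
    return particles
```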

  18. Sequential analysis of materials balances. Application to a prospective reprocessing facility

    International Nuclear Information System (INIS)

    Picard, R.

    1986-01-01

    This paper discusses near-real-time accounting in the context of the prospective DWK reprocessing plant. Sensitivity of a standard sequential testing procedure, applied to unfalsified operator data only, is examined with respect to a variety of loss scenarios. It is seen that large inventories preclude high-probability detection of certain protracted losses of material. In Sec. 2, a rough error propagation for the MBA of interest is outlined. Mathematical development for the analysis is given in Sec. 3, and generic aspects of sequential testing are reviewed in Sec. 4. In Sec. 5, results from a simulation to quantify performance of the accounting system are presented
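
    A hedged sketch of the kind of sequential test used in near-real-time accounting: a one-sided Page (CUSUM) statistic applied to standardized materials-balance residuals, alarming when the cumulative evidence of loss exceeds a threshold. The reference value, threshold and simulated loss scenario are illustrative choices, not the paper's parameters.

```python
import numpy as np

def page_cusum(residuals, k=0.5, h=5.0):
    """Return the first alarm index (or None) for standardized MUF residuals.

    k: reference value (drift allowance), h: decision threshold, both in sigma units.
    """
    s = 0.0
    for t, x in enumerate(residuals):
        s = max(0.0, s + x - k)
        if s > h:
            return t          # alarm: possible protracted loss of material
    return None

# Toy scenario: a protracted loss of 0.3 sigma per balance period starting at t = 20.
rng = np.random.default_rng(0)
muf = rng.standard_normal(60)
muf[20:] += 0.3
print(page_cusum(muf))
```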

  19. A mathematical programming approach for sequential clustering of dynamic networks

    Science.gov (United States)

    Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia

    2016-02-01

    A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state as well as previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes that these students belong to more accurately than partitioning each time step individually or by partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
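
    A hedged sketch of the objective described above, assuming networkx is available: a candidate partition of the current snapshot is scored by its modularity plus a history term counting node pairs that remain co-clustered relative to the previous partition. SeqMod itself solves an MINLP to optimize this trade-off; the scoring function and the weight alpha below are illustrative only.

```python
import itertools
from networkx.algorithms.community import modularity

def co_clustered_pairs(partition):
    """Set of node pairs placed in the same community (partition = list of node sets)."""
    pairs = set()
    for community in partition:
        pairs.update(itertools.combinations(sorted(community), 2))
    return pairs

def sequential_score(graph, partition, previous_partition, alpha=0.5):
    """Snapshot quality plus historical cost; partition must cover all nodes of graph."""
    snapshot_quality = modularity(graph, partition)
    history = len(co_clustered_pairs(partition) & co_clustered_pairs(previous_partition))
    return snapshot_quality + alpha * history
```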

  20. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    International Nuclear Information System (INIS)

    Bechtle, Philip; Heinemeyer, Sven; Staal, Oscar; Stefaniak, Tim; Weiglein, Georg

    2015-01-01

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95 % C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org. (orig.)

  1. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  2. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  3. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  4. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.
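
    The bias/complexity trade-off discussed in these records can be illustrated with a toy best-first (stack-type) sequential detector for an uncoded MIMO channel: the metric of a partial path is its accumulated squared residual minus bias times its depth, so a larger bias favours deeper nodes and cuts the number of expansions at some cost in error performance. The Fano algorithm analysed above revisits nodes using thresholds rather than keeping a full priority queue, so this is only a schematic stand-in; all names and the QPSK setup are assumptions.

```python
import heapq
import numpy as np

def sequential_detect(y, H, symbols, bias):
    """Best-first search over the symbol layers of a QR-decomposed channel."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    heap = [(0.0, 0, 0, ())]   # (metric, tie-breaker, depth, symbols chosen bottom-up)
    counter = 0
    expansions = 0
    while heap:
        metric, _, depth, path = heapq.heappop(heap)
        expansions += 1
        if depth == n:
            return np.array(path[::-1]), expansions
        row = n - 1 - depth    # next layer, working up the upper-triangular R
        for s in symbols:
            new_path = path + (s,)
            residual = z[row] - R[row, row:] @ np.array(new_path[::-1])
            counter += 1
            heapq.heappush(heap, (metric + abs(residual) ** 2 - bias,
                                  counter, depth + 1, new_path))

# Toy usage with QPSK symbols; increasing the bias reduces the expansion count.
rng = np.random.default_rng(1)
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, size=4)
y = H @ x + 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(sequential_detect(y, H, qpsk, bias=0.5))
```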

  5. Australian food life style segments and elaboration likelihood differences

    DEFF Research Database (Denmark)

    Brunsø, Karen; Reid, Mike

    As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... in order to 1) determine Australian Life Style Segments and compare these with their European counterparts, and to 2) explore differences in elaboration likelihood among the Australian segments, e.g. consumers' interest and motivation to perceive product related communication. The results provide new...

  6. Maximum-likelihood method for numerical inversion of Mellin transform

    International Nuclear Information System (INIS)

    Iqbal, M.

    1997-01-01

    A method is described for inverting the Mellin transform which uses an expansion in Laguerre polynomials and converts the Mellin transform to a Laplace transform; the maximum-likelihood regularization method is then used to recover the original function of the Mellin transform. The performance of the method is illustrated by the inversion of the test functions available in the literature (J. Inst. Math. Appl., 20 (1977) 73; Math. Comput., 53 (1989) 589). The effectiveness of the method is shown by results presented in tables and diagrams

  7. How to Improve the Likelihood of CDM Approval?

    DEFF Research Database (Denmark)

    Brandt, Urs Steiner; Svendsen, Gert Tinggaard

    2014-01-01

    How can the likelihood of Clean Development Mechanism (CDM) approval be improved in the face of institutional shortcomings? To answer this question, we focus on the three institutional shortcomings of income sharing, risk sharing and corruption prevention concerning afforestation/reforestation (A/R). Furthermore, three main stakeholders are identified, namely investors, governments and agents in a principal-agent model regarding monitoring and enforcement capacity. Developing regions such as West Africa have, despite huge potential, not yet been integrated into A/R CDM projects. Remote sensing, however...

  8. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.
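
    A minimal sketch of the maximum-likelihood part for a geometric lifetime with right censoring: uncensored observations contribute the probability mass function and censored ones the survival function, and the scalar likelihood is maximized numerically. The parametrization (support 1, 2, ...) and helper names are assumptions, not the article's notation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def geometric_mle(times, censored):
    """times: observed values; censored: boolean array (True = right-censored)."""
    times = np.asarray(times, dtype=float)
    censored = np.asarray(censored, dtype=bool)

    def neg_loglike(p):
        log_pmf = np.log(p) + (times - 1) * np.log1p(-p)   # log P(X = t)
        log_surv = times * np.log1p(-p)                    # log P(X > t)
        return -(np.sum(log_pmf[~censored]) + np.sum(log_surv[censored]))

    res = minimize_scalar(neg_loglike, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x
```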

  9. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)

  10. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  11. Process criticality accident likelihoods, consequences and emergency planning

    International Nuclear Information System (INIS)

    McLaughlin, T.P.

    1992-01-01

    Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with national and international standards and regulations which require an evaluation of the net benefit of a criticality accident alarm system, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. (Author)

  12. Likelihood Estimation of Gamma Ray Bursts Duration Distribution

    OpenAIRE

    Horvath, Istvan

    2005-01-01

    Two classes of Gamma Ray Bursts have been identified so far, characterized by T90 durations shorter and longer than approximately 2 seconds. It was shown that the BATSE 3B data allow a good fit with three Gaussian distributions in log T90. In the same volume of ApJ, another paper suggested that a third class of GRBs may exist. Using the full BATSE catalog here we present the maximum likelihood estimation, which gives a 0.5% probability of there being only two subclasses. The MC simulation co...
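
    A hedged sketch of this kind of analysis: fit two- and three-component Gaussian mixtures to log10(T90) durations by maximum likelihood and compare them with a likelihood-ratio statistic, whose significance would then be calibrated by Monte Carlo simulation as the abstract indicates. The function names and settings are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def compare_mixtures(t90, seed=0):
    """Total log-likelihoods of 2- vs 3-component Gaussian mixtures in log10(T90)."""
    x = np.log10(np.asarray(t90, dtype=float)).reshape(-1, 1)
    fits = {k: GaussianMixture(n_components=k, n_init=10, random_state=seed).fit(x)
            for k in (2, 3)}
    loglik = {k: fits[k].score(x) * len(x) for k in (2, 3)}   # score() is per-sample average
    lr_statistic = 2 * (loglik[3] - loglik[2])
    return loglik, lr_statistic
```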

  13. Process criticality accident likelihoods, consequences, and emergency planning

    Energy Technology Data Exchange (ETDEWEB)

    McLaughlin, T.P.

    1991-01-01

    Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with standards such as ISO 7753 which mandates that the need for an alarm system be evaluated, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. 13 refs., 1 fig., 1 tab.

  14. Estimating likelihood of future crashes for crash-prone drivers

    OpenAIRE

    Subasish Das; Xiaoduan Sun; Fan Wang; Charles Leboeuf

    2015-01-01

    At-fault crash-prone drivers are usually considered as the high risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on the driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the a...

  15. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive......' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...

  16. Sequential learning in individuals with agrammatic aphasia: evidence from artificial grammar learning.

    Science.gov (United States)

    Schuchard, Julia; Thompson, Cynthia K

    2017-01-01

    We examined sequential learning in individuals with agrammatic aphasia ( n = 12) and healthy age-matched participants ( n = 12) using an artificial grammar. Artificial grammar acquisition, 24-hour retention, and the potential benefits of additional training were examined by administering an artificial grammar judgment test (1) immediately following auditory exposure-based training, (2) one day after training, and (3) after a second training session on the second day. An untrained control group ( n = 12 healthy age-matched participants) completed the tests on the same time schedule. The trained healthy and aphasic groups showed greater sensitivity to the detection of grammatical items than the control group. No significant correlations between sequential learning and language abilities were observed among the aphasic participants. The results suggest that individuals with agrammatic aphasia show sequential learning, but the underlying processes involved in this learning may be different than for healthy adults.

  17. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains, embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, from which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.

  18. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Bjørn, Pernille

    2014-01-01

    On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure...

  19. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  20. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum

  1. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
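
    A hedged illustration of a sequential mastery rule in the spirit of the records above: after each item, update the posterior probability that the examinee's success rate exceeds a mastery cutoff and stop once that probability crosses an upper or lower threshold; otherwise administer another item. The Beta-Bernoulli setup and the thresholds are illustrative assumptions, not the loss structures used by Vos.

```python
from scipy.stats import beta

def sequential_mastery(responses, cutoff=0.7, upper=0.95, lower=0.05, a0=1.0, b0=1.0):
    """responses: iterable of 0/1 item scores, processed in order of administration."""
    a, b = a0, b0
    for n_items, correct in enumerate(responses, start=1):
        a += correct
        b += 1 - correct
        p_master = 1.0 - beta.cdf(cutoff, a, b)   # P(true success rate > cutoff | data)
        if p_master >= upper:
            return "master", n_items
        if p_master <= lower:
            return "nonmaster", n_items
    return "continue testing", len(list(responses)) if not hasattr(responses, "__len__") else len(responses)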

  2. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others: Nadační fond na podporu vědy (CZ) Neuron Institutional support: RVO:67985807 Keywords : nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016

  3. Sequential pattern recognition by maximum conditional informativity

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří

    2014-01-01

    Roč. 45, č. 1 (2014), s. 39-45 ISSN 0167-8655 R&D Projects: GA ČR(CZ) GA14-02652S; GA ČR(CZ) GA14-10911S Keywords : Multivariate statistics * Statistical pattern recognition * Sequential decision making * Product mixtures * EM algorithm * Shannon information Subject RIV: IN - Informatics, Computer Science Impact factor: 1.551, year: 2014 http://library.utia.cas.cz/separaty/2014/RO/grim-0428565.pdf

  4. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased over successive holes; the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  5. Sequential Monte Carlo with Highly Informative Observations

    OpenAIRE

    Del Moral, Pierre; Murray, Lawrence M.

    2014-01-01

    We propose sequential Monte Carlo (SMC) methods for sampling the posterior distribution of state-space models under highly informative observation regimes, a situation in which standard SMC methods can perform poorly. A special case is simulating bridges between given initial and final values. The basic idea is to introduce a schedule of intermediate weighting and resampling times between observation times, which guide particles towards the final state. This can always be done for continuous-...

  6. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  7. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-11-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate a synthetic example, where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  8. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.

    2017-01-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate a synthetic example, where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
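
    A dense-algebra sketch of the Gaussian log-likelihood that HLIBCov maximizes; in the library the Cholesky factorization and triangular solves are carried out in hierarchical matrix format, which is what makes large n tractable, whereas the version below is the plain O(n^3) reference computation. The Matérn parametrization and names are standard assumptions, not the library's API.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv

def matern_cov(X, s2, ell, nu, nugget=1e-8):
    """Matérn covariance matrix for locations X with variance s2, length ell, smoothness nu."""
    d = cdist(X, X) / ell
    d[d == 0.0] = 1e-10                       # avoid 0*inf at zero distance
    c = s2 * (2 ** (1 - nu) / gamma(nu)) * (np.sqrt(2 * nu) * d) ** nu \
        * kv(nu, np.sqrt(2 * nu) * d)
    np.fill_diagonal(c, s2)
    return c + nugget * np.eye(len(X))

def gaussian_loglike(params, X, z):
    s2, ell, nu = params
    C = matern_cov(X, s2, ell, nu)
    L = np.linalg.cholesky(C)                 # dense here; H-matrix factorization in HLIBCov
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (z @ alpha + logdet + len(z) * np.log(2 * np.pi))
```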

  9. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
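
    A hedged sketch of projected-gradient maximum-likelihood tomography, without the acceleration the paper adds: ascend the log-likelihood of measured frequencies f_k for POVM elements Pi_k, then project back onto the set of density matrices by projecting the eigenvalue vector onto the probability simplex. Step size, iteration count and names are assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a real vector onto {x >= 0, sum x = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[j]) / (j + 1)
    return np.maximum(v + theta, 0.0)

def project_density(M):
    """Nearest density matrix: Hermitize, then project the spectrum onto the simplex."""
    H = (M + M.conj().T) / 2
    w, V = np.linalg.eigh(H)
    return (V * project_simplex(w)) @ V.conj().T

def mle_tomography(povm, freqs, dim, step=0.1, n_iter=500):
    """povm: list of Hermitian effect matrices; freqs: measured relative frequencies."""
    rho = np.eye(dim, dtype=complex) / dim
    for _ in range(n_iter):
        probs = np.real([np.trace(P @ rho) for P in povm])
        grad = sum(f / max(p, 1e-12) * P for f, p, P in zip(freqs, probs, povm))
        rho = project_density(rho + step * grad)
    return rho
```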

  10. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2012-01-01

    such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0inference when the true values satisfy b0¿1/2 and d0-b0......We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, ¿, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters...... process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0>1/2, we prove that the limit distribution of (ß...

  11. Likelihood-Based Inference of B Cell Clonal Families.

    Directory of Open Access Journals (Sweden)

    Duncan K Ralph

    2016-10-01

    The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called "rearrangement" forming progenitor B cells, then a Darwinian process of lineage diversification and selection called "affinity maturation." The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous, or may consist of only a single member. As a step to understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e. to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem "clonal family inference." In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov Model (multi-HMM) framework for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets.

  12. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.

  13. Risk factors and likelihood of Campylobacter colonization in broiler flocks

    Directory of Open Access Journals (Sweden)

    SL Kuana

    2007-09-01

    Full Text Available Campylobacter was investigated in cecal droppings, feces, and cloacal swabs of 22 flocks of 3- to 5-week-old broilers. Risk factors and the likelihood of the presence of this agent in these flocks were determined. Management practices, such as cleaning and disinfection, feeding, drinkers, and litter treatments, were assessed. Results were evaluated using the Odds Ratio (OR) test, and their significance was tested by Fisher's test (p < 0.05). A Campylobacter prevalence of 81.8% was found in the broiler flocks (18/22), and within positive flocks it varied between 85 and 100%. Campylobacter incidence among sample types was homogeneous, being 81.8% in cecal droppings, 80.9% in feces, and 80.4% in cloacal swabs (230). Flocks fed by automatic feeding systems presented a higher incidence of Campylobacter compared to those fed by tube feeders. Litter was reused on 63.6% of the farms, and, despite the lack of statistical significance, there was a higher likelihood of Campylobacter incidence when litter was reused. Foot baths were not used in 45.5% of the flocks, whereas the use of foot baths associated with deficient lime management increased the number of positive flocks, although with no statistical significance. The evaluated parameters were not significantly associated with Campylobacter colonization in the assessed broiler flocks.

  14. Menyoal Elaboration Likelihood Model (ELM dan Teori Retorika

    Directory of Open Access Journals (Sweden)

    Yudi Perbawaningsih

    2012-06-01

    Full Text Available Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a public lecture series intended to persuade students in choosing their concentration of study. The result shows that, in terms of persuasion effectiveness, it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of the message, and vice versa. Separating the two routes of the persuasion process as described in the ELM theory would not be relevant. Abstrak (translated): Persuasion is a communication process to form or change attitudes, which can be understood with the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates those theories in a public lecture used as a means of persuading students to choose a concentration of study, based on information processing. Using a survey method, the result obtained is that it is not sufficiently relevant to separate the message and the source when assessing the effectiveness of persuasion. The two are unified, meaning that the quality of the source is determined by the quality of the message it delivers, and vice versa. Separating the persuasion process into two routes, as described in ELM theory, becomes irrelevant.

  15. Corporate brand extensions based on the purchase likelihood: governance implications

    Directory of Open Access Journals (Sweden)

    Spyridon Goumas

    2018-03-01

    Full Text Available This paper examines the purchase likelihood of hypothetical service brand extensions from product companies focusing on consumer electronics, based on sector categorization and perceptions of fit between the existing product category and the image of the company. Prior research has recognized that levels of brand knowledge ease the transference of associations and affect to new products. Similarity to the existing products of the parent company and perceived image also influence the success of brand extensions. However, sector categorization may interfere with this relationship. The purpose of this study is to examine Greek consumers' attitudes towards hypothetical brand extensions, and how these are affected by consumers' existing knowledge about the brand, sector categorization, and perceptions of image and category fit of cross-sector extensions. This aim is examined in the context of technological categories, where less-known companies showed significant purchase likelihood and, contrary to the existing literature, service companies did not perform as positively as expected. Additional insights to the existing literature about sector categorization are provided. The effect of both image and category fit is also examined, and predictions regarding the effect of each are made.

  16. Gauging the likelihood of stable cavitation from ultrasound contrast agents.

    Science.gov (United States)

    Bader, Kenneth B; Holland, Christy K

    2013-01-07

    The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form I(CAV) = P(r)/f (where P(r) is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
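
    As a quick worked illustration of the index defined above and its relation to the mechanical index MI = P_r/√f, with illustrative (not study-specific) values:

```python
# MI = P_r / sqrt(f) and the proposed I_CAV = P_r / f, with P_r the peak
# rarefactional pressure in MPa and f the frequency in MHz (illustrative values).
import math

p_r = 1.5   # MPa, assumed for the example
f = 3.0     # MHz, assumed for the example

mi = p_r / math.sqrt(f)
i_cav = p_r / f
print(f"MI = {mi:.2f}, I_CAV = {i_cav:.2f} MPa/MHz")
```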

  17. Safe semi-supervised learning based on weighted likelihood.

    Science.gov (United States)

    Kawakita, Masanori; Takeuchi, Jun'ichi

    2014-05-01

    We are interested in developing a safe semi-supervised learning that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′ → ∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′ ...

  18. Deciphering Intrinsic Inter-subunit Couplings that Lead to Sequential Hydrolysis of F1-ATPase Ring

    Science.gov (United States)

    Dai, Liqiang; Flechsig, Holger; Yu, Jin

    2017-10-01

    The rotary sequential hydrolysis of the metabolic machine F1-ATPase is a prominent feature revealing high coordination among multiple chemical sites on the stator F1 ring, which also contributes to tight coupling between the chemical reaction and central γ-shaft rotation. High-speed AFM experiments discovered that the sequential hydrolysis was maintained on the F1 ring even in the absence of the γ rotor. To explore how the intrinsic sequential performance arises, we computationally investigated essential inter-subunit couplings on the hexameric ring of mitochondrial and bacterial F1. We first reproduced the sequential hydrolysis schemes as experimentally detected, by simulating tri-site ATP hydrolysis cycles on the F1 ring upon kinetically imposing inter-subunit couplings to substantially promote the release of the hydrolysis products. We found that it is key for certain ATP binding and hydrolysis events to facilitate the neighbor-site ADP and Pi release to support the sequential hydrolysis. The kinetically feasible couplings were then scrutinized through atomistic molecular dynamics simulations as well as coarse-grained simulations, in which we enforced targeted conformational changes for the ATP binding or hydrolysis. Notably, we detected the asymmetrical neighbor-site opening that would facilitate the ADP release upon the enforced ATP binding, and computationally captured the complete Pi release through charge hopping upon the enforced neighbor-site ATP hydrolysis. The ATP-hydrolysis-triggered Pi release revealed in the current TMD simulation confirms a recent prediction made from statistical analyses of single-molecule experimental data regarding the role ATP hydrolysis plays. Our studies therefore elucidate both the concerted chemical kinetics and the underlying structural dynamics of the inter-subunit couplings that lead to the rotary sequential hydrolysis of the F1 ring.

  19. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of squares attacked. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates used were 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the model that best fit the data was the negative binomial distribution up to 85 DAE (Phase I), and thereafter the Poisson distribution (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicated that the maximum number of sample units expected for decision-making is ∼39 and 31 for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
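
    A small sketch of how the reported decision lines could be applied during scouting: after sampling N plants, the cumulative number of attacked squares is compared against the lower and upper lines, and sampling continues while it stays between them. The line coefficients are those quoted above; the per-plant counts are hypothetical.

```python
def decision(n_samples, cum_attacked, phase=1):
    """Sequential decision for boll weevil sampling using the published lines."""
    if phase == 1:   # up to 85 days after emergence
        s0 = -5.1743 + 0.5730 * n_samples
        s1 = 5.1743 + 0.5730 * n_samples
    else:            # later in the season
        s0 = -4.2479 + 0.5771 * n_samples
        s1 = 4.2479 + 0.5771 * n_samples
    if cum_attacked <= s0:
        return "stop: infestation below threshold"
    if cum_attacked >= s1:
        return "stop: threshold exceeded, control recommended"
    return "continue sampling"

# Hypothetical sequence of attacked-square counts per sampled plant.
counts = [1, 0, 2, 1, 3, 2, 2, 4, 1, 3]
cum = 0
for n, c in enumerate(counts, start=1):
    cum += c
    print(n, cum, decision(n, cum, phase=1))
```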

  20. Sequential grouping constraints on across-channel auditory processing

    DEFF Research Database (Denmark)

    Oxenham, Andrew J.; Dau, Torsten

    2005-01-01

    Søren Buus was one of the pioneers in the study of across-channel auditory processing. His influential 1985 paper showed that introducing slow fluctuations to a low-frequency masker could reduce the detection thresholds of a high-frequency signal by as much as 25 dB [S. Buus, J. Acoust. Soc. Am. 78, 1958–1965 (1985)]. Søren explained this surprising result in terms of the spread of masker excitation and across-channel processing of envelope fluctuations. A later study [S. Buus and C. Pan, J. Acoust. Soc. Am. 96, 1445–1457 (1994)] pioneered the use of the same stimuli in tasks where across-channel processing could either help or hinder performance. In the present set of studies we also use paradigms in which across-channel processing can lead to either improvement or deterioration in performance. We show that sequential grouping constraints can affect both types of paradigm. In particular ...

  1. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    Science.gov (United States)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.

  2. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    Science.gov (United States)

    Carpenter, James R.; Markley, F Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.

  3. LikelihoodLib - Fitting, Function Maximization, and Numerical Analysis

    CERN Document Server

    Smirnov, I B

    2001-01-01

    A new class library is designed for function maximization, minimization, solution of equations, and other problems related to the mathematical analysis of multi-parameter functions by numerical iterative methods. When we search for the maximum or another special point of a function, we may change and fit all parameters simultaneously, sequentially, recursively, or by any combination of these methods. The discussion focuses on the first and most complicated method, although the others are also supported by the library. For this method we apply: control of precision by interval computations; calculation of derivatives either by differential arithmetic or by the method of finite differences with step lengths that suppress the influence of numerical noise; possible synchronization of the objective function calls with minimization of the number of iterations; and competitive application of various methods for step calculation, converging to the solution along many trajectories.

  4. Maximum likelihood positioning algorithm for high-resolution PET scanners

    International Nuclear Information System (INIS)

    Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar

    2016-01-01

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML

  5. Elevated levels of serum IL-5 are associated with an increased likelihood of major depressive disorder.

    Science.gov (United States)

    Elomaa, Antti-Pekka; Niskanen, Leo; Herzig, Karl-Heinz; Viinamäki, Heimo; Hintikka, Jukka; Koivumaa-Honkanen, Heli; Honkalampi, Kirsi; Valkonen-Korhonen, Minna; Harvima, Ilkka T; Lehto, Soili M

    2012-01-09

    Inflammatory mediators in both the peripheral circulation and central nervous system (CNS) are dysregulated in major depressive disorder (MDD). Nevertheless, relatively little is known about the role of the T-helper (Th)-2 effector cytokines interleukin (IL)-5 and IL-13 in MDD. We examined the serum levels of these cytokines and a Th-1 comparison cytokine, interferon (IFN)-γ, in 116 individuals (MDD, n = 58; controls, n = 58). In our basic multivariate model controlling for the effects of potential confounders on the associations between MDD and the examined cytokines, each 1-unit increase in the serum IL-5 level increased the likelihood of belonging to the MDD group by 76% (OR 1.76, 95% CI 1.03-2.99, p = 0.04; model covariates: age, gender, marital status, daily smoking and alcohol use). The likelihood further increased in models additionally controlling for the effects of the use of antidepressants and NSAIDS, and a diagnosis of asthma. No such associations were detected with regard to IL-13 (OR 1.08, 95% CI 0.96-1.22, p = 0.22) or IFN-γ (OR 1.02, 95% CI 0.99-1.05, p = 0.23). Elevated levels of IL-5, which uses the neural plasticity-related RAS GTPase-extracellular signal-regulated kinase (Ras-ERK) pathway to mediate its actions in the central nervous system (CNS), could be one of the factors underlying the depression-related changes in CNS plasticity.

  6. Marginal likelihood estimation of negative binomial parameters with applications to RNA-seq data.

    Science.gov (United States)

    León-Novelo, Luis; Fuentes, Claudio; Emerson, Sarah

    2017-10-01

    RNA-Seq data characteristically exhibits large variances, which need to be appropriately accounted for in any proposed model. We first explore the effects of this variability on the maximum likelihood estimator (MLE) of the dispersion parameter of the negative binomial distribution, and propose instead to use an estimator obtained via maximization of the marginal likelihood in a conjugate Bayesian framework. We show, via simulation studies, that the marginal MLE can better control this variation and produce a more stable and reliable estimator. We then formulate a conjugate Bayesian hierarchical model, and use this new estimator to propose a Bayesian hypothesis test to detect differentially expressed genes in RNA-Seq data. We use numerical studies to show that our much simpler approach is competitive with other negative binomial based procedures, and we use a real data set to illustrate the implementation and flexibility of the procedure. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. On the insufficiency of arbitrarily precise covariance matrices: non-Gaussian weak-lensing likelihoods

    Science.gov (United States)

    Sellentin, Elena; Heavens, Alan F.

    2018-01-01

    We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for σ8 is shifted without broadening, but we find no significant reduction in the tension with σ8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.

  8. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  9. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
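
    A minimal numerical illustration of this equivalence for first-order binary processes: the plug-in transfer entropy from Y to X is, up to a factor of 2N, the log-likelihood-ratio statistic comparing predictive models of X with and without Y's past. The simulated coupling below is an arbitrary assumption.

```python
import numpy as np

def transfer_entropy_binary(x, y):
    """Plug-in estimate of TE_{Y -> X} in nats for binary series, history length 1."""
    x_next, x_past, y_past = x[1:], x[:-1], y[:-1]
    n = len(x_next)
    c = np.zeros((2, 2, 2))                 # counts over (x_{t+1}, x_t, y_t)
    for a, b, d in zip(x_next, x_past, y_past):
        c[a, b, d] += 1
    p = c / n
    te = 0.0
    for a in range(2):
        for b in range(2):
            for d in range(2):
                if p[a, b, d] == 0:
                    continue
                p_full = c[a, b, d] / c[:, b, d].sum()     # p(x_{t+1} | x_t, y_t)
                p_null = c[a, b, :].sum() / c[:, b, :].sum()  # p(x_{t+1} | x_t)
                te += p[a, b, d] * np.log(p_full / p_null)
    return te, n

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)
x = np.empty_like(y)
x[0] = 0
flips = rng.random(len(y)) < 0.1
x[1:] = np.where(flips[1:], 1 - y[:-1], y[:-1])   # x copies y's past with 10% noise

te, n = transfer_entropy_binary(x, y)
lr_stat = 2 * n * te   # log-likelihood-ratio statistic; chi-squared under zero TE
print(f"TE = {te:.4f} nats, LR statistic = {lr_stat:.1f}")
```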

  10. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi-square ...

  11. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  12. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which would act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculation of a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model ...

  13. Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Zhang Li; Kang Kejun; Chen Zhiqiang; Zhu Peiping

    2009-01-01

    Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition with limited-angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm for low-dose or noisy circumstances and bring DEI closer to clinical application. The theoretical statistical models of DEI data in physics are analyzed and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS show better contrast than those of the well-known 'shift-and-add' algorithm and the FBP algorithm. (authors)

  14. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision

    Directory of Open Access Journals (Sweden)

    L. Balaji

    2015-01-01

    Full Text Available H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC executes on different electronic gadgets such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. The scaling dimensions include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method.

  15. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision.

    Science.gov (United States)

    Balaji, L; Thyagharajan, K K

    2015-01-01

    H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC executes on different electronic gadgets such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. The scaling dimensions include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method.

  16. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-09-03

    We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
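
    The objective being maximized is the standard Gaussian log-likelihood of the spatial observations; a dense-matrix baseline is sketched below (the H-matrix approximation that removes the cubic cost is the paper's contribution and is not reproduced here). The exponential covariance and the parameter values are illustrative.

```python
import numpy as np

def gaussian_loglik(z, locs, variance, corr_length, nugget=1e-8):
    """Gaussian log-likelihood with an exponential covariance (dense baseline).
    The O(n^3) Cholesky here is exactly what H-matrix approximations aim to avoid."""
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    C = variance * np.exp(-d / corr_length) + nugget * np.eye(len(z))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)               # L^{-1} z, so z^T C^{-1} z = |alpha|^2
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    n = len(z)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + alpha @ alpha)

rng = np.random.default_rng(1)
locs = rng.random((200, 2))                      # 200 random locations in the unit square
z = rng.standard_normal(200)                     # placeholder observations
# Evaluate the likelihood on a small grid of candidate correlation lengths.
for ell in (0.05, 0.1, 0.2, 0.4):
    print(ell, round(gaussian_loglik(z, locs, variance=1.0, corr_length=ell), 2))
```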

  17. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e., content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g., GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to result in a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
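
    A minimal sketch of the two-stage idea with Gaussian mixture models as the per-feature parametric models and log-likelihoods summed across feature streams before the final hard decision. The synthetic "timbre" and "rhythm" streams and the two genres are placeholders, not the features used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
genres = ["classical", "rock"]

# Placeholder training data: two feature streams per genre.
train = {
    "classical": {"timbre": rng.normal(0.0, 1.0, (300, 4)),
                  "rhythm": rng.normal(0.0, 1.0, (300, 2))},
    "rock":      {"timbre": rng.normal(1.5, 1.0, (300, 4)),
                  "rhythm": rng.normal(2.0, 1.0, (300, 2))},
}

# Stage 1: one GMM per (genre, feature stream).
models = {g: {f: GaussianMixture(n_components=3, random_state=0).fit(X)
              for f, X in streams.items()}
          for g, streams in train.items()}

def classify(features):
    """Stage 2: fuse per-stream log-likelihoods and take the arg-max genre."""
    scores = {g: sum(models[g][f].score_samples(X).sum() for f, X in features.items())
              for g in genres}
    return max(scores, key=scores.get), scores

# A hypothetical test song represented by frames of each feature stream.
test_song = {"timbre": rng.normal(1.4, 1.0, (50, 4)),
             "rhythm": rng.normal(1.9, 1.0, (50, 2))}
print(classify(test_song)[0])
```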

  18. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  19. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions ...

  20. The elaboration likelihood model and communication about food risks.

    Science.gov (United States)

    Frewer, L J; Howard, C; Hedderley, D; Shepherd, R

    1997-12-01

    Factors such as hazard type and source credibility have been identified as important in the establishment of effective strategies for risk communication. The elaboration likelihood model was adapted to investigate the potential impact of hazard type, information source, and persuasive content of information on individual engagement in elaborative, or thoughtful, cognitions about risk messages. One hundred sixty respondents were allocated to one of eight experimental groups, and the effects of source credibility, persuasive content of information and hazard type were systematically varied. The impact of the different factors on beliefs about the information and on elaborative processing was examined. Low credibility was particularly important in reducing risk perceptions, although persuasive content and hazard type were also influential in determining whether elaborative processing occurred.

  1. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    ... -data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator ... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce ... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed ...

  2. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods ...

  3. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    International Nuclear Information System (INIS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.

    2015-01-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf

  4. Likelihood of illegal alcohol sales at professional sport stadiums.

    Science.gov (United States)

    Toomey, Traci L; Erickson, Darin J; Lenk, Kathleen M; Kilian, Gunna R

    2008-11-01

    Several studies have assessed the propensity for illegal alcohol sales at licensed alcohol establishments and community festivals, but no previous studies examined the propensity for these sales at professional sport stadiums. In this study, we assessed the likelihood of alcohol sales to both underage youth and obviously intoxicated patrons at professional sports stadiums across the United States, and assessed the factors related to the likelihood of both types of alcohol sales. We conducted pseudo-underage (i.e., persons age 21 or older who appear under 21) and pseudo-intoxicated (i.e., persons feigning intoxication) alcohol purchase attempts at stadiums that house professional hockey, basketball, baseball, and football teams. We conducted the purchase attempts at 16 sport stadiums located in 5 states. We measured 2 outcome variables: pseudo-underage sale (yes, no) and pseudo-intoxicated sale (yes, no), and 3 types of independent variables: (1) seller characteristics, (2) purchase attempt characteristics, and (3) event characteristics. Following univariate and bivariate analyses, we fit a separate series of logistic generalized mixed regression models for each outcome variable. The overall sales rates to the pseudo-underage and pseudo-intoxicated buyers were 18% and 74%, respectively. In the multivariate logistic analyses, we found that the odds of a sale to a pseudo-underage buyer in the stands was 2.9 times as large as the odds of a sale at the concession booths (30% vs. 13%; p = 0.01). The odds of a sale to an obviously intoxicated buyer in the stands was 2.9 times as large as the odds of a sale at the concession booths (89% vs. 73%; p = 0.02). Similar to studies assessing illegal alcohol sales at licensed alcohol establishments and community festivals, findings from this study show the need for interventions specifically focused on illegal alcohol sales at professional sporting events.
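
    For readers checking the arithmetic, the reported odds ratios can be approximately reproduced from the raw sale rates quoted above (the paper's own estimates come from mixed regression models, so small differences are expected):

```python
def odds_ratio(p1, p2):
    """Odds ratio comparing proportion p1 with proportion p2."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Pseudo-underage sales: 30% in the stands vs. 13% at concession booths.
print(round(odds_ratio(0.30, 0.13), 1))   # ~2.9
# Pseudo-intoxicated sales: 89% in the stands vs. 73% at concession booths.
print(round(odds_ratio(0.89, 0.73), 1))   # ~3.0
```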

  5. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
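
    The procedure described in the tutorial can be sketched for a binary outcome and treatment as: fit an initial outcome model, fit a propensity model, update the outcome predictions through a one-dimensional "clever covariate" fluctuation, and average the updated counterfactual predictions. The sketch below is a simplified Python analogue with simulated data and plain logistic models for both nuisance parameters (the tutorial itself provides R and Stata code, and recommends flexible machine-learning fits instead).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))                                # baseline covariates
pA = 1 / (1 + np.exp(-(0.4 * W[:, 0] - 0.3 * W[:, 1])))    # treatment mechanism
A = rng.binomial(1, pA)
pY = 1 / (1 + np.exp(-(-0.5 + 0.8 * A + 0.5 * W[:, 0] + 0.3 * W[:, 1])))
Y = rng.binomial(1, pY)

ones = np.ones(n)
X_obs = np.column_stack([ones, A, W])        # design: intercept, treatment, covariates
X_1 = np.column_stack([ones, ones, W])       # everyone treated
X_0 = np.column_stack([ones, 0 * ones, W])   # everyone untreated

# Step 1: initial outcome model Q(A, W) = E[Y | A, W].
Q_fit = sm.GLM(Y, X_obs, family=sm.families.Binomial()).fit()
Q_AW, Q_1W, Q_0W = Q_fit.predict(X_obs), Q_fit.predict(X_1), Q_fit.predict(X_0)

# Step 2: propensity score g(W) = P(A = 1 | W), bounded away from 0 and 1.
g_fit = sm.GLM(A, np.column_stack([ones, W]), family=sm.families.Binomial()).fit()
g_W = np.clip(g_fit.predict(np.column_stack([ones, W])), 0.01, 0.99)

# Step 3: clever covariate and fluctuation (intercept-free logistic fit with offset).
H_AW = A / g_W - (1 - A) / (1 - g_W)
logit = lambda p: np.log(p / (1 - p))
eps = sm.GLM(Y, H_AW.reshape(-1, 1), family=sm.families.Binomial(),
             offset=logit(Q_AW)).fit().params[0]

# Step 4: update the counterfactual predictions and average their difference.
expit = lambda x: 1 / (1 + np.exp(-x))
Q1_star = expit(logit(Q_1W) + eps / g_W)
Q0_star = expit(logit(Q_0W) - eps / (1 - g_W))
print(f"TMLE estimate of the average treatment effect: {np.mean(Q1_star - Q0_star):.3f}")
```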

  6. Glycerol production by Oenococcus oeni during sequential and simultaneous cultures with wine yeast strains.

    Science.gov (United States)

    Ale, Cesar E; Farías, Marta E; Strasser de Saad, Ana M; Pasteris, Sergio E

    2014-07-01

    Growth and fermentation patterns of Saccharomyces cerevisiae, Kloeckera apiculata, and Oenococcus oeni strains cultured in grape juice medium were studied. In pure, sequential and simultaneous cultures, the strains reached the stationary growth phase between 2 and 3 days. Pure and mixed K. apiculata and S. cerevisiae cultures used mainly glucose, producing ethanol, organic acids, and 4.0 and 0.1 mM glycerol, respectively. In sequential cultures, O. oeni achieved about 1 log unit at 3 days using mainly fructose and L-malic acid. Highest sugars consumption was detected in K. apiculata supernatants, lactic acid being the major end-product. 8.0 mM glycerol was found in 6-day culture supernatants. In simultaneous cultures, total sugars and L-malic acid were used at 3 days and 98% of ethanol and glycerol were detected. This study represents the first report of the population dynamics and metabolic behavior of yeasts and O. oeni in sequential and simultaneous cultures and contributes to the selection of indigenous strains to design starter cultures for winemaking, also considering the inclusion of K. apiculata. The sequential inoculation of yeasts and O. oeni would enhance glycerol production, which confers desirable organoleptic characteristics to wines, while organic acids levels would not affect their sensory profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. A-Track: Detecting Moving Objects in FITS images

    Science.gov (United States)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

  8. Analysis of Pairwise Interactions in a Maximum Likelihood Sense to Identify Leaders in a Group

    Directory of Open Access Journals (Sweden)

    Violet Mwaffo

    2017-07-01

    Full Text Available Collective motion in animal groups manifests itself in the form of highly coordinated maneuvers determined by local interactions among individuals. A particularly critical question in understanding the mechanisms behind such interactions is to detect and classify leader–follower relationships within the group. In the technical literature of coupled dynamical systems, several methods have been proposed to reconstruct interaction networks, including linear correlation analysis, transfer entropy, and event synchronization. While these analyses have been helpful in reconstructing network models from neuroscience to public health, rules on the most appropriate method to use for a specific dataset are lacking. Here, we demonstrate the possibility of detecting leaders in a group from raw positional data in a model-free approach that combines multiple methods in a maximum likelihood sense. We test our framework on synthetic data of groups of self-propelled Vicsek particles, where a single agent acts as a leader and both the size of the interaction region and the level of inherent noise are systematically varied. To assess the feasibility of detecting leaders in real-world applications, we study a synthetic dataset of fish shoaling, generated by using a recent data-driven model for social behavior, and an experimental dataset of pharmacologically treated zebrafish. Not only does our approach offer a robust strategy to detect leaders in synthetic data but it also allows for exploring the role of psychoactive compounds on leader–follower relationships.

  9. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    ... a least-squares problem. The second procedure, Hypesti, first approaches the maximum-likelihood estimate by iterating in the profile log-likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration using all four parameters ...

  10. A short proof that phylogenetic tree reconstruction by maximum likelihood is hard.

    Science.gov (United States)

    Roch, Sebastien

    2006-01-01

    Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.

  11. A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood is Hard

    OpenAIRE

    Roch, S.

    2005-01-01

    Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.

  12. On Locally Most Powerful Sequential Rank Tests

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 36, č. 1 (2017), s. 111-125 ISSN 0747-4946 R&D Projects: GA ČR GA17-07384S Grant - others:Nadační fond na podporu vědy(CZ) Neuron Institutional support: RVO:67985556 Keywords: nonparametric tests * sequential ranks * stopping variable Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.339, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/kalina-0474065.pdf

  13. Decoding restricted participation in sequential electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Paschmann, Martin

    2017-06-15

    Restricted participation in sequential markets may cause high price volatility and welfare losses. In this paper we therefore analyze the drivers of restricted participation in the German intraday auction which is a short-term electricity market with quarter-hourly products. Applying a fundamental electricity market model with 15-minute temporal resolution, we identify the lack of sub-hourly market coupling being the most relevant driver of restricted participation. We derive a proxy for price volatility and find that full market coupling may trigger quarter-hourly price volatility to decrease by a factor close to four.

  14. THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED

    Directory of Open Access Journals (Sweden)

    Stanislav LICHOROBIEC

    2016-06-01

    Full Text Available This article documents the development of the noninvasive use of explosives during the destruction of ice mass in river flows. The system of special sequentially-timed charges utilizes the increase in efficiency of cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the entire system of timed charges. Timing, spatial combinations during placement, and the linking of these charges result in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.

  15. Pass-transistor asynchronous sequential circuits

    Science.gov (United States)

    Whitaker, Sterling R.; Maki, Gary K.

    1989-01-01

    Design methods for asynchronous sequential pass-transistor circuits, which result in circuits that are hazard- and critical-race-free and which have added degrees of freedom for the input signals, are discussed. The design procedures are straightforward and easy to implement. Two single-transition-time state assignment methods are presented, and hardware bounds for each are established. A surprising result is that the hardware realizations for each next-state variable and output variable are identical for a given flow table. Thus, a state machine with N states and M outputs can be constructed using a single layout replicated N + M times.

  16. Boundary conditions in random sequential adsorption

    Science.gov (United States)

    Cieśla, Michał; Ziff, Robert M.

    2018-04-01

    The influence of different boundary conditions on the density of random packings of disks is studied. Packings are generated using the random sequential adsorption algorithm with three different types of boundary conditions: periodic, open, and wall. It is found that the finite size effects are smallest for periodic boundary conditions, as expected. On the other hand, in the case of open and wall boundaries it is possible to introduce an effective packing size and a constant correction term to significantly improve the packing densities.
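
    A compact sketch of the random sequential adsorption of equal disks with periodic boundary conditions, the algorithm whose packing densities the study compares across boundary types. Box size, disk radius, and the number of attempts are illustrative, and a fixed attempt budget stops short of true saturation.

```python
import numpy as np

def rsa_disks_periodic(box=20.0, radius=0.5, attempts=100_000, seed=0):
    """Random sequential adsorption of equal disks in a periodic square box.
    A trial point is accepted only if it overlaps no previously placed disk."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    min_d2 = (2 * radius) ** 2
    for _ in range(attempts):
        p = rng.random(2) * box
        if centers.size:
            d = np.abs(p - centers)
            d = np.minimum(d, box - d)        # minimum-image convention
            if np.any(np.sum(d * d, axis=1) < min_d2):
                continue                      # overlap: reject the trial point
        centers = np.vstack([centers, p])
    density = len(centers) * np.pi * radius**2 / box**2
    return centers, density

centers, density = rsa_disks_periodic()
# The 2D disk jamming density is about 0.547; a finite attempt budget falls short of it.
print(len(centers), round(density, 3))
```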

  17. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all hard- and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This

  18. From sequential to parallel programming with patterns

    CERN Document Server

    CERN. Geneva

    2018-01-01

    To increase both performance and efficiency, our programming models need to adapt to better exploit modern processors. The classic idioms and patterns for programming such as loops, branches or recursion are the pillars of almost every code and are well known among all programmers. These patterns all have in common that they are sequential in nature. Embracing parallel programming patterns, which allow us to program for multi- and many-core hardware in a natural way, greatly simplifies the task of designing a program that scales and performs on modern hardware, independently of the used programming language, and in a generic way.
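
    As a minimal illustration of the idea (independent of the lecture material), the sketch below expresses the same computation first as a classic sequential loop and then as a map pattern executed by a pool of worker processes; the `work` function is a stand-in for any pure, per-item computation.

```python
from concurrent.futures import ProcessPoolExecutor

def work(x: int) -> int:
    # Stand-in for an expensive, side-effect-free per-item computation.
    return sum(i * i for i in range(x))

def run_sequential(items):
    # Classic sequential idiom: an explicit loop over the data.
    return [work(x) for x in items]

def run_parallel(items, workers=4):
    # Map pattern: the same computation, written so that independent items
    # can be distributed across processes without changing the algorithm.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, items))

if __name__ == "__main__":
    data = [20_000 + i for i in range(8)]
    assert run_sequential(data) == run_parallel(data)
    print("sequential and parallel results agree")
```

    The map pattern only pays off when the per-item work is independent and large enough to amortize the scheduling overhead.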

  19. Sequential extraction of uranium metal contamination

    International Nuclear Information System (INIS)

    Murry, M.M.; Spitz, H.B.; Connick, W.B.

    2016-01-01

    Samples of uranium contaminated dirt collected from the dirt floor of an abandoned metal rolling mill were analyzed for uranium using a sequential extraction protocol involving a series of five increasingly aggressive solvents. The quantity of uranium extracted from the contaminated dirt by each reagent can aid in predicting the fate and transport of the uranium contamination in the environment. Uranium was separated from each fraction using anion exchange and electrodeposition, and analyzed by alpha spectroscopy. Results demonstrate that approximately 77 % of the uranium was extracted using NH4Ac in 25 % acetic acid. (author)

  20. Simultaneous optimization of sequential IMRT plans

    International Nuclear Information System (INIS)

    Popple, Richard A.; Prellop, Perri B.; Spencer, Sharon A.; Santos, Jennifer F. de los; Duan, Jun; Fiveash, John B.; Brezovich, Ivan A.

    2005-01-01

    Radiotherapy often comprises two phases, in which irradiation of a volume at risk for microscopic disease is followed by a sequential dose escalation to a smaller volume either at a higher risk for microscopic disease or containing only gross disease. This technique is difficult to implement with intensity modulated radiotherapy, as the tolerance doses of critical structures must be respected over the sum of the two plans. Techniques that include an integrated boost have been proposed to address this problem. However, clinical experience with such techniques is limited, and many clinicians are uncomfortable prescribing nonconventional fractionation schemes. To solve this problem, we developed an optimization technique that simultaneously generates sequential initial and boost IMRT plans. We have developed an optimization tool that uses a commercial treatment planning system (TPS) and a high level programming language for technical computing. The tool uses the TPS to calculate the dose deposition coefficients (DDCs) for optimization. The DDCs were imported into external software and the treatment ports duplicated to create the boost plan. The initial, boost, and tolerance doses were specified and used to construct cost functions. The initial and boost plans were optimized simultaneously using a gradient search technique. Following optimization, the fluence maps were exported to the TPS for dose calculation. Seven patients treated using sequential techniques were selected from our clinical database. The initial and boost plans used to treat these patients were developed independently of each other by dividing the tolerance doses proportionally between the initial and boost plans and then iteratively optimizing the plans until a summation that met the treatment goals was obtained. We used the simultaneous optimization technique to generate plans that met the original planning goals. The coverage of the initial and boost target volumes in the simultaneously optimized

  1. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    Science.gov (United States)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
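
    The JSATS solver itself is not reproduced in the record, but the underlying estimation problem can be sketched: with independent Gaussian errors on the time-difference-of-arrival (TDOA) measurements, maximum likelihood reduces to least squares over the TDOA residuals. The hydrophone positions, sound speed and optimizer below are illustrative assumptions, not the published implementation.

```python
import numpy as np
from scipy.optimize import minimize

SOUND_SPEED = 1480.0  # assumed speed of sound in water (m/s)

def predicted_tdoa(pos, hydrophones):
    """TDOA of each hydrophone relative to the first, for a source at pos."""
    t = np.linalg.norm(hydrophones - pos, axis=1) / SOUND_SPEED
    return t[1:] - t[0]

def locate(measured_tdoa, hydrophones, x0):
    """Gaussian ML estimate of the source position = least squares on residuals."""
    def cost(p):
        r = predicted_tdoa(p, hydrophones) - measured_tdoa
        return 0.5 * float(r @ r)
    return minimize(cost, x0, method="Nelder-Mead").x

if __name__ == "__main__":
    hydro = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0],
                      [0.0, 50.0, 0.0], [0.0, 0.0, 30.0]])
    true_pos = np.array([12.0, 20.0, 5.0])
    rng = np.random.default_rng(1)
    measured = predicted_tdoa(true_pos, hydro) + 1e-6 * rng.normal(size=3)
    print("estimated position:", locate(measured, hydro, x0=np.ones(3)))
```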

  2. Effect of cryoablation sequential chemotherapy on patients with advanced non-small cell lung cancer

    Directory of Open Access Journals (Sweden)

    Shu-Hui Yao

    2016-03-01

    Full Text Available Objective: To evaluate the effect of cryoablation sequential chemotherapy on patients with advanced non-small cell lung cancer. Methods: A total of 39 cases with advanced non-small cell lung cancer who received cryoablation sequential chemotherapy and 39 cases with advanced non-small cell lung cancer who received chemotherapy alone were selected and enrolled in sequential group and control group, disease progression and survival of two groups were followed up, and contents of tumor markers and angiogenesis molecules in serum as well as contents of T-lymphocyte subsets in peripheral blood were detected. Results: Progression-free survival and median overall survival (mOS) of sequential group were longer than those of control group, and cumulative cases of tumor progression at various points in time were significantly fewer than those of control group (P<0.05); 1 month after treatment, serum tumor markers CEA, CYFRA21-1 and NSE contents, serum angiogenesis molecules PCDGF, VEGF and HDGF contents as well as CD3+CD4-CD8+CD28-T cell content in peripheral blood of sequential group were significantly lower than those of control group (P<0.05), and contents of CD3+CD4+CD8-T cell and CD3+CD4-CD8+CD28+T cell in peripheral blood were higher than those of control group (P<0.05). Conclusions: Cryoablation sequential chemotherapy can improve the prognosis of patients with advanced non-small cell lung cancer, delay disease progression, prolong survival time, inhibit angiogenesis and improve immune function.

  3. Macroscopic Dynamic Modeling of Sequential Batch Cultures of Hybridoma Cells: An Experimental Validation

    Directory of Open Access Journals (Sweden)

    Laurent Dewasme

    2017-02-01

    Full Text Available Hybridoma cells are commonly grown for the production of monoclonal antibodies (MAb. For monitoring and control purposes of the bioreactors, dynamic models of the cultures are required. However these models are difficult to infer from the usually limited amount of available experimental data and do not focus on target protein production optimization. This paper explores an experimental case study where hybridoma cells are grown in a sequential batch reactor. The simplest macroscopic reaction scheme translating the data is first derived using a maximum likelihood principal component analysis. Subsequently, nonlinear least-squares estimation is used to determine the kinetic laws. The resulting dynamic model reproduces quite satisfactorily the experimental data, as evidenced in direct and cross-validation tests. Furthermore, model predictions can also be used to predict optimal medium renewal time and composition.
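
    The second step described above, estimating kinetic laws by nonlinear least squares, can be illustrated with a hedged sketch: a Monod-type rate law is fitted to hypothetical substrate/rate data with `scipy.optimize.curve_fit`. Neither the rate law nor the data correspond to the paper's identified model; they only show the estimation step.

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(s, mu_max, k_s):
    """Monod-type kinetic law: specific rate as a function of substrate concentration."""
    return mu_max * s / (k_s + s)

# Hypothetical measurements: substrate concentration and observed specific rate.
s_obs = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0])
r_obs = np.array([0.009, 0.023, 0.033, 0.047, 0.058, 0.066, 0.069])

# Nonlinear least-squares estimates of (mu_max, k_s) and their standard errors.
params, cov = curve_fit(monod, s_obs, r_obs, p0=[0.05, 1.0])
print("mu_max, k_s :", params)
print("std. errors :", np.sqrt(np.diag(cov)))
```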

  4. Comparison of Sequential and Variational Data Assimilation

    Science.gov (United States)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
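
    For readers unfamiliar with the sequential technique referred to above, the following is a minimal sketch of one stochastic Ensemble Kalman Filter analysis step, in which the forecast model is treated as a black box that only has to supply an ensemble of state vectors. The observation operator, error variance and dimensions are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, obs_var, rng):
    """One stochastic Ensemble Kalman Filter update.
    ensemble : (n_members, n_state) forecast ensemble
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    obs_var  : scalar observation-error variance"""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                   # state anomalies
    Y = X @ H.T                                            # observation-space anomalies
    P_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(len(y_obs))
    P_xy = X.T @ Y / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)                         # Kalman gain
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(y_obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T   # analysis ensemble

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ens = rng.normal(size=(50, 3))                         # toy 3-variable state, 50 members
    H = np.array([[1.0, 0.0, 0.0]])                        # observe the first variable only
    analysis = enkf_analysis(ens, np.array([0.8]), H, obs_var=0.1, rng=rng)
    print("analysis ensemble mean:", analysis.mean(axis=0))
```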

  5. Time scale of random sequential adsorption.

    Science.gov (United States)

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.

  6. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  7. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  8. Sequential and double sequential fission observed in heavy ion interaction of (11.67 MeV/u)197Au projectile with 197Au target

    Energy Technology Data Exchange (ETDEWEB)

    Nasir, Tabassum [Gomal University, Dera Ismail Khan (Pakistan). Dept. of Physics; Khan, Ehsan Ullah [COMSATS Institute of Information Technology (CIIT), Islamabad (Pakistan). Dept. of Physics; Baluch, Javaid Jahan [COMSATS Institute of Information Technology (CIIT), Abbottabad, (Pakistan). Dept. of Environmental Sciences; Shafi-Ur-Rehman, [PAEC, Dera Ghazi Khan (Pakistan). ISL Project; Matiullah, [PINSTECH, Nilore, Islamabad (Pakistan). Physics Div.; Rafique, Muhammad [University of Azad Jammu and Kashmir, Muzaffarabad (Pakistan). Dept. of Physics

    2009-09-15

    The heavy ion interaction of 11.67 MeV/u 197Au + 197Au has been investigated using mica as a passive detector. By employing Solid State Nuclear Track Detection Technique the data of elastic scattering as well as inelastic reaction channel was collected. The off-line data analysis of multi-pronged events was performed by measuring the three-dimensional geometrical coordinates of correlated tracks on event-by-event basis. Multi pronged events observed in this reaction were due to sequential and double sequential fission. Using a computer code PRONGY based on the procedure of internal calibration, it was possible to derive quantities like mass transfer, total kinetic energy loss and scattering angles. (author)

  9. Sequential and double sequential fission observed in heavy ion interaction of (11.67 MeV/u)197Au projectile with 197Au target

    International Nuclear Information System (INIS)

    Nasir, Tabassum; Khan, Ehsan Ullah; Baluch, Javaid Jahan; Shafi-Ur-Rehman; Matiullah; Rafique, Muhammad

    2009-01-01

    The heavy ion interaction of 11.67 MeV/u 197Au + 197Au has been investigated using mica as a passive detector. By employing Solid State Nuclear Track Detection Technique the data of elastic scattering as well as inelastic reaction channel was collected. The off-line data analysis of multi-pronged events was performed by measuring the three-dimensional geometrical coordinates of correlated tracks on event-by-event basis. Multi pronged events observed in this reaction were due to sequential and double sequential fission. Using a computer code PRONGY based on the procedure of internal calibration, it was possible to derive quantities like mass transfer, total kinetic energy loss and scattering angles. (author)

  10. Planck 2013 results. XV. CMB power spectra and likelihood

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. 
A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. We assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK² at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, ns = 1. Increasing the multipole range beyond ℓ ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-ℓ spectrum in the form of a power deficit of 5-10% at ℓ ≲ 40, with a statistical significance of 2.5-3σ. Without a theoretically motivated model for

  11. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this

  12. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    Zadora, G.; Neocleous, T.

    2009-01-01

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
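
    The two records above evaluate likelihood ratios built from a between-object and a within-object level of variability. A heavily simplified, univariate sketch of that construction is shown below: a kernel density estimate over a database of object means plays the role of the between-object distribution, the within-object distribution is taken as normal with known spread, and LR = p(E|H1)/p(E|H2) compares the same-source and different-source explanations of a recovered and a control measurement. All numbers are placeholders, not the glass data analysed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def likelihood_ratio(y_recovered, y_control, object_means, within_sd):
    """LR = p(E|H1)/p(E|H2) for a simple two-level univariate model.
    H1: both measurements come from the same (unknown) object.
    H2: they come from two independent objects drawn from the background population."""
    theta = np.linspace(object_means.min() - 1.0, object_means.max() + 1.0, 4000)
    f_theta = gaussian_kde(object_means)(theta)       # between-object density (KDE)
    d_theta = theta[1] - theta[0]

    def marginal(y):
        # p(y | different source): within-object normal averaged over the background.
        return np.sum(norm.pdf(y, loc=theta, scale=within_sd) * f_theta) * d_theta

    numerator = np.sum(norm.pdf(y_recovered, theta, within_sd) *
                       norm.pdf(y_control, theta, within_sd) * f_theta) * d_theta
    return numerator / (marginal(y_recovered) * marginal(y_control))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    background = rng.normal(0.0, 1.0, size=200)       # hypothetical object means
    print(likelihood_ratio(0.42, 0.45, background, within_sd=0.1))
```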

  13. A simulation study of likelihood inference procedures in rayleigh distribution with censored data

    International Nuclear Information System (INIS)

    Baklizi, S. A.; Baker, H. M.

    2001-01-01

    Inference procedures based on the likelihood function are considered for the one-parameter Rayleigh distribution with type 1 and type 2 censored data. Using simulation techniques, the finite sample performances of the maximum likelihood estimator and the large sample likelihood interval estimation procedures based on the Wald, the Rao, and the likelihood ratio statistics are investigated. It appears that the maximum likelihood estimator is unbiased. The approximate variance estimates obtained from the asymptotic normal distribution of the maximum likelihood estimator are accurate under type 2 censored data, while they tend to be smaller than the actual variances when considering type 1 censored data of small size. It also appears that interval estimation based on the Wald and Rao statistics needs a much larger sample size than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy. (authors). 15 refs., 4 tabs
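
    As a small illustration of the setting studied above (not the authors' simulation code), the sketch below writes down the log-likelihood of the one-parameter Rayleigh distribution under type 2 censoring, where only the r smallest of n observations are seen, and maximizes it numerically; the sample size, censoring number and true scale are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rayleigh_type2_nll(sigma, x_obs, n):
    """Negative log-likelihood for type 2 censoring: the r smallest of n
    observations are observed, the remaining n - r are censored at x_obs.max()."""
    r = len(x_obs)
    c = x_obs.max()
    loglik = np.sum(np.log(x_obs) - 2.0 * np.log(sigma) - x_obs**2 / (2.0 * sigma**2))
    loglik += (n - r) * (-c**2 / (2.0 * sigma**2))      # survival term for censored units
    return -loglik

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, r, sigma_true = 50, 30, 2.0
    full = sigma_true * np.sqrt(-2.0 * np.log(rng.uniform(size=n)))  # Rayleigh draws
    x_obs = np.sort(full)[:r]                                        # keep the r smallest
    fit = minimize_scalar(rayleigh_type2_nll, bounds=(0.1, 10.0),
                          args=(x_obs, n), method="bounded")
    print("MLE of the Rayleigh scale parameter:", fit.x)
```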

  14. The effects of sequential attention shifts within visual working memory.

    Science.gov (United States)

    Li, Qi; Saiki, Jun

    2014-01-01

    Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location, and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed equivalent benefits of the first and second spatial cues, suggesting that participants were able to reorient attention from one location to another within VWM, and the reallocation of attention did not attenuate memory at the first-cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance was still better at first-cued locations than at uncued and neutral locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance.

  15. The effects of sequential attention shifts within visual working memory

    Science.gov (United States)

    Li, Qi; Saiki, Jun

    2014-01-01

    Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location, and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed equivalent benefits of the first and second spatial cues, suggesting that participants were able to reorient attention from one location to another within VWM, and the reallocation of attention did not attenuate memory at the first-cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance was still better at first-cued locations than at uncued and neutral locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance. PMID:25237306

  16. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  17. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder observing a significant reduction of the logical error probability for χ ≥ 4.

  18. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
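
    The model summarized above is, for an acute exposure, a Poisson regression whose expected yield per cell is αd + βd². A hedged sketch of fitting that special case by maximum likelihood is given below; the doses, cell counts and dicentric counts are invented for illustration, and the constant terms of the Poisson log-likelihood are dropped.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical acute-exposure data: dose (Gy), cells scored, dicentrics observed.
dose  = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
cells = np.array([5000, 5000, 2000, 1000, 800, 500])
dics  = np.array([15, 40, 65, 130, 210, 260])

def neg_loglik(params):
    """Poisson negative log-likelihood for yield per cell = alpha*d + beta*d**2."""
    alpha, beta = params
    mu = cells * (alpha * dose + beta * dose**2)   # expected dicentrics per dose point
    if np.any(mu <= 0.0):
        return np.inf
    return float(np.sum(mu - dics * np.log(mu)))   # constant log(y!) terms omitted

fit = minimize(neg_loglik, x0=[0.01, 0.05], method="Nelder-Mead")
print("alpha, beta =", fit.x)
```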

  19. Ringing Artefact Reduction By An Efficient Likelihood Improvement Method

    Science.gov (United States)

    Fuderer, Miha

    1989-10-01

    In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms, in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.

  20. Scale invariant for one-sided multivariate likelihood ratio tests

    Directory of Open Access Journals (Sweden)

    Samruam Chongcharoen

    2010-07-01

    Full Text Available Suppose X1, X2, ..., Xn is a random sample from a Np(μ, V) distribution. Consider H0: μ1 = μ2 = ... = μp = 0 and H1: μi ≥ 0 for i = 1, 2, ..., p; let H1 − H0 denote the hypothesis that H1 holds but H0 does not, and let ~H0 denote the hypothesis that H0 does not hold. Because the likelihood ratio test (LRT) of H0 versus H1 − H0 is complicated, several ad hoc tests have been proposed. Tang, Gnecco and Geller (1989) proposed an approximate LRT, Follmann (1996) suggested rejecting H0 if the usual test of H0 versus ~H0 rejects H0 with significance level 2α and a weighted sum of the sample means is positive, and Chongcharoen, Singh and Wright (2002) modified Follmann's test to include information about the correlation structure in the sum of the sample means. Chongcharoen and Wright (2007, 2006) give versions of the Tang-Gnecco-Geller tests and Follmann-type tests, respectively, with invariance properties. With the LRT's desired scale-invariance property, we investigate its power by using Monte Carlo techniques and compare it with the tests we recommend in Chongcharoen and Wright (2007, 2006).

  1. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  2. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
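
    The idea summarized above, following the density along trajectories of the original ODE extended by one extra dimension, can be sketched for a one-dimensional system dx/dt = f(x): along a characteristic, the Liouville equation reduces to d(log ρ)/dt = −∂f/∂x. The drift, initial density and evaluation points below are illustrative choices, not an example from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

def f(x):
    """Drift of the ODE dx/dt = f(x)."""
    return -x + 0.5 * np.sin(x)

def dfdx(x):
    """Derivative of the drift, needed for the density equation."""
    return -1.0 + 0.5 * np.cos(x)

def augmented(t, z):
    # Characteristic system: the state x and log-density evolve together,
    # with d(log rho)/dt = -df/dx along the trajectory.
    x, _log_rho = z
    return [f(x), -dfdx(x)]

def density_at(x0_grid, t_end):
    """Propagate an initial N(0, 1) density to time t_end along characteristics."""
    out = []
    for x0 in x0_grid:
        sol = solve_ivp(augmented, (0.0, t_end), [x0, norm.logpdf(x0)], rtol=1e-8)
        out.append((sol.y[0, -1], np.exp(sol.y[1, -1])))   # (location, density value)
    return np.array(out)

if __name__ == "__main__":
    print(density_at(np.linspace(-3.0, 3.0, 7), t_end=1.0))
```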

  3. Affective mapping: An activation likelihood estimation (ALE) meta-analysis.

    Science.gov (United States)

    Kirby, Lauren A J; Robinson, Jennifer L

    2017-11-01

    Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure

  5. Physical activity may decrease the likelihood of children developing constipation.

    Science.gov (United States)

    Seidenfaden, Sandra; Ormarsson, Orri Thor; Lund, Sigrun H; Bjornsson, Einar S

    2018-01-01

    Childhood constipation is common. We evaluated children diagnosed with constipation, who were referred to an Icelandic paediatric emergency department, and determined the effect of lifestyle factors on its aetiology. The parents of children who were diagnosed with constipation and participated in a phase IIB clinical trial on laxative suppositories answered an online questionnaire about their children's lifestyle and constipation in March-April 2013. The parents of nonconstipated children that visited the paediatric department of Landspitali University Hospital or an Icelandic outpatient clinic answered the same questionnaire. We analysed responses regarding 190 children aged one year to 18 years: 60 with constipation and 130 without. We found that 40% of the constipated children had recurrent symptoms, 27% had to seek medical attention more than once and 33% received medication per rectum. The 47 of 130 control group subjects aged 10-18 were much more likely to exercise more than three times a week (72%) and for more than a hour (62%) than the 26 of 60 constipated children of the same age (42% and 35%, respectively). Constipation risk factors varied with age and many children diagnosed with constipation had recurrent symptoms. Physical activity may affect the likelihood of developing constipation in older children. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  6. Constructing diagnostic likelihood: clinical decisions using subjective versus statistical probability.

    Science.gov (United States)

    Kinnear, John; Jackson, Ruth

    2017-07-01

    Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect accuracy of judgements is that of representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians made judgements of statistical probability, respectively. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ² (1, n = 254) = 54.45, p < 0.001), indicating that the clinical information prompted judgements of subjective rather than statistical probability. One of the causes for this representativeness bias may be the way clinical medicine is taught where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  7. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  8. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  9. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite it being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation but the method still yields very naturally these two important stylized facts. We apply the method to different market indices and with a good performance in all cases. (paper)
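
    In the paper the volatility itself is latent, which is what makes the estimation delicate; purely as background, the sketch below shows exact maximum likelihood for a directly observed Ornstein-Uhlenbeck process using its Gaussian transition density, since the OU process is the building block of the OU and expOU volatility models. The parameter values and the Euler-simulated path are illustrative only, not the paper's market data or method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ou_neg_loglik(params, x, dt):
    """Exact transition likelihood of dX = -k (X - m) dt + s dW at spacing dt."""
    k, m, s = params
    if k <= 0.0 or s <= 0.0:
        return np.inf
    mean = m + (x[:-1] - m) * np.exp(-k * dt)
    var = s**2 * (1.0 - np.exp(-2.0 * k * dt)) / (2.0 * k)
    return -float(np.sum(norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var))))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    k_true, m_true, s_true, dt, n = 2.0, 0.0, 0.5, 0.01, 20_000
    x = np.empty(n)
    x[0] = m_true
    for i in range(1, n):                        # Euler scheme for the OU path
        x[i] = x[i - 1] - k_true * (x[i - 1] - m_true) * dt + s_true * np.sqrt(dt) * rng.normal()
    fit = minimize(ou_neg_loglik, x0=[1.0, 0.1, 0.3], args=(x, dt), method="Nelder-Mead")
    print("k, m, s =", fit.x)
```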

  10. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  11. Discrimination between sequential and simultaneous virtual channels with electrical hearing

    OpenAIRE

    Landsberger, David; Galvin, John J.

    2011-01-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...

  12. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
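
    C-quence itself is a GUI application, but its core operation, counting how often a user-specified sequential pattern of categorical codes occurs in a coded event stream, can be sketched in a few lines; the event codes below are invented for illustration.

```python
from typing import Sequence

def pattern_rate(events: Sequence[str], pattern: Sequence[str]) -> float:
    """Rate of occurrence of a contiguous categorical pattern in an event stream."""
    k = len(pattern)
    n_windows = len(events) - k + 1
    hits = sum(1 for i in range(n_windows) if tuple(events[i:i + k]) == tuple(pattern))
    return hits / max(n_windows, 1)

if __name__ == "__main__":
    stream = ["gaze", "smile", "speak", "gaze", "smile", "nod", "gaze", "smile", "speak"]
    print(pattern_rate(stream, ["gaze", "smile", "speak"]))   # 2 matches in 7 windows
```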

  13. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  14. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT mea...

  15. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
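
    As a rough illustration of the core idea (not the authors' adaptive, tempered SMC sampler), the Python sketch below estimates the likelihood with a bootstrap particle filter while artificially inflating the measurement-noise standard deviation, then evaluates it again under a fixed, decreasing artificial-noise schedule; the model, schedule, and parameter values are all assumed for the example.

```python
import numpy as np

def pf_loglik(y, f, g, sigma_v, sigma_e, n_particles=500, seed=0):
    """Bootstrap particle filter estimate of log p(y | parameters) for the model
    x_t = f(x_{t-1}) + v_t, v_t ~ N(0, sigma_v^2);  y_t = g(x_t) + e_t, e_t ~ N(0, sigma_e^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    loglik = 0.0
    for yt in y:
        x = f(x) + rng.normal(0.0, sigma_v, n_particles)               # propagate particles
        logw = (-0.5 * np.log(2 * np.pi) - np.log(sigma_e)
                - 0.5 * ((yt - g(x)) / sigma_e) ** 2)                  # Gaussian log-weights
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                                 # log of the mean weight
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]     # multinomial resampling
    return loglik

# Simulate a nearly noise-free (highly informative) observation sequence ...
f = lambda x: 0.9 * x
g = lambda x: x
rng = np.random.default_rng(1)
x_true = np.zeros(50)
for t in range(1, 50):
    x_true[t] = f(x_true[t - 1]) + rng.normal(0.0, 0.5)
y = g(x_true) + rng.normal(0.0, 1e-3, 50)

# ... and evaluate the likelihood under a schedule of decreasing artificial measurement noise.
for sigma_e in [1.0, 0.3, 0.1, 0.03, 0.01]:
    print(sigma_e, round(pf_loglik(y, f, g, sigma_v=0.5, sigma_e=sigma_e), 2))
```

    With only moderate particle numbers, the estimates become increasingly noisy as the artificial noise shrinks, which is precisely the degeneracy the tempered approach is designed to manage.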

  16. Evaluation of penalized likelihood estimation reconstruction on a digital time-of-flight PET/CT scanner for 18F-FDG whole-body examinations.

    Science.gov (United States)

    Lindström, Elin; Sundin, Anders; Trampal, Carlos; Lindsjö, Lars; Ilan, Ezgi; Danfors, Torsten; Antoni, Gunnar; Sörensen, Jens; Lubberink, Mark

    2018-02-15

    Resolution and quantitative accuracy of positron emission tomography (PET) are highly influenced by the reconstruction method. Penalized likelihood estimation algorithms allow for fully convergent iterative reconstruction, generating a higher image contrast while limiting noise compared to ordered subsets expectation maximization (OSEM). In this study, block-sequential regularized expectation maximization (BSREM) was compared to time-of-flight OSEM (TOF-OSEM). Various strengths of the noise penalization factor β were tested, along with scan durations and transaxial fields of view (FOVs), with the aim to evaluate the performance and clinical use of BSREM for 18F-FDG PET-computed tomography (CT), both in quantitative terms and in a qualitative visual evaluation. Methods: Eleven clinical whole-body 18F-FDG-PET/CT examinations acquired on a digital TOF PET/CT scanner were included. The data were reconstructed using BSREM with point spread function (PSF) recovery and β 133, 267, 400 and 533, and TOF-OSEM with PSF, for various acquisition times/bed position (bp) and FOVs. Noise, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and standardized uptake values (SUVs) were analysed. A blinded visual image quality evaluation, rating several aspects, performed by two nuclear medicine physicians complemented the analysis. Results: The lowest levels of noise were reached with the highest β, resulting in the highest SNR, which in turn resulted in the lowest SBR. Noise equivalence to TOF-OSEM was found with β 400, but this setting produced a significant increase of SUVmax (11%), SNR (22%) and SBR (12%) compared to TOF-OSEM. BSREM with β 533 at decreased acquisition duration (2 min/bp) was comparable to TOF-OSEM at full acquisition duration (3 min/bp). Reconstructed FOV had an impact on BSREM outcome measures: SNR increased while SBR decreased when shifting FOV from 70 to 50 cm. The visual image quality evaluation resulted in similar scores for reconstructions although β 400 obtained the
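
    For orientation, penalized likelihood (BSREM-type) reconstruction maximizes an objective of the following generic form, assuming a Poisson count model; the symbols and the exact form of the penalty R(x) are illustrative and not taken from the paper, which only specifies the penalization factor β.

```latex
% Generic penalized Poisson log-likelihood maximized by BSREM-type algorithms:
% x = image, y_i = counts in detector bin i, A = system matrix, r_i = scatter/randoms estimate,
% R(x) = roughness penalty, beta = penalization strength (varied as 133-533 in the record).
\hat{x} \;=\; \arg\max_{x \ge 0} \; \Phi(x), \qquad
\Phi(x) \;=\; \sum_i \Big( y_i \log\!\big([Ax]_i + r_i\big) - [Ax]_i - r_i \Big) \;-\; \beta\, R(x)
```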

  17. Mining of high utility-probability sequential patterns from uncertain databases.

    Directory of Open Access Journals (Sweden)

    Binbin Zhang

    Full Text Available High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds.
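
    As a loose, simplified illustration of the quantities involved (not the paper's algorithm, and without its pruning strategies or projection mechanism), the Python sketch below evaluates a candidate pattern against a toy uncertain sequence database: each sequence carries an assumed existence probability and item utilities, and the pattern's total utility and expected support are compared against user thresholds.

```python
# Toy uncertain sequence database: each sequence has an existence probability
# and an ordered list of (item, utility) events. All values are illustrative.
db = [
    (0.9, [("a", 5), ("b", 2), ("c", 4)]),
    (0.6, [("a", 3), ("c", 1)]),
    (0.8, [("b", 2), ("a", 4), ("c", 2)]),
]

def match_utility(seq, pattern):
    """Utility of the first in-order occurrence of `pattern` in `seq`, or None if absent."""
    util, j = 0, 0
    for item, u in seq:
        if j < len(pattern) and item == pattern[j]:
            util += u
            j += 1
    return util if j == len(pattern) else None

def evaluate(db, pattern):
    """Total utility and expected support (sum of probabilities of supporting sequences)."""
    total_util, support = 0, 0.0
    for prob, seq in db:
        u = match_utility(seq, pattern)
        if u is not None:
            total_util += u
            support += prob
    return total_util, support

utility, support = evaluate(db, ["a", "c"])
print(utility, support)  # a pattern is kept only if both figures clear the user thresholds
```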

  18. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
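
    Since the PHMM builds on standard hidden Markov machinery, a minimal sketch of the sequential computation involved is the scaled forward algorithm below; the three popularity states, observation codes, and parameter values are hypothetical and are not the paper's PHMM, which uses heterogeneous observations and a bipartite preclustering step for initialization.

```python
import numpy as np

# Hypothetical hidden popularity states and discrete observation codes
# (e.g. 0 = chart rise, 1 = stable, 2 = chart drop).
pi = np.array([0.5, 0.3, 0.2])                 # initial state distribution
A = np.array([[0.7, 0.2, 0.1],                 # state transition matrix
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
B = np.array([[0.6, 0.3, 0.1],                 # emission matrix: P(obs | state)
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

def forward_loglik(obs):
    """Scaled forward algorithm: log P(obs | pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

print(forward_loglik([0, 0, 1, 2, 2, 1]))
```

    Parameter learning (e.g. Baum-Welch) and the services listed in the record, such as fraud or spam detection, would sit on top of this basic likelihood computation.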

  19. Sequential infiltration synthesis for advanced lithography

    Energy Technology Data Exchange (ETDEWEB)

    Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing

    2017-10-10

    A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.

  20. Clinical evaluation of synthetic aperture sequential beamforming

    DEFF Research Database (Denmark)

    Hansen, Peter Møller; Hemmsen, Martin Christian; Lange, Theis

    2012-01-01

    In this study clinically relevant ultrasound images generated with synthetic aperture sequential beamforming (SASB) are compared to images generated with a conventional technique. The advantage of SASB is the ability to produce high resolution ultrasound images with a high frame rate and at the same time massively reduce the amount of generated data. SASB was implemented in a system consisting of a conventional ultrasound scanner connected to a PC via a research interface. This setup enables simultaneous recording with both SASB and the conventional technique. Eighteen volunteers were ultrasound scanned abdominally, and 84 sequence pairs were recorded. Each sequence pair consists of two simultaneous recordings of the same anatomical location with SASB and conventional B-mode imaging. The images were evaluated in terms of spatial resolution, contrast, unwanted artifacts, and penetration depth...

  1. Sequential cooling insert for turbine stator vane

    Science.gov (United States)

    Jones, Russel B

    2017-04-04

    A sequential flow cooling insert for a turbine stator vane of a small gas turbine engine, where the impingement cooling insert is formed as a single piece from a metal additive manufacturing process such as 3D metal printing, and where the insert includes a plurality of rows of radial extending impingement cooling air holes alternating with rows of radial extending return air holes on a pressure side wall, and where the insert includes a plurality of rows of chordwise extending second impingement cooling air holes on a suction side wall. The insert includes alternating rows of radial extending cooling air supply channels and return air channels that form a series of impingement cooling on the pressure side followed by the suction side of the insert.

  2. Gleason-Busch theorem for sequential measurements

    Science.gov (United States)

    Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

    2017-12-01

    Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.
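
    For reference, the familiar Kraus-operator structure that the record says follows from the axioms can be written as below; the notation is standard and is not quoted from the paper.

```latex
% State update and outcome probability for a measurement with Kraus operators {K_m},
% satisfying completeness \sum_m K_m^{\dagger} K_m = I:
\rho \;\longmapsto\; \rho_m \;=\;
  \frac{K_m \rho K_m^{\dagger}}{\operatorname{Tr}\!\big(K_m \rho K_m^{\dagger}\big)},
\qquad p(m) \;=\; \operatorname{Tr}\!\big(K_m \rho K_m^{\dagger}\big)
% Probability of outcome m followed by outcome n of a second measurement {L_n}:
p(m,n) \;=\; \operatorname{Tr}\!\big(L_n K_m \rho K_m^{\dagger} L_n^{\dagger}\big)
```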

  3. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
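
    The multilevel principle referred to here rests on the standard telescoping identity below (notation assumed, not quoted from the paper); the contribution of the record is to estimate each increment with coupled SMC samples.

```latex
% Telescoping identity over resolutions h_0 > h_1 > \dots > h_L (h_L closest to exact):
\mathbb{E}_{\pi_L}[\varphi]
  \;=\; \mathbb{E}_{\pi_0}[\varphi]
  \;+\; \sum_{l=1}^{L} \Big( \mathbb{E}_{\pi_l}[\varphi] - \mathbb{E}_{\pi_{l-1}}[\varphi] \Big)
% Each increment is estimated from samples coupled across neighbouring resolutions,
% so its variance, and hence the cost of reaching a fixed overall accuracy, is reduced.
```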

  4. Sequential Stereotype Priming: A Meta-Analysis.

    Science.gov (United States)

    Kidder, Ciara K; White, Katherine R; Hinojos, Michelle R; Sandoval, Mayra; Crites, Stephen L

    2017-08-01

    Psychological interest in stereotype measurement has spanned nearly a century, with researchers adopting implicit measures in the 1980s to complement explicit measures. One of the most frequently used implicit measures of stereotypes is the sequential priming paradigm. The current meta-analysis examines stereotype priming, focusing specifically on this paradigm. To contribute to ongoing discussions regarding methodological rigor in social psychology, one primary goal was to identify methodological moderators of the stereotype priming effect: whether priming is due to a relation between the prime and target stimuli, the prime and target response, participant task, stereotype dimension, stimulus onset asynchrony (SOA), and stimulus type. Data from 39 studies yielded 87 individual effect sizes from 5,497 participants. Analyses revealed that stereotype priming is significantly moderated by the presence of prime-response relations, participant task, stereotype dimension, target stimulus type, SOA, and prime repetition. These results carry both practical and theoretical implications for future research on stereotype priming.

  5. Dancing Twins: Stellar Hierarchies That Formed Sequentially?

    Science.gov (United States)

    Tokovinin, Andrei

    2018-04-01

    This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).

  6. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-05

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.

  7. Sequential Therapy in Metastatic Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Bradford R Hirsch

    2016-04-01

    Full Text Available The treatment of metastatic renal cell carcinoma (mRCC) has changed dramatically in the past decade. As the number of available agents, and related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.

  8. Microstructure history effect during sequential thermomechanical processing

    International Nuclear Information System (INIS)

    Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou

    2008-01-01

    The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing, which comprises hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of the processing materials

  9. Prosody and alignment: a sequential perspective

    Science.gov (United States)

    Szczepek Reed, Beatrice

    2010-12-01

    In their analysis of a corpus of classroom interactions in an inner-city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching and specific prosodic patterns are interpreted as signs of, and contributions to, successful interactional outcomes and positive emotions. Lack of prosodic matching and other specific prosodic patterns are interpreted as features of unsuccessful interactions and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997), which is not always employed for the accomplishment of 'positive', or aligning, actions.

  10. Monitoring sequential electron transfer with EPR

    International Nuclear Information System (INIS)

    Thurnauer, M.C.; Feezel, L.L.; Snyder, S.W.; Tang, J.; Norris, J.R.; Morris, A.L.; Rustandi, R.R.

    1989-01-01

    A completely general model which treats electron spin polarization (ESP) found in a system in which radical pairs with different magnetic interactions are formed sequentially has been described. This treatment has been applied specifically to the ESP found in the bacterial reaction center. Test cases show clearly how parameters such as structure, lifetime, and magnetic interactions within the successive radical pairs affect the ESP, and demonstrate that previous treatments of this problem have been incomplete. The photosynthetic bacterial reaction center protein is an ideal system for testing the general model of ESP. The radical pair which exhibits ESP, P870+Q- (P870+ is the oxidized, primary electron donor, a bacteriochlorophyll special pair, and Q- is the reduced, primary quinone acceptor), is formed via sequential electron transport through the intermediary radical pair P870+I- (I- is the reduced, intermediary electron acceptor, a bacteriopheophytin). In addition, it is possible to experimentally vary most of the important parameters, such as the lifetime of the intermediary radical pair and the magnetic interactions in each pair. It has been shown how selective isotopic substitution (1H or 2H) on P870, I and Q affects the ESP of the EPR spectrum of P870+Q-, observed at two different microwave frequencies, in Fe2+-depleted bacterial reaction centers of Rhodobacter sphaeroides R26. Thus, the relative magnitudes of the magnetic properties (nuclear hyperfine and g-factor differences) which influence ESP development were varied. The results support the general model of ESP in that they suggest that the P870+Q- radical pair interactions are the dominant source of ESP production in 2H bacterial reaction centers

  11. Estimating likelihood of future crashes for crash-prone drivers

    Directory of Open Access Journals (Sweden)

    Subasish Das

    2015-06-01

    Full Text Available At-fault crash-prone drivers are usually considered the high-risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by at-fault crash-prone drivers, who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for at-fault drivers. The logistic regression method is used, employing eight years of traffic crash data (2004–2011) in Louisiana. Crash predictors such as the driver's crash involvement, crash and road characteristics, human factors, collision type, and environmental factors are considered in the model. The at-fault and not-at-fault status of the crashes is used as the response variable. The developed model has identified a few important variables and is used to correctly classify at-fault crashes up to 62.40%, with a specificity of 77.25%. This model can identify as many as 62.40% of the crash incidences of at-fault drivers in the upcoming year. Traffic agencies can use the model for monitoring the performance of at-fault crash-prone drivers and making roadway improvements meant to reduce crash proneness. From the findings, it is recommended that crash-prone drivers be targeted for special safety programs regularly through education and regulations.
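
    A minimal sketch of this kind of setup, with a binary at-fault response and a handful of hypothetical predictors on synthetic data (the column names and coefficients are illustrative, not the study's variables), might look as follows in Python.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical crash records; columns are illustrative, not the study's coding scheme.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "prior_crashes": rng.poisson(1.0, n),
    "night": rng.integers(0, 2, n),
    "wet_road": rng.integers(0, 2, n),
    "driver_age": rng.integers(16, 80, n),
})
logit = -1.0 + 0.8 * df.prior_crashes + 0.4 * df.night + 0.3 * df.wet_road - 0.01 * df.driver_age
df["at_fault"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="at_fault"), df["at_fault"], test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Sensitivity (share of at-fault cases correctly flagged) and specificity,
# the two figures the record reports as 62.40% and 77.25% for the real data.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print("sensitivity:", round(tp / (tp + fn), 3), "specificity:", round(tn / (tn + fp), 3))
```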

  12. Smoking increases the likelihood of Helicobacter pylori treatment failure.

    Science.gov (United States)

    Itskoviz, David; Boltin, Doron; Leibovitzh, Haim; Tsadok Perets, Tsachi; Comaneshter, Doron; Cohen, Arnon; Niv, Yaron; Levi, Zohar

    2017-07-01

    Data regarding the impact of smoking on the success of Helicobacter pylori (H. pylori) eradication are conflicting, partially due to the fact that sociodemographic status is associated with both smoking and H. pylori treatment success. We aimed to assess the effect of smoking on H. pylori eradication rates after controlling for sociodemographic confounders. Included were subjects aged 15 years or older, with a first-time positive C13-urea breath test (C13-UBT) between 2007 and 2014, who underwent a second C13-UBT after receiving clarithromycin-based triple therapy. Data regarding age, gender, socioeconomic status (SES), smoking (current smokers or "never smoked"), and drug use were extracted from the Clalit health maintenance organization database. Out of 120,914 subjects with a positive first-time C13-UBT, 50,836 (42.0%) underwent a second C13-UBT test. After excluding former smokers, 48,130 subjects remained who were eligible for analysis. The mean age was 44.3 ± 18.2 years, 69.2% were females, 87.8% were Jewish and 12.2% Arabs, and 25.5% were current smokers. The overall eradication failure rate was 33.3%: 34.8% in current smokers and 32.8% in subjects who never smoked. In a multivariate analysis, eradication failure was positively associated with current smoking (Odds Ratio {OR} 1.15, 95% CI 1.10-1.20, p ...). Smoking was found to significantly increase the likelihood of unsuccessful first-line treatment for H. pylori infection. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  13. Obstetric History and Likelihood of Preterm Birth of Twins.

    Science.gov (United States)

    Easter, Sarah Rae; Little, Sarah E; Robinson, Julian N; Mendez-Figueroa, Hector; Chauhan, Suneet P

    2018-01-05

     The objective of this study was to investigate the relationship between preterm birth in a prior pregnancy and preterm birth in a twin pregnancy.  We performed a secondary analysis of a randomized controlled trial evaluating 17-α-hydroxyprogesterone caproate in twins. Women were classified as nulliparous, multiparous with a prior term birth, or multiparous with a prior preterm birth. We used logistic regression to examine the odds of spontaneous preterm birth of twins before 35 weeks according to past obstetric history.  Of the 653 women analyzed, 294 were nulliparas, 310 had a prior term birth, and 49 had a prior preterm birth. Prior preterm birth increased the likelihood of spontaneous delivery before 35 weeks (adjusted odds ratio [aOR]: 2.44, 95% confidence interval [CI]: 1.28-4.66), whereas prior term delivery decreased these odds (aOR: 0.55, 95% CI: 0.38-0.78) in the current twin pregnancy compared with the nulliparous reference group. This translated into a lower odds of composite neonatal morbidity (aOR: 0.38, 95% CI: 0.27-0.53) for women with a prior term delivery.  For women carrying twins, a history of preterm birth increases the odds of spontaneous preterm birth, whereas a prior term birth decreases odds of spontaneous preterm birth and neonatal morbidity for the current twin pregnancy. These results offer risk stratification and reassurance for clinicians. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  14. Supervised maximum-likelihood weighting of composite protein networks for complex prediction

    Directory of Open Access Journals (Sweden)

    Yong Chern Han

    2012-12-01

    Full Text Available Abstract Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to
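
    One simple instance of supervised posterior edge weighting is sketched below under assumed data: a logistic-regression classifier (itself fitted by maximum likelihood) is trained on hypothetical edge features against known co-complex labels, and its predicted probability is used as the composite-network edge weight. This illustrates the general idea only and is not the paper's exact weighting scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical edge features from heterogeneous sources (e.g. PPI evidence score,
# co-expression, shared annotation) and labels from known reference complexes.
rng = np.random.default_rng(0)
n_edges = 1000
features = rng.random((n_edges, 3))
labels = (features @ np.array([2.0, 1.5, 1.0]) + rng.normal(0, 0.5, n_edges)) > 2.2

# The fitted posterior P(co-complex | features) becomes the edge weight
# in the composite network handed to the downstream clustering algorithms.
clf = LogisticRegression().fit(features, labels)
edge_weights = clf.predict_proba(features)[:, 1]
print(edge_weights[:5].round(3))
```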

  15. Elevated levels of serum IL-5 are associated with an increased likelihood of major depressive disorder

    Directory of Open Access Journals (Sweden)

    Elomaa Antti-Pekka

    2012-01-01

    Full Text Available Abstract Background Inflammatory mediators in both the peripheral circulation and central nervous system (CNS are dysregulated in major depressive disorder (MDD. Nevertheless, relatively little is known about the role of the T-helper (Th-2 effector cytokines interleukin (IL-5 and IL-13 in MDD. Methods We examined the serum levels of these cytokines and a Th-1 comparison cytokine, interferon (IFN-γ, in 116 individuals (MDD, n = 58; controls, n = 58. Results In our basic multivariate model controlling for the effects of potential confounders on the associations between MDD and the examined cytokines, each 1-unit increase in the serum IL-5 level increased the likelihood of belonging to the MDD group by 76% (OR 1.76, 95% CI 1.03-2.99, p = 0.04; model covariates: age, gender, marital status, daily smoking and alcohol use. The likelihood further increased in models additionally controlling for the effects of the use of antidepressants and NSAIDS, and a diagnosis of asthma. No such associations were detected with regard to IL-13 (OR 1.08, 95% CI 0.96-1.22, p = 0.22 or IFN-γ (OR 1.02, 95% CI 0.99-1.05, p = 0.23. Conclusions Elevated levels of IL-5, which uses the neural plasticity-related RAS GTPase-extracellular signal-regulated kinase (Ras-ERK pathway to mediate its actions in the central nervous system (CNS, could be one of the factors underlying the depression-related changes in CNS plasticity.

  16. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano; Huser, Raphaël; Genton, Marc G.

    2015-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
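
    For context, the pairwise (second-order) composite log-likelihood, the simplest member of the family studied here, has the generic form below; the notation is standard rather than taken from the article.

```latex
% Pairwise (second-order) composite log-likelihood built from bivariate margins:
\ell_{\mathrm{pair}}(\theta; y) \;=\; \sum_{i<j} w_{ij}\, \log f\!\left(y_{i}, y_{j}; \theta\right)
% Higher-order versions replace the pairs (i,j) with larger subsets of the observation
% locations, trading computational cost against statistical efficiency.
```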

  17. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2015-09-29

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  18. Supplementary Material for: High-Order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano; Huser, Raphaël; Genton, Marc G.

    2016-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of points is a very challenging problem and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  19. Stability of maximum-likelihood-based clustering methods: exploring the backbone of classifications

    International Nuclear Information System (INIS)

    Mungan, Muhittin; Ramasco, José J

    2010-01-01

    Components of complex systems are often classified according to the way they interact with each other. In graph theory such groups are known as clusters or communities. Many different techniques have been proposed recently to detect them, some of which involve inference methods using either Bayesian or maximum likelihood approaches. In this paper, we study a statistical model designed for detecting clusters based on connection similarity. The basic assumption of the model is that the graph was generated by a certain grouping of the nodes, and an expectation maximization algorithm is employed to infer that grouping. We show that the method admits further development to yield a stability analysis of the groupings that quantifies the extent to which each node influences its neighbors' group membership. Our approach naturally allows for the identification of the key elements responsible for the grouping and their resilience to changes in the network. Given the generality of the assumptions underlying the statistical model, such nodes are likely to play special roles in the original system. We illustrate this point by analyzing several empirical networks for which further information about the properties of the nodes is available. The search for and identification of stabilizing nodes thus constitutes a novel technique for characterizing the relevance of nodes in complex networks

  20. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector