WorldWideScience

Sample records for Bayesian population decoding

  1. Comparison of classifiers for decoding sensory and cognitive information from prefrontal neuronal populations.

    Science.gov (United States)

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier, or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. The classifiers did not all behave equally with respect to population size and heterogeneity, the number of available training and testing trials, the subject's behavior, and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifier outperformed the other tested decoders.
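
    As a loose illustration of this kind of readout comparison (not the authors' pipeline), the sketch below trains a regularized linear classifier and a non-linear SVM on synthetic spike counts. The Poisson tuning model, the scikit-learn dependency, and all parameters are assumptions of the example.

        # Hedged sketch: compare a regularized linear decoder and an SVM on
        # synthetic "population" spike counts. Tuning model, noise level, and
        # class layout are invented for illustration; not the paper's code.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_cells, n_trials, n_positions = 60, 400, 4

        # Each cell gets a random tuning curve over the 4 cue positions.
        tuning = rng.gamma(shape=2.0, scale=2.0, size=(n_cells, n_positions))
        labels = rng.integers(0, n_positions, size=n_trials)
        counts = rng.poisson(tuning[:, labels]).T        # trials x cells

        for name, clf in [("regularized linear", LogisticRegression(C=0.1, max_iter=1000)),
                          ("non-linear SVM", SVC(kernel="rbf", C=1.0))]:
            acc = cross_val_score(clf, counts, labels, cv=5).mean()
            print(f"{name}: {acc:.2f} decoding accuracy")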

  2. Comparison of classifiers for decoding sensory and cognitive information from prefrontal neuronal populations.

    Directory of Open Access Journals (Sweden)

    Elaine Astrand

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier, or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. The classifiers did not all behave equally with respect to population size and heterogeneity, the number of available training and testing trials, the subject's behavior, and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifier outperformed the other tested decoders.

  3. Bayesian Population Projections for the United Nations.

    Science.gov (United States)

    Raftery, Adrian E; Alkema, Leontine; Gerland, Patrick

    2014-02-01

    The United Nations regularly publishes projections of the populations of all the world's countries broken down by age and sex. These projections are the de facto standard and are widely used by international organizations, governments and researchers. Like almost all other population projections, they are produced using the standard deterministic cohort-component projection method and do not yield statements of uncertainty. We describe a Bayesian method for producing probabilistic population projections for most countries that the United Nations could use. It has at its core Bayesian hierarchical models for the total fertility rate and life expectancy at birth. We illustrate the method and show how it can be extended to address concerns about the UN's current assumptions about the long-term distribution of fertility. The method is implemented in the R packages bayesTFR, bayesLife, bayesPop and bayesDem.
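
    The packages named above are R libraries; purely to illustrate the underlying idea of probabilistic rather than deterministic projection, here is a toy Python simulation of total fertility rate (TFR) trajectories. The AR(1) decline model and every constant are invented for this sketch and are not the published UN model.

        # Toy probabilistic projection: simulate many TFR trajectories instead
        # of a single deterministic path. All constants are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sim, horizon = 2000, 50
        tfr0, floor, phi, sigma = 2.4, 1.8, 0.95, 0.05

        tfr = np.empty((n_sim, horizon))
        tfr[:, 0] = tfr0
        for t in range(1, horizon):
            # mean-revert toward a long-term level, with stochastic shocks
            tfr[:, t] = floor + phi * (tfr[:, t - 1] - floor) + rng.normal(0, sigma, n_sim)

        lo, med, hi = np.percentile(tfr[:, -1], [5, 50, 95])
        print(f"TFR in year {horizon}: median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")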

  4. Book review: Bayesian analysis for population ecology

    Science.gov (United States)

    Link, William A.

    2011-01-01

    Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)

  5. Bayesian population finding with biomarkers in a randomized clinical trial.

    Science.gov (United States)

    Morita, Satoshi; Müller, Peter

    2017-03-03

    The identification of good predictive biomarkers allows investigators to optimize the target population for a new treatment. We propose a novel utility-based Bayesian population finding (BaPoFi) method to analyze data from a randomized clinical trial with the aim of finding a sensitive patient population. Our approach is based on casting the population finding process as a formal decision problem together with a flexible probability model, Bayesian additive regression trees (BART), to summarize observed data. The proposed method evaluates enhanced treatment effects in patient subpopulations based on counter-factual modeling of responses to new treatment and control for each patient. In extensive simulation studies, we examine the operating characteristics of the proposed method. We compare with a Bayesian regression-based method that implements shrinkage estimates of subgroup-specific treatment effects. For illustration, we apply the proposed method to data from a randomized clinical trial.

  6. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
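
    A minimal sketch of the two ingredients named in the abstract, under invented parameters: a generalized Lotka-Volterra system whose asymmetric inhibition shapes sequential firing-rate patterns, and Bayesian online accumulation of log evidence for which odor's pattern best explains noisy observations. Connectivity, noise level, and the two-odor setup are assumptions of the example.

        # Sketch: Lotka-Volterra rate patterns plus Bayesian online inference
        # over which odor generated the noisy rates. All parameters invented.
        import numpy as np

        rng = np.random.default_rng(2)
        n, dt, steps = 5, 0.01, 1500

        def lv_trajectory(order):
            # Asymmetric inhibition along 'order' biases activity down the chain.
            rho = np.ones((n, n)) * 1.5
            np.fill_diagonal(rho, 1.0)
            for i in range(n - 1):
                rho[order[i + 1], order[i]] = 0.5   # weaker inhibition of the successor
            x = np.full(n, 0.1)
            x[order[0]] = 1.0
            traj = []
            for _ in range(steps):
                x = x + dt * x * (1.0 - rho @ x)     # generalized Lotka-Volterra step
                traj.append(x.copy())
            return np.array(traj)

        odors = [lv_trajectory([0, 1, 2, 3, 4]), lv_trajectory([4, 3, 2, 1, 0])]
        obs = odors[0] + rng.normal(0, 0.3, odors[0].shape)   # noisy sample of odor 0

        logpost = np.zeros(2)                 # uniform prior over the two odors
        for t in range(steps):
            for k in range(2):                # Gaussian observation likelihood per step
                logpost[k] += -np.sum((obs[t] - odors[k][t]) ** 2) / (2 * 0.3 ** 2)
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        print("posterior over odors:", post)  # should strongly favor odor 0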

  7. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference

    Science.gov (United States)

    Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.

    2015-01-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888

  8. Bayesian Evidence for Two Populations of White Dwarfs: Preliminary Results

    Science.gov (United States)

    Valentim, R.; Romero, A. D.; Kepler, S. O.; Horvath, J. E.; Rangel, E. M.

    2017-03-01

    White dwarf (WD) populations are analyzed using Bayesian tools, which allows inferring possible evolutionary paths through the study of the mass values. We employed a sample of 2761 DA white dwarf stars from the SDSS, and obtained the central mass values and their corresponding standard deviations using a bimodal population as an ansatz. The results indicate a population with M1 = 0.60 M⊙ and σ1 = 0.06 M⊙, corresponding to a single stellar evolution, and a second population with M2 = 1.00 M⊙ and σ2 = 0.11 M⊙ possibly due to binary evolution resulting from mergers.
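
    To make the bimodal ansatz concrete, here is a minimal two-component Gaussian mixture fit. It uses EM, a maximum-likelihood stand-in for the paper's Bayesian machinery, and the synthetic "masses" are drawn from the reported central values purely to exercise the code.

        # Two-component Gaussian mixture fit by EM, mirroring the bimodal
        # ansatz for white-dwarf masses. Synthetic data, invented proportions.
        import numpy as np

        rng = np.random.default_rng(3)
        masses = np.concatenate([rng.normal(0.60, 0.06, 2300),
                                 rng.normal(1.00, 0.11, 461)])

        w, mu, sd = np.array([0.5, 0.5]), np.array([0.5, 1.1]), np.array([0.1, 0.1])
        for _ in range(200):                       # EM iterations
            # E-step: responsibility of each component for each star
            pdf = np.exp(-0.5 * ((masses[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            r = w * pdf
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update weights, means, and standard deviations
            nk = r.sum(axis=0)
            w = nk / len(masses)
            mu = (r * masses[:, None]).sum(axis=0) / nk
            sd = np.sqrt((r * (masses[:, None] - mu) ** 2).sum(axis=0) / nk)

        print("means:", mu.round(2), "sigmas:", sd.round(2), "weights:", w.round(2))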

  9. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    Energy Technology Data Exchange (ETDEWEB)

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

    This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It places the BOA within the framework of problem decomposition that is frequently used to model and understand the behavior of simple genetic algorithms. The growth of the population size and of the number of generations until convergence with respect to the size of a problem is analyzed theoretically. The theoretical results are supported by a number of experiments.
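
    For intuition about the population-based model-building loop that BOA analyzes, here is a deliberately simplified estimation-of-distribution sketch on OneMax. The real BOA learns a Bayesian network over the variables; this toy uses independent bit frequencies (UMDA-style), and the population sizes are arbitrary.

        # Toy estimation-of-distribution run on OneMax. The full BOA learns a
        # Bayesian network over the bits; this sketch uses independent bit
        # frequencies purely to show the sample/select/re-estimate loop.
        import numpy as np

        rng = np.random.default_rng(4)
        n_bits, pop_size, n_select = 40, 200, 100

        p = np.full(n_bits, 0.5)                       # model: per-bit probabilities
        for gen in range(30):
            pop = rng.random((pop_size, n_bits)) < p   # sample population from model
            fitness = pop.sum(axis=1)                  # OneMax: count of ones
            elite = pop[np.argsort(fitness)[-n_select:]]
            p = elite.mean(axis=0)                     # re-estimate the model
        print("best fitness after 30 generations:", fitness.max(), "/", n_bits)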

  10. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009; Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods.

  11. Reference priors of nuisance parameters in Bayesian sequential population analysis

    CERN Document Server

    Bousquet, Nicolas

    2010-01-01

    Prior distributions elicited for modelling the natural fluctuations or the uncertainty in parameters of Bayesian fishery population models can be chosen from a vast range of statistical laws. Since the statistical framework is defined by observational processes, observational parameters enter into the estimation and must be considered random, similar to parameters or states of interest such as population levels or real catches. The former are thus perceived as nuisance parameters whose values are intrinsically linked to the considered experiment, and which also require noninformative priors. In fisheries research, the Jeffreys methodology was presented by Millar (2002) as a practical way to elicit such priors. However, Jeffreys priors can have undesirable properties in multiparameter contexts. We therefore suggest using the elicitation method proposed by Berger and Bernardo to avoid the paradoxical results that Jeffreys priors can produce. These benchmark priors are derived here in the framework of sequential population analysis.

  12. Modelling of population dynamics of red king crab using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Bakanev, Sergey

    2012-10-01

    Modelling population dynamics within a Bayesian framework makes it possible to resolve such issues successfully. Integrating the data from various studies into a unified model, with parameters estimated by Bayesian methods, provides a much more detailed description of the processes occurring in the population.

  13. Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives.

    Directory of Open Access Journals (Sweden)

    Adam W Green

    Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BMPVA) using a dynamic occupancy model to quantify the characteristics of two wood frog (Lithobates sylvatica) metapopulations resulting in sustainable populations, and we demonstrate how the results could be used to define meaningful objectives that serve as the basis of adaptive management. We explored scenarios involving metapopulations with different numbers of patches (pools) using estimates of breeding occurrence and successful metamorphosis from two study areas to estimate the probability of quasi-extinction and calculate the proportion of vernal pools producing metamorphs. Our results suggest that ≥50 pools are required to ensure long-term persistence with approximately 16% of pools producing metamorphs in stable metapopulations. We demonstrate one way to incorporate the BMPVA results into a utility function that balances the trade-offs between ecological and financial objectives, which can be used in an adaptive management framework to make optimal, transparent decisions. Our approach provides a framework for using a standard method (i.e., PVA) and available information to inform a formal decision process to determine optimal and timely management policies.
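
    A minimal Monte Carlo sketch of the quasi-extinction question: simulate pool occupancy under an invented colonization/persistence rule and ask how the probability of metapopulation persistence scales with the number of pools. The rates, horizon, and occupancy rule are assumptions of the example, not the fitted occupancy model.

        # Monte Carlo sketch of persistence vs. number of pools. The simple
        # metapopulation rule and all rates are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(5)

        def persistence_prob(n_pools, years=50, n_sim=1000, col=0.15, surv=0.8):
            alive = 0
            for _ in range(n_sim):
                occ = rng.random(n_pools) < 0.5                  # initial occupancy
                for _ in range(years):
                    # occupied pools persist; empty pools are colonized in
                    # proportion to the fraction of occupied pools
                    p = surv * occ + col * occ.mean() * (~occ)
                    occ = rng.random(n_pools) < p
                    if not occ.any():
                        break
                alive += occ.any()
            return alive / n_sim

        for n in (10, 25, 50, 100):
            print(n, "pools: persistence probability", persistence_prob(n))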

  14. High Accuracy Decoding of Dynamical Motion from a Large Retinal Population.

    Directory of Open Access Journals (Sweden)

    Olivier Marre

    2015-07-01

    Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar's position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina's population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar's position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
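
    In the spirit of the linear readout described above (but not the authors' code), this sketch reconstructs a diffusive trajectory from synthetic population spike counts with a closed-form ridge decoder; the Gaussian tuning curves, Poisson noise, and penalty are invented.

        # Linear readout sketch: decode a diffusing bar position from synthetic
        # population spike counts with ridge regression. All parameters invented.
        import numpy as np

        rng = np.random.default_rng(6)
        n_cells, n_bins = 120, 2000
        pos = np.cumsum(rng.normal(0, 0.05, n_bins))          # diffusive trajectory

        centers = rng.uniform(pos.min(), pos.max(), n_cells)  # Gaussian-tuned cells
        rates = 5 * np.exp(-0.5 * ((pos[:, None] - centers) / 0.3) ** 2)
        counts = rng.poisson(rates)                           # bins x cells

        train = slice(0, 1500)
        X = np.column_stack([counts[train], np.ones(1500)])   # counts + intercept
        lam = 1.0                                             # ridge penalty
        w = np.linalg.solve(X.T @ X + lam * np.eye(n_cells + 1), X.T @ pos[train])

        Xte = np.column_stack([counts[1500:], np.ones(n_bins - 1500)])
        pred = Xte @ w
        err = np.sqrt(np.mean((pred - pos[1500:]) ** 2))
        print(f"test RMSE: {err:.3f} (position units)")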

  15. Bayesian Analysis of Multiple Populations in Galactic Globular Clusters

    Science.gov (United States)

    Wagner-Kaiser, Rachel A.; Sarajedini, Ata; von Hippel, Ted; Stenning, David; Piotto, Giampaolo; Milone, Antonino; van Dyk, David A.; Robinson, Elliot; Stein, Nathan

    2016-01-01

    We use GO 13297 Cycle 21 Hubble Space Telescope (HST) observations and archival GO 10775 Cycle 14 HST ACS Treasury observations of Galactic Globular Clusters to find and characterize multiple stellar populations. Determining how globular clusters are able to create and retain enriched material to produce several generations of stars is key to understanding how these objects formed and how they have affected the structural, kinematic, and chemical evolution of the Milky Way. We employ a sophisticated Bayesian technique with an adaptive MCMC algorithm to simultaneously fit the age, distance, absorption, and metallicity for each cluster. At the same time, we also fit unique helium values to two distinct populations of the cluster and determine the relative proportions of those populations. Our unique numerical approach allows objective and precise analysis of these complicated clusters, providing posterior distribution functions for each parameter of interest. We use these results to gain a better understanding of multiple populations in these clusters and their role in the history of the Milky Way. Support for this work was provided by NASA through grant numbers HST-GO-10775 and HST-GO-13297 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. This material is based upon work supported by the National Aeronautics and Space Administration under Grant NNX11AF34G issued through the Office of Space Science. This project was supported by the National Aeronautics & Space Administration through the University of Central Florida's NASA Florida Space Grant Consortium.

  16. Decoding target distance and saccade amplitude from population activity in the macaque lateral intraparietal area (LIP)

    Directory of Open Access Journals (Sweden)

    Frank Bremmer

    2016-08-01

    Primates perform saccadic eye movements in order to bring the image of an interesting target onto the fovea. Compared to stationary targets, saccades towards moving targets are computationally more demanding since the oculomotor system must use speed and direction information about the target as well as knowledge about its own processing latency to program an adequate, predictive saccade vector. In monkeys, different brain regions have been implicated in the control of voluntary saccades, among them the lateral intraparietal area (LIP). Here we asked whether activity in area LIP reflects the distance between fovea and saccade target, the amplitude of an upcoming saccade, or both. We recorded single unit activity in area LIP of two macaque monkeys. First, we determined for each neuron its preferred saccade direction. Then, monkeys performed visually guided saccades along the preferred direction towards either stationary or moving targets in pseudo-randomized order. LIP population activity allowed decoding of both the distance between fovea and saccade target and the size of an upcoming saccade. Previous work has shown comparable results for saccade direction (Graf and Andersen, 2014a, b). Hence, LIP population activity allows prediction of any two-dimensional saccade vector. Functional equivalents of macaque area LIP have been identified in humans. Accordingly, our results provide further support for the concept of activity from area LIP as a neural basis for the control of an oculomotor brain-machine interface.

  17. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  18. Bayesian parameter inference and model selection by population annealing in systems biology.

    Science.gov (United States)

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework known as approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with the parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection based on the Bayes factor.
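
    A bare-bones sketch of the annealed-ABC idea under a toy model (estimating a Poisson rate from its sample mean): a particle population is pushed through a decreasing ladder of tolerances with resampling and jitter. The ladder, kernel width, and model are assumptions of the example, not the paper's algorithm.

        # Minimal population-annealing-style ABC on a toy Poisson-rate problem.
        # Tolerance ladder, jitter scale, and prior are invented for the sketch.
        import numpy as np

        rng = np.random.default_rng(7)
        data_mean = rng.poisson(4.0, 100).mean()         # "observed" summary statistic

        n_particles = 2000
        theta = rng.uniform(0, 10, n_particles)          # prior sample
        for eps in (2.0, 1.0, 0.5, 0.2):                 # annealing ladder of tolerances
            sims = rng.poisson(theta[:, None], (n_particles, 100)).mean(axis=1)
            keep = np.abs(sims - data_mean) < eps        # ABC acceptance at this level
            theta = theta[keep]
            # resample back to full size and jitter (the "mutation" step)
            theta = rng.choice(theta, n_particles) + rng.normal(0, 0.1, n_particles)
            theta = np.clip(theta, 0, 10)

        print(f"approximate posterior: mean {theta.mean():.2f}, sd {theta.std():.2f}")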

  19. Distinguishing between population bottleneck and population subdivision by a Bayesian model choice procedure.

    Science.gov (United States)

    Peter, Benjamin M; Wegmann, Daniel; Excoffier, Laurent

    2010-11-01

    Although most natural populations are genetically subdivided, they are often analysed as if they were panmictic units. In particular, signals of past demographic size changes are often inferred from genetic data by assuming that the analysed sample is drawn from a population without any internal subdivision. However, it has been shown that a bottleneck signal can result from the presence of some recent immigrants in a population. It thus appears important to contrast these two alternative scenarios in a model choice procedure to prevent wrong conclusions from being drawn. We use here an Approximate Bayesian Computation (ABC) approach to infer whether observed patterns of genetic diversity in a given sample are more compatible with it being drawn from a panmictic population having gone through some size change, or from one or several demes belonging to a recent finite island model. Simulations show that we can correctly identify samples drawn from a subdivided population in up to 95% of the cases for a wide range of parameters. We apply our model choice procedure to the case of the chimpanzee (Pan troglodytes) and find conclusive evidence that Western and Eastern chimpanzee samples are drawn from a spatially subdivided population.
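
    The model-choice logic can be sketched in a few lines: draw a model from its prior, simulate summary statistics, accept when they land near the observed ones, and read posterior model probabilities off the acceptance counts. The stand-in summary distributions below are invented; a real analysis would use coalescent simulations.

        # ABC model-choice sketch: posterior model probabilities from
        # acceptance counts. The two "scenarios" are toy stand-ins, not
        # coalescent simulators.
        import numpy as np

        rng = np.random.default_rng(8)
        obs = np.array([0.31, 1.55])    # pretend observed summary statistics

        def simulate(model):
            # Invented summary distributions for the two scenarios.
            if model == 0:   # "bottleneck"
                return rng.normal([0.30, 1.5], [0.05, 0.3])
            else:            # "subdivision"
                return rng.normal([0.40, 2.0], [0.05, 0.3])

        accept = [0, 0]
        for _ in range(50000):
            m = rng.integers(0, 2)                       # uniform prior over models
            if np.linalg.norm(simulate(m) - obs) < 0.25:
                accept[m] += 1
        total = sum(accept)
        print("P(bottleneck | data) ~", accept[0] / total,
              "; Bayes factor ~", accept[0] / max(accept[1], 1))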

  20. Individual organisms as units of analysis: Bayesian-clustering alternatives in population genetics.

    Science.gov (United States)

    Mank, Judith E; Avise, John C

    2004-12-01

    Population genetic analyses traditionally focus on the frequencies of alleles or genotypes in 'populations' that are delimited a priori. However, there are potential drawbacks of amalgamating genetic data into such composite attributes of assemblages of specimens: genetic information on individual specimens is lost or submerged as an inherent part of the analysis. A potential also exists for circular reasoning when a population's initial identification and subsequent genetic characterization are coupled. In principle, these problems are circumvented by some newer methods of population identification and individual assignment based on statistical clustering of specimen genotypes. Here we evaluate a recent method in this genre--Bayesian clustering--using four genotypic data sets involving different types of molecular markers in non-model organisms from nature. As expected, measures of population genetic structure (F(ST) and Φ(ST)) tended to be significantly greater in Bayesian a posteriori data treatments than in analyses where populations were delimited a priori. In the four biological contexts examined, which involved both geographic population structures and hybrid zones, Bayesian clustering was able to recover differentiated populations, and Bayesian assignments were able to identify likely population sources of specific individuals.
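
    As a rough picture of genotype-based clustering, the sketch below assigns diploid individuals to K = 2 populations from per-population allele frequencies. It uses EM, a maximum-likelihood stand-in for the Bayesian (STRUCTURE-style) machinery, and the allele-frequency model and data are simulated.

        # EM mixture over allele frequencies: a simplified stand-in for
        # Bayesian genotype clustering. Data and model are simulated.
        import numpy as np

        rng = np.random.default_rng(9)
        n_ind, n_loci, K = 100, 50, 2
        true_freq = rng.beta(0.5, 0.5, (K, n_loci))        # per-population allele freqs
        origin = rng.integers(0, K, n_ind)
        geno = rng.binomial(2, true_freq[origin])          # diploid genotype counts

        freq = rng.uniform(0.2, 0.8, (K, n_loci))          # EM initialization
        w = np.full(K, 1.0 / K)
        for _ in range(100):
            # E-step: log-likelihood of each individual under each population
            ll = geno @ np.log(freq).T + (2 - geno) @ np.log(1 - freq).T
            r = np.exp(ll - ll.max(axis=1, keepdims=True)) * w
            r /= r.sum(axis=1, keepdims=True)
            # M-step: reweighted allele frequencies and mixing proportions
            w = r.mean(axis=0)
            freq = np.clip((r.T @ geno) / (2 * r.sum(axis=0)[:, None]), 1e-4, 1 - 1e-4)

        assign = r.argmax(axis=1)
        # max over the two labelings handles label switching
        print("co-assignment accuracy:",
              max((assign == origin).mean(), (assign != origin).mean()))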

  1. Improving Bayesian population dynamics inference: a coalescent-based model for multiple loci.

    Science.gov (United States)

    Gill, Mandev S; Lemey, Philippe; Faria, Nuno R; Rambaut, Andrew; Shapiro, Beth; Suchard, Marc A

    2013-03-01

    Effective population size is fundamental in population genetics and characterizes genetic diversity. To infer past population dynamics from molecular sequence data, coalescent-based models have been developed for Bayesian nonparametric estimation of effective population size over time. Among the most successful is a Gaussian Markov random field (GMRF) model for a single gene locus. Here, we present a generalization of the GMRF model that allows for the analysis of multilocus sequence data. Using simulated data, we demonstrate the improved performance of our method to recover true population trajectories and the time to the most recent common ancestor (TMRCA). We analyze a multilocus alignment of HIV-1 CRF02_AG gene sequences sampled from Cameroon. Our results are consistent with HIV prevalence data and uncover some aspects of the population history that go undetected in Bayesian parametric estimation. Finally, we recover an older and more reconcilable TMRCA for a classic ancient DNA data set.
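
    The core coalescent fact used by such estimators fits in one line: with k lineages, the waiting time u_k to the next coalescence has expectation 2Ne/(k(k-1)), so each interval yields the classic-skyline moment estimate Ne ≈ u_k·k(k-1)/2. A minimal sketch with invented intervals:

        # Classic-skyline moment estimates of Ne from coalescent intervals.
        # Interval values are invented; time is in units of generations.
        import numpy as np

        intervals = {10: 0.02, 9: 0.03, 8: 0.03, 7: 0.05, 6: 0.06,
                     5: 0.09, 4: 0.15, 3: 0.24, 2: 0.60}  # k -> interval u_k

        for k, u in intervals.items():
            ne_hat = u * k * (k - 1) / 2      # E[u_k] = 2*Ne / (k*(k-1))
            print(f"{k} lineages: Ne estimate {ne_hat:.2f}")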

  2. Population red blood cell folate concentrations for prevention of neural tube defects: bayesian model

    OpenAIRE

    Molloy, Anne

    2014-01-01

    OBJECTIVE: To determine an optimal population red blood cell (RBC) folate concentration for the prevention of neural tube birth defects. DESIGN: Bayesian model. SETTING: Data from two population based studies in China. PARTICIPANTS: 247,831 participants in a prospective community intervention project in China (1993-95) to prevent neural tube defects with 400 μg/day folic acid supplementation and 1194 participants in a population based randomized trial (20...

  3. The effect of close relatives on unsupervised Bayesian clustering algorithms in population genetic structure analysis.

    Science.gov (United States)

    Rodríguez-Ramilo, Silvia T; Wang, Jinliang

    2012-09-01

    The inference of population genetic structures is essential in many research areas in population genetics, conservation biology and evolutionary biology. Recently, unsupervised Bayesian clustering algorithms have been developed to detect a hidden population structure from genotypic data, assuming, among other things, that individuals sampled from the population are unrelated. Under this assumption, markers in a sample taken from a subpopulation can be considered to be in Hardy-Weinberg and linkage equilibrium. However, close relatives might be sampled from the same subpopulation, and might consequently cause Hardy-Weinberg and linkage disequilibrium and thus bias a population genetic structure analysis. In this study, we used simulated and real data to investigate the impact of close relatives in a sample on Bayesian population structure analysis. We also showed that, when close relatives were identified by a pedigree reconstruction approach and removed, the accuracy of a population genetic structure analysis could be greatly improved. The results indicate that unsupervised Bayesian clustering algorithms cannot be used blindly to detect genetic structure in a sample with closely related individuals. Rather, when closely related individuals are suspected to be frequent in a sample, these individuals should first be identified and removed before conducting a population structure analysis.

  4. Explaining Inference on a Population of Independent Agents Using Bayesian Networks

    Science.gov (United States)

    Sutovsky, Peter

    2013-01-01

    The main goal of this research is to design, implement, and evaluate a novel explanation method, the hierarchical explanation method (HEM), for explaining Bayesian network (BN) inference when the network is modeling a population of conditionally independent agents, each of which is modeled as a subnetwork. For example, consider disease-outbreak…

  5. A Bayesian Approach to Identifying New Risk Factors for Dementia: A Nationwide Population-Based Study.

    Science.gov (United States)

    Wen, Yen-Hsia; Wu, Shihn-Sheng; Lin, Chun-Hung Richard; Tsai, Jui-Hsiu; Yang, Pinchen; Chang, Yang-Pei; Tseng, Kuan-Hua

    2016-05-01

    Dementia is one of the most disabling and burdensome health conditions worldwide. In this study, we identified new potential risk factors for dementia from nationwide longitudinal population-based data by using Bayesian statistics. We first tested the consistency of the results obtained using Bayesian statistics with those obtained using classical frequentist probability for 4 recognized risk factors for dementia, namely severe head injury, depression, diabetes mellitus, and vascular diseases. Then, we used Bayesian statistics to verify 2 new potential risk factors for dementia, namely hearing loss and senile cataract, determined from Taiwan's National Health Insurance Research Database. We included a total of 6546 (6.0%) patients diagnosed with dementia. We observed older age, female sex, and lower income as independent risk factors for dementia. Moreover, we verified the 4 recognized risk factors for dementia in the older Taiwanese population; their odds ratios (ORs) ranged from 3.469 to 1.207. Furthermore, we observed that hearing loss (OR = 1.577) and senile cataract (OR = 1.549) were associated with an increased risk of dementia. We found that the results obtained using Bayesian statistics for assessing risk factors for dementia, such as head injury, depression, diabetes mellitus, and vascular diseases, were consistent with those obtained using classical frequentist probability. Moreover, hearing loss and senile cataract were found to be potential risk factors for dementia in the older Taiwanese population. Bayesian statistics could help clinicians explore other potential risk factors for dementia and develop appropriate treatment strategies for these patients.

  6. A Bayesian model for estimating population means using a link-tracing sampling design.

    Science.gov (United States)

    St Clair, Katherine; O'Connell, Daniel

    2012-03-01

    Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.

  7. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters III: Analysis of 30 Clusters

    CERN Document Server

    Wagner-Kaiser, R; Sarajedini, A; von Hippel, T; van Dyk, D A; Robinson, E; Stein, N; Jefferys, W H

    2016-01-01

    We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of 30 Galactic Globular Clusters to characterize two distinct stellar populations. A sophisticated Bayesian technique is employed to simultaneously sample the joint posterior distribution of age, distance, and extinction for each cluster, as well as unique helium values for two populations within each cluster and the relative proportion of those populations. We find that the helium differences between the two populations in the clusters fall in the range of ~0.04 to 0.11. Because adequate models varying in CNO are not presently available, we view these spreads as upper limits and present them with statistical rather than observational uncertainties. Evidence supports previous studies suggesting an increase in helium content concurrent with increasing mass of the cluster, and we also find that the proportion of the first population of stars increases with mass. Our results are examined in the context of proposed globular cluster formation scenarios. Additionally, we leverage our Bayesian technique to shed light on the inconsistencies between the theoretical models and the observed data.

  8. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods is a set of ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  9. Bayesian analysis of two stellar populations in Galactic globular clusters- III. Analysis of 30 clusters

    Science.gov (United States)

    Wagner-Kaiser, R.; Stenning, D. C.; Sarajedini, A.; von Hippel, T.; van Dyk, D. A.; Robinson, E.; Stein, N.; Jefferys, W. H.

    2016-12-01

    We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of 30 Galactic globular clusters to characterize two distinct stellar populations. A sophisticated Bayesian technique is employed to simultaneously sample the joint posterior distribution of age, distance, and extinction for each cluster, as well as unique helium values for two populations within each cluster and the relative proportion of those populations. We find that the helium differences between the two populations in the clusters fall in the range of ~0.04 to 0.11. Because adequate models varying in carbon, nitrogen, and oxygen are not presently available, we view these spreads as upper limits and present them with statistical rather than observational uncertainties. Evidence supports previous studies suggesting an increase in helium content concurrent with increasing mass of the cluster, and we also find that the proportion of the first population of stars increases with mass. Our results are examined in the context of proposed globular cluster formation scenarios. Additionally, we leverage our Bayesian technique to shed light on the inconsistencies between the theoretical models and the observed data.

  10. A hierarchical Bayesian approach for reconstructing the initial mass function of single stellar populations

    Science.gov (United States)

    Dries, M.; Trager, S. C.; Koopmans, L. V. E.

    2016-11-01

    Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach, we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well suited for this.

  11. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    Science.gov (United States)

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human influenza A viruses. In both cases, we recover more of the generally accepted features of the viral demographic histories than the GMRF approach does. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method.

  12. Bayesian Inference on the Effect of Density Dependence and Weather on a Guanaco Population from Chile

    Science.gov (United States)

    Zubillaga, María; Skewes, Oscar; Soto, Nicolás; Rabinovich, Jorge E.; Colchero, Fernando

    2014-01-01

    Understanding the mechanisms that drive population dynamics is fundamental for management of wild populations. The guanaco (Lama guanicoe) is one of two wild camelid species in South America. We evaluated the effects of density dependence and weather variables on population regulation based on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanacos/km², with an apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting that it might have reached its carrying capacity. We used a Bayesian state-space framework and model selection to determine the effect of density and environmental variables on guanaco population dynamics. Our results show that the population is under density dependent regulation and that it is currently fluctuating around an average carrying capacity of 45,000 guanacos. We also found a significant positive effect of previous winter temperature while sheep density has a strong negative effect on the guanaco population growth. We conclude that there are significant density dependent processes and that climate as well as competition with domestic species have important effects determining the population size of guanacos, with important implications for management and conservation. PMID:25514510
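
    As a stripped-down analogue of such an analysis (not the authors' model), the sketch below fits the r and K of a deterministic logistic curve to noisy counts with a random-walk Metropolis sampler. A full state-space treatment with process error and covariates such as sheep density is deliberately omitted, and all numbers are invented.

        # Random-walk Metropolis for logistic growth parameters (r, K), fit to
        # counts with multiplicative survey error. All values are invented.
        import numpy as np

        rng = np.random.default_rng(10)

        def logistic(n0, r, K, T):
            n = [n0]
            for _ in range(T - 1):
                n.append(n[-1] + r * n[-1] * (1 - n[-1] / K))
            return np.array(n)

        T = 36
        obs = logistic(3000, 0.25, 45000, T) * rng.lognormal(0, 0.1, T)

        def loglik(r, K):
            mu = logistic(3000, r, K, T)
            return -np.sum((np.log(obs) - np.log(mu)) ** 2) / (2 * 0.1 ** 2)

        theta = np.array([0.1, 30000.0])                 # (r, K) initial guess
        ll, samples = loglik(*theta), []
        for _ in range(20000):
            prop = theta + rng.normal(0, [0.01, 500])    # random-walk proposal
            if 0 < prop[0] < 1 and prop[1] > 3000:       # flat prior support
                ll_p = loglik(*prop)
                if np.log(rng.random()) < ll_p - ll:     # Metropolis accept/reject
                    theta, ll = prop, ll_p
            samples.append(theta.copy())

        post = np.array(samples[5000:])                  # discard burn-in
        print("posterior means (r, K):", post.mean(axis=0).round(3))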

  13. A population-based Bayesian approach to the minimal model of glucose and insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    2005-01-01

    …an ill-posed estimation problem, where the reconstruction has most often been done by non-linear least squares techniques separately for each entity. The minimal model was originally specified for a single individual and does not combine several individuals, which would have the advantage of estimating the metabolic portrait for a whole population. Traditionally it has been analysed in a deterministic set-up with only error terms on the measurements. In this work we adopt a Bayesian graphical model to describe the coupled minimal model that accounts for both measurement and process variability, and the model is extended…

  14. A Bayesian approach to inferring the genetic population structure of sugarcane accessions from INTA (Argentina)

    Directory of Open Access Journals (Sweden)

    Mariana Inés Pocovi

    2015-06-01

    Understanding the population structure and genetic diversity of sugarcane (Saccharum officinarum L.) accessions from the INTA germplasm bank (Argentina) will be of great importance for germplasm collection and breeding improvement, as it will identify diverse parental combinations to create segregating progenies with maximum genetic variability for further selection. A Bayesian approach, ordination methods (PCoA, Principal Coordinate Analysis) and clustering analysis (UPGMA, Unweighted Pair Group Method with Arithmetic Mean) were applied to this purpose. Sixty-three INTA sugarcane hybrids were genotyped for 107 Simple Sequence Repeat (SSR) and 136 Amplified Fragment Length Polymorphism (AFLP) loci. Given the low probability values found with AFLP for individual assignment (4.7%), microsatellites seemed to perform better (54%) for the STRUCTURE analysis, which revealed that the germplasm exists in five optimum groups partly corresponding to their origin. Although the clusters showed a high degree of admixture, F(ST) values confirmed the existence of differences among groups. Dissimilarity coefficients ranged from 0.079 to 0.651. PCoA separated the sugarcane into groups that did not agree with those identified by STRUCTURE. Clustering including all genotypes likewise showed no resemblance to the populations found by STRUCTURE, but clustering performed considering only individuals displaying a proportional membership > 0.6 in their primary population obtained with STRUCTURE showed close similarities. The Bayesian method undoubtedly brought more information on cultivar origins than the classical PCoA and hierarchical clustering methods.

  15. A hierarchical Bayesian approach for reconstructing the Initial Mass Function of Single Stellar Populations

    CERN Document Server

    Dries, M; Trager, S C; Koopmans, L V E

    2016-01-01

    Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov Chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age, and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities, and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well suited for this.

  16. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    Science.gov (United States)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  17. An empirical Bayesian analysis applied to the globular cluster pulsar population

    CERN Document Server

    Turk, P J

    2013-01-01

    We describe an empirical Bayesian approach to determine the most likely size of an astronomical population of sources of which only a small subset are observed above some limiting flux density threshold. The method is most naturally applied to astronomical source populations at a common distance (e.g., stellar populations in globular clusters), and can be applied even to populations where a survey detects no objects. The model allows for the inclusion of physical parameters of the stellar population and the detection process. As an example, we apply this method to the current sample of radio pulsars in Galactic globular clusters. Using the sample of flux density limits on pulsar surveys in 94 globular clusters published by Boyles et al., we examine a large number of population models with different dependencies. We find that models which include the globular cluster two-body encounter rate, Γ, are strongly favoured over models in which this is not a factor. The optimal model is one in which the mean num...
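
    The flavor of the calculation can be captured with a grid posterior: if a survey with per-object detection probability d finds n pulsars, the binomial likelihood over the true population size N, combined with a vague prior, yields size estimates and upper limits. The numbers and the scipy dependency below are assumptions of the sketch.

        # Grid posterior over a cluster's true pulsar population size N, given
        # n detections at detection probability d. All numbers are invented.
        import numpy as np
        from scipy.stats import binom

        d, n_obs = 0.2, 3                       # detection probability, detections
        N = np.arange(n_obs, 201)
        prior = 1.0 / N                         # vague, scale-free prior on N
        like = binom.pmf(n_obs, N, d)           # P(exactly n_obs detected | N)
        post = prior * like
        post /= post.sum()
        print("posterior mean N:", (N * post).sum().round(1))
        print("95% upper limit:", N[np.searchsorted(post.cumsum(), 0.95)])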

  18. Full Bayesian hierarchical light curve modeling of core-collapse supernova populations

    Science.gov (United States)

    Sanders, Nathan; Betancourt, Michael; Soderberg, Alicia Margarita

    2016-06-01

    While wide field surveys have yielded remarkable quantities of photometry of transient objects, including supernovae, light curves reconstructed from this data suffer from several characteristic problems. Because most transients are discovered near the detection limit, signal to noise is generally poor; because coverage is limited to the observing season, light curves are often incomplete; and because temporal sampling can be uneven across filters, these problems can be exacerbated at any one wavelength. While the prevailing approach of modeling individual light curves independently is successful at recovering inferences for the objects with the highest quality observations, it typically neglects a substantial portion of the data and can introduce systematic biases. Joint modeling of the light curves of transient populations enables direct inference on population-level characteristics as well as superior measurements for individual objects. We present a new hierarchical Bayesian model for supernova light curves, where information inferred from observations of every individual light curve in a sample is partially pooled across objects to constrain population-level hyperparameters. Using an efficient Hamiltonian Monte Carlo sampling technique, the model posterior can be explored to enable marginalization over weakly-identified hyperparameters through full Bayesian inference. We demonstrate our technique on the Pan-STARRS1 (PS1) Type IIP supernova light curve sample published by Sanders et al. (2015), consisting of nearly 20,000 individual photometric observations of more than 70 supernovae in five photometric filters. We discuss the Stan probabilistic programming language used to implement the model, computational challenges, and prospects for future work including generalization to multiple supernova types. We also discuss scientific results from the PS1 dataset including a new relation between the peak magnitude and decline rate of SNe IIP, a new perspective on the…
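
    The benefit of partial pooling can be shown with the conjugate normal-normal case: each noisy per-object estimate is shrunk toward the population mean by precision weighting, with the worst-measured objects shrunk the most. All values below are synthetic, and the hyperparameters are plugged in rather than inferred as in the full hierarchical model.

        # Partial pooling in one picture: normal-normal shrinkage of noisy
        # per-supernova estimates toward the population mean. Synthetic data.
        import numpy as np

        rng = np.random.default_rng(11)
        n_sn = 20
        truth = rng.normal(-17.5, 0.6, n_sn)             # latent peak magnitudes
        err = rng.uniform(0.1, 1.0, n_sn)                # per-object noise levels
        obs = truth + rng.normal(0, err)

        mu, tau = obs.mean(), 0.6                        # plug-in hyperparameters
        # Conjugate posterior mean: precision-weighted blend of object and population
        shrunk = (obs / err**2 + mu / tau**2) / (1 / err**2 + 1 / tau**2)

        print("rms error, unpooled:", np.sqrt(np.mean((obs - truth) ** 2)).round(3))
        print("rms error, pooled:  ", np.sqrt(np.mean((shrunk - truth) ** 2)).round(3))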

  19. Phenotypic and Genetic Associations between Reading Comprehension, Decoding Skills, and ADHD Dimensions: Evidence from Two Population-Based Studies

    Science.gov (United States)

    Plourde, Vickie; Boivin, Michel; Forget-Dubois, Nadine; Brendgen, Mara; Vitaro, Frank; Marino, Cecilia; Tremblay, Richard T.; Dionne, Ginette

    2015-01-01

    Background: The phenotypic and genetic associations between decoding skills and ADHD dimensions have been documented but less is known about the association with reading comprehension. The aim of the study is to document the phenotypic and genetic associations between reading comprehension and ADHD dimensions of inattention and…

  20. Bayesian Analysis and Characterization of Multiple Populations in Galactic Globular Clusters

    Science.gov (United States)

    Wagner-Kaiser, Rachel A.; Stenning, David; Sarajedini, Ata; von Hippel, Ted; van Dyk, David A.; Robinson, Elliot; Stein, Nathan; Jefferys, William H.; BASE-9, HST UVIS Globular Cluster Treasury Program

    2017-01-01

    Globular clusters have long been important tools to unlock the early history of galaxies. Thus, it is crucial we understand the formation and characteristics of the globular clusters (GCs) themselves. Historically, GCs were thought to be simple and largely homogeneous populations, formed via collapse of a single molecular cloud. However, this classical view has been overwhelmingly invalidated by recent work. It is now clear that the vast majority of globular clusters in our Galaxy host two or more chemically distinct populations of stars, with variations in helium and light elements at discrete abundance levels. No coherent story has arisen that is able to fully explain the formation of multiple populations in globular clusters nor the mechanisms that drive stochastic variations from cluster to cluster. We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of 30 Galactic Globular Clusters to characterize two distinct stellar populations. A sophisticated Bayesian technique is employed to simultaneously sample the joint posterior distribution of age, distance, and extinction for each cluster, as well as unique helium values for two populations within each cluster and the relative proportion of those populations. We find the helium differences among the two populations in the clusters fall in the range of 0.04 to 0.11. Because adequate models varying in CNO are not presently available, we view these spreads as upper limits and present them with statistical rather than observational uncertainties. Evidence supports previous studies suggesting an increase in helium content concurrent with increasing mass of the cluster. We also find that the proportion of the first population of stars increases with mass. Our results are examined in the context of proposed globular cluster formation scenarios.

  1. Reconstruction of a beech population bottleneck using archival demographic information and Bayesian analysis of genetic data.

    Science.gov (United States)

    Lander, Tonya A; Oddou-Muratorio, Sylvie; Prouillet-Leplat, Helene; Klein, Etienne K

    2011-12-01

    Range expansion and contraction have occurred in the history of most species and can seriously impact patterns of genetic diversity. Historical data about range change are rare and generally appropriate for studies at large scales, whereas the individual pollen and seed dispersal events that form the basis of gene flow and colonization generally occur at a local scale. In this study, we investigated range change in Fagus sylvatica on Mont Ventoux, France, using historical data from 1838 to the present and approximate Bayesian computation (ABC) analyses of genetic data. From the historical data, we identified a population minimum in 1845 and located remnant populations at least 200 years old. The ABC analysis selected a demographic scenario with three populations, corresponding to two remnant populations and one area of recent expansion. It also identified expansion from a smaller ancestral population but did not find that this expansion followed a population bottleneck, as suggested by the historical data. Despite strong support for the selected scenario for our data set, the ABC approach showed a low power to discriminate among scenarios on average and a low ability to accurately estimate effective population sizes and divergence dates, probably due to the temporal scale of the study. This study provides an unusual opportunity to test ABC analysis in a system with a well-documented demographic history and to identify discrepancies between the results of historical, classical population genetic, and ABC analyses. The results also provide valuable insights into the genetic processes at work at fine spatial and temporal scales during range change and colonization.

  2. TP Decoding

    CERN Document Server

    Lu, Yi; Montanari, Andrea

    2007-01-01

    'Tree pruning' (TP) is an algorithm for probabilistic inference on binary Markov random fields. It was recently derived by Dror Weitz and used to construct the first fully polynomial approximation scheme for counting independent sets up to the 'tree uniqueness threshold.' It can be regarded as a clever method for pruning the belief propagation computation tree in such a way as to exactly account for the effect of loops. In this paper we generalize the original algorithm to make it suitable for decoding linear codes, and discuss various schemes for pruning the computation tree. Further, we present the outcomes of numerical simulations on several linear codes, showing that tree pruning allows one to interpolate continuously between belief propagation and maximum a posteriori decoding. Finally, we discuss theoretical implications of the new method.

  3. cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    CERN Document Server

    Ishida, E E O; Penna-Lima, M; Cisewski, J; de Souza, R S; Trindade, A M M; Cameron, E

    2015-01-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present cosmoabc, a Python ABC sampler featuring a Population Monte Carlo (PMC) variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled cosmoabc with the numcosmo library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts, without computing the likelihood function. cosmoabc is published under the GPLv3 license on PyPI and GitHub, and documentation is available…
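
    The PMC flavour of ABC can be sketched in a few lines. The toy below is our illustration and does not reproduce cosmoabc's actual API: it iterates rejection sampling with a shrinking tolerance and perturbation of surviving particles. A full PMC sampler would also carry importance weights, which is what cosmoabc's adaptive importance sampling scheme provides.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=500):
    """Toy forward model standing in for a cosmological simulation."""
    return rng.normal(theta, 1.0, n)

def distance(sim, obs):
    """User-supplied distance between synthetic and observed catalogues."""
    return abs(sim.mean() - obs.mean())

obs = simulator(2.5)

def rejection(n_particles, eps):
    """Generation 0: plain rejection ABC from a uniform prior."""
    out = []
    while len(out) < n_particles:
        theta = rng.uniform(-10.0, 10.0)
        if distance(simulator(theta), obs) < eps:
            out.append(theta)
    return np.array(out)

particles = rejection(300, eps=1.0)
# PMC generations: shrink the tolerance, resample and perturb particles.
# (Importance weights are omitted here for brevity.)
for eps in (0.5, 0.2, 0.1):
    new = particles[rng.integers(particles.size, size=particles.size)]
    new = new + rng.normal(0.0, 2.0 * particles.std(), new.size)
    keep = np.array([th for th in new if distance(simulator(th), obs) < eps])
    particles = keep if keep.size else particles

print(particles.mean(), particles.std())  # approximate posterior summary
```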

  4. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results.

    Science.gov (United States)

    Strom, Daniel J; Joyce, Kevin E; MacLellan, Jay A; Watson, David J; Lynch, Timothy P; Antonio, Cheryl L; Birchall, Alan; Anderson, Kevin K; Zharov, Peter A

    2012-04-01

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results is negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased; that is, their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty, a likelihood PDF for each individual's measurand is produced. Then, using the same assumptions and all the data from the population of individuals, a prior PDF of measurands for the population is produced. The prior PDF is non-negative, and its average is equal to the average of the measurement results for the population. Using Bayes's theorem, posterior PDFs of each individual measurand are calculated. The uncertainty in these Bayesian posterior PDFs appears to be all Berkson with no remaining classical component. The method is applied to baseline bioassay data from the Hanford site. The data include (90)Sr urinalysis measurements of 128 people, (137)Cs in vivo measurements of 5337 people and (239)Pu urinalysis measurements of 3270 people. The method produces excellent results for the (90)Sr and (137)Cs measurements, since there are non-zero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the (239)Pu measurements in non-occupationally exposed people because the population average is essentially zero relative to the sensitivity of the measurement technique. The method is shown to give results similar to…
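
    The core of the method is ordinary Bayes' theorem applied pointwise: a non-negative population prior multiplied by each individual's measurement likelihood. A minimal grid-based sketch of that core (our illustration, with a made-up exponential prior standing in for the population PDF):

```python
import numpy as np

# Grid of possibly-true values; the population prior is non-negative
# even though individual net measurement results can be negative.
x = np.linspace(0.0, 10.0, 1001)
prior = np.exp(-x / 2.0)
prior /= np.trapz(prior, x)          # stand-in population prior

def posterior(measurement, sigma):
    """Bayes' theorem on a grid: non-negative population prior times a
    Gaussian measurement likelihood, renormalized."""
    likelihood = np.exp(-0.5 * ((measurement - x) / sigma) ** 2)
    post = prior * likelihood
    return post / np.trapz(post, x)

# A slightly negative net result still yields a valid posterior over
# non-negative true values.
p = posterior(-0.3, sigma=1.0)
print(np.trapz(x * p, x))            # posterior mean of the measurand
```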

  5. Estimating demographic parameters from large-scale population genomic data using Approximate Bayesian Computation

    Directory of Open Access Journals (Sweden)

    Li Sen

    2012-03-01

    Background: The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which has become available in the last few years. Results: We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as population sizes at present and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype- and LD-based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond some hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions: We conclude that the ABC approach can accommodate realistic genome-wide population genetic data, which may be difficult to analyze with full likelihood approaches, and that ABC can provide accurate and precise inference of demographic parameters from…

  6. A Bayesian integrated population dynamics model to analyze data for protected species

    Directory of Open Access Journals (Sweden)

    Hoyle, S. D.

    2004-06-01

    Managing wildlife-human interactions demands reliable information about the likely consequences of management actions. This requirement is a general one, whatever the taxonomic group. We describe a method for estimating population dynamics and decision analysis that is generally applicable, extremely flexible, uses data efficiently, and gives answers in a useful format. Our case study involves bycatch of a protected species, the Northeastern Offshore Spotted Dolphin (Stenella attenuata), in the tuna fishery of the eastern Pacific Ocean. Informed decision-making requires quantitative analyses taking all relevant information into account, assessing how bycatch affects these species and how regulations affect the fisheries, and describing the uncertainty in analyses. Bayesian analysis is an ideal framework for delivering information on uncertainty to the decision-making process. It also allows information from other populations or species or expert judgment to be included in the analysis, if appropriate. Integrated analysis attempts to include all relevant data for a population into one analysis by combining analyses, sharing parameters, and simultaneously estimating all parameters, using a combined objective function. It ensures that model assumptions and parameter estimates are consistent throughout the analysis, that uncertainty is propagated through the analysis, and that the correlations among parameters are preserved. Perhaps the most important aspect of integrated analysis is the way it both enables and forces consideration of the system as a whole, so that inconsistencies can be observed and resolved.

  7. Stochastic population forecasting based on combinations of expert evaluations within the Bayesian paradigm.

    Science.gov (United States)

    Billari, Francesco C; Graziani, Rebecca; Melilli, Eugenio

    2014-10-01

    This article suggests a procedure to derive stochastic population forecasts adopting an expert-based approach. As in previous work by Billari et al. (2012), experts are required to provide evaluations, in the form of conditional and unconditional scenarios, on summary indicators of the demographic components determining the population evolution: that is, fertility, mortality, and migration. Here, two main purposes are pursued. First, the demographic components are allowed to have some kind of dependence. Second, as a result of the existence of a body of shared information, possible correlations among experts are taken into account. In both cases, the dependence structure is not imposed by the researcher but rather is indirectly derived through the scenarios elicited from the experts. To address these issues, the method is based on a mixture model, within the so-called Supra-Bayesian approach, according to which expert evaluations are treated as data. The derived posterior distribution for the demographic indicators of interest is used as forecasting distribution, and a Markov chain Monte Carlo algorithm is designed to approximate this posterior. This article provides the questionnaire designed by the authors to collect expert opinions. Finally, an application to the forecast of the Italian population from 2010 to 2065 is proposed.

  8. Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population

    Science.gov (United States)

    Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel

    2002-01-01

    A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
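
    As a rough illustration of a dynamic linear model of this kind, the sketch below (ours; the counts and variances are invented) runs a Kalman filter for a local linear trend and reads the probability of a positive growth rate off the filtered posterior of the slope:

```python
import numpy as np
from scipy.stats import norm

def dlm_trend_filter(y, V=25.0, W=np.diag([1.0, 0.1])):
    """Kalman filter for a local linear trend DLM:
    level_t = level_{t-1} + slope_{t-1} + w1,  slope_t = slope_{t-1} + w2,
    y_t = level_t + v.  Returns the filtered mean/cov of (level, slope)."""
    G = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    F = np.array([[1.0, 0.0]])               # observation matrix
    m = np.array([y[0], 0.0])
    C = np.eye(2) * 100.0                    # vague initial covariance
    for yt in y[1:]:
        a = G @ m                            # predict
        R = G @ C @ G.T + W
        Qf = F @ R @ F.T + V
        K = (R @ F.T) / Qf                   # Kalman gain
        m = a + (K * (yt - F @ a)).ravel()   # update
        C = R - K @ F @ R
    return m, C

counts = np.array([34., 38., 31., 40., 44., 41., 47., 50., 46., 52.])
m, C = dlm_trend_filter(counts)
print("P(positive growth) ~", norm.cdf(m[1] / np.sqrt(C[1, 1])))
```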

  9. Bayesian coalescent inference reveals high evolutionary rates and diversification of Zika virus populations.

    Science.gov (United States)

    Fajardo, Alvaro; Soñora, Martín; Moreno, Pilar; Moratorio, Gonzalo; Cristina, Juan

    2016-10-01

    Zika virus (ZIKV) is a member of the family Flaviviridae. In 2015, ZIKV triggered an epidemic in Brazil and spread across Latin America. By May of 2016, the World Health Organization warned of the spread of ZIKV beyond this region. Detailed studies on the mode of evolution of ZIKV strains are extremely important for our understanding of the emergence and spread of ZIKV populations. In order to gain insight into these matters, a Bayesian coalescent Markov chain Monte Carlo analysis of complete genome sequences of recently isolated ZIKV strains was performed. The results of these studies revealed a mean rate of evolution of 1.20 × 10^(-3) nucleotide substitutions per site per year (s/s/y) for the ZIKV strains enrolled in this study. Several variants isolated in China group together with all strains isolated in Latin America. Another genetic group, composed exclusively of Chinese strains, was also observed, suggesting the co-circulation of different genetic lineages in China. These findings indicate a high level of diversification of ZIKV populations. Strains isolated from microcephaly cases do not share amino acid substitutions, suggesting that factors other than viral genetic differences may play a role in the proposed pathogenesis caused by ZIKV infection. J. Med. Virol. 88:1672-1676, 2016. © 2016 Wiley Periodicals, Inc.

  10. Bayesian Reliability-Growth Analysis for Statistical of Diverse Population Based on Non-homogeneous Poisson Process

    Institute of Scientific and Technical Information of China (English)

    MING Zhimao; TAO Junyong; ZHANG Yunan; YI Xiaoshan; CHEN Xun

    2009-01-01

    New armament systems are subjected to multi-stage reliability-growth testing of diverse populations in order to improve reliability before mass production begins. Because testing during the development of a complex system is expensive and sample sizes are small, we study specific methods for processing the statistical information of Bayesian reliability growth across diverse populations. First, according to the characteristics of reliability growth during product development, the Bayesian method is used to integrate the test information from multiple stages and the order relations of the distribution parameters. A Gamma-Beta prior distribution is then proposed, based on the non-homogeneous Poisson process (NHPP) corresponding to the reliability-growth process. The posterior distribution of the reliability parameters is obtained for the different stages of the product, and the reliability parameters are evaluated from this posterior distribution. Finally, the Bayesian approach proposed in this paper for multi-stage reliability-growth testing is applied to a small-sample-size test process in the astronautics field. The results of a numerical example show that the presented model can synthesize the diverse information and pave the way for applying the Bayesian model to multi-stage reliability-growth test evaluation with small sample sizes. The method is useful for evaluating multi-stage system reliability and for making reliability-growth plans rationally.

  11. MASSIVE: A Bayesian analysis of giant planet populations around low-mass stars

    Science.gov (United States)

    Lannier, J.; Delorme, P.; Lagrange, A. M.; Borgniet, S.; Rameau, J.; Schlieder, J. E.; Gagné, J.; Bonavita, M. A.; Malo, L.; Chauvin, G.; Bonnefoy, M.; Girard, J. H.

    2016-12-01

    Context. Direct imaging has led to the discovery of several giant planet and brown dwarf companions. These imaged companions populate a mass, separation and age domain (mass > 1 MJup, orbits > 5 AU, age …) … planetary formation models. Methods: We observed 58 young and nearby M-type dwarfs in L'-band with the VLT/NaCo instrument and used angular differential imaging algorithms to optimize the sensitivity to planetary-mass companions and to derive the best detection limits. We estimate the probability of detecting a planet as a function of its mass and physical separation around each target. We conduct a Bayesian analysis to determine the frequency of substellar companions orbiting low-mass stars, using a homogeneous sub-sample of 54 stars. Results: We derive a frequency of … for companions with masses in the range of 2-80 MJup, and …% for planetary-mass companions (2-14 MJup), at physical separations of 8 to 400 AU in both cases. Comparing our results with a previous survey targeting more massive stars, we find evidence that substellar companions more massive than 1 MJup with a low mass ratio Q with respect to their host star (Q …) …, while the frequency of companions more massive than 2 MJup might be independent of the mass of the host star.

  12. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization.

    Directory of Open Access Journals (Sweden)

    Devaraj Jayachandran

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on D-optimality criteria was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach.

  13. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization.

    Science.gov (United States)

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces a significant variation in drug response among the patient population. Despite 6-MP's widespread use and observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of scarcity of data in clinical settings, a global sensitivity analysis based model reduction approach is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on D-optimality criteria was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression-enzyme phenotype-drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug instead of the traditional standard-dose-for-all approach.

  14. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters II: NGC 5024, NGC 5272, and NGC 6352

    CERN Document Server

    Wagner-Kaiser, R; Robinson, E; von Hippel, T; Sarajedini, A; van Dyk, D A; Stein, N; Jefferys, W H

    2016-01-01

    We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of Galactic globular clusters to find and characterize two stellar populations in NGC 5024 (M53), NGC 5272 (M3), and NGC 6352. For these three clusters, both single- and double-population analyses are used to determine best-fit isochrones. We employ a sophisticated Bayesian analysis technique to simultaneously fit the cluster parameters (age, distance, absorption, and metallicity) that characterize each cluster. For the two-population analysis, unique population-level helium values are also fit to each distinct population of the cluster and the relative proportions of the populations are determined. We find differences in helium ranging from ~0.05 to 0.11 for these three clusters. Model grids with solar α-element abundances ([α/Fe] = 0.0) and enhanced α-elements ([α/Fe] = 0.4) are adopted.

  15. Konstruksi Bayesian Network Dengan Algoritma Bayesian Association Rule Mining Network

    OpenAIRE

    Octavian

    2015-01-01

    In recent years, the Bayesian Network has become a popular concept used in many areas of life, such as making decisions and determining the probability that an event will occur. Unfortunately, constructing the structure of a Bayesian Network is itself not a simple matter. This study therefore introduces the Bayesian Association Rule Mining Network algorithm to make it easier to construct a Bayesian Network from data …

  16. The confounding effect of population structure on bayesian skyline plot inferences of demographic history

    DEFF Research Database (Denmark)

    Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans

    2013-01-01

    … when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences …

  17. Iterative List Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree...
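
    For readers unfamiliar with bit flipping, a minimal (non-list) variant for a toy parity-check matrix looks like the following; the list-decoding modification analyzed in the paper would record every candidate word visited rather than returning only the final one. The (7,4) Hamming matrix here is just a convenient example, not the codes studied in the paper.

```python
import numpy as np

# Toy parity-check matrix: the (7,4) Hamming code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, max_iters=10):
    """Naive bit flipping: repeatedly flip the bit involved in the most
    unsatisfied parity checks until the syndrome vanishes."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                  # all parity checks satisfied
        counts = H.T @ syndrome       # unsatisfied checks per bit
        r[np.argmax(counts)] ^= 1     # flip the worst offender
    return r

codeword = np.zeros(7, dtype=int)     # all-zero word is a valid codeword
received = codeword.copy()
received[3] ^= 1                      # inject a single bit error
print(bit_flip_decode(received))      # recovers the all-zero codeword
```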

  18. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Background: It has been suggested that plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed long-term data sets on their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results: Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth rate is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant of population growth. Conclusion: This analysis demonstrates how an understanding of population demography can be gained from structured population data, even in a case where following marked individuals over their whole lifespan is not practical.
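
    Once such a Lefkovitch matrix has been parameterised from the posterior, the asymptotic growth rate is its dominant eigenvalue, and elasticities follow from the left and right eigenvectors. A self-contained numerical sketch (with made-up vital rates, not the paper's estimates):

```python
import numpy as np

# Illustrative stage-structured Lefkovitch matrix for juveniles,
# subadults and adults (invented rates).
A = np.array([[0.0, 0.0, 2.0],     # adult fecundity
              [0.3, 0.4, 0.0],     # juvenile survival / transition
              [0.0, 0.35, 0.8]])   # subadult transition, adult survival

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                          # asymptotic growth rate
w = np.abs(vecs[:, i]); w /= w.sum()        # stable stage distribution

vals_t, vecs_t = np.linalg.eig(A.T)
v = np.abs(vecs_t[:, np.argmax(vals_t.real)])  # reproductive values

# Elasticities: proportional sensitivity of lambda to each vital rate.
S = np.outer(v, w) / (v @ w)
E = S * A / lam
print(round(lam, 3))
print(E.round(3))   # the adult-survival entry dominates in such matrices
```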

  19. Genetic structure and admixture between Bayash Roma from northwestern Croatia and general Croatian population: evidence from Bayesian clustering analysis.

    Science.gov (United States)

    Novokmet, Natalija; Galov, Ana; Marjanović, Damir; Škaro, Vedrana; Projić, Petar; Lauc, Gordan; Primorac, Dragan; Rudan, Pavao

    2015-01-01

    The European Roma represent a transnational mosaic of minority population groups with different migration histories and contrasting experiences in their interactions with majority populations across the European continent. Although historical genetic contributions of European lineages to the Roma pool have been investigated before, the extent of contemporary genetic admixture between Bayash Roma and the non-Romani majority population remains elusive. The aim of this study was to assess the genetic structure of the Bayash Roma population from northwestern Croatia and the general Croatian population, and to investigate the extent of admixture between them. A set of genetic data from two original studies (100 Bayash Roma from northwestern Croatia and 195 individuals from the general Croatian population) was analyzed by Bayesian clustering implemented in the STRUCTURE software. By re-analyzing published data we intended to focus for the first time on genetic differentiation and structure, and in doing so we clearly pointed to the importance of considering social phenomena in understanding genetic structuring. Our results demonstrated that two population clusters best explain the genetic structure, which is consistent with the social exclusion of Roma and the demographic history of the Bayash Roma, who settled in NW Croatia only about 150 years ago and mostly applied rules of endogamy. The presence of admixture was revealed, while the percentage of non-Croatian individuals in the general Croatian population was approximately twofold higher than the percentage of non-Romani individuals in the Roma population, corroborating the presence of ethnomimicry among the Roma.

  20. Grasp movement decoding from premotor and parietal cortex.

    Science.gov (United States)

    Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg

    2011-10-05

    Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.
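
    A Bayesian classifier of this general kind can be surprisingly compact. The sketch below is our illustration, not the authors' decoder: it assumes independent Poisson firing per unit and condition and classifies trials by maximum posterior probability. The simulated spike counts and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

class PoissonNaiveBayes:
    """Classify grasp condition from spike counts, assuming independent
    Poisson firing per unit within each condition (a common simplification)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        # mean count per unit and class; a small offset avoids log(0)
        self.rates = np.array([X[y == c].mean(axis=0) + 1e-3
                               for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Poisson log-likelihood, dropping the class-independent log(x!) term
        ll = X @ np.log(self.rates).T - self.rates.sum(axis=1)
        return self.classes[np.argmax(ll + self.log_prior, axis=1)]

# Toy session: 2 grip types x 40 trials each, 30 multiunit channels.
rates = rng.uniform(2.0, 20.0, (2, 30))
y = np.repeat([0, 1], 40)
X = rng.poisson(rates[y])
print((PoissonNaiveBayes().fit(X, y).predict(X) == y).mean())
```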

  1. Bayesian estimation of hepatitis E virus seroprevalence for populations with different exposure levels to swine in The Netherlands.

    Science.gov (United States)

    Bouwknegt, M; Engel, B; Herremans, M M P T; Widdowson, M A; Worm, H C; Koopmans, M P G; Frankena, K; de Roda Husman, A M; De Jong, M C M; Van Der Poel, W H M

    2008-04-01

    Hepatitis E virus (HEV) is ubiquitous in pigs worldwide and may be zoonotic. Previous HEV seroprevalence estimates for groups of people working with swine were higher than for control groups. However, discordance among the results of anti-HEV assays means that true seroprevalence estimates, i.e. seroprevalence due to previous exposure to HEV, depend on the choice of seroassay. We tested blood samples from three subpopulations (49 swine veterinarians, 153 non-swine veterinarians and 644 randomly selected individuals from the general population) with one IgM and two IgG ELISAs, and subsets with IgG and/or IgM Western blots. A Bayesian stochastic model was used to combine the results of all assays. The model accounted for the imperfection of each assay by estimating sensitivity and specificity, and accounted for dependence between serological assays. As expected, discordance among assay results occurred. Applying the model yielded seroprevalence estimates of approximately 11% for swine veterinarians, approximately 6% for non-swine veterinarians and approximately 2% for the general population. By combining the results of five serological assays in a Bayesian stochastic model we confirmed that exposure to swine or their environment was associated with elevated HEV seroprevalence.
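
    For intuition about why assay imperfection matters, the classic Rogan-Gladen correction relates apparent and true prevalence through sensitivity and specificity; the paper's Bayesian model does this jointly across five assays while estimating the assay characteristics themselves. A point-estimate sketch with invented numbers:

```python
def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen correction of an apparent seroprevalence for an
    imperfect assay (a point-estimate analogue of the paper's model)."""
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# e.g. 8% of samples test positive on an assay with Se = 0.90, Sp = 0.95
print(true_prevalence(0.08, 0.90, 0.95))  # ~0.035: apparent rate overstates truth
```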

  2. An Overview of Bayesian Methods for Neural Spike Train Analysis

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2013-01-01

    Neural spike train analysis is an important task in computational neuroscience, which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single-neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.

  3. Comparison of breeding value prediction for two traits in a Nellore-Angus crossbred population using different Bayesian modeling methodologies

    Directory of Open Access Journals (Sweden)

    Lauren L. Hulsman Hanna

    2014-12-01

    The objectives of this study were to (1) compare four models for breeding value prediction using genomic or pedigree information and (2) evaluate the impact of fixed effects that account for family structure. Comparisons were made in a Nellore-Angus population comprising F2, F3 and half-siblings to embryo-transfer F2 calves, with records for overall temperament at weaning (TEMP; n = 769) and Warner-Bratzler shear force (WBSF; n = 387). After quality control, 34,913 whole-genome SNP markers remained. The Bayesian methods employed were BayesB (π = 0.995 or 0.997 for WBSF or TEMP, respectively) and BayesC (π = 0 and estimated π, where π is the proportion of markers not included). Direct genomic values (DGV) from single-trait Bayesian analyses were compared to conventional pedigree-based animal model breeding values. Numerically, BayesC procedures (using estimated π) had the highest accuracy of all models for WBSF and TEMP (ρgg = 0.843 and 0.923, respectively), but BayesB had the least bias (regression of performance on prediction closest to 1: βy,x = 2.886 and 1.755, respectively). Accounting for family structure decreased accuracy and increased bias in the prediction of DGV, indicating a detrimental impact when used in these prediction methods that simultaneously fit many markers.

  4. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with a non-uniform profile. With this scheme, decoding with good performance is possible …, minimizing the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  5. Bayesian approach to the assessment of the population-specific risk of inhibitors in hemophilia A patients: a case study

    Directory of Open Access Journals (Sweden)

    Cheng J

    2016-10-01

    Background: Developing inhibitors is a rare event during the treatment of hemophilia A. The multiple facets of and uncertainty surrounding the development of inhibitors further complicate the process of estimating the inhibitor rate from limited data. Bayesian statistical modeling provides a useful tool for generating, enhancing, and exploring the evidence by incorporating all the available information. Methods: We built our Bayesian analysis around three study cases to estimate the inhibitor rates of patients with hemophilia A in three different scenarios: Case 1, a single cohort of previously treated patients (PTPs) or previously untreated patients; Case 2, a meta-analysis of PTP cohorts; and Case 3, a previously unexplored patient population - patients with baseline low-titer inhibitor or a history of inhibitor development. The data used in this study were extracted from three published post-authorization surveillance studies of ADVATE (antihemophilic factor [recombinant], a product of Baxter) for treating hemophilia A. Noninformative and informative priors were applied to Bayesian standard (Case 1) or random-effects (Case 2 and Case 3) logistic models. Bayesian probabilities of satisfying three meaningful thresholds of the risk of developing a clinical…

  6. Population size and stopover duration estimation using mark–resight data and Bayesian analysis of a superpopulation model

    Science.gov (United States)

    Lyons, James E.; Kendall, William; Royle, J. Andrew; Converse, Sarah J.; Andres, Brad A.; Buchanan, Joseph B.

    2016-01-01

    We present a novel formulation of a mark–recapture–resight model that allows estimation of population size, stopover duration, and arrival and departure schedules at migration areas. Estimation is based on encounter histories of uniquely marked individuals and relative counts of marked and unmarked animals. We use a Bayesian analysis of a state–space formulation of the Jolly–Seber mark–recapture model, integrated with a binomial model for counts of unmarked animals, to derive estimates of population size and arrival and departure probabilities. We also provide a novel estimator for stopover duration that is derived from the latent state variable representing the interim between arrival and departure in the state–space model. We conduct a simulation study of field sampling protocols to understand the impact of superpopulation size, proportion marked, and number of animals sampled on bias and precision of estimates. Simulation results indicate that relative bias of estimates of the proportion of the population with marks was low for all sampling scenarios and never exceeded 2%. Our approach does not require enumeration of all unmarked animals detected or direct knowledge of the number of marked animals in the population at the time of the study. This provides flexibility and potential application in a variety of sampling situations (e.g., migratory birds, breeding seabirds, sea turtles, fish, pinnipeds, etc.). Application of the methods is demonstrated with data from a study of migratory sandpipers.
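
    The binomial ingredient of the model is easy to isolate. The sketch below (ours, with invented counts) draws a Beta posterior for the proportion of animals that are marked and pushes the draws through the classic Lincoln-Petersen-style ratio estimator; the paper's integrated state-space model replaces this shortcut with a joint Jolly-Seber likelihood for arrival, departure and detection.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scan samples at a stopover site: marked vs. total birds seen, with M
# marked birds known to be in the superpopulation (made-up numbers).
marked_seen, total_seen, M = 120, 800, 300

# Beta posterior for the proportion marked (uniform prior), then the
# ratio estimator N = M / p applied to each posterior draw.
p = rng.beta(1 + marked_seen, 1 + total_seen - marked_seen, 10_000)
N = M / p
print(np.percentile(N, [2.5, 50, 97.5]).round(0))  # population size interval
```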

  7. High Speed Viterbi Decoder Architecture

    DEFF Research Database (Denmark)

    Paaske, Erik; Andersen, Jakob Dahl

    1998-01-01

    The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s.

  8. Estimation of hominoid ancestral population sizes under bayesian coalescent models incorporating mutation rate variation and sequencing errors.

    Science.gov (United States)

    Burgess, Ralph; Yang, Ziheng

    2008-09-01

    Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species.

  9. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimates of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.

  10. Bayesian inference on the effect of density dependence and weather on a guanaco population from Chile

    DEFF Research Database (Denmark)

    Zubillaga, Maria; Skewes, Oscar; Soto, Nicolás

    2014-01-01

    … on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanacos/km², with apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting …

  11. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    … the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported…

  12. Quantifying inter- and intra-population niche variability using hierarchical bayesian stable isotope mixing models.

    Science.gov (United States)

    Semmens, Brice X; Ward, Eric J; Moore, Jonathan W; Darimont, Chris T

    2009-07-09

    Variability in resource use defines the width of a trophic niche occupied by a population. Intra-population variability in resource use may occur across hierarchical levels of population structure from individuals to subpopulations. Understanding how levels of population organization contribute to population niche width is critical to ecology and evolution. Here we describe a hierarchical stable isotope mixing model that can simultaneously estimate both the prey composition of a consumer diet and the diet variability among individuals and across levels of population organization. By explicitly estimating variance components for multiple scales, the model can deconstruct the niche width of a consumer population into relevant levels of population structure. We apply this new approach to stable isotope data from a population of gray wolves from coastal British Columbia, and show support for extensive intra-population niche variability among individuals, social groups, and geographically isolated subpopulations. The analytic method we describe improves mixing models by accounting for diet variability, and improves isotope niche width analysis by quantitatively assessing the contribution of levels of organization to the niche width of a population.
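
    Stripped of its hierarchy, a stable isotope mixing model for two sources and one tracer reduces to a posterior over a single mixing proportion. Below is a grid-based sketch of that pooled, population-level core (all values invented); the hierarchical model in the paper adds individual-, group- and subpopulation-level variance components on top of this.

```python
import numpy as np

# Two-source mixing for a single isotope (illustrative values): a
# consumer's signature is a weighted average of the source signatures.
s1, s2, sd = -20.0, -14.0, 1.0               # source means, residual sd
consumers = np.array([-17.5, -16.8, -18.2, -17.0])

alpha = np.linspace(0.0, 1.0, 501)           # diet proportion from source 1
mix = alpha * s1 + (1.0 - alpha) * s2

# Pooled (population-level) posterior over alpha on a grid, flat prior.
loglik = sum(-0.5 * ((c - mix) / sd) ** 2 for c in consumers)
post = np.exp(loglik - loglik.max())
post /= post.sum()
print(alpha[post.argmax()])                  # population-level diet estimate
```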

  13. From individual to population level effects of toxicants in the tubicifid Branchiura sowerbyi using threshold effect models in a Bayesian framework.

    Science.gov (United States)

    Ducrot, Virginie; Billoir, Elise; Péry, Alexandre R R; Garric, Jeanne; Charles, Sandrine

    2010-05-01

    Effects of zinc were studied in the freshwater worm Branchiura sowerbyi using partial and full life-cycle tests. Only newborn and juveniles were sensitive to zinc, displaying effects on survival, growth, and age at first brood at environmentally relevant concentrations. Threshold effect models were proposed to assess toxic effects on individuals. They were fitted to life-cycle test data using Bayesian inference and adequately described life-history trait data in exposed organisms. The daily asymptotic growth rate of theoretical populations was then simulated with a matrix population model, based upon individual-level outputs. Population-level outputs were in accordance with existing literature for controls. Working in a Bayesian framework allowed incorporating parameter uncertainty in the simulation of the population-level response to zinc exposure, thus increasing the relevance of test results in the context of ecological risk assessment.
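
    The population-level step can be sketched by propagating posterior draws of the vital rates through the dominant eigenvalue of a small projection matrix. The distributions below are invented stand-ins for the fitted threshold-effect posteriors, and the two-stage matrix is a simplification of the authors' model:

```python
import numpy as np

rng = np.random.default_rng(4)

def growth_rate(juvenile_survival, adult_survival, fecundity):
    """Asymptotic growth rate = dominant eigenvalue of a 2-stage matrix."""
    A = np.array([[0.0, fecundity],
                  [juvenile_survival, adult_survival]])
    return np.linalg.eigvals(A).real.max()

# Stand-ins for Bayesian posterior draws of individual-level vital rates
# under exposure (illustrative distributions, not the paper's posteriors).
js = rng.beta(8, 25, 2000)
surv = rng.beta(20, 10, 2000)
fec = rng.lognormal(1.0, 0.2, 2000)

lam = np.array([growth_rate(j, a, f) for j, a, f in zip(js, surv, fec)])
print(np.percentile(lam, [2.5, 50, 97.5]).round(3))  # growth-rate uncertainty
```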

  14. Inferring cetacean population densities from the absolute dynamic topography of the ocean in a hierarchical Bayesian framework.

    Directory of Open Access Journals (Sweden)

    Mario A Pardo

    We inferred the population densities of blue whales (Balaenoptera musculus) and short-beaked common dolphins (Delphinus delphis) in the Northeast Pacific Ocean as functions of the water column's physical structure by implementing hierarchical models in a Bayesian framework. This approach allowed us to propagate the uncertainty of the field observations into the inference of species-habitat relationships and to generate spatially explicit population density predictions with reduced effects of sampling heterogeneity. Our hypothesis was that the large-scale spatial distributions of these two cetacean species respond primarily to ecological processes resulting from shoaling and outcropping of the pycnocline in regions of wind-forced upwelling and eddy-like circulation. Physically, these processes affect the thermodynamic balance of the water column, decreasing its volume and thus the height of the absolute dynamic topography (ADT). Biologically, they lead to elevated primary productivity and persistent aggregation of low-trophic-level prey. Unlike other remotely sensed variables, ADT provides information about the structure of the entire water column, and it is also routinely measured at high spatial-temporal resolution by satellite altimeters with uniform global coverage. Our models provide spatially explicit population density predictions for both species, even in areas where the pycnocline shoals but does not outcrop (e.g. the Costa Rica Dome and the North Equatorial Countercurrent thermocline ridge). Interannual variations in distribution during El Niño anomalies suggest that the population density of both species decreases dramatically in the Equatorial Cold Tongue and the Costa Rica Dome, and that their distributions retract to particular areas that remain productive, such as the more oceanic waters in the central California Current System, the northern Gulf of California, the North Equatorial Countercurrent thermocline ridge, and the more…

  15. Inferring cetacean population densities from the absolute dynamic topography of the ocean in a hierarchical Bayesian framework.

    Science.gov (United States)

    Pardo, Mario A; Gerrodette, Tim; Beier, Emilio; Gendron, Diane; Forney, Karin A; Chivers, Susan J; Barlow, Jay; Palacios, Daniel M

    2015-01-01

    We inferred the population densities of blue whales (Balaenoptera musculus) and short-beaked common dolphins (Delphinus delphis) in the Northeast Pacific Ocean as functions of the water-column's physical structure by implementing hierarchical models in a Bayesian framework. This approach allowed us to propagate the uncertainty of the field observations into the inference of species-habitat relationships and to generate spatially explicit population density predictions with reduced effects of sampling heterogeneity. Our hypothesis was that the large-scale spatial distributions of these two cetacean species respond primarily to ecological processes resulting from shoaling and outcropping of the pycnocline in regions of wind-forced upwelling and eddy-like circulation. Physically, these processes affect the thermodynamic balance of the water column, decreasing its volume and thus the height of the absolute dynamic topography (ADT). Biologically, they lead to elevated primary productivity and persistent aggregation of low-trophic-level prey. Unlike other remotely sensed variables, ADT provides information about the structure of the entire water column and it is also routinely measured at high spatial-temporal resolution by satellite altimeters with uniform global coverage. Our models provide spatially explicit population density predictions for both species, even in areas where the pycnocline shoals but does not outcrop (e.g. the Costa Rica Dome and the North Equatorial Countercurrent thermocline ridge). Interannual variations in distribution during El Niño anomalies suggest that the population density of both species decreases dramatically in the Equatorial Cold Tongue and the Costa Rica Dome, and that their distributions retract to particular areas that remain productive, such as the more oceanic waters in the central California Current System, the northern Gulf of California, the North Equatorial Countercurrent thermocline ridge, and the more southern portion of the…

  16. Bayesian estimates of male and female African lion mortality for future use in population management

    DEFF Research Database (Denmark)

    Barthold, Julia A; Loveridge, Andrew; Macdonald, David

    2016-01-01

    1. The global population size of African lions is plummeting, and many small fragmented populations face local extinction. Extinction risks are amplified through the common practice of trophy hunting for males, which makes setting sustainable hunting quotas a vital task. 2. Various demographic models evaluate consequences of hunting on lion population growth. However, none of the models use unbiased estimates of male age-specific mortality because such estimates do not exist. Until now, estimating mortality from resighting records of marked males has been impossible due to the uncertain fates …

  17. Latent state-space models for neural decoding.

    Science.gov (United States)

    Aghagolzadeh, Mehdi; Truccolo, Wilson

    2014-01-01

    Ensembles of single neurons in motor cortex can show strong low-dimensional collective dynamics. In this study, we explore an approach where neural decoding is applied to estimated low-dimensional dynamics instead of to the full recorded neuronal population. A latent state-space model (SSM) is used to estimate the low-dimensional neural dynamics from the measured spiking activity in a population of neurons. A second state-space model representation is then used to decode kinematics, via a Kalman filter, from the estimated low-dimensional dynamics. The latent SSM-based decoding approach is illustrated on neuronal activity recorded from primary motor cortex in a monkey performing naturalistic 3-D reach and grasp movements. Our analyses show that 3-D reach decoding performance based on the estimated low-dimensional dynamics is comparable to decoding performance based on the full recorded neuronal population.
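
    The second-stage decoder is a standard linear-Gaussian Kalman filter. A compact sketch (ours; the matrices and data are synthetic, and in the paper the observations would be the estimated latent dynamics rather than raw features):

```python
import numpy as np

def kalman_decode(Z, A, C, Q, R):
    """Causal Kalman filter: estimate latent kinematics x_t from neural
    features z_t under a linear-Gaussian state-space model."""
    n = A.shape[0]
    x, P = np.zeros(n), np.eye(n)
    xs = []
    for z in Z:
        x, P = A @ x, A @ P @ A.T + Q                 # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # Kalman gain
        x = x + K @ (z - C @ x)                       # update
        P = (np.eye(n) - K @ C) @ P
        xs.append(x.copy())
    return np.array(xs)

# Toy example: 2-D kinematic state observed through 10 noisy features.
rng = np.random.default_rng(5)
A = np.array([[0.99, 0.0], [0.0, 0.99]])
C = rng.normal(size=(10, 2))
Q, R = 0.01 * np.eye(2), 0.5 * np.eye(10)
x_true = np.cumsum(rng.normal(0, 0.1, (200, 2)), axis=0)
Z = x_true @ C.T + rng.normal(0, 0.7, (200, 10))
print(kalman_decode(Z, A, C, Q, R).shape)  # (200, 2) decoded trajectory
```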

  18. Estimating temporal trend in the presence of spatial complexity: A Bayesian hierarchical model for a wetland plant population undergoing restoration

    Science.gov (United States)

    Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.

    2011-01-01

    Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.

  19. Bayesian estimates of male and female African lion mortality for future use in population management

    DEFF Research Database (Denmark)

    Barthold, Julia A; Loveridge, Andrew; Macdonald, David

    2016-01-01

    1. The global population size of African lions is plummeting, and many small fragmented populations face local extinction. Extinction risks are amplified through the common practice of trophy hunting for males, which makes setting sustainable hunting quotas a vital task. 2. Various demographic models evaluate consequences of hunting on lion population growth. However, none of the models use unbiased estimates of male age-specific mortality because such estimates do not exist. Until now, estimating mortality from resighting records of marked males has been impossible due to the uncertain fates... higher mortality across all ages in both populations. We discuss the role that different drivers of lion mortality may play in explaining these differences and whether their effects need to be included in lion demographic models. 5. Synthesis and applications. Our mortality estimates can be used...

  20. Decoding Astronomical Concepts

    Science.gov (United States)

    Durisen, Richard H.; Pilachowski, Catherine A.

    2004-01-01

    Two astronomy professors, using the Decoding the Disciplines process, help their students use abstract theories to analyze light and to visualize the enormous scale of astronomical concepts. (Contains 5 figures.)

  1. Optimization of MPEG decoding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1999-01-01

    MPEG-2 video decoding is examined. A unified approach to quality improvement, chrominance upsampling, de-interlacing and superresolution is presented. The information over several frames is combined as part of the processing....

  2. MASSIVE: A Bayesian analysis of giant planet populations around low-mass stars

    CERN Document Server

    Lannier, J; Lagrange, A M; Borgniet, S; Rameau, J; Schlieder, J E; Gagné, J; Bonavita, M A; Malo, L; Chauvin, G; Bonnefoy, M; Girard, J H

    2016-01-01

    Direct imaging has led to the discovery of several giant planet and brown dwarf companions. These imaged companions populate a mass, separation and age domain (mass > 1 MJup, orbits > 5 AU, age ...). ... 2 MJup might be independent of the mass of the host star.

  3. The impact of ancestral population size and incomplete lineage sorting on Bayesian estimation of species divergence times

    Institute of Scientific and Technical Information of China (English)

    Konstantinos ANGELIS; Mario DOS REIS

    2015-01-01

    Although the effects of the coalescent process on sequence divergence and genealogies are well understood, the vast majority of studies that use molecular sequences to estimate times of divergence among species have failed to account for the coalescent process. Here we study the impact of ancestral population size and incomplete lineage sorting on Bayesian estimates of species divergence times under the molecular clock when the inference model ignores the coalescent process. Using a combination of mathematical analysis, computer simulations and analysis of real data, we find that the errors on estimates of times and the molecular rate can be substantial when ancestral populations are large and when there is substantial incomplete lineage sorting. For example, in a simple three-species case, we find that if the most precise fossil calibration is placed on the root of the phylogeny, the age of the internal node is overestimated, while if the most precise calibration is placed on the internal node, then the age of the root is underestimated. In both cases, the molecular rate is overestimated. Using simulations on a phylogeny of nine species, we show that substantial errors in time and rate estimates can be obtained even when dating ancient divergence events. We analyse the hominoid phylogeny and show that estimates of the neutral mutation rate obtained while ignoring the coalescent are too high. Using a coalescent-based technique to obtain geological times of divergence, we obtain estimates of the mutation rate that are within experimental estimates and we also obtain substantially older divergence times within the phylogeny [Current Zoology 61 (5): 874-885, 2015].

  4. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... give: a fast maximum-likelihood list decoder based on the Guruswami–Sudan algorithm; a new variant of Power decoding, Power Gao, along with some new insights into Power decoding; and a new, module based method for performing rational interpolation for theWu algorithm. We also show how to decode...

  5. Bayesian Lensing Shear Measurement

    CERN Document Server

    Bernstein, Gary M

    2013-01-01

    We derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise-induced biases through a rigorous Bayesian treatment of the measurement. The Bayesian formalism requires a prior describing the (noiseless) distribution of the target galaxy population over some parameter space; this prior can be constructed from low-noise images of a subsample of the target population, attainable from long integrations of a fraction of the survey field. We find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point-spread function and sampling. The Bayesian model fitting (BMF) method assigns a likelihood of the pixel data to galaxy models (e.g. Sersic ellipses), and requires the unlensed distribution of galaxies over the model parameters as a prior. The Bayesian Fourier domain (BFD) method compresses galaxies to a small set of weighted moments calculated after PSF correction in Fourier space. It requires the unlensed distributi...

  6. Improved decoding of limb-state feedback from natural sensors.

    Science.gov (United States)

    Wagenaar, J B; Ventura, V; Weber, D J

    2009-01-01

    Limb state feedback is of great importance for achieving stable and adaptive control of FES neuroprostheses. A natural way to determine limb state is to measure and decode the activity of primary afferent neurons in the limb. The feasibility of doing so has been demonstrated by [1] and [2]. Despite positive results, some drawbacks in these works are associated with the application of reverse regression techniques for decoding the afferent neuronal signals. Decoding methods that are based on direct regression are now favored over reverse regression for decoding neural responses in higher regions in the central nervous system [3]. In this paper, we apply a direct regression approach to decode the movement of the hind limb of a cat from a population of primary afferent neurons. We show that this approach is more principled, more efficient, and more generalizable than reverse regression.
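
    A minimal contrast between the two decoding families the abstract compares, on synthetic afferent data. The tuning model, sample sizes, and the per-neuron inversion used for the reverse readout are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a hind-limb angle drives firing of a population of afferents.
T, N = 1000, 40
angle = np.sin(np.linspace(0, 20, T)) + 0.1 * rng.normal(size=T)
sens = rng.uniform(-1, 1, size=N)                  # per-neuron sensitivity (assumed)
rates = np.outer(angle, sens) + 0.5 * rng.normal(size=(T, N))

train, test = slice(0, 800), slice(800, None)

# Direct regression: limb state ~ firing rates (a single multivariate fit).
X = np.c_[rates, np.ones(T)]                       # add an intercept column
w = np.linalg.lstsq(X[train], angle[train], rcond=None)[0]
pred_direct = X[test] @ w

# Reverse regression: fit rate ~ state per neuron, then invert each fit and
# average. Neurons with near-zero slope make this inversion unstable, which
# is one of the drawbacks of the reverse approach.
g = np.array([np.polyfit(angle[train], rates[train, i], 1) for i in range(N)])
pred_reverse = ((rates[test] - g[:, 1]) / g[:, 0]).mean(axis=1)

for name, p in [("direct", pred_direct), ("reverse", pred_reverse)]:
    print(name, "RMSE:", np.sqrt(np.mean((p - angle[test]) ** 2)).round(3))
```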

  7. Bayesian population structure analysis reveals presence of phylogeographically specific sublineages within previously ill-defined T group of Mycobacterium tuberculosis.

    Science.gov (United States)

    Reynaud, Yann; Zheng, Chao; Wu, Guihui; Sun, Qun; Rastogi, Nalin

    2017-01-01

    Mycobacterium tuberculosis genetic structure and evolutionary history have been studied for years by several genotyping approaches, but the delineation of a few sublineages remains controversial and needs better characterization. This is particularly the case for the T group within lineage 4 (L4), which was first described using spoligotyping to pool together a number of strains with ill-defined signatures. Although T strains were not traditionally considered a real phylogenetic group, they did contain a few phylogenetically meaningful sublineages, as shown using SNPs. We therefore decided to investigate whether this observation could be corroborated using other robust genetic markers. We consequently made a first assessment of genetic structure using 24-loci MIRU-VNTR data extracted from the SITVIT2 database (n = 607 clinical isolates collected in Russia, Albania, Turkey, Iraq, Brazil and China). Combining Minimum Spanning Trees and Bayesian population structure analyses (using the STRUCTURE and TESS software), we distinctly identified eight tentative phylogenetic groups (T1-T8) with a remarkable correlation with geographical origin. We further compared the structure observed with other L4 sublineages (n = 416 clinical isolates belonging to the LAM, Haarlem, X and S sublineages), and showed that 5 of the 8 T groups appeared phylogeographically well-defined, as opposed to the remaining 3 groups that partially mixed with other L4 isolates. These results provide novel evidence of the phylogeographic specificity of a proportion of the ill-defined T group of M. tuberculosis. The genetic structure observed will now be further validated on an enlarged worldwide dataset using Whole Genome Sequencing (WGS).

  9. Decoding Children's Expressions of Affect.

    Science.gov (United States)

    Feinman, Joel A.; Feldman, Robert S.

    Mothers' ability to decode the emotional expressions of their male and female children was compared to the decoding ability of non-mothers. Happiness, sadness, fear and anger were induced in children in situations that varied in terms of spontaneous and role-played encoding modes. It was hypothesized that mothers would be more accurate decoders of…

  10. Analysis of regional scale risk to whirling disease in populations of Colorado and Rio Grande cutthroat trout using Bayesian belief network model

    Science.gov (United States)

    Kolb Ayre, Kimberley; Caldwell, Colleen A.; Stinson, Jonah; Landis, Wayne G.

    2014-01-01

    Introduction and spread of the parasite Myxobolus cerebralis, the causative agent of whirling disease, has contributed to the collapse of wild trout populations throughout the intermountain west. Of concern is the risk the disease poses to the conservation and recovery of native cutthroat trout. We employed a Bayesian belief network to assess the probability of whirling disease in Colorado River and Rio Grande cutthroat trout (Oncorhynchus clarkii pleuriticus and Oncorhynchus clarkii virginalis, respectively) within their current ranges in the southwest United States. Available habitat (as defined by gradient and elevation) for the intermediate oligochaete worm host, Tubifex tubifex, exerted the greatest influence on the likelihood of infection, yet the prevalence of stream barriers also affected the risk outcome. Management areas that had the highest likelihood of infected Colorado River cutthroat trout were in the eastern portion of their range, although the probability of infection was highest for populations in the southern, San Juan subbasin. Rio Grande cutthroat trout had a relatively low likelihood of infection, with populations in the southernmost Pecos management area predicted to be at greatest risk. The Bayesian risk assessment model predicted the likelihood of whirling disease infection from its principal transmission vector, fish movement, and suggested that barriers may be effective in reducing the risk of exposure for native trout populations. Data gaps, especially with regard to the location of spawning, highlighted the importance of developing monitoring plans that support future risk assessments and adaptive management for subspecies of cutthroat trout.
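
    The qualitative structure described here (host habitat and stream barriers jointly driving infection risk) can be sketched as a tiny discrete belief network. The graph and all conditional-probability numbers below are invented for illustration; the study's network is far richer.

```python
# Minimal discrete Bayesian belief network with an assumed structure:
#   habitat -> infection <- barriers          (all CPT numbers are made up)
p_habitat = {"suitable": 0.6, "unsuitable": 0.4}     # P(T. tubifex habitat)
p_barrier = {"present": 0.3, "absent": 0.7}          # P(stream barrier)
p_infect = {                                          # P(infection | habitat, barrier)
    ("suitable", "absent"): 0.55,
    ("suitable", "present"): 0.25,
    ("unsuitable", "absent"): 0.10,
    ("unsuitable", "present"): 0.03,
}

def prob_infection():
    """Marginal P(infection), enumerating both parent nodes."""
    return sum(
        p_habitat[h] * p_barrier[b] * p_infect[(h, b)]
        for h in p_habitat for b in p_barrier
    )

def prob_infection_given_barrier(barrier):
    """P(infection | barrier state): the effect of barriers on risk."""
    return sum(p_habitat[h] * p_infect[(h, barrier)] for h in p_habitat)

print("P(infection) =", round(prob_infection(), 3))
print("with barrier:", round(prob_infection_given_barrier("present"), 3),
      "without:", round(prob_infection_given_barrier("absent"), 3))
```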

  11. Bayesian threshold analysis of direct and maternal genetic parameters for piglet mortality at farrowing in Large White, Landrace, and Pietrain populations.

    Science.gov (United States)

    Ibáñez-Escriche, N; Varona, L; Casellas, J; Quintanilla, R; Noguera, J L

    2009-01-01

    A Bayesian threshold model was fitted to analyze the genetic parameters for farrowing mortality at the piglet level in Large White, Landrace, and Pietrain populations. Field data were collected between 1999 and 2006. They were provided by 3 pig selection nucleus farms of a commercial breeding company registered in the Spanish Pig Data Bank (BDporc). Analyses were performed on 3 data sets of Large White (60,535 piglets born from 4,551 litters), Landrace (57,987 piglets from 5,008 litters), and Pietrain (42,707 piglets from 4,328 litters) populations. In the analysis, farrowing mortality was considered as a binary trait at the piglet level and scored as 1 (alive piglet) or 0 (dead piglet) at farrowing or within the first 12 h of life. Each breed was analyzed separately, and operational models included systematic effects (year-season, sex, litter size, and order of parity), direct and maternal additive genetic effects, and common litter effects. Analyses were performed by Bayesian methods using Gibbs sampling. The posterior means of direct heritability were 0.02, 0.06, and 0.10, and the posterior means of maternal heritability were 0.05, 0.13, and 0.06 for Large White, Landrace, and Pietrain populations, respectively. The posterior means of genetic correlation between the direct and maternal genetic effects for Landrace and Pietrain populations were -0.56 and -0.53, and the highest posterior intervals at 95% did not include zero. In contrast, the posterior mean of the genetic correlation between direct and maternal effects was 0.15 in the Large White population, with the null correlation included in the highest posterior interval at 95%. These results suggest that the genetic model of evaluation for the Landrace and Pietrain populations should include direct and maternal genetic effects, whereas farrowing mortality could be considered as a sow trait in the Large White population.

  12. Decoding the human genome

    CERN Document Server

    CERN. Geneva. Audiovisual Unit; Antonerakis, S E

    2002-01-01

    Decoding the Human genome is a very up-to-date topic, raising several questions besides purely scientific, in view of the two competing teams (public and private), the ethics of using the results, and the fact that the project went apparently faster and easier than expected. The lecture series will address the following chapters: Scientific basis and challenges. Ethical and social aspects of genomics.

  13. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd...

  14. Loneliness and Interpersonal Decoding Skills.

    Science.gov (United States)

    Zakahi, Walter R.; Goss, Blaine

    1995-01-01

    Finds that the romantic loneliness dimension of the Differential Loneliness Scale is related to decoding ability, and that there are moderate linear relationships among several of the dimensions of the Differential Loneliness Scale, the self-report of listening ability, and participants' view of their own decoding ability. (SR)

  15. Assessment of Breast Cancer Risk in an Iranian Female Population Using Bayesian Networks with Varying Node Number

    Science.gov (United States)

    Rezaianzadeh, Abbas; Sepandi, Mojtaba; Rahimikazerooni, Salar

    2016-11-01

    Objective: As a source of information, medical data can feature hidden relationships. However, the high volume of datasets and the complexity of decision-making in medicine introduce difficulties for analysis and interpretation, and processing steps may be needed before the data can be used by clinicians in their work. This study focused on the use of Bayesian models with different numbers of nodes to aid clinicians in breast cancer risk estimation. Methods: Bayesian networks (BNs) built from a retrospectively collected dataset including mammographic details, risk factor exposure, and clinical findings were assessed for prediction of the probability of breast cancer in individual patients. Area under the receiver-operating characteristic curve (AUC), accuracy, sensitivity, specificity, and positive and negative predictive values were used to evaluate discriminative performance. Result: A network incorporating selected features performed better (AUC = 0.94) than one incorporating all the features (AUC = 0.93). The results revealed no significant difference among the 3 models regarding performance indices at the 5% significance level. Conclusion: BNs could effectively discriminate malignant from benign abnormalities and accurately predict the risk of breast cancer in individuals. Moreover, the overall performance of the 9-node BN was better, and due to its lower number of nodes it might be more readily applied in clinical settings.
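
    The modeling comparison above (a reduced-node network versus an all-feature network, scored by AUC) can be imitated with a simple Bayesian classifier on synthetic data. Naive Bayes via scikit-learn stands in for the study's Bayesian networks; the features, sample sizes, and effect sizes are made up.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic screening data: 15 features, of which only 5 are informative.
n, p_inf, p_noise = 2000, 5, 10
y = rng.integers(0, 2, size=n)
informative = rng.normal(size=(n, p_inf)) + 0.8 * y[:, None]
X = np.c_[informative, rng.normal(size=(n, p_noise))]

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# "All nodes" vs "selected nodes": compare AUC, mirroring the 15- vs 9-node idea.
for name, cols in [("all features", slice(None)), ("selected features", slice(0, p_inf))]:
    clf = GaussianNB().fit(Xtr[:, cols], ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```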

  16. Bayesian statistics

    OpenAIRE

    新家, 健精

    2013-01-01

    Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  17. The Formal Specifications for Protocols of Decoders

    Institute of Scientific and Technical Information of China (English)

    YUAN Meng-ting; WU Guo-qing; SHU Feng-di

    2004-01-01

    This paper presents a formal approach, FSPD (Formal Specifications for Protocols of Decoders), to specify decoder communication protocols. Based on an axiomatic approach, FSPD is a precise language with which programmers can use a single suitable driver to handle various types of decoders. FSPD helps programmers achieve high adaptability and reusability of decoder-driver software.

  18. Decoding the productivity code

    DEFF Research Database (Denmark)

    Hansen, David

    ... approach often ends up demanding intense employee focus to sustain improvement and engagement. Likewise, a single-minded employee development approach often ends up demanding rationalization to achieve the desired financial results. These ineffective approaches make organizations react like pendulums that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period ... i.e., to be prepared to initiate improvement. The study shows how the effectiveness of the improvement system depends on the congruent fit between the five elements as well as the bridging coherence between the improvement system and the work system. The bridging coherence depends on how improvements are activated...

  19. Astrophysics Decoding the cosmos

    CERN Document Server

    Irwin, Judith A

    2007-01-01

    Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.

  20. Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    CERN Document Server

    Ling, Cong; Luzzi, Laura; Stehle, Damien

    2011-01-01

    In lattice-coded multiple-input multiple-output (MIMO) systems, optimal decoding amounts to solving the closest vector problem (CVP). Embedding is a powerful technique for the approximate CVP, yet its remarkable performance is not well understood. In this paper, we analyze the embedding technique from a bounded distance decoding (BDD) viewpoint. We prove that the Lenstra, Lenstra and Lovász (LLL) algorithm can achieve 1/(2γ)-BDD for γ ≈ O(2^(n/4)), yielding a polynomial-complexity decoding algorithm performing exponentially better than Babai's, which achieves γ = O(2^(n/2)). This substantially improves the existing result γ = O(2^n) for embedding decoding. We also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT).
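
    For context on the baselines the abstract quotes, here is a numpy sketch of Babai's rounding, the simplest approximate-CVP decoder; the embedding/BDD decoder itself is not reproduced. The basis, noise level, and dimensions below are arbitrary.

```python
import numpy as np

def babai_round(B, t):
    """Babai's rounding: approximate closest vector to t in the lattice B @ Z^n."""
    coeffs = np.rint(np.linalg.solve(B, t))    # round coordinates in the lattice basis
    return B @ coeffs, coeffs

rng = np.random.default_rng(3)
n = 4
B = rng.integers(-5, 6, size=(n, n)).astype(float)
while abs(np.linalg.det(B)) < 1e-9:            # keep drawing until the basis is full rank
    B = rng.integers(-5, 6, size=(n, n)).astype(float)

x_true = rng.integers(-3, 4, size=n).astype(float)
t = B @ x_true + 0.1 * rng.normal(size=n)      # lattice point plus small noise

v, coeffs = babai_round(B, t)
print("recovered:", bool(np.all(coeffs == x_true)), "residual:", np.linalg.norm(v - t))
```

    How well rounding works depends on how close to orthogonal the basis is, which is why reduction algorithms such as LLL matter for the decoding radii quoted in the abstract.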

  1. Decoding OvTDM with sphere-decoding algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Overlapped time division multiplexing (OvTDM) is a new type of transmission scheme with high spectrum efficiency and a low threshold signal-to-noise ratio (SNR). In this article, the structure of OvTDM is introduced and a complex-domain sphere-decoding algorithm is proposed for OvTDM. Simulations demonstrate that the proposed algorithm can achieve maximum likelihood (ML) decoding with lower complexity than traditional maximum likelihood sequence demodulation (MLSD) or the Viterbi algorithm (VA).

  2. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  3. Body size and geographic range do not explain long term variation in fish populations: a Bayesian phylogenetic approach to testing assembly processes in stream fish assemblages.

    Directory of Open Access Journals (Sweden)

    Stephen J Jacquemin

    Full Text Available We combine evolutionary biology and community ecology to test whether two species traits, body size and geographic range, explain long term variation in local scale freshwater stream fish assemblages. Body size and geographic range are expected to influence several aspects of fish ecology, via relationships with niche breadth, dispersal, and abundance. These traits are expected to scale inversely with niche breadth or current abundance, and to scale directly with dispersal potential. However, their utility to explain long term temporal patterns in local scale abundance is not known. Comparative methods employing an existing molecular phylogeny were used to incorporate evolutionary relatedness in a test for covariation of body size and geographic range with long term (1983-2010) local scale population variation of fishes in West Fork White River (Indiana, USA). The Bayesian model incorporating phylogenetic uncertainty and correlated predictors indicated that neither body size nor geographic range explained significant variation in population fluctuations over a 28 year period. Phylogenetic signal data indicated that body size and geographic range were less similar among taxa than expected if trait evolution followed a purely random walk. We interpret this as evidence that local scale population variation may be influenced less by species-level traits such as body size or geographic range, and instead may be influenced more strongly by a taxon's local scale habitat and biotic assemblages.

  4. Decode the Sodium Label Lingo

    Science.gov (United States)

    Published January 24, 2013. Reading food labels can help you slash sodium. Here's how to decipher them. "Sodium free" or "...

  5. Bayesian Theory

    CERN Document Server

    Bernardo, Jose M

    2000-01-01

    This highly acclaimed text, now available in paperback, provides a thorough account of key concepts and theoretical results, with particular emphasis on viewing statistical inference as a special case of decision theory. Information-theoretic concepts play a central role in the development of the theory, which provides, in particular, a detailed discussion of the problem of specification of so-called 'prior ignorance'. The work is written from the authors' committed Bayesian perspective, but an overview of non-Bayesian theories is also provided, and each chapter contains a wide-ranging critica...

  6. Pipelined Viterbi Decoder Using FPGA

    Directory of Open Access Journals (Sweden)

    Nayel Al-Zubi

    2013-02-01

    Full Text Available Convolutional encoding is used in almost all digital communication systems to get better gain in BER (Bit Error Rate), and all applications need a high throughput rate. The Viterbi algorithm is the standard solution for the decoding process. The nonlinear, feedback nature of the Viterbi decoder makes high-speed implementation harder. One promising approach to achieving high throughput in the Viterbi decoder is to introduce pipelining. This work applies a carry-save technique, which has the advantage that the critical path in the ACS feedback becomes unidirectional, eliminating carry ripple in the "Add" part of the ACS unit. Simulation and implementation show how this technique improves the throughput of the Viterbi decoder. The design complexities of the bit-pipelined architecture are evaluated and demonstrated using Verilog HDL simulation, and a general software algorithm that simulates a Viterbi decoder was developed. Our research is concerned with implementation of Viterbi decoders on Field Programmable Gate Arrays (FPGAs). FPGAs are generally slower than custom integrated circuits but can be configured in the lab in a few hours, compared to fabrication, which takes a few months. The design was implemented using Verilog HDL and synthesized for Xilinx FPGAs.
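
    The add-compare-select (ACS) recursion that the carry-save pipeline accelerates is the core of any Viterbi decoder. Below is a minimal software reference for the classic rate-1/2, constraint-length-3 code (generators 7, 5 octal) with hard decisions; the paper's hardware-specific choices are not modeled, and the code and message are illustrative.

```python
G = (0b111, 0b101)          # generator polynomials (7, 5 octal)
K = 3                       # constraint length -> 4 trellis states

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state             # newest bit in the MSB
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    n_states = 1 << (K - 1)
    INF = 10 ** 9
    metric = [0] + [INF] * (n_states - 1)        # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                     # hypothesized input bit
                reg = (b << (K - 1)) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                nxt = reg >> 1
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[nxt]:          # add-compare-select step
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0]                # trailing zeros flush the encoder
rx = encode(msg)
rx[3] ^= 1                                       # inject one channel bit error
print("decoded ok:", viterbi(rx) == msg)
```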

  7. Bayesian Methods for Statistical Analysis

    OpenAIRE

    Puza, Borek

    2015-01-01

    Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...

  8. Bayesian SPLDA

    OpenAIRE

    Villalba, Jesús

    2015-01-01

    In this document we are going to derive the equations needed to implement a Variational Bayes estimation of the parameters of the simplified probabilistic linear discriminant analysis (SPLDA) model. This can be used to adapt SPLDA from one database to another with few development data or to implement the fully Bayesian recipe. Our approach is similar to Bishop's VB PPCA.

  9. A limited sampling method to estimate methotrexate pharmacokinetics in patients with rheumatoid arthritis using a Bayesian approach and the population data modeling program P-PHARM.

    Science.gov (United States)

    Bressolle, F; Bologna, C; Edno, L; Bernard, J C; Gomeni, R; Sany, J; Combe, B

    1996-01-01

    This paper describes a methodology to calculate methotrexate (MTX) pharmacokinetic parameters after intramuscular administration using two samples and the population parameters. Total and free MTX were measured over a 36-h period in 56 rheumatoid arthritis patients; 14 patients were studied after a two-dose scheme at 15-day intervals. The Hill equation was used to relate the free MTX to the total MTX changes in plasma concentrations, and a two-compartment open model was used to fit the total MTX plasma concentrations. A non-linear mixed effect procedure was used to estimate the population parameters and to explore the interindividual variability in relation to the following covariables: age, weight, height, haemoglobin, erythrocyte sedimentation rate, platelet count, creatinine clearance, rheumatoid factor, C-reactive protein, swelling joint count, and Ritchie's articular index. Population parameters were evaluated for 40 patients using a three-step approach. The population average parameters and the interindividual variabilities expressed as coefficients of variation (CV%) were: CL, 6.94 l·h⁻¹ (20.5%); V, 34.8 l (32.2%); k12, 0.0838 h⁻¹ (47.7%); k21, 0.0769 h⁻¹ (61.6%); ka, 4.31 h⁻¹ (58%); Emax, 1.12 µmol·l⁻¹ (19.7%); gamma, 0.932 (12.3%); and EC50, 2.14 µmol·l⁻¹ (27.3%). Thirty additional data sets (16 new patients and 14 patients of the previous population but treated on a separate occasion) were used to evaluate the predictive performance of the population parameters. Twelve blood samples were collected from each individual in order to calculate individual parameters using standard fitting procedures. These values were compared to the ones estimated using a Bayesian approach with population parameters as a priori information together with two samples, selected from the individual observations. The results show that the bias was not statistically different from zero and the precision of these parameters was excellent.
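
    The reported population means make the free-total relationship easy to reproduce. A small sketch, assuming the standard Hill parameterization (the paper's exact form may differ):

```python
# Free-vs-total methotrexate via the Hill equation, using the reported
# population means (Emax = 1.12 umol/L, EC50 = 2.14 umol/L, gamma = 0.932).
EMAX, EC50, GAMMA = 1.12, 2.14, 0.932

def free_mtx(total):
    """Predicted free MTX concentration (umol/L) from total plasma MTX."""
    return EMAX * total ** GAMMA / (EC50 ** GAMMA + total ** GAMMA)

for c in (0.5, 1.0, 2.0, 5.0):
    print(f"total {c:>4.1f} umol/L -> free {free_mtx(c):.3f} umol/L")
```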

  10. Orientation decoding: Sense in spirals?

    Science.gov (United States)

    Clifford, Colin W G; Mannion, Damien J

    2015-04-15

    The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias (the radial bias toward local orientations that are collinear with the centre of gaze) and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson's (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain's processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense.

  11. Population coding in mouse visual cortex: response reliability and dissociability of stimulus tuning and noise correlation.

    Science.gov (United States)

    Montijn, Jorrit S; Vinck, Martin; Pennartz, Cyriel M A

    2014-01-01

    The primary visual cortex is an excellent model system for investigating how neuronal populations encode information, because of well-documented relationships between stimulus characteristics and neuronal activation patterns. We used two-photon calcium imaging data to relate the performance of different methods for studying population coding (population vectors, template matching, and Bayesian decoding algorithms) to their underlying assumptions. We show that the variability of neuronal responses may hamper the decoding of population activity, and that a normalization to correct for this variability may be of critical importance for correct decoding of population activity. Second, by comparing noise correlations and stimulus tuning we find that these properties have dissociated anatomical correlates, even though noise correlations have been previously hypothesized to reflect common synaptic input. We hypothesize that noise correlations arise from large non-specific increases in spiking activity acting on many weak synapses simultaneously, while neuronal stimulus response properties are dependent on more reliable connections. Finally, this paper provides practical guidelines for further research on population coding and shows that population coding cannot be approximated by a simple summation of inputs, but is heavily influenced by factors such as input reliability and noise correlation structure.
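
    Of the decoders compared in this abstract, the Bayesian one under independent-Poisson assumptions is simple to sketch. The tuning-curve shape, spike counts, and population size below are simulated placeholders, not the study's calcium-imaging data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated orientation tuning: N neurons with von Mises-like tuning curves.
N, n_orient = 60, 8
orients = np.linspace(0, np.pi, n_orient, endpoint=False)
pref = rng.uniform(0, np.pi, size=N)

def rates(theta):
    """Mean firing rate (Hz) of each neuron for stimulus orientation theta."""
    return 2 + 8 * np.exp(2 * (np.cos(2 * (theta - pref)) - 1))

def decode(counts):
    """Naive-Bayes (independent Poisson) decoder: argmax of summed log-likelihoods."""
    ll = [np.sum(counts * np.log(rates(th)) - rates(th)) for th in orients]
    return orients[int(np.argmax(ll))]

trials, correct = 500, 0
for _ in range(trials):
    theta = orients[rng.integers(n_orient)]
    counts = rng.poisson(rates(theta))        # one trial of spike counts
    correct += decode(counts) == theta
print("decoding accuracy:", correct / trials, "(chance = 1/8)")
```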

  12. Bayesian signaling

    OpenAIRE

    Hedlund, Jonas

    2014-01-01

    This paper introduces private sender information into a sender-receiver game of Bayesian persuasion with monotonic sender preferences. I derive properties of increasing differences related to the precision of signals and use these to fully characterize the set of equilibria robust to the intuitive criterion. In particular, all such equilibria are either separating, i.e., the sender's choice of signal reveals his private information to the receiver, or fully disclosing, i.e., the outcome of th...

  13. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the 'Bayesian Monitoring' model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  14. Differences in the predictors of reading comprehension in first graders from low socio-economic status families with either good or poor decoding skills.

    Science.gov (United States)

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.

  16. Decoding Dyslexia, a Common Learning Disability

    Science.gov (United States)

    Feature: Decoding Dyslexia, a Common Learning Disability (Winter 2016). Related articles: In Their Own Words: Dealing with Dyslexia; Decoding Dyslexia, a Common Learning Disability; What is ...

  17. Busulfan in infants to adult hematopoietic cell transplant recipients: A population pharmacokinetic model for initial and Bayesian dose personalization

    Science.gov (United States)

    McCune, Jeannine S.; Bemer, Meagan J.; Barrett, Jeffrey S.; Baker, K. Scott; Gamis, Alan S.; Holford, Nicholas H.G.

    2014-01-01

    Purpose: Personalizing intravenous (IV) busulfan doses to a target plasma concentration at steady state (Css) is an essential component of hematopoietic cell transplantation (HCT). We sought to develop a population pharmacokinetic model to predict IV busulfan doses over a wide age spectrum (0.1-66 years) that accounts for differences in age and body size. Experimental design: A population pharmacokinetic model based on normal fat mass and maturation based on post-menstrual age was built from 12,380 busulfan concentration-time points obtained after IV busulfan administration in 1,610 HCT recipients. Subsequently, simulation results of the initial dose necessary to achieve a target Css with this model were compared with pediatric-only models. Results: A two-compartment model with first-order elimination best fit the data. The population busulfan clearance was 12.4 L/h for an adult male with 62 kg normal fat mass (equivalent to 70 kg total body weight). Busulfan clearance, scaled to body size, specifically normal fat mass, is predicted to be 95% of the adult clearance at 2.5 years post-natal age. With a target Css of 770 ng/mL, a higher proportion of initial doses achieved the therapeutic window with this age- and size-dependent model (72%) compared to dosing recommended by the Food and Drug Administration (57%) or the European Medicines Agency (70%). Conclusion: This is the first population pharmacokinetic model developed to predict initial IV busulfan doses and personalize to a target Css over a wide age spectrum, ranging from infants to adults. PMID:24218510
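
    A hedged sketch of the size-and-maturation structure the abstract reports: allometric 3/4-power scaling on normal fat mass plus a sigmoid maturation in post-menstrual age. The clearance and reference NFM are the reported values; TM50 and the Hill exponent are illustrative picks, tuned only so maturation reaches about 95% at 2.5 years post-natal age, matching the stated result.

```python
# Size-and-maturation scaling for IV busulfan clearance (illustrative sketch).
CL_STD, NFM_STD = 12.4, 62.0        # L/h and kg: reported adult reference values
TM50, HILL = 52.0, 2.5              # weeks PMA and exponent: assumed, not reported

def clearance(nfm_kg, pma_weeks):
    size = (nfm_kg / NFM_STD) ** 0.75                           # allometric scaling
    mat = pma_weeks ** HILL / (TM50 ** HILL + pma_weeks ** HILL)  # sigmoid maturation
    return CL_STD * size * mat

# Example: a 1-year-old (PMA ~ 92 weeks, NFM ~ 9 kg, both assumed) vs an adult.
print("infant CL:", round(clearance(9.0, 92.0), 2), "L/h")
print("adult  CL:", round(clearance(62.0, 40 + 30 * 52), 2), "L/h")
print("maturation at 2.5 y post-natal:",
      round(clearance(62.0, 40 + 2.5 * 52) / clearance(62.0, 10_000), 3))
```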

  18. Far beyond stacking: Fully Bayesian constraints on sub-microJy radio source populations over the XMM-LSS-VIDEO field

    CERN Document Server

    Zwart, Jonathan T L; Jarvis, Matt J

    2015-01-01

    Measuring radio source counts is critical for characterizing new extragalactic populations, brings a wealth of science within reach and will inform forecasts for SKA and its pathfinders. Yet there is currently great debate (and few measurements) about the behaviour of the 1.4-GHz counts in the microJy regime. One way to push the counts to these levels is via 'stacking', the covariance of a map with a catalogue at higher resolution and (often) a different wavelength. For the first time, we cast stacking in a fully Bayesian framework, applying it to (i) the SKADS simulation and (ii) VLA data stacked at the positions of sources from the VIDEO survey. In the former case, the algorithm recovers the counts correctly when applied to the catalogue, but is biased high when confusion comes into play. This needs to be accounted for in the analysis of data from any relatively-low-resolution SKA pathfinders. For the latter case, the observed radio source counts remain flat below the 5-sigma level of 85 microJy as far as 4...

  19. Modular VLSI Reed-Solomon Decoder

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie

    1991-01-01

    Proposed Reed-Solomon decoder contains multiple very-large-scale integrated (VLSI) circuit chips of same type. Each chip contains sets of logic cells and subcells performing functions from all stages of decoding process. Full decoder assembled by concatenating chips, with selective utilization of cells in particular chips. Cost of development reduced by factor of 5. In addition, decoder programmable in field and switched between 8-bit and 10-bit symbol sizes.

  20. Interference Decoding for Deterministic Channels

    CERN Document Server

    Bandemer, Bernd

    2010-01-01

    An inner bound to the capacity region of a class of three user pair deterministic interference channels is presented. The key idea is to simultaneously decode the combined interference signal and the intended message at each receiver. It is shown that this interference decoding inner bound is strictly larger than the inner bound obtained by treating interference as noise, which includes interference alignment for deterministic channels. The gain comes from judicious analysis of the number of combined interference sequences in different regimes of input distributions and message rates.

  1. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    ...sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGAs. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...

  2. TURBO DECODER USING LOCAL SUBSIDIARY MAXIMUM LIKELIHOOD DECODING IN PRIOR ESTIMATION OF THE EXTRINSIC INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Yang Fengfan

    2004-01-01

    A new technique for turbo decoders is proposed, using a local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that the iterative decoder with the new technique outperforms a decoder using the traditional Gaussian approach for the extrinsic information under the same conditions.

  3. SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Ming HAN; Yuanyao DING

    2004-01-01

    This paper develops a new method of parametric estimation, named the 'synthesized expected Bayesian method'. When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of failure probability and failure rate are provided. After some failure information is introduced via an extra test, a synthesized expected Bayesian method is defined and used to estimate failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
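
    The zero-failure setting makes the expected-Bayesian idea concrete: average the Beta-posterior mean over a hyperprior on the prior parameter. A minimal sketch assuming a Beta(1, b) prior with b ~ Uniform(0, c), a common choice in this literature; the paper's exact hyperprior may differ.

```python
import math

def e_bayes_failure_prob(n_tested, c=5.0):
    """Expected-Bayesian estimate of failure probability after n zero-failure tests.

    With a Beta(1, b) prior (assumed) the posterior mean for fixed b is
    1 / (1 + b + n); averaging over b ~ Uniform(0, c) gives a closed form.
    """
    return math.log((1 + n_tested + c) / (1 + n_tested)) / c

# Numerical check: average the fixed-b posterior mean over a fine grid of b.
n, c, steps = 20, 5.0, 100_000
avg = sum(1.0 / (1 + (i + 0.5) * c / steps + n) for i in range(steps)) / steps
print("closed form :", e_bayes_failure_prob(n, c))
print("numeric avg :", avg)
```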

  4. Bayesian demography 250 years after Bayes.

    Science.gov (United States)

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms.

  5. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  6. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean...

  7. Decoding intention at sensorimotor timescales.

    Directory of Open Access Journals (Sweden)

    Mathew Salvaris

    Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during the development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds, much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.

  8. Behavioral approach to list decoding

    NARCIS (Netherlands)

    Polderman, Jan Willem; Kuijper, Margreta

    2002-01-01

    List decoding may be translated into a bivariate interpolation problem. The interpolation problem is to find a bivariate polynomial of minimal weighted degree that interpolates a given set of pairs taken from a finite field. We present a behavioral approach to this interpolation problem. With the da...

  9. The effects of sampling strategy on the quality of reconstruction of viral population dynamics using Bayesian skyline family coalescent methods: A simulation study.

    Science.gov (United States)

    Hall, Matthew D; Woolhouse, Mark E J; Rambaut, Andrew

    2016-01-01

    The ongoing large-scale increase in the total amount of genetic data for viruses and other pathogens has led to a situation in which it is often not possible to include every available sequence in a phylogenetic analysis and expect the procedure to complete in reasonable computational time. This raises questions about how a set of sequences should be selected for analysis, particularly if the data are used to infer more than just the phylogenetic tree itself. The design of sampling strategies for molecular epidemiology has been a neglected field of research. This article describes a large-scale simulation exercise that was undertaken to select an appropriate strategy when using the GMRF skygrid, one of the Bayesian skyline family of coalescent methods, in order to reconstruct past population dynamics. The simulated scenarios were intended to represent sampling for the population of an endemic virus across multiple geographical locations. Large phylogenies were simulated under a coalescent or structured coalescent model and sequences simulated from these trees; the resulting datasets were then downsampled for analyses according to a variety of schemes. Variation in results between different replicates of the same scheme was not insignificant, and as a result, we recommend that where possible analyses are repeated with different datasets in order to establish that elements of a reconstruction are not simply the result of the particular set of samples selected. We show that an individual stochastic choice of sequences can introduce spurious behaviour in the median line of the skygrid plot and that even marginal likelihood estimation can suggest complicated dynamics that were not in fact present. We recommend that the median line should not be used to infer historical events on its own. Sampling sequences with uniform probability with respect to both time and spatial location (deme) never performed worse than sampling with probability proportional to the effective...
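
    The recommended scheme (sampling with uniform probability over both time and deme) can be sketched as a round-robin draw over (deme, year) cells. The record fields and the toy dataset below are hypothetical.

```python
import random
from collections import defaultdict

def uniform_subsample(records, n_target, seed=0):
    """Pick ~n_target sequences uniformly over (deme, year) cells.

    `records` is a list of dicts with hypothetical keys "id", "deme", "year";
    cells are visited round-robin so no deme or year dominates the subsample."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for r in records:
        cells[(r["deme"], r["year"])].append(r)
    for members in cells.values():
        rng.shuffle(members)
    chosen, pools = [], list(cells.values())
    while len(chosen) < n_target and any(pools):
        for members in pools:
            if members and len(chosen) < n_target:
                chosen.append(members.pop())
    return chosen

# Toy usage: 3 demes sampled very unevenly over 10 years.
data = [{"id": i, "deme": d, "year": 2000 + i % 10}
        for i, d in enumerate(["A"] * 300 + ["B"] * 60 + ["C"] * 40)]
picked = uniform_subsample(data, 60)
print({d: sum(p["deme"] == d for p in picked) for d in "ABC"})
```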

  10. Projection of Diabetes Population Size and Associated Economic Burden through 2030 in Iran: Evidence from Micro-Simulation Markov Model and Bayesian Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Mehdi Javanbakht

    Full Text Available The aim of this study was to estimate the economic burden of diabetes mellitus (DM) in Iran from 2009 to 2030. A Markov micro-simulation (MM) model was developed to predict the DM population size and associated economic burden. Age- and sex-specific prevalence and incidence of diagnosed and undiagnosed DM were derived from national health surveys. A systematic review was performed to identify the cost of diabetes in Iran, and the mean annual direct and indirect costs of patients with DM were estimated using a random-effect Bayesian meta-analysis. Face, internal, cross and predictive validity of the MM model were assessed by consulting an expert group, performing sensitivity analysis (SA) and comparing model results with published literature and national survey reports. Sensitivity analysis was also performed to explore the effect of uncertainty in the model. We estimated 3.78 million cases of DM (2.74 million diagnosed and 1.04 million undiagnosed) in Iran in 2009. This number is expected to rise to 9.24 million cases (6.73 million diagnosed and 2.50 million undiagnosed) by 2030. The mean annual direct and indirect costs of patients with DM in 2009 were US$556 (posterior standard deviation, 221) and US$689 (619), respectively. The total estimated annual cost of DM was $3.64 billion (2009 US$; including US$1.71 billion direct and US$1.93 billion indirect costs) in 2009 and is predicted to increase to $9.0 billion (2009 US$; including US$4.2 billion direct and US$4.8 billion indirect costs) by 2030. The economic burden of DM in Iran is predicted to increase markedly in the coming decades. Identification and implementation of effective strategies to prevent and manage DM should be considered a public health priority.

  11. GENETIC ALGORITHM FOR DECODING LINEAR CODES OVER AWGN AND FADING CHANNELS

    Directory of Open Access Journals (Sweden)

    H. BERBIA

    2011-08-01

    Full Text Available This paper introduces a decoder for binary linear codes based on a Genetic Algorithm (GA) over Gaussian and Rayleigh flat fading channels. The performance and computational complexity of our decoder applied to BCH and convolutional codes compare well with the Chase-2 and Viterbi algorithms, respectively. The results show that our algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the number of individuals per population and the number of generations.
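
    To make the approach concrete, here is a minimal Python sketch of GA-based soft-decision decoding, using a small (7,4) Hamming code rather than the BCH and convolutional codes of the paper; the population size, mutation rate, and fitness function are illustrative assumptions.

      import random

      G = [[1,0,0,0,1,1,0],
           [0,1,0,0,0,1,1],
           [0,0,1,0,1,1,1],
           [0,0,0,1,1,0,1]]  # generator matrix of a (7,4) Hamming code

      def encode(info):
          return [sum(i * g for i, g in zip(info, col)) % 2
                  for col in zip(*G)]

      def fitness(info, received):
          # Negative squared Euclidean distance between the BPSK-modulated
          # codeword (0 -> +1, 1 -> -1) and the received soft values.
          cw = encode(info)
          return -sum((r - (1 - 2 * b)) ** 2 for r, b in zip(received, cw))

      def ga_decode(received, pop_size=20, generations=30, p_mut=0.05, seed=0):
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda ind: fitness(ind, received), reverse=True)
              elite = pop[:pop_size // 2]          # keep the fittest half
              children = []
              while len(elite) + len(children) < pop_size:
                  a, b = rng.sample(elite, 2)
                  cut = rng.randrange(1, 4)        # one-point crossover
                  child = a[:cut] + b[cut:]
                  children.append([bit ^ (rng.random() < p_mut) for bit in child])
              pop = elite + children
          return max(pop, key=lambda ind: fitness(ind, received))

      # Noisy BPSK observation of the all-zero codeword (+1 everywhere):
      rx = [1.1, 0.8, -0.2, 0.9, 1.2, 0.7, 1.0]
      print(ga_decode(rx))  # should recover [0, 0, 0, 0]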

  12. Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices.

    Science.gov (United States)

    Schaffelhofer, Stefan; Agudelo-Toro, Andres; Scherberger, Hansjörg

    2015-01-21

    Despite recent advances in decoding cortical activity for motor control, the development of hand prosthetics remains a major challenge. To reduce the complexity of such applications, higher cortical areas that also represent motor plans rather than just the individual movements might be advantageous. We investigated the decoding of many grip types using spiking activity from the anterior intraparietal (AIP), ventral premotor (F5), and primary motor (M1) cortices. Two rhesus monkeys were trained to grasp 50 objects in a delayed task while hand kinematics and spiking activity from six implanted electrode arrays (total of 192 electrodes) were recorded. Offline, we determined 20 grip types from the kinematic data and decoded these hand configurations and the grasped objects with a simple Bayesian classifier. When decoding from AIP, F5, and M1 combined, the mean accuracy was 50% (using planning activity) and 62% (during motor execution) for predicting the 50 objects (chance level, 2%) and substantially larger when predicting the 20 grip types (planning, 74%; execution, 86%; chance level, 5%). When decoding from individual arrays, objects and grip types could be predicted well during movement planning from AIP (medial array) and F5 (lateral array), whereas M1 predictions were poor. In contrast, predictions during movement execution were best from M1, whereas F5 performed only slightly worse. These results demonstrate for the first time that a large number of grip types can be decoded from higher cortical areas during movement preparation and execution, which could be relevant for future neuroprosthetic devices that decode motor plans.
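
    The record does not specify the "simple Bayesian classifier" in detail; the following is a minimal Gaussian naive Bayes sketch in Python of how such a decoder could map trial-by-unit firing rates to grip types. The synthetic data and the Gaussian likelihood are illustrative assumptions.

      import numpy as np

      def fit(X, y):
          # Per-class mean, variance, and prior from the training trials.
          return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-6,
                      np.mean(y == c)) for c in np.unique(y)}

      def predict(stats, X):
          scores, labels = [], []
          for c, (mu, var, prior) in stats.items():
              # Gaussian log-likelihood per trial, plus the log prior.
              ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
              scores.append(ll.sum(1) + np.log(prior))
              labels.append(c)
          return np.array(labels)[np.stack(scores).argmax(0)]

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 50))          # 200 trials x 50 units
      y = rng.integers(0, 20, size=200)       # 20 grip types
      model = fit(X, y)
      # Resubstitution accuracy; on random data, held-out accuracy would
      # sit at the 5% chance level quoted in the abstract.
      print((predict(model, X) == y).mean())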

  13. Multichannel Error Correction Code Decoder

    Science.gov (United States)

    1996-01-01

    NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerance issues that would have to be addressed in an operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  14. Fingerprinting with Minimum Distance Decoding

    CERN Document Server

    Lin, Shih-Chun; Gamal, Hesham El

    2007-01-01

    This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate 1-H(0.25) ≈ 0.188 is achievable using an ...

  15. Ensemble Fractional Sensitivity: A Quantitative Approach to Neuron Selection for Decoding Motor Tasks

    Directory of Open Access Journals (Sweden)

    Girish Singhal

    2010-01-01

    Full Text Available A robust method to help identify the population of neurons used for decoding motor tasks is developed. We use sensitivity analysis to develop a new metric for quantifying the relative contribution of a neuron towards the decoded output, called “fractional sensitivity.” Previous model-based approaches for neuron ranking have been shown to largely depend on the collection of training data. We suggest the use of an ensemble of models that are trained on random subsets of trials to rank neurons. For this work, we tested a decoding algorithm on neuronal data recorded from two male rhesus monkeys while they performed a reach to grasp a bar at three orientations (45°, 90°, or 135°). An ensemble approach led to a statistically significant increase of 5% in decoding accuracy and a 25% increase in identification accuracy of simulated noisy neurons, when compared to a single model. Furthermore, ranking neurons based on the ensemble fractional sensitivities resulted in decoding accuracies 10%-20% greater than when randomly selecting neurons or ranking based on firing rates alone. By systematically reducing the size of the input space, we determine the optimal number of neurons needed for decoding the motor output. This selection approach has practical benefits for other BMI applications where a limited number of electrodes and training datasets are available, but high decoding accuracies are desirable.
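
    A minimal Python sketch of the ensemble idea: fit many decoders on random subsets of trials and average each neuron's normalized contribution. The least-squares decoder and the synthetic data are illustrative stand-ins for the paper's decoding algorithm and recordings.

      import numpy as np

      def fractional_sensitivity(X, y, n_models=50, frac=0.7, seed=0):
          rng = np.random.default_rng(seed)
          n_trials, n_neurons = X.shape
          sens = np.zeros(n_neurons)
          for _ in range(n_models):
              # Fit a linear decoder on a random subset of trials.
              idx = rng.choice(n_trials, int(frac * n_trials), replace=False)
              w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
              # Each model contributes its normalized weight magnitudes.
              sens += np.abs(w) / np.abs(w).sum()
          return sens / n_models

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 40))             # 300 trials x 40 neurons
      w_true = np.zeros(40); w_true[:5] = 2.0    # only 5 informative neurons
      y = X @ w_true + rng.normal(size=300)
      ranking = np.argsort(fractional_sensitivity(X, y))[::-1]
      print(ranking[:5])                         # informative neurons rank first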

  16. Towards joint decoding of Tardos fingerprinting codes

    CERN Document Server

    Meerwald, Peter

    2011-01-01

    The class of joint decoders of probabilistic fingerprinting codes is of utmost importance in theoretical papers to establish the concept of fingerprint capacity. However, no implementation supporting a large user base is known to date. This paper presents an iterative decoder which is, as far as we are aware, the first practical attempt towards joint decoding. The discriminative feature of the scores benefits, on the one hand, from the side information of previously accused users and, on the other hand, from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes precise assumptions about the collusion (size or strategy). The extension to incorporate soft outputs from the watermarking layer is straightforward. Intensive experimental work demonstrates the very good performance and offers a clear comparison with previous state-of-the-art decoders.

  17. LDPC Decoding on GPU for Mobile Device

    Directory of Open Access Journals (Sweden)

    Yiqin Lu

    2016-01-01

    Full Text Available A flexible software LDPC decoder that exploits data parallelism for simultaneously decoding multiple codewords on a mobile device is proposed in this paper, supported by multithreading on OpenCL-based graphics processing units. By dividing the check matrix into several parts to make full use of both the local memory and private memory on the GPU, and by properly adjusting the code capacity each time, our implementation on a mobile phone achieves throughputs above 100 Mbps with a decoding delay of less than 1.6 milliseconds, which makes high-speed communication such as video calling possible. To realize efficient software LDPC decoding on a mobile device, the LDPC decoding feature on the communication baseband chip could be replaced, saving cost and making it easier to upgrade the decoder to be compatible with a variety of channel access schemes.

  18. Just-in-time adaptive decoder engine: a universal video decoder based on MPEG RVC

    OpenAIRE

    Gorin, Jérôme; Yviquel, Hervé; Prêteux, Françoise; Raulet, Mickaël

    2011-01-01

    In this paper, we introduce the Just-In-Time Adaptive Decoder Engine (Jade) project, which is shipped as part of the Open RVC-CAL Compiler (Orcc) project. Orcc provides a set of open-source software tools for managing decoders standardized within MPEG by the Reconfigurable Video Coding (RVC) experts. In this framework, Jade acts as a Virtual Machine for any decoder description that uses the MPEG RVC paradigm. Jade dynamically generates a native decoder representation s...

  19. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...

  20. Improved decoding for a concatenated coding system

    OpenAIRE

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, onl...

  1. Analysis of peeling decoder for MET ensembles

    CERN Document Server

    Hinton, Ryan

    2009-01-01

    The peeling decoder introduced by Luby et al. allows analysis of LDPC decoding for the binary erasure channel (BEC). For irregular ensembles, they analyze the decoder state as a Markov process and present a solution to the differential equations describing the process mean. Multi-edge type (MET) ensembles allow greater precision through specifying graph connectivity. We generalize the peeling decoder for MET ensembles and derive analogous differential equations. We offer a new change of variables and solution to the node fraction evolutions in the general (MET) case. This result is preparatory to investigating finite-length ensemble behavior.

  2. Identification of reciprocal causality between non-alcoholic fatty liver disease and metabolic syndrome by a simplified Bayesian network in a Chinese population

    Science.gov (United States)

    Zhang, Yongyuan; Zhang, Tao; Zhang, Chengqi; Tang, Fang; Zhong, Nvjuan; Li, Hongkai; Song, Xinhong; Lin, Haiyan; Liu, Yanxun; Xue, Fuzhong

    2015-01-01

    Objectives It remains unclear whether non-alcoholic fatty liver disease (NAFLD) is a cause or a consequence of metabolic syndrome (MetS). We proposed a simplified Bayesian network (BN) and attempted to confirm their reciprocal causality. Setting Bidirectional longitudinal cohorts (subcohorts A and B) were designed and followed up from 2005 to 2011 based on a large-scale health check-up in a Chinese population. Participants Subcohort A (from NAFLD to MetS, n=8426) included the participants with or without NAFLD at baseline to follow-up the incidence of MetS, while subcohort B (from MetS to NAFLD, n=16 110) included the participants with or without MetS at baseline to follow-up the incidence of NAFLD. Results Incidence densities were 2.47 and 17.39 per 100 person-years in subcohorts A and B, respectively. Generalised estimating equation analyses demonstrated that NAFLD was a potential causal factor for MetS (relative risk, RR, 95% CI 5.23, 3.50 to 7.81), while MetS was also a factor for NAFLD (2.55, 2.23 to 2.92). A BN with 5 simplification strategies was used for the reciprocal causal inference. The BN's causal inference illustrated that the total effect of NAFLD on MetS (attributable risks, AR%) was 2.49%, while it was 19.92% for MetS on NAFLD. The total effect of NAFLD on MetS components was different, with dyslipidemia having the greatest (AR%, 10.15%), followed by obesity (7.63%), diabetes (3.90%) and hypertension (3.51%). Similar patterns were inferred for MetS components on NAFLD, with obesity having the greatest (16.37%) effect, followed by diabetes (10.85%), dyslipidemia (10.74%) and hypertension (7.36%). Furthermore, the most important causal pathway from NAFLD to MetS was that NAFLD led to elevated GGT, then to MetS components, while the dominant causal pathway from MetS to NAFLD began with dyslipidaemia. Conclusions The findings suggest a reciprocal causality between NAFLD and MetS, and the effect of MetS on NAFLD is significantly greater than that of

  3. Comparison of linear mixed model analysis and genealogy-based haplotype clustering with a Bayesian approach for association mapping in a pedigreed population

    DEFF Research Database (Denmark)

    Dashab, Golam Reza; Kadri, Naveen Kumar; Mahdi Shariati, Mohammad;

    2012-01-01

    1) Mixed model analysis (MMA), 2) Random haplotype model (RHM), 3) Genealogy-based mixed model (GENMIX), and 4) Bayesian variable selection (BVS). The data consisted of phenotypes of 2000 animals from 20 sire families and were genotyped with 9990 SNPs on five chromosomes. Results: Out of the eight...

  4. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  5. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  6. Application of Beyond Bound Decoding for High Speed Optical Communications

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José;

    2013-01-01

    This paper studies the application of the beyond bound decoding method for high speed optical communications. This hard-decision decoding method outperforms the traditional minimum distance decoding method, with a total net coding gain of 10.36 dB.

  7. Statistical coding and decoding of heartbeat intervals.

    Directory of Open Access Journals (Sweden)

    Fausto Lucena

    Full Text Available The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  8. Applied Bayesian Hierarchical Methods

    CERN Document Server

    Congdon, Peter D

    2010-01-01

    Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.

  9. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy gives a good agreement among all the delay aspects in all situations and outperforms the sum decoding delay policy in effectively minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
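
    The packet-selection step above reduces to a maximum weight clique problem, which the authors approach greedily; the following is a minimal Python sketch of such a greedy selection. The toy graph and weights are schematic: in the paper, vertices encode receiver/packet combinations and weights derive from the decoding-delay increment probabilities.

      import random

      def greedy_max_weight_clique(vertices, weight, adjacent):
          """Greedily grow a clique, always adding the heaviest vertex
          still compatible with everything chosen so far."""
          candidates = set(vertices)
          clique = []
          while candidates:
              v = max(candidates, key=weight)
              clique.append(v)
              candidates = {u for u in candidates if u != v and adjacent(u, v)}
          return clique

      # Toy instance: vertices are (receiver, packet) pairs; here two
      # vertices are compatible when they share the packet (real IDNC
      # compatibility rules are richer, this is only for illustration).
      verts = [(r, p) for r in range(3) for p in "abc"]
      rng = random.Random(0)
      weights = dict(zip(verts, rng.sample(range(1, 100), len(verts))))
      adj = lambda u, v: u[1] == v[1] and u[0] != v[0]
      print(greedy_max_weight_clique(verts, weights.__getitem__, adj))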

  10. Decoding Generalized Reed-Solomon Codes and Its Application to RLCE Encryption Schemes

    OpenAIRE

    Wang, Yongge

    2017-01-01

    This paper presents a survey on generalized Reed-Solomon codes and various decoding algorithms: Berlekamp-Massey decoding algorithms; Berlekamp-Welch decoding algorithms; Euclidean decoding algorithms; discrete Fourier decoding algorithms; Chien's search algorithm; and Forney's algorithm.

  11. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
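
    The core recipe is easy to sketch. Below is a minimal ABC rejection sampler in Python for a toy problem (inferring the mean of a Gaussian from the sample mean); the prior, summary statistic, and tolerance are illustrative choices.

      import random

      def abc_rejection(observed_mean, n_obs, n_draws=20_000, tol=0.05, seed=0):
          rng = random.Random(seed)
          accepted = []
          for _ in range(n_draws):
              theta = rng.uniform(-5, 5)                 # draw from the prior
              sim = [rng.gauss(theta, 1.0) for _ in range(n_obs)]
              # Keep the draw if the simulated summary statistic (here the
              # mean) is close enough to the observed one -- no likelihood
              # evaluation is ever needed.
              if abs(sum(sim) / n_obs - observed_mean) < tol:
                  accepted.append(theta)
          return accepted

      post = abc_rejection(observed_mean=1.3, n_obs=50)
      print(len(post), sum(post) / len(post))  # approximate posterior mean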

  12. Decoder for Nonbinary CWS Quantum Codes

    CERN Document Server

    Melo, Nolmar; Portugal, Renato

    2012-01-01

    We present a decoder for nonbinary CWS quantum codes using the structure of union codes. The decoder runs in two steps: first we use a union of stabilizer codes to detect a sequence of errors, and second we build a new code, called the union code, that allows the errors to be corrected.

  13. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear

  14. Design Space of Flexible Multigigabit LDPC Decoders

    Directory of Open Access Journals (Sweden)

    Philipp Schläfer

    2012-01-01

    Full Text Available Multigigabit LDPC decoders are demanded by standards like IEEE 802.15.3c and IEEE 802.11ad. To achieve the high throughput while supporting the needed flexibility, sophisticated architectures are mandatory. This paper comprehensively presents the design space for flexible multigigabit LDPC applications for the first time. The influence of various design parameters on the hardware is investigated in depth. Two new decoder architectures in a 65 nm CMOS technology are presented to further explore the design space. In the past, memory domination was the bottleneck for throughputs of up to 1 Gbit/s. Our systematic investigation of column- versus row-based partially parallel decoders shows that this is no longer a bottleneck for multigigabit architectures. The evolutionary progress in flexible multigigabit LDPC decoder design is highlighted in an extensive comparison of state-of-the-art decoders.

  15. Turbo decoding using two soft output values

    Institute of Scientific and Technical Information of China (English)

    李建平; 潘申富; 梁庆林

    2004-01-01

    It is well known that turbo decoding always begins from the first component decoder and assumes that the a priori information is "0" at the first iterative decoding. By alternatively starting decoding at the two component decoders, we can obtain two soft output values for the received observation of an input bit. Two soft output values contain more extrinsic information than the single output value obtained in the conventional scheme, since different starting points of decoding result in different combinations of the a priori information and the input codewords with different symbol orders due to the permutation of an interleaver. By summing the two soft output values for every bit before making hard decisions, we can correct more errors because the two values complement each other. Consequently, turbo codes can achieve better error-correcting performance in this way. Simulation results show that the performance of turbo codes using the proposed decoding scheme improves steadily with increasing SNR compared to the conventional scheme. When the bit error probability is 10^-5, the proposed scheme achieves about 0.5 dB of asymptotic coding gain under the given simulation conditions.
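
    The combining rule amounts to summing the two per-bit soft outputs before the hard decision, as in this minimal Python sketch (the LLR arrays stand in for real component-decoder output; positive values favour a '0' bit):

      def combine_and_decide(llr_start1, llr_start2):
          """Each input: per-bit log-likelihood ratios from one decoding
          run, started at a different component decoder."""
          return [0 if l1 + l2 > 0 else 1
                  for l1, l2 in zip(llr_start1, llr_start2)]

      # One run is confident bit 2 is '1', the other weakly disagrees;
      # the summed evidence resolves the conflict.
      print(combine_and_decide([2.1, -0.4, -1.8], [1.7, -0.9, 0.6]))
      # -> [0, 1, 1]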

  16. Application of RS Codes in Decoding QR Code

    Institute of Scientific and Technical Information of China (English)

    Zhu Suxia(朱素霞); Ji Zhenzhou; Cao Zhiyan

    2003-01-01

    The QR Code is a 2-dimensional matrix code with high error correction capability. It employs RS codes to generate error correction codewords in encoding and to recover from errors and damage in decoding. This paper presents several virtues of the QR Code, analyzes the RS decoding algorithm and gives a software flow chart of decoding the QR Code with the RS decoding algorithm.

  17. Three phase full wave dc motor decoder

    Science.gov (United States)

    Studer, P. A. (Inventor)

    1977-01-01

    A three-phase decoder for dc motors is disclosed which employs an extremely simple six-transistor circuit to derive six properly phased output signals for full-wave operation of dc motors. Six decoding transistors are coupled at their base-emitter junctions across a resistor network arranged in a delta configuration. Each point of the delta configuration is coupled to one of three position sensors which sense the rotational position of the motor. A second embodiment of the invention is disclosed in which photo-optical isolators are used in place of the decoding transistors.

  18. An Encoder/Decoder Scheme of OCDMA Based on Waveguide

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new encoder/decoder scheme for OCDMA based on waveguides is proposed in this paper. The principle as well as the structure of the waveguide encoder/decoder is given. It can be seen that an all-optical OCDMA encoder/decoder can be realized by the proposed waveguide scheme. It also allows the OCDMA encoder/decoder to be integrated easily and access to be controlled easily. The system based on this scheme can work under entirely asynchronous conditions.

  19. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fo...

  20. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions.Previous studies have often followed up this phenomenon by examining the effect of the observers’ age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fol...

  1. Bayesian Games with Intentions

    Directory of Open Access Journals (Sweden)

    Adam Bjorndahl

    2016-06-01

    Full Text Available We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  2. Facial age affects emotional expression decoding

    Directory of Open Access Journals (Sweden)

    Mara eFölster

    2014-02-01

    Full Text Available Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers’ age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  3. Facial age affects emotional expression decoding.

    Science.gov (United States)

    Fölster, Mara; Hess, Ursula; Werheid, Katja

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  4. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  5. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  6. On Decoding Irregular Tanner Codes

    CERN Document Server

    Even, Guy

    2011-01-01

    We present a new combinatorial characterization for local-optimality of a codeword in irregular Tanner codes. This characterization is a generalization of [Arora, Daskalakis, Steurer; 2009] and [Vontobel; 2010]. The main novelty in this characterization is that it is based on a conical combination of subtrees in the computation trees. These subtrees may have any degree in the local-code nodes and may have any height (even greater than the girth). We prove that local-optimality in this new characterization implies Maximum-Likelihood (ML) optimality and LP-optimality. We also show that it is possible to compute efficiently a certificate for the local-optimality of a codeword given the channel output. We apply this characterization to regular Tanner codes. We prove a lower bound on the noise threshold in channels such as BSC and AWGNC. When the noise is below this lower bound, the probability that LP decoding fails diminishes doubly exponentially in the girth of the Tanner graph. We use local optimality also to ...

  7. Bayesian astrostatistics: a backward look to the future

    CERN Document Server

    Loredo, Thomas J

    2012-01-01

    This perspective chapter briefly surveys: (1) past growth in the use of Bayesian methods in astrophysics; (2) current misconceptions about both frequentist and Bayesian statistical inference that hinder wider adoption of Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian modeling as a major future direction for research in Bayesian astrostatistics, exemplified in part by presentations at the first ISI invited session on astrostatistics, commemorated in this volume. It closes with an intentionally provocative recommendation for astronomical survey data reporting, motivated by the multilevel Bayesian perspective on modeling cosmic populations: that astronomers cease producing catalogs of estimated fluxes and other source properties from surveys. Instead, summaries of likelihood functions (or marginal likelihood functions) for source properties should be reported (not posterior probability density functions), including nontrivial summaries (not simply upper limits) for candidate objects ...

  8. Sphere decoding complexity exponent for decoding full rate codes over the quasi-static MIMO channel

    CERN Document Server

    Jalden, Joakim

    2011-01-01

    In the setting of quasi-static multiple-input multiple-output (MIMO) channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic division algebra (CDA) based codes -- the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity exponent is shown to take a particularly...

  9. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  10. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  11. Word-decoding as a function of temporal processing in the visual system.

    Directory of Open Access Journals (Sweden)

    Steven R Holloway

    Full Text Available This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits by a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that consisted of Landolt C targets randomly presented in four cardinal orientations, at 3-radial distances from a focus point, for eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggests that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.

  12. Word-decoding as a function of temporal processing in the visual system.

    Science.gov (United States)

    Holloway, Steven R; Náñez, José E; Seitz, Aaron R

    2013-01-01

    This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits by a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that consisted of Landolt C targets randomly presented in four cardinal orientations, at 3-radial distances from a focus point, for eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggests that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.

  13. FPGA implementation of low complexity LDPC iterative decoder

    Science.gov (United States)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.
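
    For reference, here is a compact Python sketch of the min-sum message-passing algorithm mentioned above, on a toy parity-check matrix with a flooding schedule and no scaling or offset corrections (the simplifications used in the article differ in detail):

      import numpy as np

      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 0, 0, 1, 1],
                    [0, 0, 1, 1, 0, 1]])

      def min_sum_decode(llr, H, max_iter=20):
          m, n = H.shape
          msg_cv = np.zeros((m, n))                   # check -> variable msgs
          for _ in range(max_iter):
              total = llr + msg_cv.sum(axis=0)        # posterior LLR per bit
              hard = (total < 0).astype(int)
              if not np.any((H @ hard) % 2):          # all parity checks pass
                  break
              for c in range(m):
                  idx = np.flatnonzero(H[c])
                  v = total[idx] - msg_cv[c, idx]     # extrinsic var -> check
                  for p, j in enumerate(idx):
                      others = np.delete(v, p)
                      # Min-sum update: sign product times minimum magnitude.
                      msg_cv[c, j] = np.prod(np.sign(others)) * np.abs(others).min()
          return ((llr + msg_cv.sum(axis=0)) < 0).astype(int)

      llr = np.array([2.5, 1.8, -0.7, 2.2, 1.9, 2.0])  # all-zero word, bit 2 hit
      print(min_sum_decode(llr, H))                    # -> [0 0 0 0 0 0]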

  14. SLUG -- Stochastically Lighting Up Galaxies. III: A Suite of Tools for Simulated Photometry, Spectroscopy, and Bayesian Inference with Stochastic Stellar Populations

    CERN Document Server

    Krumholz, Mark R; da Silva, Robert L; Rendahl, Theodore; Parra, Jonathan

    2015-01-01

    Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include: a greatly-enhanced version of the slug code introduced by da Silva et al. (2012), which computes spectra and photometry for stochastically- or deterministically-sampled stellar populations with nearly-arbitrary star formation histories, clustering properties, and initial mass functions; cloudy_slug, a tool that automatically couples slug-computed spectra with the cloudy r...

  15. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  16. Online Testable Decoder using Reversible Logic

    Directory of Open Access Journals (Sweden)

    Hemalatha. K. N. Manjula B. B. Girija. S

    2012-02-01

    Full Text Available The project proposes to design and test a 2-to-4 reversible decoder circuit with an arbitrary number of gates, converting it into an online testable reversible one; the approach is independent of the type of reversible gate used. The constructed circuit can detect any single-bit error, and the method can convert any decoder circuit designed with reversible gates into an online testable reversible decoder circuit. Conventional digital circuits dissipate a significant amount of energy because bits of information are erased during logic operations. Thus, if logic gates are designed such that the information bits are not destroyed, the power consumption can be reduced. The information bits are not lost in a reversible computation. Reversible logic can be used to implement any Boolean logic function.

  17. VLSI Design of a Turbo Decoder

    Science.gov (United States)

    Fang, Wai-Chi

    2007-01-01

    A very-large-scale-integrated-circuit (VLSI) turbo decoder has been designed to serve as a compact, high-throughput, low-power, lightweight decoder core of a receiver in a data-communication system. In a typical contemplated application, such a decoder core would be part of a single integrated circuit that would include the rest of the receiver circuitry and possibly some or all of the transmitter circuitry, all designed and fabricated together according to an advanced communication-system-on-a-chip design concept. Turbo codes are forward-error-correction (FEC) codes. Relative to older FEC codes, turbo codes enable communication at lower signal-to-noise ratios and offer greater coding gain. In addition, turbo codes can be implemented by relatively simple hardware. Therefore, turbo codes have been adopted as standard for some advanced broadband communication systems.

  18. Error Exponents of Optimum Decoding for the Interference Channel

    CERN Document Server

    Etkin, Raul; Ordentlich, Erik

    2008-01-01

    Exponential error bounds for the finite-alphabet interference channel (IFC) with two transmitter-receiver pairs are investigated under the random coding regime. Our focus is on optimum decoding, as opposed to heuristic decoding rules that have been used in previous works, like joint typicality decoding, decoding based on interference cancellation, and decoding that considers the interference as additional noise. Indeed, the fact that the actual interfering signal is a codeword and not an i.i.d. noise process complicates the application of conventional techniques to the performance analysis of the optimum decoder. Using analytical tools rooted in statistical physics, we derive a single letter expression for error exponents achievable under optimum decoding and demonstrate strict improvement over error exponents obtainable using suboptimal decoding rules, but which are amenable to more conventional analysis.

  19. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  20. Model Diagnostics for Bayesian Networks

    Science.gov (United States)

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments, primarily for learning about students' knowledge and skills. There is a lack of work on assessing the fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess the fit of simple Bayesian networks. A…

  1. Neuroprosthetic Decoder Training as Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2016-05-01

    Full Text Available Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
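
    A minimal Python sketch of the DAgger-style training loop described above: roll out the current decoder, relabel the visited states with an oracle's intended action, aggregate, and refit. The linear decoder, the synthetic neural encoding, and the cursor task are illustrative assumptions, not the paper's arm model.

      import numpy as np

      rng = np.random.default_rng(0)
      W_true = rng.normal(size=(20, 2))            # 20 neurons encode 2-D intent

      def neural_activity(intent):                 # synthetic "brain"
          return W_true @ intent + 0.3 * rng.normal(size=20)

      def oracle(pos, target):                     # intended velocity: to target
          d = target - pos
          return d / (np.linalg.norm(d) + 1e-9)

      def dagger_train(n_iters=8, steps=150):
          X, Y = [], []                            # aggregated dataset
          D = 0.01 * rng.normal(size=(2, 20))      # initial (poor) decoder
          for _ in range(n_iters):
              pos = rng.normal(size=2)
              target = 4.0 * rng.normal(size=2)
              for _ in range(steps):
                  intent = oracle(pos, target)
                  a = neural_activity(intent)
                  X.append(a)
                  Y.append(intent)                 # oracle labels visited states
                  pos = pos + 0.1 * (D @ a)        # current decoder drives cursor
              D = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0].T
          return D

      D = dagger_train()
      # The trained decoder approximately recovers an intended direction:
      print(np.round(D @ neural_activity(np.array([1.0, 0.0])), 2))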

  2. LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing

    CERN Document Server

    Dimakis, Alexandros G

    2009-01-01

    This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.

  3. Generalized Sudan's List Decoding for Order Domain Codes

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh

    2007-01-01

    We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than the original list decoding without multiplicity.

  4. Bounds on List Decoding Gabidulin Codes

    CERN Document Server

    Wachter-Zeh, Antonia

    2012-01-01

    An open question about Gabidulin codes is whether polynomial-time list decoding beyond half the minimum distance is possible or not. In this contribution, we give a lower and an upper bound on the list size, i.e., the number of codewords in a ball around the received word. The lower bound shows that if the radius of this ball is greater than the Johnson radius, this list size can be exponential and hence, no polynomial-time list decoding is possible. The upper bound on the list size uses subspace properties.

  5. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also, th...... implementation uses a number of commercially available decoders while the two others are completely new implementations aimed at ASICs, one for a data rate of 75 Mbit/s and the second for a data rate of 150 Mbit/s.

  6. MAP decoding of variable length codes over noisy channels

    Science.gov (United States)

    Yao, Lei; Cao, Lei; Chen, Chang Wen

    2005-10-01

    In this paper, we discuss the maximum a-posteriori probability (MAP) decoding of variable length codes(VLCs) and propose a novel decoding scheme for the Huffman VLC coded data in the presence of noise. First, we provide some simulation results of VLC MAP decoding and highlight some features that have not been discussed yet in existing work. We will show that the improvement of MAP decoding over the conventional VLC decoding comes mostly from the memory information in the source and give some observations regarding the advantage of soft VLC MAP decoding over hard VLC MAP decoding when AWGN channel is considered. Second, with the recognition that the difficulty in VLC MAP decoding is the lack of synchronization between the symbol sequence and the coded bit sequence, which makes the parsing from the latter to the former extremely complex, we propose a new MAP decoding algorithm by integrating the information of self-synchronization strings (SSSs), one important feature of the codeword structure, into the conventional MAP decoding. A consistent performance improvement and decoding complexity reduction over the conventional VLC MAP decoding can be achieved with the new scheme.

  7. Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks.

    Science.gov (United States)

    Bitzer, Sebastian; Kiebel, Stefan J

    2012-07-01

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, e.g. fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of RNNs may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
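
    A toy Python sketch of the recognition idea: treat an RNN as a generative model of observations and decode its hidden state online with a prediction step followed by a prediction-error correction. The fixed-gain correction below is a crude stand-in for the paper's derived Bayesian update, and all matrices are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n_hidden, n_obs = 8, 4
      W = rng.normal(scale=0.6 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
      C = rng.normal(size=(n_obs, n_hidden))       # observation matrix

      def f(x):                                    # generative RNN dynamics
          return np.tanh(W @ x)

      # Simulate a hidden trajectory and noisy observations from the model.
      x = rng.normal(size=n_hidden)
      xs, ys = [], []
      for _ in range(100):
          x = f(x)
          xs.append(x)
          ys.append(C @ x + 0.1 * rng.normal(size=n_obs))

      # Recognition: predict with the dynamics, then correct with the
      # observation prediction error (fixed gain, for illustration only).
      gain = 0.2 * C.T
      x_hat = np.zeros(n_hidden)
      errors = []
      for t, y in enumerate(ys):
          x_pred = f(x_hat)                          # prediction message
          x_hat = x_pred + gain @ (y - C @ x_pred)   # prediction-error message
          errors.append(np.linalg.norm(x_hat - xs[t]))
      print(round(errors[0], 3), "->", round(errors[-1], 3))  # tracking error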

  8. Bayesian psychometric scaling

    NARCIS (Netherlands)

    Fox, G.J.A.; Berg, van den S.M.; Veldkamp, B.P.; Irwing, P.; Booth, T.; Hughes, D.

    2015-01-01

    In educational and psychological studies, psychometric methods are involved in the measurement of constructs, and in constructing and validating measurement instruments. Assessment results are typically used to measure student proficiency levels and test characteristics. Recently, Bayesian item resp

  9. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

  10. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  11. Bayesian modeling of unknown diseases for biosurveillance.

    Science.gov (United States)

    Shen, Yanna; Cooper, Gregory F

    2009-11-14

    This paper investigates Bayesian modeling of unknown causes of events in the context of disease-outbreak detection. We introduce a Bayesian approach that models and detects both (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A key contribution of this paper is that it introduces a Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has broad applicability in medical informatics, where the space of known causes of outcomes of interest is seldom complete.
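
    The core of such joint modeling is a Bayes update over a cause set that includes an explicit "unknown" cause with a relatively non-informative prior. A minimal sketch, with invented priors and likelihoods:

    ```python
    # Put informative priors on known diseases and a small, relatively flat
    # prior on an "unknown disease" cause, then update on observed evidence.
    # All numbers are invented for illustration.
    priors = {'influenza': 0.70, 'anthrax': 0.05, 'unknown': 0.25}

    # P(observed symptom pattern | cause); the unknown cause gets a vague,
    # intermediate likelihood because we cannot characterize it precisely.
    likelihood = {'influenza': 0.02, 'anthrax': 0.30, 'unknown': 0.15}

    posterior_unnorm = {c: priors[c] * likelihood[c] for c in priors}
    z = sum(posterior_unnorm.values())
    posterior = {c: p / z for c, p in posterior_unnorm.items()}

    for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P({cause} | evidence) = {p:.3f}")
    ```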

  12. A chemical system that mimics decoding operations.

    Science.gov (United States)

    Giansante, Carlo; Ceroni, Paola; Venturi, Margherita; Sakamoto, Junji; Schlüter, A Dieter

    2009-02-23

    The chemical information stored in equilibrium mixtures of molecular species is larger than the sum of information carried by the individual molecules. Protonation equilibria in dilute dichloromethane solution of a shape-persistent macrocycle bearing two 2,2'-bipyridine units and two Coumarin 2 moieties (see figure) can be exploited to mimic decoding operations.

  13. Older Adults Have Difficulty in Decoding Sarcasm

    Science.gov (United States)

    Phillips, Louise H.; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley

    2015-01-01

    Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are…

  14. BCS-18A command decoder-selector

    Science.gov (United States)

    Laping, H.

    1980-08-01

    This report describes an 18-channel command decoder-selector which operates in conjunction with an HF command receiver to allow secure and reliable radio control of high altitude balloon payloads. A detailed technical description and test results are also included.

  15. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    achieve a high coding throughput, and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...

  16. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new...

  17. Inferring the origin of populations introduced from a genetically structured native range by approximate Bayesian computation: case study of the invasive ladybird Harmonia axyridis

    NARCIS (Netherlands)

    Lombaert, E.; Guillemaud, T.; Thomas, C.E.; Handley, L.J.L.; Li, J.; Wang, S.; Pang, H.; Goryacheva, I.; Zakharov, I.A.; Jousselin, E.; Poland, R.L.; Migeon, A.; Lenteren, van J.C.; Clercq, de P.; Berkvens, N.; Jones, W.; Estoup, A.

    2011-01-01

    Correct identification of the source population of an invasive species is a prerequisite for testing hypotheses concerning the factors responsible for biological invasions. The native area of invasive species may be large, poorly known and/or genetically structured. Because the actual source populat

  18. Unsupervised adaptation of brain machine interface decoders

    Directory of Open Access Journals (Sweden)

    Tayfun Gürel

    2012-11-01

    Full Text Available The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMIs) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by use of periodical calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI, and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be for the BMI decoder to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and nonstationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally work also with neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.

  19. Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation

    Directory of Open Access Journals (Sweden)

    Thierry Lestable

    2008-05-01

    Full Text Available This paper presents a new approach to decoding turbo codes using a nonbinary belief propagation decoder. The proposed approach can be decomposed into two main steps. First, a nonbinary Tanner graph representation of the turbo code is derived by clustering the binary parity-check matrix of the turbo code. Then, a group belief propagation decoder runs several iterations on the obtained nonbinary Tanner graph. We show in particular that it is necessary to add a preprocessing step on the parity-check matrix of the turbo code in order to ensure good topological properties of the Tanner graph, and hence good iterative decoding performance. Finally, by capitalizing on the diversity which comes from the existence of distinct efficient preprocessings, we propose a new decoding strategy, called decoder diversity, that aims to benefit from this diversity through collaborative decoding schemes.

  20. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.

  1. Bayesian Face Sketch Synthesis.

    Science.gov (United States)

    Wang, Nannan; Gao, Xinbo; Sun, Leiyu; Li, Jie

    2017-03-01

    Exemplar-based face sketch synthesis has been widely applied to both digital entertainment and law enforcement. In this paper, we propose a Bayesian framework for face sketch synthesis, which provides a systematic interpretation for understanding the common properties and intrinsic difference in different methods from the perspective of probabilistic graphical models. The proposed Bayesian framework consists of two parts: the neighbor selection model and the weight computation model. Within the proposed framework, we further propose a Bayesian face sketch synthesis method. The essential rationale behind the proposed Bayesian method is that we take the spatial neighboring constraint between adjacent image patches into consideration for both aforementioned models, while the state-of-the-art methods neglect the constraint either in the neighbor selection model or in the weight computation model. Extensive experiments on the Chinese University of Hong Kong face sketch database demonstrate that the proposed Bayesian method could achieve superior performance compared with the state-of-the-art methods in terms of both subjective perceptions and objective evaluations.

  2. Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex

    Science.gov (United States)

    Hao, Yaoyao; Zhang, Qiaosheng; Controzzi, Marco; Cipriani, Christian; Li, Yue; Li, Juncheng; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Chiara Carrozza, Maria; Zheng, Xiaoxiang

    2014-12-01

    Objective. Recent studies have shown that dorsal premotor cortex (PMd), a cortical area in the dorsomedial grasp pathway, is involved in grasp movements. However, the neural ensemble firing property of PMd during grasp movements and the extent to which it can be used for grasp decoding are still unclear. Approach. To address these issues, we used multielectrode arrays to record both spike and local field potential (LFP) signals in PMd in macaque monkeys performing reaching and grasping of one of four differently shaped objects. Main results. Single and population neuronal activity showed distinct patterns during execution of different grip types. Cluster analysis of neural ensemble signals indicated that the grasp-related patterns emerged soon (200-300 ms) after the go cue signal, and faded away during the hold period. The timing and duration of the patterns varied depending on the behavior of each individual monkey. Application of a support vector machine model to stable activity patterns revealed classification accuracies of 94% and 89% for each of the two monkeys, indicating a robust, decodable grasp pattern encoded in the PMd. Grasp decoding using LFPs, especially the high-frequency bands, also produced high decoding accuracies. Significance. This study is the first to specify the neuronal population encoding of grasp during the time course of grasp. We demonstrate high grasp decoding performance in PMd. These findings, combined with previous evidence from reach-related modulation studies, suggest that PMd may play an important role in generation and maintenance of grasp action and may be a suitable locus for brain-machine interface applications.
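
    As a hedged illustration of the decoding step reported above, the sketch below trains a support vector machine on synthetic firing-rate vectors standing in for PMd ensemble activity. scikit-learn's SVC supplies the classifier; all data, dimensions and parameters are invented.

    ```python
    # Classify grip type from neural-ensemble firing-rate vectors with an SVM.
    # Gaussian rate vectors per grasp class stand in for real recordings.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_units, trials_per_class, n_classes = 64, 50, 4   # 4 object/grip types

    # Each grasp class gets its own mean firing-rate pattern across units.
    means = rng.uniform(5, 30, size=(n_classes, n_units))
    X = np.vstack([m + 3.0 * rng.standard_normal((trials_per_class, n_units))
                   for m in means])
    y = np.repeat(np.arange(n_classes), trials_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0, stratify=y)
    clf = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X_tr, y_tr)
    print(f"grasp decoding accuracy: {clf.score(X_te, y_te):.2%}")
    ```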

  3. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
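
    A compact numerical sketch of this model class, assuming a linear observation model y = M z + n with Gaussian noise covariance C and a squared-exponential GP prior K on the LSD profile z; the Gaussian posterior mean is then (M^T C^-1 M + K^-1)^-1 M^T C^-1 y. The mask M, kernel width and noise level below are illustrative, not the paper's.

    ```python
    # Least squares deconvolution with a GP prior: posterior mean and
    # per-velocity-bin uncertainty of the profile, in plain numpy.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pix, n_vel = 400, 40            # spectrum pixels, velocity bins

    # M encodes line positions and weights (a random sparse "line mask" here).
    M = (rng.random((n_pix, n_vel)) < 0.05) * rng.uniform(0.2, 1.0, (n_pix, n_vel))

    # Squared-exponential GP prior over the velocity grid.
    v = np.linspace(-1, 1, n_vel)
    K = np.exp(-0.5 * (v[:, None] - v[None, :]) ** 2 / 0.1 ** 2)

    # Synthetic true profile and noisy observed spectrum.
    z_true = np.exp(-0.5 * (v / 0.25) ** 2)
    sigma = 0.05
    y = M @ z_true + sigma * rng.standard_normal(n_pix)

    # Gaussian posterior: mean and per-bin standard deviation of the profile.
    Cinv = np.eye(n_pix) / sigma ** 2
    A = M.T @ Cinv @ M + np.linalg.inv(K + 1e-8 * np.eye(n_vel))
    z_post = np.linalg.solve(A, M.T @ Cinv @ y)
    z_std = np.sqrt(np.diag(np.linalg.inv(A)))

    print(f"rms error of posterior mean: {np.sqrt(np.mean((z_post - z_true) ** 2)):.3f}")
    print(f"mean posterior std per bin:  {z_std.mean():.3f}")
    ```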

  4. Hybrid Batch Bayesian Optimization

    CERN Document Server

    Azimi, Javad; Fern, Xiaoli

    2012-01-01

    Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm t...

  5. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  6. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  7. Bayesian Visual Odometry

    Science.gov (United States)

    Center, Julian L.; Knuth, Kevin H.

    2011-03-01

    Visual odometry refers to tracking the motion of a body using an onboard vision system. Practical visual odometry systems combine the complementary accuracy characteristics of vision and inertial measurement units. The Mars Exploration Rovers, Spirit and Opportunity, used this type of visual odometry. The visual odometry algorithms in Spirit and Opportunity were based on Bayesian methods, but a number of simplifying approximations were needed to deal with onboard computer limitations. Furthermore, the allowable motion of the rover had to be severely limited so that computations could keep up. Recent advances in computer technology make it feasible to implement a fully Bayesian approach to visual odometry. This approach combines dense stereo vision, dense optical flow, and inertial measurements. As with all true Bayesian methods, it also determines error bars for all estimates. This approach also offers the possibility of using Micro-Electro Mechanical Systems (MEMS) inertial components, which are more economical, weigh less, and consume less power than conventional inertial components.

  8. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  9. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  10. Probabilistic Inferences in Bayesian Networks

    OpenAIRE

    Ding, Jianguo

    2010-01-01

    This chapter summarizes the popular inference methods in Bayesian networks. The results demonstrate that evidence can be propagated across a Bayesian network along any link, whether in a forward, backward or intercausal style. The belief updating of Bayesian networks can be obtained by various available inference techniques. Theoretically, exact inference in Bayesian networks is feasible and manageable. However, the computation and inference is NP-hard. That means, in applications, in ...
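
    A toy example of exact inference by enumeration in a small Bayesian network (the classic sprinkler model), showing how evidence at an observed effect propagates backward to its causes. The network and probabilities are standard textbook values, not from the chapter.

    ```python
    # P(Rain | GrassWet = True) by brute-force enumeration, summing out
    # the sprinkler variable.
    import itertools

    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: 0.1, False: 0.9}

    def P_wet(w, rain, sprinkler):
        p = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.9, (False, False): 0.01}[(rain, sprinkler)]
        return p if w else 1 - p

    num = den = 0.0
    for rain, spr in itertools.product([True, False], repeat=2):
        joint = P_rain[rain] * P_sprinkler[spr] * P_wet(True, rain, spr)
        den += joint
        if rain:
            num += joint
    print(f"P(Rain | GrassWet) = {num / den:.3f}")   # ~0.697
    ```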

  11. Hardware Implementation of Serially Concatenated PPM Decoder

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from apriori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in
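
    The log-domain trick mentioned above can be checked in a few lines: products of probabilities become sums of log-probabilities, and sums of probabilities become the "max-star" (Jacobian logarithm) operation. The snippet is a generic numeric check, independent of the SCPPM decoder itself.

    ```python
    # Log-domain arithmetic as used in BCJR/turbo-style decoders.
    import math

    def max_star(a, b):
        # log(e^a + e^b) computed stably: max plus a bounded correction term
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    p, q = 1e-25, 3e-27              # tiny probabilities, awkward in linear domain
    lp, lq = math.log(p), math.log(q)

    print(math.isclose(lp + lq, math.log(p * q)))           # product -> sum
    print(math.isclose(max_star(lp, lq), math.log(p + q)))  # sum -> max-star
    ```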

  12. Space vehicle Viterbi decoder. [data converters, algorithms

    Science.gov (United States)

    1975-01-01

    The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s are presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including an all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.

  13. Olfactory Decoding Method Using Neural Spike Signals

    Institute of Scientific and Technical Information of China (English)

    Kyung-jin YOU; Hyun-chool SHIN

    2010-01-01

    This paper presents a novel method for inferring the odor based on neural activities observed from rats' main olfactory bulbs. Multi-channel extracellular single unit recordings are done by microwire electrodes (Tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of the anesthetized rats to obtain neural responses to various odors. Neural responses as a key feature are measured by subtracting the firing rates before stimulus from those after. For odor inference, a decoding method is developed based on ML estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, and 80.0% with three rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.

  14. Bayesian multiple target tracking

    CERN Document Server

    Streit, Roy L

    2013-01-01

    This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements
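
    A minimal bootstrap particle filter of the kind the book builds on, tracking a one-dimensional state observed through a monotone nonlinearity. The model, noise levels and particle count are illustrative assumptions, not taken from the book.

    ```python
    # Bootstrap particle filter: predict, weight by the likelihood, resample.
    import numpy as np

    rng = np.random.default_rng(3)
    T, N = 60, 1000                   # time steps, particles
    q, r = 0.1, 0.2                   # process / measurement noise std

    # Ground truth: stable AR(1) state, observed through a nonlinearity.
    h = lambda x: x + 0.3 * x ** 3
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.95 * x_true[t - 1] + q * rng.standard_normal()
    z = h(x_true) + r * rng.standard_normal(T)

    particles = rng.normal(0.0, 1.0, N)   # diffuse prior over the state
    est = np.empty(T)
    for t in range(T):
        particles = 0.95 * particles + q * rng.standard_normal(N)    # predict
        w = np.exp(-0.5 * ((z[t] - h(particles)) / r) ** 2)          # weight
        w /= w.sum()
        particles = particles[rng.choice(N, N, p=w)]                 # resample
        est[t] = particles.mean()

    print(f"tracking RMSE: {np.sqrt(np.mean((est - x_true) ** 2)):.3f}")
    ```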

  15. iBOA: The Incremental Bayesian Optimization Algorithm

    CERN Document Server

    Pelikan, Martin; Goldberg, David E

    2008-01-01

    This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.

  16. Simplified Digital Subband Coders And Decoders

    Science.gov (United States)

    Glover, Daniel R.

    1994-01-01

    Simplified digital subband coders and decoders developed for use in converting digitized samples of source signals into compressed and encoded forms that maintain the integrity of source signals while enabling transmission at low data rates. Examples of coding methods used in subbands include coarse quantization in high-frequency subbands, differential coding, predictive coding, vector quantization, and entropy or statistical coding. The encoders are simpler and less expensive, and operate rapidly enough to process video signals.
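
    A sketch of the subband principle described above, assuming a two-band Haar filter pair and coarse quantization of the high-frequency band only; the filter choice and quantizer step are illustrative.

    ```python
    # Two-band subband coding: analyze, quantize the low-energy high band
    # coarsely, and reconstruct.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.05 * rng.standard_normal(64)

    # Analysis: split into low- and high-frequency Haar subbands.
    low  = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)

    # Coarse quantization of the high band only.
    step = 0.25
    high_q = step * np.round(high / step)

    # Synthesis: perfect reconstruction apart from the quantization error.
    x_rec = np.empty_like(x)
    x_rec[0::2] = (low + high_q) / np.sqrt(2)
    x_rec[1::2] = (low - high_q) / np.sqrt(2)

    snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))
    print(f"reconstruction SNR: {snr:.1f} dB")
    ```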

  17. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduction of the problem of factoring the Q-polynomial to the problem of factoring a univariate polynomial over a large finite field.

  18. Kernel Temporal Differences for Neural Decoding

    Directory of Open Access Journals (Sweden)

    Jihye Bae

    2015-01-01

    Full Text Available We study the feasibility and capability of the kernel temporal difference algorithm, KTD(λ), for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.

  19. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values result in a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR have an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided by Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.

  20. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  1. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice and freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  2. A Modified max-log-MAP Decoding Algorithm for Turbo Decoding

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Turbo decoding is iterative decoding, and the MAP algorithm is optimal in terms of performance in Turbo decoding. The log-MAP algorithm is the MAP algorithm executed in the logarithmic domain, so it is also optimal. Both the MAP and the log-MAP algorithm are complicated to implement. The max-log-MAP algorithm is derived from the log-MAP by approximation; it is simple compared with the log-MAP algorithm but suboptimal in terms of performance. A modified max-log-MAP algorithm is presented in this paper, based on the Taylor series of the logarithm and exponent. Analysis and simulation results show that the modified max-log-MAP algorithm outperforms the max-log-MAP algorithm with almost the same complexity.
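
    The gap the modified algorithm aims to close can be seen numerically: max-log-MAP replaces log(e^a + e^b) with max(a, b), while adding the correction term log(1 + exp(-|a-b|)) recovers the exact log-MAP value. The snippet below uses this standard Jacobian-logarithm correction for illustration; it is not necessarily the paper's exact Taylor-series formula.

    ```python
    # Compare exact log-MAP combining, the max-log approximation, and the
    # max-log value with the correction term, for a few branch metrics.
    import math

    def exact(a, b):        # log-MAP combining of two metrics
        return math.log(math.exp(a) + math.exp(b))

    def max_log(a, b):      # max-log-MAP approximation
        return max(a, b)

    def corrected(a, b):    # max-log plus Jacobian-logarithm correction
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    for a, b in [(0.0, 0.0), (1.0, 0.5), (3.0, -2.0)]:
        print(f"a={a:4}, b={b:4}: exact={exact(a, b):.4f}, "
              f"max-log={max_log(a, b):.4f}, corrected={corrected(a, b):.4f}")
    ```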

  3. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    Directory of Open Access Journals (Sweden)

    Jun Jin Kong

    2003-12-01

    Full Text Available We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with a register exchange path memory structure, for interleaved convolutional codes. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage element) in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with I delays, an interleaved Viterbi decoder is obtained, where I is the interleaving degree. The decoding speed of this decoder architecture is as fast as the operating clock speed. The latency of the proposed interleaved Viterbi decoder is decoding depth (DD) × interleaving degree (I) + extra delays (A), which increases linearly with the interleaving degree I.

  4. Performance Analysis of Viterbi Decoder for Wireless Applications

    Directory of Open Access Journals (Sweden)

    G. Sivasankar

    2014-07-01

    Full Text Available The Viterbi decoder is employed in wireless communication to decode convolutional codes, which are used in many robust digital communication systems. Convolutional encoding and Viterbi decoding form a powerful method for forward error correction. This paper deals with the synthesis and implementation of a Viterbi decoder with a constraint length of three as well as seven and a code rate of ½ in an FPGA (Field Programmable Gate Array). The performance of the Viterbi decoder is analyzed in terms of resource utilization. The design of the Viterbi decoder is simulated using Verilog HDL. It is synthesized and implemented using Xilinx ISE 9.1 and a Spartan 3E kit. It is compatible with many common standards such as 3GPP, IEEE 802.16 and LTE.
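
    A behavioral software model of such a decoder, assuming the common constraint-length-3, rate-1/2 code with octal generators (7, 5) and hard-decision decoding; this sketches the algorithm the FPGA implements, not the Verilog design itself.

    ```python
    # Encode with the (7, 5) convolutional code, inject one bit error, and
    # recover the message with hard-decision Viterbi decoding.
    G = [0b111, 0b101]            # generator polynomials, K = 3

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state
            out += [bin(reg & g).count('1') & 1 for g in G]
            state = reg >> 1
        return out

    def viterbi(received):
        INF = float('inf')
        metrics = [0] + [INF] * 3          # path metric per 2-bit state
        paths = [[] for _ in range(4)]
        for i in range(0, len(received), 2):
            r = received[i:i + 2]
            new_m, new_p = [INF] * 4, [None] * 4
            for s in range(4):
                if metrics[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << 2) | s
                    exp = [bin(reg & g).count('1') & 1 for g in G]
                    m = metrics[s] + sum(x != y for x, y in zip(exp, r))
                    ns = reg >> 1
                    if m < new_m[ns]:
                        new_m[ns], new_p[ns] = m, paths[s] + [b]
            metrics, paths = new_m, new_p
        return paths[min(range(4), key=lambda s: metrics[s])]

    msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # trailing zeros flush the encoder
    rx = encode(msg)
    rx[3] ^= 1                             # inject one channel bit error
    print(viterbi(rx) == msg)              # True: the error is corrected
    ```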

  5. Deciphering elapsed time and predicting action timing from neuronal population signals

    Directory of Open Access Journals (Sweden)

    Shigeru Shinomoto

    2011-06-01

    Full Text Available The proper timing of actions is necessary for the survival of animals, whether in hunting prey or escaping predators. Researchers in the field of neuroscience have begun to explore neuronal signals correlated to behavioral interval timing. Here, we attempt to decode the lapse of time from neuronal population signals recorded from the frontal cortex of monkeys performing a multiple-interval timing task. We designed a Bayesian algorithm that deciphers temporal information hidden in noisy signals dispersed within the activity of individual neurons recorded from monkeys trained to determine the passage of time before initiating an action. With this decoder, we succeeded in estimating the elapsed time with a precision of approximately 1 sec throughout the relevant behavioral period from firing rates of 25 neurons in the pre-supplementary motor area. Further, an extended algorithm makes it possible to determine the total length of the time interval required to wait in each trial. This enables observers to predict the moment at which the subject will take action from the neuronal activity in the brain. A separate population analysis reveals that the neuronal ensemble represents the lapse of time in a manner scaled relative to the scheduled interval, rather than representing it as the real physical time.
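
    A sketch of this style of Bayesian time decoding, assuming independent Poisson spiking with time-varying tuning curves and a flat prior over elapsed-time bins. The tuning curves and spike counts are simulated, and the decoder below is a simplification, not the authors' algorithm.

    ```python
    # Posterior over elapsed time from population spike counts:
    # log P(t | counts) ∝ sum_i [ k_i log λ_i(t) − λ_i(t) ] under a flat prior.
    import numpy as np

    rng = np.random.default_rng(4)
    n_neurons, n_bins = 25, 30                 # e.g. 30 bins spanning the delay
    t = np.linspace(0, 1, n_bins)

    # Each neuron's firing rate peaks at its own preferred elapsed time.
    prefs = rng.uniform(0, 1, n_neurons)
    tuning = 2 + 20 * np.exp(-0.5 * ((t[None, :] - prefs[:, None]) / 0.15) ** 2)

    true_bin = 17                              # actual elapsed-time bin
    counts = rng.poisson(tuning[:, true_bin])  # observed population activity

    log_post = (counts[:, None] * np.log(tuning) - tuning).sum(axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()

    print(f"true bin: {true_bin}, MAP bin: {post.argmax()}")
    ```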

  6. A Bayesian nonparametric meta-analysis model.

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G

    2015-03-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall effect size, such models may be adequate, but for prediction, they surely are not if the effect-size distribution exhibits non-normal behavior. To address this issue, we propose a Bayesian nonparametric meta-analysis model, which can describe a wider range of effect-size distributions, including unimodal symmetric distributions, as well as skewed and more multimodal distributions. We demonstrate our model through the analysis of real meta-analytic data arising from behavioral-genetic research. We compare the predictive performance of the Bayesian nonparametric model against various conventional and more modern normal fixed-effects and random-effects models.

  7. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time as compared to the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the erasure rate of the channel increases.

  8. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    and edges. The nodes represent variables, which may be either discrete or continuous. An edge between two nodes A and B indicates a direct influence between the state of A and the state of B, which in some domains can also be interpreted as a causal relation. The widespread use of Bayesian networks...

  9. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...

  10. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  11. Network Topologies Decoding Cervical Cancer.

    Directory of Open Access Journals (Sweden)

    Sarika Jalan

    Full Text Available According to the GLOBOCAN statistics, cervical cancer is one of the leading causes of death among women worldwide. It is found to be gradually increasing in the younger population, specifically in the developing countries. We analyzed the protein-protein interaction networks of the uterine cervix cells for the normal and disease states. It was found that the disease network was less random than the normal one, providing an insight into the change in complexity of the underlying network in the disease state. The study also showed that the disease state has faster signal processing, as the diameter of the underlying network was very close to that of its corresponding random control. This may be a reason for the normal cells to change into a malignant state. Further, the analysis revealed VEGFA and IL-6 proteins as the distinctly high degree nodes in the disease network, which are known to make a major contribution to promoting cervical cancer. Our analysis, being time proficient and cost effective, provides a direction for developing novel drugs, therapeutic targets and biomarkers by identifying specific interaction patterns that have structural importance.
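
    A toy version of this kind of network analysis, using networkx on a synthetic scale-free graph in place of the cervix PPI networks: compare the diameter against a size-matched random control and list the highest-degree hubs.

    ```python
    # Diameter vs. a degree/size-matched random control, plus hub extraction.
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 2, seed=0)     # stand-in "PPI" network
    rand = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)

    for name, g in [("network", G), ("random control", rand)]:
        # Restrict to the largest connected component before measuring.
        core = g.subgraph(max(nx.connected_components(g), key=len))
        print(f"{name}: diameter = {nx.diameter(core)}")

    hubs = sorted(G.degree, key=lambda kv: -kv[1])[:5]
    print("top-degree nodes:", hubs)
    ```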

  12. Efficient Decoding of Partial Unit Memory Codes of Arbitrary Rate

    CERN Document Server

    Wachter-Zeh, Antonia; Bossert, Martin

    2012-01-01

    Partial Unit Memory (PUM) codes are a special class of convolutional codes, which are often constructed by means of block codes. Decoding of PUM codes may take advantage of existing decoders for the block code. The Dettmar–Sorger algorithm is an efficient decoding algorithm for PUM codes, but allows only low code rates. The same restriction holds for several known PUM code constructions. In this paper, an arbitrary-rate construction, the analysis of its distance parameters and a generalized decoding algorithm for PUM codes of arbitrary rate are provided. The correctness of the algorithm is proven and it is shown that its complexity is cubic in the length.

  13. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Chen Huijun

    2008-01-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both the STTC and the VLEC to improve overall decoding performance. Issues such as the inseparable systematic information in the symbol level, the asymmetric trellis structure of the VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between the STTC and the VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher order modulation of the STTC, joint decoding yields better performance, though with increased complexity.

  14. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Huijun Chen

    2008-07-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both the STTC and the VLEC to improve overall decoding performance. Issues such as the inseparable systematic information in the symbol level, the asymmetric trellis structure of the VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between the STTC and the VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher order modulation of the STTC, joint decoding yields better performance, though with increased complexity.

  15. A positive detecting code and its decoding algorithm for DNA library screening.

    Science.gov (United States)

    Uehara, Hiroaki; Jimbo, Masakazu

    2009-01-01

    The study of gene functions requires high-quality DNA libraries. However, a large number of tests and screenings are necessary for compiling such libraries. We describe an algorithm for extracting as much information as possible from pooling experiments for library screening. Collections of clones are called pools, and a pooling experiment is a group test for detecting all positive clones. The probability of positiveness for each clone is estimated according to the outcomes of the pooling experiments. Clones with high chance of positiveness are subjected to confirmatory testing. In this paper, we introduce a new positive clone detecting algorithm, called the Bayesian network pool result decoder (BNPD). The performance of BNPD is compared, by simulation, with that of the Markov chain pool result decoder (MCPD) proposed by Knill et al. in 1996. Moreover, the combinatorial properties of pooling designs suitable for the proposed algorithm are discussed in conjunction with combinatorial designs and d-disjunct matrices. We also show the advantage of utilizing packing designs or BIB designs for the BNPD algorithm.
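
    A minimal sketch of Bayesian decoding for pooled screening, assuming independent clone priors and noise-free group tests, with the posterior computed by brute-force enumeration. A real BNPD uses Bayesian-network belief propagation instead, and the pooling design and priors below are invented.

    ```python
    # Each clone has a prior probability of being positive; a pool tests
    # positive iff it contains at least one positive clone. Enumerate all
    # clone configurations consistent with the observed pool outcomes and
    # accumulate each clone's posterior probability of positiveness.
    import itertools

    n_clones = 6
    prior = [0.1] * n_clones
    pools = [(0, 1, 2), (2, 3, 4), (4, 5, 0), (1, 3, 5)]   # pooling design
    outcomes = [1, 1, 0, 0]     # observed pool results (1 = positive)

    posterior = [0.0] * n_clones
    total = 0.0
    for config in itertools.product([0, 1], repeat=n_clones):
        # P(configuration) under independent clone priors
        p = 1.0
        for c, bit in enumerate(config):
            p *= prior[c] if bit else 1 - prior[c]
        # keep only configurations consistent with every pool outcome
        if all(any(config[c] for c in pool) == out
               for pool, out in zip(pools, outcomes)):
            total += p
            for c, bit in enumerate(config):
                if bit:
                    posterior[c] += p

    for c in range(n_clones):
        print(f"clone {c}: P(positive | pools) = {posterior[c] / total:.3f}")
    ```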

  16. A primer on equalization, decoding and non-iterative joint equalization and decoding

    Science.gov (United States)

    Myburgh, Hermanus C.; Olivier, Jan C.

    2013-12-01

    In this article, a general model for non-iterative joint equalization and decoding is systematically derived for use in systems transmitting convolutionally encoded BPSK-modulated information through a multipath channel, with and without interleaving. Optimal equalization and decoding are discussed first, by presenting the maximum likelihood sequence estimation and maximum a posteriori probability algorithms and relating them to equalization in single-carrier channels with memory, and to the decoding of convolutional codes. The non-iterative joint equalizer/decoder (NI-JED) is then derived for the case where no interleaver is used, as well as for the case when block interleavers of varying depths are used, and complexity analyses are performed in each case. Simulations are performed to compare the performance of the NI-JED to that of a conventional turbo equalizer (CTE), and it is shown that the NI-JED outperforms the CTE, although at much higher computational cost. This article serves to explain the state of the art to students and professionals in the field of wireless communication systems, presenting these fundamental topics clearly and concisely.

  17. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde;

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...

  18. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  19. GLRT-Optimal Noncoherent Lattice Decoding

    CERN Document Server

    Ryan, Daniel J; Clarkson, I Vaughan L

    2007-01-01

    This paper presents new low-complexity lattice-decoding algorithms for noncoherent block detection of QAM and PAM signals over complex-valued fading channels. The algorithms are optimal in terms of the generalized likelihood ratio test (GLRT). The computational complexity is polynomial in the block length, making GLRT-optimal noncoherent detection feasible for implementation. We also provide even lower complexity suboptimal algorithms. Simulations show that the suboptimal algorithms have performance indistinguishable from the optimal algorithms. Finally, we consider block based transmission, and propose to use noncoherent detection as an alternative to pilot assisted transmission (PAT). The new technique is shown to outperform PAT.

  20. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  1. Bayesian community detection

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel N

    2012-01-01

    Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not exert this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.

  2. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...... in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization....

  3. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  4. Interim Manual for the DST: Decoding Skills Test.

    Science.gov (United States)

    Richardson, Ellis; And Others

    The Decoding Skills Test (DST) was developed to provide a detailed measurement of decoding skills which could be used in research on developmental dyslexia. Another purpose of the test is to provide a diagnostic-prescriptive instrument to be used in the evaluation of, and program planning for, children needing remedial reading. The test is…

  5. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding in the system of underwater image transmission is introduced, including the rapid digital frequency synthesizer in multiple frequency shift keying,image data generator, image grayscale decoder with intelligent fuzzy algorithm, image restoration and display on microcomputer.

  6. Decoding Information in the Human Hippocampus: A User's Guide

    Science.gov (United States)

    Chadwick, Martin J.; Bonnici, Heidi M.; Maguire, Eleanor A.

    2012-01-01

    Multi-voxel pattern analysis (MVPA), or "decoding", of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded,…

  7. Socialization Processes in Encoding and Decoding: Learning Effective Nonverbal Behavior.

    Science.gov (United States)

    Feldman, Robert S.; Coats, Erik

    This study examined the relationship of nonverbal encoding and decoding skills to the level of exposure to television. Subjects were children in second through sixth grade. Three nonverbal skills (decoding, spontaneous encoding, and posed encoding) were assessed for each of five emotions: anger, disgust, fear or surprise, happiness, and sadness.…

  8. Codes on the Klein quartic, ideals, and decoding

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    1987-01-01

    descriptions as left ideals in the group-algebra GF(2^{3})[G]. This description allows for easy decoding. For instance, in the case of the single error-correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra....

  9. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

    Yap, Regina; Van der Leij, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  10. A VLSI design for a trace-back Viterbi decoder

    Science.gov (United States)

    Truong, T. K.; Shih, Ming-Tang; Reed, Irving S.; Satorius, E. H.

    1992-01-01

    A systolic Viterbi decoder for convolutional codes is developed which uses the trace-back method to reduce the amount of data needed to be stored in registers. It is shown that this new algorithm requires a smaller chip size and achieves a faster decoding time than other existing methods.
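
    As a concrete companion to this record, here is a minimal software model (not the VLSI architecture) of trace-back Viterbi decoding, for the rate-1/2, constraint-length-3 code with generators (7, 5) in octal; the code choice and message are illustrative assumptions. The decision array stores one survivor bit per state per step, which is the storage saving that trace-back offers over register exchange.

        import numpy as np

        G = [0b111, 0b101]     # generator polynomials (7, 5) in octal
        N_STATES = 4           # 2^(K-1) states for constraint length K = 3

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state                       # (b, b_prev, b_prevprev)
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def viterbi_traceback(received):
            steps = len(received) // 2
            metric = np.full(N_STATES, np.inf)
            metric[0] = 0.0                                  # encoder starts in state 0
            decision = np.zeros((steps, N_STATES), dtype=np.uint8)
            for t in range(steps):
                r = received[2 * t:2 * t + 2]
                new = np.full(N_STATES, np.inf)
                for ns in range(N_STATES):
                    b = ns >> 1                              # input bit implied by new state
                    for d in (0, 1):                         # LSB of each candidate predecessor
                        s = ((ns & 1) << 1) | d
                        reg = (b << 2) | s
                        branch = sum((bin(reg & g).count("1") & 1) != r[i]
                                     for i, g in enumerate(G))
                        if metric[s] + branch < new[ns]:
                            new[ns] = metric[s] + branch
                            decision[t, ns] = d              # one survivor bit per state per step
                metric = new
            state = int(np.argmin(metric))                   # trace back from the best final state
            bits = []
            for t in range(steps - 1, -1, -1):
                bits.append(state >> 1)
                state = ((state & 1) << 1) | int(decision[t, state])
            return bits[::-1]

        msg = [1, 0, 1, 1, 0, 0]                             # two tail zeros flush the encoder
        assert viterbi_traceback(encode(msg)) == msg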

  11. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes....

  12. Decoding Technique of Concatenated Hadamard Codes and Its Performance

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The decoding technique of concatenated Hadamard codes and its performance are studied. Efficient soft-in/soft-out decoding algorithms based on the fast Hadamard transform are developed. Performance required by CDMA mobile or PCS speech services, e.g., BER = 10^-3, can be achieved at Eb/No = 0.9 dB using a short interleaving length of 192 bits.
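
    The fast-Hadamard-transform correlation at the heart of such soft-input decoders can be sketched briefly; the block length, noise level and BPSK mapping below are illustrative assumptions, not the paper's concatenated construction.

        import numpy as np

        def fht(v):
            # In-place butterflies: correlations with all Walsh rows in O(n log n).
            v = v.astype(float).copy()
            h = 1
            while h < len(v):
                for i in range(0, len(v), 2 * h):
                    for j in range(i, i + h):
                        v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
                h *= 2
            return v

        n = 8
        H = np.array([[(-1) ** bin(i & j).count("1") for j in range(n)]
                      for i in range(n)])                    # natural-order Hadamard matrix
        codeword = H[5]                                      # transmit row 5 (BPSK: +/-1)
        r = codeword + 0.5 * np.random.default_rng(1).normal(size=n)

        c = fht(r)                                           # soft correlations with every row at once
        row = int(np.argmax(np.abs(c)))
        assert row == 5 and c[row] > 0                       # a negative peak would flag the complement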

  13. FPGA Prototyping of RNN Decoder for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Salcic Zoran

    2006-01-01

    This paper presents prototyping of a recurrent-type neural network (RNN) convolutional decoder using system-level design specification and a design flow that enables easy mapping to the target FPGA architecture. Implementation and performance measurement results have shown that an RNN decoder for hard-decision decoding, coupled with a simple hard-limiting neuron activation function, results in very low complexity, which easily fits into a standard Altera FPGA. Moreover, the design methodology allowed modeling of a complete testbed for prototyping RNN decoders in simulation and in a real-time environment (the same FPGA), thus enabling evaluation of the BER performance characteristics of the decoder for various communication channel conditions in real time.

  14. Multilevel Decoders Surpassing Belief Propagation on the Binary Symmetric Channel

    CERN Document Server

    Planjery, Shiva Kumar; Chilappagari, Shashi Kiran; Vasić, Bane

    2010-01-01

    In this paper, we propose a new class of quantized message-passing decoders for LDPC codes over the BSC. The messages take values (or levels) from a finite set. The update rules do not mimic belief propagation but instead are derived using the knowledge of trapping sets. We show that the update rules can be derived to correct certain error patterns that are uncorrectable by algorithms such as BP and min-sum. In some cases even with a small message set, these decoders can guarantee correction of a higher number of errors than BP and min-sum. We provide particularly good 3-bit decoders for 3-left-regular LDPC codes. They significantly outperform the BP and min-sum decoders, but more importantly, they achieve this at only a fraction of the complexity of the BP and min-sum decoders.

  15. An efficient VLSI implementation of H.264/AVC entropy decoder

    Institute of Scientific and Technical Information of China (English)

    Jongsik PARK; Jeonhak MOON; Seongsoo LEE

    2010-01-01

    This paper proposes an efficient H.264/AVC entropy decoder. It requires no ROM/RAM fabrication process, which decreases fabrication cost and increases operation speed. This was achieved by optimizing lookup tables and internal buffers, which significantly improves area, speed, and power. The proposed entropy decoder does not exploit an embedded processor for bitstream manipulation, which also improves area, speed, and power. Its gate count and maximum operating frequency are 77,515 gates and 175 MHz in a 0.18 µm fabrication process, respectively. The proposed entropy decoder needs 2,303 cycles on average to decode one macroblock. It can run at 28 MHz to meet the real-time processing requirement for CIF-format video decoding in mobile applications.

  16. Implementation of Huffman Decoder on Fpga

    Directory of Open Access Journals (Sweden)

    Safia Amir Dahri

    2016-01-01

    The lossless data compression algorithm is among the most widely used algorithms in data transmission, reception and storage systems, as it increases data rate and saves storage space. Nowadays, different algorithms are implemented in hardware to obtain the benefits of hardware realization; hardware implementation of digital signal processing algorithms and filter realization is done on programmable devices, i.e., FPGAs. Among lossless data compression algorithms, the Huffman algorithm is the most widely used because of its variable-length coding feature and many other benefits. Huffman algorithms are used in many applications in software form, e.g., Zip and Unzip, and in communication. In this paper, a Huffman decoder is implemented on a Xilinx Spartan 3E board. The FPGA is programmed with the Xilinx tool Xilinx ISE 8.2i. The program is written in VHDL, and text data previously encoded by a Huffman algorithm is decoded by the Huffman algorithm on the hardware board. In order to visualize the output clearly in waveforms, the same code is simulated in ModelSim v6.4. The Huffman decoder is also implemented in MATLAB for verification of operation. The FPGA is a configurable device which is efficient in all aspects. Huffman algorithms are implemented in text applications, image processing, video streaming and many other applications.
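
    A minimal software model of the bit-serial tree walk that such a hardware decoder performs: start at the root, consume one bit per cycle, emit a symbol on reaching a leaf. The code table and bitstream below are invented for illustration.

        # Example prefix code (an assumption, not the paper's table).
        code = {"a": "0", "b": "10", "c": "110", "d": "111"}

        # Build the decoding tree from the code table.
        tree = {}
        for sym, bits in code.items():
            node = tree
            for bit in bits[:-1]:
                node = node.setdefault(bit, {})
            node[bits[-1]] = sym

        def decode(bitstream):
            out, node = [], tree
            for bit in bitstream:
                node = node[bit]
                if isinstance(node, str):        # leaf reached: emit symbol, restart at root
                    out.append(node)
                    node = tree
            return "".join(out)

        assert decode("110010111") == "cabd"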

  17. SDRAM bus schedule of HDTV video decoder

    Science.gov (United States)

    Wang, Hui; He, Yan L.; Yu, Lu

    2001-12-01

    In this paper, a time-division-multiplexed (TDM) task schedule for an HDTV video decoder is proposed. Three tasks share the bus: fetching decoded data from SDRAM for display (DIS), reading reference data from SDRAM for motion compensation (REF), and writing motion-compensated data back to SDRAM (WB). The proposed schedule is based on a novel four-bank interlaced SDRAM storage structure, which results in less read/write overhead. Two SDRAMs of 64 Mbit (4 banks × 512K × 32 bit) are used. Compared with two banks, the four-bank storage strategy reads/writes data in 45% less time, so the data rates of the three tasks are reduced. The TDM schedule is built from round-robin scheduling and fixed slot allocation, with both macroblock slots and task slots. As a result, bus conflicts are avoided, and the buffer size is reduced by 48% compared with priority bus scheduling. Moreover, there is a compact bus schedule for the worst case of stuffing owing to the reduced task execution time. The buffer size is reduced and the control logic is simplified.

  18. Decoding suprathreshold stochastic resonance with optimal weights

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Liyan, E-mail: xuliyan@qdu.edu.cn [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Vladusich, Tony [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Duan, Fabing [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Gunn, Lachlan J.; Abbott, Derek [Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia); McDonnell, Mark D. [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia)

    2015-10-09

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response. - Highlights: • A weighted summing array of independently noisy binary comparators is investigated. • We present an optimal linearly weighted decoding scheme for combining the comparator responses. • We solve for the optimal weights by applying least squares regression to simulated data. • We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses. • For some parameter regions, the decrease in MSE distortion due to weighting is negligible.
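
    In the spirit of this record, the least-squares fit of decoding weights can be sketched in a few lines; the array size, threshold placement and noise level below are illustrative assumptions rather than the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T, noise_sd = 15, 20000, 0.8
        thresholds = np.linspace(-1.5, 1.5, N)               # distributed threshold levels

        x = rng.normal(size=T)                               # analog input samples
        responses = (x[:, None] + noise_sd * rng.normal(size=(T, N))
                     > thresholds).astype(float)             # independently noisy comparators

        # Unweighted decoding: best affine function of the plain sum of responses.
        A1 = np.hstack([responses.sum(axis=1, keepdims=True), np.ones((T, 1))])
        w0 = np.linalg.lstsq(A1, x, rcond=None)[0]
        mse_unweighted = np.mean((x - A1 @ w0) ** 2)

        # Weighted decoding: one least-squares weight per comparator, plus a bias.
        R1 = np.hstack([responses, np.ones((T, 1))])
        w = np.linalg.lstsq(R1, x, rcond=None)[0]
        mse_weighted = np.mean((x - R1 @ w) ** 2)

        print(f"MSE unweighted {mse_unweighted:.4f}, weighted {mse_weighted:.4f}")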

  19. Reduced complexity turbo equalization using a dynamic Bayesian network

    Science.gov (United States)

    Myburgh, Hermanus C.; Olivier, Jan C.; van Zyl, Augustinus J.

    2012-12-01

    It is proposed that a dynamic Bayesian network (DBN) be used to perform turbo equalization in a system transmitting information over a Rayleigh fading multipath channel. The DBN turbo equalizer (DBN-TE) is modeled on a single directed acyclic graph by relaxing the Markov assumption and allowing weak connections to past and future states. Its complexity is exponential in encoder constraint length and approximately linear in the channel memory length. Results show that the performance of the DBN-TE closely matches that of a traditional turbo equalizer that uses a maximum a posteriori equalizer and decoder pair. The DBN-TE achieves full convergence and near-optimal performance after a small number of iterations.

  20. Lattice Sequential Decoder for Coded MIMO Channel: Performance and Complexity Analysis

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    In this paper, the performance limit of lattice sequential decoder for lattice space-time coded MIMO channel is analysed. We determine the rates achievable by lattice coding and sequential decoding applied to such channel. The diversity-multiplexing tradeoff (DMT) under lattice sequential decoding is derived as a function of its parameter---the bias term. The bias parameter is critical for controlling the amount of computations required at the decoding stage. Achieving low decoding complexity requires increasing the value of the bias term. However, this is done at the expense of losing the optimal tradeoff of the channel. We show how such a decoder can bridge the gap between lattice decoder and low complexity decoders. Moreover, the computational complexity of lattice sequential decoder is analysed. Specifically, we derive the tail distribution of the decoder's computational complexity in the high signal-to-noise ratio regime. Similar to the conventional sequential decoder used in discrete memoryless channel,...

  1. The Differential Contributions of Auditory-Verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders

    Science.gov (United States)

    Squires, Katie Ellen

    2013-01-01

    This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…

  2. A sliced inverse regression (SIR) decoding the forelimb movement from neuronal spikes in the rat motor cortex

    Directory of Open Access Journals (Sweden)

    Shih-Hung Yang

    2016-12-01

    Several neural decoding algorithms have successfully converted brain signals into commands to control a computer cursor and prosthetic devices. A majority of decoding methods, such as population vector algorithms (PVA), optimal linear estimators (OLE), and neural networks (NN), are effective in predicting movement kinematics, including movement direction, speed and trajectory, but usually require a large number of neurons to achieve desirable performance. This study proposed a novel decoding algorithm that works even with signals obtained from a smaller number of neurons. We adopted sliced inverse regression (SIR) to predict forelimb movement from single-unit activities recorded in the rat primary motor (M1) cortex in a water-reward lever-pressing task. SIR performed weighted principal component analysis (PCA) to achieve effective dimension reduction for nonlinear regression. To demonstrate the decoding performance, SIR was compared to PVA, OLE, and NN. Furthermore, PCA and sequential feature selection (SFS), which are popular feature selection techniques, were implemented for comparison of feature selection effectiveness. Among the SIR, PVA, OLE, PCA, SFS, and NN decoding methods, the trajectories predicted by SIR (with a root mean square error, RMSE, of 8.47 ± 1.32 mm) were closest to the actual trajectories, compared with those predicted by PVA (30.41 ± 11.73 mm), OLE (20.17 ± 6.43 mm), PCA (19.13 ± 0.75 mm), SFS (22.75 ± 2.01 mm), and NN (16.75 ± 2.02 mm). The superiority of SIR was most obvious when the sample size of neurons was small. We conclude that SIR sorted the input data to obtain effective transform matrices for movement prediction, making it a robust decoding method for conditions with sparse neuronal information.
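
    A compact sketch of the SIR dimension-reduction step on synthetic data (the data-generating model, slice count and dimensions are assumptions; real use would feed binned spike counts and lever positions):

        import numpy as np

        def sir_directions(X, y, n_slices=10, n_dirs=1):
            n, p = X.shape
            Xc = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            Z = Xc @ Vt.T / s * np.sqrt(n)                   # whiten the predictors
            order = np.argsort(y)                            # slice observations by response
            M = np.zeros((p, p))
            for idx in np.array_split(order, n_slices):
                m = Z[idx].mean(axis=0)                      # slice mean in whitened space
                M += len(idx) / n * np.outer(m, m)
            _, V = np.linalg.eigh(M)                         # weighted PCA of the slice means
            return (Vt.T / s * np.sqrt(n)) @ V[:, ::-1][:, :n_dirs]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 20))                       # stand-in for binned spike counts
        beta = np.zeros(20)
        beta[0], beta[1] = 1.0, 0.5                          # true predictive direction
        y = np.tanh(X @ beta) + 0.1 * rng.normal(size=500)   # stand-in for lever position

        d = sir_directions(X, y)[:, 0]
        print("cosine to true direction:",
              abs(d @ beta) / (np.linalg.norm(d) * np.linalg.norm(beta)))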

  3. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template, and prior knowledge of the grid is described through a Markov random field (MRF) model defined over the grid nodes, where the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing nodes.

  4. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBUGS.

  5. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  6. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme (the "encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  7. Decoding Generalized Concatenated Codes Using Interleaved Reed-Solomon Codes

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor

    2008-01-01

    Generalized Concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code. As outer codes, we assume the most frequently used Reed-Solomon codes; as inner code, we assume some linear block code which can be decoded up to half its minimum distance. Decoding up to half the minimum distance of Generalized Concatenated codes is classically achieved by the Blokh-Zyablov-Dumer algorithm, which iteratively decodes by first using the inner decoder to get an estimate of the outer code words and then using an outer error/erasure decoder with a varying number of erasures determined by a set of pre-calculated thresholds. In this paper, a modified version of the Blokh-Zyablov-Dumer algorithm is proposed, which exploits the fact that a number of outer Reed-Solomon codes with average minimum distance d can be grouped into one single Interleaved Reed-Solomon code which can be decoded beyond d/2. This allows to skip a number of decoding iterations on the one...

  8. Approximate Decoding Approaches for Network Coded Correlated Data

    CERN Document Server

    Park, Hyunggon; Frossard, Pascal

    2011-01-01

    This paper considers a framework where data from correlated sources are transmitted with help of network coding in ad-hoc network topologies. The correlated data are encoded independently at sensors and network coding is employed in the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at decoder when perfect decoding is not possible due to losses or bandwidth bottlenecks. We first show that the source data similarity can be used at decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters and in particular the size of finite coding fields on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the perfo...

  9. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decodings. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to traditional GIDNC scheme. The completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.

  10. An unbiased Bayesian approach to functional connectomics implicates social-communication networks in autism.

    Science.gov (United States)

    Venkataraman, Archana; Duncan, James S; Yang, Daniel Y-J; Pelphrey, Kevin A

    2015-01-01

    Resting-state functional magnetic resonance imaging (rsfMRI) studies reveal a complex pattern of hyper- and hypo-connectivity in children with autism spectrum disorder (ASD). Whereas rsfMRI findings tend to implicate the default mode network and subcortical areas in ASD, task fMRI and behavioral experiments point to social dysfunction as a unifying impairment of the disorder. Here, we leverage a novel Bayesian framework for whole-brain functional connectomics that aggregates population differences in connectivity to localize a subset of foci that are most affected by ASD. Our approach is entirely data-driven and does not impose spatial constraints on the region foci or dictate the trajectory of altered functional pathways. We apply our method to data from the openly shared Autism Brain Imaging Data Exchange (ABIDE) and pinpoint two intrinsic functional networks that distinguish ASD patients from typically developing controls. One network involves foci in the right temporal pole, left posterior cingulate cortex, left supramarginal gyrus, and left middle temporal gyrus. Automated decoding of this network by the Neurosynth meta-analytic database suggests high-level concepts of "language" and "comprehension" as the likely functional correlates. The second network consists of the left banks of the superior temporal sulcus, right posterior superior temporal sulcus extending into temporo-parietal junction, and right middle temporal gyrus. Associated functionality of these regions includes "social" and "person". The abnormal pathways emanating from the above foci indicate that ASD patients simultaneously exhibit reduced long-range or inter-hemispheric connectivity and increased short-range or intra-hemispheric connectivity. Our findings reveal new insights into ASD and highlight possible neural mechanisms of the disorder.

  11. Coding and Decoding for the Dynamic Decode and Forward Relay Protocol

    CERN Document Server

    Kumar, K Raj

    2008-01-01

    We study the Dynamic Decode and Forward (DDF) protocol for a single half-duplex relay, single-antenna channel with quasi-static fading. The DDF protocol is well-known and has been analyzed in terms of the Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We characterize the finite block length DMT and give new explicit code constructions. The finite block length analysis illuminates a few key aspects that have been neglected in the previous literature: 1) we show that one dominating cause of degradation with respect to the infinite block length regime is the event of decoding error at the relay; 2) we explicitly take into account the fact that the destination does not generally know a priori the relay decision time at which the relay switches from listening to transmit mode. Both the above problems can be tackled by a careful design of the decoding algorithm. In particular, we introduce a decision rejection criterion at the relay based on Forney's decision rule (a variant of the Neyman-Pearson...

  12. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

    Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to neural networks...

  13. Bayesian Intersubjectivity and Quantum Theory

    Science.gov (United States)

    Pérez-Suárez, Marcos; Santos, David J.

    2005-02-01

    Two of the major approaches to probability, namely, frequentism and (subjectivistic) Bayesian theory, are discussed, together with the replacement of frequentist objectivity with Bayesian intersubjectivity. This discussion is then expanded to Quantum Theory, as quantum states and operations can be seen as structural elements of a subjective nature.

  14. Bayesian Approach for Inconsistent Information.

    Science.gov (United States)

    Stein, M; Beer, M; Kreinovich, V

    2013-10-01

    In engineering situations, we usually have a large amount of prior knowledge that needs to be taken into account when processing data. Traditionally, the Bayesian approach is used to process data in the presence of prior knowledge. Sometimes, when we apply the traditional Bayesian techniques to engineering data, we get inconsistencies between the data and prior knowledge. These inconsistencies are usually caused by the fact that in the traditional approach, we assume that we know the exact sample values, that the prior distribution is exactly known, etc. In reality, the data is imprecise due to measurement errors, the prior knowledge is only approximately known, etc. So, a natural way to deal with the seemingly inconsistent information is to take this imprecision into account in the Bayesian approach - e.g., by using fuzzy techniques. In this paper, we describe several possible scenarios for fuzzifying the Bayesian approach. Particular attention is paid to the interaction between the estimated imprecise parameters. In this paper, to implement the corresponding fuzzy versions of the Bayesian formulas, we use straightforward computations of the related expression - which makes our computations rather time-consuming. Computations in the traditional (non-fuzzy) Bayesian approach are much faster - because they use algorithmically efficient reformulations of the Bayesian formulas. We expect that similar reformulations of the fuzzy Bayesian formulas will also drastically decrease the computation time and thus, enhance the practical use of the proposed methods.
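
    A toy version of the idea: when the prior is only imprecisely known, run the ordinary conjugate update for every plausible prior and keep the envelope of the answers. All numbers below are invented for illustration, and an interval prior stands in for a full fuzzy membership function.

        import numpy as np

        data = np.array([9.8, 10.4, 10.1, 9.9])      # measurements with known variance
        sigma2, tau2 = 0.25, 1.0                     # sampling and prior variances
        n = len(data)

        posterior_means = []
        for mu0 in np.linspace(9.0, 11.0, 41):       # prior mean only known to lie in [9, 11]
            post_var = 1.0 / (1.0 / tau2 + n / sigma2)
            post_mean = post_var * (mu0 / tau2 + data.sum() / sigma2)
            posterior_means.append(post_mean)

        print(f"posterior mean envelope: [{min(posterior_means):.3f}, "
              f"{max(posterior_means):.3f}]")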

  15. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees). This paper surveys the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  16. Decoding the mechanisms of Antikythera astronomical device

    CERN Document Server

    Lin, Jian-Liang

    2016-01-01

    This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...

  17. Interference Alignment for Clustered Multicell Joint Decoding

    CERN Document Server

    Chatzinotas, Symeon

    2010-01-01

    Multicell joint processing has been proven to be very efficient in overcoming the interference-limited nature of the cellular paradigm. However, for reasons of practical implementation global multicell joint decoding is not feasible and thus clusters of cooperating Base Stations have to be considered. In this context, intercluster interference has to be mitigated in order to harvest the full potential of multicell joint processing. In this paper, four scenarios of intercluster interference are investigated, namely a) global multicell joint processing, b) interference alignment, c) resource division multiple access and d) cochannel interference allowance. Each scenario is modelled and analyzed using the per-cell ergodic sum-rate capacity as a figure of merit. In this process, a number of theorems are derived for analytically expressing the asymptotic eigenvalue distributions of the channel covariance matrices. The analysis is based on principles from Free Probability theory and especially properties in the R a...

  18. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February and 1, 2, 3 March 2006, from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  19. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    Science.gov (United States)

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.

  20. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistani statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistani statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration via a couple of simplified examples.
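
    A minimal Metropolis-Hastings example in the spirit of the article, estimating the frequency of a weak sinusoid in noise; the signal model, prior and tuning below are illustrative assumptions, far simpler than real gravitational-wave templates.

        import numpy as np

        rng = np.random.default_rng(42)
        t = np.linspace(0.0, 1.0, 512)
        data = 0.5 * np.sin(2 * np.pi * 37.0 * t) + rng.normal(size=t.size)  # true f = 37 Hz

        def log_post(f):
            if not 10.0 < f < 100.0:                     # flat prior on the frequency
                return -np.inf
            resid = data - 0.5 * np.sin(2 * np.pi * f * t)
            return -0.5 * np.sum(resid ** 2)             # unit-variance Gaussian noise

        grid = np.linspace(10.0, 100.0, 901)
        f = grid[np.argmax([log_post(g) for g in grid])] # coarse search initializes the chain
        lp, samples = log_post(f), []
        for _ in range(5000):
            prop = f + rng.normal(scale=0.2)             # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
                f, lp = prop, lp_prop
            samples.append(f)

        post = np.array(samples[1000:])                  # drop burn-in
        print(f"frequency posterior: {post.mean():.2f} +/- {post.std():.2f} Hz")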

  1. Turbo decoder architecture for beyond-4G applications

    CERN Document Server

    Wong, Cheng-Chi

    2013-01-01

    This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respect...

  2. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  3. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve records...
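
    The classical 2-query Hadamard code illustrates the local-decodability idea this record describes: any single message bit is recovered by reading just two codeword positions, even after corruption. Message length, corruption rate and trial count below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        k = 8
        x = rng.integers(0, 2, size=k)                    # message bits
        x_int = int("".join(map(str, x)), 2)              # x[0] is the most significant bit

        def parity(a, j):
            return bin(int(a) & int(j)).count("1") & 1

        # Encode: position j of the codeword holds the GF(2) inner product <x, j>.
        code = np.array([parity(x_int, j) for j in range(2 ** k)], dtype=np.uint8)
        flips = rng.choice(2 ** k, size=2 ** k // 20, replace=False)
        code[flips] ^= 1                                  # corrupt ~5% of positions

        def decode_bit(i, trials=101):
            e_i = 1 << (k - 1 - i)
            # Each vote reads only two positions: <x, j> XOR <x, j XOR e_i> = x_i.
            votes = sum(code[j] ^ code[j ^ e_i]
                        for j in rng.integers(0, 2 ** k, size=trials))
            return int(votes > trials // 2)

        assert all(decode_bit(i) == x[i] for i in range(k))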

  4. Analysis and Design of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2012-01-01

    Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results...... message-passing decoders. Finally, it is shown that errors on cycles consisting only of degree two and three variable nodes cannot be corrected and a necessary and sufficient condition for the existence of a cycle-free subgraph is derived....

  5. How to practise Bayesian statistics outside the Bayesian church: What philosophy for Bayesian statistical modelling?

    NARCIS (Netherlands)

    Borsboom, D.; Haig, B.D.

    2013-01-01

    Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science primarily...

  6. Bayesian coestimation of phylogeny and sequence alignment

    Directory of Open Access Journals (Sweden)

    Jensen, Jens

    2005-04-01

    Abstract Background Two central problems in computational biology are the determination of the alignment and phylogeny of a set of biological sequences. The traditional approach to this problem is to first build a multiple alignment of these sequences, followed by a phylogenetic reconstruction step based on this multiple alignment. However, alignment and phylogenetic inference are fundamentally interdependent, and ignoring this fact leads to biased and overconfident estimations. Whether the main interest be in sequence alignment or phylogeny, a major goal of computational biology is the co-estimation of both. Results We developed a fully Bayesian Markov chain Monte Carlo method for coestimating phylogeny and sequence alignment, under the Thorne-Kishino-Felsenstein model of substitution and single nucleotide insertion-deletion (indel) events. In our earlier work, we introduced a novel and efficient algorithm, termed the "indel peeling algorithm", which includes indels as phylogenetically informative evolutionary events, and resembles Felsenstein's peeling algorithm for substitutions on a phylogenetic tree. For a fixed alignment, our extension analytically integrates out both substitution and indel events within a proper statistical model, without the need for data augmentation at internal tree nodes, allowing for efficient sampling of tree topologies and edge lengths. To additionally sample multiple alignments, we here introduce an efficient partial Metropolized independence sampler for alignments, and combine these two algorithms into a fully Bayesian co-estimation procedure for the alignment and phylogeny problem. Our approach results in estimates for the posterior distribution of evolutionary rate parameters, for the maximum a posteriori (MAP) phylogenetic tree, and for the posterior decoding alignment. Estimates for the evolutionary tree and multiple alignment are augmented with confidence estimates for each node height and alignment column.

  7. A bayesian approach to classification criteria for spectacled eiders

    Science.gov (United States)

    Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.

    1996-01-01

    To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield basically the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.
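
    A stylized miniature of this kind of analysis: a grid posterior for the population growth rate from log abundance indices, and the posterior probability of decline that a classification rule would threshold. The data and model simplifications below are invented for illustration (intercept profiled out, noise variance handled crudely via the profile likelihood).

        import numpy as np

        years = np.arange(10)
        index = np.array([1200, 1010, 950, 870, 800, 760, 690, 640, 600, 555])
        y = np.log(index)                                # log abundance indices (invented)

        # Model: log N_t = a + r t + noise, with a flat prior on the growth rate r.
        r_grid = np.linspace(-0.2, 0.1, 601)
        log_like = np.empty_like(r_grid)
        for k, r in enumerate(r_grid):
            a_hat = np.mean(y - r * years)               # profile out the intercept
            rss = np.sum((y - a_hat - r * years) ** 2)
            log_like[k] = -0.5 * len(y) * np.log(rss)

        post = np.exp(log_like - log_like.max())
        post /= post.sum()
        p_decline = post[r_grid < 0].sum()               # quantity a listing rule thresholds
        print(f"P(growth rate < 0 | data) = {p_decline:.3f}")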

  8. Implementing Bayesian Vector Autoregressions

    Directory of Open Access Journals (Sweden)

    Richard M. Todd

    1988-03-01

    This paper discusses how the Bayesian approach can be used to construct a type of multivariate forecasting model known as a Bayesian vector autoregression (BVAR). In doing so, we mainly explain the propositions of Doan, Litterman, and Sims (1984) on how to estimate a BVAR based on a certain family of prior probability distributions indexed by a fairly small set of hyperparameters. There is also a discussion on how to specify a BVAR and set up a BVAR database. A 4-variable model is used to illustrate the BVAR approach.

  9. Decoding of visual attention from LFP signals of macaque MT.

    Science.gov (United States)

    Esghaei, Moein; Daliri, Mohammad Reza

    2014-01-01

    The local field potential (LFP) has recently been widely used in brain computer interfaces (BCI). Here we used power of LFP recorded from area MT of a macaque monkey to decode where the animal covertly attended. Support vector machines (SVM) were used to learn the pattern of power at different frequencies for attention to two possible positions. We found that LFP power at both low (<9 Hz) and high (31-120 Hz) frequencies contains sufficient information to decode the focus of attention. Highest decoding performance was found for gamma frequencies (31-120 Hz) and reached 82%. In contrast low frequencies (<9 Hz) could help the classifier reach a higher decoding performance with a smaller amount of training data. Consequently, we suggest that low frequency LFP can provide fast but coarse information regarding the focus of attention, while higher frequencies of the LFP deliver more accurate but less timely information about the focus of attention.
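
    The decoding pipeline described here (band-limited LFP power into an SVM) is easy to sketch on synthetic data. Trial counts, SNR and the toy gamma modulation below are assumptions; only the two band choices echo the study.

        import numpy as np
        from scipy.signal import welch
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        fs, n_trials, n_samp = 1000, 200, 1000

        X, y = [], []
        for trial in range(n_trials):
            attended = trial % 2                          # two candidate attention targets
            lfp = rng.normal(size=n_samp)                 # broadband noise background
            # Toy effect: attention strengthens a ~60 Hz (gamma) oscillation.
            lfp += (0.6 if attended else 0.2) * np.sin(
                2 * np.pi * 60 * np.arange(n_samp) / fs)
            f, pxx = welch(lfp, fs=fs, nperseg=256)
            bands = [(1, 9), (31, 120)]                   # the study's low and gamma bands
            X.append([pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands])
            y.append(attended)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        print("decoding accuracy:",
              cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())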

  11. Improved List Sphere Decoder for Multiple Antenna Systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An improved list sphere decoder (ILSD) is proposed based on the conventional list sphere decoder (LSD) and the reduced-complexity maximum-likelihood sphere-decoding algorithm. Unlike the conventional LSD with fixed initial radius, the ILSD adopts an adaptive radius to accelerate list construction. Characterized by low complexity and radius insensitivity, the proposed algorithm makes iterative joint detection and decoding more realizable in multiple-antenna systems. Simulation results show that the computational savings of the ILSD over the LSD are more apparent with more transmit antennas or larger constellations, with no performance degradation. Because the complexity of the ILSD remains almost invariant as the initial radius increases, BER performance can be improved by selecting a sufficiently large radius.

  12. Multiresolutional encoding and decoding in embedded image and video coders

    Science.gov (United States)

    Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.

    1998-07-01

    We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.

  13. On Complexity, Energy- and Implementation-Efficiency of Channel Decoders

    CERN Document Server

    Kienle, Frank; Meyr, Heinrich

    2010-01-01

    Future wireless communication systems require efficient and flexible baseband receivers. Meaningful efficiency metrics are key for design space exploration to quantify the algorithmic and the implementation complexity of a receiver. Most of the currently established efficiency metrics are based on counting operations, thus neglecting important issues like data and storage complexity. In this paper we introduce suitable energy and area efficiency metrics which resolve the aforementioned disadvantages: decoded information bits per unit of energy and throughput per unit of area. The efficiency metrics are assessed by various implementations of turbo decoders, LDPC decoders and convolutional decoders. New exploration methodologies are presented, which permit an appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs. These exploration methodologies are based on efficiency trajectories rather than a single snapshot metric as done in state-of-the-art approaches.

  14. Interpolating and filtering decoding algorithm for convolution codes

    Directory of Open Access Journals (Sweden)

    O. O. Shpylka

    2010-01-01

    An interpolating and filtering decoding algorithm for convolutional codes has been synthesized under the maximum a posteriori probability criterion, combining filtering of the coder state with interpolation of the information symbols over a sliding interval.

  15. Tracing Precept against Self-Protective Tortious Decoder

    Institute of Scientific and Technical Information of China (English)

    Jie Tian; Xin-Fang Zhang; Yi-Lin Song; Wei Xiang

    2007-01-01

    Traceability precept is a broadcast encryption technique by which content suppliers can trace malicious authorized users who leak the decryption key to an unauthorized user. To protect the data from eavesdropping, the content supplier encrypts the data and broadcasts the cryptograph that only its subscribers can decrypt. However, a traitor may clone his decoder and sell the pirate decoders for profit. The traitor can modify the private key and the decryption program inside the pirate decoder to avoid divulging his identity. Furthermore, some traitors may together fabricate a new legal private key that cannot be traced to its creators. So in this paper, a renewed precept is proposed to achieve both revocation at a different level of capacity in each distribution and black-box tracing against self-protective pirate decoders. Rigorous mathematical deduction shows that our algorithm possesses the security property.

  16. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(d_FR-1)/2]+1, where d_FR is the Feng-Rao distance.

  17. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(d_FR-1)/2]+1, where d_FR is the Feng-Rao distance.

  18. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore, we calculate an upper bound on the probability of getting more than one codeword as output.

  19. VLSI architecture for a Reed-Solomon decoder

    Science.gov (United States)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor)

    1992-01-01

    A basic single-chip building block for a Reed-Solomon (RS) decoder system is partitioned into a plurality of sections, the first of which consists of a plurality of syndrome subcells, each of which contains identical standard-basis finite-field multipliers that are programmable between 10-bit and 8-bit operation. A desired number of basic building blocks may be assembled to provide an RS decoder of any syndrome subcell size that is programmable between 10-bit and 8-bit operation.

  20. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    OpenAIRE

    2003-01-01

    We propose an area-efficient high-speed interleaved Viterbi decoder architecture, based on the state-parallel architecture with a register-exchange path memory structure, for interleaved convolutional codes. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with multiple delays, an interleaved Viterbi decoder is obtained...

  1. Bayesian Causal Induction

    CERN Document Server

    Ortega, Pedro A

    2011-01-01

    Discovering causal relationships is a hard task, often hindered by the need for intervention, and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans use strong prior knowledge; and rather than encoding hard causal relationships, they encode beliefs over causal structures, allowing for sound generalization from the observations they obtain from directly acting in the world. In this work we propose a Bayesian approach to causal induction which allows modeling beliefs over multiple causal hypotheses and predicting the behavior of the world under causal interventions. We then illustrate how this method extracts causal information from data containing interventions and observations.

  2. Bayesian Rose Trees

    CERN Document Server

    Blundell, Charles; Heller, Katherine A

    2012-01-01

    Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.

  3. Bayesian inference in geomagnetism

    Science.gov (United States)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  4. Bayesian isochrone fitting and stellar ages

    CERN Document Server

    Valls-Gabaud, D

    2016-01-01

    Stellar evolution theory has been extraordinarily successful at explaining the different phases under which stars form, evolve and die. While the strongest constraints have traditionally come from binary stars, the advent of asteroseismology is bringing unique measures in well-characterised stars. For stellar populations in general, however, only photometric measures are usually available, and the comparison with the predictions of stellar evolution theory have mostly been qualitative. For instance, the geometrical shapes of isochrones have been used to infer ages of coeval populations, but without any proper statistical basis. In this chapter we provide a pedagogical review on a Bayesian formalism to make quantitative inferences on the properties of single, binary and small ensembles of stars, including unresolved populations. As an example, we show how stellar evolution theory can be used in a rigorous way as a prior information to measure the ages of stars between the ZAMS and the Helium flash, and their u...

  5. Bayesian Calibration of Microsimulation Models.

    Science.gov (United States)

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
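
    A minimal sketch of this kind of calibration, assuming a toy one-parameter simulator and binomial calibration data (not the authors' colorectal-cancer model), is a Metropolis sampler over the parameter:

        import math, random

        def simulate(theta):
            # Stand-in "microsimulation": event rate implied by theta.
            return 1.0 / (1.0 + math.exp(-theta))

        def log_posterior(theta, k, n):
            p = simulate(theta)
            # Binomial likelihood of k events in n subjects plus a N(0,1) prior.
            return k * math.log(p) + (n - k) * math.log(1.0 - p) - 0.5 * theta * theta

        def metropolis(k, n, steps=5000, scale=0.5):
            theta, samples = 0.0, []
            for _ in range(steps):
                prop = theta + random.gauss(0.0, scale)
                if math.log(random.random()) < log_posterior(prop, k, n) - log_posterior(theta, k, n):
                    theta = prop
                samples.append(theta)
            return samples

        post = metropolis(k=60, n=200)
        print(sum(post[1000:]) / len(post[1000:]))  # posterior mean after burn-in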

  6. Efficient VLSI architecture of CAVLC decoder with power optimized

    Institute of Scientific and Technical Information of China (English)

    CHEN Guang-hua; HU Deng-ji; ZHANG Jin-yi; ZHENG Wei-feng; ZENG Wei-min

    2009-01-01

    This paper presents an efficient VLSI architecture of the context-based adaptive variable length coding (CAVLC) decoder with power optimized for the H.264/advanced video coding (AVC) standard. In the proposed design, according to the regularity of the codewords, a first-one detector is used to solve the low-efficiency and high-power-dissipation problem of the traditional table-searching method. Considering the relevance of the data used in the decoding of the run_before element, arithmetic operation is combined with a finite state machine (FSM), which achieves higher decoding efficiency. Following the CAVLC decoding flow, clock gating is employed at the module level and the register level respectively, which reduces the overall dynamic power dissipation by 43%. The proposed design can decode every syntax element in one clock cycle. When the proposed design is synthesized at a clock constraint of 100 MHz, the synthesis result shows that the design costs 11,300 gates under a 0.25 μm CMOS technology, which meets the demand of real-time decoding in the H.264/AVC standard.

  7. Decoding Schemes for FBMC with Single-Delay STTC

    Science.gov (United States)

    Lélé, Chrislin; Le Ruyet, Didier

    2010-12-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  8. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Chrislin Lélé

    2010-01-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  9. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Lélé Chrislin

    2010-01-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  10. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.

  11. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan

    2010-08-01

    While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching type decoder structures such as the Sphere decoder and the K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature with widely varying performance to area and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding techniques and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare that to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
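
    For concreteness, a breadth-first K-best search can be sketched as below, assuming a real-valued 2x2 system already QR-decomposed to an upper-triangular R and BPSK symbols; the sizes and names are illustrative, not taken from any of the surveyed designs.

        import numpy as np

        def k_best_detect(R, y, K=2, alphabet=(-1.0, 1.0)):
            n = R.shape[0]
            survivors = [(0.0, [])]              # (partial metric, symbols so far)
            for level in range(n - 1, -1, -1):   # detect from last row upward
                candidates = []
                for metric, syms in survivors:
                    for s in alphabet:
                        trial = [s] + syms
                        # Interference from already-decided symbols on this row.
                        interf = sum(R[level, level + 1 + i] * trial[1 + i]
                                     for i in range(len(syms)))
                        inc = (y[level] - R[level, level] * s - interf) ** 2
                        candidates.append((metric + inc, trial))
                candidates.sort(key=lambda c: c[0])
                survivors = candidates[:K]       # keep only the K best nodes
            return survivors[0][1]

        R = np.array([[1.0, 0.4], [0.0, 0.8]])
        y = R @ np.array([1.0, -1.0]) + 0.05 * np.random.randn(2)
        print(k_best_detect(R, y))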

  12. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    Science.gov (United States)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.

  13. Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements

    Science.gov (United States)

    Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping

    2017-01-01

    Extensive literature has shown approaches for decoding upper limb kinematics or muscle activity using multichannel cortical spike recordings toward brain machine interface (BMI) applications. However, similar topics regarding the lower limb remain relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity characterized by electromyography (EMG) signals during monkey stand/squat movements can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted in 8 right leg/thigh muscles. With ample data collected from neurons over a large brain area, we performed a spike triggered average (SpTA) analysis and obtained a series of density contours which revealed the spatial distributions of the different muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural data recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and the actual data were achieved for both EMG signals and kinematics on both monkeys. Higher decoding accuracy and faster convergence rate could be achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed
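
    The standard-Kalman-filter half of such a decoder reduces to the familiar predict/update recursion; the sketch below uses toy matrices and a 1-D "kinematic" state observed through three linearly tuned units, which are assumptions for illustration rather than the authors' fitted models.

        import numpy as np

        def kalman_decode(Z, A, W, H, Q, x0, P0):
            x, P, estimates = x0, P0, []
            for z in Z:
                x, P = A @ x, A @ P @ A.T + W          # predict from state model
                S = H @ P @ H.T + Q                    # innovation covariance
                K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
                x = x + K @ (z - H @ x)                # update with neural observation
                P = (np.eye(len(x0)) - K @ H) @ P
                estimates.append(x.copy())
            return np.array(estimates)

        A = np.array([[0.99]]); W = np.array([[0.01]])              # state transition
        H = np.array([[1.0], [0.5], [-0.8]]); Q = 0.1 * np.eye(3)   # linear tuning model
        true_x = np.cumsum(0.1 * np.random.randn(100))[:, None]
        Z = true_x @ H.T + 0.3 * np.random.randn(100, 3)            # synthetic observations
        est = kalman_decode(Z, A, W, H, Q, np.zeros(1), np.eye(1))
        print(est[-3:].ravel())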

  14. Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements.

    Science.gov (United States)

    Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping

    2017-01-01

    Extensive literature has shown approaches for decoding upper limb kinematics or muscle activity using multichannel cortical spike recordings toward brain machine interface (BMI) applications. However, similar topics regarding the lower limb remain relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity characterized by electromyography (EMG) signals during monkey stand/squat movements can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted in 8 right leg/thigh muscles. With ample data collected from neurons over a large brain area, we performed a spike triggered average (SpTA) analysis and obtained a series of density contours which revealed the spatial distributions of the different muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural data recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and the actual data were achieved for both EMG signals and kinematics on both monkeys. Higher decoding accuracy and faster convergence rate could be achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed

  15. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  16. Iterative Decoding of Parallel Concatenated Block Codes and Coset Based MAP Decoding Algorithm for F24 Code

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-dimensional concatenation scheme for block codes is introduced, in which information symbols are interleaved and re-encoded for more than once. It provides a convenient platform to design high performance codes with flexible interleaver size.Coset based MAP soft-in/soft-out decoding algorithms are presented for the F24 code. Simulation results show that the proposed coding scheme can achieve high coding gain with flexible interleaver length and very low decoding complexity.

  17. Irregular-Time Bayesian Networks

    CERN Document Server

    Ramati, Michael

    2012-01-01

    In many fields observations are performed irregularly along time, due to either measurement limitations or lack of a constant immanent rate. While discrete-time Markov models (as Dynamic Bayesian Networks) introduce either inefficient computation or an information loss to reasoning about such processes, continuous-time Markov models assume either a discrete state space (as Continuous-Time Bayesian Networks), or a flat continuous state space (as stochastic differential equations). To address these problems, we present a new modeling class called Irregular-Time Bayesian Networks (ITBNs), generalizing Dynamic Bayesian Networks, allowing substantially more compact representations, and increasing the expressivity of the temporal dynamics. In addition, a globally optimal solution is guaranteed when learning temporal systems, provided that they are fully observed at the same irregularly spaced time-points, and a semiparametric subclass of ITBNs is introduced to allow further adaptation to the irregular nature of t...

  18. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2, (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves a BER of 10^-6 at 2 dB over an AWGN channel while performing a maximum of 18 decoding iterations.
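
    As a compact illustration of iterative LDPC-style decoding, the sketch below runs hard-decision bit flipping on a small stand-in code; this is a generic illustration, not the partly parallel message-passing hardware described above.

        import numpy as np

        def bit_flip_decode(H, r, max_iters=20):
            r = r.copy()
            for _ in range(max_iters):
                syndrome = H @ r % 2
                if not syndrome.any():
                    return r, True           # all parity checks satisfied
                # Count failed checks per bit; flip the worst offender(s).
                fails = syndrome @ H
                r[fails == fails.max()] ^= 1
            return r, False

        # (7,4) Hamming parity-check matrix as a small stand-in code.
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        received = np.zeros(7, dtype=int); received[2] ^= 1   # one bit error
        print(bit_flip_decode(H, received))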

  19. Decoding reality the universe as quantum information

    CERN Document Server

    Vedral, Vlatko

    2010-01-01

    In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...

  20. Infinity-Norm Sphere-Decoding

    CERN Document Server

    Seethaler, Dominik

    2008-01-01

    The most promising approaches for efficient detection in multiple-input multiple-output (MIMO) wireless systems are based on sphere-decoding (SD). The conventional (and optimum) norm that is used to conduct the tree traversal step in SD is the ℓ2 norm. It was, however, recently shown that using the ℓ∞ norm instead significantly reduces the VLSI implementation complexity of SD at only a marginal performance loss. These savings are due to a reduction in the length of the critical path and the silicon area of the circuit, but also, as observed previously through simulation results, a consequence of a reduction in the computational (algorithmic) complexity. The aim of this paper is an analytical performance and computational complexity analysis of ℓ∞ norm SD. For i.i.d. Rayleigh fading MIMO channels, we show that ℓ∞ norm SD achieves full diversity order with an asymptotic SNR gap, compared to ℓ2 norm SD, that increases at most linearly in the number of receive antennas. Moreover, we ...

  1. Why Hawking Radiation Cannot Be Decoded

    CERN Document Server

    Ong, Yen Chin; Chen, Pisin

    2014-01-01

    One of the great difficulties in the theory of black hole evaporation is that the most decisive phenomena tend to occur when the black hole is extremely hot: that is, when the physics is most poorly understood. Fortunately, a crucial step in the Harlow-Hayden approach to the firewall paradox, concerning the time available for decoding of Hawking radiation emanating from charged AdS black holes, can be made to work without relying on the unknown physics of black holes with extremely high temperatures; in fact, it relies on the properties of cold black holes. Here we clarify this surprising point. The approach is based on ideas borrowed from applications of the AdS/CFT correspondence to the quark-gluon plasma. Firewalls aside, our work presents a detailed analysis of the thermodynamics and evolution of evaporating charged AdS black holes with flat event horizons. We show that, in one way or another, these black holes are always eventually destroyed in a time which, while long by normal standards, is short relat...

  2. Fast mental states decoding in mixed reality.

    Directory of Open Access Journals (Sweden)

    Daniele eDe Massari

    2014-11-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information about ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during a mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single-trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest the applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in mixed reality.
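
    A sliding-window classification loop of this kind is easy to sketch; the snippet below mirrors the 5 s window / 200 ms step idea on synthetic 8-channel data with per-channel means as features, all of which are illustrative assumptions rather than the study's EEG pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        fs = 100                               # assumed sampling rate (Hz)
        win, step = 5 * fs, int(0.2 * fs)      # 5 s window, 200 ms shift

        def windows(x):
            for start in range(0, x.shape[1] - win + 1, step):
                yield x[:, start:start + win]

        rng = np.random.default_rng(0)
        X, y = [], []
        for label, shift in [(0, 0.0), (1, 0.5)]:    # two synthetic "states"
            data = rng.normal(shift, 1.0, size=(8, 60 * fs))
            for w in windows(data):
                X.append(w.mean(axis=1))             # per-channel mean features
                y.append(label)

        clf = LinearDiscriminantAnalysis().fit(X, y)
        test = rng.normal(0.5, 1.0, size=(8, 10 * fs))
        preds = [clf.predict([w.mean(axis=1)])[0] for w in windows(test)]
        print(np.mean(np.array(preds) == 1))         # fraction labelled state 1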

  3. Rate Aware Instantly Decodable Network Codes

    KAUST Repository

    Douik, Ahmed

    2016-02-26

    This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such a scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms the uncoded transmission strategies and the classical IDNC scheme. © 2015 IEEE.

  4. Cognitive Wyner Networks with Clustered Decoding

    CERN Document Server

    Lapidoth, Amos; Shamai, Shlomo; Wigger, Michele

    2012-01-01

    We study the uplink of linear cellular models featuring short-range inter-cell interference. That is, we study K-transmitter/K-receiver interference networks where the transmitters lie on a line and the receivers on a parallel line, each receiver opposite its corresponding transmitter. We assume short-range interference, by which we mean that the signal sent by a given transmitter is interfered only by the signal sent by its left neighbor (we refer to this setup as the asymmetric network) or by the signals sent by both its neighbors (we refer to this setup as the symmetric network). We assume that each transmitter has side-information consisting of the messages of the t_ℓ transmitters to its left and the t_r transmitters to its right, and that each receiver can decode its message using the signals received at its own antenna, at the r_ℓ receiving antennas to its left, and at the r_r receiving antennas to its right. We provide upper and lower bounds on the multiplexing gain of these ...

  5. Efficient blind decoders for additive spread spectrum embedding based data hiding

    Science.gov (United States)

    Valizadeh, Amir; Wang, Z. Jane

    2012-12-01

    This article investigates efficient blind watermark decoding approaches for hidden messages embedded into host images, within the framework of additive spread spectrum (SS) embedding based data hiding. We study SS embedding in both the discrete cosine transform and the discrete Fourier transform (DFT) domains. The contributions of this article are multiple-fold: first, we show that the conventional SS scheme cannot be applied directly to the magnitudes of the DFT, and thus we present a modified SS scheme, for which the optimal maximum likelihood (ML) decoder based on the Weibull distribution is derived. Secondly, we investigate improved spread spectrum (ISS) embedding, an improved technique over the traditional additive SS, and propose a modified ISS scheme for information hiding in the magnitudes of the DFT coefficients; the optimal ML decoders for ISS embedding are derived. We also provide thorough theoretical error probability analysis for the aforementioned decoders. Thirdly, sub-optimal decoders, including the local optimum decoder (LOD), generalized maximum likelihood (GML) decoder, and linear minimum mean square error (LMMSE) decoder, are investigated to reduce the required prior information at the receiver side, and their theoretical decoding performances are derived. Based on decoding performances and the required prior information for decoding, we discuss the preferred host domain and the preferred decoder for additive SS-based data hiding under different situations. Extensive simulations are conducted to illustrate the decoding performances of the presented decoders.
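
    The core additive-SS idea that these decoders build on can be shown in a few lines; the sketch below embeds one bit with a pseudo-random pattern and decodes it blindly by correlation (a generic illustration, not the paper's DFT-magnitude/Weibull ML decoder).

        import numpy as np

        rng = np.random.default_rng(1)
        N = 4096
        host = rng.normal(0, 10, N)        # stand-in transform coefficients
        p = rng.choice([-1.0, 1.0], N)     # secret spreading pattern (key)
        alpha = 0.8                        # embedding strength

        def embed(host, bit):
            return host + alpha * (1.0 if bit else -1.0) * p

        def blind_decode(watermarked):
            # Correlate with the key; the host contribution averages toward zero.
            return int(np.dot(watermarked, p) / N > 0)

        print(blind_decode(embed(host, 1)), blind_decode(embed(host, 0)))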

  6. Bayesian Inference: with ecological applications

    Science.gov (United States)

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  7. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

     Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification......, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...

  8. A Bayesian approach to combining animal abundance and demographic data

    Directory of Open Access Journals (Sweden)

    Brooks, S. P.

    2004-06-01

    In studies of wild animals, one frequently encounters both count and mark-recapture-recovery data. Here, we consider an integrated Bayesian analysis of ring-recovery and count data using a state-space model. We then impose a Leslie-matrix-based model on the true population counts describing the natural birth-death and age transition processes. We focus upon the analysis of both count and recovery data collected on British lapwings (Vanellus vanellus) combined with records of the number of frost days each winter. We demonstrate how the combined analysis of these data provides a more robust inferential framework and discuss how the Bayesian approach using MCMC allows us to remove the potentially restrictive normality assumptions commonly assumed for analyses of this sort. It is shown how WinBUGS may be used to perform the Bayesian analysis. WinBUGS code is provided and its performance is critically discussed.

  9. Some Bayesian statistical techniques useful in estimating frequency and density

    Science.gov (United States)

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
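
    For the frequency-of-occurrence case, Bayesian limits of this kind reduce to quantiles of a Beta posterior; a minimal sketch, assuming a uniform Beta(1,1) prior and x occupied plots out of n sampled:

        from scipy import stats

        def bayesian_frequency_limits(x, n, level=0.95):
            # With a Beta(1,1) prior and x successes in n trials, the
            # posterior on the occurrence frequency is Beta(x+1, n-x+1).
            posterior = stats.beta(x + 1, n - x + 1)
            tail = (1.0 - level) / 2.0
            return posterior.ppf(tail), posterior.ppf(1.0 - tail)

        # Example: a species found on 3 of 50 sample plots.
        print(bayesian_frequency_limits(3, 50))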

  10. PERFORMANCE OF A NEW DECODING METHOD USED IN OPEN-LOOP ALL-OPTICAL CHAOTIC COMMUNICATION SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Liu Huijie; Feng Jiuchao

    2011-01-01

    A new decoding method with a decoder is used in an open-loop all-optical chaotic communication system under strong injection conditions. The performance of the new decoding method is numerically investigated by comparing it with the common decoding method without a decoder. For the new decoding method, two cases are analyzed, depending on whether or not the output of the decoder is adjusted by its input to the receiver. The results indicate the decoding quality can be improved by adjusting for the new decoding method. Meanwhile, the injection strength of the decoder can be restricted to a certain range. The adjusted new decoding method with decoder can achieve better decoding quality than the decoding method without decoder when the bit rate of the message is under 5 Gb/s. However, a stronger injection for the receiver is needed. Moreover, the new decoding method can broaden the range of injection strength acceptable for good decoding quality. Different message encryption techniques are tested, and the result is similar to that of the common decoding method, indicating that a message encoded using Chaotic Modulation (CM) is best recovered by the new decoding method owing to the essence of this encryption technique.

  11. Dynamic Batch Bayesian Optimization

    CERN Document Server

    Azimi, Javad; Fern, Xiaoli

    2011-01-01

    Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most of the proposed algorithms in BO are sequential, where only one experiment is selected at each iteration. This method can be time-inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration causes performance inefficiency in BO compared to the sequential policies. In this paper, we present an algorithm that requests a batch of experiments at each time step t, where the batch size p_t is dynamically determined at each step. Our algorithm is based on the observation that the experiments selected by the sequential policy can sometimes be almost independent of each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading the performance. We evaluate our proposed method us...

  12. Measuring Integrated Information from the Decoding Perspective.

    Science.gov (United States)

    Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology.

  13. Measuring Integrated Information from the Decoding Perspective.

    Directory of Open Access Journals (Sweden)

    Masafumi Oizumi

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology.

  14. Performance evaluation of H.264 decoder on different processors

    Directory of Open Access Journals (Sweden)

    H.S.Prasantha

    2010-08-01

    H.264/AVC (Advanced Video Coding) is the newest video coding standard of the moving video coding experts group. The decoder is standardized by imposing restrictions on the bit stream and syntax, and by defining the process of decoding syntax elements such that every decoder conforming to the standard will produce similar output when an encoded bit stream is provided as input. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, real-time video conferencing, direct-broadcast TV (television), Blu-ray disc, DVB (Digital Video Broadcast) broadcast, streaming video and others. The paper proposes to port the H.264/AVC decoder to various processors such as TI DSP (Digital Signal Processor), ARM (Advanced RISC Machines) and P4 (Pentium) processors. The paper also proposes to analyze and compare video quality metrics for different encoded video sequences. The paper proposes to investigate the decoder performance on different processors with and without the deblocking filter and to compare the performance based on different video quality measures.

  15. Combinatorial limitations of a strong form of list decoding

    CERN Document Server

    Guruswami, Venkatesan

    2012-01-01

    We prove the following results concerning the combinatorics of list decoding, motivated by the exponential gap between the known upper bound (of O(1/γ)) and lower bound (of Ω_p(log(1/γ))) for the list-size needed to decode up to radius p with rate γ away from capacity, i.e., 1-h(p)-γ (here p ∈ (0,1/2) and γ > 0). We prove that in any binary code C ⊆ {0,1}^n of rate 1-h(p)-γ, there must exist a set L ⊂ C of Ω_p(1/√γ) codewords such that the average distance of the points in L from their centroid is at most pn. In other words, there must exist Ω_p(1/√γ) codewords with low "average radius". The motivation for this result is that it gives a list-size lower bound for a strong notion of list decoding which has implicitly been used in the previous negative results for list decoding. (The usual notion of list decoding corresponds to replacing average radius by the min...

  16. Bayesian inference from count data using discrete uniform priors.

    Science.gov (United States)

    Comoglio, Federico; Fracchia, Letizia; Rinaldi, Maurizio

    2013-01-01

    We consider a set of sample counts obtained by sampling arbitrary fractions of a finite volume containing a homogeneously dispersed population of identical objects. We report a Bayesian derivation of the posterior probability distribution of the population size using a binomial likelihood and non-conjugate, discrete uniform priors under sampling with or without replacement. Our derivation yields a computationally feasible formula that can prove useful in a variety of statistical problems involving absolute quantification under uncertainty. We implemented our algorithm in the R package dupiR and compared it with a previously proposed Bayesian method based on a Gamma prior. As a showcase, we demonstrate that our inference framework can be used to estimate bacterial survival curves from measurements characterized by extremely low or zero counts and rather high sampling fractions. All in all, we provide a versatile, general purpose algorithm to infer population sizes from count data, which can find application in a broad spectrum of biological and physical problems.
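
    A minimal sketch of the underlying computation, assuming sampling with replacement, a known sampled fraction f, and an illustrative grid bound Nmax (this is not the dupiR package itself):

        from math import comb

        def posterior_N(k, f, Nmax=500):
            """P(N | k) for k counted objects when a fraction f of the
            volume is sampled, with N uniform on {k, ..., Nmax}."""
            lik = {N: comb(N, k) * f**k * (1.0 - f)**(N - k)
                   for N in range(k, Nmax + 1)}
            Z = sum(lik.values())
            return {N: L / Z for N, L in lik.items()}

        post = posterior_N(k=12, f=0.1)
        print(sum(N * p for N, p in post.items()))   # posterior mean of N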

  17. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
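
    The Gaussian linear-inverse machinery behind such explicit posteriors is compact: for d = G m + e with prior m ~ N(mu, Sm) and noise e ~ N(0, Se), the posterior mean and covariance follow in closed form. The toy G below is an illustrative stand-in, not the convolutional/Zoeppritz operator itself.

        import numpy as np

        def gaussian_posterior(G, d, mu, Sm, Se):
            S_d = G @ Sm @ G.T + Se                 # data covariance
            K = Sm @ G.T @ np.linalg.inv(S_d)       # "gain" matrix
            post_mean = mu + K @ (d - G @ mu)
            post_cov = Sm - K @ G @ Sm
            return post_mean, post_cov

        G = np.array([[1.0, 0.5, 0.1],
                      [0.8, 1.0, 0.3]])             # toy forward operator
        m_hat, C = gaussian_posterior(G, np.array([0.7, 0.9]),
                                      np.zeros(3), np.eye(3), 0.05 * np.eye(2))
        print(m_hat, np.sqrt(np.diag(C)))           # estimates with uncertainties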

  18. Bayesian microsaccade detection

    Science.gov (United States)

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483

  19. Maximum margin Bayesian network classifiers.

    Science.gov (United States)

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  20. HIGH THROUGHPUT OF MAP PROCESSOR USING PIPELINE WINDOW DECODING

    Directory of Open Access Journals (Sweden)

    P. Nithya

    2012-11-01

    Turbo codes are among the most efficient error-correcting codes, approaching the Shannon limit. High throughput in a turbo decoder can be achieved by parallelizing several Soft Input Soft Output (SISO) units. In this way, multiple SISO decoders work on the same data frame at the same time, and each delivers soft outputs that can be split into three terms: the soft channel and a priori inputs and the extrinsic value. The extrinsic value is used for the next iteration. We present a high-throughput Max-Log-MAP processor that supports both single-binary (SB) and double-binary (DB) convolutional turbo codes. Decoding these codes, however, is an iterative process that requires a high computation rate and incurs latency, so serial processing techniques are used to achieve high throughput and reduce latency. Pipeline window (PW) decoding is introduced to support arbitrary frame sizes with high throughput.
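
    The difference between full Log-MAP and Max-Log-MAP decoding comes down to the max* operation; the sketch below contrasts the exact Jacobian logarithm with the max-only approximation (illustrative values; no claim about the specific processor above):

        import math

        def max_star(a, b):
            # Exact Jacobian logarithm: log(exp(a) + exp(b)).
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))

        def max_log(a, b):
            # Max-Log-MAP keeps only the max, simplifying hardware
            # at a small cost in accuracy.
            return max(a, b)

        for a, b in [(2.0, 1.5), (3.0, -1.0)]:
            print(a, b, round(max_star(a, b), 4), max_log(a, b))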

  1. Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits

    Science.gov (United States)

    Haghparast, Majid; Monfared, Asma Taheri

    2017-02-01

    Multiple valued logic is a promising approach to reduce the width of reversible or quantum circuits; moreover, quaternary logic is considered a good choice for future quantum computing technology, since it is very suitable for the encoded realization of binary logic functions through its grouping of 2 bits together into quaternary values. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we have initially designed a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. Then we have presented quantum realizations of the quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits in this paper have a lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the scales applied in this paper are based on nanometric area.

  2. SERS decoding of micro gold shells moving in microfluidic systems.

    Science.gov (United States)

    Lee, Saram; Joo, Segyeong; Park, Sejin; Kim, Soyoun; Kim, Hee Chan; Chung, Taek Dong

    2010-05-01

    In this study, in situ surface-enhanced Raman scattering (SERS) decoding was demonstrated in microfluidic chips using novel thin micro gold shells modified with Raman tags. The micro gold shells were fabricated using electroless gold plating on PMMA beads with a diameter of 15 μm. These shells were carefully optimized to produce the maximum SERS intensity, which minimized the exposure time for quick and safe decoding. The shell surfaces produced well-defined SERS spectra even at an extremely short exposure time, 1 ms, for a single micro gold shell combined with Raman tags such as 2-naphthalenethiol and benzenethiol. The consecutive SERS spectra from a variety of combinations of Raman tags were successfully acquired from the micro gold shells moving in 25 μm deep and 75 μm wide channels on a glass microfluidic chip. The proposed functionalized micro gold shells exhibited the potential of an on-chip microfluidic SERS decoding strategy for micro suspension arrays.

  3. Joint scheduling and resource allocation for multiple video decoding tasks

    Science.gov (United States)

    Foo, Brian; van der Schaar, Mihaela

    2008-01-01

    In this paper we propose a joint resource allocation and scheduling algorithm for video decoding on a resource-constrained system. By decomposing a multimedia task into decoding jobs using quality-driven priority classes, we demonstrate using queuing theoretic analysis that significant power savings can be achieved under small video quality degradation without requiring the encoder to adapt its transmitted bitstream. Based on this scheduling algorithm, we propose an algorithm for maximizing the sum of video qualities in a multiple task environment, while minimizing system energy consumption, without requiring tasks to reveal information about their performances to the system or to other potentially exploitative applications. Importantly, we offer a method to optimize the performance of multiple video decoding tasks on an energy-constrained system, while protecting private information about the system and the applications.

  4. Ternary Tree and Memory-Efficient Huffman Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2011-01-01

    Full Text Available In this study, the focus was on the use of a ternary tree over a binary tree. A new one-pass algorithm for decoding adaptive Huffman ternary tree codes was implemented. To reduce memory size and speed up the search for a symbol in a Huffman tree, we exploited the properties of the encoded symbols and proposed a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree. The first algorithm finds the starting and ending address of each code to determine its length; the second algorithm then decodes the ternary tree code using a binary search method.
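
    The length-table idea can be sketched with a canonical base-3 code: assign codewords from per-symbol lengths, then recognize a codeword as soon as the accumulated digit value falls inside that length's address range. This is a minimal sketch in the spirit of the paper's approach, not its exact data structure or binary-search step:

      from collections import Counter

      def make_canonical(lengths):
          """Canonical base-3 code from per-symbol ternary codeword lengths.
          Returns symbols sorted by (length, symbol) plus, per length L,
          the first code value, first symbol index, and symbol count."""
          syms = sorted(lengths, key=lambda s: (lengths[s], s))
          count = Counter(lengths[s] for s in syms)
          first_code, first_index = {}, {}
          code, prev = 0, 0
          for i, s in enumerate(syms):
              L = lengths[s]
              code *= 3 ** (L - prev)        # left-shift in base 3
              if L != prev:
                  first_code[L], first_index[L], prev = code, i, L
              code += 1
          return syms, first_code, first_index, count

      def decode(digits, syms, first_code, first_index, count):
          """Decode a stream of ternary digits (0/1/2), one symbol at a time."""
          out, val, n = [], 0, 0
          for d in digits:
              val, n = 3 * val + d, n + 1
              # a codeword of length n occupies a contiguous value range
              if n in first_code and first_code[n] <= val < first_code[n] + count[n]:
                  out.append(syms[first_index[n] + val - first_code[n]])
                  val, n = 0, 0
          return out

      # Example with ternary Kraft-valid lengths: a,b -> 1 digit; c,d,e -> 2
      tables = make_canonical({'a': 1, 'b': 1, 'c': 2, 'd': 2, 'e': 2})
      print(decode([2, 1, 0], *tables))      # ['d', 'a']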

  5. A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion

    Directory of Open Access Journals (Sweden)

    Johnny W. H. Kao

    2009-01-01

    Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was designed in order to compare the Bit Error Rate (BER) performance of the RNN decoder with a conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that this novel algorithm can achieve the same bit error rate with lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data rate transmission. These characteristics are inherited from the neural network structure of the decoder and the iterative nature of the algorithm, and allow it to outperform the conventional VA.

  6. On the Optimality of Successive Decoding in Compress-and-Forward Relay Schemes

    CERN Document Server

    Wu, Xiugang

    2010-01-01

    In the classical compress-and-forward relay scheme developed by (Cover and El Gamal, 1979), the decoding process operates in a successive way: the destination first decodes the compressed observation of the relay, and then decodes the original message of the source. Recently, two modified compress-and-forward relay schemes were proposed, and in both of them, the destination jointly decodes the compressed observation of the relay and the original message, instead of successively. Such a modification on the decoding process was motivated by realizing that it is generally easier to decode the compressed observation jointly with the original message, and more importantly, the original message can be decoded even without completely decoding the compressed observation. However, the question remains whether this freedom of choosing a higher compression rate at the relay improves the achievable rate of the original message. It has been shown in (El Gamal and Kim, 2010) that the answer is negative in the single relay ...

  7. A Concatenated ML Decoder for ST/SFBC-OFDM Systems in Double Selective Fading Channels

    Institute of Scientific and Technical Information of China (English)

    李明齐; 张文军

    2004-01-01

    This paper presents a concatenated maximum-likelihood (ML) decoder for space-time/space-frequency block coded orthogonal frequency division multiplexing (ST/SFBC-OFDM) systems in doubly selective fading channels. The proposed decoder first detects space-time or space-frequency codeword elements separately. Then, according to the coarsely estimated codeword elements, ML decoding is performed over a smaller constellation element set to search for the final codeword. It is proved that the proposed decoder has optimal performance if and only if the subchannels are constant during a codeword interval. The simulation results show that the performance of the proposed decoder is close to that of the optimal ML decoder in severe Doppler and delay spread channels, while its complexity is much lower than that of the optimal ML decoder.

  8. Multistep Linear Programming Approaches for Decoding Low-Density Parity-Check Codes

    Institute of Scientific and Technical Information of China (English)

    LIU Haiyang; MA Lianrong; CHEN Jie

    2009-01-01

    The problem of improving the performance of linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered in this paper. A multistep linear programming (MLP) algorithm was developed for decoding LDPC codes at the cost of a slight increase in computational complexity. The MLP decoder adaptively adds new constraints which are compatible with a selected check node to refine the results when an error is reported by the original LP decoder. The MLP decoder result is shown to have the maximum-likelihood (ML) certificate property. Simulations with moderate block length LDPC codes suggest that the MLP decoder gives better performance than both the original LP decoder and the conventional sum-product (SP) decoder.

  9. A Discrete Time Markov Chain Model for High Throughput Bidirectional Fano Decoders

    CERN Document Server

    Xu, Ran; Morris, Kevin; Kocak, Taskin

    2010-01-01

    The bidirectional Fano algorithm (BFA) can achieve at least twice the decoding throughput of the conventional unidirectional Fano algorithm (UFA). In this paper, bidirectional Fano decoding is examined from the queuing theory perspective. A Discrete Time Markov Chain (DTMC) is employed to model the BFA decoder with a finite input buffer. The relationship between the input data rate, the input buffer size and the clock speed of the BFA decoder is established. The DTMC-based modelling can be used in designing a high throughput parallel BFA decoding system. It is shown that there is a tradeoff between the number of BFA decoders and the input buffer size, and an optimal input buffer size can be chosen to minimize the hardware complexity for a target decoding throughput.
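
    A minimal queueing sketch of the kind of DTMC analysis described above: a birth-death chain over buffer occupancy, whose stationary distribution gives a proxy for the overflow probability. The arrival/service probabilities and chain structure are illustrative assumptions, not the paper's exact BFA model:

      import numpy as np

      def stationary(P):
          """Stationary distribution of a DTMC: left eigenvector for eigenvalue 1."""
          vals, vecs = np.linalg.eig(P.T)
          pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
          return pi / pi.sum()

      def buffer_chain(B, p_arrive, p_serve):
          """Birth-death chain on buffer occupancy 0..B: at most one symbol
          arrives and at most one is consumed per clock cycle (illustrative)."""
          P = np.zeros((B + 1, B + 1))
          for n in range(B + 1):
              up = p_arrive * (1 - p_serve) if n < B else 0.0
              down = p_serve * (1 - p_arrive) if n > 0 else 0.0
              P[n, min(n + 1, B)] += up
              P[n, max(n - 1, 0)] += down
              P[n, n] += 1.0 - up - down
          return P

      pi = stationary(buffer_chain(B=32, p_arrive=0.45, p_serve=0.5))
      print("P(buffer full) ~", pi[-1])   # proxy for overflow probability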

  10. A hidden Markov model for decoding and the analysis of replay in spike trains.

    Science.gov (United States)

    Box, Marc; Jones, Matt W; Whiteley, Nick

    2016-12-01

    We present a hidden Markov model that describes variation in an animal's position associated with varying levels of activity in action potential spike trains of individual place cell neurons. The model incorporates a coarse-graining of position, which we find to be a more parsimonious description of the system than other models. We use a sequential Monte Carlo algorithm for Bayesian inference of model parameters, including the state space dimension, and we explain how to estimate position from spike train observations (decoding). We obtain greater accuracy over other methods in the conditions of high temporal resolution and small neuronal sample size. We also present a novel, model-based approach to the study of replay: the expression of spike train activity related to behaviour during times of motionlessness or sleep, thought to be integral to the consolidation of long-term memories. We demonstrate how we can detect the time, information content and compression rate of replay events in simulated and real hippocampal data recorded from rats in two different environments, and verify the correlation between the times of detected replay events and of sharp wave/ripples in the local field potential.
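
    The decoding step can be illustrated with a standard Bayesian position decoder over coarse-grained position states, assuming conditionally independent Poisson spiking; the paper's full model adds a hidden Markov state sequence and sequential Monte Carlo inference on top of this kind of likelihood:

      import numpy as np

      def decode_position(counts, rates, dt, prior=None):
          """Posterior over discretized position states for one time bin.

          counts: (n_cells,) spike counts in the bin
          rates:  (n_states, n_cells) place-field rates (Hz) per position state
          dt:     bin width (s); assumes conditionally independent Poisson spiking
          """
          lam = rates * dt                                  # expected counts
          log_like = (counts * np.log(lam + 1e-12) - lam).sum(axis=1)
          if prior is None:
              prior = np.full(rates.shape[0], 1.0 / rates.shape[0])
          log_post = log_like + np.log(prior)
          log_post -= log_post.max()                        # numerical stability
          post = np.exp(log_post)
          return post / post.sum()

      # For HMM-style filtering, each bin's prior would be the state
      # transition matrix applied to the previous posterior: prior = T.T @ post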

  11. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  12. Sparsity-Aware Sphere Decoding: Algorithms and Complexity Analysis

    Science.gov (United States)

    Barik, Somsubhra; Vikalo, Haris

    2014-05-01

    Integer least-squares problems, concerned with solving a system of equations where the components of the unknown vector are integer-valued, arise in a wide range of applications. In many scenarios the unknown vector is sparse, i.e., a large fraction of its entries are zero. Examples include applications in wireless communications, digital fingerprinting, and array-comparative genomic hybridization systems. Sphere decoding, commonly used for solving integer least-squares problems, can utilize the knowledge about sparsity of the unknown vector to perform computationally efficient search for the solution. In this paper, we formulate and analyze the sparsity-aware sphere decoding algorithm that imposes an $\ell_0$-norm constraint on the admissible solution. Analytical expressions for the expected complexity of the algorithm for alphabets typical of sparse channel estimation and source allocation applications are derived and validated through extensive simulations. The results demonstrate superior performance and speed of sparsity-aware sphere decoder compared to the conventional sparsity-unaware sphere decoding algorithm. Moreover, variance of the complexity of the sparsity-aware sphere decoding algorithm for binary alphabets is derived. The search space of the proposed algorithm can be further reduced by imposing lower bounds on the value of the objective function. The algorithm is modified to allow for such a lower bounding technique and simulations illustrating efficacy of the method are presented. Performance of the algorithm is demonstrated in an application to sparse channel estimation, where it is shown that sparsity-aware sphere decoder performs close to theoretical lower limits.
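
    The core idea of imposing an $\ell_0$ constraint inside the depth-first sphere search can be sketched as follows. This is a simplified illustration assuming a tall full-rank H and a small alphabet containing zero; the paper's algorithm, complexity analysis, and lower-bounding refinements go well beyond it:

      import numpy as np

      def sparse_sphere_decode(y, H, alphabet=(-1, 0, 1), max_nonzero=2):
          """Depth-first search for argmin ||y - Hx||^2 subject to ||x||_0 <= k."""
          Q, R = np.linalg.qr(H)          # reduced QR: ||y-Hx||^2 = ||z-Rx||^2 + const
          z = Q.T @ y
          n = H.shape[1]
          best = {"cost": np.inf, "x": None}

          def search(level, x, cost, nonzeros):
              if cost >= best["cost"]:
                  return                  # prune: outside the current sphere
              if level < 0:
                  best["cost"], best["x"] = cost, x.copy()
                  return
              for s in alphabet:
                  nz = nonzeros + (s != 0)
                  if nz > max_nonzero:
                      continue            # prune: violates the l0 constraint
                  x[level] = s
                  r = z[level] - R[level, level:] @ x[level:]
                  search(level - 1, x, cost + r * r, nz)
              x[level] = 0

          search(n - 1, np.zeros(n), 0.0, 0)
          return best["x"], best["cost"]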

  13. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS as well as R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...

  14. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  15. Attention in a bayesian framework

    DEFF Research Database (Denmark)

    Whiteley, Louise Emma; Sahani, Maneesh

    2012-01-01

    The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention - unifying diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental

  16. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre; C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as it has been observed.

  17. Conventional Tanner Graph for Recursive onvolutional Codes and Associated Decoding

    Institute of Scientific and Technical Information of China (English)

    SUN Hong

    2001-01-01

    A different representation of recursive systematic convolutional (RSC) codes is proposed. This representation can be realized by a conventional Tanner graph. The graph becomes a tree by introducing a hidden edge. It is shown that the sum-product algorithm applied to this graph model is equivalent to the BCJR algorithm for turbo decoding, with lower computational complexity. The message-passing chain of the BCJR algorithm is presented more exactly in the graph. In addition, the proposed representation of RSC codes provides an efficient method to set up the trellis, and the conventional Tanner graph for RSC codes directly provides the architecture for decoding.

  18. PERFORMANCE OF THREE STAGE TURBO-EQUALIZATION-DECODING

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya

    2003-01-01

    An increasing demand for high data rate transmission and protection over bandlimited channels with severe inter-symbol interference has resulted in a flurry of activity to improve channel equalization. In conjunction with equalization, channel coding-decoding can be employed to improve system performance. In this letter, the performance of three-stage turbo equalization-decoding employing the log maximum a posteriori probability algorithm is experimentally evaluated with a fading simulator. The BER is evaluated using various information sequence and interleaver sizes, taking into account that the communication medium is a noisy inter-symbol interference channel.

  19. Optimal encoding and decoding of a spin direction

    CERN Document Server

    Bagán, E; Brey, A; Muñoz-Tàpia, R; Tarrach, Rolf

    2001-01-01

    For a system of N spins 1/2 there are quantum states that can encode a direction in an intrinsic way. Information on this direction can later be decoded by means of a quantum measurement. We present here the optimal encoding and decoding procedure using the fidelity as a figure of merit. We compute the maximal fidelity and prove that it is directly related to the largest zeroes of the Legendre and Jacobi polynomials. We show that this maximal fidelity approaches unity quadratically in 1/N. We also discuss this result in terms of the dimension of the encoding Hilbert space.

  20. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    Science.gov (United States)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-control FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by
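
    A minimal sketch of the training idea, assuming binned firing rates and precomputed 'optimal' EMG targets for the hold periods; the muscle pulling-direction model that produces those targets, and the paper's exact decoder form, are not shown:

      import numpy as np

      def train_emg_decoder(neural, emg_opt, lr=1e-3, epochs=200):
          """Fit a linear neuron-to-EMG map by gradient descent on MSE.

          neural:  (T, n_units) binned firing rates from target-hold periods
          emg_opt: (T, n_muscles) inferred 'optimal' EMG patterns that attain
                   each target force while minimizing overall muscle activity
          """
          W = np.zeros((neural.shape[1], emg_opt.shape[1]))
          for _ in range(epochs):
              grad = neural.T @ (neural @ W - emg_opt) / len(neural)  # dMSE/dW
              W -= lr * grad
          return W

      # Predicted EMG for new neural data is then simply: emg_hat = neural @ W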

  1. Algebraic Fast-Decodable Relay Codes for Distributed Communications

    CERN Document Server

    Hollanti, Camilla

    2012-01-01

    In this paper, fast-decodable lattice code constructions are designed for the nonorthogonal amplify-and-forward (NAF) multiple-input multiple-output (MIMO) channel. The constructions are based on different types of algebraic structures, e.g. quaternion division algebras. When satisfying certain properties, these algebras provide us with codes whose structure naturally reduces the decoding complexity. The complexity can be further reduced by shortening the block length, i.e., by considering rectangular codes called less than minimum delay (LMD) codes.

  2. New Iterated Decoding Algorithm Based on Differential Frequency Hopping System

    Institute of Scientific and Technical Information of China (English)

    LIANG Fu-lin; LUO Wei-xiong

    2005-01-01

    A new iterated decoding algorithm is proposed for differential frequency hopping (DFH) encoder concatenated with multi-frequency shift-key (MFSK) modulator. According to the character of the frequency hopping (FH) pattern trellis produced by DFH function, maximum a posteriori (MAP) probability theory is applied to realize the iterate decoding of it. Further, the initial conditions for the new iterate algorithm based on MAP algorithm are modified for better performance. Finally, the simulation result compared with that from traditional algorithms shows good anti-interference performance.

  3. EXIT Chart Analysis of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2007-01-01

    Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can be extended to channels with larger output alphabets. By increasing the output alphabet from hard decisions to four symbols, a gain of more than 1.0 dB is achieved using optimized codes. For this code optimization, the mixing property of EXIT functions has to be modified to the case of binary message-passing

  4. Decoding Brain States Based on Magnetoencephalography From Prespecified Cortical Regions.

    Science.gov (United States)

    Zhang, Jinyin; Li, Xin; Foldes, Stephen T; Wang, Wei; Collinger, Jennifer L; Weber, Douglas J; Bagić, Anto

    2016-01-01

    Brain state decoding based on whole-head MEG has been extensively studied over the past decade. Recent MEG applications pose an emerging need of decoding brain states based on MEG signals originating from prespecified cortical regions. Toward this goal, we propose a novel region-of-interest-constrained discriminant analysis algorithm (RDA) in this paper. RDA integrates linear classification and beamspace transformation into a unified framework by formulating a constrained optimization problem. Our experimental results based on human subjects demonstrate that RDA can efficiently extract the discriminant pattern from prespecified cortical regions to accurately distinguish different brain states.

  5. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    Science.gov (United States)

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus.

  6. Encoding and decoding amplitude-modulated cochlear implant stimuli--a point process analysis.

    Science.gov (United States)

    Goldwyn, Joshua H; Shea-Brown, Eric; Rubinstein, Jay T

    2010-06-01

    Cochlear implant speech processors stimulate the auditory nerve by delivering amplitude-modulated electrical pulse trains to intracochlear electrodes. Studying how auditory nerve cells encode modulation information is of fundamental importance, therefore, to understanding cochlear implant function and improving speech perception in cochlear implant users. In this paper, we analyze simulated responses of the auditory nerve to amplitude-modulated cochlear implant stimuli using a point process model. First, we quantify the information encoded in the spike trains by testing an ideal observer's ability to detect amplitude modulation in a two-alternative forced-choice task. We vary the amount of information available to the observer to probe how spike timing and averaged firing rate encode modulation. Second, we construct a neural decoding method that predicts several qualitative trends observed in psychophysical tests of amplitude modulation detection in cochlear implant listeners. We find that modulation information is primarily available in the sequence of spike times. The performance of an ideal observer, however, is inconsistent with observed trends in psychophysical data. Using a neural decoding method that jitters spike times to degrade its temporal resolution and then computes a common measure of phase locking from spike trains of a heterogeneous population of model nerve cells, we predict the correct qualitative dependence of modulation detection thresholds on modulation frequency and stimulus level. The decoder does not predict the observed loss of modulation sensitivity at high carrier pulse rates, but this framework can be applied to future models that better represent auditory nerve responses to high carrier pulse rate stimuli. The supplemental material of this article contains the article's data in an active, re-usable format.
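
    The phase-locking computation described above can be sketched as vector strength measured after jittering spike times, vector strength being one common measure of phase locking; the exact measure and population pooling used in the paper may differ:

      import numpy as np

      def vector_strength(spike_times, mod_freq):
          """Phase locking to a sinusoidal modulator: 1 = perfect, 0 = none."""
          phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
          return np.abs(np.mean(np.exp(1j * phases)))

      def jittered_vector_strength(spike_times, mod_freq, jitter_sd, seed=0):
          """Degrade temporal resolution with Gaussian jitter before measuring
          phase locking, mimicking the decoder described above."""
          rng = np.random.default_rng(seed)
          jittered = np.asarray(spike_times) + rng.normal(
              0.0, jitter_sd, size=len(spike_times))
          return vector_strength(jittered, mod_freq)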

  7. Bayesian Missile System Reliability from Point Estimates

    Science.gov (United States)

    2014-10-28

    This report uses the Maximum Entropy Principle (MEP) to convert point estimates to probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates this approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.

  8. Perception, illusions and Bayesian inference.

    Science.gov (United States)

    Nour, Matthew M; Nour, Joseph M

    2015-01-01

    Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.

  9. Bayesian test and Kuhn's paradigm

    Institute of Scientific and Technical Information of China (English)

    Chen Xiaoping

    2006-01-01

    Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn greatly underrated the function of scientific testing in his pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.

  10. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  11. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    David G. Daut

    2007-03-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.
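
    The feedback step can be sketched as a simple LLR reweighting, where bits that the JPEG2000 decoder confirms as lying in error-free coding passes get their reliabilities boosted before the next sum-product iteration; the weight schedule as a function of channel condition is the paper's design choice and is not modeled here:

      import numpy as np

      def reweight_llrs(llrs, error_free_mask, weight):
          """Boost reliabilities of bits reported by the source decoder as
          belonging to error-free JPEG2000 coding passes (weight >= 1)."""
          llrs = np.array(llrs, dtype=float)
          llrs[error_free_mask] *= weight
          return llrs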

  12. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Weiliang

    2007-01-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  13. Homogeneous Interpolation Problem and Key Equation for Decoding Reed-Solomon Codes

    Institute of Scientific and Technical Information of China (English)

    忻鼎稼

    1994-01-01

    The concept of the homogeneous interpolation problem (HIP) over fields is introduced. It is discovered that solving HIP over finite fields is equivalent to decoding Reed-Solomon (RS) codes. The Welch-Berlekamp algorithm for decoding RS codes is derived; besides, by introducing the concept of the incomplete locator of error patterns, an algorithm called incomplete iterative decoding is established.

  14. 47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.

    Science.gov (United States)

    2010-10-01

    47 CFR § 11.12 (2010-10-01), EMERGENCY ALERT SYSTEM (EAS), General: Two-tone Attention Signal encoder and decoder. Existing two-tone Attention Signal decoders will no longer be required, and the two-tone Attention Signal will be used...

  15. Joint source/channel iterative arithmetic decoding with JPEG 2000 image transmission application

    Science.gov (United States)

    Zaibi, Sonia; Zribi, Amin; Pyndiah, Ramesh; Aloui, Nadia

    2012-12-01

    Motivated by recent results in Joint Source/Channel coding and decoding, we consider the decoding problem of Arithmetic Codes (AC). In fact, in this article we provide different approaches which allow one to unify the arithmetic decoding and error correction tasks. A novel length-constrained arithmetic decoding algorithm based on Maximum A Posteriori sequence estimation is proposed. The latter is based on soft-input decoding using a priori knowledge of the source-symbol sequence and the compressed bit-stream lengths. Performance in the case of transmission over an Additive White Gaussian Noise channel is evaluated in terms of Packet Error Rate. Simulation results show that the proposed decoding algorithm leads to significant performance gain while exhibiting very low complexity. The proposed soft input arithmetic decoder can also generate additional information regarding the reliability of the compressed bit-stream components. We consider the serial concatenation of the AC with a Recursive Systematic Convolutional Code, and perform iterative decoding. We show that, compared to tandem and to trellis-based Soft-Input Soft-Output decoding schemes, the proposed decoder exhibits the best performance/complexity tradeoff. Finally, the practical relevance of the presented iterative decoding system is validated under an image transmission scheme based on the JPEG 2000 standard and excellent results in terms of decoded image quality are obtained.

  16. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs  are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance....

  17. A Bayesian Nonparametric Approach to Test Equating

    Science.gov (United States)

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  18. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  19. Bayesian networks and food security - An introduction

    NARCIS (Netherlands)

    Stein, A.

    2004-01-01

    This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision support...

  20. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have b...

  1. TC81220F (HAWK) MPEG 2 system decoder LSI; MPEG2 system decoder LSI TC81220F (HAWK)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    Satellite, cable, and terrestrial broadcasting have gradually been digitized. In video and audio data communication, the MPEG2 (Moving Picture Experts Group 2) technology for compression and decompression is important. A system LSI that includes an MPEG2 decoder in a receiving set (Set Top Box) is required for each type of broadcasting. To reduce the system cost, Toshiba developed a TX3904 MCU (microcontroller) that controls the system, a transport processor that selects the packet-multiplexed data, and the TC81220F (HAWK), which puts the MPEG2 audio and video decoders on one chip. (translated by NEDO)

  2. Real Time Decoding of Color Symbol for Optical Positioning System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2015-01-01

    Full Text Available This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal to noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor having a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.

  3. Error Locked Encoder and Decoder for Nanomemory Application

    Directory of Open Access Journals (Sweden)

    Y. Sharath

    2014-03-01

    Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^-18 upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^11 bit/cm^2 with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.

  4. Method for Viterbi decoding of large constraint length convolutional codes

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun

    1988-05-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a chosen block-length parameter and K is the constraint length. The surviving path at the end of each NK interval is selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
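
    A sketch of the per-block trace-back step under common convolutional-code conventions (state = last K-1 input bits); the surrounding array management and the selection of a path every NK units would wrap around this routine:

      def traceback_block(drop_bits, end_state, K):
          """Trace one NK-step block of Viterbi survivor memory.

          Conventions assumed: new_state = ((old_state << 1) | input_bit)
          mod 2**(K-1), so the decoded input at step t is the LSB of the
          state at step t+1. drop_bits[t][s] is the bit shifted out of the
          survivor path entering state s at step t+1, recorded during
          add-compare-select.
          """
          bits, state = [], end_state
          for t in range(len(drop_bits) - 1, -1, -1):
              bits.append(state & 1)                         # input bit u_t
              state = (state >> 1) | (drop_bits[t][state] << (K - 2))
          return bits[::-1]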

  5. Decoding a combined amplitude modulated and frequency modulated signal

    DEFF Research Database (Denmark)

    2015-01-01

    The present disclosure relates to a method for decoding a combined AM/FM encoded signal, comprising the steps of: combining said encoded optical signal with light from a local oscillator configured with a local oscillator frequency; converting the combined local oscillator and encoded optical sig...

  6. Fast decoding of codes from algebraic plane curves

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd;

    1992-01-01

    Improvement to an earlier decoding algorithm for codes from algebraic geometry is presented. For codes from an arbitrary regular plane curve the authors correct up to d*/2 - m^2/8 + m/4 - 9/8 errors, where d* is the designed distance of the code and m is the degree of the curve. The complexity of finding

  7. Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes

    OpenAIRE

    Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi

    2007-01-01

    A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
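
    A minimal single-flip GDBF sketch for BPSK over AWGN, following the usual objective f(x) = sum_k x_k*y_k + sum_m prod_{k in N(m)} x_k; the paper also develops variants (e.g., multi-bit flipping) not shown here:

      import numpy as np

      def gdbf_decode(y, H, max_iters=100):
          """Single-flip gradient descent bit flipping for an LDPC code.

          y: received channel values; H: (m, n) parity-check matrix (0/1).
          Flips the bit with the smallest local 'inverse function' value,
          i.e., the steepest ascent direction of the objective above.
          """
          H = np.asarray(H)
          x = np.where(np.asarray(y) >= 0, 1, -1)
          for _ in range(max_iters):
              # bipolar value of each check: product of its participating bits
              check = np.array([np.prod(x[row == 1]) for row in H])
              if np.all(check == 1):
                  break                           # all parity checks satisfied
              inv = x * y + (H * check[:, None]).sum(axis=0)
              x[np.argmin(inv)] *= -1             # steepest single flip
          return (x < 0).astype(int)              # map +1 -> 0, -1 -> 1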

  8. Polar Coding with CRC-Aided List Decoding

    Science.gov (United States)

    2015-08-01

    decoder estimates û1, ..., ûN, one at a time, in order. For convenience, we introduce some non-standard notation. For any k ≤ N and any sequence of... recursive formula, but the true probability cannot be computed efficiently. The estimate differs from the true probability because the recursive formula

  9. Name that tune: decoding music from the listening brain.

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.; Blokland, Y.M.; Sadakata, M.; Desain, P.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different...

  10. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the state space is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  11. Relationships between grammatical encoding and decoding : an experimental psycholinguistic study

    NARCIS (Netherlands)

    Olsthoorn, Nomi Maria

    2007-01-01

    Although usually considered distinct processes, grammatical encoding and decoding have many theoretical and empirical commonalities. In two series of experiments relationships between the two processes are explored. The first series uses a dual task (edited reading aloud (ERA)) paradigm to test the

  12. Name that tune: Decoding music from the listening brain

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different...

  13. The Fluid Reading Primer: Animated Decoding Support for Emergent Readers.

    Science.gov (United States)

    Zellweger, Polle T.; Mackinlay, Jock D.

    A prototype application called the Fluid Reading Primer was developed to help emergent readers with the process of decoding written words into their spoken forms. The Fluid Reading Primer is part of a larger research project called Fluid Documents, which is exploring the use of interactive animation of typography to show additional information in…

  14. Sub-quadratic decoding of one-point hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power...

  15. Bayesian stable isotope mixing models

    Science.gov (United States)

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...

  16. Naive Bayesian for Email Filtering

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The paper presents a method of email filtering based on naive Bayesian theory that can effectively filter junk mail and illegal mail. Furthermore, the key implementation issues are discussed in detail. The filtering model is obtained from a training set of email. The filtering can be done without the user's specification of filtering rules.
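
    A minimal sketch of such a filter, trained from a labeled set of emails with Laplace smoothing; tokenization and the paper's specific implementation details are simplified away:

      import math
      from collections import Counter

      class NaiveBayesFilter:
          """Minimal word-count naive Bayes mail filter with Laplace smoothing."""

          def fit(self, emails, labels):            # labels: 'spam' / 'ham'
              n = len(labels)
              self.vocab = {w for e in emails for w in e.split()}
              self.prior, self.counts, self.total = {}, {}, {}
              for c in ('spam', 'ham'):
                  docs = [e for e, l in zip(emails, labels) if l == c]
                  self.prior[c] = math.log(len(docs) / n)
                  self.counts[c] = Counter(w for e in docs for w in e.split())
                  self.total[c] = sum(self.counts[c].values())
              return self

          def predict(self, email):
              def score(c):                          # log P(c) + sum log P(w|c)
                  return self.prior[c] + sum(
                      math.log((self.counts[c][w] + 1)
                               / (self.total[c] + len(self.vocab)))
                      for w in email.split())
              return max(('spam', 'ham'), key=score)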

  17. Bayesian analysis of binary sequences

    Science.gov (United States)

    Torney, David C.

    2005-03-01

    This manuscript details Bayesian methodology for "learning by example", with binary n-sequences encoding the objects under consideration. Priors prove influential; conformable priors are described. Laplace approximation of Bayes integrals yields posterior likelihoods for all n-sequences. This involves the optimization of a definite function over a convex domain--efficiently effectuated by the sequential application of the quadratic program.

  18. Bayesian NL interpretation and learning

    NARCIS (Netherlands)

    Zeevat, H.

    2011-01-01

    Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by a very high degree of ambiguity and the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language

  19. ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY

    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa

    2013-01-01

    Full Text Available The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario utilized (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process, applying the chi-squared test to verify the existence of dependence between variables in the data model, in order to identify the reasons which prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike other existing work, as well as adjustments in the case of limit values between two adjacent classes. Furthermore, variable weights, calculated with a mutual information function, are used in the calculation of a posteriori probabilities. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in error rate was observed. The naïve Bayesian network's error rate dropped from twenty-five percent to five percent relative to the initial results of the classification process. In the hierarchical network, the error rate not only dropped by fifteen percent, but the final result came to zero.

  20. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...

  1. Bayesian Classification of Image Structures

    DEFF Research Database (Denmark)

    Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert

    2009-01-01

    In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensi...

  2. 3-D contextual Bayesian classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    In this paper we will consider extensions of a series of Bayesian 2-D contextual classification procedures proposed by Owen (1984), Hjort & Mohn (1984), Welch & Salter (1971) and Haslett (1985) to 3 spatial dimensions. It is evident that compared to classical pixelwise classification further...

  3. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...

  4. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...

  5. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...

  6. Bayesian Evidence and Model Selection

    CERN Document Server

    Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben

    2014-01-01

    In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Application to several practical examples within the context of signal processing are discussed.

  7. Differentiated Bayesian Conjoint Choice Designs

    NARCIS (Netherlands)

    Z. Sándor (Zsolt); M. Wedel (Michel)

    2003-01-01

    Previous conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about

  8. Cell categories and K-nearest neighbor algorithm based decoding of primary motor cortical activity during reach-to-grasp task.

    Science.gov (United States)

    Yangyang Guo; Wei Li; Jiping He

    2014-01-01

    Neural decoding is a procedure to acquire intended movement information from neural activity and generate movement commands to control external devices such as intelligent prostheses. In this study, monkey Astra was trained to accomplish a 3-D reach-to-grasp task, and we recorded neural signals from its primary motor cortex (M1) during the task. The task-related cells were divided into four classes based on their correlation with two movement parameters: movement direction and orientation. We adopted the simple k-nearest neighbor (KNN) algorithm as the classifier and chose cells from appropriate cell classes for movement parameter decoding. Cell classification was shown to improve decoding accuracy with relatively fewer cells, even during the movement planning stage (CRT). High decoding accuracy before movement is actually performed is of great significance for intelligent prosthesis control, and provides evidence that M1 does more than accept ready-made movement commands; it also participates in movement planning. We also found that the population of task-related cells in M1 had a preference for specific directions and orientations, and this preference was more significant for the populations of direction-related and orientation-related cells.
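
    The decoding step reduces to plain KNN over trial firing-rate vectors. A minimal sketch, where restricting the columns of train_rates to a chosen cell class implements the cell-selection idea:

      import numpy as np

      def knn_decode(train_rates, train_labels, test_rates, k=5):
          """Majority vote among the k nearest training trials (Euclidean).

          train_rates: (n_trials, n_cells) firing-rate vectors per trial
          test_rates:  (n_cells,) firing-rate vector of the trial to decode
          """
          d = np.linalg.norm(train_rates - test_rates, axis=1)
          nearest = np.asarray(train_labels)[np.argsort(d)[:k]]
          values, counts = np.unique(nearest, return_counts=True)
          return values[np.argmax(counts)]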

  9. Bayesian Alternation During Tactile Augmentation

    Directory of Open Access Journals (Sweden)

    Caspar Mathias Goeke

    2016-10-01

    A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2-IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). Then we compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²_red = 1.67) than the Bayesian integration model (χ²_red = 4.34). A non-Bayesian winner-takes-all model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (χ²_red = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (χ²_red = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in
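
    The analysis pipeline in this record (fit a probit model, read off the JND, then compare bimodal predictions against unimodal ones) can be sketched as follows. The response data, the 84%-point JND convention, and the unimodal sigmas are illustrative assumptions; under optimal Bayesian integration the predicted bimodal variance is σ²_bi = σ²_a σ²_n / (σ²_a + σ²_n).

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import curve_fit

    # Hypothetical psychometric data: proportion of "second rotation larger"
    # responses versus the angle difference (deg); real 2-IFC data would replace this.
    dx = np.array([-20.0, -10.0, -5.0, 0.0, 5.0, 10.0, 20.0])
    p_resp = np.array([0.05, 0.20, 0.35, 0.50, 0.68, 0.82, 0.97])

    def probit(x, mu, sigma):
        """Cumulative-Gaussian (probit) psychometric function."""
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(probit, dx, p_resp, p0=(0.0, 5.0))
    jnd = sigma * norm.ppf(0.84)   # one common JND convention: the 84% point

    # Optimal-integration prediction for the bimodal condition from unimodal sigmas.
    sigma_native, sigma_augmented = 6.0, 9.0   # assumed unimodal values
    sigma_bimodal = np.sqrt((sigma_native**2 * sigma_augmented**2)
                            / (sigma_native**2 + sigma_augmented**2))
    print(f"fitted JND ~ {jnd:.1f} deg; predicted bimodal sigma ~ {sigma_bimodal:.1f} deg")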

  10. For whom will the Bayesian agents vote?

    CERN Document Server

    Caticha, Nestor; Vicente, Renato

    2015-01-01

    Within an agent-based model where moral classifications are socially learned, we ask if a population of agents behaves in a way that may be compared with conservative or liberal positions in the real political spectrum. We assume that agents first experience a formative period, in which they adjust their learning style acting as supervised Bayesian adaptive learners. The formative phase is followed by a period of social influence by reinforcement learning. By comparing data generated by the agents with data from a sample of 15000 Moral Foundation questionnaires we found the following. 1. The number of information exchanges in the formative phase correlates positively with statistics identifying liberals in the social influence phase. This is consistent with recent evidence that connects the dopamine receptor D4-7R gene, political orientation and early age social clique size. 2. The learning algorithms that result from the formative phase vary in the way they treat novelty and corroborative information with mo...

  11. Approximate Bayesian computation with functional statistics.

    Science.gov (United States)

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
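
    A minimal sketch of the ABC-with-functional-statistics idea, assuming a toy model (a normal mean) and the empirical CDF as the functional statistic; the paper's optimized weights are replaced by a plain weight vector to show where they would enter the distance.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy functional statistic: the empirical CDF evaluated on a grid. The paper's
    # statistics (pair correlation, differentiation vs. distance) would take its place.
    def functional_stat(sample, grid=np.linspace(-3.0, 6.0, 25)):
        return np.array([(sample <= g).mean() for g in grid])

    observed = rng.normal(loc=1.5, scale=1.0, size=200)
    s_obs = functional_stat(observed)

    def abc_rejection(s_obs, n_sim=5000, quantile=0.01, weights=None):
        """ABC rejection with a weighted Euclidean distance between statistics."""
        if weights is None:
            weights = np.ones_like(s_obs)   # unweighted baseline; the paper optimizes these
        thetas = np.empty(n_sim)
        dists = np.empty(n_sim)
        for i in range(n_sim):
            theta = rng.uniform(-3.0, 5.0)  # draw from the prior
            sim = rng.normal(loc=theta, scale=1.0, size=200)
            diff = functional_stat(sim) - s_obs
            thetas[i] = theta
            dists[i] = np.sqrt(np.sum(weights * diff**2))
        keep = dists <= np.quantile(dists, quantile)   # accept the closest simulations
        return thetas[keep]

    posterior = abc_rejection(s_obs)
    print(f"ABC posterior mean ~ {posterior.mean():.2f} (true value 1.5)")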

  12. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and error falling, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed – 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  13. New concatenated soft decoding of Reed-Solomon codes with lower complexities

    Institute of Scientific and Technical Information of China (English)

    BIAN Yin-bing; FENG Guang-zeng

    2009-01-01

    To improve error-correcting performance, an iterative concatenated soft decoding algorithm for Reed-Solomon (RS) codes is presented in this article. This algorithm brings advantages in both complexity and performance over presently popular soft decoding algorithms. The proposed algorithm consists of two powerful soft decoding techniques, adaptive belief propagation (ABP) and the box and match algorithm (BMA), which are serially concatenated by the accumulated log-likelihood ratio (ALLR). Simulation results show that, compared with the ABP and ABP-BMA algorithms, the proposed algorithm can bring more decoding gains and a better tradeoff between decoding performance and complexity.

  14. A Novel Decoder for Unknown Diversity Channels Employing Space-Time Codes

    Directory of Open Access Journals (Sweden)

    Erez Elona

    2002-01-01

    We suggest new decoding techniques for diversity channels employing space-time codes (STC) when the channel coefficients are unknown to both transmitter and receiver. Most of the existing decoders for unknown diversity channels employ a training sequence in order to estimate the channel. These decoders use the estimates of the channel coefficients to perform maximum likelihood (ML) decoding. We suggest an efficient implementation of the generalized likelihood ratio test (GLRT) algorithm that improves the performance with only a slight increase in complexity. We also suggest an energy weighted decoder (EWD) that shows additional improvement without further increase in computational complexity.

  15. Application of Bayesian population physiologically based pharmacokinetic (PBPK) modeling and Markov chain Monte Carlo simulations to pesticide kinetics studies in protected marine mammals: DDT, DDE, and DDD in harbor porpoises.

    Science.gov (United States)

    Weijs, Liesbeth; Yang, Raymond S H; Das, Krishna; Covaci, Adrian; Blust, Ronny

    2013-05-01

    Physiologically based pharmacokinetic (PBPK) modeling in marine mammals is a challenge because of the lack of parameter information and the ban on exposure experiments. To minimize uncertainty and variability, parameter estimation methods are required for the development of reliable PBPK models. The present study is the first to develop PBPK models for the lifetime bioaccumulation of p,p'-DDT, p,p'-DDE, and p,p'-DDD in harbor porpoises. In addition, this study is also the first to apply the Bayesian approach executed with Markov chain Monte Carlo simulations using two data sets of harbor porpoises from the Black and North Seas. Parameters from the literature were used as priors for the first "model update" using the Black Sea data set, the resulting posterior parameters were then used as priors for the second "model update" using the North Sea data set. As such, PBPK models with parameters specific for harbor porpoises could be strengthened with more robust probability distributions. As the science and biomonitoring effort progress in this area, more data sets will become available to further strengthen and update the parameters in the PBPK models for harbor porpoises as a species anywhere in the world. Further, such an approach could very well be extended to other protected marine mammals.
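
    The two-stage updating scheme described here (literature priors, then a Black Sea posterior reused as the North Sea prior) can be illustrated with a deliberately simplified conjugate sketch. The study itself updates full PBPK parameter vectors by MCMC; the scalar parameter, known noise variance, and all numbers below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)

    def normal_update(prior_mean, prior_var, data, obs_var):
        """Posterior of a normal mean with known observation variance."""
        n = len(data)
        post_var = 1.0 / (1.0 / prior_var + n / obs_var)
        post_mean = post_var * (prior_mean / prior_var + data.sum() / obs_var)
        return post_mean, post_var

    black_sea = rng.normal(1.0, 0.5, size=20)    # stand-in data set 1
    north_sea = rng.normal(1.2, 0.5, size=20)    # stand-in data set 2

    # Stage 1: literature prior -> Black Sea posterior.
    m1, v1 = normal_update(0.0, 4.0, black_sea, obs_var=0.25)
    # Stage 2: Black Sea posterior reused as the North Sea prior.
    m2, v2 = normal_update(m1, v1, north_sea, obs_var=0.25)
    print(f"after Black Sea: {m1:.2f} +/- {np.sqrt(v1):.2f}; "
          f"after North Sea: {m2:.2f} +/- {np.sqrt(v2):.2f}")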

  16. Are visual impairments responsible for emotion decoding deficits in alcohol-dependence?

    Directory of Open Access Journals (Sweden)

    Fabien eD'Hondt

    2014-03-01

    Emotional visual perception deficits constitute a major problem in alcohol-dependence. Indeed, the ability to assess the affective content of external cues is a key adaptive function, as it allows on the one hand the processing of potentially threatening or advantageous stimuli, and on the other hand the establishment of appropriate social interactions (by enabling rapid decoding of the affective state of others from their facial expressions). While such deficits have been classically considered as reflecting a genuine emotion decoding impairment in alcohol-dependence, converging evidence suggests that underlying visual deficits might play a role in emotional alterations. This hypothesis appears to be relevant especially as data from healthy populations indicate that a coarse but fast analysis of visual inputs would allow emotional processing to arise from early stages of perception. After reviewing those findings and the associated models, the present paper underlines data showing that rapid interactions between emotion and vision could be impaired in alcohol-dependence and provides new research avenues that may ultimately offer a better understanding of the roots of emotional deficits in this pathological state.

  17. Bayesian Approach to the Best Estimate of the Hubble Constant

    Institute of Scientific and Technical Information of China (English)

    王晓峰; 陈黎; 李宗伟

    2001-01-01

    A Bayesian approach is used to derive the probability distribution (PD) of the Hubble constant H0 from recent measurements including supernovae Ia, the Tully-Fisher relation, population II and physical methods. The discrepancies among these PDs are briefly discussed. The combined value of all the measurements is obtained, with a 95% confidence interval of 58.7 < H0 < 67.3 km·s⁻¹·Mpc⁻¹.
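
    A grid-based sketch of the combination step, under the simplifying assumptions of independent Gaussian likelihoods and a flat prior; the measurement means and sigmas below are placeholders, not the paper's inputs.

    import numpy as np

    # (mean, sigma) pairs standing in for the individual H0 measurements.
    measurements = [(68.0, 5.0), (62.0, 4.0), (65.0, 6.0), (60.0, 7.0)]

    h0 = np.linspace(40.0, 90.0, 2001)
    dx = h0[1] - h0[0]
    log_post = np.zeros_like(h0)
    for mean, sigma in measurements:
        log_post += -0.5 * ((h0 - mean) / sigma) ** 2   # add Gaussian log-likelihoods

    post = np.exp(log_post - log_post.max())
    post /= post.sum() * dx                              # normalize to a density

    cdf = np.cumsum(post) * dx
    lo, hi = h0[np.searchsorted(cdf, 0.025)], h0[np.searchsorted(cdf, 0.975)]
    print(f"combined H0 ~ {h0[post.argmax()]:.1f}; 95% interval [{lo:.1f}, {hi:.1f}] km/s/Mpc")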

  18. A Bayesian Predictive Discriminant Analysis with Screened Data

    OpenAIRE

    Hea-Jung Kim

    2015-01-01

    In the application of discriminant analysis, a situation sometimes arises where individual measurements are screened by a multidimensional screening scheme. For this situation, a discriminant analysis with screened populations is considered from a Bayesian viewpoint, and an optimal predictive rule for the analysis is proposed. In order to establish a flexible method to incorporate the prior information of the screening mechanism, we propose a hierarchical screened scale mixture of normal (HSS...

  19. Bayesian analysis of rare events

    Energy Technology Data Exchange (ETDEWEB)

    Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
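
    The rejection-sampling reinterpretation underlying BUS can be shown in a few lines: draw from the prior and accept with probability L(θ)/c, where c bounds the likelihood. BUS replaces this naive loop with FORM, importance sampling, or Subset Simulation applied to the acceptance event; the model and data below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def likelihood(theta, data, obs_sigma=0.3):
        """Gaussian likelihood of the data given theta (model: data = theta + noise)."""
        return np.exp(-0.5 * np.sum(((data - theta) / obs_sigma) ** 2))

    data = np.array([0.9, 1.1, 1.05])
    c = likelihood(data.mean(), data)   # likelihood bound, maximized at the sample mean

    accepted = []
    for _ in range(20000):
        theta = rng.normal(0.0, 1.0)                      # standard-normal prior draw
        if rng.uniform() < likelihood(theta, data) / c:   # the BUS acceptance event
            accepted.append(theta)

    post = np.array(accepted)
    print(f"acceptance rate {len(post) / 20000:.3f}, posterior mean {post.mean():.2f}")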

  20. Fast multiple run_before decoding method for efficient implementation of an H.264/advanced video coding context-adaptive variable length coding decoder

    Science.gov (United States)

    Ki, Dae Wook; Kim, Jae Ho

    2013-07-01

    We propose a fast new multiple run_before decoding method in context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up the CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a Total Speed-up Factor of 205% to 144% over various resolutions and quantization steps.

  1. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
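
    A toy particle filter for a two-state hidden Markov model with Poisson observations makes the sampling interpretation concrete: each particle, like each higher-layer spike in the model, is one sample of the hidden state, and the particle histogram approximates the posterior. All rates and transition probabilities below are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)

    A = np.array([[0.95, 0.05],          # hidden-state transition matrix
                  [0.10, 0.90]])
    rates = np.array([2.0, 8.0])         # Poisson observation rate per state

    # Simulate a hidden state sequence and noisy "sensory" spike counts.
    T = 50
    z = [0]
    for _ in range(T - 1):
        z.append(rng.choice(2, p=A[z[-1]]))
    obs = rng.poisson(rates[np.array(z)])

    n_particles = 1000
    particles = rng.choice(2, size=n_particles)   # samples of the hidden state
    for t in range(T):
        # Propagate each particle through the transition model.
        particles = np.array([rng.choice(2, p=A[p]) for p in particles])
        # Weight by the Poisson likelihood (constant factorial term omitted).
        w = rates[particles] ** obs[t] * np.exp(-rates[particles])
        w /= w.sum()
        # Resample in proportion to the weights.
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]

    print(f"true final state: {z[-1]}; estimated P(state=1): {particles.mean():.2f}")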

  2. Decoding bipedal locomotion from the rat sensorimotor cortex

    Science.gov (United States)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as high as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds

  3. STATISTICAL BAYESIAN ANALYSIS OF EXPERIMENTAL DATA.

    Directory of Open Access Journals (Sweden)

    AHLAM LABDAOUI

    2012-12-01

    The Bayesian researcher should know the basic ideas underlying Bayesian methodology and the computational tools used in modern Bayesian econometrics. Some of the most important methods of posterior simulation are Monte Carlo integration, importance sampling, Gibbs sampling and the Metropolis-Hastings algorithm. The Bayesian should also be able to put the theory and computational tools together in the context of substantive empirical problems. We focus primarily on recent developments in Bayesian computation. Then we focus on particular models. Inevitably, we combine theory and computation in the context of particular models. Although we have tried to be reasonably complete in terms of covering the basic ideas of Bayesian theory and the computational tools most commonly used by the Bayesian, there is no way we can cover all the classes of models used in econometrics. We illustrate the methods with the analysis of variance and the linear regression model.
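
    Of the posterior-simulation tools listed in this record, the Metropolis-Hastings algorithm is the easiest to sketch. Below, a random-walk sampler for the mean of a normal model with known unit variance and a flat prior; the data and tuning constants are illustrative placeholders.

    import numpy as np

    rng = np.random.default_rng(5)

    data = rng.normal(2.0, 1.0, size=50)

    def log_post(theta):
        """Log posterior up to a constant: Gaussian log-likelihood, flat prior."""
        return -0.5 * np.sum((data - theta) ** 2)

    theta, chain = 0.0, []
    for _ in range(10000):
        proposal = theta + rng.normal(0.0, 0.3)            # random-walk step
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal                               # accept the move
        chain.append(theta)                                # otherwise keep theta

    burned = np.array(chain[2000:])                        # discard burn-in
    print(f"posterior mean ~ {burned.mean():.3f} (sample mean {data.mean():.3f})")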

  4. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process. The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software. Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  5. A Generalization Belief Propagation Decoding Algorithm for Polar Codes Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    We propose a generalization belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of polar codes. Through the analysis of the existing BP decoding algorithm, we first introduce a probability modifying factor to each node of the BP decoder, so as to enhance the error correcting capacity of the decoding. Then, we generalize the BP decoding algorithm based on these modifying factors and derive the probability update equations for the proposed decoding. Based on the new probability update equations, we show the intrinsic relationship of the existing decoding algorithms. Finally, in order to achieve the best performance, we formulate an optimization problem to find the optimal probability modifying factors for the proposed decoding algorithm. Furthermore, a method based on the modified PSO algorithm is also introduced to solve that optimization problem. Numerical results show that the proposed generalization BP decoding algorithm achieves better performance than the existing BP decoding, which suggests the effectiveness of the proposed decoding algorithm.
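
    The optimization step at the end of this record relies on particle swarm optimization. A generic PSO loop of the kind the authors adapt looks as follows; the sphere objective stands in for the decoder's error-rate objective, and the swarm size and weights are conventional illustrative choices.

    import numpy as np

    rng = np.random.default_rng(8)

    def objective(x):
        # Placeholder sphere function; the decoder's BER would take its place.
        return np.sum(x ** 2, axis=1)

    n_particles, dim, iters = 30, 5, 200
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and attraction weights

    x = rng.uniform(-1.0, 1.0, (n_particles, dim)) # candidate modifying factors
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), objective(x)
    g_best = x[p_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        val = objective(x)
        better = val < p_val                       # update personal bests
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()     # update the global best

    print(f"best objective value: {objective(g_best[None])[0]:.2e}")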

  6. NEW ITERATIVE SUPER-TRELLIS DECODING WITH SOURCE A PRIORI INFORMATION FOR VLCS WITH TURBO CODES

    Institute of Scientific and Technical Information of China (English)

    Liu Jianjun; Tu Guofang; Wu Weiren

    2007-01-01

    A novel Joint Source and Channel Decoding (JSCD) scheme for Variable Length Codes (VLCs) concatenated with turbo codes, utilizing a new super-trellis decoding algorithm, is presented in this letter. The basic idea of our decoding algorithm is that source a priori information, in the form of bit transition probabilities corresponding to the VLC tree, can be derived directly from sub-state transitions in the new composite-state represented super-trellis. A Maximum Likelihood (ML) decoding algorithm for VLC sequence estimation based on the proposed super-trellis is also described. Simulation results show that the new iterative decoding scheme can obtain a clear coding gain, especially for Reversible Variable Length Codes (RVLCs), when compared with classical separated turbo decoding and previous joint decoding that does not consider source statistical characteristics.

  7. Iterative decoding of Generalized Parallel Concatenated Block codes using cyclic permutations

    Directory of Open Access Journals (Sweden)

    Hamid Allouch

    2012-09-01

    Iterative decoding techniques have gained popularity due to their performance and their application in most communication systems. In this paper, we present a new application of our iterative decoder to GPCB (Generalized Parallel Concatenated Block) codes, which use cyclic permutations. We introduce a new variant of the component decoder. After extensive simulation, the obtained results are very promising compared with several existing methods. We evaluate the effects of various parameters: component codes, interleaver size, block size, and the number of iterations. Three interesting results are obtained. First, the performance in terms of BER (Bit Error Rate) of the new constituent decoder is relatively similar to that of the original one. Second, our turbo decoding outperforms another turbo decoder for some linear block codes. Third, the proposed iterative decoding of GPCB-BCH (75, 51) is about 2.1 dB from its Shannon limit.

  8. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

    The paper proposes a low complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements regarding decoding throughput and BER performance compared with state-of-the-art approaches.
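
    For context, the exact BP check-node update and the min-sum approximation it is compared against can be written in a few lines; the record's linear approximations would replace the tanh rule below. The incoming LLR values are illustrative.

    import numpy as np

    def check_node_bp(llrs):
        """Exact sum-product update: 2 * atanh(prod tanh(L/2))."""
        prod = np.prod(np.tanh(np.asarray(llrs) / 2.0))
        return 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))

    def check_node_min_sum(llrs):
        """Min-sum approximation: product of signs times minimum magnitude."""
        llrs = np.asarray(llrs)
        return np.prod(np.sign(llrs)) * np.min(np.abs(llrs))

    incoming = [1.2, -0.4, 2.5]   # LLRs from the other variable nodes
    print(f"BP: {check_node_bp(incoming):+.3f}, "
          f"min-sum: {check_node_min_sum(incoming):+.3f}")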

  9. Trellis-Based Check Node Processing for Low-Complexity Nonbinary LP Decoding

    CERN Document Server

    Punekar, Mayur

    2011-01-01

    Linear Programming (LP) decoding is emerging as an attractive alternative to decode Low-Density Parity-Check (LDPC) codes. However, the earliest LP decoders proposed for binary and nonbinary LDPC codes are not suitable for use at moderate and large code lengths. To overcome this problem, Vontobel et al. developed an iterative Low-Complexity LP (LCLP) decoding algorithm for binary LDPC codes. The variable and check node calculations of binary LCLP decoding algorithm are related to those of binary Belief Propagation (BP). The present authors generalized this work to derive an iterative LCLP decoding algorithm for nonbinary linear codes. Contrary to binary LCLP, the variable and check node calculations of this algorithm are in general different from that of nonbinary BP. The overall complexity of nonbinary LCLP decoding is linear in block length; however the complexity of its check node calculations is exponential in the check node degree. In this paper, we propose a modified BCJR algorithm for efficient check n...

  10. NetDecoder: a network biology platform that decodes context-specific biological networks and gene activities.

    Science.gov (United States)

    da Rocha, Edroaldo Lummertz; Ung, Choong Yong; McGehee, Cordelia D; Correia, Cristina; Li, Hu

    2016-06-02

    The sequential chain of interactions altering the binary state of a biomolecule represents the 'information flow' within a cellular network that determines phenotypic properties. Given the lack of computational tools to dissect context-dependent networks and gene activities, we developed NetDecoder, a network biology platform that models context-dependent information flows using pairwise phenotypic comparative analyses of protein-protein interactions. Using breast cancer, dyslipidemia and Alzheimer's disease as case studies, we demonstrate NetDecoder dissects subnetworks to identify key players significantly impacting cell behaviour specific to a given disease context. We further show genes residing in disease-specific subnetworks are enriched in disease-related signalling pathways and information flow profiles, which drive the resulting disease phenotypes. We also devise a novel scoring scheme to quantify key genes: network routers, which influence many genes; key targets, which are influenced by many genes; and high impact genes, which experience a significant change in regulation. We show the robustness of our results against parameter changes. Our network biology platform includes freely available source code (http://www.NetDecoder.org) for researchers to explore genome-wide context-dependent information flow profiles and key genes, given a set of genes of particular interest and transcriptome data. More importantly, NetDecoder will enable researchers to uncover context-dependent drug targets.

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  12. Neural decoding reveals impaired face configural processing in the right fusiform face area of individuals with developmental prosopagnosia.

    Science.gov (United States)

    Zhang, Jiedong; Liu, Jia; Xu, Yaoda

    2015-01-28

    Most human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [a condition also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its ability to decode the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence showing impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP.

  13. Bayesian multivariate disease mapping and ecological regression with errors in covariates: Bayesian estimation of DALYs and 'preventable' DALYs.

    Science.gov (United States)

    Macnab, Ying C

    2009-04-30

    This paper presents Bayesian multivariate disease mapping and ecological regression models that take into account errors in covariates. Bayesian hierarchical formulations of multivariate disease models and covariate measurement models, with related methods of estimation and inference, are developed as an integral part of a Bayesian disability adjusted life years (DALYs) methodology for the analysis of multivariate disease or injury data and associated ecological risk factors and for small area DALYs estimation, inference, and mapping. The methodology facilitates the estimation of multivariate small area disease and injury rates and associated risk effects, evaluation of DALYs and 'preventable' DALYs, and identification of regions to which disease or injury prevention resources may be directed to reduce DALYs. The methodology interfaces and intersects the Bayesian disease mapping methodology and the global burden of disease framework such that the impact of disease, injury, and risk factors on population health may be evaluated to inform community health, health needs, and priority considerations for disease and injury prevention. A burden of injury study on road traffic accidents in local health areas in British Columbia, Canada, is presented as an illustrative example.

  14. Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.

    Science.gov (United States)

    Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup

    2011-09-01

    The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes named 'multi-functional CRM', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.

  15. Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor V

    2011-01-01

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L=2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered as state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows to adjust L to values in the range 1 < L ≤ 2, and the decoder can be applied z ≥ 1 times. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.

  16. Decode-and-Forward Based Differential Modulation for Cooperative Communication System with Unitary and Non-Unitary Constellations

    CERN Document Server

    Bhatnagar, Manav R

    2012-01-01

    In this paper, we derive a maximum likelihood (ML) decoder of the differential data in a decode-and-forward (DF) based cooperative communication system utilizing uncoded transmissions. This decoder is applicable to complex-valued unitary and non-unitary constellations suitable for differential modulation. The ML decoder helps in improving the diversity of the DF based differential cooperative system using an erroneous relaying node. We also derive a piecewise linear (PL) decoder of the differential data transmitted in the DF based cooperative system. The proposed PL decoder significantly reduces the decoding complexity as compared to the proposed ML decoder without any significant degradation in the receiver performance. Existing ML and PL decoders of the differentially modulated uncoded data in the DF based cooperative communication system are only applicable to binary modulated signals like binary phase shift keying (BPSK) and binary frequency shift keying (BFSK), whereas, the proposed decoders are applicab...

  17. Coupled Receiver/Decoders for Low-Rate Turbo Codes

    Science.gov (United States)

    Hamkins, Jon; Divsalar, Dariush

    2005-01-01

    Coupled receiver/decoders have been proposed for receiving weak single-channel phase-modulated radio signals bearing low-rate-turbo-coded binary data. Originally intended for use in receiving telemetry signals from distant spacecraft, the proposed receiver/decoders may also provide enhanced reception in mobile radiotelephone systems. A radio signal of the type to which the proposal applies comprises a residual carrier signal and a phase-modulated data signal. The residual carrier signal is needed as a phase reference for demodulation as a prerequisite to decoding. Low-rate turbo codes afford high coding gains and thereby enable the extraction of data from arriving radio signals that might otherwise be too weak. In the case of a conventional receiver, if the signal-to-noise ratio (specifically, the symbol energy to one-sided noise power spectral density) of the arriving signal is below approximately 0 dB, then there may not be enough energy per symbol to enable the receiver to recover the carrier phase properly. One could solve the problem at the transmitter by diverting some power from the data signal to the residual carrier. A better solution, a coupled receiver/decoder according to the proposal, could reduce the needed amount of residual carrier power. In all that follows, it is to be understood that all processing would be digital and the incoming signals to be processed would be, more precisely, outputs of analog-to-digital converters that preprocess the residual carrier and data signals at a rate of multiple samples per symbol. The upper part of the figure depicts a conventional receiving system, in which the receiver and decoder are uncoupled, and which is also called a non-data-aided system because output data from the decoder are not used in the receiver to aid in recovering the carrier phase. The receiver tracks the carrier phase from the residual carrier signal and uses the carrier phase to wipe phase noise off the data signal. The receiver typically includes a phase-locked loop

  18. Bayesian versus 'plain-vanilla Bayesian' multitarget statistics

    Science.gov (United States)

    Mahler, Ronald P. S.

    2004-08-01

    Finite-set statistics (FISST) is a direct generalization of single-sensor, single-target Bayes statistics to the multisensor-multitarget realm, based on random set theory. Various aspects of FISST are being investigated by several research teams around the world. In recent years, however, a few partisans have claimed that a "plain-vanilla Bayesian approach" suffices as down-to-earth, "straightforward," and general "first principles" for multitarget problems. Therefore, FISST is mere mathematical "obfuscation." In this and a companion paper I demonstrate the speciousness of these claims. In this paper I summarize general Bayes statistics, what is required to use it in multisensor-multitarget problems, and why FISST is necessary to make it practical. Then I demonstrate that the "plain-vanilla Bayesian approach" is so heedlessly formulated that it is erroneous, not even Bayesian; denigrates FISST concepts while unwittingly assuming them; and has resulted in a succession of algorithms afflicted by inherent, but less than candidly acknowledged, computational "logjams."

  19. Multiple quantitative trait analysis using bayesian networks.

    Science.gov (United States)

    Scutari, Marco; Howell, Phil; Balding, David J; Mackay, Ian

    2014-09-01

    Models for genome-wide prediction and association studies usually target a single phenotypic trait. However, in animal and plant genetics it is common to record information on multiple phenotypes for each individual that will be genotyped. Modeling traits individually disregards the fact that they are most likely associated due to pleiotropy and shared biological basis, thus providing only a partial, confounded view of genetic effects and phenotypic interactions. In this article we use data from a Multiparent Advanced Generation Inter-Cross (MAGIC) winter wheat population to explore Bayesian networks as a convenient and interpretable framework for the simultaneous modeling of multiple quantitative traits. We show that they are equivalent to multivariate genetic best linear unbiased prediction (GBLUP) and that they are competitive with single-trait elastic net and single-trait GBLUP in predictive performance. Finally, we discuss their relationship with other additive-effects models and their advantages in inference and interpretation. MAGIC populations provide an ideal setting for this kind of investigation because the very low population structure and large sample size result in predictive models with good power and limited confounding due to relatedness.

  20. Bayesian priors for transiting planets

    CERN Document Server

    Kipping, David M

    2016-01-01

    As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...

  1. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.

  2. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  3. Deep Learning and Bayesian Methods

    Science.gov (United States)

    Prosper, Harrison B.

    2017-03-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  4. Bayesian Source Separation and Localization

    CERN Document Server

    Knuth, K H

    1998-01-01

    The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...

  5. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  6. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation in the Chamber of Deputies was developed. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
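
    The record's methodology was implemented in R; a language-agnostic sketch of the same Monte Carlo idea is given below in Python, using a plain D'Hondt largest-averages rule as a simplified stand-in for the Brazilian seat-distribution rules and a Dirichlet posterior over vote shares. The poll counts, seat numbers, and turnout are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(6)

    def dhondt(votes, n_seats):
        """Largest-averages (D'Hondt) allocation -- a simplified stand-in for
        the Brazilian electoral-quotient rules discussed in the record."""
        seats = np.zeros(len(votes), dtype=int)
        for _ in range(n_seats):
            seats[np.argmax(votes / (seats + 1))] += 1
        return seats

    # Hypothetical poll counts for four parties; a Dirichlet posterior over
    # vote shares (uniform prior) propagates the polling uncertainty.
    poll_counts = np.array([420, 310, 180, 90])
    n_sims, n_seats = 10000, 10
    got_seat = np.zeros(len(poll_counts))
    for _ in range(n_sims):
        shares = rng.dirichlet(poll_counts + 1)        # posterior draw of vote shares
        votes = rng.multinomial(100000, shares)        # one simulated election
        got_seat += dhondt(votes, n_seats) > 0         # did each party win a seat?

    print("P(at least one seat):", np.round(got_seat / n_sims, 3))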

  7. Decoding polyphase migmatites using monazite petrochronology

    Science.gov (United States)

    Yakymchuk, C.; Brown, M.; Korhonen, F. J.; Piccoli, P. M.; Siddoway, C. S.

    2014-12-01

    Unraveling the P-T-t evolution of deep crustal rocks requires the use of multiple high-temperature geochronometers integrated with careful petrography and quantitative phase equilibria modeling. As an example, in situ U-Pb monazite ages and Lu-Hf garnet geochronology are used to distinguish mineral parageneses associated with overprinting suprasolidus metamorphic events in migmatitic paragneisses and orthogneisses from the Fosdick migmatite-granite complex in West Antarctica. Then phase equilibria modeling is used to quantify the P-T conditions for each event. In the Fosdick complex, U-Pb monazite ages define two populations at 365-300 Ma (minor population; cores of polychronic grains) and 120-96 Ma (dominant population; monochronic grains and rims of polychronic grains). For seven samples, Lu-Hf ages of garnet range from 116 to 111 Ma, which are interpreted to record the approximate timing of peak metamorphism during the overprinting Cretaceous metamorphic event. Phase equilibria modeling constrains peak P-T conditions to 720-800°C at 0.45-1.0 GPa for the older (Devonian-Carboniferous) metamorphic event and 850-880°C at 0.65-0.80 GPa for the overprinting Cretaceous event. This younger metamorphic event is dominant throughout the Fosdick complex; it has extensively reworked evidence of the older metamorphic event as indicated by resorbed Devonian-Carboniferous cores of polychronic monazite grains that are always surrounded by Cretaceous overgrowths. Within the Cretaceous monazite population, the paucity of ages predating peak metamorphism suggests that prograde monazite growth was limited or prograde monazite was obliterated. Y-enriched overgrowths on monazite spatially associated with cordierite and biotite yield ages of 106-97 Ma, which are interpreted to record growth during breakdown of garnet in the presence of melt in the course of exhumation and cooling of the complex. Most monazite in the Cretaceous population yields ages that range from 106 to 96 Ma with

  8. Bayesian analysis for kaon photoproduction

    Energy Technology Data Exchange (ETDEWEB)

    Marsainy, T., E-mail: tmart@fisika.ui.ac.id; Mart, T., E-mail: tmart@fisika.ui.ac.id [Department Fisika, FMIPA, Universitas Indonesia, Depok 16424 (Indonesia)

    2014-09-25

    We have investigated contribution of the nucleon resonances in the kaon photoproduction process by using an established statistical decision making method, i.e. the Bayesian method. This method does not only evaluate the model over its entire parameter space, but also takes the prior information and experimental data into account. The result indicates that certain resonances have larger probabilities to contribute to the process.

  9. Bayesian priors and nuisance parameters

    CERN Document Server

    Gupta, Sourendu

    2016-01-01

    Bayesian techniques are widely used to obtain spectral functions from correlators. We suggest a technique to rid the results of nuisance parameters, i.e., parameters which are needed for the regularization but cannot be determined from data. We give examples where the method works, including a pion mass extraction with two flavours of staggered quarks at a lattice spacing of about 0.07 fm. We also give an example where the method does not work.

  10. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  11. Elements of Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  12. Bayesian Sampling using Condition Indicators

    DEFF Research Database (Denmark)

    Faber, Michael H.; Sørensen, John Dalsgaard

    2002-01-01

    This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion. It is shown how half-cell potential measurements may be utilized to update the probability of excessive repair after 50 years.
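
    The updating idea in this record (expert priors on a condition indicator, consistently revised as frequentistic inspection data arrive) can be illustrated with a conjugate Beta-binomial sketch; the prior, the number of half-cell potential readings, and the 30% threshold are all assumed for illustration.

    import numpy as np
    from scipy import stats

    # Conjugate Beta-binomial update: an expert Beta prior on the fraction of
    # corroded zones, updated with hypothetical half-cell potential indications.
    a_prior, b_prior = 2.0, 8.0            # expert prior: corrosion believed rare
    n_readings, n_indications = 40, 9      # indicator readings flagging corrosion

    a_post = a_prior + n_indications
    b_post = b_prior + n_readings - n_indications

    prior_mean = a_prior / (a_prior + b_prior)
    post_mean = a_post / (a_post + b_post)
    p_exceed = 1.0 - stats.beta.cdf(0.3, a_post, b_post)  # P(fraction > 30%)
    print(f"prior mean {prior_mean:.2f} -> posterior mean {post_mean:.2f}; "
          f"P(corroded fraction > 0.3) = {p_exceed:.3f}")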

  13. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter - the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.

  14. Memory bandwidth efficient two-layer reduced-resolution decoding of high-definition video

    Science.gov (United States)

    Comer, Mary L.

    2000-12-01

    This paper addresses the problem of efficiently decoding high-definition (HD) video for display at a reduced resolution. The decoder presented in this paper is intended for applications that are constrained not only in memory size, but also in peak memory bandwidth. This is the case, for example, during decoding of a high-definition television (HDTV) channel for picture-in-picture (PIP) display, if the reduced-resolution PIP-channel decoder is sharing memory with the full-resolution main-channel decoder. The most significant source of video quality degradation in a reduced-resolution decoder is prediction drift, which is caused by the mismatch between the full-resolution reference frames used by the encoder and the subsampled reference frames used by the decoder. To mitigate the visually annoying effects of prediction drift, the decoder described in this paper operates at two different resolutions -- a lower resolution for B pictures, which do not contribute to prediction drift, and a higher resolution for I and P pictures. This means that the motion-compensation unit (MCU) essentially operates at the higher resolution, but the peak memory bandwidth is the same as that required to decode at the lower resolution. Storage of additional data, representing the higher resolution for I and P pictures, requires a relatively small amount of additional memory as compared to decoding at the lower resolution. Experimental results will demonstrate the improvement in video quality achieved by the addition of the higher-resolution data in forming predictions for P pictures.

  15. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

    The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied for decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying Viterbi or BCJR algorithms to decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.

  16. On the Estimation of Confidence Intervals for Binomial Population Proportions in Astronomy: The Simplicity and Superiority of the Bayesian Approach (with Recipes for Use in R, Matlab, Mathematica, and IDL)

    CERN Document Server

    Cameron, Ewan

    2010-01-01

    We present a critical review of popular techniques for estimating confidence intervals on binomial population proportions inferred from success counts in small-to-intermediate samples. Population proportions arise frequently as quantities of interest in astronomical research; most notably in studies of the fractions of galaxies exhibiting distinct structural components (stellar bars, supermassive black holes, AGN, etc.), populating the ('quiescent') red sequence, or undergoing major/minor mergers. However, the two most widely-used techniques for estimating binomial confidence intervals - the 'normal approximation' and the Clopper & Pearson approach - perform poorly under sampling regimes routinely encountered in astronomical datasets. Hence, we provide here an overview of the fundamentals of binomial statistics with two principal aims: (i) to reveal the ease with which binomial confidence intervals with more satisfactory behaviour may be estimated from the quantiles of the beta distribution using modern ma...
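
    The beta-quantile recipe referred to above is compact enough to sketch directly. A minimal illustration in Python (assuming the common choice of a uniform Beta(1, 1) prior, under which the posterior for the proportion after k successes in n trials is Beta(k + 1, n - k + 1); the function name is ours, and the paper itself supplies recipes for R, Matlab, Mathematica, and IDL):

        from scipy.stats import beta

        def binomial_credible_interval(k, n, level=0.683):
            """Equal-tailed Bayesian credible interval for a binomial proportion."""
            lo = beta.ppf((1.0 - level) / 2.0, k + 1, n - k + 1)
            hi = beta.ppf(1.0 - (1.0 - level) / 2.0, k + 1, n - k + 1)
            return lo, hi

        # e.g. 3 barred galaxies in a sample of 10:
        print(binomial_credible_interval(k=3, n=10))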

  17. Interactive decoding for the CCSDS recommendation for image data compression

    Science.gov (United States)

    García-Vílchez, Fernando; Serra-Sagristà, Joan; Zabala, Alaitz; Pons, Xavier

    2007-10-01

    In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal consists of adding embedded headers. Such modification provides scalability by quality, spatial location, resolution and component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First we measure the impact of the embedded headers in the encoded stream. Second we compare the compression performance of our technique to JPEG2000.

  18. HDL Implementation of Low Density Parity Check (LDPC) Decoder

    Directory of Open Access Journals (Sweden)

    Pawandip Kaur

    2012-03-01

    Low-Density Parity-Check (LDPC) codes are one of the most promising error-correcting codes approaching Shannon capacity and have been adopted in many applications. These codes offer huge advantages in terms of coding gain, throughput and power dissipation. Error correction algorithms are often implemented in hardware for fast processing to meet the real-time needs of communication systems. However, hardware implementation of LDPC decoders using the traditional Hardware Description Language (HDL) based approach is a complex and time consuming task. In this paper an HDL implementation of a Low Density Parity Check decoder architecture is presented with different code rates (1/2, 2/3, 3/4, 4/7, 8/9, 9/10), variable data lengths (8, 16, 32, 64, 128, 256 bits) and a consequently changeable precision factor.

  19. Two-Bit Bit Flipping Decoding of LDPC Codes

    CERN Document Server

    Nguyen, Dung Viet; Marcellin, Michael W

    2011-01-01

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows a potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.
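
    For orientation, the classical one-bit parallel bit flipping rule that these algorithms extend can be sketched in a few lines (a generic illustration over the BSC, not the authors' two-bit algorithm, which additionally keeps a per-bit "strength" state):

        import numpy as np

        def bit_flip_decode(H, y, max_iters=50):
            """Parallel bit flipping for an LDPC code with parity-check matrix H
            (shape m x n, integer entries 0/1) and hard-decision received word y."""
            x = y.copy()
            for _ in range(max_iters):
                syndrome = H.dot(x) % 2
                if not syndrome.any():
                    break                      # all parity checks satisfied
                unsat = H.T.dot(syndrome)      # unsatisfied checks per bit
                x = np.where(unsat == unsat.max(), x ^ 1, x)  # flip the worst bits
            return x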

  20. Computational Complexity of Decoding Orthogonal Space-Time Block Codes

    CERN Document Server

    Ayanoglu, Ender; Karipidis, Eleftherios

    2009-01-01

    The computational complexity of optimum decoding is quantified for an orthogonal space-time block code G satisfying the orthogonality property G^H G = c (|s_1|^2 + ... + |s_k|^2) I, where G^H is the Hermitian transpose of G, the s_k are the symbols of the code, I is the identity matrix, and c is a positive integer. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects and extends [1],[2], and unifies them with the results from the literature. In addition, a number of results from the literature are extended to the case c > 1.
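
    The orthogonality property is what lets optimum decoding decouple into independent symbol-by-symbol decisions. A minimal sketch for the simplest such code, the Alamouti code (k = 2, c = 1, one receive antenna); variable names are illustrative, not from the paper:

        import numpy as np

        def alamouti_decode(r1, r2, h1, h2, constellation):
            """ML decoding of the Alamouti code via linear combining.
            r1, r2: received samples in two symbol slots; h1, h2: channel gains."""
            # Orthogonality of the code matrix decouples the two symbols.
            z1 = np.conj(h1) * r1 + h2 * np.conj(r2)
            z2 = np.conj(h2) * r1 - h1 * np.conj(r2)
            gain = abs(h1) ** 2 + abs(h2) ** 2
            # Independent nearest-neighbour decision for each symbol.
            s1 = min(constellation, key=lambda s: abs(z1 - gain * s))
            s2 = min(constellation, key=lambda s: abs(z2 - gain * s))
            return s1, s2

        qpsk = [(a + 1j * b) / np.sqrt(2) for a in (-1, 1) for b in (-1, 1)]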

  1. Video watermarking with empirical PCA-based decoding.

    Science.gov (United States)

    Khalilian, Hanieh; Bajic, Ivan V

    2013-12-01

    A new method for video watermarking is presented in this paper. In the proposed method, data are embedded in the LL subband of wavelet coefficients, and decoding is performed based on the comparison among the elements of the first principal component resulting from empirical principal component analysis (PCA). The locations for data embedding are selected such that they offer the most robust PCA-based decoding. Data are inserted in the LL subband in an adaptive manner based on the energy of high frequency subbands and visual saliency. Extensive testing was performed under various types of attacks, such as spatial attacks (uniform and Gaussian noise and median filtering), compression attacks (MPEG-2, H.263, and H.264), and temporal attacks (frame repetition, frame averaging, frame swapping, and frame rate conversion). The results show that the proposed method offers improved performance compared with several methods from the literature, especially under additive noise and compression attacks.

  2. Design And Analysis Of Low Power Hierarchical Decoder

    Directory of Open Access Journals (Sweden)

    Abhinav Singh

    2012-11-01

    Due to the high degree of miniaturization possible today in semiconductor technology, the size and complexity of designs that may be implemented in hardware have increased dramatically. Process scaling has been used in the miniaturization process to reduce the area needed for logic functions in an effort to lower the product costs. Precharged Complementary Metal Oxide Semiconductor (CMOS) domino logic techniques may be applied to functional blocks to reduce power. Domino logic forms an attractive design style for high performance designs since its low switching threshold and reduced transistor count lead to fast and area efficient circuit implementations. In this paper all the necessary components required to form a 5-to-32 bit decoder using domino logic are designed to perform different analyses at the 180 nm and 350 nm technologies. The decoder implemented through domino logic is compared to a static decoder.

  3. Decoding of Convolutional Codes over the Erasure Channel

    CERN Document Server

    Tomás, Virtudes; Smarandache, Roxana

    2010-01-01

    In this paper we study the decoding capabilities of convolutional codes over the erasure channel. Of special interest are maximum distance profile (MDP) convolutional codes. These are codes which have a maximum possible column distance increase. We show how this strong minimum distance condition of MDP convolutional codes helps us to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, we define two subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest condition. We show that complete-MDP convolutional codes perform in a certain sense better than MDS block codes of the same rate over the erasure channel.

  4. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  5. Coset decomposition method for storing and decoding fingerprint data

    OpenAIRE

    Mohamed Sayed

    2014-01-01

    Biometrics such as fingerprints, irises, faces, voice, gait and hands are often used for access control, authentication and encryption instead of PIN and passwords. In this paper a syndrome decoding technique is proposed to provide a secure means of storing and matching various biometrics data. We apply an algebraic coding technique called coset decomposition to the model of fingerprint biometrics. The algorithm which reveals the matching between registered and probe fingerprints is modeled a...

  6. Design of LOG-MAP / MAX-LOG-MAP Decoder

    Directory of Open Access Journals (Sweden)

    Mihai TIMIS, PhD Candidate, Dipl.Eng.

    2007-01-01

    The process of turbo-code decoding starts with the formation of a posteriori probabilities (APPs) for each data bit, which is followed by choosing the data-bit value that corresponds to the maximum a posteriori (MAP) probability for that data bit. Upon reception of a corrupted code-bit sequence, the process of decision making with APPs allows the MAP algorithm to determine the most likely information bit to have been transmitted at each bit time.
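
    The difference between the two decoders named in the title reduces to how the log-domain sum of probabilities is computed. A minimal sketch (generic textbook operations, not the authors' hardware design): Log-MAP uses the exact Jacobian logarithm, while Max-Log-MAP drops its correction term:

        import math

        def max_star(a, b):
            """Log-MAP: exact log(exp(a) + exp(b)) via the Jacobian logarithm."""
            return max(a, b) + math.log1p(math.exp(-abs(a - b)))

        def max_log(a, b):
            """Max-Log-MAP: the approximation obtained by dropping the correction."""
            return max(a, b)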

  7. Unsupervised Transient Light Curve Analysis Via Hierarchical Bayesian Inference

    CERN Document Server

    Sanders, Nathan; Soderberg, Alicia

    2014-01-01

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometr...

  8. Bayesian Inversion of Seabed Scattering Data

    Science.gov (United States)

    2014-09-30

    Bayesian Inversion of Seabed Scattering Data (Special Research Award in Ocean Acoustics). Gavin A.M.W. Steininger, School of Earth & Ocean... The objectives of this project are to carry out joint Bayesian inversion of scattering and reflection data to estimate the in-situ seabed scattering and geoacoustic parameters...

  9. Anomaly Detection and Attribution Using Bayesian Networks

    Science.gov (United States)

    2014-06-01

    Anomaly Detection and Attribution Using Bayesian Networks. Andrew Kirk, Jonathan Legg and Edwin El-Mahassni, National Security and... detection in Bayesian networks, enabling both the detection and explanation of anomalous cases in a dataset. By exploiting the structure of a... Bayesian network, our algorithm is able to efficiently search for local maxima of data conflict between closely related variables. Benchmark tests using...

  10. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \primula\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose {\primula}-generated propositional instances have thousands of variables, and whose jointrees have clusters...

  11. Photometric Redshift with Bayesian Priors on Physical Properties of Galaxies

    CERN Document Server

    Tanaka, Masayuki

    2015-01-01

    We present a proof-of-concept analysis of photometric redshifts with Bayesian priors on physical properties of galaxies. This concept is particularly suited for upcoming/on-going large imaging surveys, in which only several broad-band filters are available and it is hard to break some of the degeneracies in the multi-color space. We construct model templates of galaxies using a stellar population synthesis code and apply Bayesian priors on physical properties such as stellar mass and star formation rate. These priors are a function of redshift and they effectively evolve the templates with time in an observationally motivated way. We demonstrate that the priors help reduce the degeneracy and deliver significantly improved photometric redshifts. Furthermore, we show that a template error function, which corrects for systematic flux errors in the model templates as a function of rest-frame wavelength, delivers further improvements. One great advantage of our technique is that we simultaneously measure redshifts...

  12. A Bayesian Predictive Discriminant Analysis with Screened Data

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2015-09-01

    In the application of discriminant analysis, a situation sometimes arises where individual measurements are screened by a multidimensional screening scheme. For this situation, a discriminant analysis with screened populations is considered from a Bayesian viewpoint, and an optimal predictive rule for the analysis is proposed. In order to establish a flexible method to incorporate the prior information of the screening mechanism, we propose a hierarchical screened scale mixture of normal (HSSMN) model, which makes provision for flexible modeling of the screened observations. A Markov chain Monte Carlo (MCMC) method using the Gibbs sampler and the Metropolis–Hastings algorithm within the Gibbs sampler is used to perform a Bayesian inference on the HSSMN models and to approximate the optimal predictive rule. A simulation study is given to demonstrate the performance of the proposed predictive discrimination procedure.

  13. LDPC Decoding for Signal Dependent Visible Light Communication Channels

    Institute of Scientific and Technical Information of China (English)

    YUAN Ming; SHA Xiaoshi; LIANG Xiao; JIANG Ming; WANG Jiaheng; ZHAO Chunming

    2016-01-01

    Avalanche photodiodes (APDs) are widely employed in visible light communication (VLC) systems. The general signal dependent Gaussian channel is investigated. Experiment results reveal that symbols on different constellation points under official illuminance inevitably suffer from different levels of noise due to the multiplication process of APDs. In such a case, the conventional log likelihood ratio (LLR) calculation for signal independent channels may cause performance loss. The optimal LLR calculation for the decoder is then derived because of the existence of non-ignorable APD shot noise. To find the decoding thresholds of the optimal and suboptimal detection schemes, the extrinsic information transfer (EXIT) chart is further analyzed. Finally a modified minimum sum algorithm is suggested with reduced complexity and acceptable performance loss. Numerical simulations show that, with a regular (3, 6) low-density parity check (LDPC) code of block length 20,000, a 0.7 dB gain is achieved with our proposed scheme over the LDPC decoder designed for signal independent noise. It is also found that the coding performance is improved for a larger modulation depth.
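
    The key point, that the LLR must account for a level-dependent noise variance, can be written down directly. A minimal sketch for a binary-input Gaussian channel whose noise standard deviation differs between the two transmitted levels (generic notation, not the paper's derivation):

        import numpy as np

        def llr_signal_dependent(y, a0, a1, s0, s1):
            """Exact LLR log p(y|a1) - log p(y|a0) for Gaussian noise whose
            standard deviation depends on the level: s0 for a0, s1 for a1."""
            return (np.log(s0 / s1)
                    - (y - a1) ** 2 / (2 * s1 ** 2)
                    + (y - a0) ** 2 / (2 * s0 ** 2))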

  14. Context adaptive binary arithmetic decoding on transport triggered architectures

    Science.gov (United States)

    Rouvinen, Joona; Jääskeläinen, Pekka; Rintaluoma, Tero; Silvén, Olli; Takala, Jarmo

    2008-02-01

    Video coding standards, such as MPEG-4, H.264, and VC1, define hybrid transform based block motion compensated techniques that employ almost the same coding tools. This observation has been a foundation for defining the MPEG Reconfigurable Multimedia Coding framework that targets to facilitate multi-format codec design. The idea is to send a description of the codec with the bit stream, and to reconfigure the coding tools accordingly on-the-fly. This kind of approach favors software solutions, and is a substantial challenge for the implementers of mobile multimedia devices that aim at high energy efficiency. In particular, as high-definition formats are about to be required from mobile multimedia devices, variable length decoders are becoming a serious bottleneck. Even at current moderate mobile video bitrates, software based variable length decoders swallow a major portion of the resources of a mobile processor. In this paper we present a Transport Triggered Architecture (TTA) based programmable implementation for Context Adaptive Binary Arithmetic de-Coding (CABAC) that is used e.g. in the main profile of H.264 and in JPEG2000. The solution can be used even for other variable length codes.

  15. Learning dynamic Bayesian networks with mixed variables

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned. An automated procedure for specifying prior distributions for the parameters in a dynamic Bayesian network is presented. It is a simple extension of the procedure for ordinary Bayesian networks. Finally, Wölfer's sunspot numbers are analyzed.

  16. Design and Implementation of Single Chip WCDMA High Speed Channel Decoder

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A memory and driving clock efficient design scheme to achieve a WCDMA high-speed channel decoder on a single XILINX XVC1000E FPGA chip is presented. Using a modified MAP algorithm, namely parallel Sliding Window logarithmic Maximum A Posteriori (PSW-log-MAP), the on-chip turbo decoder can decode an information bit with only an average of two clocks per iteration. On the other hand, a highly parallel pipeline Viterbi algorithm is adopted to realize the 256-state convolutional code decoding. The final decoder, driven by an 8× chip clock (30.72 MHz), can concurrently process a data rate up to 2.5 Mbps of turbo coded sequences and a data rate over 400 kbps of convolutional codes. No external memory is needed. Test results show that the decoding performance loses only 0.2-0.3 dB or less compared to floating-point simulation.

  17. Elimination of the Background Noise of the Decoded Image in Fresnel Zone Plate Scanning Holography

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A method of digital high pass filtering in the frequency domain is proposed to eliminate the background noise of the decoded image in Fresnel zone plate scanning holography. The high pass filter is designed as a circular stop, which should be suitable for suppressing the background noise significantly while retaining much of the low frequency information of the object. The principle of high pass filtering is that the Fourier transform of the decoded image is multiplied with the high pass filter. Thus the frequency spectrum of the decoded image without the background noise is achieved. By inverse Fourier transform of the spectrum of the decoded image after the multiplying operation, the decoded image without the background noise is obtained. Both computer simulations and experimental results show that the contrast and the signal-to-noise ratio of the decoded image are significantly improved by digital filtering.
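
    The multiply-in-frequency-domain procedure described above amounts to a few array operations. A minimal sketch (a generic circular-stop high pass filter; the stop radius is a free parameter, not a value from the paper):

        import numpy as np

        def highpass_circular_stop(img, radius):
            """Zero out a circular low-frequency region of the centered spectrum,
            then transform back: the background (near-DC) term is removed."""
            F = np.fft.fftshift(np.fft.fft2(img))
            h, w = img.shape
            yy, xx = np.ogrid[:h, :w]
            dist2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
            F[dist2 <= radius ** 2] = 0.0   # the circular stop
            return np.real(np.fft.ifft2(np.fft.ifftshift(F)))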

  18. Serial Min-max Decoding Algorithm Based on Variable Weighting for Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2013-09-01

    In this paper, we analyze the min-max decoding algorithm for nonbinary LDPC (low-density parity-check) codes and propose a serial min-max decoding algorithm. Combining this with weighted processing of the variable node message, we finally propose a serial min-max decoding algorithm based on variable weighting for nonbinary LDPC codes. The simulation indicates that at a bit error rate of 10^-3, compared with the serial min-max decoding algorithm, the traditional min-max decoding algorithm and the traditional min-sum algorithm, the serial min-max decoding algorithm based on variable weighting can offer additional coding gains of 0.2 dB, 0.8 dB and 1.4 dB respectively in the additive white Gaussian noise channel and under binary phase shift keying modulation.

  19. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches

    OpenAIRE

    Cho, Kyunghyun; van Merrienboer, Bart; Bahdanau, Dzmitry; Bengio, Yoshua

    2014-01-01

    Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and...

  20. Coding/decoding two-dimensional images with orbital angular momentum of light.

    Science.gov (United States)

    Chu, Jiaqi; Li, Xuefeng; Smithwick, Quinn; Chu, Daping

    2016-04-01

    We investigate encoding and decoding of two-dimensional information using the orbital angular momentum (OAM) of light. Spiral phase plates and phase-only spatial light modulators are used in encoding and decoding of OAM states, respectively. We show that off-axis points and spatial variables encoded with a given OAM state can be recovered through decoding with the corresponding complementary OAM state.

  1. Complete Decoding and Reporting of Aviation Routine Weather Reports (METARs)

    Science.gov (United States)

    Lui, Man-Cheung Max

    2014-01-01

    Aviation Routine Weather Report (METAR) provides surface weather information at and around observation stations, including airport terminals. These weather observations are used by pilots for flight planning and by air traffic service providers for managing departure and arrival flights. The METARs are also an important source of weather data for Air Traffic Management (ATM) analysts and researchers at NASA and elsewhere. These researchers use METAR to correlate severe weather events with local or national air traffic actions that restrict air traffic, as one example. A METAR is made up of multiple groups of coded text, each with a specific standard coding format. These groups of coded text are located in two sections of a report: Body and Remarks. The coded text groups in a U.S. METAR are intended to follow the coding standards set by the National Oceanic and Atmospheric Administration (NOAA). However, manual data entry and edits made by a human report observer may result in coded text elements that do not follow the standards, especially in the Remarks section. And contrary to the standards, some significant weather observations are noted only in the Remarks section and not in the Body section of the reports. While human readers can infer the intended meaning of non-standard coding of weather conditions, doing so with a computer program is far more challenging. However, such programmatic pre-processing is necessary to enable efficient and faster database query when researchers need to perform any significant historical weather analysis. Therefore, to support such analysis, a computer algorithm was developed to identify groups of coded text anywhere in a report and to perform subsequent decoding in software. The algorithm considers common deviations from the standards and data entry mistakes made by observers. The implemented software code was tested to decode 12 million reports and the decoding process was able to completely interpret 99.93% of the reports. This...
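
    As a concrete illustration of what a "group of coded text" means, here is a hypothetical minimal decoder for one such group, the METAR wind group (e.g. 27015G25KT: wind from 270 degrees at 15 kt, gusting to 25 kt). The regular expression and names are ours, not from the described system, which must handle many more groups and non-standard deviations:

        import re

        # e.g. "27015G25KT" -> direction 270, speed 15 kt, gust 25 kt
        WIND_RE = re.compile(r"^(?P<direction>\d{3}|VRB)(?P<speed>\d{2,3})"
                             r"(G(?P<gust>\d{2,3}))?KT$")

        def decode_wind(group):
            m = WIND_RE.match(group)
            if m is None:
                return None  # non-standard coding; a real decoder needs fallbacks
            return {"direction": m.group("direction"),
                    "speed_kt": int(m.group("speed")),
                    "gust_kt": int(m.group("gust")) if m.group("gust") else None}

        print(decode_wind("27015G25KT"))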

  2. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.

  3. An Efficient Soft Decoder of Block Codes Based on Compact Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-09-01

    Soft-decision decoding is an NP-hard problem of great interest to developers of communication systems. We present an efficient soft-decision decoder of linear block codes based on the compact genetic algorithm (cGA) and compare its performance with various other decoding algorithms, including the Shakeel algorithm. The proposed algorithm uses the dual code, in contrast to the Shakeel algorithm, which uses the code itself. Hence, this new approach reduces the decoding complexity of high-rate codes. The complexity and an optimized version of this new algorithm are also presented and discussed.

  4. Continuous motion decoding from EMG using independent component analysis and adaptive model training.

    Science.gov (United States)

    Zhang, Qin; Xiong, Caihua; Chen, Wenbin

    2014-01-01

    Surface electromyography (EMG) is widely used to decode human motion intention for robot movement control. Traditional motion decoding uses pattern recognition to provide a binary control command, which can only move the robot in predefined, limited patterns. In this work, we propose a motion decoding method which can accurately estimate 3-dimensional (3-D) continuous upper limb motion from multi-channel EMG signals alone. In order to protect the muscle activity signals from the motion artifacts and muscle crosstalk that are especially evident in upper limb motion, independent component analysis (ICA) was applied to extract the independent source EMG signals. The motion data was also reduced from a 4-manifold to a 2-manifold by principal component analysis (PCA). A hidden Markov model (HMM) was proposed to decode the motion from the EMG signals after the model was trained by an adaptive model identification process. Experimental data were used to train the decoding model and validate the motion decoding performance. By comparing the decoded motion with the measured motion, it is found that the proposed motion decoding strategy was feasible for decoding 3-D continuous motion from EMG signals.
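
    A minimal sketch of the preprocessing stages named above (FastICA standing in for the ICA step and PCA for the manifold reduction; array shapes and names are placeholders, not the authors' data or implementation):

        import numpy as np
        from sklearn.decomposition import FastICA, PCA

        rng = np.random.default_rng(0)
        emg = rng.standard_normal((5000, 8))     # placeholder: 8-channel EMG recording
        motion = rng.standard_normal((5000, 4))  # placeholder: 4-D measured motion

        sources = FastICA(n_components=8, random_state=0).fit_transform(emg)
        motion_2d = PCA(n_components=2).fit_transform(motion)  # 4-manifold -> 2-manifold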

  5. Analysis and design of raptor codes for joint decoding using Information Content evolution

    CERN Document Server

    Venkiah, Auguste; Declercq, David

    2007-01-01

    In this paper, we present an analysis of the convergence of raptor codes under joint decoding over the binary-input additive white Gaussian noise channel (BIAWGNC), and derive an optimization method. We use Information Content evolution under a Gaussian approximation, and focus on a new decoding scheme that proves to be more efficient: the joint decoding of the two code components of the raptor code. In our general model, the classical tandem decoding scheme appears as a subcase, and thus the design of LT codes is also possible.

  6. Sensitivity analysis of the channel estimation deviation to the MAP decoding algorithm

    Institute of Scientific and Technical Information of China (English)

    WAN Ke; FAN Ping-zhi

    2006-01-01

    As a necessary input parameter for the maximum a posteriori (MAP) decoding algorithm, the SNR is normally obtained from the channel estimation unit. Corresponding research indicated that SNR estimation deviation degrades the performance of turbo decoding significantly. In this paper, the MAP decoding algorithm with SNR estimation deviation was investigated in detail, and the degradation mechanism of turbo decoding was explained analytically. The theoretical analysis and computer simulation disclosed the specific reasons for the performance degradation when the SNR estimate was less than the actual value, and for the higher sensitivity of SNR estimation for long-frame turbo codes.

  7. On the Joint Error-and-Erasure Decoding for Irreducible Polynomial Remainder Codes

    CERN Document Server

    Yu, Jiun-Hung

    2012-01-01

    A general class of polynomial remainder codes is considered. Such codes are very flexible in rate and length and include Reed-Solomon codes as a special case. As an extension of previous work, two joint error-and-erasure decoding approaches are proposed. In particular, both decoding approaches by means of a fixed transform are treated in a way compatible with error-only decoding. In the end, a collection of gcd-based decoding algorithms is obtained, some of which appear to be new even when specialized to Reed-Solomon codes.

  8. VLSI design of turbo decoder for integrated communication system on a chip applications

    Science.gov (United States)

    Fang, Wai-Chi; Sethuram, Ashwin; Belevi, Kemal

    2003-01-01

    A high-throughput low-power turbo decoder core has been developed for integrated communication system applications such as satellite communications, wireless LAN, digital TV, cable modem, Digital Video Broadcast (DVB), and xDSL systems. The turbo decoder is based on convolutional constituent codes, which outperform all other Forward Error Correction techniques. This turbo decoder core is parameterizable and can be modified easily to fit any size for advanced communication system-on-chip products. The turbo decoder core provides Forward Error Correction of up to 15 Mbits/sec on a 0.13-micron CMOS FPGA prototyping chip at a power of 0.1 watts.

  9. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit, and A-, B-, Γ-, and LLR output calculation modules. Measurements of complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and a comparison between those architectures.

  10. Bayesian Query-Focused Summarization

    CERN Document Server

    Daumé, Hal

    2009-01-01

    We present BayeSum (for "Bayesian summarization"), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.

  11. Numeracy, frequency, and Bayesian reasoning

    Directory of Open Access Journals (Sweden)

    Gretchen B. Chapman

    2009-02-01

    Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
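
    To make the contrast concrete (an illustrative textbook-style example, not an item from the study): stated as single-event probabilities, a problem might read "the prevalence of a disease is 1%, the test detects it 80% of the time, and it gives a false positive 10% of the time; what is the chance that a person who tests positive has the disease?" In natural frequencies the same problem becomes "out of 1,000 people, 10 have the disease; 8 of those 10 test positive, as do 99 of the 990 healthy people", from which the answer can be read off as 8 / (8 + 99), roughly 7.5%.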

  12. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.
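
    For reference, the conditional intensity function mentioned above has a simple closed form for the common exponential-kernel Hawkes process; a minimal sketch (generic parameter names mu, alpha, beta, not necessarily the paper's notation):

        import numpy as np

        def hawkes_intensity(t, events, mu, alpha, beta):
            """Conditional intensity of an exponential-kernel Hawkes process:
            lambda(t) = mu + sum over past events t_i < t of
            alpha * exp(-beta * (t - t_i))."""
            past = np.asarray([ti for ti in events if ti < t])
            return mu + alpha * np.exp(-beta * (t - past)).sum()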

  14. Bayesian homeopathy: talking normal again.

    Science.gov (United States)

    Rutten, A L B

    2007-04-01

    Homeopathy has a communication problem: important homeopathic concepts are not understood by conventional colleagues. Homeopathic terminology seems to be comprehensible only after practical experience of homeopathy. The main problem lies in different handling of diagnosis. In conventional medicine diagnosis is the starting point for randomised controlled trials to determine the effect of treatment. In homeopathy diagnosis is combined with other symptoms and personal traits of the patient to guide treatment and predict response. Broadening our scope to include diagnostic as well as treatment research opens the possibility of multifactorial reasoning. Adopting Bayesian methodology opens the possibility of investigating homeopathy in everyday practice and of describing some aspects of homeopathy in conventional terms.

  15. Bayesian credible interval construction for Poisson statistics

    Institute of Scientific and Technical Information of China (English)

    ZHU Yong-Sheng

    2008-01-01

    The construction of the Bayesian credible (confidence) interval for a Poisson observable including both signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background be no larger than the observed events in order to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
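
    A minimal sketch of the basic construction (a flat prior on the signal s with known background b > 0, so the posterior is proportional to exp(-(s+b)) (s+b)^n for s >= 0, integrated numerically for an upper credible limit; this illustrates the idea only and is not the BPOCI routine):

        import numpy as np

        def poisson_upper_limit(n, b, cl=0.90, s_max=50.0, num=20001):
            """90% (by default) Bayesian upper limit on a Poisson signal s,
            given n observed events and known background b, flat prior on s."""
            s = np.linspace(0.0, s_max, num)
            log_post = -(s + b) + n * np.log(s + b)   # unnormalized log posterior
            post = np.exp(log_post - log_post.max())
            cdf = np.cumsum(post)
            cdf /= cdf[-1]
            return s[np.searchsorted(cdf, cl)]

        print(poisson_upper_limit(n=3, b=1.0))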

  16. Advances in Bayesian Modeling in Educational Research

    Science.gov (United States)

    Levy, Roy

    2016-01-01

    In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…

  17. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using... for complex networks can be derived, and points out relevant literature...

  18. Modeling Diagnostic Assessments with Bayesian Networks

    Science.gov (United States)

    Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego

    2007-01-01

    This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

  19. Using Bayesian Networks to Improve Knowledge Assessment

    Science.gov (United States)

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…

  20. The Bayesian Revolution Approaches Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2007-01-01

    This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…