WorldWideScience

Sample records for bayesian population decoding

  1. Bayesian decoding using unsorted spikes in the rat hippocampus

    OpenAIRE

    Kloosterman, Fabian; Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A

    2013-01-01

    A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametr...
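
    The record above describes a clusterless decoder that maps spike waveform features directly to covariates. As background, here is a minimal sketch of the standard sorted-spike Bayesian decoder that such methods generalize, assuming independent Poisson-spiking neurons with known tuning curves; all names and toy numbers are illustrative, not from the paper.

```python
import numpy as np

def decode_position(spike_counts, tuning_curves, dt, prior=None):
    """Bayesian position decoding with an independent-Poisson likelihood.

    spike_counts  : (n_neurons,) spikes observed in one time bin
    tuning_curves : (n_neurons, n_positions) expected firing rates (Hz)
    dt            : bin width in seconds
    Returns the posterior over the discretized positions.
    """
    n_pos = tuning_curves.shape[1]
    log_prior = np.log(np.full(n_pos, 1.0 / n_pos) if prior is None else prior)
    rates = tuning_curves * dt                      # expected counts per bin
    # log P(n|x) = sum_i [ n_i*log(lam_i(x)) - lam_i(x) ], up to a constant in x
    log_like = spike_counts @ np.log(rates + 1e-12) - rates.sum(axis=0)
    log_post = log_prior + log_like
    log_post -= log_post.max()                      # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# toy example: 3 neurons with Gaussian tuning over 50 positions
x = np.linspace(0, 1, 50)
tc = np.stack([20 * np.exp(-(x - c)**2 / 0.02) for c in (0.2, 0.5, 0.8)])
posterior = decode_position(np.array([0, 4, 1]), tc, dt=0.25)
print(x[posterior.argmax()])
```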

  2. Kernel density compression for real-time Bayesian encoding/decoding of unsorted hippocampal spikes

    OpenAIRE

    Sodkomkham, Danaipat; Ciliberti, Davide; Wilson, Matthew A.; Fukui, Ken-ichi; Moriyama, Koichi; Numao, Masayuki; Kloosterman, Fabian

    2015-01-01

To gain a better understanding of how neural ensembles communicate and process information, neural decoding algorithms are used to extract information encoded in their spiking activity. Bayesian decoding is one of the most widely used neural population decoding approaches for extracting information from the ensemble spiking activity of rat hippocampal neurons. Recently it has been shown how Bayesian decoding can be implemented without the intermediate step of sorting spike waveforms into groups of singl...

  3. Adaptive decoding for brain-machine interfaces through Bayesian parameter updates.

    Science.gov (United States)

    Li, Zheng; O'Doherty, Joseph E; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2011-12-01

    Brain-machine interfaces (BMIs) transform the activity of neurons recorded in motor areas of the brain into movements of external actuators. Representation of movements by neuronal populations varies over time, during both voluntary limb movements and movements controlled through BMIs, due to motor learning, neuronal plasticity, and instability in recordings. To ensure accurate BMI performance over long time spans, BMI decoders must adapt to these changes. We propose the Bayesian regression self-training method for updating the parameters of an unscented Kalman filter decoder. This novel paradigm uses the decoder's output to periodically update its neuronal tuning model in a Bayesian linear regression. We use two previously known statistical formulations of Bayesian linear regression: a joint formulation, which allows fast and exact inference, and a factorized formulation, which allows the addition and temporary omission of neurons from updates but requires approximate variational inference. To evaluate these methods, we performed offline reconstructions and closed-loop experiments with rhesus monkeys implanted cortically with microwire electrodes. Offline reconstructions used data recorded in areas M1, S1, PMd, SMA, and PP of three monkeys while they controlled a cursor using a handheld joystick. The Bayesian regression self-training updates significantly improved the accuracy of offline reconstructions compared to the same decoder without updates. We performed 11 sessions of real-time, closed-loop experiments with a monkey implanted in areas M1 and S1. These sessions spanned 29 days. The monkey controlled the cursor using the decoder with and without updates. The updates maintained control accuracy and did not require information about monkey hand movements, assumptions about desired movements, or knowledge of the intended movement goals as training signals. These results indicate that Bayesian regression self-training can maintain BMI control accuracy over long
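
    As a sketch of the joint-formulation idea described above, the snippet below performs a conjugate Gaussian (Bayesian linear regression) update of a neuron-tuning model from the decoder's own output. It is a generic illustration, not the authors' unscented-Kalman-filter pipeline; the variable names and toy data are invented.

```python
import numpy as np

def bayes_linreg_update(mean, prec, X, y, noise_var=1.0):
    """One conjugate Bayesian linear-regression update (Gaussian prior/likelihood).

    mean, prec : prior mean (d,) and precision matrix (d, d) of the weights
    X, y       : new design matrix (n, d) and targets (n,), here taken
                 from the decoder's own output (self-training)
    Returns the posterior mean and precision, usable as the next prior.
    """
    prec_post = prec + X.T @ X / noise_var
    mean_post = np.linalg.solve(prec_post, prec @ mean + X.T @ y / noise_var)
    return mean_post, prec_post

# toy use: periodically refit a neuron's tuning to decoded kinematics
rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5])
X = rng.normal(size=(100, 2))                      # decoded kinematics (stand-in)
y = X @ w_true + rng.normal(scale=0.1, size=100)   # firing rates (stand-in)
m, P = bayes_linreg_update(np.zeros(2), np.eye(2), X, y, noise_var=0.01)
print(m)   # close to w_true
```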

  4. A Bayesian Bootstrap for a Finite Population

    OpenAIRE

    Lo, Albert Y.

    1988-01-01

    A Bayesian bootstrap for a finite population is introduced; its small-sample distributional properties are discussed and compared with those of the frequentist bootstrap for a finite population. It is also shown that the two are first-order asymptotically equivalent.
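
    For intuition, the following sketch implements the exchangeable (Rubin-style) Bayesian bootstrap, which reweights observations with flat Dirichlet weights rather than resampling them with replacement as the frequentist bootstrap does; Lo's finite-population version differs in detail, so treat this purely as an illustration.

```python
import numpy as np

def bayesian_bootstrap_means(sample, n_draws=5000, rng=None):
    """Posterior draws of the population mean via the Bayesian bootstrap.

    Each draw reweights the observed values with Dirichlet(1, ..., 1)
    weights instead of resampling with replacement.
    """
    rng = rng or np.random.default_rng()
    n = len(sample)
    weights = rng.dirichlet(np.ones(n), size=n_draws)   # (n_draws, n)
    return weights @ np.asarray(sample)

data = [3.1, 2.4, 5.0, 4.2, 3.8]
draws = bayesian_bootstrap_means(data)
print(draws.mean(), np.percentile(draws, [2.5, 97.5]))
```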

  5. Approximate Bayesian computation in population genetics.

    OpenAIRE

    Beaumont, Mark A; Zhang, Wenyang; Balding, David J.

    2002-01-01

    We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations. This is achieved by fitting a local-linear regression of simulated parameter values on simulated summ...
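
    A minimal sketch of the rejection-plus-regression idea summarized above: simulate from the prior, accept the parameter values whose summary statistics fall closest to the observed one, then adjust the accepted values with a kernel-weighted local-linear regression. Function names, the toy model, and the tolerance are all illustrative assumptions.

```python
import numpy as np

def abc_regression(obs_stat, simulate, prior_sample, n_sims=10000, accept_frac=0.05):
    """ABC with local-linear regression adjustment (Beaumont-style sketch).

    simulate(theta) -> scalar summary statistic; prior_sample() -> theta draw.
    Accepted parameters are adjusted toward the observed summary by a
    weighted linear regression of theta on the summaries.
    """
    thetas = np.array([prior_sample() for _ in range(n_sims)])
    stats = np.array([simulate(t) for t in thetas])
    dist = np.abs(stats - obs_stat)
    eps = np.quantile(dist, accept_frac)
    keep = dist <= eps
    th, st, d = thetas[keep], stats[keep], dist[keep]
    w = 1 - (d / eps) ** 2                          # Epanechnikov weights
    # weighted least squares of theta on (stat - obs_stat)
    X = np.column_stack([np.ones(st.size), st - obs_stat])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ th)
    return th - beta[1] * (st - obs_stat)           # regression-adjusted draws

# toy example: infer a normal mean from the sample mean of 20 points
rng = np.random.default_rng(1)
post = abc_regression(1.3,
                      simulate=lambda t: rng.normal(t, 1 / np.sqrt(20)),
                      prior_sample=lambda: rng.uniform(-5, 5))
print(post.mean(), post.std())
```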

  6. Book review: Bayesian analysis for population ecology

    Science.gov (United States)

    Link, William A.

    2011-01-01

    Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)

  7. IQ Predicts Word Decoding Skills in Populations with Intellectual Disabilities

    Science.gov (United States)

    Levy, Yonata

    2011-01-01

    This is a study of word decoding in adolescents with Down syndrome and in adolescents with Intellectual Deficits of unknown etiology. It was designed as a replication of studies of word decoding in English speaking and in Hebrew speaking adolescents with Williams syndrome ([0230] and [0235]). Participants' IQ was matched to IQ in the groups with…

  8. Emergence of optimal decoding of population codes through STDP.

    Science.gov (United States)

    Habenschuss, Stefan; Puhr, Helmut; Maass, Wolfgang

    2013-06-01

    The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli. PMID:23517096
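
    The fixed point this learning process is claimed to reach can be written down directly: for independent Poisson neurons, a linear readout with weights w_ki = log f_i(k) and bias -sum_i f_i(k)*dt, normalized by soft winner-take-all competition, computes the posterior over hidden causes. Below is a toy sketch of that readout (not the STDP rule itself); the tuning values are invented.

```python
import numpy as np

def optimal_readout(spikes, tuning, dt):
    """Linear log-likelihood readout for K hidden causes (Poisson neurons).

    tuning : (K, n_neurons) expected rates f_i(k); the theoretically optimal
    readout weights are w_ki = log f_i(k), with bias -sum_i f_i(k)*dt.
    Lateral inhibition is modeled here by the softmax normalization.
    """
    W = np.log(tuning * dt + 1e-12)           # optimal weights
    b = -(tuning * dt).sum(axis=1)            # biases
    u = W @ spikes + b                        # readout membrane potentials
    p = np.exp(u - u.max())
    return p / p.sum()                        # posterior over causes

# two hidden causes, four sensory neurons
tuning = np.array([[20., 5., 5., 20.], [5., 20., 20., 5.]])
print(optimal_readout(np.array([3, 0, 1, 2]), tuning, dt=0.1))
```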

  9. Brain Decoding-Classification of Hand Written Digits from fMRI Data Employing Bayesian Networks

    Science.gov (United States)

    Yargholi, Elahe'; Hossein-Zadeh, Gholam-Ali

    2016-01-01

We are frequently exposed to handwritten digits 0–9 in modern life. Success in decoding and classifying handwritten digits helps us understand the corresponding brain mechanisms and processes and aids the design of more efficient brain–computer interfaces. However, all digits belong to the same semantic category, and the similarity in appearance of handwritten digits makes this decoding-classification a challenging problem. In the present study, for the first time, an augmented naïve Bayes classifier was used to classify functional Magnetic Resonance Imaging (fMRI) measurements and decode handwritten digits, taking advantage of brain connectivity information in the decoding-classification. fMRI was recorded from three healthy participants aged 25–30. Results in different brain lobes (frontal, occipital, parietal, and temporal) show that utilizing connectivity information significantly improves decoding-classification, and the capabilities of the different lobes in decoding-classification of handwritten digits were compared to each other. In addition, the most contributing areas and brain connectivities in each lobe were determined, and connectivities with short distances between their endpoints were recognized to be more efficient. Moreover, a data-driven method was applied to investigate the similarity of brain areas in responding to stimuli, revealing both similarly active areas and active mechanisms during this experiment. An interesting finding was that several networks were active while participants viewed handwritten digits (visual, working memory, motor, and language processing), but the one most relevant to the task, according to the voxel selection, was the language-processing network. PMID:27468261

  10. Bayesian Variable Selection for Detecting Adaptive Genomic Differences Among Populations

    OpenAIRE

    Riebler, Andrea; Held, Leonhard; Stephan, Wolfgang

    2008-01-01

    We extend an Fst-based Bayesian hierarchical model, implemented via Markov chain Monte Carlo, for the detection of loci that might be subject to positive selection. This model divides the Fst-influencing factors into locus-specific effects, population-specific effects, and effects that are specific for the locus in combination with the population. We introduce a Bayesian auxiliary variable for each locus effect to automatically select nonneutral locus effects. As a by-product, the efficiency ...

  11. Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives

    OpenAIRE

    Green, Adam W.; Bailey, Larissa L.

    2015-01-01

    Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BM...

  12. What is the `relevant population' in Bayesian forensic inference?

    OpenAIRE

    Brümmer, Niko; de Villiers, Edward

    2014-01-01

    In works discussing the Bayesian paradigm for presenting forensic evidence in court, the concept of a `relevant population' is often mentioned, without a clear definition of what is meant, and without recommendations of how to select such populations. This note is to try to better understand this concept. Our analysis is intended to be general enough to be applicable to different forensic technologies and we shall consider both DNA profiling and speaker recognition as examples.

  13. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Science.gov (United States)

    Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J

    2015-10-01

    The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888

  14. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference.

    Directory of Open Access Journals (Sweden)

    Dario Cuevas Rivera

    2015-10-01

    Full Text Available The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.

  15. Bayesian inference of population size history from multiple loci

    Directory of Open Access Journals (Sweden)

    Drummond Alexei J

    2008-10-01

Full Text Available Background: Effective population size (Ne) is related to genetic variability and is a basic parameter in many models of population genetics. A number of methods for inferring current and past population sizes from genetic data have been developed since J. F. C. Kingman introduced the n-coalescent in 1982. Here we present the Extended Bayesian Skyline Plot, a non-parametric Bayesian Markov chain Monte Carlo algorithm that extends a previous coalescent-based method in several ways, including the ability to analyze multiple loci. Results: Through extensive simulations we show the accuracy and limitations of inferring population size as a function of the amount of data, including recovering information about evolutionary bottlenecks. We also analyzed two real data sets to demonstrate the behavior of the new method: a single-gene Hepatitis C virus data set sampled from Egypt and a 10-locus Drosophila ananassae data set representing 16 different populations. Conclusion: The results demonstrate the essential role of multiple loci in recovering population size dynamics. Multi-locus data from a small number of individuals can precisely recover past bottlenecks in population size that cannot be characterized by analysis of a single locus. We also demonstrate that sequence data quality is important because even moderate levels of sequencing errors result in a considerable decrease in estimation accuracy for realistic levels of population genetic variability.

  16. Bayesian Optimization Algorithm, Population Sizing, and Time to Convergence

    Energy Technology Data Exchange (ETDEWEB)

    Pelikan, M.; Goldberg, D.E.; Cantu-Paz, E.

    2000-01-19

This paper analyzes convergence properties of the Bayesian optimization algorithm (BOA). It situates the BOA within the problem-decomposition framework frequently used to model and understand the behavior of simple genetic algorithms. The growth of the population size and of the number of generations until convergence with respect to problem size is analyzed theoretically. The theoretical results are supported by a number of experiments.

  17. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (vanDyk et al. 2009, Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties---age, metallicity, helium abundance, distance, absorption, and initial mass---are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...

  18. [Contribution of computers to pharmacokinetics, Bayesian approach and population pharmacokinetics].

    Science.gov (United States)

    Hecquet, B

    1995-12-01

A major objective for pharmacokineticists is to help practitioners define drug administration protocols. Protocols are generally designed for all patients, but interindividual variability calls for monitoring each patient. Computers are widely used to determine pharmacokinetic parameters and to individualize drug administration. Several examples are briefly described: terminal half-life determination by regression; model fitting to experimental data; Bayesian statistics for individual dose adaptation; and population pharmacokinetic methods for parameter estimation. These methods do not replace the pharmacokineticist's judgment, but they can make drug administration that takes individual characteristics into account possible. PMID:8680074

  19. Bayesian variable selection for detecting adaptive genomic differences among populations.

    Science.gov (United States)

    Riebler, Andrea; Held, Leonhard; Stephan, Wolfgang

    2008-03-01

    We extend an F(st)-based Bayesian hierarchical model, implemented via Markov chain Monte Carlo, for the detection of loci that might be subject to positive selection. This model divides the F(st)-influencing factors into locus-specific effects, population-specific effects, and effects that are specific for the locus in combination with the population. We introduce a Bayesian auxiliary variable for each locus effect to automatically select nonneutral locus effects. As a by-product, the efficiency of the original approach is improved by using a reparameterization of the model. The statistical power of the extended algorithm is assessed with simulated data sets from a Wright-Fisher model with migration. We find that the inclusion of model selection suggests a clear improvement in discrimination as measured by the area under the receiver operating characteristic (ROC) curve. Additionally, we illustrate and discuss the quality of the newly developed method on the basis of an allozyme data set of the fruit fly Drosophila melanogaster and a sequence data set of the wild tomato Solanum chilense. For data sets with small sample sizes, high mutation rates, and/or long sequences, however, methods based on nucleotide statistics should be preferred. PMID:18245358

  20. Modelling of population dynamics of red king crab using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Bakanev Sergey ...

    2012-10-01

Modeling population dynamics based on the Bayesian approach makes it possible to resolve the above issues successfully. Integrating data from various studies into a unified model based on Bayesian parameter estimation provides a much more detailed description of the processes occurring in the population.

  1. Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives.

    Directory of Open Access Journals (Sweden)

    Adam W Green

Full Text Available Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BMPVA) using a dynamic occupancy model to quantify the characteristics of two wood frog (Lithobates sylvatica) metapopulations resulting in sustainable populations, and we demonstrate how the results could be used to define meaningful objectives that serve as the basis of adaptive management. We explored scenarios involving metapopulations with different numbers of patches (pools) using estimates of breeding occurrence and successful metamorphosis from two study areas to estimate the probability of quasi-extinction and calculate the proportion of vernal pools producing metamorphs. Our results suggest that ≥50 pools are required to ensure long-term persistence with approximately 16% of pools producing metamorphs in stable metapopulations. We demonstrate one way to incorporate the BMPVA results into a utility function that balances the trade-offs between ecological and financial objectives, which can be used in an adaptive management framework to make optimal, transparent decisions. Our approach provides a framework for using a standard method (i.e., PVA) and available information to inform a formal decision process to determine optimal and timely management policies.

  2. Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives.

    Science.gov (United States)

    Green, Adam W; Bailey, Larissa L

    2015-01-01

    Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BMPVA) using a dynamic occupancy model to quantify the characteristics of two wood frog (Lithobates sylvatica) metapopulations resulting in sustainable populations, and we demonstrate how the results could be used to define meaningful objectives that serve as the basis of adaptive management. We explored scenarios involving metapopulations with different numbers of patches (pools) using estimates of breeding occurrence and successful metamorphosis from two study areas to estimate the probability of quasi-extinction and calculate the proportion of vernal pools producing metamorphs. Our results suggest that ≥50 pools are required to ensure long-term persistence with approximately 16% of pools producing metamorphs in stable metapopulations. We demonstrate one way to incorporate the BMPVA results into a utility function that balances the trade-offs between ecological and financial objectives, which can be used in an adaptive management framework to make optimal, transparent decisions. Our approach provides a framework for using a standard method (i.e., PVA) and available information to inform a formal decision process to determine optimal and timely management policies. PMID:26658734
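
    To make the quasi-extinction idea concrete, here is a toy Monte Carlo occupancy simulation that estimates the probability a pool network falls below a productivity threshold within a time horizon. All parameter values are invented for illustration and are not the estimates from this study.

```python
import numpy as np

def quasi_extinction_prob(n_pools, years=50, p_persist=0.9, p_colonize=0.3,
                          threshold=0.16, n_sims=2000, rng=None):
    """Monte Carlo probability that a pool metapopulation falls below a
    quasi-extinction threshold (fraction of occupied pools).

    A toy dynamic-occupancy simulation; parameter values are invented
    for illustration, not taken from the study above.
    """
    rng = rng or np.random.default_rng(0)
    extinct = 0
    for _ in range(n_sims):
        occ = np.ones(n_pools, dtype=bool)          # start fully occupied
        for _ in range(years):
            stay = rng.random(n_pools) < p_persist
            # colonization pressure scales with current occupancy
            col = rng.random(n_pools) < p_colonize * occ.mean()
            occ = (occ & stay) | (~occ & col)
            if occ.mean() < threshold:
                extinct += 1
                break
    return extinct / n_sims

for n in (10, 25, 50):    # larger networks should persist more reliably
    print(n, quasi_extinction_prob(n))
```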

  3. Bayesian Analysis of Multiple Populations in Galactic Globular Clusters

    Science.gov (United States)

    Wagner-Kaiser, Rachel A.; Sarajedini, Ata; von Hippel, Ted; Stenning, David; Piotto, Giampaolo; Milone, Antonino; van Dyk, David A.; Robinson, Elliot; Stein, Nathan

    2016-01-01

We use GO 13297 Cycle 21 Hubble Space Telescope (HST) observations and archival GO 10775 Cycle 14 HST ACS Treasury observations of Galactic Globular Clusters to find and characterize multiple stellar populations. Determining how globular clusters are able to create and retain enriched material to produce several generations of stars is key to understanding how these objects formed and how they have affected the structural, kinematic, and chemical evolution of the Milky Way. We employ a sophisticated Bayesian technique with an adaptive MCMC algorithm to simultaneously fit the age, distance, absorption, and metallicity for each cluster. At the same time, we also fit unique helium values to two distinct populations of the cluster and determine the relative proportions of those populations. Our unique numerical approach allows objective and precise analysis of these complicated clusters, providing posterior distribution functions for each parameter of interest. We use these results to gain a better understanding of multiple populations in these clusters and their role in the history of the Milky Way. Support for this work was provided by NASA through grant numbers HST-GO-10775 and HST-GO-13297 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. This material is based upon work supported by the National Aeronautics and Space Administration under Grant NNX11AF34G issued through the Office of Space Science. This project was supported by the National Aeronautics & Space Administration through the University of Central Florida's NASA Florida Space Grant Consortium.

  4. Bayesian and Non–Bayesian Estimation for Two Generalized Exponential Populations Under Joint Type II Censored Scheme

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2014-05-01

Full Text Available In this paper, Bayesian and non-Bayesian estimators have been obtained for two generalized exponential populations under a joint type II censoring scheme, generalizing the results of Balakrishnan and Rasouli (2008) and Shafay et al. (2013). The maximum likelihood estimators (MLEs) of the parameters and the Bayes estimators have been developed under the squared error loss function as well as under the LINEX loss function. Moreover, approximate confidence regions are also discussed and compared with two bootstrap confidence regions. The MLE and three confidence intervals for the stress–strength parameter are also explored. A numerical illustration for these new results is given.

  5. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  6. Kernel Approximate Bayesian Computation for Population Genetic Inferences

    OpenAIRE

    Nakagome, Shigeki; Fukumizu, Kenji; Mano, Shuhei

    2012-01-01

    Approximate Bayesian computation (ABC) is a likelihood-free approach for Bayesian inferences based on a rejection algorithm method that applies a tolerance of dissimilarity between summary statistics from observed and simulated data. Although several improvements to the algorithm have been proposed, none of these improvements avoid the following two sources of approximation: 1) lack of sufficient statistics: sampling is not from the true posterior density given data but from an approximate po...

  7. A population-based Bayesian approach to the minimal model of glucose and insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    2005-01-01

for a whole population. Traditionally it has been analysed in a deterministic set-up with only error terms on the measurements. In this work we adopt a Bayesian graphical model to describe the coupled minimal model that accounts for both measurement and process variability, and the model is extended...... to a population-based model. The estimation of the parameters is efficiently implemented in a Bayesian approach where posterior inference is made through the use of Markov chain Monte Carlo techniques. Hereby we obtain a powerful and flexible modelling framework for regularizing the ill-posed estimation problem...

  8. A Bayesian Approach to Identifying New Risk Factors for Dementia: A Nationwide Population-Based Study.

    Science.gov (United States)

    Wen, Yen-Hsia; Wu, Shihn-Sheng; Lin, Chun-Hung Richard; Tsai, Jui-Hsiu; Yang, Pinchen; Chang, Yang-Pei; Tseng, Kuan-Hua

    2016-05-01

Dementia is one of the most disabling and burdensome health conditions worldwide. In this study, we identified new potential risk factors for dementia from nationwide longitudinal population-based data by using Bayesian statistics. We first tested the consistency of the results obtained using Bayesian statistics with those obtained using classical frequentist probability for 4 recognized risk factors for dementia, namely severe head injury, depression, diabetes mellitus, and vascular diseases. Then, we used Bayesian statistics to verify 2 new potential risk factors for dementia, namely hearing loss and senile cataract, determined from the Taiwan's National Health Insurance Research Database. We included a total of 6546 (6.0%) patients diagnosed with dementia. We observed older age, female sex, and lower income as independent risk factors for dementia. Moreover, we verified the 4 recognized risk factors for dementia in the older Taiwanese population; their odds ratios (ORs) ranged from 3.469 to 1.207. Furthermore, we observed that hearing loss (OR = 1.577) and senile cataract (OR = 1.549) were associated with an increased risk of dementia. We found that the results obtained using Bayesian statistics for assessing risk factors for dementia, such as head injury, depression, DM, and vascular diseases, were consistent with those obtained using classical frequentist probability. Moreover, hearing loss and senile cataract were found to be potential risk factors for dementia in the older Taiwanese population. Bayesian statistics could help clinicians explore other potential risk factors for dementia and for developing appropriate treatment strategies for these patients. PMID:27227925
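
    As a generic illustration of the Bayesian treatment of a single risk factor (not the authors' model), a 2x2-table odds ratio can be given a posterior by placing independent Beta priors on the event probabilities in the exposed and unexposed groups; the counts below are invented.

```python
import numpy as np

def bayes_odds_ratio(exposed_cases, exposed_n, unexposed_cases, unexposed_n,
                     n_draws=20000, rng=None):
    """Posterior median and 95% credible interval of an odds ratio,
    with Beta(1, 1) priors on each group's event probability (sketch)."""
    rng = rng or np.random.default_rng(0)
    p1 = rng.beta(1 + exposed_cases, 1 + exposed_n - exposed_cases, n_draws)
    p0 = rng.beta(1 + unexposed_cases, 1 + unexposed_n - unexposed_cases, n_draws)
    or_draws = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return np.median(or_draws), np.percentile(or_draws, [2.5, 97.5])

# invented counts for illustration only
print(bayes_odds_ratio(120, 800, 500, 5700))
```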

9. Bayesian Population Physiologically-Based Pharmacokinetic (PBPK) Approach for a Physiologically Realistic Characterization of Interindividual Variability in Clinically Relevant Populations.

    Directory of Open Access Journals (Sweden)

    Markus Krauss

Full Text Available Interindividual variability in anatomical and physiological properties results in significant differences in drug pharmacokinetics. The consideration of such pharmacokinetic variability supports optimal drug efficacy and safety for each single individual, e.g. by identification of individual-specific dosings. One clear objective in clinical drug development is therefore a thorough characterization of the physiological sources of interindividual variability. In this work, we present a Bayesian population physiologically-based pharmacokinetic (PBPK) approach for the mechanistically and physiologically realistic identification of interindividual variability. The consideration of a generic and highly detailed mechanistic PBPK model structure enables the integration of large amounts of prior physiological knowledge, which is then updated with new experimental data in a Bayesian framework. A covariate model integrates known relationships of physiological parameters to age, gender and body height. We further provide a framework for estimation of the a posteriori parameter dependency structure at the population level. The approach is demonstrated considering a cohort of healthy individuals and theophylline as an application example. The variability and co-variability of physiological parameters are specified within the population. Significant correlations are identified between population parameters and are applied for individual- and population-specific visual predictive checks of the pharmacokinetic behavior, which leads to improved results compared to present population approaches. In the future, the integration of a generic PBPK model into a hierarchical approach allows for extrapolations to other populations or drugs, while the Bayesian paradigm allows for an iterative application of the approach and thereby a continuous updating of physiological knowledge with new data. This will facilitate decision making e.g. from preclinical to

  10. Inferring population history with DIYABC: a user-friendly approach to Approximate Bayesian Computation

    OpenAIRE

    Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P.; Marin, Jean-Michel; Balding, David J.; Guillemaud, Thomas; Estoup, Arnaud

    2008-01-01

    Summary: Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in...

  11. Bayesian Estimation of Population-Level Trends in Measures of Health Status

    OpenAIRE

    Mariel M. Finucane; Paciorek, Christopher J; Danaei, Goodarz; Ezzati, Majid

    2014-01-01

Improving health worldwide will require rigorous quantification of population-level trends in health status. However, global-level surveys are not available, forcing researchers to rely on fragmentary country-specific data of varying quality. We present a Bayesian model that systematically combines disparate data to make country-, region- and global-level estimates of time trends in important health indicators. The model allows for time and age nonlinearity, and it borrows strength in...

  12. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters III: Analysis of 30 Clusters

    CERN Document Server

    Wagner-Kaiser, R; Sarajedini, A; von Hippel, T; van Dyk, D A; Robinson, E; Stein, N; Jefferys, W H

    2016-01-01

We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of 30 Galactic Globular Clusters to characterize two distinct stellar populations. A sophisticated Bayesian technique is employed to simultaneously sample the joint posterior distribution of age, distance, and extinction for each cluster, as well as unique helium values for two populations within each cluster and the relative proportion of those populations. We find that the helium differences between the two populations in the clusters fall in the range of ~0.04 to 0.11. Because adequate models varying in CNO are not presently available, we view these spreads as upper limits and present them with statistical rather than observational uncertainties. Evidence supports previous studies suggesting that helium content increases with the mass of the cluster, and we also find that the proportion of the first population of stars increases with mass as well. Our results are examined in the context of proposed g...

  13. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Full Text Available Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods are ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  14. Determining the causes behind the collapse of a small pelagic fishery using Bayesian population modeling.

    Science.gov (United States)

    Taboadai, Fernando G; Anadón, Ricardo

    2016-04-01

    Small pelagic fish species present complex dynamics that challenge population biologists and prevent effective management. Huge fluctuations in abundance have traditionally been associated with external environmental forcing on recruitment, exempting other processes from contributing to fisheries collapse. On the other hand, theory predicts that density dependence and overexploitation can increase the likelihood of population oscillations. Here, we combined nonlinear population modeling with Bayesian analysis to examine the importance of different regulatory mechanisms on the collapse of European anchovy (Engraulis encrasicolus) in the Bay of Biscay. The approach relied on detailed population data and in a careful characterization of changes in the environment experienced by anchovy early stages based mainly on satellite remote sensing. Alternative hypotheses about external forcing on recruitment determined prediction skill and provided alternative interpretations of the causes behind the collapse. Density dependence was weak and unable to generate huge oscillations. Instead, models considering changes in phytoplankton phenology or in larval drift presented the best prediction skill. Nevertheless, an extensive surrogate analysis showed that environmental fluctuations alone barely explain anchovy collapse without considering the impact of fishing. Our results highlight the effectiveness of a Bayesian approach to analyze the dynamics and collapse of managed populations. PMID:27411258

  15. Bayesian Inference on the Effect of Density Dependence and Weather on a Guanaco Population from Chile

    Science.gov (United States)

    Zubillaga, María; Skewes, Oscar; Soto, Nicolás; Rabinovich, Jorge E.; Colchero, Fernando

    2014-01-01

    Understanding the mechanisms that drive population dynamics is fundamental for management of wild populations. The guanaco (Lama guanicoe) is one of two wild camelid species in South America. We evaluated the effects of density dependence and weather variables on population regulation based on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanaco/km2, with an apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting that it might have reached its carrying capacity. We used a Bayesian state-space framework and model selection to determine the effect of density and environmental variables on guanaco population dynamics. Our results show that the population is under density dependent regulation and that it is currently fluctuating around an average carrying capacity of 45,000 guanacos. We also found a significant positive effect of previous winter temperature while sheep density has a strong negative effect on the guanaco population growth. We conclude that there are significant density dependent processes and that climate as well as competition with domestic species have important effects determining the population size of guanacos, with important implications for management and conservation. PMID:25514510
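
    Below is a compact sketch of the kind of density-dependent population model with environmental covariates described above, fit with random-walk Metropolis. For brevity it uses process noise only (the study uses a full Bayesian state-space formulation with model selection), a fixed noise variance, and invented toy data; all parameter names and bounds are illustrative assumptions.

```python
import numpy as np

def log_post(params, counts, sheep, temp):
    """Log-posterior for a Ricker-type model with covariates (sketch).

    N_{t+1} = N_t * exp(r * (1 - N_t / K) + a*temp_t + b*sheep_t + eps_t)
    Flat priors inside loose bounds; process-noise variance fixed for brevity.
    """
    r, K, a, b = params
    if not (0 < r < 2 and 1000 < K < 100000):
        return -np.inf
    n = np.log(counts)
    pred = n[:-1] + r * (1 - counts[:-1] / K) + a * temp[:-1] + b * sheep[:-1]
    resid = n[1:] - pred
    return -0.5 * np.sum(resid**2 / 0.05)   # Gaussian process noise, var = 0.05

def metropolis(counts, sheep, temp, n_iter=20000, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.array([0.3, 40000.0, 0.0, 0.0])          # initial (r, K, a, b)
    step = np.array([0.05, 2000.0, 0.02, 0.02])
    lp = log_post(x, counts, sheep, temp)
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.normal(size=4)
        lpp = log_post(prop, counts, sheep, temp)
        if np.log(rng.random()) < lpp - lp:          # accept/reject
            x, lp = prop, lpp
        chain.append(x.copy())
    return np.array(chain)

# toy data standing in for the 36-year survey (invented numbers)
rng = np.random.default_rng(1)
counts = 8000 * np.exp(np.cumsum(rng.normal(0.05, 0.1, 36)))
temp, sheep = rng.normal(size=36), rng.normal(size=36)
chain = metropolis(counts, sheep, temp)
print(chain[10000:].mean(axis=0))    # posterior means after burn-in
```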

  16. Bayesian inference on the effect of density dependence and weather on a guanaco population from Chile.

    Directory of Open Access Journals (Sweden)

    María Zubillaga

Full Text Available Understanding the mechanisms that drive population dynamics is fundamental for management of wild populations. The guanaco (Lama guanicoe) is one of two wild camelid species in South America. We evaluated the effects of density dependence and weather variables on population regulation based on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanaco/km2, with an apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting that it might have reached its carrying capacity. We used a Bayesian state-space framework and model selection to determine the effect of density and environmental variables on guanaco population dynamics. Our results show that the population is under density dependent regulation and that it is currently fluctuating around an average carrying capacity of 45,000 guanacos. We also found a significant positive effect of previous winter temperature while sheep density has a strong negative effect on the guanaco population growth. We conclude that there are significant density dependent processes and that climate as well as competition with domestic species have important effects determining the population size of guanacos, with important implications for management and conservation.

  17. A hierarchical Bayesian approach for reconstructing the Initial Mass Function of Single Stellar Populations

    CERN Document Server

    Dries, M; Koopmans, L V E

    2016-01-01

    Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov Chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age, and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities, and IMFs. When systematic unc...

  18. Disambiguating the role of noise correlations when decoding neural populations together

    OpenAIRE

    Eyherabide, Hugo Gabriel

    2016-01-01

    Objective: Integrating information from populations of correlated neurons can become too complex even for the human brain. Ignoring correlations may simplify the process but also cause an information loss. This loss has been quantified using many methods, one of which has always been deemed exact due to its rigorous communication-theoretical foundations. However, we have recently shown that this method can overestimate the loss in real applications. Approach: To solve this problem, we disting...

  19. Connecting multiple spatial scales to decode the population activity of grid cells

    OpenAIRE

Stemmler, Martin; Mathis, Alexander; Herz, Andreas V. M.

    2015-01-01

    Mammalian grid cells fire when an animal crosses the points of an imaginary hexagonal grid tessellating the environment. We show how animals can navigate by reading out a simple population vector of grid cell activity across multiple spatial scales, even though neural activity is intrinsically stochastic. This theory of dead reckoning explains why grid cells are organized into discrete modules within which all cells have the same lattice scale and orientation. The lattice scale changes from m...

  20. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    OpenAIRE

    Mike Lonergan; Dave Thompson; Len Thomas; Callan Duck

    2011-01-01

    1. For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. 2. We present an approximate Bayesian method for fitting pup trajectories, estimat...

  1. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    Science.gov (United States)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  2. Connecting multiple spatial scales to decode the population activity of grid cells.

    Science.gov (United States)

    Stemmler, Martin; Mathis, Alexander; Herz, Andreas V M

    2015-12-01

    Mammalian grid cells fire when an animal crosses the points of an imaginary hexagonal grid tessellating the environment. We show how animals can navigate by reading out a simple population vector of grid cell activity across multiple spatial scales, even though neural activity is intrinsically stochastic. This theory of dead reckoning explains why grid cells are organized into discrete modules within which all cells have the same lattice scale and orientation. The lattice scale changes from module to module and should form a geometric progression with a scale ratio of around 3/2 to minimize the risk of making large-scale errors in spatial localization. Such errors should also occur if intermediate-scale modules are silenced, whereas knocking out the module at the smallest scale will only affect spatial precision. For goal-directed navigation, the allocentric grid cell representation can be readily transformed into the egocentric goal coordinates needed for planning movements. The goal location is set by nonlinear gain fields that act on goal vector cells. This theory predicts neural and behavioral correlates of grid cell readout that transcend the known link between grid cells of the medial entorhinal cortex and place cells of the hippocampus. PMID:26824061
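
    The multi-scale readout can be illustrated in one dimension: each module constrains position modulo its lattice scale, and summing (von Mises) log-likelihoods across modules with geometrically spaced scales resolves the ambiguity. A toy sketch under those assumptions, with invented scales and concentration parameter:

```python
import numpy as np

def decode_from_modules(phases, scales, x_max=10.0, kappa=20.0, n_grid=10000):
    """Combine phase estimates from grid modules at multiple spatial scales.

    Each module reports position modulo its lattice scale (a phase in [0, 1)).
    Summing von Mises log-likelihoods across modules yields an unambiguous
    position estimate over [0, x_max). Illustrative 1-D sketch.
    """
    x = np.linspace(0.0, x_max, n_grid, endpoint=False)
    ll = np.zeros_like(x)
    for phase, scale in zip(phases, scales):
        ll += kappa * np.cos(2 * np.pi * (x / scale - phase))
    return x[np.argmax(ll)]

# module scales form a geometric progression with ratio ~3/2, as in the theory
scales = [0.5 * 1.5**k for k in range(5)]          # ~0.5 m ... ~2.5 m
true_x = 3.7
phases = [(true_x / s) % 1.0 for s in scales]      # noiseless phases
print(decode_from_modules(phases, scales))          # ~3.7
```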

  3. Full Bayesian hierarchical light curve modeling of core-collapse supernova populations

    Science.gov (United States)

    Sanders, Nathan; Betancourt, Michael; Soderberg, Alicia Margarita

    2016-06-01

    While wide field surveys have yielded remarkable quantities of photometry of transient objects, including supernovae, light curves reconstructed from this data suffer from several characteristic problems. Because most transients are discovered near the detection limit, signal to noise is generally poor; because coverage is limited to the observing season, light curves are often incomplete; and because temporal sampling can be uneven across filters, these problems can be exacerbated at any one wavelength. While the prevailing approach of modeling individual light curves independently is successful at recovering inferences for the objects with the highest quality observations, it typically neglects a substantial portion of the data and can introduce systematic biases. Joint modeling of the light curves of transient populations enables direct inference on population-level characteristics as well as superior measurements for individual objects. We present a new hierarchical Bayesian model for supernova light curves, where information inferred from observations of every individual light curve in a sample is partially pooled across objects to constrain population-level hyperparameters. Using an efficient Hamiltonian Monte Carlo sampling technique, the model posterior can be explored to enable marginalization over weakly-identified hyperparameters through full Bayesian inference. We demonstrate our technique on the Pan-STARRS1 (PS1) Type IIP supernova light curve sample published by Sanders et al. (2015), consisting of nearly 20,000 individual photometric observations of more than 70 supernovae in five photometric filters. We discuss the Stan probabilistic programming language used to implement the model, computational challenges, and prospects for future work including generalization to multiple supernova types. We also discuss scientific results from the PS1 dataset including a new relation between the peak magnitude and decline rate of SNe IIP, a new perspective on the
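
    To illustrate the partial-pooling principle described above in miniature (not the authors' Stan light-curve model), here is a Gibbs sampler for a normal-normal hierarchical model in which object-level parameters share a population-level mean and spread; the data are invented stand-ins for per-object estimates with measurement errors.

```python
import numpy as np

def gibbs_partial_pooling(y, sigma, n_iter=5000, rng=None):
    """Gibbs sampler for a normal-normal hierarchical model (sketch).

    Observed y_j ~ N(theta_j, sigma_j^2); object-level theta_j ~ N(mu, tau^2).
    Information is pooled across objects through the shared (mu, tau).
    """
    rng = rng or np.random.default_rng(0)
    y, sigma = np.asarray(y, float), np.asarray(sigma, float)
    J = len(y)
    mu, tau2 = y.mean(), y.var()
    draws = []
    for _ in range(n_iter):
        # theta_j | mu, tau2, y  (conjugate normal update)
        prec = 1 / sigma**2 + 1 / tau2
        theta = rng.normal((y / sigma**2 + mu / tau2) / prec, np.sqrt(1 / prec))
        # mu | theta, tau2  (flat prior on mu)
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
        # tau2 | theta, mu  with an InvGamma(1, 1) prior
        tau2 = 1 / rng.gamma(1 + J / 2, 1 / (1 + 0.5 * np.sum((theta - mu)**2)))
        draws.append((mu, np.sqrt(tau2)))
    return np.array(draws)

# toy peak magnitudes and measurement errors for 8 supernovae (invented)
d = gibbs_partial_pooling([17.2, 17.8, 16.9, 18.1, 17.5, 17.0, 17.7, 17.3],
                          sigma=[0.3] * 8)
print(d[1000:].mean(axis=0))   # posterior mean of (mu, tau) after burn-in
```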

  4. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    International Nuclear Information System (INIS)

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
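
    A Gaussian caricature of the disaggregation step: estimate the population variance by subtracting the mean squared measurement uncertainty from the observed variance of the results, then shrink each individual's result toward the population mean. The paper constructs non-negative PDFs instead, so this is only a sketch; the data below are invented. Note the posterior means average to the population mean, mirroring the unbiasedness assumption above.

```python
import numpy as np

def disaggregate(results, meas_sd):
    """Separate population variability from measurement uncertainty
    (Gaussian sketch; the paper builds non-negative PDFs instead).

    results : measured values y_i = mu_i + e_i, with e_i ~ N(0, meas_sd_i^2)
    Returns the estimated population mean/sd and, for each individual,
    the posterior mean of its measurand under a Gaussian population prior.
    """
    y = np.asarray(results, float)
    u2 = np.asarray(meas_sd, float) ** 2
    pop_mean = y.mean()
    pop_var = max(y.var(ddof=1) - u2.mean(), 0.0)   # moment-based disaggregation
    # Gaussian-Gaussian shrinkage: posterior mean per individual
    shrink = pop_var / (pop_var + u2)
    post_means = pop_mean + shrink * (y - pop_mean)
    return pop_mean, np.sqrt(pop_var), post_means

# invented net results; some are negative, as in low-level bioassay data
pm, ps, post = disaggregate([0.2, -0.1, 0.5, 0.05, -0.3, 0.8],
                            meas_sd=[0.3] * 6)
print(pm, ps, post)
```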

  5. Bayesian population analysis of a washin-washout physiologically based pharmacokinetic model for acetone

    International Nuclear Information System (INIS)

    The aim of this study was to derive improved estimates of population variability and uncertainty of physiologically based pharmacokinetic (PBPK) model parameters, especially of those related to the washin-washout behavior of polar volatile substances. This was done by optimizing a previously published washin-washout PBPK model for acetone in a Bayesian framework using Markov chain Monte Carlo simulation. The sensitivity of the model parameters was investigated by creating four different prior sets, where the uncertainty surrounding the population variability of the physiological model parameters was given values corresponding to coefficients of variation of 1%, 25%, 50%, and 100%, respectively. The PBPK model was calibrated to toxicokinetic data from 2 previous studies where 18 volunteers were exposed to 250-550 ppm of acetone at various levels of workload. The updated PBPK model provided a good description of the concentrations in arterial, venous, and exhaled air. The precision of most of the model parameter estimates was improved. New information was particularly gained on the population distribution of the parameters governing the washin-washout effect. The results presented herein provide a good starting point to estimate the target dose of acetone in the working and general populations for risk assessment purposes.

6. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    Directory of Open Access Journals (Sweden)

    Mike Lonergan

    2011-01-01

    Full Text Available For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size and investigating alternative biological models. The method is equivalent to fitting a density-dependent Leslie matrix model, within a Bayesian framework, but with the forms of the density-dependent effects as outputs rather than assumptions. It requires fewer assumptions than the state space models currently used and produces similar estimates. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.

  7. cosmoabc: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation

    CERN Document Server

    Ishida, E E O; Penna-Lima, M; Cisewski, J; de Souza, R S; Trindade, A M M; Cameron, E

    2015-01-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present cosmoabc, a Python ABC sampler featuring a Population Monte Carlo (PMC) variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled cosmoabc with the numcosmo library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. cosmoabc is published under the GPLv3 license on PyPI and GitHub and documentation is availabl...
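
    The PMC variation iterates rejection ABC under a shrinking tolerance, proposing each new particle from the previous weighted population and reweighting by prior over proposal density. A generic, self-contained sketch on a toy Gaussian problem (this is not cosmoabc's API; the model, distance, and tolerance schedule are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(3.0, 1.0, 100)                  # "observed catalogue"

def simulate(theta):                             # forward simulator
    return rng.normal(theta, 1.0, 100)

def distance(a, b):                              # summary-statistic distance
    return abs(a.mean() - b.mean())

n_part = 200
thetas = rng.uniform(-10, 10, n_part)            # draws from a uniform prior
weights = np.ones(n_part) / n_part
for eps in (1.0, 0.5, 0.2, 0.1):                 # shrinking tolerance schedule
    tau = np.sqrt(2.0 * np.cov(thetas, aweights=weights))  # adaptive kernel
    accepted = []
    while len(accepted) < n_part:
        theta = rng.choice(thetas, p=weights) + rng.normal(0.0, tau)
        if -10 <= theta <= 10 and distance(simulate(theta), obs) < eps:
            accepted.append(theta)
    accepted = np.array(accepted)
    # importance weights: uniform prior / kernel-mixture proposal density
    kern = np.exp(-0.5 * ((accepted[:, None] - thetas[None, :]) / tau) ** 2)
    w = 1.0 / (kern @ weights)
    thetas, weights = accepted, w / w.sum()

print(np.average(thetas, weights=weights))       # posterior mean, near 3.0
```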

  8. Likelihood-free inference of population structure and local adaptation in a Bayesian hierarchical model.

    Science.gov (United States)

    Bazin, Eric; Dawson, Kevin J; Beaumont, Mark A

    2010-06-01

    We address the problem of finding evidence of natural selection from genetic data, accounting for the confounding effects of demographic history. In the absence of natural selection, gene genealogies should all be sampled from the same underlying distribution, often approximated by a coalescent model. Selection at a particular locus will lead to a modified genealogy, and this motivates a number of recent approaches for detecting the effects of natural selection in the genome as "outliers" under some models. The demographic history of a population affects the sampling distribution of genealogies, and therefore the observed genotypes and the classification of outliers. Since we cannot see genealogies directly, we have to infer them from the observed data under some model of mutation and demography. Thus the accuracy of an outlier-based approach depends to a greater or a lesser extent on the uncertainty about the demographic and mutational model. A natural modeling framework for this type of problem is provided by Bayesian hierarchical models, in which parameters, such as mutation rates and selection coefficients, are allowed to vary across loci. It has proved quite difficult computationally to implement fully probabilistic genealogical models with complex demographies, and this has motivated the development of approximations such as approximate Bayesian computation (ABC). In ABC the data are compressed into summary statistics, and computation of the likelihood function is replaced by simulation of data under the model. In a hierarchical setting one may be interested both in hyperparameters and parameters, and there may be very many of the latter--for example, in a genetic model, these may be parameters describing each of many loci or populations. This poses a problem for ABC in that one then requires summary statistics for each locus, which, if used naively, leads to a consequent difficulty in conditional density estimation. We develop a general method for applying

  9. Enhanced Bayesian modelling in BAPS software for learning genetic structures of populations

    Directory of Open Access Journals (Sweden)

    Sirén Jukka

    2008-12-01

    Background: During the most recent decade many Bayesian statistical models and software for answering questions related to the genetic structure underlying population samples have appeared in the scientific literature. Most of these methods utilize molecular markers for the inferences, while some are also capable of handling DNA sequence data. In a number of earlier works, we have introduced an array of statistical methods for population genetic inference that are implemented in the software BAPS. However, the complexity of biological problems related to genetic structure analysis keeps increasing, such that in many cases the current methods may provide either inappropriate or insufficient solutions. Results: We discuss the necessity of enhancing the statistical approaches to face the challenges posed by the ever-increasing amounts of molecular data generated by scientists over a wide range of research areas, and introduce an array of new statistical tools implemented in the most recent version of BAPS. With these methods it is possible, e.g., to fit genetic mixture models using user-specified numbers of clusters and to estimate levels of admixture under a genetic linkage model. Also, alleles representing a different ancestry compared to the average observed genomic positions can be tracked for the sampled individuals, and a priori specified hypotheses about genetic population structure can be directly compared using Bayes' theorem. In general, we have further improved the computational characteristics of the algorithms behind the methods implemented in BAPS, facilitating the analyses of large and complex datasets. In particular, analysis of a single dataset can now be spread over multiple computers using a script interface to the software. Conclusion: The Bayesian modelling methods introduced in this article represent an array of enhanced tools for learning the genetic structure of populations. Their implementations in the BAPS software are

  10. Estimating demographic parameters from large-scale population genomic data using Approximate Bayesian Computation

    Directory of Open Access Journals (Sweden)

    Li Sen

    2012-03-01

    Background: The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which have become available in the last few years. Results: We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as population sizes at present and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype- and LD-based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond some hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions: We conclude that the ABC approach can accommodate realistic genome-wide population genetic data, which may be difficult to analyze with full likelihood approaches, and that the ABC can provide accurate and precise inference of demographic parameters from

  11. A Bayesian integrated population dynamics model to analyze data for protected species

    Directory of Open Access Journals (Sweden)

    Hoyle, S. D.

    2004-06-01

    Managing wildlife-human interactions demands reliable information about the likely consequences of management actions. This requirement is a general one, whatever the taxonomic group. We describe a method for estimating population dynamics and performing decision analysis that is generally applicable, extremely flexible, uses data efficiently, and gives answers in a useful format. Our case study involves bycatch of a protected species, the Northeastern Offshore Spotted Dolphin (Stenella attenuata), in the tuna fishery of the eastern Pacific Ocean. Informed decision-making requires quantitative analyses taking all relevant information into account, assessing how bycatch affects these species and how regulations affect the fisheries, and describing the uncertainty in analyses. Bayesian analysis is an ideal framework for delivering information on uncertainty to the decision-making process. It also allows information from other populations or species or expert judgment to be included in the analysis, if appropriate. Integrated analysis attempts to include all relevant data for a population in one analysis by combining analyses, sharing parameters, and simultaneously estimating all parameters, using a combined objective function. It ensures that model assumptions and parameter estimates are consistent throughout the analysis, that uncertainty is propagated through the analysis, and that the correlations among parameters are preserved. Perhaps the most important aspect of integrated analysis is the way it both enables and forces consideration of the system as a whole, so that inconsistencies can be observed and resolved.
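
    The combined objective function amounts to summing the log-likelihoods of the separate datasets while they share parameters. A hedged sketch of that idea (the trajectory model, error distributions, and all names below are illustrative, not the paper's):

```python
import numpy as np
from scipy.stats import norm, poisson

def joint_neg_loglik(params, abundance_obs, bycatch_obs, effort):
    """One objective function over two datasets sharing (n0, r, q)."""
    n0, r, q = params
    n = n0 * np.exp(r * np.arange(len(abundance_obs)))  # toy trajectory
    ll = norm.logpdf(np.log(abundance_obs), np.log(n), 0.2).sum()  # surveys
    ll += poisson.logpmf(bycatch_obs, q * effort * n).sum()        # bycatch
    return -ll

abundance = np.array([1000.0, 950.0, 910.0, 900.0])   # made-up observations
bycatch = np.array([30, 25, 24, 22])
effort = np.array([1.0, 0.9, 0.9, 0.8])
print(joint_neg_loglik((1000.0, -0.03, 0.03), abundance, bycatch, effort))
```

    Minimizing this jointly, or embedding it in an MCMC sampler, is what propagates uncertainty and parameter correlations consistently across the data sources.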

  12. Estimating the Sizes of Populations At Risk of HIV Infection From Multiple Data Sources Using a Bayesian Hierarchical Model

    OpenAIRE

    Bao, Le; Raftery, Adrian E.; Reddy, Amala

    2015-01-01

    In most countries in the world outside of sub-Saharan Africa, HIV is largely concentrated in sub-populations whose behavior puts them at higher risk of contracting and transmitting HIV, such as people who inject drugs, sex workers and men who have sex with men. Estimating the size of these sub-populations is important for assessing overall HIV prevalence and designing effective interventions. We present a Bayesian hierarchical model for estimating the sizes of local and national HIV key affec...

  13. Decoders for MST radars

    Science.gov (United States)

    Woodman, R. F.

    1983-01-01

    Decoding techniques and equipment used by MST radars are described and some recommendations for new systems are presented. Decoding can be done either in software, on special-purpose processors (array processors, etc.) or general-purpose computers, or in specially designed digital decoders. Both software and hardware decoders are discussed, and the special case of decoding for bistatic radars is examined.

  14. Marginal Bayesian nonparametric model for time to disease arrival of threatened amphibian populations.

    Science.gov (United States)

    Zhou, Haiming; Hanson, Timothy; Knapp, Roland

    2015-12-01

    The global emergence of Batrachochytrium dendrobatidis (Bd) has caused the extinction of hundreds of amphibian species worldwide. It has become increasingly important to be able to precisely predict time to Bd arrival in a population. The data analyzed herein present a unique challenge in terms of modeling because there is a strong spatial component to Bd arrival time and the traditional proportional hazards assumption is grossly violated. To address these concerns, we develop a novel marginal Bayesian nonparametric survival model for spatially correlated right-censored data. This class of models assumes that the logarithm of survival times marginally follow a mixture of normal densities with a linear-dependent Dirichlet process prior as the random mixing measure, and their joint distribution is induced by a Gaussian copula model with a spatial correlation structure. To invert high-dimensional spatial correlation matrices, we adopt a full-scale approximation that can capture both large- and small-scale spatial dependence. An efficient Markov chain Monte Carlo algorithm with delayed rejection is proposed for posterior computation, and an R package spBayesSurv is provided to fit the model. This approach is first evaluated through simulations, then applied to threatened frog populations in Sequoia-Kings Canyon National Park. PMID:26148536

  15. Bayesian coalescent inference reveals high evolutionary rates and diversification of Zika virus populations.

    Science.gov (United States)

    Fajardo, Alvaro; Soñora, Martín; Moreno, Pilar; Moratorio, Gonzalo; Cristina, Juan

    2016-10-01

    Zika virus (ZIKV) is a member of the family Flaviviridae. In 2015, ZIKV triggered an epidemic in Brazil and spread across Latin America. By May 2016, the World Health Organization had warned of the spread of ZIKV beyond this region. Detailed studies on the mode of evolution of ZIKV strains are extremely important for our understanding of the emergence and spread of ZIKV populations. In order to gain insight into these matters, a Bayesian coalescent Markov chain Monte Carlo analysis of complete genome sequences of recently isolated ZIKV strains was performed. The results of these studies revealed a mean rate of evolution of 1.20 × 10^(-3) nucleotide substitutions per site per year (s/s/y) for the ZIKV strains enrolled in this study. Several variants isolated in China are grouped together with all strains isolated in Latin America. Another genetic group composed exclusively of Chinese strains was also observed, suggesting the co-circulation of different genetic lineages in China. These findings indicate a high level of diversification of ZIKV populations. Strains isolated from microcephaly cases do not share amino acid substitutions, suggesting that factors other than viral genetic differences may play a role in the proposed pathogenesis caused by ZIKV infection. J. Med. Virol. 88:1672-1676, 2016. © 2016 Wiley Periodicals, Inc. PMID:27278855

  16. Bayesian inference for generalized linear mixed model based on the multivariate t distribution in population pharmacokinetic study.

    Directory of Open Access Journals (Sweden)

    Fang-Rong Yan

    This article provides a fully Bayesian approach for modeling of single-dose and complete pharmacokinetic data in a population pharmacokinetic (PK) model. To overcome the impact of outliers and the difficulty of computation, a generalized linear model is chosen under the hypothesis that the errors follow a multivariate Student t distribution, which is heavy-tailed. The aim of this study is to investigate and implement the performance of the multivariate t distribution in analyzing population pharmacokinetic data. Bayesian predictive inference and Metropolis-Hastings sampling schemes are used to handle the intractable posterior integration. The precision and accuracy of the proposed model are illustrated with simulated data and a real example based on theophylline data.
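
    A minimal sketch of the core robustness idea, assuming a one-compartment decay curve and a univariate Student t error model in place of the article's full multivariate population PK specification: a Metropolis-Hastings sampler whose heavy-tailed likelihood downweights an outlying concentration.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(2)
times = np.linspace(0.5, 12, 10)
conc = 10 * np.exp(-0.3 * times) + 0.3 * rng.standard_t(4, 10)
conc[3] += 5.0                                # one outlying concentration

def log_post(theta):
    c0, ke = theta
    if c0 <= 0 or ke <= 0:
        return -np.inf                        # flat priors on the positive axis
    resid = conc - c0 * np.exp(-ke * times)
    return student_t.logpdf(resid, df=4, scale=0.5).sum()  # heavy tails

theta = np.array([5.0, 0.1])
lp, samples = log_post(theta), []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.3, 0.02])          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:              # Metropolis-Hastings
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[1000:], axis=0))                   # roughly [10, 0.3]
```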

  17. Estimating Temporal Trend in the Presence of Spatial Complexity: A Bayesian Hierarchical Model for a Wetland Plant Population Undergoing Restoration

    OpenAIRE

    Rodhouse, Thomas J.; Kathryn M Irvine; Vierling, Kerri T.; Lee A Vierling

    2011-01-01

    Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population f...

  18. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. II. NGC 5024, NGC 5272, and NGC 6352

    Science.gov (United States)

    Wagner-Kaiser, R.; Stenning, D. C.; Robinson, E.; von Hippel, T.; Sarajedini, A.; van Dyk, D. A.; Stein, N.; Jefferys, W. H.

    2016-07-01

    We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival Advanced Camera for Surveys Treasury observations of Galactic Globular Clusters to find and characterize two stellar populations in NGC 5024 (M53), NGC 5272 (M3), and NGC 6352. For these three clusters, both single and double-population analyses are used to determine a best fit isochrone(s). We employ a sophisticated Bayesian analysis technique to simultaneously fit the cluster parameters (age, distance, absorption, and metallicity) that characterize each cluster. For the two-population analysis, unique population level helium values are also fit to each distinct population of the cluster and the relative proportions of the populations are determined. We find differences in helium ranging from ˜0.05 to 0.11 for these three clusters. Model grids with solar α-element abundances ([α/Fe] = 0.0) and enhanced α-elements ([α/Fe] = 0.4) are adopted.

  19. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters II: NGC 5024, NGC 5272, and NGC 6352

    CERN Document Server

    Wagner-Kaiser, R; Robinson, E; von Hippel, T; Sarajedini, A; van Dyk, D A; Stein, N; Jefferys, W H

    2016-01-01

    We use Cycle 21 Hubble Space Telescope (HST) observations and HST archival ACS Treasury observations of Galactic Globular Clusters to find and characterize two stellar populations in NGC 5024 (M53), NGC 5272 (M3), and NGC 6352. For these three clusters, both single and double-population analyses are used to determine a best fit isochrone(s). We employ a sophisticated Bayesian analysis technique to simultaneously fit the cluster parameters (age, distance, absorption, and metallicity) that characterize each cluster. For the two-population analysis, unique population level helium values are also fit to each distinct population of the cluster and the relative proportions of the populations are determined. We find differences in helium ranging from $\\sim$0.05 to 0.11 for these three clusters. Model grids with solar $\\alpha$-element abundances ([$\\alpha$/Fe] =0.0) and enhanced $\\alpha$-elements ([$\\alpha$/Fe]=0.4) are adopted.

  1. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Background: It has been suggested that plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets on their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results: Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the stage-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth rate is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant of population growth. Conclusion: This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
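
    The growth-rate and elasticity computations described above reduce to an eigenanalysis of the fitted matrix. A sketch for a hypothetical three-stage (juvenile/subadult/adult) Lefkovitch matrix, with all entries invented for illustration:

```python
import numpy as np

A = np.array([[0.0, 0.0, 0.8],    # adult fecundity
              [0.4, 0.3, 0.0],    # juvenile survival / retention
              [0.0, 0.5, 0.8]])   # maturation and adult survival

eigvals, right = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
lam = eigvals.real[k]                       # dominant eigenvalue = growth rate
w = np.abs(right[:, k].real)                # stable stage distribution

eigvalsL, left = np.linalg.eig(A.T)
v = np.abs(left[:, int(np.argmax(eigvalsL.real))].real)  # reproductive values

sens = np.outer(v, w) / (v @ w)             # sensitivities d(lambda)/d(a_ij)
elas = (A / lam) * sens                     # elasticities; they sum to 1
print(f"lambda = {lam:.3f}")                # close to 1 for this toy matrix
print(np.round(elas, 3))                    # adult-survival entry dominates
```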

  2. Bayesian networks and agent-based modeling approach for urban land-use and population density change: a BNAS model

    Science.gov (United States)

    Kocabas, Verda; Dragicevic, Suzana

    2013-10-01

    Land-use change models grounded in complexity theory, such as agent-based models (ABMs), are increasingly being used to examine evolving urban systems. The objective of this study is to develop a spatial model that simulates land-use change under the influence of human land-use choice behavior. This is achieved by integrating the key physical and social drivers of land-use change using Bayesian networks (BNs) coupled with agent-based modeling. The BNAS model (integrated Bayesian network-based agent system) presented in this study uses geographic information systems, ABMs, BNs, and influence diagram principles to model population change on an irregular spatial structure. The model is parameterized with historical data and then used to simulate 20 years of future population and land-use change for the City of Surrey, British Columbia, Canada. The simulation results identify feasible new urban areas for development around the main transportation corridors. The obtained new development areas and the projected population trajectories, together with "what-if" scenario capabilities, can provide insights for urban planners toward better and more informed land-use policy and decision-making processes.

  3. Iterative List Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree of...

  4. Bayesian Modeling of Prion Disease Dynamics in Mule Deer Using Population Monitoring and Capture-Recapture Data

    Science.gov (United States)

    Geremia, Chris; Miller, Michael W.; Hoeting, Jennifer A.; Antolin, Michael F.; Hobbs, N. Thompson

    2015-01-01

    Epidemics of chronic wasting disease (CWD) of North American Cervidae have potential to harm ecosystems and economies. We studied a migratory population of mule deer (Odocoileus hemionus) affected by CWD for at least three decades using a Bayesian framework to integrate matrix population and disease models with long-term monitoring data and detailed process-level studies. We hypothesized CWD prevalence would be stable or increase between two observation periods during the late 1990s and after 2010, with higher CWD prevalence making deer population decline more likely. The weight of evidence suggested a reduction in the CWD outbreak over time, perhaps in response to intervening harvest-mediated population reductions. Disease effects on deer population growth under current conditions were subtle with a 72% chance that CWD depressed population growth. With CWD, we forecasted a growth rate near one and largely stable deer population. Disease effects appear to be moderated by timing of infection, prolonged disease course, and locally variable infection. Long-term outcomes will depend heavily on whether current conditions hold and high prevalence remains a localized phenomenon. PMID:26509806

  5. Understanding the recent colonization history of a plant pathogenic fungus using population genetic tools and Approximate Bayesian Computation.

    Science.gov (United States)

    Barrès, B; Carlier, J; Seguin, M; Fenouillet, C; Cilas, C; Ravigné, V

    2012-11-01

    Understanding the processes by which new diseases are introduced in previously healthy areas is of major interest in elaborating prevention and management policies, as well as in understanding the dynamics of pathogen diversity at large spatial scale. In this study, we aimed to decipher the dispersal processes that have led to the emergence of the plant pathogenic fungus Microcyclus ulei, which is responsible for the South American Leaf Blight (SALB). This fungus has devastated rubber tree plantations across Latin America since the beginning of the twentieth century. As only imprecise historical information is available, the study of population evolutionary history based on population genetics appeared most appropriate. The distribution of genetic diversity in a continental sampling of four countries (Brazil, Ecuador, Guatemala and French Guiana) was studied using a set of 16 microsatellite markers developed specifically for this purpose. A very strong genetic structure was found (FST = 0.70), demonstrating that there has been no regular gene flow between Latin American M. ulei populations. Strong bottlenecks probably occurred at the foundation of each population. The most likely scenario of colonization identified by the Approximate Bayesian Computation (ABC) method implemented in DIYABC suggested two independent sources from the Amazonian endemic area. The Brazilian, Ecuadorian and Guatemalan populations might stem from serial introductions through human-mediated movement of infected plant material from an unsampled source population, whereas the French Guiana population seems to have arisen from an independent colonization event through spore dispersal. PMID:22828899

  6. Across population genomic prediction scenarios in which Bayesian variable selection outperforms GBLUP

    NARCIS (Netherlands)

    Berg, van den S.; Calus, M.P.L.; Meuwissen, T.H.E.; Wientjes, Y.C.J.

    2015-01-01

    Background: The use of information across populations is an attractive approach to increase the accuracy of genomic prediction for numerically small populations. However, accuracies of across population genomic prediction, in which reference and selection individuals are from different population

  7. Bayesian Inference on the Effect of Density Dependence and Weather on a Guanaco Population from Chile

    OpenAIRE

    Zubillaga, María; Skewes, Oscar; Soto, Nicolás; Jorge E Rabinovich; Colchero, Fernando

    2014-01-01

    Understanding the mechanisms that drive population dynamics is fundamental for management of wild populations. The guanaco (Lama guanicoe) is one of two wild camelid species in South America. We evaluated the effects of density dependence and weather variables on population regulation based on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanaco/km2, with an apparent monotonic growth during the firs...

  8. Estimating Population Parameters using the Structured Serial Coalescent with Bayesian MCMC Inference when some Demes are Hidden

    Directory of Open Access Journals (Sweden)

    Allen Rodrigo

    2006-01-01

    Using the structured serial coalescent with Bayesian MCMC and serial samples, we estimate population size when some demes are not sampled or are hidden, i.e., ghost demes. It is found that even in the presence of a ghost deme, accurate inference is possible if the parameters are estimated under the true model. However, with an incorrect model, estimates are biased and can be positively misleading. We extend these results to the case where there are sequences from the ghost at the last time sample. This case can arise in HIV patients, when some tissue samples and viral sequences only become available after death. When some sequences from the ghost deme are available at the last sampling time, estimation bias is reduced and accurate estimation of parameters associated with the ghost deme is possible despite sampling bias. Migration rate estimates for this case are also shown to be accurate when migration rates are low.

  9. Estimation of hominoid ancestral population sizes under bayesian coalescent models incorporating mutation rate variation and sequencing errors.

    Science.gov (United States)

    Burgess, Ralph; Yang, Ziheng

    2008-09-01

    Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species. PMID:18603620

  10. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
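
    The first of those summary statistics, the folded allele frequency spectrum, is straightforward to compute from unpolarized genotype data. A toy sketch (random genotypes; PopSizeABC's own implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
geno = rng.integers(0, 3, size=(25, 10_000))   # 25 diploids x 10k SNPs (0/1/2)

n_chrom = 2 * geno.shape[0]                    # haploid sample size
counts = geno.sum(axis=0)                      # allele counts per SNP
minor = np.minimum(counts, n_chrom - counts)   # fold: data are unpolarized
sfs = np.bincount(minor, minlength=n_chrom // 2 + 1)[1:]  # drop monomorphic
print(sfs / sfs.sum())                         # normalized folded AFS
```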

  11. Reed-Solomon decoder

    Science.gov (United States)

    Lahmeyer, Charles R. (Inventor)

    1987-01-01

    A Reed-Solomon decoder with dedicated hardware for five sequential algorithms was designed with overall pipelining by memory swapping between input, processing and output memories, and internal pipelining through the five algorithms. The code definition used in decoding is specified by a keyword received with each block of data so that a number of different code formats may be decoded by the same hardware.

  12. Bayesian inference on the effect of density dependence and weather on a guanaco population from Chile

    DEFF Research Database (Denmark)

    Zubillaga, Maria; Skewes, Oscar; Soto, Nicolás;

    2014-01-01

    We evaluated the effects of density dependence and weather variables on population regulation based on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanaco/km², with an apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting that...

  13. Genetic evidence for long-term population decline in a savannah-dwelling primate: inferences from a hierarchical bayesian model.

    Science.gov (United States)

    Storz, Jay F; Beaumont, Mark A; Alberts, Susan C

    2002-11-01

    The purpose of this study was to test for evidence that savannah baboons (Papio cynocephalus) underwent a population expansion in concert with a hypothesized expansion of African human and chimpanzee populations during the late Pleistocene. The rationale is that any type of environmental event sufficient to cause simultaneous population expansions in African humans and chimpanzees would also be expected to affect other codistributed mammals. To test for genetic evidence of population expansion or contraction, we performed a coalescent analysis of multilocus microsatellite data using a hierarchical Bayesian model. Markov chain Monte Carlo (MCMC) simulations were used to estimate the posterior probability density of demographic and genealogical parameters. The model was designed to allow interlocus variation in mutational and demographic parameters, which made it possible to detect aberrant patterns of variation at individual loci that could result from heterogeneity in mutational dynamics or from the effects of selection at linked sites. Results of the MCMC simulations were consistent with zero variance in demographic parameters among loci, but there was evidence for a 10- to 20-fold difference in mutation rate between the most slowly and most rapidly evolving loci. Results of the model provided strong evidence that savannah baboons have undergone a long-term historical decline in population size. The mode of the highest posterior density for the joint distribution of current and ancestral population size indicated a roughly eightfold contraction over the past 1,000 to 250,000 years. These results indicate that savannah baboons apparently did not share a common demographic history with other codistributed primate species. PMID:12411607

  14. Transductive neural decoding for unsorted neuronal spikes of rat hippocampus

    OpenAIRE

    Chen, Zhe; Kloosterman, Fabian; Layton, Stuart; Wilson, Matthew A

    2012-01-01

    Neural decoding is an important approach for extracting information from population codes. We previously proposed a novel transductive neural decoding paradigm and applied it to reconstruct the rat's position during navigation based on unsorted rat hippocampal ensemble spiking activity. Here, we investigate several important technical issues of this new paradigm using a data set from one animal. Several extensions of our decoding method are discussed.

  15. Inferring cetacean population densities from the absolute dynamic topography of the ocean in a hierarchical Bayesian framework.

    Science.gov (United States)

    Pardo, Mario A; Gerrodette, Tim; Beier, Emilio; Gendron, Diane; Forney, Karin A; Chivers, Susan J; Barlow, Jay; Palacios, Daniel M

    2015-01-01

    We inferred the population densities of blue whales (Balaenoptera musculus) and short-beaked common dolphins (Delphinus delphis) in the Northeast Pacific Ocean as functions of the water-column's physical structure by implementing hierarchical models in a Bayesian framework. This approach allowed us to propagate the uncertainty of the field observations into the inference of species-habitat relationships and to generate spatially explicit population density predictions with reduced effects of sampling heterogeneity. Our hypothesis was that the large-scale spatial distributions of these two cetacean species respond primarily to ecological processes resulting from shoaling and outcropping of the pycnocline in regions of wind-forced upwelling and eddy-like circulation. Physically, these processes affect the thermodynamic balance of the water column, decreasing its volume and thus the height of the absolute dynamic topography (ADT). Biologically, they lead to elevated primary productivity and persistent aggregation of low-trophic-level prey. Unlike other remotely sensed variables, ADT provides information about the structure of the entire water column and it is also routinely measured at high spatial-temporal resolution by satellite altimeters with uniform global coverage. Our models provide spatially explicit population density predictions for both species, even in areas where the pycnocline shoals but does not outcrop (e.g. the Costa Rica Dome and the North Equatorial Countercurrent thermocline ridge). Interannual variations in distribution during El Niño anomalies suggest that the population density of both species decreases dramatically in the Equatorial Cold Tongue and the Costa Rica Dome, and that their distributions retract to particular areas that remain productive, such as the more oceanic waters in the central California Current System, the northern Gulf of California, the North Equatorial Countercurrent thermocline ridge, and the more southern portion of the

  16. Inferring cetacean population densities from the absolute dynamic topography of the ocean in a hierarchical Bayesian framework.

    Directory of Open Access Journals (Sweden)

    Mario A Pardo

    We inferred the population densities of blue whales (Balaenoptera musculus) and short-beaked common dolphins (Delphinus delphis) in the Northeast Pacific Ocean as functions of the water-column's physical structure by implementing hierarchical models in a Bayesian framework. This approach allowed us to propagate the uncertainty of the field observations into the inference of species-habitat relationships and to generate spatially explicit population density predictions with reduced effects of sampling heterogeneity. Our hypothesis was that the large-scale spatial distributions of these two cetacean species respond primarily to ecological processes resulting from shoaling and outcropping of the pycnocline in regions of wind-forced upwelling and eddy-like circulation. Physically, these processes affect the thermodynamic balance of the water column, decreasing its volume and thus the height of the absolute dynamic topography (ADT). Biologically, they lead to elevated primary productivity and persistent aggregation of low-trophic-level prey. Unlike other remotely sensed variables, ADT provides information about the structure of the entire water column and it is also routinely measured at high spatial-temporal resolution by satellite altimeters with uniform global coverage. Our models provide spatially explicit population density predictions for both species, even in areas where the pycnocline shoals but does not outcrop (e.g. the Costa Rica Dome and the North Equatorial Countercurrent thermocline ridge). Interannual variations in distribution during El Niño anomalies suggest that the population density of both species decreases dramatically in the Equatorial Cold Tongue and the Costa Rica Dome, and that their distributions retract to particular areas that remain productive, such as the more oceanic waters in the central California Current System, the northern Gulf of California, the North Equatorial Countercurrent thermocline ridge, and the more

  17. Decoding Mai Jia

    Institute of Scientific and Technical Information of China (English)

    LiuLiu

    2004-01-01

    Mai Jia won great success overnight with his novel Decoding. A nominee for the Mao Dun Literary Prize in 2003 and one of the five candidates for the final of "The 6th China's Books Award", Decoding was ranked first in "The chart of China's novels in 2002". It was not only reprinted in 27 newspapers and

  18. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  19. Estimating temporal trend in the presence of spatial complexity: a Bayesian hierarchical model for a wetland plant population undergoing restoration.

    Science.gov (United States)

    Rodhouse, Thomas J; Irvine, Kathryn M; Vierling, Kerri T; Vierling, Lee A

    2011-01-01

    Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach. PMID:22163047

  20. Estimating temporal trend in the presence of spatial complexity: A Bayesian hierarchical model for a wetland plant population undergoing restoration

    Science.gov (United States)

    Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.

    2011-01-01

    Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.

  1. Bayesian estimates of male and female African lion mortality for future use in population management

    DEFF Research Database (Denmark)

    Barthold, Julia A; Loveridge, Andrew; Macdonald, David; Packer, Craig; Colchero, Fernando

    2016-01-01

    1. The global population size of African lions is plummeting, and many small fragmented populations face local extinction. Extinction risks are amplified through the common practice of trophy hunting for males, which makes setting sustainable hunting quotas a vital task. 2. Various demographic models evaluate consequences of hunting on lion population growth. However, none of the models use unbiased estimates of male age-specific mortality because such estimates do not exist. Until now, estimating mortality from resighting records of marked males has been impossible due to the uncertain fates of disappeared individuals: dispersal or death. 3. We develop a new method and infer mortality for male and female lions from two populations that are typical with respect to their experienced levels of human impact. 4. We found that mortality of both sexes differed between the populations and that...

  2. High Speed Viterbi Decoder Architecture

    DEFF Research Database (Denmark)

    Paaske, Erik; Andersen, Jakob Dahl

    1998-01-01

    The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s....
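
    The algorithm such architectures accelerate can be sketched compactly in software. Below is a hedged, hard-decision Viterbi decoder for the (171,133) rate 1/2, constraint length 7 code named above; bit-ordering conventions vary between implementations, so this sketch fixes one and verifies only that it inverts its own encoder through a single channel error:

```python
G0, G1, NSTATES = 0o171, 0o133, 64          # taps and 2^(K-1) trellis states

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def step(state: int, b: int):
    """Shift bit b into a 7-bit register; return (next state, output pair)."""
    reg = (state << 1) | b
    return reg & 0x3F, (parity(reg & G0), parity(reg & G1))

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, sym = step(state, b)
        out.append(sym)
    return out

def viterbi(received):
    INF = 1 << 30
    metric = [0] + [INF] * (NSTATES - 1)    # encoder starts in state 0
    paths = [[] for _ in range(NSTATES)]
    for r0, r1 in received:
        new_m, new_p = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, (o0, o1) = step(s, b)
                m = metric[s] + (o0 != r0) + (o1 != r1)   # Hamming metric
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metric, paths = new_m, new_p
    return paths[min(range(NSTATES), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 1, 0] + [0] * 6    # six tail bits flush the encoder
rx = encode(msg)
rx[3] = (1 - rx[3][0], rx[3][1])            # inject one channel bit error
assert viterbi(rx)[:8] == msg[:8]           # the error is corrected
```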

  3. MASSIVE: A Bayesian analysis of giant planet populations around low-mass stars

    CERN Document Server

    Lannier, J; Lagrange, A M; Borgniet, S; Rameau, J; Schlieder, J E; Gagné, J; Bonavita, M A; Malo, L; Chauvin, G; Bonnefoy, M; Girard, J H

    2016-01-01

    Direct imaging has led to the discovery of several giant planet and brown dwarf companions. These imaged companions populate a mass, separation and age domain (mass > 1 MJup, orbits > 5 AU, young ages) distinct from that probed by indirect detection methods, and the analysis suggests that the frequency of companions with masses > 2 MJup might be independent of the mass of the host star.

  4. The confounding effect of population structure on Bayesian skyline plot inferences of demographic history

    DEFF Research Database (Denmark)

    Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans

    2013-01-01

    The effects of population structure on Bayesian skyline plot (BSP) inferences of demographic history have so far not been addressed and quantified. We simulated DNA sequence data under a variety of scenarios involving structured populations with variable levels of gene flow and analysed them using BSPs as implemented in the software package BEAST. Results revealed that BSPs can show false signals...

  5. A Bayesian integrated population dynamics model to analyze data for protected species

    OpenAIRE

    Hoyle, S. D.; Maunder, M. N.

    2004-01-01

    Managing wildlife-human interactions demands reliable information about the likely consequences of management actions. This requirement is a general one, whatever the taxonomic group. We describe a method for estimating population dynamics and decision analysis that is generally applicable, extremely flexible, uses data efficiently, and gives answers in a useful format. Our case study involves bycatch of a protected species, the Northeastern Offshore Spotted Dolphin (Stenella attenuata), in t...

  6. Physiologically based modeling of the pharmacokinetics of acetaminophen and its major metabolites in humans using a Bayesian population approach.

    Science.gov (United States)

    Zurlinden, Todd J; Reisfeld, Brad

    2016-06-01

    The principal aim of this study was to develop, validate, and demonstrate a physiologically based pharmacokinetic (PBPK) model to predict and characterize the absorption, distribution, metabolism, and excretion of acetaminophen (APAP) in humans. A PBPK model was created that included pharmacologically and toxicologically relevant tissue compartments and incorporated mechanistic descriptions of the absorption and metabolism of APAP, such as gastric emptying time, cofactor kinetics, and transporter-mediated movement of conjugated metabolites in the liver. Through the use of a hierarchical Bayesian framework, unknown model parameters were estimated using a large training set of data from human pharmacokinetic studies, resulting in parameter distributions that account for data uncertainty and inter-study variability. Predictions from the model showed good agreement to a diverse test set of data across several measures, including plasma concentrations over time, renal clearance, APAP absorption, and pharmacokinetic and exposure metrics. The utility of the model was then demonstrated through predictions of cofactor depletion, dose response of several pharmacokinetic endpoints, and the relationship between APAP biomarker levels in the plasma and those in the liver. The model addressed several limitations in previous PBPK models for APAP, and it is anticipated that it will be useful in predicting the pharmacokinetics of APAP in a number of contexts, such as extrapolating across doses, estimating internal concentrations, quantifying population variability, assessing possible impacts of drug coadministration, and, when coupled with a suitable pharmacodynamic model, predicting toxicity. PMID:25636597

  7. Estimating temporal trend in the presence of spatial complexity: a Bayesian hierarchical model for a wetland plant population undergoing restoration.

    Directory of Open Access Journals (Sweden)

    Thomas J Rodhouse

    Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.

  8. Estimation of Coast-Wide Population Trends of Marbled Murrelets in Canada Using a Bayesian Hierarchical Model.

    Science.gov (United States)

    Bertram, Douglas F; Drever, Mark C; McAllister, Murdoch K; Schroeder, Bernard K; Lindsay, David J; Faust, Deborah A

    2015-01-01

    Species at risk with secretive breeding behaviours, low densities, and wide geographic range pose a significant challenge to conservation actions because population trends are difficult to detect. Such is the case with the Marbled Murrelet (Brachyramphus marmoratus), a seabird listed as 'Threatened' by the Species at Risk Act in Canada largely due to the loss of its old growth forest nesting habitat. We report the first estimates of population trend of Marbled Murrelets in Canada derived from a monitoring program that uses marine radar to detect birds as they enter forest watersheds during 923 dawn surveys at 58 radar monitoring stations within the six Marbled Murrelet Conservation Regions on coastal British Columbia, Canada, 1996-2013. Temporal trends in radar counts were analyzed with a hierarchical Bayesian multivariate modeling approach that controlled for variation in tilt of the radar unit and day of year, included year-specific deviations from the overall trend ('year effects'), and allowed for trends to be estimated at three spatial scales. A negative overall trend of -1.6%/yr (95% credibility interval: -3.2%, 0.01%) indicated moderate evidence for a coast-wide decline, although trends varied strongly among the six conservation regions. Negative annual trends were detected in East Vancouver Island (-9%/yr) and South Mainland Coast (-3%/yr) Conservation Regions. Over a quarter of the year effects were significantly different from zero, and the estimated standard deviation in common-shared year effects between sites within each region was about 50% per year. This large common-shared interannual variation in counts may have been caused by regional movements of birds related to changes in marine conditions that affect the availability of prey. PMID:26258803

  9. The impact of ancestral population size and incomplete lineage sorting on Bayesian estimation of species divergence times

    Institute of Scientific and Technical Information of China (English)

    Konstantinos ANGELIS; Mario DOS REIS

    2015-01-01

    Although the effects of the coalescent process on sequence divergence and genealogies are well understood, the virtual majority of studies that use molecular sequences to estimate times of divergence among species have failed to account for the coalescent process. Here we study the impact of ancestral population size and incomplete lineage sorting on Bayesian estimates of species divergence times under the molecular clock when the inference model ignores the coalescent process. Using a combination of mathematical analysis, computer simulations and analysis of real data, we find that the errors on estimates of times and the molecular rate can be substantial when ancestral populations are large and when there is substantial incomplete lineage sorting. For example, in a simple three-species case, we find that if the most precise fossil calibration is placed on the root of the phylogeny, the age of the internal node is overestimated, while if the most precise calibration is placed on the internal node, then the age of the root is underestimated. In both cases, the molecular rate is overestimated. Using simulations on a phylogeny of nine species, we show that substantial errors in time and rate estimates can be obtained even when dating ancient divergence events. We analyse the hominoid phylogeny and show that estimates of the neutral mutation rate obtained while ignoring the coalescent are too high. Using a coalescent-based technique to obtain geological times of divergence, we obtain estimates of the mutation rate that are within experimental estimates and we also obtain substantially older divergence times within the phylogeny [Current Zoology 61 (5): 874–885, 2015].

  10. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are...
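
    For context, the computational effort of sequential decoding is classically Pareto-distributed (a standard result from the sequential-decoding literature, not a claim of this paper): the probability that a frame requires more than N computations behaves as

        \[
        P(C > N) \;\approx\; A\,N^{-\rho},
        \]

    where the Pareto exponent satisfies ρ = 1 exactly at the cutoff rate R = R_0. Operating conditions that keep ρ > 1 keep the average decoding load finite, which is why side information that lowers the load lets the scheme run at SNRs below the nominal cutoff point.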

  11. Nonparametric Bayesian Logic

    OpenAIRE

    Carbonetto, Peter; Kisynski, Jacek; De Freitas, Nando; Poole, David L

    2012-01-01

    The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By expl...

  12. Versatile Reed-Solomon decoders

    Science.gov (United States)

    Rajableh-Shayan, Yousef

    1990-08-01

    Reed-Solomon (RS) codes have found many applications, such as space and mobile communication, due to their error-correcting capability (ECC) and optimal structure. It is shown that time domain algorithms are the best candidates for designing versatile hardware decoders, while syndrome-based algorithms are advantageous for software decoders. The algorithms for decoding RS codes require algebraic operations over Galois fields. Parallel-in, parallel-out multipliers and inverters in Galois fields are considered and least-complex structures for the multiplier are introduced. A new normal basis multiplier is presented, as well as a universal multiplier for multiplying two elements of GF(2^m) (m = 4, 5, 6, 7, 8). The time domain algorithm based on the transform decoder is restructured and two versatile decoder structures are presented. Both are simple and modular, thus suitable for very large scale integration design, and can be used for decoding any primitive RS code defined in a specific Galois field. The ECC of these decoders is configurable. The structure of a universal RS decoder is also presented. The time domain decoding algorithm based on the algebraic decoder is modified to reduce the complexity of the universal decoder. The ECC and the size of the Galois field of this decoder are configurable. A method is also introduced for decoding RS codes generated by any generator polynomial.
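
    As a concrete illustration of the Galois-field arithmetic such decoders are built on, here is a bit-serial GF(2^8) multiplier in Python (a sketch; the reduction polynomial 0x11D is one common choice for RS codes and is not necessarily the one used in this work):

        def gf256_mul(a: int, b: int, prim: int = 0x11D) -> int:
            """Multiply two elements of GF(2^8), reducing by `prim`.

            0x11D encodes x^8 + x^4 + x^3 + x^2 + 1; any primitive
            degree-8 polynomial works the same way.
            """
            result = 0
            while b:
                if b & 1:
                    result ^= a        # addition in GF(2^m) is XOR
                b >>= 1
                a <<= 1
                if a & 0x100:          # reduce once the degree reaches 8
                    a ^= prim
            return result

    A parallel-in, parallel-out hardware multiplier computes the same product combinationally; the loop above is the software analogue of the shift, conditional-XOR, and modular-reduction steps.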

  13. Interpretability in Linear Brain Decoding

    OpenAIRE

    Kia, Seyed Mostafa; Passerini, Andrea

    2016-01-01

    Improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of brain decoding models. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, we present a simple definition for interpretability of linear brain decoding models. Then, we propose to combine the...

  14. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  15. Analysis of regional scale risk of whirling disease in populations of Colorado and Rio Grande cutthroat trout using a Bayesian belief network model

    Science.gov (United States)

    Kolb Ayre, Kimberley; Caldwell, Colleen A.; Stinson, Jonah; Landis, Wayne G.

    2014-01-01

    Introduction and spread of the parasite Myxobolus cerebralis, the causative agent of whirling disease, has contributed to the collapse of wild trout populations throughout the intermountain west. Of concern is the risk the disease may have on conservation and recovery of native cutthroat trout. We employed a Bayesian belief network to assess probability of whirling disease in Colorado River and Rio Grande cutthroat trout (Oncorhynchus clarkii pleuriticus and Oncorhynchus clarkii virginalis, respectively) within their current ranges in the southwest United States. Available habitat (as defined by gradient and elevation) for intermediate oligochaete worm host, Tubifex tubifex, exerted the greatest influence on the likelihood of infection, yet prevalence of stream barriers also affected the risk outcome. Management areas that had the highest likelihood of infected Colorado River cutthroat trout were in the eastern portion of their range, although the probability of infection was highest for populations in the southern, San Juan subbasin. Rio Grande cutthroat trout had a relatively low likelihood of infection, with populations in the southernmost Pecos management area predicted to be at greatest risk. The Bayesian risk assessment model predicted the likelihood of whirling disease infection from its principal transmission vector, fish movement, and suggested that barriers may be effective in reducing risk of exposure to native trout populations. Data gaps, especially with regard to location of spawning, highlighted the importance in developing monitoring plans that support future risk assessments and adaptive management for subspecies of cutthroat trout.

  16. Analysis of regional scale risk of whirling disease in populations of Colorado and Rio Grande cutthroat trout using a Bayesian belief network model.

    Science.gov (United States)

    Ayre, Kimberley Kolb; Caldwell, Colleen A; Stinson, Jonah; Landis, Wayne G

    2014-09-01

    Introduction and spread of the parasite Myxobolus cerebralis, the causative agent of whirling disease, has contributed to the collapse of wild trout populations throughout the intermountain west. Of concern is the risk the disease may have on conservation and recovery of native cutthroat trout. We employed a Bayesian belief network to assess probability of whirling disease in Colorado River and Rio Grande cutthroat trout (Oncorhynchus clarkii pleuriticus and Oncorhynchus clarkii virginalis, respectively) within their current ranges in the southwest United States. Available habitat (as defined by gradient and elevation) for intermediate oligochaete worm host, Tubifex tubifex, exerted the greatest influence on the likelihood of infection, yet prevalence of stream barriers also affected the risk outcome. Management areas that had the highest likelihood of infected Colorado River cutthroat trout were in the eastern portion of their range, although the probability of infection was highest for populations in the southern, San Juan subbasin. Rio Grande cutthroat trout had a relatively low likelihood of infection, with populations in the southernmost Pecos management area predicted to be at greatest risk. The Bayesian risk assessment model predicted the likelihood of whirling disease infection from its principal transmission vector, fish movement, and suggested that barriers may be effective in reducing risk of exposure to native trout populations. Data gaps, especially with regard to location of spawning, highlighted the importance in developing monitoring plans that support future risk assessments and adaptive management for subspecies of cutthroat trout. PMID:24660663
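
    The kind of computation a Bayesian belief network performs here can be shown by brute-force marginalization over a toy two-parent fragment. The structure and all probabilities below are invented for illustration and are not the published model:

        # P(suitable T. tubifex habitat) and P(barrier present): illustrative priors
        p_habitat = 0.4
        p_barrier = 0.5
        # P(infection | habitat suitable?, barrier present?): illustrative CPT
        p_inf = {
            (True, True): 0.30, (True, False): 0.60,
            (False, True): 0.02, (False, False): 0.05,
        }
        # marginal infection risk: sum over the four parent configurations
        prob = sum(p_inf[(h, b)]
                   * (p_habitat if h else 1 - p_habitat)
                   * (p_barrier if b else 1 - p_barrier)
                   for h in (True, False) for b in (True, False))
        print(f"marginal infection risk: {prob:.3f}")   # -> 0.201

    The published network conditions on many more variables (gradient, elevation, fish movement), but each node's posterior is assembled from exactly this kind of weighted sum over its parents' states.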

  17. A data-driven Bayesian approach for finding young stellar populations in early-type galaxies from their UV-optical spectra

    CERN Document Server

    Nolan, L A; Kaban, Ata; Raychaudhuri, S

    2006-01-01

    We present the results of a novel application of Bayesian modelling techniques, which, although purely data driven, have a physically interpretable result, and will be useful as an efficient data mining tool. We base our studies on the UV-to-optical spectra (observed and synthetic) of early-type galaxies. A probabilistic latent variable architecture is formulated, and a rigorous Bayesian methodology is employed for solving the inverse modelling problem from the available data. A powerful aspect of our formalism is that it allows us to recover a limited fraction of missing data due to incomplete spectral coverage, as well as to handle observational errors in a principled way. We apply this method to a sample of 21 well-studied early-type spectra, with known star-formation histories. We find that our data-driven Bayesian modelling allows us to identify those early-types which contain a significant stellar population <~ 1 Gyr old. This method would therefore be a very useful tool for automatically discovering...

  18. Decoding the human genome

    CERN Document Server

    CERN. Geneva. Audiovisual Unit; Antonerakis, S E

    2002-01-01

    Decoding the human genome is a very timely topic, raising several questions beyond the purely scientific ones, in view of the two competing teams (public and private), the ethics of using the results, and the fact that the project apparently went faster and more easily than expected. The lecture series will address the following chapters: Scientific basis and challenges. Ethical and social aspects of genomics.

  19. Decoding Stacked Denoising Autoencoders

    OpenAIRE

    Sonoda, Sho; Murata, Noboru

    2016-01-01

    Data representation in a stacked denoising autoencoder is investigated. Decoding is a simple technique for translating a stacked denoising autoencoder into a composition of denoising autoencoders in the ground space. In the infinitesimal limit, a composition of denoising autoencoders is reduced to a continuous denoising autoencoder, which is rich in analytic properties and geometric interpretation. For example, the continuous denoising autoencoder solves the backward heat equation and transpo...

  20. Decoding Neuronal Ensembles in the Human Hippocampus

    OpenAIRE

    Hassabis, D; Chu, C; Rees, G.; Weiskopf, N.; Molyneux, P.D.; Maguire, E. A.

    2009-01-01

    Summary Background The hippocampus underpins our ability to navigate, to form and recollect memories, and to imagine future experiences. How activity across millions of hippocampal neurons supports these functions is a fundamental question in neuroscience, wherein the size, sparseness, and organization of the hippocampal neural code are debated. Results Here, by using multivariate pattern classification and high spatial resolution functional MRI, we decoded activity across the population of n...

  1. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  2. Comparing offline decoding performance in physiologically defined neuronal classes

    Science.gov (United States)

    Best, Matthew D.; Takahashi, Kazutaka; Suminski, Aaron J.; Ethier, Christian; Miller, Lee E.; Hatsopoulos, Nicholas G.

    2016-04-01

    Objective: Recently, several studies have documented the presence of a bimodal distribution of spike waveform widths in primary motor cortex. Although narrow and wide spiking neurons, corresponding to the two modes of the distribution, exhibit different response properties, it remains unknown if these differences give rise to differential decoding performance between these two classes of cells. Approach: We used a Gaussian mixture model to classify neurons into narrow and wide physiological classes. Using similar-size, random samples of neurons from these two physiological classes, we trained offline decoding models to predict a variety of movement features. We compared offline decoding performance between these two physiologically defined populations of cells. Main results: We found that narrow spiking neural ensembles decode motor parameters, including kinematics, kinetics, and muscle activity, better than wide spiking neural ensembles. Significance: These findings suggest that the utility of neural ensembles in brain-machine interfaces may be predicted from their spike waveform widths.
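
    The classification step described, fitting a two-component Gaussian mixture to spike waveform widths, looks roughly like the scikit-learn sketch below; the widths are synthetic and the two modes are made-up values, not measurements from the study.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # synthetic trough-to-peak widths (ms): a narrow and a wide mode
        widths = np.concatenate([rng.normal(0.25, 0.05, 150),
                                 rng.normal(0.55, 0.08, 250)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(widths)
        labels = gmm.predict(widths)
        narrow = labels == np.argmin(gmm.means_.ravel())  # smaller-mean component
        print(f"{narrow.sum()} narrow, {(~narrow).sum()} wide")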

  3. Development of a paediatric population pharmacokinetic model for valacyclovir from literature non-compartmental values originating from sparse studies and Bayesian priors: a simulation study.

    Science.gov (United States)

    Kechagia, Irene-Ariadne; Dokoumetzidis, Aristides

    2015-06-01

    A preliminary population pharmacokinetic (PopPK) model of valacyclovir in children was developed from non-compartmental analysis (NCA) parameter values from the literature, covering several age groups, combined with Bayesian priors from a literature PopPK model of acyclovir, the active metabolite of valacyclovir. A simulation study was also carried out to evaluate the performance of various modelling choices related to the estimation of model parameters from NCA parameters originating from sparse PK studies. Assuming a one-compartment model with first-order absorption, a mixed effects, meta-analysis approach was utilized, which allows accounting for random intergroup variability, the detection of covariates, and the application of informative Bayesian priors on the parameters. The conclusions from the simulation study, which calculated bias and precision for various cases, were that a model which takes the sampling schedule explicitly into account performs better than a model using the theoretical expressions for calculating the NCA parameters, and that using the geometric rather than the arithmetic means of NCA parameters yields less biased results. These findings guided the choices for the valacyclovir model, for which informative priors from a PopPK model of acyclovir were applied to some of the parameters, in order to include a richer covariate model for clearance than the NCA dataset could support, as well as a value for bioavailability. This preliminary valacyclovir model can be used in simulations to provide dosage recommendations for children of various ages and to help design prospective clinical trials more efficiently. PMID:25821006
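
    The structural model named above, one compartment with first-order absorption, has a closed-form concentration profile; the sketch below implements it, with placeholder parameter values rather than valacyclovir estimates.

        import numpy as np

        def conc_1cpt_oral(t, dose, F, ka, CL, V):
            """C(t) = F*dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
            with elimination rate ke = CL/V; requires ka != ke."""
            ke = CL / V
            return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        t = np.linspace(0.0, 12.0, 49)                       # hours post-dose
        c = conc_1cpt_oral(t, dose=500.0, F=0.5, ka=1.0, CL=10.0, V=30.0)
        print(f"Cmax ~ {c.max():.2f} at t ~ {t[c.argmax()]:.2f} h")

    In the population setting, each parameter gets a typical value, covariate effects, and a random effect per group, which is where the informative acyclovir priors enter.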

  4. Bayesian statistics

    OpenAIRE

    Draper, D.

    2001-01-01

    Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  5. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module mini...

  6. Body size and geographic range do not explain long term variation in fish populations: a Bayesian phylogenetic approach to testing assembly processes in stream fish assemblages.

    Directory of Open Access Journals (Sweden)

    Stephen J Jacquemin

    Full Text Available We combine evolutionary biology and community ecology to test whether two species traits, body size and geographic range, explain long term variation in local scale freshwater stream fish assemblages. Body size and geographic range are expected to influence several aspects of fish ecology, via relationships with niche breadth, dispersal, and abundance. These traits are expected to scale inversely with niche breadth or current abundance, and to scale directly with dispersal potential. However, their utility to explain long term temporal patterns in local scale abundance is not known. Comparative methods employing an existing molecular phylogeny were used to incorporate evolutionary relatedness in a test for covariation of body size and geographic range with long term (1983-2010) local scale population variation of fishes in the West Fork White River (Indiana, USA). The Bayesian model incorporating phylogenetic uncertainty and correlated predictors indicated that neither body size nor geographic range explained significant variation in population fluctuations over a 28-year period. Phylogenetic signal data indicated that body size and geographic range were less similar among taxa than expected if trait evolution followed a purely random walk. We interpret this as evidence that local scale population variation may be influenced less by species-level traits such as body size or geographic range, and instead may be influenced more strongly by a taxon's local scale habitat and biotic assemblages.

  7. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  8. The Formal Specifications for Protocols of Decoders

    Institute of Scientific and Technical Information of China (English)

    YUAN Meng-ting; WU Guo-qing; SHU Feng-di

    2004-01-01

    This paper presents a formal approach, FSPD (Formal Specifications for Protocols of Decoders), to specifying decoder communication protocols. Based on an axiomatic foundation, FSPD is a precise language with which programmers can use a single suitable driver to handle various types of decoders. FSPD helps programmers achieve high adaptability and reusability of decoder-driver software.

  9. Astrophysics Decoding the cosmos

    CERN Document Server

    Irwin, Judith A

    2007-01-01

    Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.

  10. Decoding the productivity code

    DEFF Research Database (Denmark)

    Hansen, David

    , that is, the productivity code of the 21st century, is dissolved. Today, organizations are pressured for operational efficiency, often in terms of productivity, due to increased global competition, demographic changes, and use of natural resources. Taylor’s principles for rationalization founded...... that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period.... The improvement system consists of five elements: the improvement process, participants, management, organization, and technology. The improvement system is not an organizational structure but rather a capability and readiness to organize the right improvement activities for a given challenge, i...

  11. Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    CERN Document Server

    Ling, Cong; Luzzi, Laura; Stehle, Damien

    2011-01-01

    In lattice-coded multiple-input multiple-output (MIMO) systems, optimal decoding amounts to solving the closest vector problem (CVP). Embedding is a powerful technique for the approximate CVP, yet its remarkable performance is not well understood. In this paper, we analyze the embedding technique from a bounded distance decoding (BDD) viewpoint. We prove that the Lenstra, Lenstra and Lovász (LLL) algorithm can achieve 1/(2γ)-BDD for γ ≈ O(2^{n/4}), yielding a polynomial-complexity decoding algorithm performing exponentially better than Babai's, which achieves γ = O(2^{n/2}). This substantially improves the existing result γ = O(2^n) for embedding decoding. We also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT).
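
    In its standard form (sketched from the general lattice literature, not the paper's exact notation), embedding augments the lattice basis B with the received point y to form an extended basis:

        \[
        \mathbf{B}' \;=\; \begin{pmatrix} \mathbf{B} & \mathbf{y} \\ \mathbf{0}^{\mathsf{T}} & \beta \end{pmatrix},
        \qquad
        \mathbf{B}'\begin{pmatrix} -\mathbf{x} \\ 1 \end{pmatrix}
        = \begin{pmatrix} \mathbf{y} - \mathbf{B}\mathbf{x} \\ \beta \end{pmatrix}
        = \begin{pmatrix} \mathbf{e} \\ \beta \end{pmatrix}.
        \]

    If the error e is short and β is chosen on the order of ‖e‖, then (e, β) is an unusually short vector of the extended lattice; running a reduction algorithm such as LLL on B' exposes it and thereby recovers x. The γ values quoted above bound how large ‖e‖ may be, relative to the lattice minimum, for this to succeed.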

  12. VLSI Reed-Solomon decoder

    Science.gov (United States)

    Kim, Yong H.; Chung, Young Mo; Lee, Sang Uk

    1992-11-01

    In this paper, a VLSI architecture for a Reed-Solomon (RS) decoder based on the Berlekamp algorithm is proposed. The proposed decoder provides both erasure and error correcting capability. In order to reduce the chip area, we reformulate the Berlekamp algorithm. The proposed algorithm possesses a recursive structure so that the number of cells for computing the errata locator polynomial can be reduced. Moreover, in our approach, only one finite field multiplication per clock cycle is required for implementation, providing an improvement in decoding speed. The overall architecture features a parallel and pipelined structure, making real-time decoding possible. It is shown that the proposed VLSI architecture is more efficient in terms of VLSI implementation than the architecture based on the recursive Euclid algorithm.

  13. HEVC real-time decoding

    Science.gov (United States)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor, H.264/MPEG-4 AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high-definition video is feasible using instruction extensions of the processor, while decoding 4K ultra-high-definition video in real time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  14. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...

  15. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the "Bayesian Monitoring" model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  16. Far beyond stacking: Fully bayesian constraints on sub-microJy radio source populations over the XMM-LSS-VIDEO field

    CERN Document Server

    Zwart, Jonathan T L; Jarvis, Matt J

    2015-01-01

    Measuring radio source counts is critical for characterizing new extragalactic populations, brings a wealth of science within reach and will inform forecasts for SKA and its pathfinders. Yet there is currently great debate (and few measurements) about the behaviour of the 1.4-GHz counts in the microJy regime. One way to push the counts to these levels is via 'stacking', the covariance of a map with a catalogue at higher resolution and (often) a different wavelength. For the first time, we cast stacking in a fully Bayesian framework, applying it to (i) the SKADS simulation and (ii) VLA data stacked at the positions of sources from the VIDEO survey. In the former case, the algorithm recovers the counts correctly when applied to the catalogue, but is biased high when confusion comes into play. This needs to be accounted for in the analysis of data from any relatively-low-resolution SKA pathfinders. For the latter case, the observed radio source counts remain flat below the 5-sigma level of 85 microJy as far as 4...

  17. Bayesian demography 250 years after Bayes.

    Science.gov (United States)

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889

  18. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them...

  19. Systolic VLSI Reed-Solomon Decoder

    Science.gov (United States)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.

    1986-01-01

    Decoder for digital communications provides high-speed, pipelined Reed-Solomon (RS) error-correction decoding of data streams. Principal new feature of proposed decoder is modification of Euclidean greatest-common-divisor algorithm to avoid need for time-consuming computations of inverses of certain Galois-field quantities. Decoder architecture suitable for implementation on very-large-scale integrated (VLSI) chips with negative-channel metal-oxide/silicon (NMOS) circuitry.

  20. Modular VLSI Reed-Solomon Decoder

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie

    1991-01-01

    Proposed Reed-Solomon decoder contains multiple very-large-scale integrated (VLSI) circuit chips of the same type. Each chip contains sets of logic cells and subcells performing functions from all stages of decoding process. Full decoder assembled by concatenating chips, with selective utilization of cells in particular chips. Cost of development reduced by factor of 5. In addition, decoder programmable in the field and switchable between 8-bit and 10-bit symbol sizes.

  1. An improved hypothetical reference decoder for HEVC

    Science.gov (United States)

    Deshpande, Sachin; Hannuksela, Miska M.; Kazui, Kimihiko; Schierl, Thomas

    2013-02-01

    The Hypothetical Reference Decoder is a decoder model that specifies constraints on the variability of conforming network abstraction layer unit streams or conforming byte streams that an encoding process may produce. High Efficiency Video Coding (HEVC) builds upon and improves the design of the generalized hypothetical reference decoder of H.264/AVC. This paper describes some of the main improvements of the hypothetical reference decoder of HEVC.

  2. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean...

  3. On Decoding Interleaved Chinese Remainder Codes

    DEFF Research Database (Denmark)

    Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

    We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail with a...

  4. Social Intelligence and Decoding of Nonverbal Cues.

    Science.gov (United States)

    Barnes, Michael L.; Sternberg, Robert J.

    1989-01-01

    The relationship between non-verbal decoding ability and social intelligence, defined as the ability to decode social information accurately, was studied using 40 adults. Results are discussed in the framework of R. J. Sternberg's triarchic theory of human intelligence. Decoding skills appeared to be an important part of social intelligence. (SLD)

  5. Interpolation-based Decoding of Alternant Codes

    CERN Document Server

    Lee, Kwankyu

    2007-01-01

    We formulate the classical decoding algorithm of alternant codes afresh based on interpolation as in Sudan's list decoding of Reed-Solomon codes, and thus get rid of the key equation and the linear recurring sequences in the theory. The result is a streamlined exposition of the decoding algorithm using a bit of the theory of Groebner bases of modules.

  6. Far beyond stacking: fully Bayesian constraints on sub-μJy radio source populations over the XMM-LSS-VIDEO field

    Science.gov (United States)

    Zwart, Jonathan T. L.; Santos, Mario; Jarvis, Matt J.

    2015-10-01

    Measuring radio source counts is critical for characterizing new extragalactic populations, brings a wealth of science within reach and will inform forecasts for SKA and its pathfinders. Yet there is currently great debate (and few measurements) about the behaviour of the 1.4-GHz counts in the μJy regime. One way to push the counts to these levels is via 'stacking', the covariance of a map with a catalogue at higher resolution and (often) a different wavelength. For the first time, we cast stacking in a fully Bayesian framework, applying it to (i) the Square Kilometre Array Design Study (SKADS) simulation and (ii) Very Large Array (VLA) data stacked at the positions of sources from the VISTA Infra-red Deep Extragalactic Observations (VIDEO) survey. In the former case, the algorithm recovers the counts correctly when applied to the catalogue, but is biased high when confusion comes into play. This needs to be accounted for in the analysis of data from any relatively low-resolution Square Kilometre Array (SKA) pathfinders. For the latter case, the observed radio source counts remain flat below the 5-σ level of 85 μJy as far as 40 μJy, then fall off earlier than the flux hinted at by the SKADS simulations and a recent P(D) analysis (which is the only other measurement from the literature at these flux-density levels, itself extrapolated in frequency). Division into galaxy type via spectral-energy distribution reveals that normal spiral galaxies dominate the counts at these fluxes.
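
    In its simplest form (a schematic definition, not the paper's full treatment), stacking estimates the mean flux density of a catalogue's sources by averaging the map M at the N catalogue positions x_i:

        \[
        \hat{S} \;=\; \frac{1}{N}\sum_{i=1}^{N} M(x_i),
        \]

    which is proportional to the covariance of the map with the catalogue when the latter is treated as a set of delta functions; the fully Bayesian framework described above replaces this point estimate with a posterior over the parameters of a source-count model.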

  7. Fast Reed-Solomon Decoder

    Science.gov (United States)

    Liu, K. Y.

    1986-01-01

    High-speed decoder intended for use with Reed-Solomon (RS) codes of long code length and high error-correcting capability. Design based on algorithm that includes high-radix Fermat transform procedure, which is most efficient for high speeds. RS code in question has code-word length of 256 symbols, of which 224 are information symbols and 32 are redundant.

  8. Dynamics of intracellular information decoding

    International Nuclear Information System (INIS)

    A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity

  9. Dynamics of intracellular information decoding

    Science.gov (United States)

    Kobayashi, Tetsuya J.; Kamimura, Atsushi

    2011-10-01

    A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity.

  10. Decoding intention at sensorimotor timescales.

    Directory of Open Access Journals (Sweden)

    Mathew Salvaris

    Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds, much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.

  11. GENETIC ALGORITHM FOR DECODING LINEAR CODES OVER AWGN AND FADING CHANNELS

    Directory of Open Access Journals (Sweden)

    H. BERBIA

    2011-08-01

    Full Text Available This paper introduces a decoder for binary linear codes based on a Genetic Algorithm (GA) over Gaussian and Rayleigh flat fading channels. The performance and computational complexity of our decoder, applied to BCH and convolutional codes, compare well with the Chase-2 and Viterbi algorithms, respectively. We show that our algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the number of individuals per population and the number of generations.

  12. Mathematics is differentially related to reading comprehension and word decoding: Evidence from a genetically sensitive design.

    OpenAIRE

    Harlaar, Nicole; Kovas, Yulia; Dale, Philip S; Petrill, Stephen A.; Plomin, Robert

    2012-01-01

    Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses...

  13. Frequency-Accommodating Manchester Decoder

    Science.gov (United States)

    Vasquez, Mario J.

    1988-01-01

    No adjustment necessary to cover a 10:1 frequency range. Decoding circuit converts biphase-level pulse-code modulation to nonreturn-to-zero (NRZ)-level pulse-code modulation plus clock signal. Circuit accommodates input data rate of 50 to 500 kb/s. Tracks gradual changes in rate automatically, eliminating need for extra circuits and manual switching to adjust to different rates.

  14. Comparison of linear mixed model analysis and genealogy-based haplotype clustering with a Bayesian approach for association mapping in a pedigreed population

    DEFF Research Database (Denmark)

    Dashab, Golam Reza; Kadri, Naveen Kumar; Mahdi Shariati, Mohammad;

    2012-01-01

    Four methods were compared: 1) Mixed model analysis (MMA), 2) Random haplotype model (RHM), 3) Genealogy-based mixed model (GENMIX), and 4) Bayesian variable selection (BVS). The data consisted of phenotypes of 2000 animals from 20 sire families, genotyped with 9990 SNPs on five chromosomes. Results: Out of the eight...

  15. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    A feasibility study for a low cost, iterative, serially concatenated coding system is performed. The system uses outer (255,223) Reed-Solomon codes and convolutional inner codes with memory 10 and rates 1/4 or 1/6. The corresponding inner decoder is a Viterbi decoder, which can operate in a forced sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGAs. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed of the Viterbi decoder becomes 90 kbit/s. For a BER of 10E-5 the enhanced gain compared to the CCSDS recommended system exceeds 1.5 dB and 1.7 dB for the rate 1/4 and the rate 1/6 codes, respectively.

  16. Towards joint decoding of Tardos fingerprinting codes

    CERN Document Server

    Meerwald, Peter

    2011-01-01

    The class of joint decoders of probabilistic fingerprinting codes is of utmost importance in theoretical papers to establish the concept of fingerprint capacity. However, no implementation supporting a large user base is known to date. This paper presents an iterative decoder which is, as far as we are aware, the first practical attempt towards joint decoding. The discriminative feature of the scores benefits on one hand from the side information of previously accused users, and on the other hand from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes precise assumptions about the collusion (size or strategy). The extension to incorporate soft outputs from the watermarking layer is straightforward. An intensive experimental work demonstrates the very good performance and offers a clear comparison with previous state-of-the-art decoders.

  17. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  18. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes and...... largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  19. A decoding failure test for the transform decoder of Reed-Solomon code

    Science.gov (United States)

    Miller, R. L.; Truong, T. K.; Reed, I. S.

    1981-01-01

    Using a finite field transform, a transform decoding algorithm is able to correct erasures as well as errors of any (n,k,d) Reed-Solomon code over the finite field GF(q). A pitfall of transform decoding and how to avoid it are discussed. A simple test is given so that the decoder fails to decode instead of introducing additional errors, whenever the received word contains too many errors and erasures.
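
    The condition such a failure test enforces is the standard bounded-distance criterion for an (n,k,d) Reed-Solomon code receiving e errors and f erasures: decode only when

        \[
        2e + f \;\le\; d - 1 \;=\; n - k,
        \]

    and declare failure otherwise, rather than returning a codeword that may differ from the transmitted one in even more positions.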

  20. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.

  1. Iterative Detection and Decoding for Wireless Communications

    OpenAIRE

    Valenti, Matthew C.

    1999-01-01

    Turbo codes are a class of forward error correction (FEC) codes that offer energy efficiencies close to the limits predicted by information theory. The features of turbo codes include parallel code concatenation, recursive convolutional encoding, nonuniform interleaving, and an associated iterative decoding algorithm. Although the iterative decoding algorithm has been primarily used for the decoding of turbo codes, it represents a solution to a more general class of estimation problems tha...

  2. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear...

  3. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  4. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  5. Bayesian Games with Intentions

    OpenAIRE

    Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael

    2016-01-01

    We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  6. Bayesian Classification in Medicine: The Transferability Question *

    OpenAIRE

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann

    1981-01-01

    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...

  7. Bayesian Approach to Handling Informative Sampling

    OpenAIRE

    Sikov, Anna

    2015-01-01

    In the case of informative sampling, the sampling scheme explicitly or implicitly depends on the response variable. As a result, the sample distribution of the response variable cannot be used for making inference about the population. In this research I investigate the problem of informative sampling from the Bayesian perspective. Application of the Bayesian approach permits solving the problems that arise due to the complexity of the models used for handling informative sampling. The main...

  8. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the maximum decoding delay experienced when applying the policy that minimizes the sum decoding delay and our policy that reduces the maximum decoding delay. Simulation results show that our policy balances all the delay aspects well in all situations and outperforms the sum-decoding-delay policy at minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
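
    A greedy selection for the maximum-weight-clique step can be sketched as below. The vertex, weight, and adjacency abstractions are hypothetical placeholders for the IDNC graph, where vertices are receiver-packet pairs and edges join pairs that one coded packet can serve simultaneously:

        def greedy_weight_clique(vertices, weight, adjacent):
            """Grow a clique greedily by descending weight.

            weight(v): score of a vertex, e.g., its expected reduction of
            the maximum decoding delay. adjacent(u, v): True when u and v
            are connected in the IDNC graph.
            """
            clique = []
            for v in sorted(vertices, key=weight, reverse=True):
                if all(adjacent(v, u) for u in clique):
                    clique.append(v)
            return clique

    The XOR of the packets named in the returned clique is then broadcast; vertices compatible with everything chosen so far are exactly the receivers for which the combination remains instantly decodable.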

  9. Blending Wheels: Tools for Decoding Practice

    Science.gov (United States)

    Lane, Holly; Pullen, Paige Cullen

    2015-01-01

    Decoding practice significantly improves students' reading proficiency and is particularly beneficial for those who have or who are at risk for reading difficulties. Finding effective ways to provide decoding practice for struggling readers can be a challenge for teachers. Still, this goal is essential for developing reading proficiency. The…

  10. Oppositional Decoding as an Act of Resistance.

    Science.gov (United States)

    Steiner, Linda

    1988-01-01

    Argues that contributors to the "No Comment" feature of "Ms." magazine are engaging in oppositional decoding and speculates on why this is a satisfying group process. Also notes such decoding presents another challenge to the idea that mass media has the same effect on all audiences. (SD)

  11. VLSI Architectures for WIMAX Channel Decoders

    OpenAIRE

    Martina, Maurizio; Masera, Guido

    2009-01-01

    This chapter describes the main architectures proposed in the literature to implement the channel decoders required by the WiMax standard, namely convolutional codes, turbo codes (both block and convolutional) and LDPC. Then it shows a complete design of a convolutional turbo code encoder/decoder system for WiMax.

  12. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    We consider the problem of efficient decoding of a random linear code over a finite field. In particular, we are interested in the case where the code is random and relatively sparse, and we use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially a...
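
    Over the binary field, decoding reduces to Gaussian elimination on the received coding vectors; a compact numpy sketch (matrix sizes and the single-bit payload column are illustrative):

        import numpy as np

        def gf2_decode(A, y):
            """Solve A x = y over GF(2) by Gauss-Jordan elimination.

            Rows of A are received coding vectors, y the corresponding
            coded bits; returns the source bits when A has full column rank.
            """
            A = A.astype(np.uint8) % 2
            y = y.astype(np.uint8) % 2
            m, n = A.shape
            row = 0
            for col in range(n):
                hits = np.nonzero(A[row:, col])[0]
                if hits.size == 0:
                    raise ValueError("rank deficient: need more coded packets")
                p = row + hits[0]
                A[[row, p]] = A[[p, row]]          # swap pivot row into place
                y[[row, p]] = y[[p, row]]
                for r in range(m):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]             # eliminate column entries
                        y[r] ^= y[row]
                row += 1
            return y[:n]

        A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
        x = np.array([1, 0, 1])
        print(gf2_decode(A, (A @ x) % 2))          # -> [1 0 1]

    For sparse codes, most row operations touch few nonzeros, which is the property the operation-count savings above rely on.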

  13. Characterizing uncertainty and population variability in the toxicokinetics of trichloroethylene and metabolites in mice, rats, and humans using an updated database, physiologically based pharmacokinetic (PBPK) model, and Bayesian approach

    International Nuclear Information System (INIS)

    We have developed a comprehensive, Bayesian, PBPK model-based analysis of the population toxicokinetics of trichloroethylene (TCE) and its metabolites in mice, rats, and humans, considering a wider range of physiological, chemical, in vitro, and in vivo data than any previously published analysis of TCE. The toxicokinetics of the 'population average,' its population variability, and their uncertainties are characterized in an approach that strives to be maximally transparent and objective. Estimates of experimental variability and uncertainty were also included in this analysis. The experimental database was expanded to include virtually all available in vivo toxicokinetic data, which permitted, in rats and humans, the specification of separate datasets for model calibration and evaluation. The total combination of these approaches and PBPK analysis provides substantial support for the model predictions. In addition, we feel confident that the approach employed also yields an accurate characterization of the uncertainty in metabolic pathways for which available data were sparse or relatively indirect, such as GSH conjugation and respiratory tract metabolism. Key conclusions from the model predictions include the following: (1) as expected, TCE is substantially metabolized, primarily by oxidation at doses below saturation; (2) GSH conjugation and subsequent bioactivation in humans appear to be 10- to 100-fold greater than previously estimated; and (3) mice had the greatest rate of respiratory tract oxidative metabolism as compared to rats and humans. In a situation such as TCE in which there is a large database of studies coupled with complex toxicokinetics, the Bayesian approach provides a systematic method of simultaneously estimating model parameters and characterizing their uncertainty and variability. However, care needs to be taken in its implementation to ensure biological consistency, transparency, and objectivity.

  14. Fast decoders for qudit topological codes

    International Nuclear Information System (INIS)

    Qudit toric codes are a natural higher-dimensional generalization of the well-studied qubit toric code. However, standard methods for error correction of the qubit toric code are not applicable to them. Novel decoders are needed. In this paper we introduce two renormalization group decoders for qudit codes and analyse their error correction thresholds and efficiency. The first decoder is a generalization of a ‘hard-decisions’ decoder due to Bravyi and Haah (arXiv:1112.3252). We modify this decoder to overcome a percolation effect which limits its threshold performance for many-level quantum systems. The second decoder is a generalization of a ‘soft-decisions’ decoder due to Poulin and Duclos-Cianci (2010 Phys. Rev. Lett. 104 050504), with a small cell size to optimize the efficiency of implementation in the high dimensional case. In each case, we estimate thresholds for the uncorrelated bit-flip error model and provide a comparative analysis of the performance of both these approaches to error correction of qudit toric codes. (paper)

  15. Application of RS Codes in Decoding QR Code

    Institute of Scientific and Technical Information of China (English)

    Zhu Suxia(朱素霞); Ji Zhenzhou; Cao Zhiyan

    2003-01-01

    The QR Code is a 2-dimensional matrix code with high error correction capability. It employs RS codes to generate error correction codewords in encoding and to recover from errors and damage in decoding. This paper presents several virtues of the QR Code, analyzes the RS decoding algorithm, and gives a software flow chart for decoding the QR Code with the RS decoding algorithm.
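
    As a rough illustration of the first stage of such an RS decoder (a hedged sketch, not the paper's software; it assumes the GF(2^8) field with primitive polynomial 0x11D that QR Codes use), the decoder evaluates the received polynomial at successive powers of the field generator to obtain syndromes:

    ```python
    def gf_mul(a, b, prim=0x11D):
        """Multiply in GF(2^8), reducing by the QR-code primitive polynomial."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= prim
            b >>= 1
        return r

    def gf_pow(a, n):
        r = 1
        for _ in range(n):
            r = gf_mul(r, a)
        return r

    def syndromes(received, num_ecc):
        """S_i = R(alpha^i), i = 0..num_ecc-1; all zero means no errors."""
        syn = []
        for i in range(num_ecc):
            x = gf_pow(2, i)          # alpha = 2 generates the field
            s = 0
            for c in received:        # Horner evaluation, highest degree first
                s = gf_mul(s, x) ^ c
            syn.append(s)
        return syn

    print(syndromes([0x40, 0xD2, 0x75], num_ecc=4))  # a valid codeword prints all zeros
    ```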

  16. Three phase full wave dc motor decoder

    Science.gov (United States)

    Studer, P. A. (Inventor)

    1977-01-01

    A three phase decoder for dc motors is disclosed which employs an extremely simple six transistor circuit to derive six properly phased output signals for fullwave operation of dc motors. Six decoding transistors are coupled at their base-emitter junctions across a resistor network arranged in a delta configuration. Each point of the delta configuration is coupled to one of three position sensors which sense the rotational position of the motor. A second embodiment of the invention is disclosed in which photo-optical isolators are used in place of the decoding transistors.

  17. The Bayesian Bootstrap

    OpenAIRE

    Rubin, Donald B.

    1981-01-01

    The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...
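
    A minimal sketch of the method (illustrative, not Rubin's code): instead of resampling the data with replacement, each posterior draw assigns the observations flat-Dirichlet weights and recomputes the statistic as a weighted estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=50)

    draws = 4000
    # one flat Dirichlet weight vector per posterior draw, shape (draws, n)
    weights = rng.dirichlet(np.ones(len(data)), size=draws)
    posterior_mean = weights @ data        # weighted mean under each draw

    print("posterior of the mean: %.2f +/- %.2f"
          % (posterior_mean.mean(), posterior_mean.std()))
    ```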

  18. An Encoder/Decoder Scheme of OCDMA Based on Waveguide

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new encoder/decoder scheme for OCDMA based on waveguides is proposed in this paper. The principle as well as the structure of the waveguide encoder/decoder is given. It is shown that an all-optical OCDMA encoder/decoder can be realized by the proposed waveguide scheme, which also allows the OCDMA encoder/decoder to be integrated easily and access to be controlled easily. A system based on this scheme can work under entirely asynchronous conditions.

  19. Algebraic Soft-Decision Decoding of Hermitian Codes

    OpenAIRE

    Lee, Kwankyu; O'Sullivan, Michael E.

    2008-01-01

    An algebraic soft-decision decoder for Hermitian codes is presented. We apply Koetter and Vardy's soft-decision decoding framework, now well established for Reed-Solomon codes, to Hermitian codes. First we provide an algebraic foundation for soft-decision decoding. Then we present an interpolation algorithm finding the Q-polynomial that plays a key role in the decoding. With some simulation results, we compare performances of the algebraic soft-decision decoders for Hermitian codes and Reed-S...

  20. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  1. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
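
    To make the idea concrete, here is a generic random-walk Metropolis sketch (not from the book; the toy model and tuning constants are illustrative) that draws posterior samples when only the unnormalized log-posterior is available:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(2.0, 1.0, size=30)

    def log_post(mu):
        # unnormalized log-posterior: normal likelihood, flat prior on mu
        return -0.5 * np.sum((data - mu) ** 2)

    samples, mu = [], 0.0
    for _ in range(10000):
        prop = mu + rng.normal(scale=0.5)              # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(mu):
            mu = prop                                  # accept
        samples.append(mu)

    burned = np.array(samples[2000:])                  # discard burn-in
    print("posterior mean %.3f, sd %.3f" % (burned.mean(), burned.std()))
    ```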

  2. Low-Power Bitstream-Residual Decoder for H.264/AVC Baseline Profile Decoding

    Directory of Open Access Journals (Sweden)

    Xu Ke

    2009-01-01

    We present the design and VLSI implementation of a novel low-power bitstream-residual decoder for the H.264/AVC baseline profile. It comprises a syntax parser, a parameter decoder, and an Inverse Quantization Inverse Transform (IQIT) decoder. The syntax parser detects and decodes each incoming codeword in the bitstream under the control of a hierarchical Finite State Machine (FSM); the IQIT decoder performs inverse transform and quantization with pipelining and parallelism. Various power reduction techniques, such as data-driven operation based on statistical results, nonuniform partition, precomputation, guarded evaluation, hierarchical FSM decomposition, the TAG method, zero-block skipping, and clock gating, are adopted and integrated throughout the bitstream-residual decoder. With this innovative architecture, the proposed design is able to decode QCIF video sequences at 30 fps at a clock rate as low as 1.5 MHz. A prototype H.264/AVC baseline decoding chip utilizing the proposed decoder was fabricated in UMC 0.18 µm 1P6M CMOS technology. The proposed design was measured under supply voltages from 1 V to 1.8 V in 0.1 V steps. It dissipates 76 µW at 1 V and 253 µW at 1.8 V.

  3. Low Power Decoding of LDPC Codes

    OpenAIRE

    Mohamed Ismail; Imran Ahmed; Justin Coon

    2013-01-01

    Wireless sensor networks are used in many diverse application scenarios that require the network designer to trade off different factors. Two such factors of importance in many wireless sensor networks are communication reliability and battery life. This paper describes an efficient, low complexity, high throughput channel decoder suited to decoding low-density parity-check (LDPC) codes. LDPC codes have demonstrated excellent error-correcting ability such that a number of recent wireless stan...
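
    For a flavor of what a very low complexity LDPC decoder can look like, here is a hedged sketch of hard-decision bit-flipping decoding, a classic low-power alternative to belief propagation (the tiny parity-check matrix is a toy stand-in, not one of the codes considered in the paper):

    ```python
    import numpy as np

    def bit_flip_decode(H, r, max_iters=50):
        """Flip, each iteration, the bit involved in the most failed checks."""
        r = r.copy()
        for _ in range(max_iters):
            syndrome = H @ r % 2
            if not syndrome.any():
                return r                    # all parity checks satisfied
            votes = syndrome @ H            # failed-check count per bit
            r[np.argmax(votes)] ^= 1
        return r

    # toy (7,4) parity-check matrix standing in for a real sparse LDPC matrix
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    received = np.zeros(7, dtype=int)
    received[2] ^= 1                        # inject a single bit error
    print(bit_flip_decode(H, received))     # recovers the all-zero codeword
    ```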

  4. Simplified decoding techniques for linear block codes

    OpenAIRE

    Srivastava, Shraddha

    2013-01-01

    Error correcting codes are combinatorial objects, designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage etc. Error correction allows reconstruction of the original data from received word. The classical decoding algorithms are constrained to output just one codeword. However, in the late 50’s researchers proposed a relaxed error correction model for potentially large error rates known as list decoding. The resea...

  5. Encoding and decoding of femtosecond pulses.

    Science.gov (United States)

    Weiner, A M; Heritage, J P; Salehi, J A

    1988-04-01

    We demonstrate the spreading of femtosecond optical pulses into picosecond-duration pseudonoise bursts. Spreading is accomplished by encoding pseudorandom binary phase codes onto the optical frequency spectrum. Subsequent decoding of the spectral phases restores the original pulse. We propose that frequency-domain encoding and decoding of coherent ultrashort pulses could form the basis for a rapidly reconfigurable, code-division multiple-access optical telecommunications network. PMID:19745879

  6. On Fuzzy Bayesian Inference

    OpenAIRE

    Frühwirth-Schnatter, Sylvia

    1990-01-01

    In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we discuss a fuzzy valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes' estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)

  7. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  8. Sphere decoding complexity exponent for decoding full rate codes over the quasi-static MIMO channel

    CERN Document Server

    Jalden, Joakim

    2011-01-01

    In the setting of quasi-static multiple-input multiple-output (MIMO) channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic division algebra (CDA) based codes -- the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity exponent is shown to take a particularly...

  9. On the decoder error probability for Reed-Solomon codes

    OpenAIRE

    McEliece, Robert J.; Swanson, Laif

    1986-01-01

    Upper bounds on the decoder error probability for Reed-Solomon codes are derived. By definition, "decoder error" occurs when the decoder finds a codeword other than the transmitted codeword; this is in contrast to "decoder failure," which occurs when the decoder fails to find any codeword at all. These results imply, for example, that for a t error-correcting Reed-Solomon code of length q - 1 over GF(q), if more than t errors occur, the probability of decoder error is less than 1/t!.

  10. Exponential Lower Bound for 2-Query Locally Decodable Codes

    CERN Document Server

    Kerenidis, I; Kerenidis, Iordanis; Wolf, Ronald de

    2002-01-01

    We prove exponential lower bounds on the length of 2-query locally decodable codes. Goldreich et al. recently proved such bounds for the special case of linear locally decodable codes. Our proof shows that a 2-query locally decodable code can be decoded with only 1 quantum query, and then proves an exponential lower bound for such 1-query locally quantum-decodable codes. We also exhibit q-query locally quantum-decodable codes that are much shorter than the best known q-query classical codes. Finally, we give some new lower bounds for (not necessarily linear) private information retrieval systems.
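
    To see why two queries suffice at all, consider the classic 2-query locally decodable code, the Hadamard code (a standard textbook construction, sketched here for illustration; it is exponentially long, which is exactly the behavior the lower bound shows to be necessary):

    ```python
    import random

    def encode(x):
        """Hadamard code: one parity <x, a> for every a in {0,1}^n."""
        n = len(x)
        return [sum(x[j] & (a >> j) & 1 for j in range(n)) % 2
                for a in range(2 ** n)]

    def decode_bit(codeword, n, i):
        """Recover x_i from only two codeword positions: a and a XOR e_i."""
        a = random.randrange(2 ** n)
        return codeword[a] ^ codeword[a ^ (1 << i)]

    x = [1, 0, 1, 1]
    cw = encode(x)
    for pos in random.sample(range(len(cw)), 2):   # corrupt a small fraction
        cw[pos] ^= 1
    print([decode_bit(cw, 4, i) for i in range(4)])  # usually equals x
    ```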

  11. SLUG -- Stochastically Lighting Up Galaxies. III: A Suite of Tools for Simulated Photometry, Spectroscopy, and Bayesian Inference with Stochastic Stellar Populations

    CERN Document Server

    Krumholz, Mark R; da Silva, Robert L; Rendahl, Theodore; Parra, Jonathan

    2015-01-01

    Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also requires consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include: a greatly-enhanced version of the slug code introduced by da Silva et al. (2012), which computes spectra and photometry for stochastically- or deterministically-sampled stellar populations with nearly-arbitrary star formation histories, clustering properties, and initial mass functions; cloudy_slug, a tool that automatically couples slug-computed spectra with the cloudy r...

  12. Word-decoding as a function of temporal processing in the visual system.

    Directory of Open Access Journals (Sweden)

    Steven R Holloway

    This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits with a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that consisted of Landolt C targets randomly presented in four cardinal orientations, at 3 radial distances from a focus point, for eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.

  13. Phonological or orthographic training for children with phonological or orthographic decoding deficits.

    Science.gov (United States)

    Gustafson, Stefan; Ferreira, Janna; Rönnberg, Jerker

    2007-08-01

    In a longitudinal intervention study, Swedish reading disabled children in grades 2-3 received either a phonological (n = 41) or an orthographic (n = 39) training program. Both programs were computerized and interventions took place in ordinary school settings with trained special instruction teachers. Two comparison groups, ordinary special instruction and normal readers, were also included in the study. Results showed strong average training effects on text reading and general word decoding for both phonological and orthographic training, but not significantly higher improvements than for the comparison groups. The main research finding was a double dissociation: children with pronounced phonological problems improved their general word decoding skill more from phonological than from orthographic training, whereas the opposite was observed for children with pronounced orthographic problems. Thus, in this population of children, training should focus on children's relative weakness rather than their relative strength in word decoding. PMID:17624906

  14. VHDL Modelling of Reed Solomon Decoder

    Directory of Open Access Journals (Sweden)

    Zi-Yi Lam

    2012-12-01

    In digital communication systems, both random and burst errors may occur in the transmission channel. As a result, the signal will be distorted at the receiver. Error correction coding is required to eliminate such errors. In this study, a Reed Solomon (255, 191) error correction code is modelled to detect and correct data transmitted over a noisy channel. The Reed Solomon (RS) codec is a powerful error correction tool used to ensure error correction in digital communication systems. However, the RS codec is computationally intensive, and a custom design is required for different digital systems. Modelling the RS decoder in the Very High Speed Hardware Description Language (VHDL) makes it suitable for implementation on a Field Programmable Gate Array (FPGA) based coprocessor. The flexibility of FPGAs in hardware reconfiguration greatly reduces the development time for RS decoders in all kinds of specialized circuit designs. The arithmetic operations used in the RS code are Galois field (GF) addition and multiplication. This study presents: (i) an RS encoder modelled in MATLAB, with data encoded over the noisy channel for functional verification; (ii) an RS decoder modelled in VHDL to recover the erroneous data. The RS decoder has been successfully simplified to only four sub-modules in order to reduce the FPGA's resource utilization. The VHDL-modelled RS (255, 191) decoder is capable of detecting and correcting 32 symbol errors. It can be added to the VHDL designer library for future system designs.
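
    As a rough illustration of the two Galois-field operations the decoder needs (a sketch assuming the common 0x11D field, shown in Python rather than the paper's VHDL), addition reduces to XOR while multiplication can use exp/log lookup tables, which is also how table-driven hardware implementations typically realize it:

    ```python
    # build exp/log tables for GF(2^8) with primitive polynomial 0x11D
    EXP, LOG = [0] * 512, [0] * 256
    v = 1
    for i in range(255):
        EXP[i], LOG[v] = v, i
        v <<= 1
        if v & 0x100:
            v ^= 0x11D
    for i in range(255, 512):
        EXP[i] = EXP[i - 255]       # duplicated so products need no modulo

    def gf_add(a, b):
        return a ^ b                # GF(2^m) addition is plain XOR

    def gf_mul(a, b):
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]] # two lookups and an integer add

    assert gf_mul(0x53, 0xCA) == gf_mul(0xCA, 0x53)
    ```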

  15. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  16. Informed Network Coding for Minimum Decoding Delay

    CERN Document Server

    Costa, Rui A; Widmer, Joerg; Barros, Joao

    2008-01-01

    Network coding is a highly efficient data dissemination mechanism for wireless networks. Since network coded information can only be recovered after delivering a sufficient number of coded packets, the resulting decoding delay can become problematic for delay-sensitive applications such as real-time media streaming. Motivated by this observation, we consider several algorithms that minimize the decoding delay and analyze their performance by means of simulation. The algorithms differ both in the required information about the state of the neighbors' buffers and in the way this knowledge is used to decide which packets to combine through coding operations. Our results show that a greedy algorithm, whose encodings maximize the number of nodes at which a coded packet is immediately decodable significantly outperforms existing network coding protocols.
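
    A hedged sketch of the greedy selection idea (illustrative Python; the data structures and names are hypothetical, not from the paper): a candidate XOR combination is instantly decodable at a node iff the node is missing exactly one of the combined packets, and the sender picks the combination that helps the most neighbors:

    ```python
    from itertools import combinations

    def instantly_decodable_at(combo, has):
        """A node can decode the XOR iff it already holds all but one packet."""
        missing = [p for p in combo if p not in has]
        return len(missing) == 1

    def greedy_choice(packets, neighbor_has, max_size=3):
        best, best_score = None, -1
        for k in range(1, max_size + 1):
            for combo in combinations(packets, k):
                score = sum(instantly_decodable_at(combo, h)
                            for h in neighbor_has)
                if score > best_score:
                    best, best_score = combo, score
        return best, best_score

    packets = ["p1", "p2", "p3"]
    neighbor_has = [{"p1"}, {"p2"}, {"p1", "p2"}]
    print(greedy_choice(packets, neighbor_has))  # -> (('p3',), 3): all are missing p3
    ```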

  17. Online Testable Decoder using Reversible Logic

    Directory of Open Access Journals (Sweden)

    Hemalatha K. N.; Manjula B. B.; Girija S.

    2012-02-01

    The project proposes to design and test a 2-to-4 reversible decoder circuit with an arbitrary number of gates as an online testable reversible one, independent of the type of reversible gate used. The constructed circuit can detect any single-bit error, and the approach can convert a decoder circuit designed with reversible gates into an online testable reversible decoder circuit. Conventional digital circuits dissipate a significant amount of energy because bits of information are erased during logic operations. Thus, if logic gates are designed such that the information bits are not destroyed, the power consumption can be reduced. Information bits are not lost in a reversible computation. Reversible logic can be used to implement any Boolean logic function.

  18. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  19. Neuroprosthetic Decoder Training as Imitation Learning.

    Science.gov (United States)

    Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P

    2016-05-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
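
    As a loose illustration of this imitation-learning framing (a schematic sketch, not the authors' algorithm; the rollout, oracle, and linear refit below are hypothetical stand-ins), a DAgger-style loop aggregates oracle-labeled states across rounds and refits the decoder on the growing dataset:

    ```python
    import numpy as np

    def dagger_train(rollout, oracle_action, rounds=10):
        """DAgger-style loop: roll out the current decoder, label the visited
        states with the oracle's intended actions, and refit on everything."""
        states, actions, policy = [], [], None
        for _ in range(rounds):
            visited = rollout(policy)          # states reached under current decoder
            states.extend(visited)
            actions.extend(oracle_action(s) for s in visited)  # expert labels
            X, y = np.array(states), np.array(actions)
            w = np.linalg.lstsq(X, y, rcond=None)[0]   # refit a linear decoder
            policy = lambda s, w=w: s @ w
        return policy

    # Toy stand-ins: 2-D neural features, an oracle that pushes toward a target.
    rng = np.random.default_rng(0)
    rollout = lambda pi: list(rng.normal(size=(20, 2)))   # ignores pi in this toy
    oracle_action = lambda s: -0.5 * s[0]                 # hypothetical intention signal
    decoder = dagger_train(rollout, oracle_action)
    print(decoder(np.array([1.0, 0.0])))                  # approximately -0.5
    ```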

  20. Neuroprosthetic Decoder Training as Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2016-05-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.

  1. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  2. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  3. Power Decoding of Reed-Solomon Codes Revisited

    OpenAIRE

    Nielsen, Johan S. R.

    2013-01-01

    Power decoding, or "decoding by virtual interleaving", of Reed-Solomon codes is a method for unique decoding beyond half the minimum distance. We give a new variant of the Power decoding scheme, building upon the key equation of Gao. We show various interesting properties such as behavioural equivalence to the classical scheme using syndromes, as well as a new bound on the failure probability when the powering degree is 3.

  4. Generalized Sudan's List Decoding for Order Domain Codes

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh

    We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than for the original list decoding without multiplicity.

  5. Bayesian exploratory factor analysis

    OpenAIRE

    Gabriella Conti; Sylvia Frühwirth-Schnatter; James Heckman; Rémi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identifi cation criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study c...

  6. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  7. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Gabriella Conti; Sylvia Fruehwirth-Schnatter; Heckman, James J.; Remi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo s...

  8. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo st...

  9. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  10. Bayesian default probability models

    OpenAIRE

    Andrlíková, Petra

    2014-01-01

    This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...

  11. LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing

    CERN Document Server

    Dimakis, Alexandros G

    2009-01-01

    This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.

  12. Real-Time Reed-Solomon Decoder

    Science.gov (United States)

    Maki, Gary K.; Cameron, Kelly B.; Owsley, Patrick A.

    1994-01-01

    Generic Reed-Solomon decoder fast enough to correct errors in real time in practical applications, designed to be implemented in fewer and smaller very-large-scale integrated (VLSI) circuit chips. Configured to operate in pipelined manner. One outstanding aspect of the decoder design is that the Euclid multiplier and divider modules contain Galois-field multipliers configured as combinational-logic cells. These operate at greater speeds than older multipliers. The cellular configuration is highly regular and requires little interconnection area, making it ideal for implementation in extraordinarily dense VLSI circuitry. A single-chip flight-electronics version of this technology has been implemented and is available.

  13. Bounds on the Threshold of Linear Programming Decoding

    OpenAIRE

    Vontobel, Pascal O.; Koetter, Ralf

    2006-01-01

    Whereas many results are known about thresholds for ensembles of low-density parity-check codes under message-passing iterative decoding, this is not the case for linear programming decoding. Towards closing this knowledge gap, this paper presents some bounds on the thresholds of low-density parity-check code ensembles under linear programming decoding.

  14. Dose-effect relationships of intravenous enoxaparin in patients undergoing percutaneous coronary intervention: a population and Bayesian approach based study

    Institute of Scientific and Technical Information of China (English)

    Paola SANCHEZ-PENA; Jean-Sebastien HULOT; Salk URIEN; Annick ANKRI; Gilles MONTALESCOT; Philippe LECHAT

    2004-01-01

    AIM: Recent studies have suggested that intravenous enoxaparin can be used as an alternative therapy in patients undergoing percutaneous coronary intervention (PCI); yet the optimal regimen is to be defined. METHODS: Anti-Xa activities were measured in 556 patients who received a single 0.5 mg/kg dose of enoxaparin intravenously immediately before PCI. A population pharmacoki-

  15. Inferring the origin of populations introduced from a genetically structured native range by approximate Bayesian computation: case study of the invasive ladybird Harmonia axyridis

    NARCIS (Netherlands)

    Lombaert, E.; Guillemaud, T.; Thomas, C.E.; Handley, L.J.L.; Li, J.; Wang, S.; Pang, H.; Goryacheva, I.; Zakharov, I.A.; Jousselin, E.; Poland, R.L.; Migeon, A.; Lenteren, van J.C.; Clercq, de P.; Berkvens, N.; Jones, W.; Estoup, A.

    2011-01-01

    Correct identification of the source population of an invasive species is a prerequisite for testing hypotheses concerning the factors responsible for biological invasions. The native area of invasive species may be large, poorly known and/or genetically structured. Because the actual source populat
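
    For intuition, approximate Bayesian computation in its simplest rejection form looks like the sketch below (a toy stand-in: the one-parameter "simulator" and summary statistic are hypothetical and far simpler than the population-genetic models used in such studies):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    observed_diversity = 0.32         # summary statistic from the invaded range

    def simulate_diversity(founders):
        # hypothetical toy simulator standing in for a population-genetic model
        return 1.0 - np.exp(-founders / 20.0) + rng.normal(scale=0.02)

    accepted = []
    for _ in range(100_000):
        founders = rng.uniform(1, 100)                     # draw from the prior
        if abs(simulate_diversity(founders) - observed_diversity) < 0.01:
            accepted.append(founders)                      # keep close matches

    print("posterior median founder size: %.1f" % np.median(accepted))
    ```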

  16. The decoding of Reed-Solomon codes

    Science.gov (United States)

    McEliece, R. J.

    1988-11-01

    Reed-Solomon (RS) codes form an important part of the high-rate downlink telemetry system for the Magellan mission, and the RS decoding function for this project will be done by DSN. Although the basic idea behind all Reed-Solomon decoding algorithms was developed by Berlekamp in 1968, there are dozens of variants of Berlekamp's algorithm in current use. An attempt to restore order is made by presenting a mathematical theory which explains the working of almost all known RS decoding algorithms. The key innovation that makes this possible is the unified approach to the solution of the key equation, which simultaneously describes the Berlekamp, Berlekamp-Massey, Euclid, and continued fractions approaches. Additionally, a detailed analysis is made of what can happen to a generic RS decoding algorithm when the number of errors and erasures exceeds the code's designed correction capability, and it is shown that while most published algorithms do not detect as many of these error-erasure patterns as possible, by making a small change in the algorithms, this problem can be overcome.
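
    The flavor of these key-equation solvers can be seen in the Berlekamp-Massey recursion, sketched here in its simplest binary form (illustrative only; production RS decoders run the same recursion over GF(2^m) syndromes):

    ```python
    def berlekamp_massey(s):
        """Shortest binary LFSR (error-locator analogue) generating sequence s."""
        C, B = [1], [1]      # current / previous connection polynomials
        L, m = 0, 1
        for n in range(len(s)):
            # discrepancy: does C correctly predict s[n] from the last L bits?
            d = s[n]
            for i in range(1, L + 1):
                d ^= C[i] & s[n - i]
            if d:
                T = C[:]
                C += [0] * (len(B) + m - len(C))     # make room for x^m * B
                for i, b in enumerate(B):
                    C[i + m] ^= b
                if 2 * L <= n:
                    L, B, m = n + 1 - L, T, 1
                else:
                    m += 1
            else:
                m += 1
        return L, C

    # s[n] = s[n-1] XOR s[n-2]: expect LFSR length 2 with C(x) = 1 + x + x^2.
    print(berlekamp_massey([1, 1, 0, 1, 1, 0]))  # -> (2, [1, 1, 1])
    ```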

  17. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme for...

  18. Perceptual Learning via Decoded-EEG Neurofeedback

    NARCIS (Netherlands)

    Brandmeyer, A.; Sadakata, M.; Spyrou, L.; McQueen, J.M.; Desain, P.W.M.

    2013-01-01

    An experiment was conducted to determine whether decoding auditory evoked potentials during passive listening and providing the classifier output as a neurofeedback signal leads to the enhancement of auditory perceptual discrimination and/or brain responses related to auditory perception. Results in

  19. Performance breakdown in optimal stimulus decoding

    Czech Academy of Sciences Publication Activity Database

    Košťál, Lubomír; Lánský, Petr; Pilarski, Stevan

    2015-01-01

    Roč. 12, č. 3 (2015), 036012. ISSN 1741-2560 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : decoding accuracy * Fisher information * threshold effect Subject RIV: BD - Theory of Information Impact factor: 3.295, year: 2014

  20. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.;

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also, th...

  1. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.;

    1998-01-01

    The study has been divided into two phases. The purpose of Phase 1 of the study was to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. After selection of which specific u...

  2. Erasure information for a Reed-Solomon decoder

    Science.gov (United States)

    Pitt, G. H., III; Swanson, L.

    1985-11-01

    Many Reed-Solomon decoders, including the one decoding the outer code for Voyager data from Uranus, assume that all symbols have the same chance of being correct or incorrect. In some cases, such as in a burst of incorrect symbols, this is not the case, and a Reed-Solomon decoder could make use of this. The use of information about bit quality sent to the Reed-Solomon decoder from an (inner) Viterbi decoder is examined, as well as information about the error status of adjacent symbols in decoding interleaved Reed-Solomon encoded symbols. It is discovered that, in a region of interest, only about 0.04 dB can be gained.

  3. Algebraic Soft-Decision Decoding of Hermitian Codes

    CERN Document Server

    Lee, Kwankyu

    2008-01-01

    An algebraic soft-decision decoder for Hermitian codes is presented. We apply Koetter and Vardy's soft-decision decoding framework, now well established for Reed-Solomon codes, to Hermitian codes. First we provide an algebraic foundation for soft-decision decoding. Then we present an interpolation algorithm finding the Q-polynomial that plays a key role in the decoding. With some simulation results, we compare performances of the algebraic soft-decision decoders for Hermitian codes and Reed-Solomon codes, favorable to the former.

  4. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.

  5. On the decode error probability for Reed-Solomon codes

    Science.gov (United States)

    McEliece, R. J.; Swanson, L.

    1986-02-01

    Upper bounds on the decoder error probability for Reed-Solomon codes are derived. By definition, decoder error occurs when the decoder finds a codeword other than the transmitted codeword; this is in contrast to decoder failure, which occurs when the decoder fails to find any codeword at all. The results imply, for example, that for a t error-correcting Reed-Solomon code of length q - 1 over GF(q), if more than t errors occur, the probability of decoder error is less than 1/t!. In particular, for the Voyager Reed-Solomon code, the probability of decoder error given a word error is smaller than 3 x 10 to the minus 14th power. Thus, in a typical operating region with a word error probability of 1/100,000, the probability of undetected word error is about 10 to the minus 14th power.

  6. Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation

    Directory of Open Access Journals (Sweden)

    Thierry Lestable

    2008-05-01

    This paper presents a new approach to decoding turbo codes using a nonbinary belief propagation decoder. The proposed approach can be decomposed into two main steps. First, a nonbinary Tanner graph representation of the turbo code is derived by clustering the binary parity-check matrix of the turbo code. Then, a group belief propagation decoder runs several iterations on the obtained nonbinary Tanner graph. We show in particular that it is necessary to add a preprocessing step on the parity-check matrix of the turbo code in order to ensure good topological properties of the Tanner graph and thus good iterative decoding performance. Finally, by capitalizing on the diversity which comes from the existence of distinct efficient preprocessings, we propose a new decoding strategy, called decoder diversity, that aims to benefit from this diversity through collaborative decoding schemes.

  7. Decoding subjective mental states from fMRI activity patterns

    International Nuclear Information System (INIS)

    In recent years, functional magnetic resonance imaging (fMRI) decoding has emerged as a powerful tool to read out detailed stimulus features from multi-voxel brain activity patterns. Moreover, the method has been extended to perform a primitive form of 'mind-reading,' by applying a decoder 'objectively' trained using stimulus features to more 'subjective' conditions. In this paper, we first introduce basic procedures for fMRI decoding based on machine learning techniques. Second, we discuss the source of information used for decoding, in particular, the possibility of extracting information from subvoxel neural structures. We next introduce two experimental designs for decoding subjective mental states: the 'objective-to-subjective design' and the 'subjective-to-subjective design.' Then, we illustrate recent studies on the decoding of a variety of mental states, such as, attention, awareness, decision making, memory, and mental imagery. Finally, we discuss the challenges and new directions of fMRI decoding. (author)

  8. Populism

    OpenAIRE

    Abts, Koenraad; van Kessel, Stijn

    2015-01-01

    Populism is a concept applied to a wide range of political movements and actors across the globe. There is, at the same time, considerable confusion about the attributes and manifestation of populism, as well as its impact on democracy. This contribution identifies the defining elements of the populist ideology and discusses the varieties in which populism manifests itself, for instance as a component of certain party families. We finally discuss various normative interpretations of populism,...

  9. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  10. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  11. Bayesian Adaptive Exploration

    CERN Document Server

    Loredo, T J

    2004-01-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative "maximum entropy sampling" strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two "toy" problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational eff...

  12. iBOA: The Incremental Bayesian Optimization Algorithm

    CERN Document Server

    Pelikan, Martin; Goldberg, David E

    2008-01-01

    This paper proposes the incremental Bayesian optimization algorithm (iBOA), which modifies standard BOA by removing the population of solutions and using incremental updates of the Bayesian network. iBOA is shown to be able to learn and exploit unrestricted Bayesian networks using incremental techniques for updating both the structure as well as the parameters of the probabilistic model. This represents an important step toward the design of competent incremental estimation of distribution algorithms that can solve difficult nearly decomposable problems scalably and reliably.

  13. Bayesian and frequentist inequality tests

    OpenAIRE

    David M. Kaplan; Zhuo, Longhao

    2016-01-01

    Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size α; if the null hypothesis is any other convex subspace, then the Bayesian test...

  14. Bayesian multiple target tracking

    CERN Document Server

    Streit, Roy L

    2013-01-01

    This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements

  15. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  16. A Bayesian Approach to Identifying New Risk Factors for Dementia

    OpenAIRE

    Wen, Yen-Hsia; Wu, Shihn-Sheng; Lin, Chun-Hung Richard; Tsai, Jui-Hsiu; Yang, Pinchen; Chang, Yang-Pei; Tseng, Kuan-Hua

    2016-01-01

    Dementia is one of the most disabling and burdensome health conditions worldwide. In this study, we identified new potential risk factors for dementia from nationwide longitudinal population-based data by using Bayesian statistics. We first tested the consistency of the results obtained using Bayesian statistics with those obtained using classical frequentist probability for 4 recognized risk factors for dementia, namely severe head injury, depression, diabetes mellitus, and vascular...

  17. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  18. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  19. Neural decoding of visual imagery during sleep.

    Science.gov (United States)

    Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y

    2013-05-01

    Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement. PMID:23558170

  20. Encoding and decoding time in neural development.

    Science.gov (United States)

    Toma, Kenichi; Wang, Tien-Cheng; Hanashima, Carina

    2016-01-01

    The development of a multicellular organism involves time-dependent changes in molecular and cellular states; therefore 'time' is an indispensable mathematical parameter of ontogenesis. Despite this inextricable relationship, there are only a limited number of events for which the output of developmental phenomena primarily uses temporal cues that are generated through multilevel interactions between molecules, cells, and tissues. In this review, we focus on neural stem cells, which serve as a faithful decoder of temporal cues to transmit biological information and generate specific output in the developing nervous system. We further explore the identity of the temporal information that is encoded in neural development, and how this information is decoded into various cellular fate decisions. PMID:26748623

  1. Hardware Implementation of Serially Concatenated PPM Decoder

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in
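
    The log-domain trick mentioned above (multiplications becoming additions) rests on the max* (Jacobian logarithm) operation. A minimal numpy sketch of a log-domain forward (alpha) recursion over a toy two-state trellis, with invented branch metrics:

        import numpy as np

        def max_star(a, b):
            """Jacobian logarithm: log(exp(a) + exp(b)) computed stably.
            In the log domain, products of probabilities become sums, and
            sums become max*() operations, which keeps BCJR-style SISO
            modules cheap and robust to fixed-point rounding."""
            return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

        # Toy forward recursion over a 2-state trellis (branch values invented):
        # alpha[k+1][s'] = max*_{s -> s'} (alpha[k][s] + gamma[k][s, s'])
        log_alpha = np.array([0.0, -np.inf])          # start in state 0
        log_gamma = np.log(np.array([[0.7, 0.3],
                                     [0.4, 0.6]]))
        for _ in range(5):
            log_alpha = np.array([
                max_star(log_alpha[0] + log_gamma[0, s], log_alpha[1] + log_gamma[1, s])
                for s in (0, 1)
            ])
        print(np.exp(log_alpha) / np.exp(log_alpha).sum())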

  2. Olfactory Decoding Method Using Neural Spike Signals

    Institute of Scientific and Technical Information of China (English)

    Kyung-jin YOU; Hyun-chool SHIN

    2010-01-01

    This paper presents a novel method for inferring the odor based on neural activities observed from rats' main olfactory bulbs. Multi-channel extracellular single unit recordings are done by microwire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of the anesthetized rats to obtain neural responses to various odors. Neural responses as a key feature are measured by subtracting the firing rates before stimulus from those after. For odor inference, a decoding method is developed based on the ML estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, and 80.0% with three rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.
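
    A minimal sketch of template-based ML decoding in the spirit of the method described (the response templates, Gaussian noise model, and all numbers here are invented):

        import numpy as np

        # Hypothetical ML decoding of odor identity from per-channel
        # firing-rate changes (rate after stimulus minus rate before).
        rng = np.random.default_rng(0)
        n_channels, n_odors = 32, 4
        templates = rng.normal(size=(n_odors, n_channels))   # mean responses
        sigma = 0.5

        def decode_odor(delta_rate):
            # ML under iid Gaussian noise = nearest template in Euclidean distance
            d2 = ((templates - delta_rate) ** 2).sum(axis=1)
            return int(np.argmin(d2))

        true_odor = 2
        obs = templates[true_odor] + rng.normal(scale=sigma, size=n_channels)
        print("decoded odor:", decode_odor(obs))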

  3. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...

  4. Bayesian Filters in Practice

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.

    Bratislava: Slovak University of Technology in Bratislava, 2010, s. 217-222. ISBN 978-80-227-3353-3. [Robotics in Education . Bratislava (SK), 16.09.2010-17.09.2010] Institutional research plan: CEZ:AV0Z20760514 Keywords : mobile robot localization * bearing only beacons * Bayesian filters Subject RIV: JD - Computer Applications, Robotics

  5. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimenta...

  6. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  7. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution as a...

  8. Bayesian Adaptive Exploration

    Science.gov (United States)

    Loredo, Thomas J.

    2004-04-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative "maximum entropy sampling" strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two "toy" problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
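
    For a Bayesian linear model with Gaussian noise, maximum entropy sampling reduces to picking the candidate observation with the largest posterior predictive variance. A minimal sketch with an invented quadratic feature map and invented variances:

        import numpy as np

        rng = np.random.default_rng(3)
        phi = lambda x: np.array([1.0, x, x ** 2])   # assumed feature map
        sigma2, tau2 = 0.1, 10.0                     # noise / prior variance

        X_obs = np.array([-1.0, 0.2])                # already observed inputs
        Phi = np.stack([phi(x) for x in X_obs])
        # Posterior covariance of the weights: (Phi^T Phi / sigma2 + I / tau2)^-1
        S = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(3) / tau2)

        candidates = np.linspace(-2, 2, 41)
        pred_var = [phi(x) @ S @ phi(x) + sigma2 for x in candidates]
        print("next design point:", candidates[int(np.argmax(pred_var))])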

  9. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  10. Approximate Bayesian inference for complex ecosystems

    OpenAIRE

    Michael P H Stumpf

    2014-01-01

    Mathematical models have been central to ecology for nearly a century. Simple models of population dynamics have allowed us to understand fundamental aspects underlying the dynamics and stability of ecological systems. What has remained a challenge, however, is to meaningfully interpret experimental or observational data in light of mathematical models. Here, we review recent developments, notably in the growing field of approximate Bayesian computation (ABC), that allow us to calibrate mathe...

  11. BEAST: Bayesian evolutionary analysis by sampling trees

    OpenAIRE

    Drummond Alexei J; Rambaut Andrew

    2007-01-01

    Abstract Background The evolutionary analysis of molecular sequence variation is a statistical enterprise. This is reflected in the increased use of probabilistic models for phylogenetic inference, multiple sequence alignment, and molecular population genetics. Here we present BEAST: a fast, flexible software architecture for Bayesian analysis of molecular sequences related by an evolutionary tree. A large number of popular stochastic models of sequence evolution are provided and tree-based m...

  12. BEAST: Bayesian evolutionary analysis by sampling trees

    OpenAIRE

    Drummond, Alexei J.; Rambaut, Andrew

    2007-01-01

    Background: The evolutionary analysis of molecular sequence variation is a statistical enterprise. This is reflected in the increased use of probabilistic models for phylogenetic inference, multiple sequence alignment, and molecular population genetics. Here we present BEAST: a fast, flexible software architecture for Bayesian analysis of molecular sequences related by an evolutionary tree. A large number of popular stochastic models of sequence evolution are provided and tree-based models su...

  13. Decoding perceptual thresholds from MEG/EEG

    OpenAIRE

    Bekhti, Yousra; Zilber, Nicolas; Pedregosa, Fabian; Ciuciu, Philippe; van Wassenhove, Virginie; Gramfort, Alexandre

    2014-01-01

    Magnetoencephalography (MEG) can map brain activity by recording the electromagnetic fields generated by the electrical currents in the brain during a perceptual or cognitive task. This technique offers a very high temporal resolution that allows noninvasive brain exploration at a millisecond (ms) time scale. Decoding, a.k.a. brain reading, consists in predicting from neuroimaging data the subject's behavior and/or the parameters of the perceived stimuli. This is facilitated by the use of sup...

  14. Unsupervised learning of facial emotion decoding skills

    OpenAIRE

    Jan Oliver Huelle; Benjamin eSack; Katja eBroer; Irina eKomlewa; Silke eAnders

    2014-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possib...

  15. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a...... reduction of the problem of factoring the Q-polynomial to the problem of factoring a univariate polynomial over a large finite field....

  16. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR show an interesting trend that leaves room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.
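
    A sketch of the kind of biased path metric a Fano/stack detector ranks nodes by, after a QR decomposition of the channel (BPSK and all settings here are invented for brevity):

        import numpy as np

        # After H = Q R, detecting layer-by-layer from the last row of R upward,
        # a partial candidate for layers k..n-1 is ranked by
        #     ||z_{k:} - R_{k:,k:} s||^2 - bias * depth.
        # Larger bias makes deep partial paths look better, so the search
        # commits earlier and visits fewer nodes, at some performance cost.
        rng = np.random.default_rng(7)
        n = 4
        H = rng.normal(size=(n, n))
        s = rng.choice([-1.0, 1.0], size=n)        # BPSK symbols, for simplicity
        y = H @ s + 0.1 * rng.normal(size=n)
        Q, R = np.linalg.qr(H)
        z = Q.T @ y

        def partial_metric(cand, bias):
            k = n - len(cand)                      # cand fills layers k..n-1
            resid = z[k:] - R[k:, k:] @ np.asarray(cand)
            return float(resid @ resid) - bias * len(cand)

        print(partial_metric(list(s[-2:]), bias=0.0),
              partial_metric(list(s[-2:]), bias=1.0))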

  17. Kernel Temporal Differences for Neural Decoding

    Directory of Open Access Journals (Sweden)

    Jihye Bae

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD(λ)) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.
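
    A minimal kernel TD(0) sketch in the spirit of KTD: the value function is a growing sum of Gaussian kernel units, and each TD error adds a new unit centred at the visited state. All parameters are invented, and the policy-improvement parts of the paper are omitted:

        import numpy as np

        gamma, eta, width = 0.9, 0.3, 0.5
        centers, coeffs = [], []

        def value(x):
            if not centers:
                return 0.0
            c = np.array(centers)
            k = np.exp(-((c - x) ** 2).sum(axis=1) / (2 * width ** 2))
            return float(np.dot(coeffs, k))

        def td_update(x, reward, x_next):
            delta = reward + gamma * value(x_next) - value(x)   # TD error
            centers.append(np.atleast_1d(x).astype(float))      # new kernel unit
            coeffs.append(eta * delta)

        # Toy chain: states 0..4, reward 1 on reaching state 4.
        for _ in range(200):
            s = 0
            while s < 4:
                s_next = s + 1
                td_update(np.array([s]), 1.0 if s_next == 4 else 0.0, np.array([s_next]))
                s = s_next
        print([round(value(np.array([s])), 2) for s in range(5)])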

  18. Decline in Mentalising Ability with Healthy Aging: Evidence from Mental State Decoding and Reasoning Tasks

    OpenAIRE

    Koegel, LaKrista, M.

    2010-01-01

    The primary aim of this study was to determine whether individuals constituting an older population would display deficits in tasks assessing aspects of ToM compared to younger participants; secondary goals were to evaluate whether older participants would be differentially affected on cognitive versus affective aspects of the tasks and/or differentially affected on mental state decoding versus reasoning tasks, as well as to asses the role that executive functioning has on thes...

  19. Hardware Implementation of Successive Cancellation Decoders for Polar Codes

    CERN Document Server

    Leroux, Camille; Sarkis, Gabi; Tal, Ido; Vardy, Alexander; Gross, Warren J

    2011-01-01

    The recently-discovered polar codes are seen as a major breakthrough in coding theory; they provably achieve the theoretical capacity of discrete memoryless channels using the low complexity successive cancellation (SC) decoding algorithm. Motivated by recent developments in polar coding theory, we propose a family of efficient hardware implementations for SC polar decoders. We show that such decoders can be implemented with O(n) processing elements, O(n) memory elements, and can provide a constant throughput for a given target clock frequency. Furthermore, we show that SC decoding can be implemented in the logarithm domain, thereby eliminating costly multiplication and division operations and reducing the complexity of each processing element greatly. We also present a detailed architecture for an SC decoder and provide logic synthesis results confirming the linear growth in complexity of the decoder as the code length increases.
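
    A compact software sketch of successive cancellation decoding in the LLR domain for a toy length-8 polar code, using the common min-sum approximation of the f function (the frozen-bit pattern is the usual choice for n = 8 but is assumed here rather than derived):

        import numpy as np

        def f(a, b):                  # check-node style combine (min-sum)
            return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

        def g(a, b, u):               # variable-node style combine
            return b + (1 - 2 * u) * a

        def sc_decode(llr, frozen):
            n = len(llr)
            if n == 1:
                u = 0 if frozen[0] else int(llr[0] < 0)
                return np.array([u]), np.array([u])
            a, b = llr[: n // 2], llr[n // 2 :]
            u_left, x_left = sc_decode(f(a, b), frozen[: n // 2])
            u_right, x_right = sc_decode(g(a, b, x_left), frozen[n // 2 :])
            return (np.concatenate([u_left, u_right]),
                    np.concatenate([x_left ^ x_right, x_right]))

        def polar_transform(u):       # matching recursive encoder
            n = len(u)
            if n == 1:
                return u
            return np.concatenate([polar_transform(u[: n // 2] ^ u[n // 2 :]),
                                   polar_transform(u[n // 2 :])])

        frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)  # (8,4) toy code
        u = np.zeros(8, dtype=int); u[~frozen] = [1, 0, 1, 1]    # message bits
        x = polar_transform(u)
        # "LLRs" of a noisy BPSK observation (scaling glossed over):
        llr = (1 - 2 * x) * 4.0 + np.random.default_rng(0).normal(size=8)
        u_hat, _ = sc_decode(llr, frozen)
        print("decoded ok:", np.array_equal(u_hat, u))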

  20. Successive Refinement with Decoder Cooperation and its Channel Coding Duals

    CERN Document Server

    Asnani, Himanshu; Weissman, Tsachy

    2012-01-01

    We study cooperation in multi terminal source coding models involving successive refinement. Specifically, we study the case of a single encoder and two decoders, where the encoder provides a common description to both the decoders and a private description to only one of the decoders. The decoders cooperate via cribbing, i.e., the decoder with access only to the common description is allowed to observe, in addition, a deterministic function of the reconstruction symbols produced by the other. We characterize the fundamental performance limits in the respective settings of non-causal, strictly-causal and causal cribbing. We use a new coding scheme, referred to as Forward Encoding and Block Markov Decoding, which is a variant of one recently used by Cuff and Zhao for coordination via implicit communication. Finally, we use the insight gained to introduce and solve some dual channel coding scenarios involving Multiple Access Channels with cribbing.

  1. Source Coding With Side Information Using List Decoding

    CERN Document Server

    Ali, Mortuza

    2010-01-01

    The problem of source coding with side information (SCSI) is closely related to channel coding. Therefore, existing literature focuses on using the most successful channel codes namely, LDPC codes, turbo codes, and their variants, to solve this problem assuming classical unique decoding of the underlying channel code. In this paper, in contrast to classical decoding, we have taken the list decoding approach. We show that syndrome source coding using list decoding can achieve the theoretical limit. We argue that, as opposed to channel coding, the correct sequence from the list produced by the list decoder can effectively be recovered in case of SCSI, since we are dealing with a virtual noisy channel rather than a real noisy channel. Finally, we present a guideline for designing constructive SCSI schemes using Reed Solomon code, BCH code, and Reed-Muller code, which are the known list-decodable codes.

  2. A Modified max-log-MAP Decoding Algorithm for Turbo Decoding

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Turbo decoding is iterative decoding, and the MAP algorithm is optimal in terms of performance in Turbo decoding. The log-MAP algorithm is the MAP executed in the logarithmic domain, so it is also optimal. Both the MAP and the log-MAP algorithm are complicated to implement. The max-log-MAP algorithm is derived from the log-MAP with an approximation; it is simpler than the log-MAP algorithm but suboptimal in terms of performance. A modified max-log-MAP algorithm is presented in this paper, based on the Taylor series of the logarithm and the exponent. Analysis and simulation results show that the modified max-log-MAP algorithm outperforms the max-log-MAP algorithm with almost the same complexity.
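
    The exact log-MAP combine is max*(a, b) = max(a, b) + log(1 + exp(-|a - b|)); max-log-MAP drops the correction term entirely, and modified variants approximate it cheaply. A sketch comparing the three (the approximation below is a generic stand-in with invented constants, not the paper's exact Taylor-series formula):

        import numpy as np

        def max_star_exact(a, b):
            return max(a, b) + np.log1p(np.exp(-abs(a - b)))

        def max_log(a, b):
            return max(a, b)            # correction term dropped

        def max_star_approx(a, b, c=0.6, T=1.5):
            # cheap piecewise-linear stand-in for the correction term
            d = abs(a - b)
            return max(a, b) + (c * (1 - d / T) if d < T else 0.0)

        for a, b in [(0.2, 0.1), (1.0, -1.0), (3.0, 2.5)]:
            print(max_log(a, b), max_star_approx(a, b), max_star_exact(a, b))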

  3. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission, through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time as compared to the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the channel erasure rate increases.

  4. Pipeline Time- And Transform-Domain Reed-Solomon Decoders

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1990-01-01

    Modification of decoding algorithms leads to simplified conceptual designs for time- and transform-domain Reed-Solomon (RS) decoders suitable for implementation as very-large-scale integrated (VLSI) circuits. The new conceptual decoders determine the errata-locator and errata-evaluator polynomials simultaneously as part of a simplified scheme for correction of errors and erasures in RS codes. The designs are highly suitable for implementation both in VLSI circuitry and in software on general-purpose computers.

  5. A VLSI design of a pipeline Reed-Solomon decoder

    Science.gov (United States)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1985-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  6. Mitigating Hardware Cyber-Security Risks in Error Correcting Decoders

    OpenAIRE

    Hemati, Saied

    2015-01-01

    This paper investigates hardware cyber-security risks associated with channel decoders, which are commonly acquired as a black box in semiconductor industry. It is shown that channel decoders are potentially attractive targets for hardware cyber-security attacks and can be easily embedded with malicious blocks. Several attack scenarios are considered in this work and suitable methods for mitigating the risks are proposed. These methods are based on randomizing the inputs of the channel decode...

  7. On a turbo decoder design for low power dissipation

    OpenAIRE

    Fei, Jia

    2000-01-01

    (Abstract) A new coding scheme called "turbo coding" has generated tremendous interest in channel coding of digital communication systems due to its high error correcting capability. Two key innovations in turbo coding are parallel concatenated encoding and iterative decoding. A soft-in soft-out component decoder can be implemented using the maximum a posteriori (MAP) or the maximum likelihood (ML) decoding algorithm. While the MAP algorithm offers better performance than the ML algori...

  8. Completion Delay Minimization for Instantly Decodable Network Codes

    OpenAIRE

    Sorour, Sameh; Valaee, Shahrokh

    2012-01-01

    In this paper, we consider the problem of minimizing the completion delay for instantly decodable network coding (IDNC), in wireless multicast and broadcast scenarios. We are interested in this class of network coding due to its numerous benefits, such as low decoding delay, low coding and decoding complexities and simple receiver requirements. We first extend the IDNC graph, which represents all feasible IDNC coding opportunities, to efficiently operate in both multicast and broadcast scenar...

  9. Hardware architectures for Successive Cancellation Decoding of Polar Codes

    OpenAIRE

    Leroux, Camille; Tal, Ido; Vardy, Alexander; Gross, Warren J.

    2010-01-01

    The recently-discovered polar codes are widely seen as a major breakthrough in coding theory. These codes achieve the capacity of many important channels under successive cancellation decoding. Motivated by the rapid progress in the theory of polar codes, we propose a family of architectures for efficient hardware implementation of successive cancellation decoders. We show that such decoders can be implemented with O(n) processing elements and O(n) memory elements, while providing constant t...

  10. Optimization of Graph Based Codes for Belief Propagation Decoding

    OpenAIRE

    Jayasooriya, Sachini; Johnson, Sarah J.; Ong, Lawrence; Berretta, Regina

    2016-01-01

    A low-density parity-check (LDPC) code is a linear block code described by a sparse parity-check matrix, which can be efficiently represented by a bipartite Tanner graph. The standard iterative decoding algorithm, known as belief propagation, passes messages along the edges of this Tanner graph. Density evolution is an efficient method to analyze the performance of the belief propagation decoding algorithm for a particular LDPC code ensemble, enabling the determination of a decoding threshold...
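
    For the binary erasure channel, density evolution for a regular (dv, dc) ensemble collapses to a one-dimensional recursion, which makes the decoding threshold easy to compute. A self-contained sketch:

        # Density evolution on the BEC: the erasure probability x of
        # variable-to-check messages evolves as
        #     x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1).
        # The threshold is the largest channel erasure rate eps for which
        # the recursion converges to zero.
        def converges(eps, dv=3, dc=6, iters=2000):
            x = eps
            for _ in range(iters):
                x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            return x < 1e-9

        low, high = 0.0, 1.0
        for _ in range(40):                 # bisection for the threshold
            mid = (low + high) / 2
            low, high = (mid, high) if converges(mid) else (low, mid)
        print(f"(3,6) BEC threshold ~ {low:.4f}")   # known value is about 0.4294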

  11. Input of Multimedia Information, Cognitive Load & EFL Listening Decoding

    OpenAIRE

    ZHOU, HAIYAN

    2015-01-01

    This paper clarifies the concepts of cognitive load and EFL listening decoding, as well as the relationship between them, and examines the changes in learners' cognitive load, and their impact on EFL listening decoding, caused by input of pure audio information versus audio information combined with mixtures such as pictures and images. Based on this, the author proposes some effective strategies to improve learners' EFL listening decoding, including strengthening the...

  12. Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms

    OpenAIRE

    Helmling, Michael; Ruzika, Stefan; Tanatmis, Akin

    2011-01-01

    Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality as well as matroid and polyhedral theory. This survey article reviews and categorizes decoding methods based on mathematical pr...

  13. Bayesian analysis of the Hector’s Dolphin data

    OpenAIRE

    King, R; Brooks, S.P.

    2004-01-01

    In recent years there have been increasing concerns for many wildlife populations, due to decreasing population trends. This has led to the introduction of management schemes to increase the survival rates and hence the population size of many species of animals. We concentrate on a particular dolphin population situated off the coast of New Zealand, and investigate whether the introduction of a fishing gill net ban was effective in decreasing dolphin mortality. We undertake a Bayesian analys...

  14. The design and implementation of TPC encoder and decoder

    Science.gov (United States)

    Xiang, L. J.; Wang, Z. B.; Yuan, J. B.; Zheng, L. H.

    2016-02-01

    Based on analysis and simulation of the TPC codec principle and the Chase-2-based iterative decoding algorithm, this paper presents design and implementation methods for a TPC encoder and decoder. In particular, the circuit design and implementation flow of the soft-input soft-output modified method in the decoder algorithm and the iterative decoding are analyzed. For a (64,57,4) TPC, with 3 iterations and 3 uncertain positions, simulation and implementation results show that a coding gain of at least 6.8 dB can be obtained at BER = 10^-6.
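
    A generic Chase-2 sketch: flip every pattern over the p least reliable positions, hard-decode each test pattern with the component code's algebraic decoder, and keep the candidate closest in Euclidean distance to the soft input. A trivial repetition code stands in here for the extended Hamming component codes used in real TPCs:

        import numpy as np
        from itertools import product

        def rep_decode(bits):                 # majority-vote hard decoder
            return np.full_like(bits, int(bits.sum() > len(bits) / 2))

        def chase2(soft, hard_decoder, p=2):
            hard = (soft < 0).astype(int)     # BPSK mapping 0 -> +1, 1 -> -1
            weak = np.argsort(np.abs(soft))[:p]
            best, best_d = None, np.inf
            for flips in product([0, 1], repeat=p):
                test = hard.copy()
                test[weak] ^= np.array(flips)
                cand = hard_decoder(test)
                d = ((soft - (1 - 2 * cand)) ** 2).sum()
                if d < best_d:
                    best, best_d = cand, d
            return best

        soft = np.array([+0.9, +0.1, -0.2, +0.8, +1.1])   # noisy all-zeros codeword
        print(chase2(soft, rep_decode))                    # -> all zeros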

  15. Architecture and finite precision optimization for layered LDPC decoders

    OpenAIRE

    Marchand, Cédric; Conde-Canencia, Laura; Boutillon, Emmanuel

    2010-01-01

    Layered decoding is known to provide efficient and high-throughput implementation of LDPC decoders. In the practical hardware implementation of layered decoders, the performance is strongly affected by quantization. The finite precision model determines the area of the decoder, which is mainly composed of memory, especially for long frames. To be specific, in the DVB-S2,-T2 and -C2 standards, the memory can occupy up to 70% of the total area. In this paper, we focus our attention on the optim...

  16. Adaptive Single-Trial Error/Erasure Decoding of Reed-Solomon Codes

    CERN Document Server

    Senger, Christian; Schober, Steffen; Bossert, Martin; Zyablov, Victor V

    2011-01-01

    Algebraic decoding algorithms are commonly applied for the decoding of Reed-Solomon codes. Their main advantages are low computational complexity and predictable decoding capabilities. Many algorithms can be extended for correction of both errors and erasures. This enables the decoder to exploit binary quantized reliability information obtained from the transmission channel: Received symbols with high reliability are forwarded to the decoding algorithm while symbols with low reliability are erased. In this paper we investigate adaptive single-trial error/erasure decoding of Reed-Solomon codes, i.e. we derive an adaptive erasing strategy which minimizes the residual codeword error probability after decoding. Our result is applicable to any error/erasure decoding algorithm as long as its decoding capabilities can be expressed by a decoder capability function. Examples are Bounded Minimum Distance decoding with the Berlekamp-Massey- or the Sugiyama algorithms and the Guruswami-Sudan list decoder.
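
    The trade-off behind an erasing threshold can be illustrated with the classical errors-and-erasures condition for bounded-minimum-distance decoding, 2*errors + erasures <= d - 1. The reliability and error model below is entirely invented:

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 31, 9

        def success_rate(threshold, trials=20_000):
            ok = 0
            for _ in range(trials):
                rel = rng.uniform(size=n)                        # symbol reliabilities
                err = rng.uniform(size=n) < 0.05 * (1.5 - rel)   # low rel -> more errors
                erased = rel < threshold                         # erase unreliable symbols
                errors = int((err & ~erased).sum())
                ok += 2 * errors + int(erased.sum()) <= d - 1    # BMD capability check
            return ok / trials

        for t in (0.0, 0.1, 0.2, 0.4):
            print(t, success_rate(t))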

  17. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  18. Bayesian Magic in Asteroseismology

    Science.gov (United States)

    Kallinger, T.

    2015-09-01

    Only a few years ago asteroseismic observations were so rare that scientists had plenty of time to work on individual data sets. They could tune their algorithms in any possible way to squeeze out the last bit of information. Nowadays this is impossible. With missions like MOST, CoRoT, and Kepler we basically drown in new data every day. To handle this efficiently, statistical methods become more and more important. This is why Bayesian techniques started their triumphal march across asteroseismology. I will go with you on a journey through Bayesian Magic Land, that brings us to the sea of granulation background, the forest of peakbagging, and the stony alley of model comparison.

  19. Decoding the direction of auditory motion in blind humans.

    Science.gov (United States)

    Wolbers, Thomas; Zahorik, Pavel; Giudice, Nicholas A

    2011-05-15

    Accurate processing of nonvisual stimuli is fundamental to humans with visual impairments. In this population, moving sounds activate an occipito-temporal region thought to encompass the equivalent of monkey area MT+, but it remains unclear whether the signal carries information beyond the mere presence of motion. To address this important question, we tested whether the processing in this region retains functional properties that are critical for accurate motion processing and that are well established in the visual modality. Specifically, we focussed on the property of 'directional selectivity', because MT+ neurons in non-human primates fire preferentially to specific directions of visual motion. Recent neuroimaging studies have revealed similar properties in sighted humans by successfully decoding different directions of visual motion from fMRI activation patterns. Here we used fMRI and multivariate pattern classification to demonstrate that the direction in which a sound is moving can be reliably decoded from dorsal occipito-temporal activation in the blind. We also show that classification performance is at chance (i) in a control region in posterior parietal cortex and (ii) when motion information is removed and subjects only hear a sequence of static sounds presented at the same start and end positions. These findings reveal that information about the direction of auditory motion is present in dorsal occipito-temporal responses of blind humans. As such, this area, which appears consistent with the hMT+ complex in the sighted, provides crucial information for the generation of a veridical percept of moving non-visual stimuli. PMID:20451630

  20. Bayesian Nonparametric Graph Clustering

    OpenAIRE

    Banerjee, Sayantan; Akbani, Rehan; Baladandayuthapani, Veerabhadran

    2015-01-01

    We present clustering methods for multivariate data exploiting the underlying geometry of the graphical structure between variables. As opposed to standard approaches that assume known graph structures, we first estimate the edge structure of the unknown graph using Bayesian neighborhood selection approaches, wherein we account for the uncertainty of graphical structure learning through model-averaged estimates of the suitable parameters. Subsequently, we develop a nonparametric graph cluster...

  1. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  2. Bayesian Generalized Rating Curves

    OpenAIRE

    Helgi Sigurðarson 1985

    2014-01-01

    A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge in an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage and this methodology is applied as stage is substantially easier to directly observe than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...

  3. Heteroscedastic Treed Bayesian Optimisation

    OpenAIRE

    Assael, John-Alexander M.; Wang, Ziyu; Shahriari, Bobak; De Freitas, Nando

    2014-01-01

    Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stat...

  4. Efficient Bayesian Phase Estimation

    Science.gov (United States)

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.

  5. Bayesian Word Sense Induction

    OpenAIRE

    Brody, Samuel; Lapata, Mirella

    2009-01-01

    Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...

  6. Bayesian Neural Word Embedding

    OpenAIRE

    Barkan, Oren

    2016-01-01

    Recently, several works in the domain of natural language processing presented successful methods for word embedding. Among them, the Skip-gram (SG) with negative sampling, known also as Word2Vec, advanced the state-of-the-art of various linguistics tasks. In this paper, we propose a scalable Bayesian neural word embedding algorithm that can be beneficial to general item similarity tasks as well. The algorithm relies on a Variational Bayes solution for the SG objective and a detailed step by ...

  7. On the higher efficiency of parallel Reed-Solomon turbo-decoding

    OpenAIRE

    Leroux, C.; Jego, Christophe; ADDE, Patrick; JEZEQUEL, Michel

    2008-01-01

    In this paper, we demonstrate the higher hardware efficiency of Reed-Solomon (RS) parallel turbo decoding compared with BCH parallel turbo decoding. Based on an innovative architecture, this is the first implementation of fully parallel RS turbo decoder. A performance analysis is performed showing that RS block turbo codes (RS-BTC) have decoding performance equivalent to Bose Ray-Chaudhuri Hocquenghem-block turbo codes (BCH-BTC). A ratio between the decoder throughput and the decoder area is ...

  8. Bayesian methods in bacterial population genomics

    OpenAIRE

    Cheng, Lu

    2013-01-01

    Vast amounts of molecular data are being generated every day. However, how to properly harness the data remains often a challenge for many biologists. Firstly, due to the typical large dimension of the molecular data, analyses can either require exhaustive amounts of computer memory or be very time-consuming, or both. Secondly, biological problems often have their own special features, which put demand on specially designed software to obtain meaningful results from statistical analyses witho...

  9. Bayesian Attractor Learning

    Science.gov (United States)

    Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory

    2016-04-01

    Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms, one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.

  10. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  11. The role of ECoG magnitude and phase in decoding position, velocity and acceleration during continuous motor behavior

    Directory of Open Access Journals (Sweden)

    Jiri eHammer

    2013-11-01

    In neuronal population signals, including the electroencephalogram (EEG) and electrocorticogram (ECoG), the low-frequency component (LFC) is particularly informative about motor behavior and can be used for decoding movement parameters for brain-machine interface (BMI) applications. An idea previously expressed, but as of yet not quantitatively tested, is that it is the LFC phase that is the main source of decodable information. To test this issue, we analyzed human ECoG recorded during a game-like, one-dimensional, continuous motor task with a novel decoding method suitable for unfolding magnitude and phase explicitly into a complex-valued, time-frequency signal representation, enabling quantification of the decodable information within the temporal, spatial and frequency domains and allowing disambiguation of the phase contribution from that of the spectral magnitude. The decoding accuracy based only on phase information was substantially (at least 2-fold) and significantly higher than that based only on magnitudes for position, velocity and acceleration. The frequency profile of movement-related information in the ECoG data matched well with the frequency profile expected when assuming a close time-domain correlate of movement velocity in the ECoG, e.g., a (noisy) copy of hand velocity. No such match was observed with the frequency profiles expected when assuming a copy of either hand position or acceleration. There was also no indication of additional magnitude-based mechanisms encoding movement information in the LFC range. Thus, our study contributes to elucidating the nature of the informative low-frequency component of motor cortical population activity and may hence contribute to improved decoding strategies and BMI performance.
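
    A minimal sketch of the separation the abstract describes: form a complex-valued representation of a slow signal, then keep either its phase or its magnitude as decoding features (toy signal; band-pass filtering into the LFC range is glossed over):

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(0)
        fs = 250.0
        t = np.arange(0, 4.0, 1 / fs)
        # toy "LFC" signal: slow oscillation tracking movement, plus noise
        sig = np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.normal(size=t.size)

        analytic = hilbert(sig)                # complex-valued representation
        phase_feat = np.angle(analytic)        # phase-only features
        mag_feat = np.abs(analytic)            # magnitude-only features
        # phase is best fed to a decoder as (cos, sin) pairs to avoid wrap-around
        phase_xy = np.stack([np.cos(phase_feat), np.sin(phase_feat)], axis=1)
        print(phase_xy.shape, mag_feat.shape)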

  12. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde; Wu, Yingquan

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...

  13. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    Science.gov (United States)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable-length-coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams are processed in real time using the developed BsPU at a core clock speed under 250 MHz.
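
    Emulation prevention byte removal itself is simple to state: inside an H.264/AVC NAL unit, every 0x00 0x00 0x03 sequence has its 0x03 dropped, since the encoder inserted it to keep start codes from appearing in the payload. A reference sketch (not the paper's hardware method, which avoids the initial scan delay):

        def remove_emulation_prevention_bytes(nal: bytes) -> bytes:
            """Drop the 0x03 in every 0x00 0x00 0x03 run of a NAL unit."""
            out = bytearray()
            zeros = 0
            for b in nal:
                if zeros >= 2 and b == 0x03:
                    zeros = 0            # drop the emulation prevention byte
                    continue
                out.append(b)
                zeros = zeros + 1 if b == 0x00 else 0
            return bytes(out)

        print(remove_emulation_prevention_bytes(b"\x00\x00\x03\x01\x00\x00\x03\x00").hex())
        # -> 000001000000  (both 0x03 bytes removed)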

  14. Unbounded Bayesian Optimization via Regularization

    OpenAIRE

    Shahriari, Bobak; Bouchard-Côté, Alexandre; De Freitas, Nando

    2015-01-01

    Bayesian optimization has recently emerged as a popular and efficient tool for global optimization and hyperparameter tuning. Currently, the established Bayesian optimization practice requires a user-defined bounding box which is assumed to contain the optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such bounds. In this work we modify the standard Bayesian optimization framework in a principled way to allow automatic resizing of t...

  15. Bayesian optimization for materials design

    OpenAIRE

    Frazier, Peter I.; Wang, Jialei

    2015-01-01

    We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case when materials designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian pro...

  16. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  17. Single-Chip VLSI Reed-Solomon Decoder

    Science.gov (United States)

    Shao, Howard M.; Truong, Trieu-Kie; Hsu, In-Shek; Deutsch, Leslie J.

    1988-01-01

    Efficient utilization of computing elements reduces size while preserving throughput. VLSI architecture is pipeline Reed-Solomon decoder for correction of errors and erasures. Uses transform circuit to compute syndrome polynomial. Erasure information enters decoder as binary sequence. Applied to variety of digital communications involving error-correcting RS codes.

  18. Joint Synchronization Of Viterbi And Reed-Solomon Decoders

    Science.gov (United States)

    Statman, Joseph I.; Chauvin, Todd H.; Cheung, Kar-Ming; Rabkin, Jay; Belongie, Mignon L.

    1995-01-01

    Synchronization times reduced to reduce loss of data. Scheme for decoding received doubly encoded binary-data signal provides for joint synchronization of two decoders. Applies to concatenated error-correcting channel coding communication system in which, at transmitter, data first encoded by interleaved Reed-Solomon code (block code), then by convolutional code.

  19. Analysis and Design of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2012-01-01

    Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results i...

  20. Decoding Technique of Concatenated Hadamard Codes and Its Performance

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The decoding technique of concatenated Hadamard codes and its performance are studied. Efficient soft-in-soft-out decoding algorithms based on the fast Hadamard transform are developed. Performance required by CDMA mobile or PCS speech services, e.g., BER = 10^-3, can be achieved at Eb/No = 0.9 dB using a short interleaving length of 192 bits.
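
    The fast Hadamard transform computes all codeword correlations of a length-2^m Hadamard (first-order Reed-Muller) code in n log n operations, which is what makes soft-input decoding of the component codes cheap. A sketch with invented noise settings:

        import numpy as np

        def fht(x):
            """In-place-style fast (Walsh-)Hadamard transform, natural order."""
            x = x.astype(float).copy()
            h = 1
            while h < len(x):
                for i in range(0, len(x), 2 * h):
                    a = x[i : i + h].copy()
                    b = x[i + h : i + 2 * h].copy()
                    x[i : i + h] = a + b
                    x[i + h : i + 2 * h] = a - b
                h *= 2
            return x

        n = 16
        H = np.array([[(-1) ** bin(i & j).count("1") for j in range(n)] for i in range(n)])
        sent = H[5]                                   # a Hadamard codeword in +/-1 form
        rng = np.random.default_rng(2)
        r = sent + rng.normal(scale=0.8, size=n)      # soft received values
        corr = fht(r)                                 # correlations with all rows
        print("decoded row:", int(np.argmax(np.abs(corr))))   # -> 5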

  1. High-speed Viterbi decoding with overlapping code sequences

    Science.gov (United States)

    Ross, Michael D.; Osborne, William P.

    1993-01-01

    The Viterbi Algorithm for decoding convolutional codes and Trellis Coded Modulation is suited to VLSI implementation but contains a feedback loop which limits the speed of pipelined architecture. The feedback loop is circumvented by decoding independent sequences simultaneously, resulting in a 5-9 fold speed-up with a two-fold hardware expansion.

  2. Decoding Information in the Human Hippocampus: A User's Guide

    Science.gov (United States)

    Chadwick, Martin J.; Bonnici, Heidi M.; Maguire, Eleanor A.

    2012-01-01

    Multi-voxel pattern analysis (MVPA), or "decoding", of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded,…

  3. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding in the system of underwater image transmission is introduced, including the rapid digital frequency synthesizer in multiple frequency shift keying, an image data generator, an image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.

  4. Self-Corrected Min-Sum decoding of LDPC codes

    CERN Document Server

    Savin, Valentin

    2008-01-01

    In this paper we propose a very simple but powerful self-correction method for the Min-Sum decoding of LDPC codes. Unlike other correction methods known in the literature, our method does not try to correct the check node processing approximation, but it modifies the variable node processing by erasing unreliable messages. However, this positively affects check node messages, which become symmetric Gaussian distributed, and we show that this is sufficient to ensure a quasi-optimal decoding performance. Monte-Carlo simulations show that the proposed Self-Corrected Min-Sum decoding performs very close to the Sum-Product decoding, while preserving the main features of the Min-Sum decoding, that is, low complexity and independence with respect to noise variance estimation errors.
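
    The two message-update rules can be sketched directly; the erasure rule below is the self-correction step, with graph construction and scheduling omitted:

        import numpy as np

        def check_update(incoming):
            """Min-sum check node rule; `incoming` holds the messages from
            all other variable nodes connected to this check."""
            signs = np.prod(np.sign(incoming))
            return signs * np.min(np.abs(incoming))

        def variable_update(channel_llr, incoming_checks, prev_msg):
            """Variable node rule with self-correction: erase (zero) the
            outgoing message if its sign flipped since the last iteration."""
            msg = channel_llr + incoming_checks.sum()
            if prev_msg != 0 and np.sign(msg) != np.sign(prev_msg):
                return 0.0
            return msg

        print(check_update(np.array([1.2, -0.4, 2.0])))                    # -> -0.4
        print(variable_update(0.3, np.array([0.5, -1.5]), prev_msg=1.1))   # -> 0.0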

  5. Hardware architectures for Successive Cancellation Decoding of Polar Codes

    CERN Document Server

    Leroux, Camille; Vardy, Alexander; Gross, Warren J

    2010-01-01

    The recently-discovered polar codes are widely seen as a major breakthrough in coding theory. These codes achieve the capacity of many important channels under successive cancellation decoding. Motivated by the rapid progress in the theory of polar codes, we propose a family of architectures for efficient hardware implementation of successive cancellation decoders. We show that such decoders can be implemented with O(n) processing elements and O(n) memory elements, while providing constant throughput. We also propose a technique for overlapping the decoding of several consecutive codewords, thereby achieving a significant speed-up factor. We furthermore show that successive cancellation decoding can be implemented in the logarithmic domain, thereby eliminating the multiplication and division operations and greatly reducing the complexity of each processing element.

  6. Iterative Soft Decision Based Complex K-best MIMO Decoder

    Directory of Open Access Journals (Sweden)

    Mehnaz Rahman

    2015-11-01

    This paper presents an iterative soft-decision-based complex multiple-input multiple-output (MIMO) decoding algorithm, which reduces the complexity of the Maximum Likelihood (ML) detector. We develop a novel iterative complex K-best decoder exploiting the techniques of lattice reduction for 8×8 MIMO. Besides list size, a new adjustable variable has been introduced in order to control the on-demand child expansion. Following this method, we obtain 6.9 to 8.0 dB improvement over the real-domain K-best decoder and 1.4 to 2.5 dB better performance compared to the iterative conventional complex decoder for the 4th iteration and 64-QAM modulation scheme. We also demonstrate the significance of the new parameter on bit error rate. The proposed decoder not only increases the performance, but also reduces the computational complexity to a certain level.
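
    A breadth-first K-best sketch in the real domain (the paper's decoder is complex-valued and adds lattice reduction and on-demand child expansion, all omitted here; alphabet, K, and noise are invented):

        import numpy as np

        def k_best_detect(y, H, alphabet=(-3, -1, 1, 3), K=4):
            n = H.shape[1]
            Q, R = np.linalg.qr(H)
            z = Q.T @ y
            survivors = [((), 0.0)]                    # (partial symbols, metric)
            for k in range(n - 1, -1, -1):             # last layer first
                children = []
                for cand, m in survivors:
                    for s in alphabet:
                        x = np.array((s,) + cand, dtype=float)
                        resid = z[k] - R[k, k:] @ x
                        children.append(((s,) + cand, m + resid ** 2))
                survivors = sorted(children, key=lambda c: c[1])[:K]  # keep K best
            return np.array(survivors[0][0])

        rng = np.random.default_rng(4)
        n = 4
        H = rng.normal(size=(n, n))
        x = rng.choice([-3, -1, 1, 3], size=n).astype(float)
        y = H @ x + 0.1 * rng.normal(size=n)
        print("detected correctly:", np.array_equal(k_best_detect(y, H), x))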

  7. Lattice Sequential Decoder for Coded MIMO Channel: Performance and Complexity Analysis

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    In this paper, the performance limit of the lattice sequential decoder for the lattice space-time coded MIMO channel is analysed. We determine the rates achievable by lattice coding and sequential decoding applied to such a channel. The diversity-multiplexing tradeoff (DMT) under lattice sequential decoding is derived as a function of its parameter, the bias term. The bias parameter is critical for controlling the amount of computation required at the decoding stage. Achieving low decoding complexity requires increasing the value of the bias term. However, this is done at the expense of losing the optimal tradeoff of the channel. We show how such a decoder can bridge the gap between the lattice decoder and low-complexity decoders. Moreover, the computational complexity of the lattice sequential decoder is analysed. Specifically, we derive the tail distribution of the decoder's computational complexity in the high signal-to-noise ratio regime. Similar to the conventional sequential decoder used in discrete memoryless channel,...

  8. Decoding suprathreshold stochastic resonance with optimal weights

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Liyan, E-mail: xuliyan@qdu.edu.cn [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Vladusich, Tony [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Duan, Fabing [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Gunn, Lachlan J.; Abbott, Derek [Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia); McDonnell, Mark D. [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia)

    2015-10-09

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels, optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near-optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response.

    Highlights:
    • A weighted summing array of independently noisy binary comparators is investigated.
    • We present an optimal linearly weighted decoding scheme for combining the comparator responses.
    • We solve for the optimal weights by applying least squares regression to simulated data.
    • We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses.
    • For some parameter regions, the decrease in MSE distortion due to weighting is negligible.
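
    The least-squares route to the optimal weights, as described in the highlights, can be reproduced in a few lines. The threshold spread, noise level and Gaussian input below are illustrative assumptions rather than the paper's exact settings.

        import numpy as np

        rng = np.random.default_rng(0)
        n_comparators, n_samples = 15, 20000
        x = rng.standard_normal(n_samples)                   # analog input signal
        thresholds = np.linspace(-1.0, 1.0, n_comparators)   # spread threshold levels
        noise = 0.4 * rng.standard_normal((n_comparators, n_samples))
        y = (x + noise > thresholds[:, None]).astype(float)  # binary comparator outputs

        # unweighted decoding: best affine function of the plain sum of responses
        A = np.column_stack([y.sum(axis=0), np.ones(n_samples)])
        c, *_ = np.linalg.lstsq(A, x, rcond=None)
        # weighted decoding: a least-squares weight per comparator, plus a bias
        B = np.column_stack([y.T, np.ones(n_samples)])
        w, *_ = np.linalg.lstsq(B, x, rcond=None)

        print("unweighted MSE:", np.mean((A @ c - x) ** 2))
        print("weighted MSE:  ", np.mean((B @ w - x) ** 2))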

  9. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  10. Decentralized Distributed Bayesian Estimation

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    Praha: ÚTIA AVČR, v.v.i, 2011 - (Janžura, M.; Ivánek, J.). s. 16-16 [7th International Workshop on Data–Algorithms–Decision Making. 27.11.2011-29.11.2011, Mariánská] R&D Projects: GA ČR 102/08/0567; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : estimation * distributed estimation * model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-decentralized distributed bayesian estimation.pdf

  11. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBUGS...

  12. Computationally efficient Bayesian tracking

    Science.gov (United States)

    Aughenbaugh, Jason; La Cour, Brian

    2012-06-01

    In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
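
    A much-simplified, one-dimensional sketch of the grid-based recursive Bayesian idea behind such trackers: predict by convolving the belief over the grid with a motion kernel, then update by multiplying with a measurement likelihood. The polynomial surface representation, particle/grid hybrid and Delaunay-based sensor updates of the paper are not reproduced, and all numbers are invented.

        import numpy as np

        grid = np.linspace(0.0, 10.0, 201)              # 1-D position grid
        dx = grid[1] - grid[0]
        belief = np.full(grid.size, 1.0 / grid.size)    # uniform prior

        def predict(belief, drift, q):
            # time update: shift by the motion drift, blur by process noise q
            kx = np.arange(-5 * q, 5 * q + dx / 2, dx)
            kernel = np.exp(-0.5 * ((kx - drift) / q) ** 2)
            out = np.convolve(belief, kernel, mode="same")
            return out / out.sum()

        def update(belief, z, r):
            # measurement update: Gaussian likelihood around measurement z
            post = belief * np.exp(-0.5 * ((grid - z) / r) ** 2)
            return post / post.sum()

        for z in [2.1, 2.6, 3.2]:                       # noisy position reports
            belief = predict(belief, drift=0.5, q=0.3)
            belief = update(belief, z, r=0.5)
        print("MAP position estimate:", grid[np.argmax(belief)])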

  13. Improved iterative Bayesian unfolding

    CERN Document Server

    D'Agostini, G

    2010-01-01

    This paper reviews the basic ideas behind a Bayesian unfolding published some years ago and improves their implementation. In particular, uncertainties are now treated at all levels by probability density functions and their propagation is performed by Monte Carlo integration. Thus, small numbers are better handled and the final uncertainty does not rely on the assumption of normality. Theoretical and practical issues concerning the iterative use of the algorithm are also discussed. The new program, implemented in the R language, is freely available, together with sample scripts to play with toy models.
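
    The iterative core of the method (Bayes' theorem applied to a detector response matrix, iterated from a flat starting prior) fits in a short function. This sketch omits the Monte Carlo propagation of uncertainties that is the paper's main improvement, and the two-bin smearing example is invented.

        import numpy as np

        def unfold(response, measured, n_iter=4):
            # response[j, i] = P(observe bin j | true bin i); columns may sum
            # to less than 1 when the detector is inefficient
            n_true = response.shape[1]
            est = np.full(n_true, measured.sum() / n_true)  # flat starting prior
            eff = response.sum(axis=0)                      # efficiency per true bin
            for _ in range(n_iter):
                folded = response @ est                     # expected observed spectrum
                post = response * est / folded[:, None]     # P(true i | observed j)
                est = (post.T @ measured) / eff             # reweighted true counts
            return est

        response = np.array([[0.8, 0.2],                    # smearing between two bins
                             [0.2, 0.8]])
        measured = np.array([120.0, 80.0])
        print(unfold(response, measured))                   # moves toward [133.3, 66.7]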

  14. An unbiased Bayesian approach to functional connectomics implicates social-communication networks in autism

    Directory of Open Access Journals (Sweden)

    Archana Venkataraman

    2015-01-01

    Full Text Available Resting-state functional magnetic resonance imaging (rsfMRI) studies reveal a complex pattern of hyper- and hypo-connectivity in children with autism spectrum disorder (ASD). Whereas rsfMRI findings tend to implicate the default mode network and subcortical areas in ASD, task fMRI and behavioral experiments point to social dysfunction as a unifying impairment of the disorder. Here, we leverage a novel Bayesian framework for whole-brain functional connectomics that aggregates population differences in connectivity to localize a subset of foci that are most affected by ASD. Our approach is entirely data-driven and does not impose spatial constraints on the region foci or dictate the trajectory of altered functional pathways. We apply our method to data from the openly shared Autism Brain Imaging Data Exchange (ABIDE) and pinpoint two intrinsic functional networks that distinguish ASD patients from typically developing controls. One network involves foci in the right temporal pole, left posterior cingulate cortex, left supramarginal gyrus, and left middle temporal gyrus. Automated decoding of this network by the Neurosynth meta-analytic database suggests high-level concepts of “language” and “comprehension” as the likely functional correlates. The second network consists of the left banks of the superior temporal sulcus, right posterior superior temporal sulcus extending into the temporo-parietal junction, and right middle temporal gyrus. Associated functionality of these regions includes “social” and “person”. The abnormal pathways emanating from the above foci indicate that ASD patients simultaneously exhibit reduced long-range or inter-hemispheric connectivity and increased short-range or intra-hemispheric connectivity. Our findings reveal new insights into ASD and highlight possible neural mechanisms of the disorder.

  15. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is becoming increasingly popular in the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been used to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.

  16. Adaptive Dynamic Bayesian Networks

    Energy Technology Data Exchange (ETDEWEB)

    Ng, B M

    2007-10-26

    A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.

  17. Bayesian analysis toolkit - BAT

    International Nuclear Information System (INIS)

    Statistical treatment of data is an essential part of any data analysis and interpretation. Different statistical methods and approaches can be used; however, the implementation of these approaches is complicated and at times inefficient. The Bayesian analysis toolkit (BAT) is a software package developed in a C++ framework that facilitates the statistical analysis of data using Bayes' theorem. The tool evaluates the posterior probability distributions for models and their parameters using Markov Chain Monte Carlo, which in turn provides straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as simulated annealing, allow extraction of the global mode of the posterior. BAT provides a well-tested environment for flexible model definition and also includes a set of predefined models for standard statistical problems. The package is interfaced to other software packages commonly used in high energy physics, such as ROOT, Minuit, RooStats and CUBA. We present a general overview of BAT and its algorithms. A few physics examples are shown to introduce the spectrum of its applications. In addition, new developments and features are summarized.
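
    BAT itself is a C++ package; purely to illustrate the Markov Chain Monte Carlo step it is built around, here is a minimal random-walk Metropolis sketch in Python. The Gaussian-mean posterior, step size and burn-in are invented for the example.

        import numpy as np

        def metropolis(log_posterior, x0, n_steps=10000, step=0.5, seed=1):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            lp = log_posterior(x)
            chain = np.empty((n_steps, x.size))
            for t in range(n_steps):
                prop = x + step * rng.standard_normal(x.size)
                lp_prop = log_posterior(prop)
                if np.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
                    x, lp = prop, lp_prop
                chain[t] = x
            return chain

        # posterior of a Gaussian mean with unit variance and a flat prior
        data = np.array([4.9, 5.2, 4.7, 5.1])
        chain = metropolis(lambda th: -0.5 * np.sum((data - th[0]) ** 2), [0.0])
        print("posterior mean ~", chain[2000:].mean())   # close to the sample mean 4.975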

  18. Decoding Generalized Concatenated Codes Using Interleaved Reed-Solomon Codes

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor

    2008-01-01

    Generalized Concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code. As outer codes, we assume the most frequently used Reed-Solomon codes; as inner code, we assume some linear block code which can be decoded up to half its minimum distance. Decoding up to half the minimum distance of Generalized Concatenated codes is classically achieved by the Blokh-Zyablov-Dumer algorithm, which decodes iteratively by first using the inner decoder to get an estimate of the outer code words and then using an outer error/erasure decoder with a varying number of erasures determined by a set of pre-calculated thresholds. In this paper, a modified version of the Blokh-Zyablov-Dumer algorithm is proposed, which exploits the fact that a number of outer Reed-Solomon codes with average minimum distance d can be grouped into one single Interleaved Reed-Solomon code which can be decoded beyond d/2. This makes it possible to skip a number of decoding iterations on the one...

  19. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

    Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  20. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decoding. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to the traditional GIDNC scheme. Completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.
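
    To make the packet-selection problem concrete, here is a toy brute-force version of the plain (order-1) IDNC decision: find the XOR combination that is instantly decodable for the most receivers. The has/wants sets are invented, and the O2 extension's storage of non-instantly decodable packets is deliberately not modeled.

        from itertools import combinations

        # each receiver: (packets it holds, packets it still wants)
        receivers = [({1, 2}, {3}), ({1, 3}, {2}), ({2, 3}, {1}), ({1}, {2, 3})]
        packets = {1, 2, 3}

        def served(combo):
            # a receiver decodes the XOR instantly iff exactly one packet of
            # the combination is missing at that receiver, and it wants it
            count = 0
            for has, wants in receivers:
                missing = combo - has
                count += len(missing) == 1 and missing <= wants
            return count

        candidates = (frozenset(c) for r in range(1, len(packets) + 1)
                      for c in combinations(packets, r))
        best = max(candidates, key=served)
        print("send XOR of", sorted(best), "-> serves", served(best), "receivers")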

  1. On Multiple Decoding Attempts for Reed-Solomon Codes

    CERN Document Server

    Nguyen, Phong S; Narayanan, Krishna R

    2010-01-01

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on the idea of using multiple trials of a simple RS decoding algorithm in combination with successively erasing or flipping a set of symbols or bits in each trial. In this paper, we present a framework based on rate-distortion (RD) theory to analyze such multiple-decoding algorithms for RS codes. By defining an appropriate distortion measure between an error pattern and an erasure pattern, it is shown that, for a single errors-and-erasures decoding trial, the condition for successful decoding is equivalent to the condition that the distortion is smaller than a fixed threshold. Finding the best set of erasure patterns for multiple decoding trials then turns out to be a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. We also consider an a...

  2. Coding and Decoding for the Dynamic Decode and Forward Relay Protocol

    CERN Document Server

    Kumar, K Raj

    2008-01-01

    We study the Dynamic Decode and Forward (DDF) protocol for a single half-duplex relay, single-antenna channel with quasi-static fading. The DDF protocol is well-known and has been analyzed in terms of the Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We characterize the finite block length DMT and give new explicit code constructions. The finite block length analysis illuminates a few key aspects that have been neglected in the previous literature: 1) we show that one dominating cause of degradation with respect to the infinite block length regime is the event of decoding error at the relay; 2) we explicitly take into account the fact that the destination does not generally know a priori the relay decision time at which the relay switches from listening to transmit mode. Both the above problems can be tackled by a careful design of the decoding algorithm. In particular, we introduce a decision rejection criterion at the relay based on Forney's decision rule (a variant of the Neyman...

  3. OFDM receiver for fast time-varying channels using block-sparse Bayesian learning

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Manchón, Carles Navarro; Rom, Christian;

    2016-01-01

    We propose an iterative algorithm for OFDM receivers operating over fast time-varying channels. The design relies on the assumptions that the channel response can be characterized by a few non-negligible separable multipath components, and that the temporal variation of each component gain can be well characterized with a basis expansion model using a small number of terms. As a result, the channel estimation problem is posed as that of estimating a vector of complex coefficients that exhibits a block-sparse structure, which we solve with tools from block-sparse Bayesian learning. Using variational Bayesian inference, we embed the channel estimator in a receiver structure that performs iterative channel and noise precision estimation, intercarrier interference cancellation, detection and decoding. Simulation results illustrate the superior performance of the proposed receiver over state-of-the-art receivers.

  4. Decoding the mechanisms of Antikythera astronomical device

    CERN Document Server

    Lin, Jian-Liang

    2016-01-01

    This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...

  5. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  6. A single chip VLSI Reed-Solomon decoder

    Science.gov (United States)

    Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.

    1986-02-01

    A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.

  7. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  8. Min-Max decoding for non binary LDPC codes

    CERN Document Server

    Savin, Valentin

    2008-01-01

    Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them and it has the benefit of two possible LLR domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality and a reduced complexity implementation called selective implementation, which makes the Min-Max decoding very attractive for practical purposes.

  9. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    Science.gov (United States)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  10. New syndrome decoding techniques for the (n, k) convolutional codes

    Science.gov (United States)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964

  11. Turbo decoder architecture for beyond-4G applications

    CERN Document Server

    Wong, Cheng-Chi

    2013-01-01

    This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respect to...

  12. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random-access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve records...

  13. Bayesian isochrone fitting and stellar ages

    CERN Document Server

    Valls-Gabaud, D

    2016-01-01

    Stellar evolution theory has been extraordinarily successful at explaining the different phases under which stars form, evolve and die. While the strongest constraints have traditionally come from binary stars, the advent of asteroseismology is bringing unique measures in well-characterised stars. For stellar populations in general, however, only photometric measures are usually available, and the comparison with the predictions of stellar evolution theory has mostly been qualitative. For instance, the geometrical shapes of isochrones have been used to infer ages of coeval populations, but without any proper statistical basis. In this chapter we provide a pedagogical review of a Bayesian formalism to make quantitative inferences on the properties of single, binary and small ensembles of stars, including unresolved populations. As an example, we show how stellar evolution theory can be used in a rigorous way as prior information to measure the ages of stars between the ZAMS and the helium flash, and their u...

  14. Power Decoding Reed-Solomon Codes Up to the Johnson Radius

    OpenAIRE

    Nielsen, Johan S. R.

    2015-01-01

    Power decoding, or "decoding using virtual interleaving", is a technique for decoding Reed-Solomon codes up to the Sudan radius. Since the method's inception, it has been an open question whether it is possible to incorporate "multiplicities", the parameter allowing the Guruswami-Sudan algorithm to decode up to the Johnson radius. In this paper we show that this can be done, and describe how to efficiently solve the resulting key equations. Like the original Power decoding, the proposed algorithm is ...

  15. An Interpolation Procedure for List Decoding Reed–Solomon Codes Based on Generalized Key Equations

    OpenAIRE

    Zeh, Alexander; Gentner, Christian; Augot, Daniel

    2011-01-01

    The key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. The original i...

  16. An Interpolation Procedure for List Decoding Reed-Solomon Codes Based on Generalized Key Equations

    OpenAIRE

    Zeh, Alexander; Gentner, Christian; Augot, Daniel

    2011-01-01

    The key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. The original...

  17. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  18. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on...

  19. VHDL Implementation of Fast and Efficient Viterbi decoder

    Directory of Open Access Journals (Sweden)

    Rajesh. C

    2013-09-01

    Full Text Available Viterbi decoders are used in a wide variety of communication applications. In this paper, we focus on different VHDL implementations of the Viterbi decoder. The two approaches to implementing a Viterbi decoder are the register-exchange approach and the traceback approach. There are two methods in the traceback approach: shift update and selective update. The behaviour of a Viterbi decoder is described in VHDL. A gate-level circuit was obtained from the behavioural description through logic synthesis. We compared the performance characteristics of all approaches in terms of speed, area consumption, power and the specific hardware components used by each particular design. Our experimental results show that the performance characteristics of the selective update method are better than those of the register-exchange and shift update methods in terms of area and power consumption. In contrast, the register-exchange method is better than the selective update and shift update methods in terms of speed.
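
    As a behavioural reference model for the approaches compared above, here is a small software Viterbi decoder for the standard rate-1/2, constraint-length-3 code (generators 7 and 5 in octal). Storing a full survivor path per state, as done below, mirrors the register-exchange idea; a traceback design would store only decision bits instead. The code and test vector are an illustrative sketch, not the paper's VHDL.

        G = (0b111, 0b101)      # generator taps; constraint length 3
        N_STATES = 4            # 2**(K-1) encoder states

        def step(state, bit):
            reg = (bit << 2) | state                        # newest bit at the MSB
            out = [bin(reg & g).count("1") & 1 for g in G]  # parity of tapped bits
            return out, reg >> 1                            # outputs, next state

        def encode(bits):
            state, out = 0, []
            for b in bits:
                symbols, state = step(state, b)
                out += symbols
            return out

        def viterbi_decode(rx, n_bits):
            INF = float("inf")
            metric = [0.0] + [INF] * (N_STATES - 1)         # start in the zero state
            path = [[] for _ in range(N_STATES)]            # register-exchange survivors
            for t in range(n_bits):
                r = rx[2 * t: 2 * t + 2]
                new_metric, new_path = [INF] * N_STATES, [None] * N_STATES
                for s in range(N_STATES):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):
                        symbols, ns = step(s, b)
                        m = metric[s] + sum(x != y for x, y in zip(r, symbols))
                        if m < new_metric[ns]:
                            new_metric[ns], new_path[ns] = m, path[s] + [b]
                metric, path = new_metric, new_path
            return path[min(range(N_STATES), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0]
        rx = encode(msg)
        rx[3] ^= 1                                          # inject one channel bit error
        print(viterbi_decode(rx, len(msg)) == msg)          # True: the error is corrected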

  20. Modified Euclidean Algorithms for Decoding Reed-Solomon Codes

    CERN Document Server

    Sarwate, Dilip V

    2009-01-01

    The extended Euclidean algorithm (EEA) for polynomial greatest common divisors is commonly used in solving the key equation in the decoding of Reed-Solomon (RS) codes, and more generally in BCH decoding. For this particular application, the iterations in the EEA are stopped when the degree of the remainder polynomial falls below a threshold. While determining the degree of a polynomial is a simple task for human beings, hardware implementation of this stopping rule is more complicated. This paper describes a modified version of the EEA that is specifically adapted to the RS decoding problem. This modified algorithm requires no degree computation or comparison to a threshold, and it uses a fixed number of iterations. Another advantage of this modified version is in its application to the errors-and-erasures decoding problem for RS codes where significant hardware savings can be achieved via seamless computation.
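
    For contrast with the modified algorithm, the classical degree-threshold stopping rule can be written out directly. The sketch below runs the Euclidean iteration over a small prime field standing in for GF(2^8); the polynomial representation (lowest-order coefficients first) and the toy inputs are assumptions for illustration only.

        P = 929  # a small prime field stands in for GF(2^8)

        def deg(a):
            # degree of a coefficient list (lowest order first); deg(0) = -1
            d = len(a) - 1
            while d >= 0 and a[d] % P == 0:
                d -= 1
            return d

        def poly_divmod(a, b):
            a = a[:]
            db = deg(b)
            inv = pow(b[db], P - 2, P)          # inverse of the leading coefficient
            q = [0] * max(deg(a) - db + 1, 1)
            while deg(a) >= db:
                k = deg(a) - db
                c = a[deg(a)] * inv % P
                q[k] = c
                for i in range(db + 1):         # subtract c * x^k * b(x)
                    a[k + i] = (a[k + i] - c * b[i]) % P
            return q, a

        def eea_remainder(a, b, threshold):
            # Euclidean iterations, stopped as soon as the remainder degree
            # drops below the threshold: the stopping rule discussed above
            r_prev, r = a, b
            while deg(r) >= threshold:
                _, rem = poly_divmod(r_prev, r)
                r_prev, r = r, rem
            return r

        # toy run: reduce x^4 against a syndrome-like cubic until degree < 2
        print(eea_remainder([0, 0, 0, 0, 1], [5, 1, 3, 7], threshold=2))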

  1. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore we calculate an upper bound on the probability of getting more than one codeword as output...

  2. Conceptual design for a universal Reed-Solomon decoder

    Science.gov (United States)

    Miller, R. L.; Deutsch, L. J.

    1981-11-01

    An algorithm which enables one Reed-Solomon decoder to process other Reed-Solomon encoded data from a different code is presented. The sole requirement is that both codes have the same length, the same rate, and the same field of coefficients. It is pointed out that only very simple pre- and post-processing hardware is needed to resolve an encoder/decoder incompatibility and that no encoder modification is needed.

  3. VLSI architecture for a Reed-Solomon decoder

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie

    1992-07-01

    A basic single-chip building block for a Reed-Solomon (RS) decoder system is partitioned into a plurality of sections, the first of which consists of a plurality of syndrome subcells each of which contains identical standard-basis finite-field multipliers that are programmable between 10 and 8 bit operation. A desired number of basic building blocks may be assembled to provide a RS decoder of any syndrome subcell size that is programmable between 10 and 8 bit operation.

  4. A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation

    OpenAIRE

    SAHA, AMRITA; Khapra, Mitesh M.; Chandar, Sarath; Rajendran, Janarthanan; Cho, Kyunghyun

    2016-01-01

    Interlingua based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X, Z and Y wherein we are interested in generating sequences in Y starting from inf...

  5. Distributed STBCs with Full-diversity Partial Interference Cancellation Decoding

    OpenAIRE

    Natarajan, Lakshmi Prasad; Rajan, B. Sundar

    2010-01-01

    Recently, Guo and Xia introduced low complexity decoders called Partial Interference Cancellation (PIC) and PIC with Successive Interference Cancellation (PIC-SIC), which include the Zero Forcing (ZF) and ZF-SIC receivers as special cases, for point-to-point MIMO channels. In this paper, we show that PIC and PIC-SIC decoders are capable of achieving the full cooperative diversity available in wireless relay networks. We give sufficient conditions for a Distributed Space-Time Block Code (DSTBC...

  6. A Low Power Viterbi Decoder for Trellis Coded Modulation System

    OpenAIRE

    M. Jansi Rani; S. Vidheswari

    2014-01-01

    Forward Error Correction (FEC) schemes are an essential component of wireless communication systems. Convolutional codes are employed to implement FEC but the complexity of corresponding decoders increases exponentially according to the constraint length. Present wireless standards such as Third generation (3G) systems, GSM, 802.11A, 802.16 utilize some configuration of convolutional coding. Convolutional encoding with Viterbi decoding is a powerful method for forward error co...

  7. Emotional facial expressions decoding in siblings of children with autism

    OpenAIRE

    Dethier, Marie; Sojic, Barbara; Blairy, Sylvie

    2010-01-01

    The ability to identify other people’s emotions, including their emotional facial expressions (EFE), is fundamental to many social processes. Individuals with autism spectrum disorders (ASD) show deficits in several empathy-related processes, including EFE decoding (e.g. Ashwin, Chapman, Colle, & Baron-Cohen, 2007). The object of this study was to investigate the capacity to accurately decode EFE in siblings of children with ASD. Indeed, autism is considered to be substantially influenced by g...

  8. Conceptual design for a universal Reed-Solomon decoder

    Science.gov (United States)

    Miller, R. L.; Deutsch, L. J.

    1981-01-01

    An algorithm which enables one Reed-Solomon decoder to process other Reed-Solomon encoded data from a different code is presented. The sole requirement is that both codes have the same length, the same rate, and the same field of coefficients. It is pointed out that only very simple pre- and post-processing hardware is needed to resolve an encoder/decoder incompatibility and that no encoder modification is needed.

  9. Rethinking the Role of Decodable Texts in Early Literacy Instruction

    OpenAIRE

    Frey, Rick Chan

    2012-01-01

    Decodable books based on previous classroom instruction are the most frequently used texts for 1st grade reading instruction in public schools, yet no empirical studies exist demonstrating their efficacy or their benefits for beginning readers. This study attempts to address this gap in the research literature by analyzing the reading behaviors of a group of 1st grade students reading the decodable texts included as part of the 1st grade reading curriculum in a large public, urban school dist...

  10. New syndrome decoder for (n, 1) convolutional codes

    Science.gov (United States)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example is given for the binary nonsystematic (2, 1) CC.

  11. Modified Euclidean Algorithms for Decoding Reed-Solomon Codes

    OpenAIRE

    Sarwate, Dilip V.; Yan, Zhiyuan

    2009-01-01

    The extended Euclidean algorithm (EEA) for polynomial greatest common divisors is commonly used in solving the key equation in the decoding of Reed-Solomon (RS) codes, and more generally in BCH decoding. For this particular application, the iterations in the EEA are stopped when the degree of the remainder polynomial falls below a threshold. While determining the degree of a polynomial is a simple task for human beings, hardware implementation of this stopping rule is more complicated. This p...

  12. Bit-Serial Reed Solomon Decoders in VLSI

    OpenAIRE

    Whiting, Douglas L.

    1984-01-01

    Reed-Solomon codes are known to provide excellent error-correcting capabilities on many types of communication channels. Although efficient decoding algorithms have been known for over fifteen years, currently available decoder systems are large both in size and in power consumption. Such systems typically use a single, very fast, fully parallel finite-field multiplier in a sequential architecture. Thus, more processing time is required as the code redundancy increases. By using ...

  13. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore we calculate an upper bound on the probability of getting more than one codeword as output...

  14. Molecular decoding using luminescence from an entangled porous framework

    OpenAIRE

    Takashima, Yohei; Martínez, Virginia Martínez; Furukawa, Shuhei; Kondo, Mio; Shimomura, Satoru; Uehara, Hiromitsu; Nakahama, Masashi; Sugimoto, Kunihisa; Kitagawa, Susumu

    2011-01-01

    Chemosensors detect a single target molecule from among several molecules, but cannot differentiate targets from one another. In this study, we report a molecular decoding strategy in which a single host domain accommodates a class of molecules and distinguishes between them with a corresponding readout. We synthesized the decoding host by embedding naphthalenediimide into the scaffold of an entangled porous framework that exhibited structural dynamics due to the dislocation of two chemically...

  15. Color decoding in combined radiodiagnosis of lung neoplasms

    International Nuclear Information System (INIS)

    It has been shown that color decoding of X-ray and radioisotope information from 50 lung cancer patients (150 radiograms and 80 scannograms) raises diagnostic efficacy. It opens up opportunities for objective quantitative processing of results by performing histographic analysis of zones of similar optical density. These data are impossible to obtain using common methods of visual and roentgenogrammetric analysis. TV decoders of the UAR-1 type are recommended for clinical practice.

  16. Empirical Evaluation of Approximation Algorithms for Probabilistic Decoding

    OpenAIRE

    Rish, Irina; Kask, Kalev; Dechter, Rina

    2013-01-01

    It was recently shown that the problem of decoding messages transmitted through a noisy channel can be formulated as a belief updating task over a probabilistic network [McEliece]. Moreover, it was observed that iterative application of the (linear time) Pearl's belief propagation algorithm designed for polytrees outperformed state-of-the-art decoding algorithms, even though the corresponding networks may have many cycles. This paper demonstrates empirically that an approximation algorithm ap...

  17. Comparing Nonparametric Bayesian Tree Priors for Clonal Reconstruction of Tumors

    OpenAIRE

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2014-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstructio...

  18. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    Science.gov (United States)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.

  19. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan

    2010-08-01

    While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching type decoder structures such as the Sphere decoder and the K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature, with widely varying performance-to-area and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare it to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
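
    The breadth-first K-best search itself is compact. Below is an illustrative real-valued version: QR-decompose the channel, then sweep the detection tree keeping the K lowest-metric partial candidates per level. The 4x4 channel, BPSK alphabet and choice of K are invented, and the sorting/enumeration tricks that the VLSI literature optimizes are deliberately glossed over.

        import numpy as np

        def k_best_detect(y, H, symbols, K=4):
            Q, R = np.linalg.qr(H)          # y = H s + n  ->  z = R s + Q^T n
            z = Q.T @ y
            n = H.shape[1]
            partial = [([], 0.0)]           # (chosen symbols, accumulated metric)
            for level in range(n - 1, -1, -1):
                expanded = []
                for cand, metric in partial:
                    for s in symbols:
                        seq = [s] + cand    # covers layers level .. n-1
                        resid = z[level] - R[level, level:] @ np.array(seq)
                        expanded.append((seq, metric + resid ** 2))
                # the K-best pruning step: keep only the K smallest metrics
                partial = sorted(expanded, key=lambda t: t[1])[:K]
            return np.array(partial[0][0])

        rng = np.random.default_rng(2)
        H = rng.standard_normal((4, 4))
        s = rng.choice([-1.0, 1.0], size=4)
        y = H @ s + 0.05 * rng.standard_normal(4)
        print(k_best_detect(y, H, [-1.0, 1.0]), "true:", s)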

  20. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Chrislin Lélé

    2010-01-01

    Full Text Available Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we analyse the theoretical and simulated performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  1. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Lélé Chrislin

    2010-01-01

    Full Text Available Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we analyse the theoretical and simulated performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  2. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.

  3. Bayesian Methods and Universal Darwinism

    OpenAIRE

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of...

  4. Portfolio Allocation for Bayesian Optimization

    OpenAIRE

    Brochu, Eric; Hoffman, Matthew W.; De Freitas, Nando

    2010-01-01

    Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several differen...

  5. Neuroanatomy, neurology and Bayesian networks

    OpenAIRE

    Bielza Lozoya, Maria Concepcion

    2014-01-01

    Bayesian networks are data mining models with clear semantics and a sound theoretical foundation. In this keynote talk we will pinpoint a number of neuroscience problems that can be addressed using Bayesian networks. In neuroanatomy, we will show computer simulation models of dendritic trees and classification of neuron types, both based on morphological features. In neurology, we will present the search for genetic biomarkers in Alzheimer's disease and the prediction of health-related qualit...

  6. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...

  7. Efficient program for decoding the (255, 223) Reed-Solomon code over GF(2^8) with both errors and erasures, using transform decoding

    Science.gov (United States)

    Miller, R. L.; Truong, T. K.; Reed, I. S.

    1980-07-01

    The paper deals with a method developed for decoding a (255, 223) Reed-Solomon code over GF(2^8) with both errors and erasures. The matrix of decoding times for correcting errors and erasures of the code using a simplified decoder is presented. It is shown that the proposed algorithm is faster by a factor of three to seven.

  8. The Differential Contributions of Auditory-verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders

    OpenAIRE

    Squires, Katie E.

    2013-01-01

    This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory, phonological awareness, orthographic knowledge, listening comprehension and verbal and nonverbal intelligence. Bivariate correlations revealed th...

  9. Iterative Decoding of Parallel Concatenated Block Codes and Coset Based MAP Decoding Algorithm for F24 Code

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-dimensional concatenation scheme for block codes is introduced, in which information symbols are interleaved and re-encoded more than once. It provides a convenient platform to design high performance codes with flexible interleaver size. Coset based MAP soft-in/soft-out decoding algorithms are presented for the F24 code. Simulation results show that the proposed coding scheme can achieve high coding gain with flexible interleaver length and very low decoding complexity.

  10. Bayesian Interpretations of Heteroskedastic Consistent Covariance Estimators Using the Informed Bayesian Bootstrap

    OpenAIRE

    Dale Poirier

    2008-01-01

    This paper provides Bayesian rationalizations for White’s heteroskedastic consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.

  11. Order Statistics Based List Decoding Techniques for Linear Binary Block Codes

    CERN Document Server

    Alnawayseh, Saif E A

    2011-01-01

    The order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated. The construction of the list of test error patterns is considered. The original order statistics decoding is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of the order statistics based decoding is further reduced by assuming a partial ordering of the received bits in order to avoid the complex Gaussian elimination. The probability of the test error patterns in the decoding list is derived. The bit error rate performance and the decoding complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both the bit error rate performance as well a...

  12. Decoding reality the universe as quantum information

    CERN Document Server

    Vedral, Vlatko

    2010-01-01

    In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...

  13. Older adults have difficulty in decoding sarcasm.

    Science.gov (United States)

    Phillips, Louise H; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley

    2015-12-01

    Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are age-related differences in the interpretation of sarcastic statements. Using both video and verbal materials, 116 participants aged between 18 and 86 completed judgments about whether statements should be interpreted literally or sarcastically. For the verbal stories task, older adults were poorer at understanding sarcastic intent compared with younger and middle-aged participants, but there was no age difference in interpreting control stories. For the video task, older adults showed poorer understanding of sarcastic exchanges compared with younger and middle-aged counterparts, but there was no age difference in understanding the meaning of sincere interactions. For the videos task, the age differences were mediated by the ability to perceive facial expressions of emotion. Age effects could not be explained in terms of variance in working memory. These results indicate that increased age is associated with specific difficulties in using nonverbal and contextual cues to understand sarcastic intent. (PsycINFO Database Record) PMID:26501728

  14. Fast mental states decoding in mixed reality.

    Directory of Open Access Journals (Sweden)

    Daniele eDe Massari

    2014-11-01

    Full Text Available The combination of Brain-Computer Interface technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information on ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during a mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest the applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of the data representation in order to boost information processing in mixed reality.
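
    As a rough illustration of the classification setup described above, the sketch below cross-validates LDA and SVM classifiers on synthetic windowed features; the feature matrix is a stand-in for the XIM EEG windows (e.g. 5 s windows with a 200 ms hop), not real data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_windows, n_features = 300, 64                 # one row per sliding window
        X = rng.normal(size=(n_windows, n_features))
        y = rng.integers(0, 3, size=n_windows)          # three mental states
        X[y == 1] += 0.4                                # inject some class structure

        for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC())]:
            print(name, cross_val_score(clf, X, y, cv=5).mean())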

  15. Why Hawking Radiation Cannot Be Decoded

    CERN Document Server

    Ong, Yen Chin; Chen, Pisin

    2014-01-01

    One of the great difficulties in the theory of black hole evaporation is that the most decisive phenomena tend to occur when the black hole is extremely hot: that is, when the physics is most poorly understood. Fortunately, a crucial step in the Harlow-Hayden approach to the firewall paradox, concerning the time available for decoding of Hawking radiation emanating from charged AdS black holes, can be made to work without relying on the unknown physics of black holes with extremely high temperatures; in fact, it relies on the properties of cold black holes. Here we clarify this surprising point. The approach is based on ideas borrowed from applications of the AdS/CFT correspondence to the quark-gluon plasma. Firewalls aside, our work presents a detailed analysis of the thermodynamics and evolution of evaporating charged AdS black holes with flat event horizons. We show that, in one way or another, these black holes are always eventually destroyed in a time which, while long by normal standards, is short relat...

  16. DECODING OF STRUCTURAL AND LOGICAL CODES

    Directory of Open Access Journals (Sweden)

    Yu. D. Ivanov

    2016-01-01

    Full Text Available The article describes the main points of structural and logical coding and the features of SLC codes. It presents the basic steps of a generalized algorithm for decoding SLC, which is based on the method of perfect matrix arrangement (PMA) of the n-dimensional cube vertices for adequate representation and transformation of Boolean functions, which in turn rests on a method of generating sequences of variables for building the maximum coverage of the cube vertices. The structural and logical codes (SLC) use the natural logic redundancy of the infimum disjunctive normal forms (IDNF) of Boolean functions, which form the basis for building SLC codes and correcting the errors that occur during data transfer in real discrete channels with independent errors. The main task is to define the basic relations between the logical redundancy of the implemented SLC codes and the boundary values of the multiplicity of independent errors that can be corrected. The principal difference between SLC codes and the well-known correcting codes is that the redundancy needed to correct errors in converting discrete information is not introduced as an additional code sequence but arises naturally during the construction of the SLC codewords.

  17. Bayesian individualization via sampling-based methods.

    Science.gov (United States)

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
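
    The decision step described above reduces to a small amount of code once posterior samples are in hand. A minimal sketch under stand-in assumptions (a one-compartment steady-state model, lognormal posterior draws for clearance, squared-error loss around a target concentration):

        import numpy as np

        rng = np.random.default_rng(0)
        clearance = rng.lognormal(1.0, 0.3, size=2000)   # posterior draws (assumed)
        TARGET = 10.0                                    # desired concentration

        def expected_loss(dose_rate):
            conc = dose_rate / clearance                 # steady state: rate / CL
            return np.mean((conc - TARGET) ** 2)         # Monte Carlo expected loss

        doses = np.linspace(1.0, 100.0, 400)
        best = doses[np.argmin([expected_loss(d) for d in doses])]
        print(f"dose rate minimising expected loss: {best:.1f}")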

  18. Nonparametric Bayesian Classification

    CERN Document Server

    Coram, M A

    2002-01-01

    A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...
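
    A toy rendering of such a prior helps fix ideas: draw a random number of split points, then a level for each resulting piece. The Poisson/uniform choices below are illustrative assumptions, not the paper's exact prior:

        import numpy as np

        def sample_partition_function(rng, mean_splits=3.0):
            k = rng.poisson(mean_splits)                     # number of split points
            splits = np.sort(rng.uniform(0.0, 1.0, size=k))  # random partition of [0,1]
            heights = rng.uniform(0.0, 1.0, size=k + 1)      # one level per piece
            return lambda x: heights[np.searchsorted(splits, x)]

        f = sample_partition_function(np.random.default_rng(1))
        print([float(f(x)) for x in (0.1, 0.5, 0.9)])        # a piecewise-constant draw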

  19. BAT - Bayesian Analysis Toolkit

    International Nuclear Information System (INIS)

    One of the most vital steps in any data analysis is the statistical analysis and comparison with the prediction of a theoretical model. The many uncertainties associated with the theoretical model and the observed data require a robust statistical analysis tool. The Bayesian Analysis Toolkit (BAT) is a powerful statistical analysis software package based on Bayes' Theorem, developed to evaluate the posterior probability distribution for models and their parameters. It implements Markov Chain Monte Carlo to get the full posterior probability distribution, which in turn provides a straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as Simulated Annealing, allow evaluation of the global mode of the posterior. BAT is developed in C++ and allows for a flexible definition of models. A set of predefined models covering standard statistical cases is also included in BAT. It has been interfaced to other commonly used software packages such as ROOT, Minuit, RooStats and CUBA. An overview of the software and its algorithms is provided along with several physics examples to cover a range of applications of this statistical tool. Future plans, new features and recent developments are briefly discussed.
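
    BAT itself is a C++ package; the sketch below only illustrates the core mechanism it builds on, a random-walk Metropolis chain sampling a user-defined posterior (toy likelihood and prior assumed):

        import numpy as np

        def log_posterior(theta):
            log_prior = 0.0 if -10.0 < theta < 10.0 else -np.inf  # flat prior
            return log_prior - 0.5 * (theta - 2.0) ** 2           # toy Gaussian likelihood

        rng = np.random.default_rng(42)
        theta, chain = 0.0, []
        for _ in range(20000):
            proposal = theta + rng.normal(scale=0.5)              # random-walk step
            if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
                theta = proposal                                  # Metropolis accept
            chain.append(theta)

        samples = np.array(chain[5000:])                          # drop burn-in
        print(samples.mean(), samples.std())                      # posterior summaries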

  20. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion

  1. A Bayesian Approach to Identifying New Risk Factors for Dementia

    Science.gov (United States)

    Wen, Yen-Hsia; Wu, Shihn-Sheng; Lin, Chun-Hung Richard; Tsai, Jui-Hsiu; Yang, Pinchen; Chang, Yang-Pei; Tseng, Kuan-Hua

    2016-01-01

    Dementia is one of the most disabling and burdensome health conditions worldwide. In this study, we identified new potential risk factors for dementia from nationwide longitudinal population-based data by using Bayesian statistics. We first tested the consistency of the results obtained using Bayesian statistics with those obtained using classical frequentist probability for 4 recognized risk factors for dementia, namely severe head injury, depression, diabetes mellitus, and vascular diseases. Then, we used Bayesian statistics to verify 2 new potential risk factors for dementia, namely hearing loss and senile cataract, determined from Taiwan's National Health Insurance Research Database. We included a total of 6546 (6.0%) patients diagnosed with dementia. We observed older age, female sex, and lower income as independent risk factors for dementia. Moreover, we verified the 4 recognized risk factors for dementia in the older Taiwanese population; their odds ratios (ORs) ranged from 3.469 to 1.207. Furthermore, we observed that hearing loss (OR = 1.577) and senile cataract (OR = 1.549) were associated with an increased risk of dementia. We found that the results obtained using Bayesian statistics for assessing risk factors for dementia, such as head injury, depression, DM, and vascular diseases, were consistent with those obtained using classical frequentist probability. Moreover, hearing loss and senile cataract were found to be potential risk factors for dementia in the older Taiwanese population. Bayesian statistics could help clinicians explore other potential risk factors for dementia and develop appropriate treatment strategies for these patients. PMID:27227925

  2. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
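
    The closed-form posterior underlying such linearized inversion is worth spelling out: for d = Gm + e with Gaussian prior m ~ N(mu0, S0) and noise e ~ N(0, Se), the posterior is Gaussian with explicit mean and covariance. A small numerical sketch with toy stand-ins for the forward operator and covariances:

        import numpy as np

        rng = np.random.default_rng(0)
        G = rng.normal(size=(50, 3))                  # linearized forward operator (toy)
        m_true = np.array([2.0, -1.0, 0.5])
        Se = 0.1 * np.eye(50)                         # noise covariance
        d = G @ m_true + rng.multivariate_normal(np.zeros(50), Se)

        mu0, S0 = np.zeros(3), np.eye(3)              # Gaussian prior
        S0i, Sei = np.linalg.inv(S0), np.linalg.inv(Se)

        S_post = np.linalg.inv(S0i + G.T @ Sei @ G)       # posterior covariance
        mu_post = S_post @ (S0i @ mu0 + G.T @ Sei @ d)    # posterior mean
        print(mu_post, np.sqrt(np.diag(S_post)))          # estimates + uncertainties

    Because the mean and covariance are explicit, prediction intervals follow directly, which is what makes this class of inversion computationally fast.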

  3. Reconfigurable and Parallelized Network Coding Decoder for VANETs

    Directory of Open Access Journals (Sweden)

    Sunwoo Kim

    2012-01-01

    Full Text Available Network coding is a promising technique for data communications in wired and wireless networks. However, it places an additional computing overhead on the receiving node in exchange for the improved bandwidth. This paper proposes an FPGA-based reconfigurable and parallelized network coding decoder for embedded systems, especially for vehicular ad hoc networks. In our design, a rapid decoding process can be achieved by exploiting parallelism in the coefficient vector operations. The proposed decoder is implemented on a modern Xilinx Virtex-5 device and its performance is evaluated against the performance of software decoding on various embedded processors. The performance on four different sizes of the coefficient matrix is measured, and a decoding throughput of 18.3 Mbps for the size 16 × 16 and 6.5 Mbps for 128 × 128 has been achieved at an operating frequency of 64.5 MHz. Compared to the recent TEGRA 250 processor, the result obtained with a 128 × 128 coefficient matrix reaches a speedup of up to 5.06.
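
    At the receiver, network coding decoding amounts to Gaussian elimination on the coefficient matrix. The sketch below shows the idea over GF(2) for brevity; the paper's decoder performs the analogous eliminations over GF(2^8) in parallel hardware:

        import numpy as np

        def decode_gf2(coeffs, payloads):
            # Solve C x = p over GF(2); each row is one received coded packet.
            A = np.concatenate([coeffs, payloads], axis=1) % 2
            n = coeffs.shape[1]
            for col in range(n):
                pivot = next(r for r in range(col, len(A)) if A[r, col])
                A[[col, pivot]] = A[[pivot, col]]      # move a pivot row up
                for r in range(len(A)):
                    if r != col and A[r, col]:
                        A[r] ^= A[col]                 # eliminate the column entry
            return A[:n, n:]                           # decoded source payloads

        rng = np.random.default_rng(5)
        src = rng.integers(0, 2, size=(4, 8))          # 4 source packets, 8 bits each
        C = rng.integers(0, 2, size=(4, 4))            # random coding coefficients
        while round(np.linalg.det(C)) % 2 == 0:        # retry until invertible mod 2
            C = rng.integers(0, 2, size=(4, 4))
        assert (decode_gf2(C, (C @ src) % 2) == src).all()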

  4. Joint source-channel decoding for MPEG-2 video transmission

    Science.gov (United States)

    Yin, Liuguo; Chen, Weigang; Lu, Jianhua; Chen, Chang Wen

    2004-10-01

    Joint source-channel coding schemes have been proven to be effective ways for reliable multimedia communications. In this paper, a joint source-channel decoding (JSCD) scheme that combines the hidden Markov source (HMS) estimation and low-density parity-check (LDPC) coding is proposed for the standard MPEG-2 video transmission. The LDPC code of the proposed scheme has a near-Shannon-limit error-correcting capability, while the HMS estimator may accurately extract the residual redundancy within the MPEG-2 video stream without any prior information. Furthermore, with a joint iterative decoding algorithm, the estimated source redundancy may be well exploited by the LDPC decoder, and the channel decoding feedback may refine the subsequence HMS estimation, thereby effectively improving the system performance. On the other hand, we also show that the proposed JSCD scheme has approximately the same computation complexity as that of the standard decoding scheme. Moreover, it is worth noting that the proposed scheme is based on separation encoding schemes, which is very convenient to be applied to existing multimedia transmission systems.

  5. Performance evaluation of H.264 decoder on different processors

    Directory of Open Access Journals (Sweden)

    H.S.Prasantha

    2010-08-01

    Full Text Available H.264/AVC (Advanced Video Coding) is the newest video coding standard of the moving video coding experts group. The decoder is standardized by imposing restrictions on the bit stream and syntax, and by defining the process of decoding syntax elements such that every decoder conforming to the standard will produce similar output when an encoded bit stream is provided as input. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, real-time video conferencing, direct-broadcast TV (television), Blu-ray disc, DVB (Digital Video Broadcast) broadcast, streaming video and others. The paper proposes to port the H.264/AVC decoder to various processors such as the TI DSP (Digital Signal Processor), ARM (Advanced RISC Machines) and P4 (Pentium) processors. The paper also proposes to analyze and compare video quality metrics for different encoded video sequences, to investigate the decoder performance on different processors with and without the deblocking filter, and to compare the performance based on different video quality measures.

  6. Measuring Integrated Information from the Decoding Perspective.

    Science.gov (United States)

    Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: the lower bound of integrated information should be 0, and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis in diverse areas of biology. PMID:26796119

  7. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distribution and model checking; Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...

  8. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability in the way that has been observed.

  9. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  10. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  11. Incorporating variability in honey bee waggle dance decoding improves the mapping of communicated resource locations.

    Science.gov (United States)

    Schürch, Roger; Couvillon, Margaret J; Burns, Dominic D R; Tasman, Kiah; Waxman, David; Ratnieks, Francis L W

    2013-12-01

    Honey bees communicate to nestmates locations of resources, including food, water, tree resin and nest sites, by making waggle dances. Dances are composed of repeated waggle runs, which encode the distance and direction vector from the hive or swarm to the resource. Distance is encoded in the duration of the waggle run, and direction is encoded in the angle of the dancer's body relative to vertical. Glass-walled observation hives enable researchers to observe or video, and decode waggle runs. However, variation in these signals makes it impossible to determine exact locations advertised. We present a Bayesian duration to distance calibration curve using Markov Chain Monte Carlo simulations that allows us to quantify how accurately distance to a food resource can be predicted from waggle run durations within a single dance. An angular calibration shows that angular precision does not change over distance, resulting in spatial scatter proportional to distance. We demonstrate how to combine distance and direction to produce a spatial probability distribution of the resource location advertised by the dance. Finally, we show how to map honey bee foraging and discuss how our approach can be integrated with Geographic Information Systems to better understand honey bee foraging ecology. PMID:24132490
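
    The mapping from a dance to a spatial distribution can be sketched in a few lines: resample run durations and angles, pass durations through the calibration curve with its residual scatter, add angular noise, and convert to Cartesian coordinates. The calibration constants below are illustrative stand-ins, not the paper's fitted values:

        import numpy as np

        rng = np.random.default_rng(7)
        durations = np.array([1.9, 2.1, 2.0, 2.2])         # waggle-run durations (s)
        angles = np.radians([41.0, 39.0, 42.0, 40.0])      # run angles, one dance

        SLOPE, INTERCEPT, DIST_SD = 1100.0, -200.0, 150.0  # assumed calibration (m/s, m)
        n = 5000
        dist = (SLOPE * rng.choice(durations, n) + INTERCEPT
                + rng.normal(0.0, DIST_SD, n))             # distance draws
        ang = rng.choice(angles, n) + rng.normal(0.0, np.radians(5.0), n)

        x, y = dist * np.sin(ang), dist * np.cos(ang)      # east/north of the hive
        print(f"advertised location ~ {x.mean():.0f} m E, {y.mean():.0f} m N of hive")

    Note how the fixed angular noise translates into positional scatter that grows with distance, matching the calibration result described above.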

  12. Linear Universal Decoding for Compound Channels: a Local to Global Geometric Approach

    CERN Document Server

    Abbe, Emmanuel

    2008-01-01

    Over discrete memoryless channels (DMC), linear decoders (maximizing additive metrics) afford several nice properties. In particular, if suitable encoders are employed, the use of decoding algorithm with manageable complexities is permitted. Maximum likelihood is an example of linear decoder. For a compound DMC, decoders that perform well without the channel's knowledge are required in order to achieve capacity. Several such decoders have been studied in the literature. However, there is no such known decoder which is linear. Hence, the problem of finding linear decoders achieving capacity for compound DMC is addressed, and it is shown that under minor concessions, such decoders exist and can be constructed. This paper also develops a "local geometric analysis", which allows in particular, to solve the above problem. By considering very noisy channels, the original problem is reduced, in the limit, to an inner product space problem, for which insightful solutions can be found. The local setting can then provi...

  13. Bayesian test and Kuhn's paradigm

    Institute of Scientific and Technical Information of China (English)

    Chen Xiaoping

    2006-01-01

    Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn greatly underrated the function of scientific testing in his pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.

  14. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  15. Fault Secure Encoder and Decoder with Clock Gating

    Directory of Open Access Journals (Sweden)

    N.Kapileswar

    2012-04-01

    Full Text Available This paper presents a circuit design for a low-power fault-secure encoder and decoder system. Memory cells in logic circuits have been protected from soft errors for more than a decade owing to rising soft error rates; the circuitry around the memory block is now also susceptible to soft errors and must be protected from faults. The proposed design uses error correcting codes and a ring counter addressing scheme, and several new clock gating techniques are proposed for the ring counter to reduce power consumption. Fault-secure, error-free, low-power encoder and decoder logic circuits can be achieved by the proposed design. Simulation results show a great improvement in power consumption: the fault-secure encoder and decoder with clock gating by the CG-element consumes approximately half the power of the equivalent fault-free circuit that does not employ clock gating.

  16. FEC decoder design optimization for mobile satellite communications

    Science.gov (United States)

    Roy, Ashim; Lewi, Leng

    A new telecommunications service for location determination via satellite is being proposed for the continental USA and Europe, which provides users with the capability to find the location of, and communicate from, a moving vehicle to a central hub and vice versa. This communications system is expected to operate in an extremely noisy channel in the presence of fading. In order to achieve high levels of data integrity, it is essential to employ forward error correcting (FEC) encoding and decoding techniques in such mobile satellite systems. A constraint length k = 7 FEC decoder has been implemented in a single chip for such systems. The single chip implementation of the maximum likelihood decoder helps to minimize the cost, size, and power consumption, and improves the bit error rate (BER) performance of the mobile earth terminal (MET).

  17. A Low Power Viterbi Decoder for Trellis Coded Modulation System

    Directory of Open Access Journals (Sweden)

    M. Jansi Rani

    2014-02-01

    Full Text Available Forward Error Correction (FEC) schemes are an essential component of wireless communication systems. Convolutional codes are employed to implement FEC, but the complexity of the corresponding decoders increases exponentially with the constraint length. Present wireless standards such as third generation (3G) systems, GSM, 802.11a and 802.16 utilize some configuration of convolutional coding. Convolutional encoding with Viterbi decoding is a powerful method for forward error correction, and the Viterbi algorithm is the most extensively employed decoding algorithm for convolutional codes. The main aim of this project is to design an FPGA-based Viterbi algorithm which encodes/decodes the data. In this project the encoding/decoding algorithm is designed and programmed into the FPGA.
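
    For scale, the whole algorithm fits in a few dozen lines in software. A minimal hard-decision Viterbi decoder for the textbook constraint-length-3, rate-1/2 code with generators (7, 5) in octal, as a small stand-in for the FPGA design above:

        G = (0b111, 0b101)                 # generator polynomials (7, 5) octal

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state     # input bit plus two memory bits
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def viterbi(received):
            INF = float("inf")
            metrics, paths = [0, INF, INF, INF], [[] for _ in range(4)]
            for i in range(0, len(received), 2):
                new_m, new_p = [INF] * 4, [None] * 4
                for s in range(4):
                    if metrics[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | s
                        exp = [bin(reg & g).count("1") & 1 for g in G]
                        cost = (exp[0] != received[i]) + (exp[1] != received[i + 1])
                        ns = reg >> 1
                        if metrics[s] + cost < new_m[ns]:
                            new_m[ns], new_p[ns] = metrics[s] + cost, paths[s] + [b]
                metrics, paths = new_m, new_p
            return paths[min(range(4), key=lambda s: metrics[s])]

        msg = [1, 0, 1, 1, 0, 0]
        rx = encode(msg)
        rx[3] ^= 1                         # flip one channel bit
        assert viterbi(rx) == msg          # the single error is corrected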

  18. Completion Delay Minimization for Instantly Decodable Network Codes

    CERN Document Server

    Sorour, Sameh

    2012-01-01

    In this paper, we consider the problem of minimizing the completion delay for instantly decodable network coding (IDNC), in wireless multicast and broadcast scenarios. We are interested in this class of network coding due to its numerous benefits, such as low decoding delay, low coding and decoding complexities and simple receiver requirements. We first extend the IDNC graph, which represents all feasible IDNC coding opportunities, to efficiently operate in both multicast and broadcast scenarios. We then formulate the minimum completion delay problem for IDNC as a stochastic shortest path (SSP) problem. Although finding the optimal policy using SSP is intractable, we use this formulation to draw the theoretical guidelines for the policies that can efficiently reduce the completion delay in IDNC. Based on these guidelines, we design a maximum weight clique selection algorithm, which can efficiently reduce the IDNC completion delay in polynomial time. We also design a quadratic time heuristic clique selection a...

  19. Ternary Tree and Memory-Efficient Huffman Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2011-01-01

    Full Text Available In this study, the focus was on the use of a ternary tree over a binary tree. A new one-pass algorithm for decoding adaptive Huffman ternary tree codes was implemented. To reduce the memory size and speed up the search for a symbol in a Huffman tree, we exploited the property of the encoded symbols and proposed a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree. In the first algorithm we find the starting and ending address of a code to determine its length, and in the second algorithm we decode the ternary tree code using a binary search method.
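
    The decoding walk itself is simple once the ternary tree is in memory: follow one of three branches per code symbol and emit a symbol on reaching a leaf. A toy sketch (the tree and codes are illustrative, not the paper's data structure):

        # each node is either a leaf symbol or a 3-tuple of children
        tree = (("a", "b", "c"), "d", "e")    # a->00, b->01, c->02, d->1, e->2

        def decode(trits, root):
            out, node = [], root
            for t in trits:                   # t is a code symbol in {0, 1, 2}
                node = node[t]
                if isinstance(node, str):     # reached a leaf: emit and restart
                    out.append(node)
                    node = root
            return "".join(out)

        assert decode([0, 1, 1, 0, 2, 2], tree) == "bdce"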

  20. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed on lattices of phonemes which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbours search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take sequentiality into account at the input level, to memorize the context and to treat noisy or erroneous inputs. (author)

  1. Local-Optimality Guarantees for Optimal Decoding Based on Paths

    CERN Document Server

    Halabi, Nissim

    2012-01-01

    This paper presents a unified analysis framework that captures recent advances in the study of local-optimality characterizations for codes on graphs. These local-optimality characterizations are based on combinatorial structures embedded in the Tanner graph of the code. Local-optimality implies both maximum-likelihood (ML) optimality and linear-programming (LP) decoding optimality. Also, an iterative message-passing decoding algorithm is guaranteed to find the unique locally-optimal codeword, if one exists. We demonstrate this proof technique by considering a definition of local-optimality that is based on the simplest combinatorial structures in Tanner graphs, namely, paths of length $h$. We apply the technique of local optimality to a family of Tanner codes. Inverse polynomial bounds in the code length are proved on the word error probability of LP-decoding for this family of Tanner codes.

  2. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  3. Bayesian networks and food security - An introduction

    NARCIS (Netherlands)

    Stein, A.

    2004-01-01

    This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision sup

  4. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    C. Dimitrakakis

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more st

  5. Bayesian Approaches to Non-parametric Estimation of Densities on the Unit Interval

    OpenAIRE

    Song Li; Silvapulle, Mervyn J.; Param Silvapulle; Xibin Zhang

    2012-01-01

    This paper investigates nonparametric estimation of density on [0,1]. The kernel estimator of density on [0,1] has been found to be sensitive to both bandwidth and kernel. This paper proposes a unified Bayesian framework for choosing both the bandwidth and kernel function. In a simulation study, the Bayesian bandwidth estimator performed better than others, and kernel estimators were sensitive to the choice of the kernel and the shapes of the population densities on [0,1]. The simulation and ...

  6. Assessing and tuning brain decoders: cross-validation, caveats, and guidelines

    OpenAIRE

    Varoquaux, Gaël; Raamana, Pradeep; Engemann, Denis; Hoyos-Idrobo, Andrés; Schwartz, Yannick; Thirion, Bertrand

    2016-01-01

    Decoding, i.e., prediction from brain images or signals, calls for empirical evaluation of its predictive power. Such evaluation is achieved via cross-validation, a method also used to tune decoders' hyper-parameters. This paper is a review of cross-validation procedures for decoding in neuroimaging. It includes a didactic overview of the relevant theoretical considerations. Practical aspects are highlighted with an extensive empirical study of the common decoders in within- and across-subject pr...
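
    In scikit-learn terms, the workflow such reviews recommend (tuning hyper-parameters in an inner loop, estimating accuracy in an outer loop) looks roughly like this; synthetic data stands in for extracted brain features:

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

        X, y = make_classification(n_samples=200, n_features=500,
                                   n_informative=20, random_state=0)

        # inner loop tunes the regularization strength C; the outer loop
        # gives an approximately unbiased estimate of decoding accuracy
        inner = GridSearchCV(LogisticRegression(max_iter=1000),
                             {"C": [0.01, 0.1, 1.0, 10.0]}, cv=3)
        scores = cross_val_score(inner, X, y,
                                 cv=KFold(5, shuffle=True, random_state=0))
        print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")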

  7. Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes

    OpenAIRE

    Chen, Ning; Yan, Zhiyuan

    2008-01-01

    For the majority of the applications of Reed-Solomon (RS) codes, hard decision decoding is based on syndromes. Recently, there has been renewed interest in decoding RS codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding for RS codes, and compare it to that of syndrome-based decoding. Aiming to provide guidelines to practical applications, our complexity analysis differs in several aspects from existing asymptotic complexity analysis, which is t...

  8. Polymeric Optical Code-Division Multiple-Access (CDMA) Encoder and Decoder Modules

    OpenAIRE

    Chen, Ray T.; Xuejun Lu

    2011-01-01

    We propose low-cost polymeric optical waveguide-based optical CDMA encoder and decoder modules. The structures of the optical CDMA encoder and decoder modules are presented. The performance of the optical CDMA encoder and decoder modules is simulated using 10-chip binary phase-shift keying (BPSK) coding schemes. The optical CDMA encoder and decoder modules can effectively transmit and recover optical CDMA data streams. The SNR of the received signal is analyzed and determined to be primari...

  9. Joint Estimation and Decoding of Space-Time Trellis Codes

    Directory of Open Access Journals (Sweden)

    Zhang Jianqiu

    2002-01-01

    Full Text Available We explore the possibility of using an emerging tool in statistical signal processing, sequential importance sampling (SIS), for joint estimation and decoding of space-time trellis codes (STTC). First, we provide background on SIS, and then we discuss its application to space-time trellis code (STTC) systems. It is shown through simulations that SIS is suitable for joint estimation and decoding of STTC with time-varying flat-fading channels when phase ambiguity is avoided. We used a design criterion for STTCs and temporally correlated channels that combats phase ambiguity without pilot signaling. We have shown by simulations that the design is valid.

  10. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    Science.gov (United States)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-control FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by
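
    The core of the training scheme (fitting a decoder to inferred rather than measured EMG) can be sketched with a linear decoder and plain gradient descent on the prediction error; the firing rates and target patterns below are synthetic stand-ins:

        import numpy as np

        rng = np.random.default_rng(3)
        n_neurons, n_muscles, n_samples = 40, 4, 1000
        rates = rng.poisson(5.0, (n_samples, n_neurons)).astype(float)
        W_true = 0.1 * rng.normal(size=(n_neurons, n_muscles))
        emg_target = np.clip(rates @ W_true, 0.0, None)   # inferred "optimal" EMG

        W, lr = np.zeros((n_neurons, n_muscles)), 1e-4
        for _ in range(500):
            grad = rates.T @ (rates @ W - emg_target) / n_samples  # d(MSE)/dW
            W -= lr * grad                                         # descent step

        print("final MSE:", float(np.mean((rates @ W - emg_target) ** 2)))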

  11. Codes on the Klein quartic, ideals, and decoding

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    1987-01-01

    distance 3. The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic. This curve has the maximal number of rational points over GF(2^{3}) allowed by Serre... descriptions as left ideals in the group-algebra GF(2^{3})[G]. This description allows for easy decoding. For instance, in the case of the single error correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra.

  12. Conventional Tanner Graph for Recursive Convolutional Codes and Associated Decoding

    Institute of Scientific and Technical Information of China (English)

    SUN Hong

    2001-01-01

    A different representation of recursive systematic convolutional (RSC) codes is proposed. This representation can be realized by a conventional Tanner graph. The graph becomes a tree by introducing a hidden edge. It is shown that the sum-product algorithm applied to this graph model is equivalent to the BCJR algorithm for turbo decoding with lower computational complexity. The message-passing chain of the BCJR algorithm is presented more exactly in the graph. In addition, the proposed representation of RSC codes provides an efficient method to set up the trellis, and the conventional Tanner graph for RSC codes directly provides the architecture for decoding.

  13. Real Time Decoding of Color Symbol for Optical Positioning System

    OpenAIRE

    Abdul Waheed Malik; Benny Thörnberg; Qaisar Anwar; Tor Arne Johanson; Khurram Shahzad

    2015-01-01

    This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal to noise ratio. The symbol is designed to carry eight bit information. Real time decoding of this symbol is performed using a heterogeneous combination of Field Programmable Gate Array (FPGA) and a m...

  14. PERFORMANCE OF THREE STAGE TURBO-EQUALIZATION-DECODING

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya

    2003-01-01

    An increasing demand for high data rate transmission and protection over bandlimited channels with severe inter-symbol interference has resulted in a flurry of activity to improve channel equalization. In conjunction with equalization, channel coding-decoding can be employed to improve system performance. In this letter, the performance of three-stage turbo equalization-decoding employing log maximum a posteriori probability is experimentally evaluated with a fading simulator. The BER is evaluated using various information sequence and interleaver sizes, taking into account that the communication medium is a noisy inter-symbol interference channel.

  15. A VLSI Reed-Solomon decoder architecture for concatenate-coded space and spread spectrum communications

    Science.gov (United States)

    Liu, K. Y.

    In this paper, a VLSI Reed-Solomon (RS) decoder architecture for concatenate-coded space and spread spectrum communications is presented. The known decoding procedures for RS codes are exploited and modified to obtain a repetitive and recursive decoding technique which is suitable for VLSI implementation and pipeline processing.

  16. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance.

  17. For whom will the Bayesian agents vote?

    CERN Document Server

    Caticha, Nestor; Vicente, Renato

    2015-01-01

    Within an agent-based model where moral classifications are socially learned, we ask if a population of agents behaves in a way that may be compared with conservative or liberal positions in the real political spectrum. We assume that agents first experience a formative period, in which they adjust their learning style acting as supervised Bayesian adaptive learners. The formative phase is followed by a period of social influence by reinforcement learning. By comparing data generated by the agents with data from a sample of 15000 Moral Foundation questionnaires we found the following. 1. The number of information exchanges in the formative phase correlates positively with statistics identifying liberals in the social influence phase. This is consistent with recent evidence that connects the dopamine receptor D4-7R gene, political orientation and early age social clique size. 2. The learning algorithms that result from the formative phase vary in the way they treat novelty and corroborative information with mo...

  18. Bayesian Analysis of Experimental Data

    Directory of Open Access Journals (Sweden)

    Lalmohan Bhar

    2013-10-01

    Full Text Available Analysis of experimental data from a Bayesian point of view has been considered. Appropriate methodology has been developed for application to designed experiments. A Normal-Gamma distribution has been considered as the prior distribution. The developed methodology has been applied to real experimental data taken from long-term fertilizer experiments.
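
    For a normal model with unknown mean and precision, the Normal-Gamma prior is conjugate, so the posterior hyper-parameters have closed forms. A minimal sketch of the update (the hyper-parameter values are illustrative):

        import numpy as np

        def normal_gamma_update(y, mu0=0.0, kappa0=1.0, a0=1.0, b0=1.0):
            n, ybar = len(y), np.mean(y)
            kappa_n = kappa0 + n
            mu_n = (kappa0 * mu0 + n * ybar) / kappa_n
            a_n = a0 + n / 2.0
            b_n = (b0 + 0.5 * np.sum((y - ybar) ** 2)
                   + kappa0 * n * (ybar - mu0) ** 2 / (2.0 * kappa_n))
            return mu_n, kappa_n, a_n, b_n     # posterior Normal-Gamma parameters

        y = np.random.default_rng(0).normal(5.0, 2.0, size=30)
        print(normal_gamma_update(y))          # mu_n is the posterior mean of mu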

  19. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...

  1. ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY

    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa

    2013-01-01

    Full Text Available The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process, based on the chi-squared test, to verify the existence of dependence between variables in the data model, in order to identify the reasons which prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike other existing work, as well as adjustments in the case of limit values between two adjacent classes. Furthermore, variable weights, calculated with a mutual information function, are used in the calculation of a posteriori probabilities. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in the error rate was observed: the naïve Bayesian network's error rate dropped from twenty-five percent to five percent relative to the initial classification results, while in the hierarchical network the error rate not only dropped by fifteen percentage points but reached zero.
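
    The pipeline the study describes (chi-squared screening of variables, then a naïve Bayes classifier) translates almost line for line into scikit-learn; the synthetic data below stands in for the study's data sets:

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MinMaxScaler

        X, y = make_classification(n_samples=400, n_features=40,
                                   n_informative=8, random_state=0)

        # chi2 requires non-negative features, hence the MinMaxScaler in front
        clf = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=10), MultinomialNB())
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())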

  2. Bayesian Agglomerative Clustering with Coalescents

    OpenAIRE

    Teh, Yee Whye; Daumé III, Hal; Roy, Daniel

    2009-01-01

    We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.

  3. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...

  4. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered the best justification of Bayesian analysis and the maximum entropy principle as applied in the natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and on Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  5. A genetic and spatial Bayesian analysis of mastitis resistance

    OpenAIRE

    Frigessi Arnoldo; Sæbø Solve

    2004-01-01

    A nationwide health card recording system for dairy cattle was introduced in Norway in 1975 (the Norwegian Cattle Health Services). The data base holds information on mastitis occurrences on an individual cow basis. A reduction in mastitis frequency across the population is desired, and for this purpose risk factors are investigated. In this paper a Bayesian proportional hazards model is used for modelling the time to first veterinary treatment of clinical mastitis, including both ge...

  7. Regional fertility data analysis: A small area Bayesian approach

    OpenAIRE

    Eduardo A. Castro; Zhen Zhang; Arnab Bhattacharjee; Martins, José M.; Taps Maiti

    2013-01-01

    Accurate estimation of demographic variables such as mortality, fertility and migrations, by age groups and regions, is important for analyses and policy. However, traditional estimates based on within cohort counts are often inaccurate, particularly when the sub-populations considered are small. We use small area Bayesian statistics to develop a model for age-specific fertility rates. In turn, such small area estimation requires accurate descriptions of spatial and cross-section dependence. ...

  8. Bayesian inference in genetic analysis of diploid populations: inbreeding coefficient and outcrossing rate estimation

    Directory of Open Access Journals (Sweden)

    Ricardo Luis dos Reis

    2008-08-01

    Full Text Available In this study, Bayesian methodology was used to estimate the inbreeding coefficient and the outcrossing rate of a diploid population by means of Cockerham's random model for allele frequencies. A data-simulation system was structured to validate the methodology. The Gibbs sampler algorithm was implemented in the R statistical software to obtain samples from the marginal posterior distributions of the inbreeding coefficient and the outcrossing rate. The Bayesian method proved efficient in estimating the parameters, since the parametric values used in the simulation fell within the 95% credibility interval in all scenarios considered. Convergence of the Gibbs sampler was verified, thus validating the results obtained.
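
    A minimal stand-in for the record's Gibbs scheme (which was written in R) is sketched below in Python as a random-walk Metropolis sampler on a single biallelic locus. The genotype-frequency model p² + fpq, 2pq(1−f), q² + fpq, the flat priors, and the toy counts are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Genotype counts (n_AA, n_Aa, n_aa) -- invented toy data.
        counts = np.array([50, 30, 20])

        def log_lik(p, f):
            """Multinomial log-likelihood under inbreeding:
            P(AA) = p^2 + f*p*q, P(Aa) = 2*p*q*(1-f), P(aa) = q^2 + f*p*q."""
            q = 1.0 - p
            probs = np.array([p*p + f*p*q, 2*p*q*(1-f), q*q + f*p*q])
            return -np.inf if np.any(probs <= 0) else counts @ np.log(probs)

        def metropolis(n_iter=20_000, step=0.05):
            """Random-walk Metropolis with flat priors on p and f (an assumption)."""
            p, f, ll = 0.5, 0.1, log_lik(0.5, 0.1)
            out = np.empty((n_iter, 2))
            for t in range(n_iter):
                p_new, f_new = p + rng.normal(0, step), f + rng.normal(0, step)
                if 0 < p_new < 1 and 0 <= f_new < 1:
                    ll_new = log_lik(p_new, f_new)
                    if np.log(rng.uniform()) < ll_new - ll:
                        p, f, ll = p_new, f_new, ll_new
                out[t] = p, f
            return out

        draws = metropolis()[5_000:]            # drop burn-in
        print("posterior means (p, f):", draws.mean(axis=0).round(3))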

  9. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
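
    The core of the BUS reinterpretation — Bayesian updating recast as an acceptance event that reliability methods can then target — can be illustrated with a deliberately brute-force Monte Carlo sketch; the point of FORM, IS, and SuS is to replace this naive loop. The toy limit state, prior, and measurement below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy reliability problem: capacity theta (uncertain) and demand x;
        # the rare event is failure {theta - x < 0}. A noisy measurement d
        # of theta is available. All numbers are illustrative assumptions.
        def likelihood(theta, d=2.0, sigma=0.3):
            # scaled so its maximum is 1, as the rejection step requires a bound
            return np.exp(-0.5 * ((theta - d) / sigma) ** 2)

        N = 200_000
        theta = rng.normal(2.5, 1.0, N)                    # prior samples of capacity
        accept = rng.uniform(size=N) < likelihood(theta)   # BUS rejection step
        theta_post = theta[accept]

        x = rng.normal(0.0, 1.0, theta_post.size)          # demand samples
        p_fail = np.mean(theta_post - x < 0)
        print("accepted %d samples, posterior P(failure) = %.4f"
              % (theta_post.size, p_fail))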

  10. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  11. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network, and inference is performed by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree...

  12. Decoding the Disciplines: An Approach to Scientific Thinking

    Science.gov (United States)

    Pinnow, Eleni

    2016-01-01

    The Decoding the Disciplines methodology aims to teach students to think like experts in discipline-specific tasks. The central aspect of the methodology is to identify a bottleneck in the course content: a particular topic that a substantial number of students struggle to master. The current study compared the efficacy of standard lecture and…

  13. Fast decoding of codes from algebraic plane curves

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1992-01-01

    Improvement to an earlier decoding algorithm for codes from algebraic geometry is presented. For codes from an arbitrary regular plane curve the authors correct up to d*/2 − m²/8 + m/4 − 9/8 errors, where d* is the designed distance of the code and m is the degree of the curve. The complexity of finding...

  14. Complete ML Decoding of the (73,45) PG Code

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    Our recent proof of the completeness of decoding by list bit flipping is reviewed. The proof is based on an enumeration of all cosets of low weight in terms of their minimum weight and syndrome weight. By using a geometric description of the error patterns we characterize all remaining cosets....

  15. Designing of precomputational-based low-power Viterbi decoder

    OpenAIRE

    Yang, JL; Wong, AKK

    2004-01-01

    This work addresses the low-power VLSI implementation of the Viterbi decoder (VD). A new precomputational scheme applied to the trellis butterflies calculation is presented. The proposed scheme is implemented in a 16-state, rate 1/3 VD. Gate-level power verification indicates that the proposed design reduces the power dissipated by the original trellis butterflies calculation by 42%.

  16. Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes

    OpenAIRE

    Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi

    2007-01-01

    A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
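
    A minimal sketch of the single-bit GDBF rule over a bipolar-decision AWGN model is given below; the toy parity-check matrix and received vector are invented, and the inverse function Δ_k = x_k·y_k + Σ_{i∈M(k)} Π_{j∈N(i)} x_j follows the gradient-descent formulation the abstract refers to.

        import numpy as np

        def gdbf_decode(H, y, max_iter=100):
            """Single-bit gradient descent bit flipping.
            H: (m, n) 0/1 parity-check matrix; y: received bipolar vector."""
            x = np.sign(y)                       # hard decisions in {-1, +1}
            for _ in range(max_iter):
                # bipolar syndrome: product of x over the bits of each check
                syn = np.array([np.prod(x[row == 1]) for row in H])
                if np.all(syn > 0):              # all checks satisfied
                    break
                # inverse function: channel correlation + adjacent check values
                delta = x * y + H.T @ syn
                k = np.argmin(delta)             # flip the least reliable bit
                x[k] = -x[k]
            return (x < 0).astype(int)           # map +1 -> 0, -1 -> 1

        # Toy example: a small parity-check matrix and a noisy all-zero codeword
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])
        y = np.array([0.8, 1.1, -0.2, 0.9, 1.2, 0.7])   # BPSK: bit 0 -> +1
        print(gdbf_decode(H, y))                        # expect all zeros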

  17. Encoding and decoding a telecommunication standard command code

    Science.gov (United States)

    Benjauthrit, B.; Truong, T. K.

    1977-01-01

    A simple encoder/decoder implementation scheme is described for the (63,56) BCH code which can be used to correct single errors and to detect any even-number of errors. The scheme is feasible for onboard-spacecraft implementation.
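
    A hedged sketch of such a scheme: systematic encoding by GF(2) polynomial division and single-error correction by syndrome lookup. The generator g(x) = (x⁶ + x + 1)(x + 1) used here is one standard choice yielding a (63,56) code with minimum distance 4 (single-error correction, even-error detection); whether it matches the code in this record is an assumption.

        # g(x) = x^7 + x^6 + x^2 + 1, the expansion of (x^6 + x + 1)(x + 1);
        # polynomials are represented as Python integers over GF(2).
        G = 0b11000101

        def poly_mod(dividend, divisor=G):
            """Remainder of GF(2) polynomial division."""
            dlen = divisor.bit_length()
            while dividend.bit_length() >= dlen:
                dividend ^= divisor << (dividend.bit_length() - dlen)
            return dividend

        def encode(msg):                  # msg: 56-bit integer
            return (msg << 7) | poly_mod(msg << 7)

        def decode(word):                 # word: 63-bit integer
            syn = poly_mod(word)
            if syn == 0:
                return word >> 7
            for i in range(63):           # locate the single flipped bit
                if poly_mod(1 << i) == syn:
                    return (word ^ (1 << i)) >> 7
            raise ValueError("detected an uncorrectable (even-weight) error")

        msg = 0x5A5A5A5A5A5A5A              # arbitrary 56-bit message
        cw = encode(msg)
        assert decode(cw ^ (1 << 17)) == msg  # one flipped bit is corrected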

  18. Interior-Point Algorithms for Linear-Programming Decoding

    OpenAIRE

    Vontobel, Pascal O.

    2008-01-01

    Interior-point algorithms constitute a very interesting class of algorithms for solving linear-programming problems. In this paper we study efficient implementations of such algorithms for solving the linear program that appears in the linear-programming decoder formulation.

  19. Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG.

    Science.gov (United States)

    O'Sullivan, James A; Power, Alan J; Mesgarani, Nima; Rajaram, Siddharth; Foxe, John J; Shinn-Cunningham, Barbara G; Slaney, Malcolm; Shamma, Shihab A; Lalor, Edmund C

    2015-07-01

    How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain-computer interfaces. PMID:24429136
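
    The stimulus-reconstruction approach itself is linear and easy to sketch: regress time-lagged EEG onto the attended speech envelope with ridge regression, then classify attention by which speaker's envelope the reconstruction correlates with best. Everything below (lags, regularization, the synthetic stand-in data) is an illustrative assumption, and the decoder is evaluated in-sample only for brevity.

        import numpy as np

        rng = np.random.default_rng(0)

        def lagged(eeg, n_lags):
            """Stack time-lagged copies of every channel into a design matrix."""
            T, C = eeg.shape
            X = np.zeros((T, C * n_lags))
            for L in range(n_lags):
                X[L:, L * C:(L + 1) * C] = eeg[:T - L]
            return X

        def train_decoder(eeg, envelope, n_lags=16, lam=1e2):
            """Ridge regression: w = (X'X + lam*I)^-1 X'y."""
            X = lagged(eeg, n_lags)
            A = X.T @ X + lam * np.eye(X.shape[1])
            return np.linalg.solve(A, X.T @ envelope)

        def decode_attention(eeg, env_a, env_b, w, n_lags=16):
            """Attend to whichever speaker the reconstruction tracks best."""
            rec = lagged(eeg, n_lags) @ w
            c = lambda e: np.corrcoef(rec, e)[0, 1]
            return "A" if c(env_a) > c(env_b) else "B"

        # Synthetic stand-in: EEG = mixed attended envelope + noise
        T, C = 4000, 32
        env_a, env_b = np.abs(rng.standard_normal((2, T)))
        eeg = (np.outer(env_a, rng.standard_normal(C)) * 0.3
               + rng.standard_normal((T, C)))
        w = train_decoder(eeg, env_a)
        print(decode_attention(eeg, env_a, env_b, w))    # expect "A"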

  20. Are visual impairments responsible for emotion decoding deficits in alcohol-dependence?

    Directory of Open Access Journals (Sweden)

    Fabien D'Hondt

    2014-03-01

    Full Text Available Emotional visual perception deficits constitute a major problem in alcohol-dependence. Indeed, the ability to assess the affective content of external cues is a key adaptive function, as it allows on the one hand the processing of potentially threatening or advantageous stimuli, and on the other hand the establishment of appropriate social interactions (by enabling rapid decoding of the affective state of others from their facial expressions). While such deficits have been classically considered as reflecting a genuine emotion decoding impairment in alcohol-dependence, converging evidence suggests that underlying visual deficits might play a role in these emotional alterations. This hypothesis appears all the more relevant given that data from healthy populations indicate that a coarse but fast analysis of visual inputs allows emotional processing to arise from early stages of perception. After reviewing these findings and the associated models, the present paper highlights data showing that rapid interactions between emotion and vision could be impaired in alcohol-dependence, and proposes new research avenues that may ultimately offer a better understanding of the roots of emotional deficits in this pathological state.

  1. Windowed Decoding of Protograph-based LDPC Convolutional Codes over Erasure Channels

    CERN Document Server

    Iyengar, Aravind; Siegel, Paul; Wolf, Jack; Vanelli-Coralli, Alessandro; Corazza, Giovanni

    2010-01-01

    We consider a windowed decoding scheme for LDPC convolutional codes that is based on the belief-propagation (BP) algorithm. We discuss the advantages of this decoding scheme and identify certain characteristics of LDPC convolutional code ensembles that exhibit good performance with the windowed decoder. We will consider the performance of these ensembles and codes over erasure channels with and without memory. We show that the structure of LDPC convolutional code ensembles is suitable to obtain performance close to the theoretical limits over the memoryless erasure channel, both for the BP decoder and windowed decoding. However, the same structure imposes limitations on the performance over erasure channels with memory.

  2. An efficient decoding for low density parity check codes

    Science.gov (United States)

    Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie

    2009-12-01

    Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard, and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended using LDPC codes in deep-space and near-Earth communications. It is clear that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is therefore of great interest. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the check matrix of the LDPC code is analyzed to find the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. After that, the CNUs and VNUs are rearranged and divided into several smaller parts; with the help of some assistant logic circuits, these smaller parts can be grouped into CNUs during check node updating and into VNUs during variable node updating. The smaller parts are called node update kernel units (NKU) and the assistant logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which brings a great reduction in hardware resources.

  3. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed: 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing; 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  4. Application of classical and Bayesian inference to the estimation of weed population density model parameters

    Directory of Open Access Journals (Sweden)

    L.S. Vismara

    2007-12-01

    Full Text Available The dynamics of weed populations can be described by a system of equations relating the densities of seeds produced and of seedlings in crop areas. The model parameter values can be either directly inferred from experimentation and statistical analysis or obtained from the literature. The objective of this work was to estimate the weed population density model parameters from an experiment conducted in the experimental area of Embrapa Milho e Sorgo, Sete Lagoas, MG, using both classical and Bayesian inference procedures.

  5. Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods

    OpenAIRE

    Zhu, Weixuan

    2016-01-01

    The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet process. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...

  6. Decoding bipedal locomotion from the rat sensorimotor cortex

    Science.gov (United States)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds
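
    The offline gait-phase classification the authors describe can be approximated, in spirit, by a linear classifier on binned firing rates. The sketch below substitutes synthetic Poisson spike counts for the real recordings and uses linear discriminant analysis; the bin counts, unit counts, and tuning model are assumptions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Synthetic stand-in for binned firing rates: 600 time bins x 24 units,
        # labelled by gait phase (0 = stance, 1 = swing). Real features would
        # be spike counts from the implanted micro-wire array.
        n_bins, n_units = 600, 24
        phase = rng.integers(0, 2, n_bins)
        tuning = rng.standard_normal(n_units)          # per-unit phase modulation
        rates = rng.poisson(np.exp(1.0 + 0.5 * np.outer(phase, tuning)))

        clf = LinearDiscriminantAnalysis()
        acc = cross_val_score(clf, rates, phase, cv=5).mean()
        print("stance/swing decoding accuracy: %.2f" % acc)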

  7. The design plan of a VLSI single chip (255, 223) Reed-Solomon decoder

    Science.gov (United States)

    Hsu, I. S.; Shao, H. M.; Deutsch, L. J.

    1987-11-01

    The very large-scale integration (VLSI) architecture of a single chip (255, 223) Reed-Solomon decoder for decoding both errors and erasures is described. A decoding failure detection capability is also included in this system, so that the decoder will recognize a failure to decode instead of introducing additional errors; this could happen whenever the received word contains too many errors and erasures for the code to correct. The number of transistors needed to implement this decoder is estimated at about 75,000 if the delay for the received message is not included. This is in contrast to the older transform decoding algorithm, which needs about 100,000 transistors, although the transform decoder is simpler in architecture than the time decoder. It is therefore possible to implement a single-chip (255, 223) Reed-Solomon decoder with today's VLSI technology. An implementation strategy for the decoder system is presented. This represents the first step in a plan to take advantage of advanced coding techniques to realize a 2.0 dB coding gain for future space missions.

  8. A new VLSI architecture for a single-chip-type Reed-Solomon decoder

    Science.gov (United States)

    Hsu, I. S.; Truong, T. K.

    1989-02-01

    A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder by using replication of a single VLSI chip. It is anticipated that this single chip type RS decoder approach will save substantial development and production costs. It is estimated that reduction in cost by a factor of four is possible with this new architecture. Furthermore, this Reed-Solomon decoder is programmable between 8 bit and 10 bit symbol sizes. Therefore, both an 8 bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10 bit decoder are obtained at the same time, and when concatenated with a (15,1/6) Viterbi decoder, provide an additional 2.1-dB coding gain.

  9. Design of FBG En/decoders in Coherent 2-D Time-polarization OCDMA Systems

    Science.gov (United States)

    Hou, Fen-fei; Yang, Ming

    2012-12-01

    A novel fiber Bragg grating (FBG)-based en/decoder for the two-dimensional (2-D) time-spreading and polarization multiplexer optical coding is proposed. Compared with other 2-D en/decoders, the proposed en/decoding for an optical code-division multiple-access (OCDMA) system uses a single phase-encoded FBG and coherent en/decoding. Furthermore, combined with reconstruction-equivalent-chirp technology, such en/decoders can be realized with a conventional simple fabrication setup. Experimental results of such en/decoders and the corresponding system test at a data rate of 5 Gbit/s demonstrate that this kind of 2-D FBG-based en/decoders could improve the performances of OCDMA systems.

  10. NetDecoder: a network biology platform that decodes context-specific biological networks and gene activities

    Science.gov (United States)

    da Rocha, Edroaldo Lummertz; Ung, Choong Yong; McGehee, Cordelia D.; Correia, Cristina; Li, Hu

    2016-01-01

    The sequential chain of interactions altering the binary state of a biomolecule represents the ‘information flow’ within a cellular network that determines phenotypic properties. Given the lack of computational tools to dissect context-dependent networks and gene activities, we developed NetDecoder, a network biology platform that models context-dependent information flows using pairwise phenotypic comparative analyses of protein–protein interactions. Using breast cancer, dyslipidemia and Alzheimer's disease as case studies, we demonstrate NetDecoder dissects subnetworks to identify key players significantly impacting cell behaviour specific to a given disease context. We further show genes residing in disease-specific subnetworks are enriched in disease-related signalling pathways and information flow profiles, which drive the resulting disease phenotypes. We also devise a novel scoring scheme to quantify key genes—network routers, which influence many genes, key targets, which are influenced by many genes, and high impact genes, which experience a significant change in regulation. We show the robustness of our results against parameter changes. Our network biology platform includes freely available source code (http://www.NetDecoder.org) for researchers to explore genome-wide context-dependent information flow profiles and key genes, given a set of genes of particular interest and transcriptome data. More importantly, NetDecoder will enable researchers to uncover context-dependent drug targets. PMID:26975659

  11. NetDecoder: a network biology platform that decodes context-specific biological networks and gene activities.

    Science.gov (United States)

    da Rocha, Edroaldo Lummertz; Ung, Choong Yong; McGehee, Cordelia D; Correia, Cristina; Li, Hu

    2016-06-01

    The sequential chain of interactions altering the binary state of a biomolecule represents the 'information flow' within a cellular network that determines phenotypic properties. Given the lack of computational tools to dissect context-dependent networks and gene activities, we developed NetDecoder, a network biology platform that models context-dependent information flows using pairwise phenotypic comparative analyses of protein-protein interactions. Using breast cancer, dyslipidemia and Alzheimer's disease as case studies, we demonstrate NetDecoder dissects subnetworks to identify key players significantly impacting cell behaviour specific to a given disease context. We further show genes residing in disease-specific subnetworks are enriched in disease-related signalling pathways and information flow profiles, which drive the resulting disease phenotypes. We also devise a novel scoring scheme to quantify key genes-network routers, which influence many genes, key targets, which are influenced by many genes, and high impact genes, which experience a significant change in regulation. We show the robustness of our results against parameter changes. Our network biology platform includes freely available source code (http://www.NetDecoder.org) for researchers to explore genome-wide context-dependent information flow profiles and key genes, given a set of genes of particular interest and transcriptome data. More importantly, NetDecoder will enable researchers to uncover context-dependent drug targets. PMID:26975659

  12. Genetic evaluation of popcorn families using a Bayesian approach via the independence chain algorithm

    Directory of Open Access Journals (Sweden)

    Marcos Rodovalho

    2014-11-01

    Full Text Available The objective of this study was to examine genetic parameters of popping expansion and grain yield in a trial of 169 half-sib families using a Bayesian approach. The independence chain algorithm with informative priors for the components of residual and family variance (inverse-gamma prior distributions) was used. Popping expansion was found to be moderately heritable, with a posterior mode of h² of 0.34 and a 90% Bayesian confidence interval of 0.22 to 0.44. The heritability of grain yield (family level) was moderate (h² = 0.4), with a Bayesian confidence interval of 0.28 to 0.49. The target population contains sufficient genetic variability for subsequent breeding cycles, and the Bayesian approach is a useful alternative for scientific inference in the genetic evaluation of popcorn.
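
    A minimal independence-chain sampler in this spirit is sketched below: variance components are proposed directly from inverse-gamma priors, so the Metropolis–Hastings ratio reduces to a likelihood ratio, and heritability is computed as h² = 4σ²_f/(σ²_f + σ²_e) for half-sib families. The balanced design, plug-in grand mean, prior hyperparameters, and simulated data are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        # Simulated balanced half-sib design: F families, n offspring each.
        F, n = 169, 10
        sf2_true, se2_true = 0.15, 0.85            # family and residual variances
        fam = rng.normal(0, np.sqrt(sf2_true), F)
        y = fam[:, None] + rng.normal(0, np.sqrt(se2_true), (F, n))

        ybar = y.mean(axis=1)
        mu = y.mean()                              # assumption: plug-in grand mean
        ssw = ((y - ybar[:, None]) ** 2).sum()     # within-family sum of squares

        def log_lik(sf2, se2):
            v = sf2 + se2 / n                      # variance of a family mean
            return (-0.5 * (((ybar - mu) ** 2 / v).sum() + F * np.log(v))
                    - 0.5 * (ssw / se2 + F * (n - 1) * np.log(se2)))

        def draw_invgamma(a, b):                   # proposal == inverse-gamma prior
            return 1.0 / rng.gamma(a, 1.0 / b)

        a_f, b_f, a_e, b_e = 2.0, 0.2, 2.0, 1.0    # assumed prior hyperparameters
        sf2, se2 = 0.2, 1.0
        ll, h2 = log_lik(sf2, se2), []
        for _ in range(50_000):
            sf2_p, se2_p = draw_invgamma(a_f, b_f), draw_invgamma(a_e, b_e)
            ll_p = log_lik(sf2_p, se2_p)
            # proposal density cancels the prior, leaving the likelihood ratio
            if np.log(rng.uniform()) < ll_p - ll:
                sf2, se2, ll = sf2_p, se2_p, ll_p
            h2.append(4 * sf2 / (sf2 + se2))
        h2 = np.array(h2[10_000:])
        print("posterior mean h2: %.2f, 90%% interval: %s"
              % (h2.mean(), np.percentile(h2, [5, 95]).round(2)))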

  13. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models within a Bayesian framework, using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, with the Metropolis algorithm used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.

  14. Attention in a bayesian framework

    DEFF Research Database (Denmark)

    Whiteley, Louise Emma; Sahani, Maneesh

    2012-01-01

    The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of... These include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its...

  15. Bayesian Sampling using Condition Indicators

    DEFF Research Database (Denmark)

    Faber, Michael H.; Sørensen, John Dalsgaard

    2002-01-01

    The problem of quality control of components is considered for the special case where the acceptable failure rate is low, the test costs are high, and where it may be difficult or impossible to test the condition of interest directly. Based on classical control theory and the concept of condition indicators introduced by Benjamin and Cornell (1970), a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators. This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion.

  16. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

    Full Text Available In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt-and-pepper noise. The inference in the model is discussed in detail for 3×3 and 5×5 configurations, and examples of the performance of the procedure are given.

  17. Bayesian Seismology of the Sun

    CERN Document Server

    Gruberbauer, Michael

    2013-01-01

    We perform a Bayesian grid-based analysis of the solar l=0,1,2 and 3 p modes obtained via BiSON in order to deliver the first Bayesian asteroseismic analysis of the solar composition problem. We do not find decisive evidence to prefer either of the contending chemical compositions, although the revised solar abundances (AGSS09) are more probable in general. We do find indications for systematic problems in standard stellar evolution models, unrelated to the consequences of inadequate modelling of the outer layers on the higher-order modes. The seismic observables are best fit by solar models that are several hundred million years older than the meteoritic age of the Sun. Similarly, meteoritic age calibrated models do not adequately reproduce the observed seismic observables. Our results suggest that these problems will affect any asteroseismic inference that relies on a calibration to the Sun.

  18. Bayesian priors for transiting planets

    CERN Document Server

    Kipping, David M

    2016-01-01

    As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...

  19. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  20. Bayesian inference on proportional elections.

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to compute the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian framework using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate is also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
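
    The methodology can be sketched compactly: place a Dirichlet posterior over vote shares (multinomial likelihood, uniform prior), simulate shares, push each draw through the seat-allocation rule, and count how often a party wins representation. The largest-averages (D'Hondt-style) divisor rule and the toy poll counts below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(7)

        def dhondt(votes, seats):
            """Largest-averages seat allocation (D'Hondt divisors 1, 2, 3, ...)."""
            alloc = np.zeros(len(votes), dtype=int)
            for _ in range(seats):
                alloc[np.argmax(votes / (alloc + 1))] += 1
            return alloc

        # Toy poll: counts of respondents preferring each of four parties.
        poll = np.array([420, 310, 180, 90])
        seats, n_sim = 10, 20_000

        # Posterior over vote shares: Dirichlet with a uniform prior (alpha = 1).
        shares = rng.dirichlet(poll + 1, size=n_sim)
        got_seat = np.array([dhondt(s, seats) > 0 for s in shares])
        print("P(at least one seat):", got_seat.mean(axis=0).round(3))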

  1. A Bayesian Nonparametric IRT Model

    OpenAIRE

    Karabatsos, George

    2015-01-01

    This paper introduces a flexible Bayesian nonparametric Item Response Theory (IRT) model, which applies to dichotomous or polytomous item responses, and which can apply to either unidimensional or multidimensional scaling. This is an infinite-mixture IRT model, with person ability and item difficulty parameters, and with a random intercept parameter that is assigned a mixing distribution, with mixing weights a probit function of other person and item parameters. As a result of its flexibility...

  2. Bayesian segmentation of hyperspectral images

    CERN Document Server

    Mohammadpour, Adel; Mohammad-Djafari, Ali

    2007-01-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  3. Bayesian segmentation of hyperspectral images

    Science.gov (United States)

    Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali

    2004-11-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  4. Bayesian Stable Isotope Mixing Models

    OpenAIRE

    Parnell, Andrew C.; Phillips, Donald L.; Bearhop, Stuart; Semmens, Brice X.; Ward, Eric J.; Moore, Jonathan W.; Andrew L Jackson; Inger, Richard

    2012-01-01

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional m...

  5. Bayesian Network--Response Regression

    OpenAIRE

    WANG, LU; Durante, Daniele; Dunson, David B.

    2016-01-01

    There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...

  6. Bayesian estimation of turbulent motion

    OpenAIRE

    Héas, P.; Herzet, C.; Mémin, E.; Heitz, D.; P. D. Mininni

    2013-01-01

    Based on physical laws describing the multi-scale structure of turbulent flows, this article proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyper-parameters, and to select the ...

  7. Elements of Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  8. Skill Rating by Bayesian Inference

    OpenAIRE

    Di Fatta, Giuseppe; Haworth, Guy McCrossan; Regan, Kenneth W.

    2009-01-01

    Systems Engineering often involves computer modelling the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game-domain of Chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of ...

  9. Topics in Nonparametric Bayesian Statistics

    OpenAIRE

    2003-01-01

    The intersection set of Bayesian and nonparametric statistics was almost empty until about 1973, but now seems to be growing at a healthy rate. This chapter gives an overview of various theoretical and applied research themes inside this field, partly complementing and extending recent reviews of Dey, Müller and Sinha (1998) and Walker, Damien, Laud and Smith (1999). The intention is not to be complete or exhaustive, but rather to touch on research areas of interest, partly by example.

  10. Cover Tree Bayesian Reinforcement Learning

    OpenAIRE

    Tziortziotis, Nikolaos; Dimitrakakis, Christos; Blekas, Konstantinos

    2013-01-01

    This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration po...

  11. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high

  12. Bayesian Kernel Mixtures for Counts

    OpenAIRE

    Canale, Antonio; David B Dunson

    2011-01-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviatio...

  13. Bayesian Optimization for Adaptive MCMC

    OpenAIRE

    Mahendran, Nimalan; Wang, Ziyu; Hamze, Firas; De Freitas, Nando

    2011-01-01

    This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to non-differentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters o...

  14. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael;

    2009-01-01

    ...and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  15. Quantile pyramids for Bayesian nonparametrics

    OpenAIRE

    2009-01-01

    Pólya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitut...

  16. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  17. Bayesian analysis of contingency tables

    OpenAIRE

    Gómez Villegas, Miguel A.; González Pérez, Beatriz

    2005-01-01

    The display of the data by means of contingency tables is used in different approaches to statistical inference, for example, to broach the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples of two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first...

  18. Bayesian Credit Ratings (new version)

    OpenAIRE

    Paola Cerchiello; Paolo Giudici

    2013-01-01

    In this contribution we aim at improving ordinal variable selection in the context of causal models. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate, and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric, and, thus, keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal ...

  19. Bayesian second law of thermodynamics

    Science.gov (United States)

    Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason

    2016-08-01

    We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_m, ρ) + ⟨F⟩_m ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨F⟩_m is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.
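
    Typeset for readability, the inequality reads as follows; the notation mirrors the abstract, while the cross-entropy definition on the second line is the standard one and is added here as context rather than taken from the record:

        \[
          \Delta H(\rho_m, \rho) + \langle \mathcal{F} \rangle_m \;\ge\; 0,
          \qquad
          H(\rho_m, \rho) \equiv -\int \rho_m(x)\,\log\rho(x)\,\mathrm{d}x .
        \]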

  20. Quantum Inference on Bayesian Networks

    Science.gov (United States)

    Yoder, Theodore; Low, Guang Hao; Chuang, Isaac

    2014-03-01

    Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speed up sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nm P(e)^-1), depending critically on P(e), the probability that the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n 2^m P(e)^-1/2) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
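
    The classical baseline here is ordinary rejection sampling, whose cost per sample scales as 1/P(e), the quantity the quantum algorithm improves to 1/√P(e). A toy two-node network makes the scaling visible; the network and its probabilities are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        # Two-node network A -> B with P(A=1) = 0.1 and
        # P(B=1 | A) = 0.9 if A else 0.05; the evidence e is B = 1.
        def sample_network():
            a = rng.uniform() < 0.1
            b = rng.uniform() < (0.9 if a else 0.05)
            return a, b

        def rejection_sample(n_kept):
            """Draw from P(A | B=1) by discarding samples that miss the evidence."""
            kept, draws = [], 0
            while len(kept) < n_kept:
                a, b = sample_network()
                draws += 1
                if b:
                    kept.append(a)
            return np.mean(kept), draws

        post, draws = rejection_sample(2_000)
        # P(e) = 0.1*0.9 + 0.9*0.05 = 0.135, so about 2000/0.135 ~ 15k draws,
        # and the exact posterior is P(A=1 | B=1) = 0.09/0.135 = 2/3.
        print("P(A=1 | B=1) = %.3f using %d draws" % (post, draws))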

  1. VHDL Design and FPGA Implementation of a Parallel Reed-Solomon (15, K, D Encoder/Decoder

    Directory of Open Access Journals (Sweden)

    Mustapha ELHAROUSSI

    2013-02-01

    Full Text Available In this article, we propose a Reed-Solomon error-correcting encoder/decoder with the complete description of a concrete implementation, starting from a VHDL description of this decoder. The design on FPGA of the (15, k, d) Reed-Solomon decoder is studied and simulated in order to implement an encoder/decoder function. The proposed architecture of the decoder can achieve a high data rate, in our case 5 clock cycles, with reasonable complexity (1010 CLBs).

  2. VHDL Design and FPGA Implementation of a Parallel Reed-Solomon (15, K, D) Encoder/Decoder

    OpenAIRE

    Mustapha ELHAROUSSI; Asmaa HAMYANI; Belkasmi, Mostafa

    2013-01-01

    In this article, we propose a Reed-Solomon error-correcting encoder/decoder, with the complete description of a concrete implementation starting from a VHDL description of the decoder. The design on FPGA of the (15, k, d) Reed-Solomon decoder is studied and simulated in order to implement an encoder/decoder function. The proposed architecture of the decoder can achieve a high data rate (5 clock cycles in our case) with reasonable complexity (1010 CLBs).
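
    To make the (15, k) structure concrete, here is a short Python sketch of systematic Reed-Solomon encoding over GF(2^4), the symbol field of a length-15 code. The primitive polynomial x^4 + x + 1 and the generator roots α^1 ... α^(2t) follow one common textbook convention, not necessarily the paper's VHDL design.

        # GF(2^4) arithmetic with primitive polynomial x^4 + x + 1 (0x13).
        EXP, LOG = [0] * 30, [0] * 16
        x = 1
        for i in range(15):
            EXP[i] = x
            LOG[x] = i
            x <<= 1
            if x & 0x10:
                x ^= 0x13
        for i in range(15, 30):
            EXP[i] = EXP[i - 15]        # duplicate table to skip a modulo

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        def poly_mul(p, q):
            out = [0] * (len(p) + len(q) - 1)
            for i, a in enumerate(p):
                for j, b in enumerate(q):
                    out[i + j] ^= gf_mul(a, b)
            return out

        def rs_generator(nsym):
            """g(x) = (x - a^1)(x - a^2)...(x - a^nsym); over GF(2), -a = a."""
            g = [1]
            for i in range(1, nsym + 1):
                g = poly_mul(g, [1, EXP[i]])
            return g

        def rs_encode(msg, nsym):
            """Systematic encoding: append the remainder of msg*x^nsym / g(x)."""
            g = rs_generator(nsym)
            rem = list(msg) + [0] * nsym
            for i in range(len(msg)):
                coef = rem[i]
                if coef:
                    for j in range(1, len(g)):
                        rem[i + j] ^= gf_mul(g[j], coef)
            return list(msg) + rem[len(msg):]

        # (15, 11) RS codeword over GF(16): 11 data symbols + 4 parity symbols.
        print(rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], 4))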

  3. Unsupervised Transient Light Curve Analysis Via Hierarchical Bayesian Inference

    CERN Document Server

    Sanders, Nathan; Soderberg, Alicia

    2014-01-01

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometr...

  4. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  5. A Bayesian Predictive Discriminant Analysis with Screened Data

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2015-09-01

    Full Text Available In the application of discriminant analysis, a situation sometimes arises where individual measurements are screened by a multidimensional screening scheme. For this situation, a discriminant analysis with screened populations is considered from a Bayesian viewpoint, and an optimal predictive rule for the analysis is proposed. In order to establish a flexible method to incorporate the prior information of the screening mechanism, we propose a hierarchical screened scale mixture of normal (HSSMN) model, which makes provision for flexible modeling of the screened observations. A Markov chain Monte Carlo (MCMC) method, using the Gibbs sampler with Metropolis–Hastings steps within it, is used to perform Bayesian inference on the HSSMN model and to approximate the optimal predictive rule. A simulation study is given to demonstrate the performance of the proposed predictive discrimination procedure.
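
    The HSSMN model itself is not reproduced here, but the sampling machinery the abstract names is generic. Below is a minimal Metropolis-within-Gibbs skeleton in Python, with an invented two-variable target standing in for the model's full conditionals.

        import math, random

        # Toy two-dimensional target standing in for the model's full
        # conditionals: log p(x, y) up to an additive constant.
        def log_target(x, y):
            return -(x * x * y * y + x * x + y * y - 8 * x - 8 * y) / 2

        def mh_step(current, log_cond, scale=1.0):
            """One random-walk Metropolis-Hastings step on one coordinate."""
            proposal = current + random.gauss(0, scale)
            accept = math.exp(min(0.0, log_cond(proposal) - log_cond(current)))
            return proposal if random.random() < accept else current

        def metropolis_within_gibbs(n_iter=5000):
            x = y = 1.0
            draws = []
            for _ in range(n_iter):
                x = mh_step(x, lambda v: log_target(v, y))   # update x | y
                y = mh_step(y, lambda v: log_target(x, v))   # update y | x
                draws.append((x, y))
            return draws

        samples = metropolis_within_gibbs()
        print(sum(s[0] for s in samples) / len(samples))     # posterior mean of x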

  6. Photometric Redshift with Bayesian Priors on Physical Properties of Galaxies

    CERN Document Server

    Tanaka, Masayuki

    2015-01-01

    We present a proof-of-concept analysis of photometric redshifts with Bayesian priors on physical properties of galaxies. This concept is particularly suited for upcoming/on-going large imaging surveys, in which only several broad-band filters are available and it is hard to break some of the degeneracies in the multi-color space. We construct model templates of galaxies using a stellar population synthesis code and apply Bayesian priors on physical properties such as stellar mass and star formation rate. These priors are a function of redshift and they effectively evolve the templates with time in an observationally motivated way. We demonstrate that the priors help reduce the degeneracy and deliver significantly improved photometric redshifts. Furthermore, we show that a template error function, which corrects for systematic flux errors in the model templates as a function of rest-frame wavelength, delivers further improvements. One great advantage of our technique is that we simultaneously measure redshifts...
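
    Mechanically, the approach reduces to multiplying a template likelihood by a redshift-dependent prior on a grid. A schematic Python version follows; the Gaussian likelihood and the declining prior are placeholders, since a real code would synthesize model fluxes per filter from the stellar population templates.

        import numpy as np

        z_grid = np.linspace(0.0, 3.0, 301)          # candidate redshifts

        def log_likelihood(z):
            """Placeholder chi-square of observed fluxes against a template
            redshifted to z; a real code synthesizes model fluxes per filter."""
            return -0.5 * ((z - 1.2) / 0.4) ** 2

        def log_prior(z):
            """Placeholder prior from physical properties (e.g. an evolving
            stellar mass function); here simply declining with z."""
            return -z

        log_post = log_likelihood(z_grid) + log_prior(z_grid)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()                           # discrete p(z | data) on the grid
        print(z_grid[np.argmax(post)])               # MAP photometric redshift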

  7. A Bayesian nonlinear mixed-effects disease progression model

    Science.gov (United States)

    Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith

    2016-01-01

    A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation method that does not consider random-effects from age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability. PMID:26798562

  8. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
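
    For illustration, here is a rejection sampler of the kind the authors advocate, in Python: draw candidates from the prior and accept them in proportion to the likelihood. The Beta(1, 1) prior and the 7-events-in-20-trials data are invented, not the paper's case-control example.

        import random

        # Posterior for a proportion p: Beta(1, 1) prior, 7 events in 20 trials.
        events, trials = 7, 20

        def likelihood(p):
            return p ** events * (1 - p) ** (trials - events)

        # Propose from the prior (uniform) and accept with probability
        # likelihood(p) / M, where M bounds the likelihood (at the MLE).
        M = likelihood(events / trials)
        posterior = []
        while len(posterior) < 10_000:
            p = random.random()                # a draw from the Beta(1, 1) prior
            if random.random() < likelihood(p) / M:
                posterior.append(p)

        print(sum(posterior) / len(posterior))  # approx. (7+1)/(20+2) = 0.364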

  9. Bayesian networks with applications in reliability analysis

    OpenAIRE

    Langseth, Helge

    2002-01-01

    A common goal of the papers in this thesis is to propose, formalize and exemplify the use of Bayesian networks as a modelling tool in reliability analysis. The papers span work in which Bayesian networks are merely used as a modelling tool (Paper I), work where models are specially designed to utilize the inference algorithms of Bayesian networks (Paper II and Paper III), and work where the focus has been on extending the applicability of Bayesian networks to very large domains (Paper IV and ...

  10. Optimal complexity scalable H.264/AVC video decoding scheme for portable multimedia devices

    Science.gov (United States)

    Lee, Hoyoung; Park, Younghyeon; Jeon, Byeungwoo

    2013-07-01

    Limited computing resources in portable multimedia devices are an obstacle in real-time video decoding of high resolution and/or high quality video contents. Ordinary H.264/AVC video decoders cannot decode video contents that exceed the limits set by their processing resources. However, in many real applications especially on portable devices, a simplified decoding with some acceptable degradation may be desirable instead of just refusing to decode such contents. For this purpose, a complexity-scalable H.264/AVC video decoding scheme is investigated in this paper. First, several simplified methods of decoding tools that have different characteristics are investigated to reduce decoding complexity and consequential degradation of reconstructed video. Then a complexity scalable H.264/AVC decoding scheme is designed by selectively combining effective simplified methods to achieve the minimum degradation. Experimental results with the H.264/AVC main profile bitstream show that its decoding complexity can be scalably controlled, and reduced by up to 44% without subjective quality loss.

  11. Memory bandwidth efficient two-layer reduced-resolution decoding of high-definition video

    Science.gov (United States)

    Comer, Mary L.

    2000-12-01

    This paper addresses the problem of efficiently decoding high-definition (HD) video for display at a reduced resolution. The decoder presented in this paper is intended for applications that are constrained not only in memory size, but also in peak memory bandwidth. This is the case, for example, during decoding of a high-definition television (HDTV) channel for picture-in-picture (PIP) display, if the reduced-resolution PIP-channel decoder is sharing memory with the full-resolution main-channel decoder. The most significant source of video quality degradation in a reduced-resolution decoder is prediction drift, which is caused by the mismatch between the full-resolution reference frames used by the encoder and the subsampled reference frames used by the decoder. To mitigate the visually annoying effects of prediction drift, the decoder described in this paper operates at two different resolutions -- a lower resolution for B pictures, which do not contribute to prediction drift, and a higher resolution for I and P pictures. This means that the motion-compensation unit (MCU) essentially operates at the higher resolution, but the peak memory bandwidth is the same as that required to decode at the lower resolution. Storage of additional data, representing the higher resolution for I and P pictures, requires a relatively small amount of additional memory as compared to decoding at the lower resolution. Experimental results demonstrate the improvement in video quality achieved by the addition of the higher-resolution data in forming predictions for P pictures.

  12. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter - the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.

  13. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

    Full Text Available The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied to decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep-space communication to error control in storage systems. But the primary difficulty in applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity, because the decoding effort is fixed and grows with block length, so only short block length codes can be used. Therefore, an important practical question is whether a suboptimal, realizable soft-decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near-optimum decoding will be used as motivation for the investigation of different soft-decision decoding methods for linear block codes, which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft-decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard-decision decoding are demonstrated for these codes.
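
    As an illustration of the tree search, here is a compact Python stack decoder for the (8, 4) extended Hamming code over a binary symmetric channel, using a Fano-type metric. The generator matrix, crossover probability, and metric constants are one conventional choice rather than the paper's exact setup; the first K tree levels branch on information bits, and the remaining levels are forced by the parity equations.

        import heapq, math

        # Systematic generator matrix of the (8, 4) extended Hamming code.
        G = [
            [1, 0, 0, 0, 0, 1, 1, 1],
            [0, 1, 0, 0, 1, 0, 1, 1],
            [0, 0, 1, 0, 1, 1, 0, 1],
            [0, 0, 0, 1, 1, 1, 1, 0],
        ]
        N, K = 8, 4
        P = 0.05                                   # assumed BSC crossover probability
        R = K / N
        GOOD = math.log2(2 * (1 - P)) - R          # Fano metric, bit agrees
        BAD = math.log2(2 * P) - R                 # Fano metric, bit disagrees

        def forced_bit(info, depth):
            """Parity levels K..N-1 of the code tree have a single branch."""
            return sum(info[i] * G[i][depth] for i in range(K)) % 2

        def stack_decode(received):
            # Best-first search over the code tree; heapq is a min-heap,
            # so path metrics are stored negated.
            stack = [(0.0, 0, [])]                 # (-metric, depth, info bits)
            while stack:
                neg_m, depth, info = heapq.heappop(stack)
                if depth == N:
                    return info                    # first full-length path popped
                branches = [0, 1] if depth < K else [forced_bit(info, depth)]
                for b in branches:
                    inc = GOOD if b == received[depth] else BAD
                    child = info + [b] if depth < K else info
                    heapq.heappush(stack, (neg_m - inc, depth + 1, child))

        # Codeword for info 1011 is 10110100; flip bit 2 and decode.
        print(stack_decode([1, 0, 0, 1, 0, 1, 0, 0]))   # -> [1, 0, 1, 1]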

  14. Slack-Decode Simultaneously and Redundantly Threaded Architecture

    Institute of Scientific and Technical Information of China (English)

    YANG Hua; CUI Gang; LIU Hong-wei; YANG Xiao-zong

    2005-01-01

    Slack-Decode Simultaneously and Redundantly Threaded (SD-SRT) is proposed for detecting transient faults in processors. SD-SRT boosts the previously proposed SRT performance by definitively eliminating redundant instruction fetches. First, the fetch stage is moved out of the Spheres of Replication (SoR), and a unified instruction-fetch queue (IFQ) is exploited by both the leading and trailing threads. Second, a scheme called slack-decode cooperates with the unified IFQ to harmonize the progress of the two threads. Simulations show that SD-SRT outperforms the original SRT in terms of IPC by 15% and decreases I-cache accesses by 42%. Meanwhile, SD-SRT reduces the size and complexity of hardware structures such as the load-value queue and store buffer.

  15. Molecular decoding using luminescence from an entangled porous framework

    Science.gov (United States)

    Takashima, Yohei; Martínez, Virginia Martínez; Furukawa, Shuhei; Kondo, Mio; Shimomura, Satoru; Uehara, Hiromitsu; Nakahama, Masashi; Sugimoto, Kunihisa; Kitagawa, Susumu

    2011-01-01

    Chemosensors detect a single target molecule from among several molecules, but cannot differentiate targets from one another. In this study, we report a molecular decoding strategy in which a single host domain accommodates a class of molecules and distinguishes between them with a corresponding readout. We synthesized the decoding host by embedding naphthalenediimide into the scaffold of an entangled porous framework that exhibited structural dynamics due to the dislocation of two chemically non-interconnected frameworks. An intense turn-on emission was observed on incorporation of a class of aromatic compounds, and the resulting luminescent colour was dependent on the chemical substituent of the aromatic guest. This unprecedented chemoresponsive, multicolour luminescence originates from an enhanced naphthalenediimide–aromatic guest interaction because of the induced-fit structural transformation of the entangled framework. We demonstrate that the cooperative structural transition in mesoscopic crystal domains results in a nonlinear sensor response to the guest concentration. PMID:21266971

  16. Molecular decoding using luminescence from an entangled porous framework.

    Science.gov (United States)

    Takashima, Yohei; Martínez, Virginia Martínez; Furukawa, Shuhei; Kondo, Mio; Shimomura, Satoru; Uehara, Hiromitsu; Nakahama, Masashi; Sugimoto, Kunihisa; Kitagawa, Susumu

    2011-01-25

    Chemosensors detect a single target molecule from among several molecules, but cannot differentiate targets from one another. In this study, we report a molecular decoding strategy in which a single host domain accommodates a class of molecules and distinguishes between them with a corresponding readout. We synthesized the decoding host by embedding naphthalenediimide into the scaffold of an entangled porous framework that exhibited structural dynamics due to the dislocation of two chemically non-interconnected frameworks. An intense turn-on emission was observed on incorporation of a class of aromatic compounds, and the resulting luminescent colour was dependent on the chemical substituent of the aromatic guest. This unprecedented chemoresponsive, multicolour luminescence originates from an enhanced naphthalenediimide-aromatic guest interaction because of the induced-fit structural transformation of the entangled framework. We demonstrate that the cooperative structural transition in mesoscopic crystal domains results in a nonlinear sensor response to the guest concentration. PMID:21266971

  17. Quantum Hardcore Functions by Complexity-Theoretical Quantum List Decoding

    CERN Document Server

    Kawachi, A; Kawachi, Akinori; Yamakami, Tomoyuki

    2006-01-01

    We present three new quantum hardcore functions for any quantum one-way function. We also give a "quantum" solution to Damgard's question (CRYPTO'88) on his pseudorandom generator by proving the quantum hardcore property of his generator, which is not known to have the classical hardcore property. Our technical tool is quantum list-decoding of "classical" error-correcting codes (rather than "quantum" error-correcting codes), which is defined on the platform of computational complexity theory and cryptography (rather than information theory). In particular, we give a simple but powerful criterion that makes a polynomial-time computable code (seen as a function) a quantum hardcore for any quantum one-way function. Of independent interest, we also give quantum list-decoding algorithms for codes whose associated quantum states (called codeword states) are "almost" orthogonal, using the technique of pretty good measurement.

  18. Iterative Equalization and Source Decoding for Vector Quantized Sources

    OpenAIRE

    Yang, L-L.; Wang, J.; Maunder, R.G.; Hanzo, L

    2006-01-01

    In this contribution an iterative (turbo) channel equalization and source decoding scheme is considered. In our investigations the source is modelled as a Gaussian-Markov source, which is compressed with the aid of vector quantization. The communications channel is modelled as a time-invariant channel contaminated by intersymbol interference (ISI). Since the ISI channel can be viewed as a rate-1 encoder and since the redundancy of the source cannot be perfectly removed by source encoding, a j...

  19. An LDPC Decoder Architecture for Wireless Sensor Network Applications

    OpenAIRE

    Guido Masera; Andrea Dario Giancarlo Biroli; Maurizio Martina

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a couple of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered to this purpose and post layout results are shown for a low-area low-energy decoder, which offers ...

  20. Decoding the visual and subjective contents of the human brain

    OpenAIRE

    Kamitani, Yukiyasu; Tong, Frank

    2005-01-01

    The potential for human neuroimaging to read-out the detailed contents of a person’s mental state has yet to be fully explored. We investigated whether the perception of edge orientation, a fundamental visual feature, can be decoded from human brain activity measured with functional magnetic resonance imaging (fMRI). Using statistical algorithms to classify brain states, we found that ensemble fMRI signals in early visual areas could reliably predict on individual trials which of eight stimul...

  1. Fast brain decoding with random sampling and random projections

    OpenAIRE

    Hoyos-Idrobo, Andrés; Varoquaux, Gaël; Thirion, Bertrand

    2016-01-01

    Machine learning from brain images is a central tool for image-based diagnosis and disease characterization. Predicting behavior from functional imaging, brain decoding, analyzes brain activity in terms of the behavior that it implies. While these multivariate techniques are becoming standard brain mapping tools, like mass-univariate analysis, they entail much larger computational costs. In a time of growing data sizes, with larger cohorts and higher-resolution imaging, this cost is increa...
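
    The computational trick is dimension reduction before model fitting. A minimal numpy sketch of Gaussian random projections follows; the array sizes and the random data are placeholders for vectorized fMRI volumes.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for vectorized fMRI data: 100 samples x 10,000 voxels.
        X = rng.standard_normal((100, 10_000))

        # Gaussian random projection to d dimensions; pairwise distances are
        # approximately preserved (Johnson-Lindenstrauss), so a decoder can be
        # trained in the reduced space at a fraction of the cost.
        d = 300
        R = rng.standard_normal((X.shape[1], d)) / np.sqrt(d)
        X_reduced = X @ R

        print(X_reduced.shape)                       # (100, 300)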

  2. Social-sparsity brain decoders: faster spatial sparsity

    OpenAIRE

    Varoquaux, Gaël; Kowalski, Matthieu; Thirion, Bertrand

    2016-01-01

    Spatially-sparse predictors are good models for brain decoding: they give accurate predictions and their weight maps are interpretable as they focus on a small number of regions. However, the state of the art, based on total variation or graph-net, is computationally costly. Here we introduce sparsity in the local neighborhood of each voxel with social-sparsity, a structured shrinkage operator. We find that, on brain imaging classification problems, social-sparsity performs almost as well as ...

  3. Interference Mitigation Techniques for Clustered Multicell Joint Decoding Systems

    OpenAIRE

    Chatzinotas Symeon; Ottersten Björn

    2011-01-01

    Multicell joint processing has originated from information-theoretic principles as a means of reaching the fundamental capacity limits of cellular networks. However, global multicell joint decoding is highly complex, and in practice clusters of cooperating base stations constitute a more realistic scenario. In this direction, the mitigation of intercluster interference arises as a critical factor towards achieving the promised throughput gains. In this paper, two intercluster interfere...

  4. Transport on Riemannian Manifold for Connectivity-based Brain Decoding

    OpenAIRE

    Ng, Bernard; Varoquaux, Gaël; Poline, Jean-baptiste; Greicius, Michael D.; Thirion, Bertrand

    2015-01-01

    There is a recent interest in using functional magnetic resonance imaging (fMRI) for decoding more naturalistic, cognitive states, in which subjects perform various tasks in a continuous, self-directed manner. In this setting, the set of brain volumes over the entire task duration is usually taken as a single sample with connectivity estimates, such as Pearson's correlation, employed as features. Since covariance matrices live on the positive semidefinite cone, their elements are inherently i...

  5. Coding Opportunity Densification Strategies for Instantly Decodable Network Coding

    OpenAIRE

    Sorour, Sameh; Valaee, Shahrokh

    2012-01-01

    In this paper, we aim to identify the strategies that can maximize and monotonically increase the density of the coding opportunities in instantly decodable network coding (IDNC). Using the well-known graph representation of IDNC, we first derive an expression for the exact evolution of the edge set size after the transmission of any arbitrary coded packet. From the derived expressions, we show that sending commonly wanted packets for all the receivers can maximize the number of coding opportunit...

  6. Coset decomposition method for storing and decoding fingerprint data

    OpenAIRE

    Mohamed Sayed

    2014-01-01

    Biometrics such as fingerprints, irises, faces, voice, gait and hands are often used for access control, authentication and encryption instead of PIN and passwords. In this paper a syndrome decoding technique is proposed to provide a secure means of storing and matching various biometrics data. We apply an algebraic coding technique called coset decomposition to the model of fingerprint biometrics. The algorithm which reveals the matching between registered and probe fingerprints is modeled a...

  7. A Self-Organising State Space Decoder for Reinforcement Learning

    OpenAIRE

    Marriott, S; Harrison, R F

    1995-01-01

    A novel self-organising architecture, loosely based upon a particular implementation of adaptive resonance theory is proposed here as an alternative to the fixed state space decoder in the seminal implementation of reinforcement learning of Barto, Sutton and Anderson. A well known non-linear control problem is considered and the results are compared to those of the original study. The objective is to illustrate the possibility of neurocontrollers that adaptively partition state space through ...

  8. Comparing offline decoding performance in physiologically defined neuronal classes

    OpenAIRE

    Best, Matthew D.; Takahashi, Kazutaka; Suminski, Aaron J; Ethier, Christian; Miller, Lee E.; Hatsopoulos, Nicholas G.

    2016-01-01

    Objective Recently, several studies have documented the presence of a bimodal distribution of spike waveform widths in primary motor cortex. Although narrow and wide spiking neurons, corresponding to the two modes of the distribution, exhibit different response properties, it remains unknown if these differences give rise to differential decoding performance between these two classes of cells. Approach We used a Gaussian mixture model to classify neurons into narrow and wide physiological cla...

  9. Name that tune: Decoding music from the listening brain

    OpenAIRE

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds, both individually and cross-participants, using only time domain information (the event-related potential, ERP). The best individual results are 70% cor...

  10. Relation between neurophysiological and mental states: possible limits of decodability

    OpenAIRE

    Gierer, Alfred

    2006-01-01

    Validity of physical laws for any aspect of brain activity and strict correlation of mental to physical states of the brain do not imply, with logical necessity, that a complete algorithmic theory of the mind-body relation is possible. A limit of decodability may be imposed by the finite number of possible analytical operations which is rooted in the finiteness of the world. It is considered as a fundamental intrinsic limitation of the scientific approach comparable to quantum indeterminacy a...

  11. LOW POWER SOFTWARE HEVC DECODER DEMO FOR MOBILE DEVICES

    OpenAIRE

    Nogues, Erwan; Lacour, Morgan; Raffin, Erwan; Pelcat, Maxime; Menard, Daniel

    2015-01-01

    Video on mobile devices is a must-have feature with the prominence of new services and applications using video, like streaming or conferencing. By improving compression performance by a factor of two at similar quality, the new video standard High Efficiency Video Coding (HEVC) is an appealing technology for service providers. Besides, with the recent progress of Systems-on-Chip (SoC) for mobile devices, software video decoders are now a realit...

  12. An Area-Efficient Reconfigurable LDPC Decoder with Conflict Resolution

    Science.gov (United States)

    Zhou, Changsheng; Huang, Yuebin; Huang, Shuangqu; Chen, Yun; Zeng, Xiaoyang

    Based on the Turbo-Decoding Message-Passing (TDMP) and Normalized Min-Sum (NMS) algorithms, an area-efficient LDPC decoder that supports both structured and unstructured LDPC codes is proposed in this paper. We introduce a solution to the memory access conflict problem caused by the TDMP algorithm. We also arrange the main timing schedule carefully to handle the operations of our solution while avoiding much additional hardware consumption. To reduce the number of memory bits needed, the extrinsic message storing strategy is also optimized, and the extrinsic message recovery and accumulation operations are merged. To verify our architecture, an LDPC decoder that supports both the China Multimedia Mobile Broadcasting (CMMB) and Digital Terrestrial/Television Multimedia Broadcasting (DTMB) standards was developed using an SMIC 0.13 µm standard CMOS process. The core area is 4.75 mm² and the maximum operating clock frequency is 200 MHz. The estimated power consumption is 48.4 mW at 25 MHz for CMMB and 130.9 mW at 50 MHz for DTMB with 5 iterations and a 1.2 V supply.

  13. Design and implementation of a channel decoder with LDPC code

    Science.gov (United States)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba quit the competition, there is now a single Blu-ray Disc (BD) standard for high-density video programs, but almost all of the relevant patents are held by large companies such as Sony and Philips, so substantial royalties must be paid by any product that uses BD. Next-Generation Versatile Disc (NVD), our own high-density optical disc storage system, proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it achieves higher coding efficiency than DVD and a 12 GB capacity that meets the demands of playing high-density video programs. In this paper, we develop a Low-Density Parity-Check (LDPC) channel encoding process and an application scheme, based on a Q-matrix LDPC encoding, for NVD's channel decoder. Taking advantage of the portability of an embedded SOPC system, we implemented all the decoding modules on an FPGA and tested them in the NVD experimental environment. Although there are conflicts between LDPC and the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of the former NVD decoding system, which was implemented in hardware.

  14. Secure Lossy Source Coding with Side Information at the Decoders

    CERN Document Server

    Villard, Joffrey

    2010-01-01

    This paper investigates the problem of secure lossy source coding in the presence of an eavesdropper, with arbitrarily correlated side information at the legitimate decoder (referred to as Bob) and the eavesdropper (referred to as Eve). This scenario consists of an encoder that wishes to compress a source to satisfy the desired requirements on (i) the distortion level at Bob and (ii) the equivocation rate at Eve. It is assumed that the decoders have access to correlated sources as side information. For instance, this problem can be seen as a generalization of the well-known Wyner-Ziv problem taking into account the security requirements. A complete characterization of the rate-distortion-equivocation region for the case of arbitrarily correlated side information at the decoders is derived. Several special cases of interest and an application example to secure lossy source coding of binary sources in the presence of binary and ternary side information are also considered. It is shown that the statistical differ...

  15. Tree Expectation Propagation for LDPC Decoding over the BEC

    CERN Document Server

    Olmos, Pablo M; Pérez-Cruz, Fernando

    2012-01-01

    We present the tree-structured expectation propagation (Tree-EP) algorithm to decode low-density parity-check (LDPC) codes over discrete memoryless channels (DMCs). EP generalizes belief propagation (BP) in two ways. First, it can be used with any exponential-family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pairwise marginal constraints over pairs of variables connected to a check node of the LDPC code's Tanner graph. Thanks to these additional constraints, the Tree-EP marginal estimates for each variable in the graph are more accurate than those provided by BP. We also reformulate the Tree-EP algorithm for the binary erasure channel (BEC) as a peeling-type algorithm (TEP) and show that the algorithm has the same computational complexity as BP while decoding a higher fraction of errors. We describe the TEP decoding process by a set of differential equations that represents the expected residual graph e...
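
    Tree-EP's pairwise constraints are beyond a short sketch, but the peeling view over the BEC is easy to show. Below is plain BP-style peeling in Python, the baseline that TEP improves on: repeatedly solve any parity check that contains exactly one erasure. The tiny parity-check matrix is invented for illustration.

        # Peeling decoder for an LDPC code over the binary erasure channel.
        H = [
            [1, 1, 0, 1, 0, 0],
            [0, 1, 1, 0, 1, 0],
            [1, 0, 0, 0, 1, 1],
        ]

        def peel(received):
            """received: list of 0, 1, or None (erasure). Returns decoded bits."""
            bits = list(received)
            progress = True
            while progress:
                progress = False
                for row in H:
                    erased = [j for j, h in enumerate(row) if h and bits[j] is None]
                    if len(erased) == 1:              # degree-one check: solve it
                        j = erased[0]
                        bits[j] = sum(bits[k] for k, h in enumerate(row)
                                      if h and k != j) % 2   # parity must be even
                        progress = True
            return bits

        # Codeword 000000 with two erasures; the peeling schedule recovers both.
        print(peel([0, None, 0, 0, None, 0]))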

  16. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed, a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on maximum entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.

  17. A Processor Accelerator for Software Decoding of Reed-Solomon Codes

    Science.gov (United States)

    Ito, Kazuhito; Nasu, Keisuke

    Decoding of Reed-Solomon (RS) codes requires many arithmetic operations in the Galois field. While software decoding of RS codes has the advantage of flexibility in supporting RS codes of variable parameters, it is slower than dedicated hardware RS decoders because arithmetic operations in the Galois field on an ordinary processor require many instruction steps. To achieve fast software decoding of RS codes, it is effective to accelerate Galois operations through both dedicated circuitry and parallel processing. In this paper, an accelerator is proposed which is attached to the base processor to speed up software decoding of RS codes by parallel execution of Galois operations.
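
    As a software reference point, the dominant primitive is finite-field multiplication via log/antilog tables; a Python sketch over GF(2^8) follows. The primitive polynomial 0x11D is one common choice, not necessarily the paper's parameterization.

        # GF(2^8) multiplication via log/antilog tables, using the primitive
        # polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D).
        EXP = [0] * 512
        LOG = [0] * 256
        x = 1
        for i in range(255):
            EXP[i] = x
            LOG[x] = i
            x <<= 1
            if x & 0x100:
                x ^= 0x11D
        for i in range(255, 512):
            EXP[i] = EXP[i - 255]      # duplicated table avoids a modulo per multiply

        def gf256_mul(a, b):
            """One table-driven Galois multiply; a software RS decoder executes
            millions of these, which is why dedicated acceleration pays off."""
            if a == 0 or b == 0:
                return 0
            return EXP[LOG[a] + LOG[b]]

        a = 0x57
        inv = EXP[255 - LOG[a]]        # multiplicative inverse via the log table
        assert gf256_mul(a, inv) == 1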

  18. High data rate Reed-Solomon encoding and decoding using VLSI technology

    Science.gov (United States)

    Miller, Warner; Morakis, James

    Presented is an implementation of a Reed-Solomon encoder and decoder that corrects up to 16 symbol errors, where each symbol is 8 bits. This Reed-Solomon (RS) code is an efficient error-correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock as the system clock for the chip set. Approximately 1.65 billion Galois Field (GF) operations per second are achieved with the decoder chip set, and 640 MOPS are achieved with the encoder chip.

  19. Check Reliability Based Bit-Flipping Decoding Algorithms for LDPC Codes

    CERN Document Server

    Chang, Chi-Yuan; Chen, Yu-Liang; Liu, Yin-Chen

    2010-01-01

    We introduce new reliability definitions for bit and check nodes. Maximizing global reliability, which is the sum reliability of all bit nodes, is shown to be equivalent to minimizing a decoding metric which is closely related to the maximum likelihood decoding metric. We then propose novel bit-flipping (BF) decoding algorithms that take into account the check node reliability. Both hard-decision (HD) and soft-decision (SD) versions are considered. The former performs better than the conventional BF algorithm and, in most cases, suffers less than 1 dB performance loss when compared with some well known SD BF decoders. For one particular code it even outperforms those SD BF decoders. The performance of the SD version is superior to that of SD BF decoders and is comparable to or even better than that of the sum-product algorithm (SPA). The latter is achieved with a complexity much less than that required by the SPA.
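
    The check-reliability weighting is the paper's contribution and is not reproduced here; for orientation, the conventional baseline the authors improve on is a plain Gallager-style hard-decision bit-flipping decoder, sketched below in Python (the parity-check matrix is invented).

        # Plain hard-decision bit-flipping: each iteration, flip the bit that
        # participates in the most unsatisfied parity checks.
        H = [
            [1, 1, 0, 1, 0, 0],
            [0, 1, 1, 0, 1, 0],
            [1, 0, 0, 0, 1, 1],
        ]

        def bit_flip_decode(y, max_iter=20):
            bits = list(y)
            for _ in range(max_iter):
                syndrome = [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]
                if not any(syndrome):
                    return bits                      # all checks satisfied
                # Count unsatisfied checks touching each bit.
                counts = [sum(s * row[j] for s, row in zip(syndrome, H))
                          for j in range(len(bits))]
                worst = max(range(len(bits)), key=lambda j: counts[j])
                bits[worst] ^= 1                     # flip the least reliable bit
            return bits

        # Codeword 000000 with a single bit error in position 1.
        print(bit_flip_decode([0, 1, 0, 0, 0, 0]))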

  20. On the Joint Decoding of LDPC Codes and Finite-State Channels via Linear Programming

    CERN Document Server

    Kim, Byung-Hak

    2010-01-01

    In this paper, the linear programming (LP) decoder for binary linear codes, introduced by Feldman et al., is extended to joint decoding of binary-input finite-state channels. In particular, we provide a rigorous definition of LP joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the pairwise error probability between codewords and JD-PCWs. This leads naturally to a provable upper bound on the decoder failure probability. If the channel is a finite-state intersymbol interference channel, then the LP joint decoder also has the maximum-likelihood (ML) certificate property and all integer-valued solutions are codewords. In this case, the performance loss relative to ML decoding can be explained completely by fractional-valued JD-PCWs.