Directory of Open Access Journals (Sweden)
Elaine Astrand
Full Text Available Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity relates to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier, or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. The classifiers did not all behave equally in the face of population size and heterogeneity, the number of available training and testing trials, the subject's behavior, and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifier outperformed the other tested decoders.
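The decoders themselves are not given in the abstract; as a minimal sketch of one of the six readouts (the naïve Bayesian estimator), the following decodes a binary cue position from simulated spike counts under an independent-Poisson assumption. The population size, firing rates, and trial counts are invented for illustration, and the class-conditional rates are assumed known rather than fit from training trials.

```python
import math, random
random.seed(0)

# Minimal sketch of a naive Bayesian spike-count decoder: 8 hypothetical
# FEF neurons emit Poisson counts whose rates depend on which of two cue
# positions was shown. Rates are invented and assumed known (in practice
# they would be estimated from training trials).
RATES = {0: [2, 5, 1, 7, 3, 6, 2, 4],
         1: [5, 2, 6, 1, 4, 2, 7, 3]}

def poisson_sample(lam):
    # Knuth's multiplication method for Poisson-distributed counts.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def log_likelihood(counts, cls):
    # log P(counts | class) under independent Poisson neurons.
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
               for k, lam in zip(counts, RATES[cls]))

def decode(counts):
    # Flat prior over the two cue positions -> pick the max likelihood.
    return max(RATES, key=lambda cls: log_likelihood(counts, cls))

trials = [([poisson_sample(lam) for lam in RATES[cls]], cls)
          for cls in (0, 1) for _ in range(200)]
accuracy = sum(decode(x) == cls for x, cls in trials) / len(trials)
```

With rates this well separated across eight neurons, the decoder is near-ceiling; the interesting regimes in the paper arise when population size, heterogeneity, and trial counts shrink.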
Holbrook, Andrew; Vandenberg-Rodes, Alexander; Fortin, Norbert; Shahbaba, Babak
2017-01-01
Neuroscientists are increasingly collecting multimodal data during experiments and observational studies. Different data modalities, such as EEG, fMRI, LFP, and spike trains, offer different views of the complex systems contributing to neural phenomena. Here, we focus on joint modeling of LFP and spike train data, and present a novel Bayesian method for neural decoding to infer behavioral and experimental conditions. This model performs supervised dual-dimensionality reduction: it learns low-dimensional representations of two different sources of information that not only explain variation in the input data itself, but also predict extra-neuronal outcomes. Despite being one probabilistic unit, the model consists of multiple modules: exponential PCA and wavelet PCA are used for dimensionality reduction in the spike train and LFP modules, respectively; these modules simultaneously interface with a Bayesian binary regression module. We demonstrate how this model may be used for prediction, parametric inference, and identification of influential predictors. In prediction, the hierarchical model outperforms other models trained on LFP alone, spike train alone, and combined LFP and spike train data. We compare two methods for modeling the loading matrix and find them to perform similarly. Finally, model parameters and their posterior distributions yield scientific insights.
Directory of Open Access Journals (Sweden)
Dongha Lee
This study proposes a method for classifying event-related fMRI responses in a specialized setting of many known but few unknown stimuli presented in a rapid event-related design. Compared to block design fMRI signals, classification of the response to a single or a few stimulus trial(s) is not a trivial problem due to contamination by preceding events as well as the low signal-to-noise ratio. To overcome such problems, we proposed a single trial-based classification method of rapid event-related fMRI signals utilizing sparse multivariate Bayesian decoding of spatio-temporal fMRI responses. We applied the proposed method to classification of memory retrieval processes for two different classes of episodic memories: a voluntarily conducted experience and a passive experience induced by watching a video of others' actions. A cross-validation showed higher classification performance of the proposed method compared to that of a support vector machine or of a classifier based on the general linear model. Evaluation of classification performances for one, two, and three stimuli from the same class and a correlation analysis between classification accuracy and target stimulus positions among trials suggest that presenting two target stimuli at longer inter-stimulus intervals is optimal in the design of classification experiments to identify the target stimuli. The proposed method for decoding subject-specific memory retrieval of voluntary behavior using fMRI would be useful in forensic applications in a natural environment, where many known trials can be extracted from a simulation of everyday tasks and few target stimuli from a crime scene.
Exploiting Cross-sensitivity by Bayesian Decoding of Mixed Potential Sensor Arrays
Energy Technology Data Exchange (ETDEWEB)
Kreller, Cortney [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
2017-10-02
LANL mixed-potential electrochemical sensor (MPES) device arrays were coupled with advanced Bayesian inference treatment of the physical model of relevant sensor-analyte interactions. We demonstrated that our approach could be used to uniquely discriminate the composition of ternary gas mixtures with three discrete MPES sensors with an average error of less than 2%. We also observed that the MPES exhibited excellent stability over a year of operation at elevated temperatures in the presence of test gases.
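The abstract's Bayesian treatment of the sensor physics is not reproduced here; a hypothetical stand-in uses a linear forward model (invented sensitivity matrix `S`, assumed-known Gaussian noise) and a grid search for the maximum a posteriori ternary composition under a flat prior on the simplex.

```python
import math, random
random.seed(1)

# Hypothetical linear forward model for three MPES sensors reading a
# ternary gas mixture: voltage_i = sum_j S[i][j] * conc[j] + noise.
# The sensitivity matrix S and noise level are invented for illustration;
# cross-sensitivity is what makes joint inversion of all three readings
# informative.
S = [[1.2, 0.4, 0.9],
     [0.3, 1.5, 0.6],
     [0.8, 0.7, 1.4]]
SIGMA = 0.01                       # assumed-known measurement noise

def forward(conc):
    return [sum(s * c for s, c in zip(row, conc)) for row in S]

true_conc = (0.2, 0.5, 0.3)
reading = [v + random.gauss(0, SIGMA) for v in forward(true_conc)]

# Grid search for the MAP composition under a flat prior on the simplex
# (components non-negative, summing to one).
best, best_lp = None, float("-inf")
for i in range(101):
    for j in range(101 - i):
        a, b = i / 100, j / 100
        c = 1.0 - a - b
        pred = forward((a, b, c))
        lp = -sum((r - p) ** 2 for r, p in zip(reading, pred)) / (2 * SIGMA ** 2)
        if lp > best_lp:
            best, best_lp = (a, b, c), lp

error = max(abs(x - y) for x, y in zip(best, true_conc))
```

A full Bayesian treatment would also propagate posterior uncertainty rather than report only the MAP point, but the grid already shows how three cross-sensitive readings pin down three mixture fractions.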
Directory of Open Access Journals (Sweden)
Dario Cuevas Rivera
2015-10-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
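The paper's fitted model is not shown in the abstract; the sequential-firing idea can be sketched with a small generalized Lotka-Volterra network in which asymmetric inhibition (all coefficients invented) makes units take turns dominating, the "winnerless competition" regime often used to model antennal-lobe dynamics.

```python
# Minimal winnerless-competition sketch (coefficients invented, not the
# paper's model): a 4-unit generalized Lotka-Volterra network where each
# unit is inhibited weakly by its successor and strongly by its
# predecessor, producing sequential firing-rate peaks.
N = 4
DT, STEPS = 0.01, 20000
FLOOR = 1e-6            # small activity floor standing in for background input

# rho[i][j]: inhibition of unit i by unit j; the cyclic asymmetry drives
# the heteroclinic sequence of dominant units.
rho = [[1.0 if i == j else
        0.5 if j == (i + 1) % N else
        2.0 if j == (i - 1) % N else 1.2
        for j in range(N)] for i in range(N)]

x = [FLOOR] * N
x[0] = 1.0
winners = []
for step in range(STEPS):
    dx = [x[i] * (1.0 - sum(rho[i][j] * x[j] for j in range(N)))
          for i in range(N)]
    x = [max(xi + DT * dxi, FLOOR) for xi, dxi in zip(x, dx)]
    if step % 100 == 0:
        winners.append(max(range(N), key=lambda i: x[i]))

distinct_winners = len(set(winners))
```

Each unit transiently dominates and then hands off to the next, which is the raw material a downstream coincidence detector (the Kenyon-cell stage in the paper) can read out as a sparse code.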
Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.
2016-01-01
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
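A deliberately stripped-down linear toy (all gains set to 1, invented for illustration) shows why a readout mixing congruent and opposite cells can marginalize out object motion while congruent cells alone cannot:

```python
import random
random.seed(2)

# Stripped-down linear toy of the congruent/opposite idea: the visual cue
# confounds heading with object motion, the vestibular cue does not. A
# readout that mixes congruent and opposite cells cancels the
# object-motion term; congruent cells alone cannot.
def cell_responses(heading, obj):
    visual = heading + obj            # retinal motion mixes the two causes
    vestibular = heading              # inertial cue reflects heading only
    congruent = visual + vestibular   # matched visual/vestibular preference
    opposite = -visual + vestibular   # sign-flipped visual preference
    return congruent, opposite

err_mixed = err_congruent_only = 0.0
for _ in range(100):
    heading, obj = random.uniform(-1, 1), random.uniform(-1, 1)
    c, o = cell_responses(heading, obj)
    err_mixed += abs((c + o) / 2 - heading)       # object motion cancels
    err_congruent_only += abs(c / 2 - heading)    # biased by obj / 2
```

In the toy the cancellation is exact by construction; in the paper the decoder weights are learned and the cancellation is approximate, but the role of the "opposite" cells is the same.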
Plourde, Vickie; Boivin, Michel; Forget-Dubois, Nadine; Brendgen, Mara; Vitaro, Frank; Marino, Cecilia; Tremblay, Richard T; Dionne, Ginette
2015-10-01
The phenotypic and genetic associations between decoding skills and ADHD dimensions have been documented but less is known about the association with reading comprehension. The aim of the study is to document the phenotypic and genetic associations between reading comprehension and ADHD dimensions of inattention and hyperactivity/impulsivity in early schooling and compare them to those with decoding skills. Data were collected in two population-based samples of twins (Quebec Newborn Twin Study - QNTS) and singletons (Quebec Longitudinal Study of Child Development - QLSCD) totaling ≈ 2300 children. Reading was assessed with normed measures in second or third grade. Teachers assessed ADHD dimensions in kindergarten and first grade. Both decoding and reading comprehension were correlated with ADHD dimensions in a similar way: associations with inattention remained after controlling for the other ADHD dimension, behavior disorder symptoms and nonverbal abilities, whereas associations with hyperactivity/impulsivity did not. Genetic modeling showed that decoding and comprehension largely shared the same genetic etiology at this age and that their associations with inattention were mostly explained by shared genetic influences. Both reading comprehension and decoding are uniquely associated with inattention through a shared genetic etiology. © 2015 Association for Child and Adolescent Mental Health.
Modelling of population dynamics of red king crab using Bayesian approach
Directory of Open Access Journals (Sweden)
Bakanev Sergey ...
2012-10-01
Modeling population dynamics based on the Bayesian approach makes it possible to resolve the above issues successfully. The integration of data from various studies into a unified model based on Bayesian parameter estimation provides a much more detailed description of the processes occurring in the population.
Directory of Open Access Journals (Sweden)
Ethan eMeyers
2013-05-01
Population decoding is a powerful way to analyze neural data; however, currently only a small percentage of systems neuroscience researchers use this method. In order to increase the use of population decoding, we have created the Neural Decoding Toolbox (NDT), a Matlab package that makes it easy to apply population decoding analyses to neural activity. The design of the toolbox revolves around four abstract object classes, which enable users to interchange particular modules in order to try different analyses while keeping the rest of the processing stream intact. The toolbox is capable of analyzing data from many different types of recording modalities, and we give examples of how it can be used to decode basic visual information from neural spiking activity and how it can be used to examine how invariant the activity of a neural population is to stimulus transformations. Overall this toolbox will make it much easier for neuroscientists to apply population decoding analyses to their data, which should help increase the pace of discovery in neuroscience.
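The interchangeable-module design described above (the NDT's four abstract classes are datasources, feature preprocessors, classifiers, and cross-validators, per its documentation) can be mimicked in Python; the class bodies below are illustrative stand-ins, not toolbox code, which is written in Matlab.

```python
from abc import ABC, abstractmethod
import random

# Two of the four abstract roles, sketched as Python ABCs so that any
# data source can be paired with any classifier.
class DataSource(ABC):
    @abstractmethod
    def get_data(self): ...

class Classifier(ABC):
    @abstractmethod
    def train(self, X, y): ...
    @abstractmethod
    def test(self, X): ...

class ToyDataSource(DataSource):
    # Invented stand-in data: 5 "neurons", two well-separated conditions.
    def get_data(self):
        random.seed(3)
        X = [[random.gauss(mu, 1.0) for _ in range(5)]
             for mu in (0, 3) for _ in range(50)]
        y = [0] * 50 + [1] * 50
        return X, y

class CentroidClassifier(Classifier):
    # Nearest-centroid readout, a minimal stand-in for the NDT classifiers.
    def train(self, X, y):
        self.centroids = {}
        for cls in set(y):
            pts = [x for x, yy in zip(X, y) if yy == cls]
            self.centroids[cls] = [sum(col) / len(pts) for col in zip(*pts)]
    def test(self, X):
        def dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: dist(x, self.centroids[c]))
                for x in X]

def run_decoding(source: DataSource, clf: Classifier):
    X, y = source.get_data()
    clf.train(X, y)          # train/test on the same toy data for brevity
    preds = clf.test(X)
    return sum(p == t for p, t in zip(preds, y)) / len(y)

accuracy = run_decoding(ToyDataSource(), CentroidClassifier())
```

Swapping in a different `Classifier` subclass changes the analysis without touching the rest of the pipeline, which is the design point the abstract makes.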
Counting and confusion: Bayesian rate estimation with multiple populations
Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt
2015-01-01
We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.
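The method itself is specified in the paper, not the abstract; a minimal analogue of the counting problem, with invented signal and background densities on [0, 1], recovers a posterior mode for the signal fraction even though no individual event can be classified with certainty.

```python
import math, random
random.seed(4)

# Toy version of the counting problem: each event has a statistic x whose
# density is known for signal (3*x^2, peaked near 1) and background
# (flat on [0, 1]); infer the signal fraction from the mixture.
def f_sig(x):
    return 3 * x ** 2      # invented signal density on [0, 1]

def f_bg(x):
    return 1.0             # invented background density on [0, 1]

TRUE_FRAC = 0.3
# Inverse-CDF sampling for the signal density: CDF x^3 -> x = u^(1/3).
events = [random.random() ** (1 / 3) if random.random() < TRUE_FRAC
          else random.random() for _ in range(2000)]

# Grid posterior over the signal fraction under a flat prior.
grid = [i / 200 for i in range(201)]
log_post = [sum(math.log(lam * f_sig(x) + (1 - lam) * f_bg(x)) for x in events)
            for lam in grid]
map_frac = grid[log_post.index(max(log_post))]
```

Even though the two densities overlap everywhere (any single x could be either class), the mixture likelihood still localizes the fraction, which is the point the abstract makes about overlapping foreground and background distributions.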
A Bayesian method for estimating prevalence in the presence of a hidden sub-population.
Xia, Michelle; Gustafson, Paul
2012-09-20
When estimating the prevalence of a binary trait in a population, the presence of a hidden sub-population that cannot be sampled will lead to nonidentifiability and potentially biased estimation. We propose a Bayesian model of trait prevalence for a weighted sample from the non-hidden portion of the population, by modeling the relationship between prevalence and sampling probability. We studied the behavior of the posterior distribution on population prevalence, with the large-sample limits of posterior distributions obtained in simple analytical forms that give intuitively expected properties. We performed MCMC simulations on finite samples to evaluate the effectiveness of statistical learning. We applied the model and the results to two illustrative datasets arising from weighted sampling. Our work confirms that sensible results can be obtained using Bayesian analysis, despite the nonidentifiability in this situation. Copyright © 2012 John Wiley & Sons, Ltd.
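A toy calculation (hidden fraction and sample size invented) illustrates the nonidentifiability the authors address: however large the visible sample, a flat prior on the hidden sub-population's prevalence leaves an irreducible band in the overall-prevalence posterior whose width equals the hidden fraction.

```python
# Toy version of the identifiability problem (numbers invented): a large
# binomial sample is drawn from the visible 90% of a population, while
# the trait rate in the hidden 10% is never observed. Under a flat prior
# on the hidden rate, the induced range of overall prevalence never
# shrinks below the hidden fraction H, no matter how large n gets.
H = 0.10                       # hidden sub-population fraction (assumed known)
n, k = 10_000, 2_000           # visible-sample size and trait count
p_obs = k / n                  # visible prevalence, pinned down tightly by n

# Overall prevalence as a function of the unidentified hidden rate p_h:
candidates = [(1 - H) * p_obs + H * (i / 100) for i in range(101)]
width = max(candidates) - min(candidates)   # irreducible uncertainty = H
```

The paper goes further by modeling the relationship between prevalence and sampling probability, which shrinks this band; the toy only shows why some residual width must remain.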
Bayesian modeling of bacterial growth for multiple populations
Palacios, Ana Paula; Marín, J. Miguel; Quinto, Emiliano J.; Wiper, Michael P.
2014-01-01
Bacterial growth models are commonly used for the prediction of microbial safety and the shelf life of perishable foods. Growth is affected by several environmental factors such as temperature, acidity level and salt concentration. In this study, we develop two models to describe bacterial growth for multiple populations under both equal and different environmental conditions. Firstly, a semi-parametric model based on the Gompertz equation is proposed. Assuming that the parameters of the Gomp...
The Populations of Carina. I. Decoding the Color–Magnitude Diagram
Energy Technology Data Exchange (ETDEWEB)
Norris, John E.; Yong, David; Dotter, Aaron; Casagrande, Luca [Research School of Astronomy and Astrophysics, The Australian National University, Canberra, ACT 2611 (Australia); Venn, Kim A. [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, BC V8P 1A1 (Canada); Gilmore, Gerard, E-mail: jen@mso.anu.edu.au, E-mail: yong@mso.anu.edu.au, E-mail: aaron.dotter@gmail.com, E-mail: luca@mso.anu.edu.au, E-mail: kvenn@uvic.ca, E-mail: gil@ast.cam.ac.uk [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2017-06-01
We investigate the color–magnitude diagram (CMD) of the Carina dwarf spheroidal galaxy using data of Stetson et al. and synthetic CMDs based on isochrones of Dotter et al., in terms of the parameters [Fe/H], age, and [α/Fe], for the cases when (i) [α/Fe] is held constant and (ii) [α/Fe] is varied. The data are well described by four basic epochs of star formation, having [Fe/H] = −1.85, −1.5, −1.2, and ∼−1.15 and ages ∼13, 7, ∼3.5, and ∼1.5 Gyr, respectively (for constant [α/Fe] = 0.1, and for variable [α/Fe] = 0.2, 0.1, −0.2, −0.2), with small spreads in [Fe/H] and age of order 0.1 dex and 1–3 Gyr. Within an elliptical radius of 13.′1, the mass fractions of the populations, at their times of formation, were (in decreasing age order) 0.34, 0.39, 0.23, and 0.04. This formalism reproduces five observed CMD features (two distinct subgiant branches of old and intermediate-age populations, two younger main-sequence components, and the small color dispersion on the red giant branch (RGB)). The parameters of the youngest population are less certain than those of the others, and given it is less centrally concentrated, it may not be directly related to them. High-resolution spectroscopically analyzed RGB samples appear statistically incomplete compared with those selected using radial velocity, which contain bluer stars comprising ∼5%–10% of the samples. We conjecture these objects may, at least in part, be members of the youngest population. We use the CMD simulations to obtain insight into the population structure of Carina's upper RGB.
Directory of Open Access Journals (Sweden)
Markus Krauss
Interindividual variability in anatomical and physiological properties results in significant differences in drug pharmacokinetics. The consideration of such pharmacokinetic variability supports optimal drug efficacy and safety for each individual, e.g. by identification of individual-specific dosings. One clear objective in clinical drug development is therefore a thorough characterization of the physiological sources of interindividual variability. In this work, we present a Bayesian population physiologically-based pharmacokinetic (PBPK) approach for the mechanistically and physiologically realistic identification of interindividual variability. The consideration of a generic and highly detailed mechanistic PBPK model structure enables the integration of large amounts of prior physiological knowledge, which is then updated with new experimental data in a Bayesian framework. A covariate model integrates known relationships of physiological parameters to age, gender and body height. We further provide a framework for estimation of the a posteriori parameter dependency structure at the population level. The approach is demonstrated considering a cohort of healthy individuals and theophylline as an application example. The variability and co-variability of physiological parameters are specified within the population. Significant correlations are identified between population parameters and are applied for individual- and population-specific visual predictive checks of the pharmacokinetic behavior, which leads to improved results compared to present population approaches. In the future, the integration of a generic PBPK model into a hierarchical approach allows for extrapolations to other populations or drugs, while the Bayesian paradigm allows for an iterative application of the approach and thereby a continuous updating of physiological knowledge with new data. This will facilitate decision making e.g. from preclinical to
Gayevskiy, Velimir; Klaere, Steffen; Knight, Sarah; Goddard, Matthew R.
2014-01-01
Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods are ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses. PMID:24416362
McCaffery, R; Solonen, A; Crone, E
2012-09-01
1. World-wide extinctions of amphibians are at the forefront of the biodiversity crisis, with climate change figuring prominently as a potential driver of continued amphibian decline. As in other taxa, changes in both the mean and variability of climate conditions may affect amphibian populations in complex, unpredictable ways. In western North America, climate models predict a reduced duration and extent of mountain snowpack and increased variability in precipitation, which may have consequences for amphibians inhabiting montane ecosystems. 2. We used Bayesian capture-recapture methods to estimate survival and transition probabilities in a high-elevation population of the Columbia spotted frog (Rana luteiventris) over 10 years and related these rates to interannual variation in peak snowpack. Then, we forecasted frog population growth and viability under a range of scenarios with varying levels of change in mean and variance in snowpack. 3. Over a range of future scenarios, changes in mean snowpack had a greater effect on viability than changes in the variance of snowpack, with forecasts largely predicting an increase in population viability. Population models based on snowpack during our study period predicted a declining population. 4. Although mean conditions were more important for viability than variance, for a given mean snowpack depth, increases in variability could change a population from increasing to decreasing. Therefore, the influence of changing climate variability on populations should be accounted for in predictive models. The Bayesian modelling framework allows for the explicit characterization of uncertainty in parameter estimates and ecological forecasts, and thus provides a natural approach for examining relative contributions of mean and variability in climatic variables to population dynamics. 5. Longevity and heterogeneous habitat may contribute to the potential for this amphibian species to be resilient to increased climatic variation, and
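The paper's fitted vital rates are not in the abstract; a hypothetical stochastic projection with an invented linear snowpack effect on log-growth reproduces the qualitative contrast the authors draw between shifting the snowpack mean and inflating its variance.

```python
import random, statistics
random.seed(5)

# Illustrative forecast, not the paper's fitted vital rates: annual
# log-growth of the frog population depends linearly on standardized peak
# snowpack (coefficients invented). We compare a shift in mean snowpack
# with an inflation of its variance.
def mean_final_log_size(mean_snow, sd_snow, years=50, reps=500):
    finals = []
    for _ in range(reps):
        log_n = 0.0                              # log population size, N0 = 1
        for _ in range(years):
            snow = random.gauss(mean_snow, sd_snow)
            log_n += -0.02 + 0.05 * snow         # hypothetical snowpack effect
        finals.append(log_n)
    return statistics.mean(finals)

baseline  = mean_final_log_size(0.0, 1.0)
less_snow = mean_final_log_size(-1.0, 1.0)  # reduced mean: strong decline
more_var  = mean_final_log_size(0.0, 2.0)   # doubled variability: mean
                                            # log-growth unchanged
```

With a linear snowpack effect, variance leaves mean log-growth untouched while a mean shift does not; the paper's fourth point, that variance can still tip a population from increasing to decreasing, arises once the effect is nonlinear or once extinction thresholds are added.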
Shao, Kan; Allen, Bruce C; Wheeler, Matthew W
2017-10-01
Human variability is a very important factor considered in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, to account for this variability, an interhuman uncertainty factor is applied to lower the exposure limit. However, using a fixed uncertainty factor rather than probabilistically accounting for human variability cannot adequately support the probabilistic risk assessment advocated by a number of researchers; new methods are needed to quantify human population variability probabilistically. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that using the proposed hierarchical structure adequately characterizes variability across different populations. © 2016 Society for Risk Analysis.
DEFF Research Database (Denmark)
Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans
2013-01-01
Many coalescent-based methods aiming to infer the demographic history of populations assume a single, isolated and panmictic population (i.e. a Wright-Fisher model). While this assumption may be reasonable under many conditions, several recent studies have shown that the results can be misleading...... when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences...... the best scheme for inferring demographic change over a typical time scale. Analyses of data from a structured African buffalo population demonstrate how BSP results can be strengthened by simulations. We recommend that sample selection should be carefully considered in relation to population structure...
Hierarchical Bayesian inference of the initial mass function in composite stellar populations
Dries, M.; Trager, S. C.; Koopmans, L. V. E.; Popping, G.; Somerville, R. S.
2018-03-01
The initial mass function (IMF) is a key ingredient in many studies of galaxy formation and evolution. Although the IMF is often assumed to be universal, there is continuing evidence that it is not universal. Spectroscopic studies that derive the IMF of the unresolved stellar populations of a galaxy often assume that this spectrum can be described by a single stellar population (SSP). To alleviate these limitations, in this paper we have developed a unique hierarchical Bayesian framework for modelling composite stellar populations (CSPs). Within this framework, we use a parametrized IMF prior to regulate a direct inference of the IMF. We use this new framework to determine the number of SSPs that is required to fit a set of realistic CSP mock spectra. The CSP mock spectra that we use are based on semi-analytic models and have an IMF that varies as a function of stellar velocity dispersion of the galaxy. Our results suggest that using a single SSP biases the determination of the IMF slope to a higher value than the true slope, although the trend with stellar velocity dispersion is overall recovered. If we include more SSPs in the fit, the Bayesian evidence increases significantly and the inferred IMF slopes of our mock spectra converge, within the errors, to their true values. Most of the bias is already removed by using two SSPs instead of one. We show that we can reconstruct the variable IMF of our mock spectra for signal-to-noise ratios exceeding ∼75.
Multi-population genomic prediction using a multi-task Bayesian learning model.
Chen, Liuhong; Li, Changxi; Miller, Stephen; Schenkel, Flavio
2014-05-03
Genomic prediction in multiple populations can be viewed as a multi-task learning problem where tasks are to derive prediction equations for each population and multi-task learning property can be improved by sharing information across populations. The goal of this study was to develop a multi-task Bayesian learning model for multi-population genomic prediction with a strategy to effectively share information across populations. Simulation studies and real data from Holstein and Ayrshire dairy breeds with phenotypes on five milk production traits were used to evaluate the proposed multi-task Bayesian learning model and compare with a single-task model and a simple data pooling method. A multi-task Bayesian learning model was proposed for multi-population genomic prediction. Information was shared across populations through a common set of latent indicator variables while SNP effects were allowed to vary in different populations. Both simulation studies and real data analysis showed the effectiveness of the multi-task model in improving genomic prediction accuracy for the smaller Ayrshire breed. Simulation studies suggested that the multi-task model was most effective when the number of QTL was small (n = 20), with an increase of accuracy by up to 0.09 when QTL effects were lowly correlated between two populations (ρ = 0.2), and up to 0.16 when QTL effects were highly correlated (ρ = 0.8). When QTL genotypes were included for training and validation, the improvements were 0.16 and 0.22, respectively, for scenarios of the low and high correlation of QTL effects between two populations. When the number of QTL was large (n = 200), improvement was small with a maximum of 0.02 when QTL genotypes were not included for genomic prediction. Reduction in accuracy was observed for the simple pooling method when the number of QTL was small and correlation of QTL effects between the two populations was low. For the real data, the multi-task model achieved an
Thomas, D.L.; Johnson, D.; Griffith, B.
2006-01-01
We present a Bayesian random-effects model for assessing resource selection by modeling the probability of use of land units characterized by discrete and continuous measures. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate the models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a two-stage model-selection procedure indicated substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models that included heterogeneity indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic: the highest rate of selection occurs at values of NDVI less than the maximum observed. Results for the land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
Directory of Open Access Journals (Sweden)
David Eriksson
Full Text Available Neurons in the primary visual cortex typically reach their highest firing rate after an abrupt image transition. Since the mutual information between the firing rate and the currently presented image is largest during this early firing period, it is tempting to conclude that this early firing encodes the current image. This view is, however, complicated by the fact that the response to the current image depends on the preceding image. We therefore hypothesize that neurons encode a combination of the current and previous images, and that the strength of the current image relative to the previous image changes over time. This temporal encoding is interesting, first, because neurons are, at different time points, sensitive to different features such as luminance, edges and textures; second, because the temporal evolution provides temporal constraints for deciphering the instantaneous population activity. To study the temporal evolution of the encoding we presented a sequence of 250 ms stimulus patterns during multiunit recordings in areas 17 and 18 of the anaesthetized ferret. Using a novel method we decoded the pattern given the instantaneous population firing rate. Following a stimulus transition from stimulus A to B, the decoded stimulus during the first 90 ms was more correlated with the difference between A and B (B-A) than with B alone. After 90 ms the decoded stimulus was more correlated with stimulus B than with B-A. Finally, we related our results to information measures of the previous (A) and current (B) stimulus. Although the initial transient conveys the majority of the stimulus-related information, we show that it actually encodes a difference image, which can be independent of the current stimulus. Only later do the spikes gradually encode the stimulus more exclusively.
Mathew, Boby; Léon, Jens; Sannemann, Wiebke; Sillanpää, Mikko J
2018-02-01
Gene-by-gene interactions, also known as epistasis, regulate many complex traits in different species. With the availability of low-cost genotyping it is now possible to study epistasis on a genome-wide scale. However, identifying genome-wide epistasis is a high-dimensional multiple regression problem and needs the application of dimensionality reduction techniques. Flowering Time (FT) in crops is a complex trait that is known to be influenced by many interacting genes and pathways in various crops. In this study, we successfully apply Sure Independence Screening (SIS) for dimensionality reduction to identify two-way and three-way epistasis for the FT trait in a Multiparent Advanced Generation Inter-Cross (MAGIC) barley population using the Bayesian multilocus model. The MAGIC barley population was generated from intercrossing among eight parental lines and thus, offered greater genetic diversity to detect higher-order epistatic interactions. Our results suggest that SIS is an efficient dimensionality reduction approach to detect high-order interactions in a Bayesian multilocus model. We also observe that many of our findings (genomic regions with main or higher-order epistatic effects) overlap with known candidate genes that have been already reported in barley and closely related species for the FT trait. Copyright © 2018 by the Genetics Society of America.
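The screening step described above can be illustrated with a minimal sketch: rank all two-way interaction features by absolute marginal correlation with the phenotype and keep only the top d for the subsequent multilocus fit (simulated toy data; the marker indices, sample size, and cutoff are hypothetical, not the barley data):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Toy genotypes: 100 lines x 30 markers; the phenotype is driven by a
# two-way epistatic interaction between markers 2 and 7.
n, p = 100, 30
X = rng.integers(0, 2, (n, p)).astype(float)
y = 2.0 * X[:, 2] * X[:, 7] + rng.normal(0, 0.5, n)

# Build all two-way interaction features (the high-dimensional space).
pairs = list(combinations(range(p), 2))
Z = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])

# SIS: rank features by absolute marginal correlation, keep the top d.
Zc = Z - Z.mean(0)
yc = y - y.mean()
corr = np.abs(Zc.T @ yc) / (np.linalg.norm(Zc, axis=0) * np.linalg.norm(yc) + 1e-12)
d = 10
kept = [pairs[k] for k in np.argsort(corr)[-d:]]
print((2, 7) in kept)  # the causal pair survives screening
```

Only the d retained interaction terms would then enter the Bayesian multilocus model, which is what makes the genome-wide search tractable.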
A Bayesian approach to identifying and compensating for model misspecification in population models.
Thorson, James T; Ono, Kotaro; Munch, Stephan B
2014-02-01
State-space estimation methods are increasingly used in ecology to estimate productivity and abundance of natural populations while accounting for variability in both population dynamics and measurement processes. However, functional forms for population dynamics and density dependence often will not match the true biological process, and this may degrade the performance of state-space methods. We therefore developed a Bayesian semiparametric state-space model, which uses a Gaussian process (GP) to approximate the population growth function. This offers two benefits for population modeling. First, it allows data to update a specified "prior" on the population growth function, while reverting to this prior when data are uninformative. Second, it allows variability in population dynamics to be decomposed into random errors around the population growth function ("process error") and errors due to the mismatch between the specified prior and estimated growth function ("model error"). We used simulation modeling to illustrate the utility of GP methods in state-space population dynamics models. Results confirmed that the GP model performs similarly to a conventional state-space model when either (1) the prior matches the true process or (2) data are relatively uninformative. However, GP methods improve estimates of the population growth function when the function is misspecified. Results also demonstrated that the estimated magnitude of "model error" can be used to distinguish cases of model misspecification. We conclude with a discussion of the prospects for GP methods in other state-space models, including age and length-structured, meta-analytic, and individual-movement models.
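A minimal sketch of the central ingredient, a GP posterior mean for the population growth function given noisy (abundance, growth) observations, might look like this (the kernel, length-scale, and data are illustrative assumptions, not the authors' settings, and the full state-space machinery is omitted):

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    # squared-exponential (RBF) covariance between 1-D input vectors
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)

# "Observed" per-capita growth rates at a range of scaled abundances,
# generated from a linear density-dependent curve plus noise.
x = np.linspace(0.0, 2.0, 15)
y = 1.0 - 0.6 * x + rng.normal(0, 0.05, x.size)

sigma = 0.05                                  # assumed observation noise sd
K = rbf(x, x) + sigma**2 * np.eye(x.size)
alpha = np.linalg.solve(K, y)

def gp_mean(xs):
    # GP posterior mean of the growth function at new abundances xs
    return rbf(xs, x) @ alpha

print(gp_mean(np.array([0.5, 1.0, 1.5])))
```

Where data are dense the posterior mean tracks the observations; where they are sparse it reverts toward the prior mean, which is exactly the behaviour the paper exploits.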
Bayesian parameter estimation in dynamic population model via particle Markov chain Monte Carlo
Directory of Open Access Journals (Sweden)
Meng Gao
2012-12-01
Full Text Available In nature, population dynamics are subject to multiple sources of stochasticity. State-space models (SSMs) provide an ideal framework for incorporating both environmental noise and measurement error into dynamic population models. In this paper, we present a recently developed method, Particle Markov Chain Monte Carlo (Particle MCMC), for parameter estimation in nonlinear SSMs. We use one effective Particle MCMC algorithm, the Particle Gibbs sampler, to estimate the parameters of a state-space model of population dynamics. The posterior distributions of the parameters are derived given conjugate prior distributions. Numerical simulations showed that the model parameters can be accurately estimated regardless of whether the underlying deterministic model is stable, periodic or chaotic. Moreover, we fit the model to 16 representative time series from the Global Population Dynamics Database (GPDD). The results of parameter and state estimation using the Particle Gibbs sampler are satisfactory for the majority of these time series; for the remainder, the quality of parameter estimation can be improved if the priors are constrained with additional knowledge. In conclusion, the Particle Gibbs sampler provides a new Bayesian parameter inference method for studying population dynamics.
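The likelihood estimator at the heart of Particle MCMC is a bootstrap particle filter; a minimal sketch for a stochastic Ricker model with Poisson observations could look like this (all parameter values are hypothetical, and constant terms are dropped from the log-likelihood):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stochastic Ricker state-space model:
#   N_t = N_{t-1} * exp(r * (1 - N_{t-1}/K) + e_t),  e_t ~ N(0, q)
#   y_t ~ Poisson(N_t)
r, Kcap, q, T = 0.8, 50.0, 0.1, 40
N = np.empty(T); N[0] = 10.0
for t in range(1, T):
    N[t] = N[t-1] * np.exp(r * (1 - N[t-1]/Kcap) + rng.normal(0, np.sqrt(q)))
y = rng.poisson(N)

def bootstrap_filter(y, n_part=500):
    # returns a log-likelihood estimate (up to constants) and filtered means
    x = rng.uniform(1, 100, n_part)                  # initial particles
    loglik, means = 0.0, []
    for obs in y:
        x = x * np.exp(r * (1 - x/Kcap) + rng.normal(0, np.sqrt(q), n_part))
        logw = obs * np.log(x) - x                   # Poisson log-weight, no const
        w = np.exp(logw - logw.max())
        loglik += np.log(w.mean()) + logw.max()
        w /= w.sum()
        means.append((w * x).sum())
        x = rng.choice(x, n_part, p=w)               # multinomial resampling
    return loglik, np.array(means)

ll, xhat = bootstrap_filter(y)
print(ll, xhat[-1])
```

Particle Gibbs then embeds a conditional version of this filter inside a Gibbs sweep over the model parameters; the sketch shows only the filtering and likelihood-estimation step.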
Lander, Tonya A; Oddou-Muratorio, Sylvie; Prouillet-Leplat, Helene; Klein, Etienne K
2011-12-01
Range expansion and contraction have occurred in the history of most species and can seriously impact patterns of genetic diversity. Historical data about range change are rare and generally appropriate for studies at large scales, whereas the individual pollen and seed dispersal events that form the basis of gene flow and colonization generally occur at a local scale. In this study, we investigated range change in Fagus sylvatica on Mont Ventoux, France, using historical data from 1838 to the present and approximate Bayesian computation (ABC) analyses of genetic data. From the historical data, we identified a population minimum in 1845 and located remnant populations at least 200 years old. The ABC analysis selected a demographic scenario with three populations, corresponding to two remnant populations and one area of recent expansion. It also identified expansion from a smaller ancestral population but did not find that this expansion followed a population bottleneck, as suggested by the historical data. Despite strong support for the selected scenario with our data set, the ABC approach showed low power to discriminate among scenarios on average and limited ability to accurately estimate effective population sizes and divergence dates, probably due to the temporal scale of the study. This study provides an unusual opportunity to test ABC analysis in a system with a well-documented demographic history and to identify discrepancies between the results of historical, classical population genetic and ABC analyses. The results also provide valuable insights into genetic processes at work at a fine spatial and temporal scale during range change and colonization. © 2011 Blackwell Publishing Ltd.
Energy Technology Data Exchange (ETDEWEB)
Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.; Watson, David J.; Lynch, Timothy P.; Antonio, Cheryl L.; Birchall, Alan; Anderson, Kevin K.; Zharov, Peter
2012-04-17
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a posterior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
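The per-individual update can be sketched as a simple grid computation: a non-negative population prior combined with a Gaussian measurement likelihood yields a normalized, non-negative posterior even for a negative net result (the prior shape and all numbers here are illustrative, not the Hanford values):

```python
import numpy as np

# Grid over possibly-true (non-negative) measurand values.
grid = np.linspace(0.0, 10.0, 1001)

# Population prior: a lognormal-shaped PDF over non-negative measurands,
# standing in for the PDF disaggregated from the population data.
prior = np.exp(-0.5 * ((np.log(grid + 1e-9) - 0.5) / 0.6) ** 2) / (grid + 1e-9)
prior /= prior.sum()

def posterior(measurement, sigma):
    # Gaussian measurement likelihood; the measurement may be negative,
    # but the posterior lives only on the non-negative grid.
    like = np.exp(-0.5 * ((measurement - grid) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

p_neg = posterior(-0.8, sigma=1.0)       # a negative net result
mean_neg = (grid * p_neg).sum()
print(mean_neg)                          # non-negative posterior mean
```

The negative measurement pulls the posterior mean below the prior mean but never below zero, which is the paper's key qualitative property.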
Bazin, Eric; Dawson, Kevin J; Beaumont, Mark A
2010-06-01
We address the problem of finding evidence of natural selection from genetic data, accounting for the confounding effects of demographic history. In the absence of natural selection, gene genealogies should all be sampled from the same underlying distribution, often approximated by a coalescent model. Selection at a particular locus will lead to a modified genealogy, and this motivates a number of recent approaches for detecting the effects of natural selection in the genome as "outliers" under some models. The demographic history of a population affects the sampling distribution of genealogies, and therefore the observed genotypes and the classification of outliers. Since we cannot see genealogies directly, we have to infer them from the observed data under some model of mutation and demography. Thus the accuracy of an outlier-based approach depends to a greater or a lesser extent on the uncertainty about the demographic and mutational model. A natural modeling framework for this type of problem is provided by Bayesian hierarchical models, in which parameters, such as mutation rates and selection coefficients, are allowed to vary across loci. It has proved quite difficult computationally to implement fully probabilistic genealogical models with complex demographies, and this has motivated the development of approximations such as approximate Bayesian computation (ABC). In ABC the data are compressed into summary statistics, and computation of the likelihood function is replaced by simulation of data under the model. In a hierarchical setting one may be interested both in hyperparameters and parameters, and there may be very many of the latter--for example, in a genetic model, these may be parameters describing each of many loci or populations. This poses a problem for ABC in that one then requires summary statistics for each locus, which, if used naively, leads to a consequent difficulty in conditional density estimation. We develop a general method for applying
DEFF Research Database (Denmark)
Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan
2005-01-01
We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree...
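The bit-flipping procedure the analysis builds on can be sketched for the small Hamming(7,4) code (this is plain Gallager-style flipping on the standard parity-check matrix, not the paper's extended-matrix list variant):

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code (columns are 1..7 in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def bit_flip_decode(word, max_iter=10):
    # Gallager-style bit flipping: while the syndrome is nonzero, flip the
    # bit involved in the most unsatisfied parity checks.
    w = word.copy()
    for _ in range(max_iter):
        syndrome = H @ w % 2
        if not syndrome.any():
            return w                          # all checks satisfied
        counts = H[syndrome == 1].sum(axis=0) # unsatisfied checks per bit
        w[np.argmax(counts)] ^= 1
    return w

codeword = np.zeros(7, dtype=int)             # the all-zero codeword
received = codeword.copy()
received[6] ^= 1                              # single bit error
print(bit_flip_decode(received))              # recovers the codeword
```

The paper's modification additionally records intermediate words to produce a list of candidate decodings, which this sketch does not attempt.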
Bayesian modeling to assess populated areas impacted by radiation from Fukushima
Hultquist, C.; Cervone, G.
2017-12-01
Citizen-led movements producing spatio-temporal big data are increasingly important sources of information about populations impacted by natural disasters. Citizen science can be used to fill gaps in disaster monitoring data, in addition to inferring human exposure and vulnerability to extreme environmental impacts. In response to the 2011 release of radiation from Fukushima, Japan, the Safecast project began collecting open radiation data, which has grown into a global dataset of over 70 million measurements to date. This dataset is spatially distributed primarily where humans are located and reflects abnormal patterns of population movement resulting from the disaster. Previous work has demonstrated that Safecast measurements correlate strongly with government radiation observations. However, there is still a scientific need to understand the geostatistical variability of Safecast data and to assess how reliable the data are over space and time. A Bayesian hierarchical approach can be used to model the spatial distribution of the data and to flexibly integrate new flows of data without losing previous information. This enables the uncertainty in the spatio-temporal data to be quantified, informing decision makers about populated areas with high radiation levels. Citizen science data can thus be scientifically evaluated and used as a critical source of information about populations impacted by a disaster.
Enhanced Bayesian modelling in BAPS software for learning genetic structures of populations
Directory of Open Access Journals (Sweden)
Sirén Jukka
2008-12-01
Full Text Available Abstract Background During the most recent decade many Bayesian statistical models and software for answering questions related to the genetic structure underlying population samples have appeared in the scientific literature. Most of these methods utilize molecular markers for the inferences, while some are also capable of handling DNA sequence data. In a number of earlier works, we have introduced an array of statistical methods for population genetic inference that are implemented in the software BAPS. However, the complexity of biological problems related to genetic structure analysis keeps increasing such that in many cases the current methods may provide either inappropriate or insufficient solutions. Results We discuss the necessity of enhancing the statistical approaches to face the challenges posed by the ever-increasing amounts of molecular data generated by scientists over a wide range of research areas and introduce an array of new statistical tools implemented in the most recent version of BAPS. With these methods it is possible, e.g., to fit genetic mixture models using user-specified numbers of clusters and to estimate levels of admixture under a genetic linkage model. Also, alleles representing a different ancestry compared to the average observed genomic positions can be tracked for the sampled individuals, and a priori specified hypotheses about genetic population structure can be directly compared using Bayes' theorem. In general, we have improved further the computational characteristics of the algorithms behind the methods implemented in BAPS facilitating the analyses of large and complex datasets. In particular, analysis of a single dataset can now be spread over multiple computers using a script interface to the software. Conclusion The Bayesian modelling methods introduced in this article represent an array of enhanced tools for learning the genetic structure of populations. Their implementations in the BAPS software are
Proost, Johannes H.; Eleveld, Douglas J.
2006-01-01
Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with
Directory of Open Access Journals (Sweden)
Li Sen
2012-03-01
Full Text Available Abstract Background The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which have become available in the last few years. Results We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed in multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer the demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate the hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as present population sizes and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype- and LD-based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different aspects of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond a few hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions We conclude that the ABC approach can accommodate realistic genome-wide population-genetic data, which may be difficult to analyze with full likelihood approaches, and that the ABC can provide accurate and precise inference of demographic parameters from
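The ABC machinery itself reduces to a simple rejection loop; a minimal sketch for a one-parameter toy model, far simpler than the divergence models above, is shown below (as a shortcut, the sampling distribution of the summary statistic is simulated directly rather than simulating full datasets; the prior and tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data: 50 draws from a normal model with unknown mean;
# the summary statistic is simply the sample mean.
obs = rng.normal(2.0, 1.0, 50)
s_obs = obs.mean()

# Rejection ABC: draw theta from the prior, simulate the summary under
# the model, and keep theta when it lands within eps of the observation.
n_sims, eps = 50_000, 0.05
theta = rng.uniform(-5.0, 5.0, n_sims)
s_sim = rng.normal(theta, 1.0 / np.sqrt(50))   # distribution of a sample mean
accepted = theta[np.abs(s_sim - s_obs) < eps]

print(len(accepted), accepted.mean())
```

The accepted draws approximate the posterior; in the genomic setting the cost lies in simulating realistic data and choosing informative, low-dimensional summaries, which is precisely what the study evaluates.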
Zhou, Haiming; Hanson, Timothy; Knapp, Roland
2015-12-01
The global emergence of Batrachochytrium dendrobatidis (Bd) has caused the extinction of hundreds of amphibian species worldwide. It has become increasingly important to be able to precisely predict time to Bd arrival in a population. The data analyzed herein present a unique challenge in terms of modeling because there is a strong spatial component to Bd arrival time and the traditional proportional hazards assumption is grossly violated. To address these concerns, we develop a novel marginal Bayesian nonparametric survival model for spatially correlated right-censored data. This class of models assumes that the logarithm of survival times marginally follow a mixture of normal densities with a linear-dependent Dirichlet process prior as the random mixing measure, and their joint distribution is induced by a Gaussian copula model with a spatial correlation structure. To invert high-dimensional spatial correlation matrices, we adopt a full-scale approximation that can capture both large- and small-scale spatial dependence. An efficient Markov chain Monte Carlo algorithm with delayed rejection is proposed for posterior computation, and an R package spBayesSurv is provided to fit the model. This approach is first evaluated through simulations, then applied to threatened frog populations in Sequoia-Kings Canyon National Park. © 2015, The International Biometric Society.
Yan, Fang-Rong; Huang, Yuan; Liu, Jun-Lin; Lu, Tao; Lin, Jin-Guan
2013-01-01
This article provides a fully Bayesian approach for modeling single-dose and complete pharmacokinetic data in a population pharmacokinetic (PK) model. To overcome the impact of outliers and the difficulty of computation, a generalized linear model is chosen under the hypothesis that the errors follow a multivariate Student-t distribution, which is heavy-tailed. The aim of this study is to investigate and implement the performance of the multivariate t distribution in analyzing population pharmacokinetic data. Bayesian predictive inference and Metropolis-Hastings sampling schemes are used to handle the intractable posterior integration. The precision and accuracy of the proposed model are illustrated using simulated data and a real example based on theophylline data.
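A minimal sketch of the Metropolis-Hastings step under a heavy-tailed Student-t error model (a single location parameter with a flat prior and hypothetical data, not the article's full PK model) could be:

```python
import numpy as np

rng = np.random.default_rng(11)

# Concentration-like data with one outlier; Student-t errors (df = 4)
# downweight it relative to a Gaussian error model.
data = np.array([5.1, 4.8, 5.3, 4.9, 5.0, 9.5])
df = 4.0

def log_post(mu):
    # flat prior on mu; Student-t log-likelihood up to an additive constant
    z = data - mu
    return -0.5 * (df + 1.0) * np.log1p(z**2 / df).sum()

# Random-walk Metropolis-Hastings for the location parameter
mu, chain = data.mean(), []
for _ in range(20_000):
    prop = mu + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)

post = np.array(chain[5_000:])               # discard burn-in
print(post.mean())
```

The posterior mean sits near the bulk of the data rather than being dragged toward the outlier, illustrating why the heavy-tailed error model is chosen.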
Directory of Open Access Journals (Sweden)
Devaraj Jayachandran
Full Text Available 6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases and inflammatory bowel disease. 6-MP is a prodrug, converted to the active metabolite 6-thioguanine nucleotide (6-TGN) through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP's widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to predicting clinical responses from pharmacogenomics are proving far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. Given the scarcity of data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of the sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize the dose schedule with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression, enzyme phenotype and drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient's ability to metabolize the drug, instead of the traditional standard-dose-for-all approach.
High Speed Viterbi Decoder Architecture
DEFF Research Database (Denmark)
Paaske, Erik; Andersen, Jakob Dahl
1998-01-01
The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s.
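The algorithm such architectures implement can be sketched in software for the small textbook (7,5) rate-1/2 convolutional code (a reference implementation for illustration, not the (171,133) code or the hardware architecture of the paper):

```python
def conv_encode(bits, g=(0b111, 0b101), K=3):
    """Rate-1/2 feedforward convolutional encoder (the textbook (7,5) code)."""
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):          # flush with K-1 zeros
        reg = (b << (K - 1)) | state              # current bit + K-1 past bits
        for gen in g:
            out.append(bin(reg & gen).count("1") % 2)
        state = reg >> 1
    return out

def viterbi_decode(received, g=(0b111, 0b101), K=3):
    """Hard-decision Viterbi decoding by minimum Hamming distance."""
    n_states, n_out = 1 << (K - 1), len(g)
    INF = 10**9
    pm = [0] + [INF] * (n_states - 1)             # start in the zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), n_out):
        r = received[t:t + n_out]
        new_pm = [INF] * n_states
        new_paths = [[] for _ in range(n_states)]
        for s in range(n_states):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                branch = [bin(reg & gen).count("1") % 2 for gen in g]
                metric = pm[s] + sum(x != y for x, y in zip(branch, r))
                ns = reg >> 1
                if metric < new_pm[ns]:
                    new_pm[ns] = metric
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = pm.index(min(pm))
    return paths[best][:-(K - 1)]                 # drop the flush bits

msg = [1, 0, 1, 1, 0, 0]
enc = conv_encode(msg)
noisy = enc.copy()
noisy[3] ^= 1                                     # one channel bit error
dec = viterbi_decode(noisy)
print(dec == msg)
```

High-speed hardware decoders parallelize exactly these add-compare-select operations across states, which is where the paper's architectural contribution lies.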
Indian Academy of Sciences (India)
following function is maximized. This kind of decoding strategy is called the maximum a posteriori probability (MAP) decoding strategy, as it attempts to estimate each symbol of the codeword that ... mitigate the effects of packet loss over digital networks. Undoubtedly, other applications will use these codes in the years to come.
Ponciano, José Miguel
2017-11-22
Using a nonparametric Bayesian approach, Palacios and Minin (2013) dramatically improved the accuracy and precision of Bayesian inference of population size trajectories from gene genealogies. These authors proposed an extension of a Gaussian process (GP) nonparametric inferential method for the intensity function of non-homogeneous Poisson processes. They found not only that the statistical properties of the estimators were improved with their method, but also that key aspects of the demographic histories were recovered. The authors' work represents the first Bayesian nonparametric solution to this inferential problem because they specify a convenient prior belief without a particular functional form on the population trajectory. Their approach works so well and provides such a profound understanding of the biological process that the question arises as to how truly "biology-free" their approach really is. Using well-known concepts of stochastic population dynamics, here I demonstrate that, in fact, Palacios and Minin's GP model can be cast as a parametric population growth model with density dependence and environmental stochasticity. Making this link between population genetics and stochastic population dynamics modeling provides novel insights into eliciting biologically meaningful priors for the trajectory of the effective population size. The results presented here also bring a novel understanding of GPs as models for the evolution of a trait. Thus, the ecological principles underpinning Palacios and Minin's (2013) prior add to the conceptual and scientific value of these authors' inferential approach. I conclude this note by listing a series of insights brought about by this connection with ecology. Copyright © 2017 The Author. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Flory, Adam E.; Lamarche, Brian L.; Weiland, Mark A.
2013-05-01
The Juvenile Salmon Acoustic Telemetry System (JSATS) Decoder is a software application that converts a digitized acoustic signal (a waveform stored in the .bwm file format) into a list of potential JSATS Acoustic MicroTransmitter (AMT) tagcodes along with other data about the signal, including time of arrival and signal-to-noise ratios (SNRs). This software is capable of decoding single files and directories, and of viewing raw acoustic waveforms. When coupled with the JSATS Detector, the Decoder is capable of decoding in 'real-time' and can also provide statistical information about acoustic beacons placed within receive range of hydrophones within a JSATS array. This document details the features and functionality of the software. The document begins with software installation instructions (section 2), followed in order by instructions for decoder setup (section 3), decoding process initiation (section 4), and monitoring of beacons (section 5) using real-time decoding features. The last section in the manual describes the beacon, beacon statistics, and results file formats. This document does not cover the raw binary waveform file format.
International Nuclear Information System (INIS)
Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier
2016-01-01
Occupational accidents pose several negative consequences to employees, employers, the environment and people surrounding the locale where an accident takes place. Some types of accidents correspond to low-frequency, high-consequence (long sick leave) events, and classical statistical approaches are ineffective in these cases because the available dataset is generally sparse and contains censored recordings. In this context, we propose a Bayesian population variability method for estimating the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is then used to estimate the uncertainty over the expected number of accidents and the work time lost. Thus, the use of Bayesian analysis along with the Markov approach aims at investigating future trends regarding occupational accidents in a workplace as well as enabling better management of the labor force and prevention efforts. One application example is presented in order to validate the proposed approach; this case uses data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • The results show the proposed model is not overly sensitive to the prior estimates.
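The Markov step can be sketched directly: posterior samples of the accident and recovery rates are pushed through a two-state chain (working vs. on sick leave) to obtain the distribution of expected work-time loss (the gamma posteriors and all rate values here are hypothetical, not the hydropower company's estimates):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical Bayesian posteriors for the accident rate and recovery
# rate (both per year), e.g. from conjugate gamma updates.
lam = rng.gamma(shape=3.0, scale=0.1, size=10_000)   # accident rate samples
mu = rng.gamma(shape=40.0, scale=0.5, size=10_000)   # recovery rate samples

# Two-state continuous-time Markov chain (working <-> on sick leave):
# the long-run fraction of time on leave is lam / (lam + mu).
frac_lost = lam / (lam + mu)

# Expected accidents per year: accident rate times fraction of time working.
acc_per_year = lam * (mu / (lam + mu))

print(frac_lost.mean(), np.percentile(frac_lost, 95))
```

Propagating the posterior samples (rather than point estimates) through the chain is what yields the uncertainty bands on future accident counts and time loss.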
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
1999-01-01
MPEG-2 video decoding is examined. A unified approach to quality improvement, chrominance upsampling, de-interlacing and superresolution is presented. The information over several frames is combined as part of the processing.
An Overview of Bayesian Methods for Neural Spike Train Analysis
Directory of Open Access Journals (Sweden)
Zhe Chen
2013-01-01
Full Text Available Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, there is an increasing demand for statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both the single-neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.
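As one concrete instance of the decoding topic above, a Bayesian population decoder with Gaussian tuning curves and Poisson spike counts can be sketched as follows (tuning parameters and counts are illustrative assumptions; real data would use actual Poisson draws rather than rounded expected counts):

```python
import numpy as np

# Gaussian tuning curves for a population of neurons over a 1-D stimulus.
centers = np.linspace(-np.pi, np.pi, 16)

def rates(s, peak=20.0, width=0.6):
    # expected spike counts of each neuron for stimulus s (one time bin)
    return peak * np.exp(-0.5 * ((s - centers) / width) ** 2) + 0.5

# Observed counts: the expected counts at the true stimulus, rounded,
# to keep the sketch deterministic.
s_true = 0.4
counts = np.round(rates(s_true))

# Posterior over a stimulus grid under a flat prior:
#   log p(s | n) = sum_i [ n_i * log f_i(s) - f_i(s) ] + const
grid = np.linspace(-np.pi, np.pi, 629)
loglik = np.array([(counts * np.log(rates(s)) - rates(s)).sum() for s in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()
s_map = grid[np.argmax(post)]
print(s_map)
```

The same likelihood structure underlies many of the decoding applications surveyed in the tutorial, with the approximate inference machinery handling latent states and unknown tuning parameters.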
Kocabas, Verda; Dragicevic, Suzana
2013-10-01
Land-use change models grounded in complexity theory such as agent-based models (ABMs) are increasingly being used to examine evolving urban systems. The objective of this study is to develop a spatial model that simulates land-use change under the influence of human land-use choice behavior. This is achieved by integrating the key physical and social drivers of land-use change using Bayesian networks (BNs) coupled with agent-based modeling. The BNAS model, an integrated Bayesian network-based agent system, presented in this study uses geographic information systems, ABMs, BNs, and influence diagram principles to model population change on an irregular spatial structure. The model is parameterized with historical data and then used to simulate 20 years of future population and land-use change for the City of Surrey, British Columbia, Canada. The simulation results identify feasible new urban areas for development around the main transportation corridors. The obtained new development areas and the projected population trajectories, together with the "what-if" scenario capabilities, can provide urban planners with insights for better and more informed land-use policy and decision-making processes.
Directory of Open Access Journals (Sweden)
Salvidio Sebastiano
2010-02-01
Full Text Available Abstract Background It has been suggested that plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the posterior distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth rate is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant of population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
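The Lefkovitch-matrix analysis described here (asymptotic growth rate and stable stage structure) can be sketched with a power iteration; the vital rates below are illustrative, not the estimates from this study:

```python
# Hedged sketch: growth rate of a 3-stage (juvenile, subadult, adult)
# Lefkovitch matrix via power iteration. Rates are invented.

A = [
    [0.0, 0.0, 1.1],   # adult fecundity
    [0.5, 0.6, 0.0],   # juvenile survival / subadult retention
    [0.0, 0.3, 0.85],  # maturation / adult survival
]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

v = [1.0, 1.0, 1.0]
lam = 1.0
for _ in range(500):
    w = matvec(A, v)
    lam = max(w)              # after convergence, lam is the dominant eigenvalue
    v = [x / lam for x in w]  # renormalize so the iteration stays bounded

stable = [x / sum(v) for x in v]  # stable stage distribution (shares)
print(round(lam, 2))              # asymptotic population growth rate
```

A growth rate near 1 indicates a roughly stationary population, matching the qualitative finding of the abstract; elasticities would be obtained by perturbing each entry of `A` and recomputing `lam`.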
Decoding communities in networks
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights into the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second, and more importantly, we show that Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, named here the decodability bound, on the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound accurately explains the performance achieved by the best existing community detection algorithms, indicating that little room is left for their improvement.
Saleh, Mohammad I
2017-11-01
Pegylated interferon α-2a (PEG-IFN-α-2a) is an antiviral drug used for the treatment of chronic hepatitis C virus (HCV) infection. This study describes the population pharmacokinetics of PEG-IFN-α-2a in hepatitis C patients using a Bayesian approach. A possible association between patient characteristics and pharmacokinetic parameters is also explored. A Bayesian population pharmacokinetic modeling approach, using WinBUGS version 1.4.3, was applied to a cohort of patients (n = 292) with chronic HCV infection. Data were obtained from two phase III studies sponsored by Hoffmann-La Roche. Demographic and clinical information were evaluated as possible predictors of pharmacokinetic parameters during model development. A one-compartment model with an additive error best fitted the data, and a total of 2271 PEG-IFN-α-2a measurements from 292 subjects were analyzed using the proposed population pharmacokinetic model. Sex was identified as a predictor of PEG-IFN-α-2a clearance, and hemoglobin baseline level was identified as a predictor of PEG-IFN-α-2a volume of distribution. A population pharmacokinetic model of PEG-IFN-α-2a in patients with chronic HCV infection was presented in this study. The proposed model can be used to optimize PEG-IFN-α-2a dosing in patients with chronic HCV infection. Optimal PEG-IFN-α-2a selection is important to maximize response and/or to avoid potential side effects such as thrombocytopenia and neutropenia. NV15942 and NV15801.
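The one-compartment model with first-order elimination, as fitted in this analysis, reduces to a single exponential; the dose and parameter values below are invented for illustration, not the fitted PEG-IFN-α-2a estimates:

```python
import math

# Hedged sketch of a one-compartment PK model: concentration after a
# dose D into volume V, eliminated with clearance CL.
# C(t) = (D / V) * exp(-(CL / V) * t). Parameter values are invented.

def concentration(t, dose=180.0, V=8.0, CL=0.06):
    # t in hours; dose in ug; V in liters; CL in liters/hour
    return (dose / V) * math.exp(-(CL / V) * t)

c0 = concentration(0.0)        # initial concentration = D/V
c_week = concentration(168.0)  # one week post-dose
print(round(c0, 2), round(c_week, 2))
```

A population analysis like the one described would put distributions over `V` and `CL` across subjects and let covariates (here, sex and baseline hemoglobin) shift them.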
Directory of Open Access Journals (Sweden)
Cheng J
2016-10-01
Full Text Available Ji Cheng,1,2 Alfonso Iorio,2,3 Maura Marcucci,4 Vadim Romanov,5 Eleanor M Pullenayegum,6,7 John K Marshall,3,8 Lehana Thabane1,2 1Biostatistics Unit, St Joseph’s Healthcare Hamilton, 2Department of Clinical Epidemiology and Biostatistics, 3Department of Medicine, McMaster University, Hamilton, ON, Canada; 4Geriatrics, Fondazione Ca’ Granda Ospedale Maggiore Policlinico, Università degli Studi di Milano, Milan, Italy; 5Baxter HealthCare, Global Medical Affairs, Westlake Village, CA, USA; 6Child Health Evaluation Sciences, Hospital for Sick Children, 7Dalla Lana School of Public Health, University of Toronto, Toronto, 8Division of Gastroenterology, Hamilton Health Science, Hamilton, ON, Canada Background: Developing inhibitors is a rare event during the treatment of hemophilia A. The multifaceted nature of, and uncertainty surrounding, inhibitor development further complicate the process of estimating the inhibitor rate from limited data. Bayesian statistical modeling provides a useful tool for generating, enhancing, and exploring the evidence by incorporating all the available information. Methods: We built our Bayesian analysis using three study cases to estimate the inhibitor rates of patients with hemophilia A in three different scenarios: Case 1, a single cohort of previously treated patients (PTPs) or previously untreated patients; Case 2, a meta-analysis of PTP cohorts; and Case 3, a previously unexplored patient population – patients with baseline low-titer inhibitor or history of inhibitor development. The data used in this study were extracted from three published ADVATE (antihemophilic factor [recombinant], a Baxter product for treating hemophilia A) post-authorization surveillance studies. Noninformative and informative priors were applied to Bayesian standard (Case 1) or random-effects (Case 2 and Case 3) logistic models. Bayesian probabilities of satisfying three meaningful thresholds of the risk of developing a clinical
Indian Academy of Sciences (India)
set up a well defined goal - that of achieving a performance bound set by the noisy channel coding theorem, proved in the paper. Whereas the goal appeared elusive twenty five years ago, today there are practical codes and decoding algorithms that come close to achieving it. It is interesting to note that all known.
Indian Academy of Sciences (India)
Decoding Codes on Graphs - Low Density Parity Check Codes. A S Madhu, Aditya Nori. General Article, Resonance – Journal of Science Education, Volume 8, Issue 9, September 2003, pp 49-59.
Directory of Open Access Journals (Sweden)
Allen Rodrigo
2006-01-01
Full Text Available Using the structured serial coalescent with Bayesian MCMC and serial samples, we estimate population size when some demes are not sampled or are hidden, i.e., "ghost" demes. We find that even in the presence of a ghost deme, accurate inference is possible if the parameters are estimated under the true model. With an incorrect model, however, estimates are biased and can be positively misleading. We extend these results to the case where there are sequences from the ghost deme at the last time sample. This case can arise in HIV patients, when some tissue samples and viral sequences only become available after death. When some sequences from the ghost deme are available at the last sampling time, estimation bias is reduced and accurate estimation of parameters associated with the ghost deme is possible despite sampling bias. Migration rates for this case are also shown to be well estimated when migration values are low.
Fang, Fang; Chen, Jing; Jiang, Li-Yun; Qu, Yan-Hua; Qiao, Ge-Xia
2018-01-09
Biological invasion is considered one of the most important global environmental problems. Knowledge of the source and dispersal routes of invasion could facilitate the eradication and control of invasive species. Soybean aphid, Aphis glycines Matsumura, is one of the most destructive soybean pests. For effective management of this pest, we conducted genetic analyses and approximate Bayesian computation (ABC) analysis to determine the origins and dispersal of the aphid species, as well as the source of its invasion in the USA, using eight microsatellite loci and the mitochondrial cytochrome c oxidase subunit I (COI) gene. We were able to identify a significant isolation by distance (IBD) pattern and three genetic lineages in the microsatellite data but not in the mtDNA dataset. The genetic structure showed that the USA population has the closest relationship with those from Korea and Japan, indicating that the two latter populations might be the sources of the invasion to the USA. Both population genetic analyses and ABC showed that the northeastern populations in China were the possible sources of the further spread of A. glycines to Indonesia. The dispersal history of this aphid can provide useful information for pest management strategies and can further help predict areas at risk of invasion. This article is protected by copyright. All rights reserved.
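A hedged sketch of approximate Bayesian computation (ABC) by rejection, the generic idea behind this kind of analysis: a toy one-parameter model with a made-up summary statistic stands in for the microsatellite and COI data.

```python
import random

# Hedged sketch of ABC rejection sampling: draw parameters from the
# prior, simulate a summary statistic, and keep draws whose simulated
# statistic lands close to the observed one. The "model" is a toy.

random.seed(42)
observed_stat = 0.62  # invented observed summary statistic

def simulate_stat(theta):
    # toy stochastic model: the statistic fluctuates around theta
    return theta + random.gauss(0.0, 0.05)

accepted = []
for _ in range(20000):
    theta = random.uniform(0.0, 1.0)  # draw from a uniform prior
    if abs(simulate_stat(theta) - observed_stat) < 0.02:  # tolerance
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))
```

In the study itself, "simulate" means running a population-genetic scenario (candidate source and dispersal route) and the statistic is a vector of genetic summaries; the accept/reject logic is the same.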
Lyons, James E; Kendall, William L; Royle, J Andrew; Converse, Sarah J; Andres, Brad A; Buchanan, Joseph B
2016-03-01
We present a novel formulation of a mark-recapture-resight model that allows estimation of population size, stopover duration, and arrival and departure schedules at migration areas. Estimation is based on encounter histories of uniquely marked individuals and relative counts of marked and unmarked animals. We use a Bayesian analysis of a state-space formulation of the Jolly-Seber mark-recapture model, integrated with a binomial model for counts of unmarked animals, to derive estimates of population size and arrival and departure probabilities. We also provide a novel estimator for stopover duration that is derived from the latent state variable representing the interim between arrival and departure in the state-space model. We conduct a simulation study of field sampling protocols to understand the impact of superpopulation size, proportion marked, and number of animals sampled on bias and precision of estimates. Simulation results indicate that relative bias of estimates of the proportion of the population with marks was low for all sampling scenarios and never exceeded 2%. Our approach does not require enumeration of all unmarked animals detected or direct knowledge of the number of marked animals in the population at the time of the study. This provides flexibility and potential application in a variety of sampling situations (e.g., migratory birds, breeding seabirds, sea turtles, fish, pinnipeds, etc.). Application of the methods is demonstrated with data from a study of migratory sandpipers. © 2015, The International Biometric Society.
Cheng, Ji; Iorio, Alfonso; Marcucci, Maura; Romanov, Vadim; Pullenayegum, Eleanor M; Marshall, John K; Thabane, Lehana
2016-01-01
Developing inhibitors is a rare event during the treatment of hemophilia A. The multifaceted nature of, and uncertainty surrounding, inhibitor development further complicate the process of estimating the inhibitor rate from limited data. Bayesian statistical modeling provides a useful tool for generating, enhancing, and exploring the evidence by incorporating all the available information. We built our Bayesian analysis using three study cases to estimate the inhibitor rates of patients with hemophilia A in three different scenarios: Case 1, a single cohort of previously treated patients (PTPs) or previously untreated patients; Case 2, a meta-analysis of PTP cohorts; and Case 3, a previously unexplored patient population - patients with baseline low-titer inhibitor or a history of inhibitor development. The data used in this study were extracted from three published ADVATE (antihemophilic factor [recombinant], a Baxter product for treating hemophilia A) post-authorization surveillance studies. Noninformative and informative priors were applied to Bayesian standard (Case 1) or random-effects (Case 2 and Case 3) logistic models. Bayesian probabilities of satisfying three meaningful thresholds of the risk of developing a clinically significant inhibitor (10/100, 5/100 [high rates], and 1/86 [the Food and Drug Administration-mandated cutoff rate in PTPs]) were calculated. The effect of discounting prior information or scaling up the study data was evaluated. Results based on noninformative priors were similar to the classical approach. Using priors from PTPs lowered the point estimate and narrowed the 95% credible intervals (Case 1: from 1.3 [0.5, 2.7] to 0.8 [0.5, 1.1]; Case 2: from 1.9 [0.6, 6.0] to 0.8 [0.5, 1.1]; Case 3: 2.3 [0.5, 6.8] to 0.7 [0.5, 1.1]). All probabilities of satisfying a threshold of 1/86 were above 0.65. Increasing the number of patients by two and ten times substantially narrowed the credible intervals for the single cohort study (1.4 [0.7, 2
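The single-cohort idea in Case 1 can be sketched as a Beta-binomial model with invented counts (not the ADVATE data), comparing a flat prior against an informative one on the 1/86 threshold:

```python
import random

# Hedged sketch: Beta-binomial model for a rare inhibitor rate.
# We compute the posterior probability that the rate is below a
# clinically meaningful cutoff under two priors. Counts are invented.

random.seed(7)
events, n = 3, 300  # inhibitors observed among treated patients

def posterior_prob_below(a0, b0, cutoff, draws=20000):
    # Beta(a0, b0) prior -> Beta(a0 + events, b0 + n - events) posterior
    a, b = a0 + events, b0 + (n - events)
    samples = (random.betavariate(a, b) for _ in range(draws))
    return sum(s < cutoff for s in samples) / draws

p_flat = posterior_prob_below(1.0, 1.0, 1 / 86)           # noninformative
p_informative = posterior_prob_below(2.0, 200.0, 1 / 86)  # prior near 1%
print(round(p_flat, 2), round(p_informative, 2))
```

As in the abstract, an informative prior centered on low historical rates pulls the posterior down and raises the probability of satisfying the threshold; the paper's Cases 2 and 3 extend this with random effects across cohorts.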
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
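One of the two summary-statistic classes used here, the folded allele frequency spectrum, is easy to compute from unpolarized 0/1 genotype data; the tiny matrix below is invented:

```python
# Hedged sketch: folded allele frequency spectrum (SFS) from an
# unpolarized 0/1 haplotype matrix. The data set is a toy.

# rows = haploid sequences, columns = SNP sites; 0/1 coding without
# knowing which allele is ancestral, hence "folded"
haplotypes = [
    [0, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]

def folded_sfs(haps):
    n = len(haps)
    sfs = [0] * (n // 2 + 1)       # entries for minor-allele counts 0..n/2
    for site in zip(*haps):        # iterate over columns (SNP sites)
        count = sum(site)
        sfs[min(count, n - count)] += 1  # fold: take the minor-allele count
    return sfs

print(folded_sfs(haplotypes))  # -> [0, 3, 1]
```

PopSizeABC pairs this spectrum with binned linkage-disequilibrium averages and feeds both into an ABC comparison against simulated demographic histories.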
Firing rate estimation using infinite mixture models and its application to neural decoding.
Shibue, Ryohei; Komaki, Fumiyasu
2017-11-01
Neural decoding is a framework for reconstructing external stimuli from spike trains recorded by various neural recordings. Kloosterman et al. proposed a new decoding method using marked point processes (Kloosterman F, Layton SP, Chen Z, Wilson MA. J Neurophysiol 111: 217-227, 2014). This method does not require spike sorting and thereby improves decoding accuracy dramatically. In this method, they used kernel density estimation to estimate intensity functions of marked point processes. However, the use of kernel density estimation causes problems such as low decoding accuracy and high computational costs. To overcome these problems, we propose a new decoding method using infinite mixture models to estimate intensity. The proposed method improves decoding performance in terms of accuracy and computational speed. We apply the proposed method to simulation and experimental data to verify its performance. NEW & NOTEWORTHY We propose a new neural decoding method using infinite mixture models and nonparametric Bayesian statistics. The proposed method improves decoding performance in terms of accuracy and computation speed. We have successfully applied the proposed method to position decoding from spike trains recorded in a rat hippocampus. Copyright © 2017 the American Physiological Society.
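For orientation, the kernel-intensity baseline that the proposed mixture-model method improves on can be sketched as follows, with invented spike positions and recording time:

```python
import math

# Hedged sketch of a kernel-intensity estimate: the spatial firing
# intensity is a Gaussian kernel density over the positions at which
# spikes occurred, scaled by total recording time. Data are invented.

spike_positions = [0.9, 1.0, 1.1, 1.2, 2.9, 3.0, 3.1]  # position at each spike
total_time = 20.0  # seconds of recording
bandwidth = 0.2

def intensity(x):
    # lambda(x) ~ (1/T) * sum_k K((x - x_k)/h): a rate in spikes/s per unit x
    dens = sum(
        math.exp(-0.5 * ((x - xk) / bandwidth) ** 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for xk in spike_positions
    )
    return dens / total_time

print(round(intensity(1.0), 3), round(intensity(2.0), 3))
```

The intensity peaks near the spike clusters and vanishes between them; the paper replaces this fixed-bandwidth estimator with an infinite mixture model, trading the per-spike kernel sum for a compact parametric posterior that is faster to evaluate during decoding.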
DEFF Research Database (Denmark)
Zubillaga, Maria; Skewes, Oscar; Soto, Nicolás
2014-01-01
on a time series of 36 years of population sampling of guanacos in Tierra del Fuego, Chile. The population density varied between 2.7 and 30.7 guanaco/km², with an apparent monotonic growth during the first 25 years; however, in the last 10 years the population has shown large fluctuations, suggesting...
A Bayesian approach to modelling heterogeneous calcium responses in cell populations.
Directory of Open Access Journals (Sweden)
Agne Tilūnaitė
2017-10-01
Full Text Available Calcium responses have been observed as spikes of the whole-cell calcium concentration in numerous cell types and are essential for translating extracellular stimuli into cellular responses. While there are several suggestions for how this encoding is achieved, we still lack a comprehensive theory. To achieve this goal it is necessary to reliably predict the temporal evolution of calcium spike sequences for a given stimulus. Here, we propose a modelling framework that allows us to quantitatively describe the timing of calcium spikes. Using a Bayesian approach, we show that Gaussian processes model calcium spike rates with high fidelity and perform better than standard tools such as peri-stimulus time histograms and kernel smoothing. We employ our modelling concept to analyse calcium spike sequences from dynamically-stimulated HEK293T cells. Under these conditions, different cells often experience diverse stimulus time courses, which is a situation likely to occur in vivo. This single cell variability and the concomitant small number of calcium spikes per cell pose a significant modelling challenge, but we demonstrate that Gaussian processes can successfully describe calcium spike rates in these circumstances. Our results therefore pave the way towards a statistical description of heterogeneous calcium oscillations in a dynamic environment.
Prior Knowledge Improves Decoding of Finger Flexion from Electrocorticographic (ECoG) Signals
Directory of Open Access Journals (Sweden)
Zuoguan eWang
2011-11-01
Full Text Available Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then proceed to using these signals, in the absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g., hand velocity or finger flexion). The decoding approaches in these studies usually employed classical classification/regression algorithms that derive a linear mapping between brain signals and outputs, but typically incorporated little prior information about the target movement parameter. In this paper, we incorporate prior knowledge using a Bayesian decoding method, and use it to decode finger flexion from ECoG signals. Specifically, we exploit the anatomic constraints and dynamic constraints that govern finger flexion and incorporate these constraints in the construction, structure, and probabilistic functions of the prior model of a switched non-parametric dynamic system (SNDS). Given a measurement model resulting from a traditional linear regression method, we decoded finger flexion using posterior estimation that combined the prior and measurement models. Our results show that the application of the Bayesian decoding model, which incorporates prior knowledge, improves decoding performance compared to the application of a linear regression model, which does not incorporate prior knowledge. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation.
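The general recipe (a prior dynamics model fused with a noisy measurement model by posterior estimation) can be illustrated with a plain 1-D Kalman filter. This is a stand-in for, not an implementation of, the paper's switched non-parametric dynamic system:

```python
# Hedged sketch: fusing prior dynamics with noisy decoder outputs via a
# 1-D Kalman filter. Noise variances and measurements are invented.

def kalman_1d(measurements, q=0.01, r=0.25):
    """Track a slowly varying flexion value from noisy decoder outputs.

    q: process noise variance (how fast the finger can move; the prior)
    r: measurement noise variance (how noisy the linear decoder is)
    """
    x, p = measurements[0], 1.0  # state estimate and its variance
    out = []
    for z in measurements:
        p = p + q            # predict: prior dynamics inflate variance
        k = p / (p + r)      # Kalman gain: how much to trust the measurement
        x = x + k * (z - x)  # update: blend prior prediction with measurement
        p = (1 - k) * p
        out.append(x)
    return out

noisy = [0.0, 0.1, 0.9, 0.2, 0.15, 0.1, 0.8, 0.2]
smooth = kalman_1d(noisy)
print([round(v, 2) for v in smooth])
```

The prior (small `q`) suppresses the isolated jumps in the raw decoder output, the same effect the paper's richer dynamic prior achieves for finger trajectories.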
Forced Sequence Sequential Decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis
is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability...
CERN. Geneva. Audiovisual Unit; Antonerakis, S E
2002-01-01
Decoding the Human genome is a very up-to-date topic, raising several questions beyond the purely scientific, in view of the two competing teams (public and private), the ethics of using the results, and the fact that the project went apparently faster and more easily than expected. The lecture series will address the following chapters: scientific basis and challenges; ethical and social aspects of genomics.
Directory of Open Access Journals (Sweden)
Mario A Pardo
Full Text Available We inferred the population densities of blue whales (Balaenoptera musculus and short-beaked common dolphins (Delphinus delphis in the Northeast Pacific Ocean as functions of the water-column's physical structure by implementing hierarchical models in a Bayesian framework. This approach allowed us to propagate the uncertainty of the field observations into the inference of species-habitat relationships and to generate spatially explicit population density predictions with reduced effects of sampling heterogeneity. Our hypothesis was that the large-scale spatial distributions of these two cetacean species respond primarily to ecological processes resulting from shoaling and outcropping of the pycnocline in regions of wind-forced upwelling and eddy-like circulation. Physically, these processes affect the thermodynamic balance of the water column, decreasing its volume and thus the height of the absolute dynamic topography (ADT. Biologically, they lead to elevated primary productivity and persistent aggregation of low-trophic-level prey. Unlike other remotely sensed variables, ADT provides information about the structure of the entire water column and it is also routinely measured at high spatial-temporal resolution by satellite altimeters with uniform global coverage. Our models provide spatially explicit population density predictions for both species, even in areas where the pycnocline shoals but does not outcrop (e.g. the Costa Rica Dome and the North Equatorial Countercurrent thermocline ridge. Interannual variations in distribution during El Niño anomalies suggest that the population density of both species decreases dramatically in the Equatorial Cold Tongue and the Costa Rica Dome, and that their distributions retract to particular areas that remain productive, such as the more oceanic waters in the central California Current System, the northern Gulf of California, the North Equatorial Countercurrent thermocline ridge, and the more
van der Meer, Aize Franciscus; Touw, Daniel J.; Marcus, Marco A. E.; Neef, Cornelis; Proost, Johannes H.
2012-01-01
Background: Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual
Stochastic Decoding of Turbo Codes
Dong, Q. T.; Arzel, Matthieu; Jego, Christophe; Gross, W. J.
2010-01-01
International audience; Stochastic computation is a technique in which operations on probabilities are performed on random bit streams. Stochastic decoding of forward error-correction (FEC) codes is inspired by this technique. This paper extends the application of the stochastic decoding approach to the families of convolutional codes and turbo codes. It demonstrates that stochastic computation is a promising solution to improve the data throughput of turbo decoders with very simple implement...
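The core trick of stochastic computation is easy to demonstrate: represent a probability as a Bernoulli bit stream, and multiplication of probabilities becomes a bitwise AND of independent streams. The probabilities below are arbitrary:

```python
import random

# Hedged sketch of stochastic computation: a probability p is encoded
# as a random bit stream with P(bit = 1) = p, so an AND gate on two
# independent streams outputs a stream encoding p_a * p_b.

random.seed(0)
N = 100000
p_a, p_b = 0.8, 0.6

stream_a = [random.random() < p_a for _ in range(N)]
stream_b = [random.random() < p_b for _ in range(N)]

# AND-gate output: a 1 appears only when both input bits are 1.
product_stream = [a and b for a, b in zip(stream_a, stream_b)]
estimate = sum(product_stream) / N
print(round(estimate, 2))  # should be near 0.8 * 0.6 = 0.48
```

Stochastic decoders chain such gates so that the probability arithmetic of belief propagation reduces to simple logic on bit streams, which is what makes high-throughput hardware implementations attractive.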
Bayesian estimates of male and female African lion mortality for future use in population management
DEFF Research Database (Denmark)
Barthold, Julia A; Loveridge, Andrew; Macdonald, David
2016-01-01
higher mortality across all ages in both populations. We discuss the role that different drivers of lion mortality may play in explaining these differences and whether their effects need to be included in lion demographic models. 5. Synthesis and applications. Our mortality estimates can be used...... models evaluate consequences of hunting on lion population growth. However, none of the models use unbiased estimates of male age-specific mortality because such estimates do not exist. Until now, estimating mortality from resighting records of marked males has been impossible due to the uncertain fates...... to improve lion population management and, in addition, the mortality model itself has potential applications in demographically informed approaches to the conservation of species with sex-biased dispersal....
2010-10-01
... input. (8) Decoder Programming. Access to decoder programming shall be protected by a lock or other... Decoder with preselected Originator, Event and Location codes for either manual or automatic operation. (9...
Decoding the productivity code
DEFF Research Database (Denmark)
Hansen, David
, that is, the productivity code of the 21st century, is dissolved. Today, organizations are pressured for operational efficiency, often in terms of productivity, due to increased global competition, demographical changes, and use of natural resources. Taylor’s principles for rationalization founded...... that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period...
Directory of Open Access Journals (Sweden)
Thomas J Rodhouse
Full Text Available Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased the precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on the strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
Directory of Open Access Journals (Sweden)
Douglas F Bertram
Full Text Available Species at risk with secretive breeding behaviours, low densities, and wide geographic range pose a significant challenge to conservation actions because population trends are difficult to detect. Such is the case with the Marbled Murrelet (Brachyramphus marmoratus), a seabird listed as 'Threatened' by the Species at Risk Act in Canada, largely due to the loss of its old-growth forest nesting habitat. We report the first estimates of population trend of Marbled Murrelets in Canada derived from a monitoring program that uses marine radar to detect birds as they enter forest watersheds during 923 dawn surveys at 58 radar monitoring stations within the six Marbled Murrelet Conservation Regions on coastal British Columbia, Canada, 1996-2013. Temporal trends in radar counts were analyzed with a hierarchical Bayesian multivariate modeling approach that controlled for variation in tilt of the radar unit and day of year, included year-specific deviations from the overall trend ('year effects'), and allowed for trends to be estimated at three spatial scales. A negative overall trend of -1.6%/yr (95% credibility interval: -3.2%, 0.01%) indicated moderate evidence for a coast-wide decline, although trends varied strongly among the six conservation regions. Negative annual trends were detected in the East Vancouver Island (-9%/yr) and South Mainland Coast (-3%/yr) Conservation Regions. Over a quarter of the year effects were significantly different from zero, and the estimated standard deviation in common-shared year effects between sites within each region was about 50% per year. This large common-shared interannual variation in counts may have been caused by regional movements of birds related to changes in marine conditions that affect the availability of prey.
Fernandes, Ricardo; Grootes, Pieter; Nadeau, Marie-Josée; Nehlich, Olaf
2015-07-14
The island cemetery site of Ostorf (Germany) consists of individual human graves containing Funnel Beaker ceramics dating to the Early or Middle Neolithic. However, previous isotope and radiocarbon analysis demonstrated that the Ostorf individuals had a diet rich in freshwater fish. The present study was undertaken to quantitatively reconstruct the diet of the Ostorf population and establish whether dietary habits are consistent with the traditional characterization of a Neolithic diet. Quantitative diet reconstruction was achieved through a novel approach consisting of the use of the Bayesian mixing model Food Reconstruction Using Isotopic Transferred Signals (FRUITS) to model isotope measurements from multiple dietary proxies (δ13Ccollagen, δ15Ncollagen, δ13Cbioapatite, δ34Smethionine, 14Ccollagen). The accuracy of model estimates was verified by comparing the agreement between observed and estimated human dietary radiocarbon reservoir effects. Quantitative diet reconstruction estimates confirm that the Ostorf individuals had a high protein intake due to the consumption of fish and terrestrial animal products. However, FRUITS estimates also show that plant foods represented a significant source of calories. Observed and estimated human dietary radiocarbon reservoir effects are in good agreement provided that the aquatic reservoir effect at Lake Ostorf is taken as reference. The Ostorf population apparently adopted elements associated with a Neolithic culture but adapted to available local food resources and implemented a subsistence strategy that involved a large proportion of fish and terrestrial meat consumption. This case study exemplifies the diversity of subsistence strategies followed during the Neolithic. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.
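The core logic of an isotope mixing model like FRUITS can be illustrated with a grid-based Bayesian sketch. All source signatures, the consumer value and the uncertainty below are invented placeholders, not the published data or model:

```python
import numpy as np

# Hypothetical source signatures (δ13C, δ15N, per mil); illustrative values only.
sources = np.array([[-21.0, 12.0],   # freshwater fish
                    [-20.0,  7.0],   # terrestrial meat
                    [-25.0,  4.0]])  # plants
consumer = np.array([-21.5, 9.0])    # measured human collagen values (invented)
sigma = 0.8                          # combined measurement/model uncertainty

# Grid over diet fractions (f_fish, f_meat, f_plant) summing to 1.
step = 0.02
fs = np.arange(0, 1 + step, step)
grid = [(a, b, 1 - a - b) for a in fs for b in fs if a + b <= 1]

def log_lik(f):
    mixed = np.dot(f, sources)  # linear mixing of isotope signatures
    return -0.5 * np.sum(((mixed - consumer) / sigma) ** 2)

# Flat prior over the simplex; posterior weights from the Gaussian likelihood.
w = np.array([np.exp(log_lik(np.array(f))) for f in grid])
w /= w.sum()
post_mean = np.dot(w, np.array(grid))  # posterior mean diet fractions
```

With these invented numbers the posterior mean puts the largest fraction on fish, echoing the abstract's high-fish-protein finding.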
Astrophysics Decoding the cosmos
Irwin, Judith A
2007-01-01
Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.
Reynaud, Yann; Zheng, Chao; Wu, Guihui; Sun, Qun; Rastogi, Nalin
2017-01-01
Mycobacterium tuberculosis genetic structure and evolutionary history have been studied for years by several genotyping approaches, but the delineation of a few sublineages remains controversial and needs better characterization. This is particularly the case for the T group within lineage 4 (L4), which was first described using spoligotyping to pool together a number of strains with ill-defined signatures. Although T strains were not traditionally considered a real phylogenetic group, they do contain a few phylogenetically meaningful sublineages, as shown using SNPs. We therefore decided to investigate whether this observation could be corroborated using other robust genetic markers. We consequently made a first assessment of genetic structure using 24-loci MIRU-VNTR data extracted from the SITVIT2 database (n = 607 clinical isolates collected in Russia, Albania, Turkey, Iraq, Brazil and China). Combining minimum spanning trees and Bayesian population structure analyses (using the STRUCTURE and TESS software), we distinctly identified eight tentative phylogenetic groups (T1-T8) with a remarkable correlation with geographical origin. We further compared the structure observed with other L4 sublineages (n = 416 clinical isolates belonging to the LAM, Haarlem, X and S sublineages), and showed that 5 out of 8 T groups seemed phylogeographically well defined, as opposed to the remaining 3 groups that partially mixed with other L4 isolates. These results provide novel evidence of the phylogeographic specificity of a proportion of the ill-defined T group of M. tuberculosis. The genetic structure observed will now be further validated on an enlarged worldwide dataset using Whole Genome Sequencing (WGS). PMID:28166309
Directory of Open Access Journals (Sweden)
Yann Reynaud
Full Text Available Mycobacterium tuberculosis genetic structure and evolutionary history have been studied for years by several genotyping approaches, but the delineation of a few sublineages remains controversial and needs better characterization. This is particularly the case for the T group within lineage 4 (L4), which was first described using spoligotyping to pool together a number of strains with ill-defined signatures. Although T strains were not traditionally considered a real phylogenetic group, they do contain a few phylogenetically meaningful sublineages, as shown using SNPs. We therefore decided to investigate whether this observation could be corroborated using other robust genetic markers. We consequently made a first assessment of genetic structure using 24-loci MIRU-VNTR data extracted from the SITVIT2 database (n = 607 clinical isolates collected in Russia, Albania, Turkey, Iraq, Brazil and China). Combining minimum spanning trees and Bayesian population structure analyses (using the STRUCTURE and TESS software), we distinctly identified eight tentative phylogenetic groups (T1-T8) with a remarkable correlation with geographical origin. We further compared the structure observed with other L4 sublineages (n = 416 clinical isolates belonging to the LAM, Haarlem, X and S sublineages), and showed that 5 out of 8 T groups seemed phylogeographically well defined, as opposed to the remaining 3 groups that partially mixed with other L4 isolates. These results provide novel evidence of the phylogeographic specificity of a proportion of the ill-defined T group of M. tuberculosis. The genetic structure observed will now be further validated on an enlarged worldwide dataset using Whole Genome Sequencing (WGS).
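The minimum-spanning-tree step used in such genotype analyses can be sketched as follows. The profiles, group structure and distance metric are invented stand-ins for real MIRU-VNTR data:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(1)

# Hypothetical 24-locus MIRU-VNTR profiles (repeat counts) for 10 isolates,
# drawn from two artificial "groups" with different typical repeat numbers.
group1 = rng.integers(2, 5, size=(5, 24))
group2 = rng.integers(6, 9, size=(5, 24))
profiles = np.vstack([group1, group2]).astype(float)

# Pairwise distances between profiles, then a minimum spanning tree:
# the graph structure commonly used to visualize relatedness of genotypes.
dist = squareform(pdist(profiles, metric="cityblock"))
mst = minimum_spanning_tree(dist)
n_edges = mst.nnz  # a spanning tree over n nodes has n - 1 edges
```

On real data the tree's branch lengths and cluster membership, not just its shape, carry the phylogeographic signal.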
Wei, Shu-Jun; Cao, Li-Jun; Gong, Ya-Jun; Shi, Bao-Cai; Wang, Su; Zhang, Fan; Guo, Xiao-Jun; Wang, Yuan-Min; Chen, Xue-Xin
2015-08-01
The oriental fruit moth (OFM) Grapholita molesta is one of the most destructive orchard pests. Assumed to be native to China, the moth is now distributed throughout the world. However, the evolutionary history of this moth in its native range remains unknown. In this study, we explored the population genetic structure, dispersal routes and demographic history of the OFM in China and South Korea based on mitochondrial genes and microsatellite loci. The Mantel test indicated a significant correlation between genetic distance and geographical distance in the populations. Bayesian analysis of population genetic structure (baps) identified four nested clusters, while the geneland analysis inferred five genetic groups with spatial discontinuities. Based on the approximate Bayesian computation approach, we found that the OFM originated from southern China near the Shilin area of Yunnan Province. The early divergence and dispersal of this moth were dated to the penultimate glaciation of the Pleistocene. Further dispersal from the southern to the northern region of China occurred before the last glacial maximum, while the expansion of population size in the derived populations of northern China occurred after the last glacial maximum. Our results indicate that the current distribution and structure of the OFM were shaped by a complex interplay of climatic and geological events with human cultivation and wide dissemination of peach in ancient China. We provide an example of revealing the origin and dispersal history of an agricultural pest insect in its native range, as well as the underlying factors. © 2015 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
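The approximate Bayesian computation idea can be illustrated with a minimal rejection-ABC sketch. The toy simulator and observed statistic below are assumptions, far simpler than the coalescent machinery such studies actually use:

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed summary statistic for a hypothetical sample (invented value).
obs = 0.42

def simulate(theta, n=200):
    # Toy simulator standing in for a population-genetic model:
    # the summary statistic is a noisy function of the parameter theta.
    return np.mean(rng.binomial(1, theta, n))

# Rejection ABC: draw parameters from the prior, simulate, and keep draws
# whose simulated summaries fall within a tolerance of the observed summary.
prior_draws = rng.uniform(0, 1, 20000)
sims = np.array([simulate(t) for t in prior_draws])
accepted = prior_draws[np.abs(sims - obs) < 0.02]
posterior_mean = accepted.mean()
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes ABC usable with complex demographic simulators.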
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
Kolb Ayre, Kimberley; Caldwell, Colleen A.; Stinson, Jonah; Landis, Wayne G.
2014-01-01
Introduction and spread of the parasite Myxobolus cerebralis, the causative agent of whirling disease, has contributed to the collapse of wild trout populations throughout the intermountain west. Of concern is the risk the disease may have on conservation and recovery of native cutthroat trout. We employed a Bayesian belief network to assess probability of whirling disease in Colorado River and Rio Grande cutthroat trout (Oncorhynchus clarkii pleuriticus and Oncorhynchus clarkii virginalis, respectively) within their current ranges in the southwest United States. Available habitat (as defined by gradient and elevation) for intermediate oligochaete worm host, Tubifex tubifex, exerted the greatest influence on the likelihood of infection, yet prevalence of stream barriers also affected the risk outcome. Management areas that had the highest likelihood of infected Colorado River cutthroat trout were in the eastern portion of their range, although the probability of infection was highest for populations in the southern, San Juan subbasin. Rio Grande cutthroat trout had a relatively low likelihood of infection, with populations in the southernmost Pecos management area predicted to be at greatest risk. The Bayesian risk assessment model predicted the likelihood of whirling disease infection from its principal transmission vector, fish movement, and suggested that barriers may be effective in reducing risk of exposure to native trout populations. Data gaps, especially with regard to location of spawning, highlighted the importance in developing monitoring plans that support future risk assessments and adaptive management for subspecies of cutthroat trout.
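A stripped-down discrete Bayesian-network calculation conveys the flavor of such a risk model. Every probability below is an invented placeholder, not a value from the published assessment:

```python
# Minimal discrete Bayesian-network sketch of the risk idea: infection risk
# depends on suitable habitat for the worm host and on stream barriers.

p_habitat = 0.6            # P(suitable T. tubifex habitat), invented
p_barrier = 0.5            # P(effective barrier to fish movement), invented

# P(infection | habitat, barrier), invented conditional probability table
p_inf = {(True, True): 0.30, (True, False): 0.70,
         (False, True): 0.02, (False, False): 0.10}

def marginal_infection():
    # Marginalize over both parent nodes to get the overall infection risk.
    total = 0.0
    for h, ph in [(True, p_habitat), (False, 1 - p_habitat)]:
        for b, pb in [(True, p_barrier), (False, 1 - p_barrier)]:
            total += ph * pb * p_inf[(h, b)]
    return total

risk = marginal_infection()

# Scenario query: risk if no barriers are present anywhere.
risk_no_barrier = sum(
    ph * p_inf[(h, False)] for h, ph in [(True, p_habitat), (False, 1 - p_habitat)]
)
```

Comparing `risk` with `risk_no_barrier` is the same kind of what-if query managers would pose to the full network.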
Decoding Dyslexia, a Common Learning Disability
On Decoding Interleaved Chinese Remainder Codes
DEFF Research Database (Denmark)
Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde
2013-01-01
We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail...
Decoding intention at sensorimotor timescales.
Directory of Open Access Journals (Sweden)
Mathew Salvaris
Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during the development of intention and action. Several brain-computer interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds, much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.
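Single-trial decoding of this kind can be sketched with simulated band-power features and a nearest-centroid rule, a simpler stand-in for the classifiers actually used. All feature statistics below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated single-trial band-power features for left vs right hand imagery.
# Class means are invented stand-ins for sensorimotor-rhythm modulation.
n_trials, n_features = 100, 8
mean_left = np.zeros(n_features); mean_left[0] = 1.5
mean_right = np.zeros(n_features); mean_right[1] = 1.5
X = np.vstack([rng.normal(mean_left, 1.0, (n_trials, n_features)),
               rng.normal(mean_right, 1.0, (n_trials, n_features))])
y = np.array([0] * n_trials + [1] * n_trials)

# Split into train/test single trials, then decode with a nearest-centroid rule.
idx = rng.permutation(2 * n_trials)
train, test = idx[:150], idx[150:]
centroids = np.array([X[train][y[train] == c].mean(0) for c in (0, 1)])
d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = d.argmin(1)
accuracy = np.mean(pred == y[test])
```

Real BCI pipelines add spatial filtering and per-subject calibration, but the decode step reduces to a comparable distance or margin computation.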
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne
2015-01-01
Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.
Schall, Megan K.; Blazer, Vicki S.; Lorantas, Robert M.; Smith, Geoffrey; Mullican, John E.; Keplinger, Brandon J.; Wagner, Tyler
2018-01-01
Detecting temporal changes in fish abundance is an essential component of fisheries management. Because of the need to understand short‐term and nonlinear changes in fish abundance, traditional linear models may not provide adequate information for management decisions. This study highlights the utility of Bayesian dynamic linear models (DLMs) as a tool for quantifying temporal dynamics in fish abundance. To achieve this goal, we quantified temporal trends of Smallmouth Bass Micropterus dolomieu catch per effort (CPE) from rivers in the mid‐Atlantic states, and we calculated annual probabilities of decline from the posterior distributions of annual rates of change in CPE. We were interested in annual declines because of recent concerns about fish health in portions of the study area. In general, periods of decline were greatest within the Susquehanna River basin, Pennsylvania. The declines in CPE began in the late 1990s—prior to observations of fish health problems—and began to stabilize toward the end of the time series (2011). In contrast, many of the other rivers investigated did not have the same magnitude or duration of decline in CPE. Bayesian DLMs provide information about annual changes in abundance that can inform management and are easily communicated with managers and stakeholders.
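The core of a DLM is a Kalman-filter recursion. A minimal local-level sketch on an invented CPE series (not the authors' model or data) might look like:

```python
import numpy as np

def local_level_filter(y, obs_var, state_var, m0=0.0, c0=1e6):
    """Kalman filter for a local-level DLM: y_t = mu_t + v_t, mu_t = mu_{t-1} + w_t."""
    m, c = m0, c0
    means, variances = [], []
    for yt in y:
        # Propagate the state, then update with the new observation.
        r = c + state_var
        k = r / (r + obs_var)          # Kalman gain
        m = m + k * (yt - m)
        c = (1 - k) * r
        means.append(m)
        variances.append(c)
    return np.array(means), np.array(variances)

# Hypothetical CPE series: stable, then a decline starting mid-series.
rng = np.random.default_rng(3)
cpe = np.concatenate([np.full(10, 20.0), np.linspace(20, 10, 10)])
y = cpe + rng.normal(0, 1.0, cpe.size)
level, var = local_level_filter(y, obs_var=1.0, state_var=0.5)
```

In a fully Bayesian DLM the filtered means and variances feed posterior probabilities of year-to-year decline, the quantity the abstract reports to managers.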
Rezaianzadeh, Abbas; Sepandi, Mojtaba; Rahimikazerooni, Salar
2016-11-01
Objective: As a source of information, medical data can feature hidden relationships. However, the high volume of datasets and the complexity of decision-making in medicine introduce difficulties for analysis and interpretation, and processing steps may be needed before the data can be used by clinicians in their work. This study focused on the use of Bayesian models with different numbers of nodes to aid clinicians in breast cancer risk estimation. Methods: Bayesian networks (BNs) with a retrospectively collected dataset including mammographic details, risk factor exposure, and clinical findings were assessed for prediction of the probability of breast cancer in individual patients. Area under the receiver-operating characteristic curve (AUC), accuracy, sensitivity, specificity, and positive and negative predictive values were used to evaluate discriminative performance. Results: A network incorporating selected features performed better (AUC = 0.94) than one incorporating all the features (AUC = 0.93). The results revealed no significant difference among the 3 models regarding performance indices at the 5% significance level. Conclusion: BNs could effectively discriminate malignant from benign abnormalities and accurately predict the risk of breast cancer in individuals. Moreover, the overall performance of the 9-node BN was better, and due to its lower number of nodes it might be more readily applied in clinical settings.
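A naive Bayes classifier is the simplest special case of such a network. The sketch below fits one to synthetic binary risk-factor data (all probabilities invented, nothing clinical) and scores it with accuracy and a rank-based AUC:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic binary risk-factor data; class 1 = malignant (illustrative only).
n = 1000
y = rng.binomial(1, 0.3, n)
p_feat = np.where(y[:, None] == 1, [0.7, 0.6, 0.2], [0.2, 0.3, 0.1])
X = rng.binomial(1, p_feat)

def fit_bernoulli_nb(X, y):
    prior = y.mean()
    # Laplace-smoothed per-class feature probabilities.
    p1 = (X[y == 1].sum(0) + 1) / (np.sum(y == 1) + 2)
    p0 = (X[y == 0].sum(0) + 1) / (np.sum(y == 0) + 2)
    return prior, p1, p0

def predict_proba(X, prior, p1, p0):
    l1 = np.log(prior) + (X * np.log(p1) + (1 - X) * np.log(1 - p1)).sum(1)
    l0 = np.log(1 - prior) + (X * np.log(p0) + (1 - X) * np.log(1 - p0)).sum(1)
    return 1 / (1 + np.exp(l0 - l1))

prior, p1, p0 = fit_bernoulli_nb(X, y)
probs = predict_proba(X, prior, p1, p0)
acc = np.mean((probs > 0.5) == y)

# Rank-based AUC: probability a random positive outranks a random negative.
pos, neg = probs[y == 1], probs[y == 0]
auc = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])
```

A full BN relaxes naive Bayes's independence assumption by adding edges among the risk factors, which is where the node-count trade-off in the abstract comes from.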
Cornuet, Jean-Marie; Pudlo, Pierre; Veyssier, Julien; Dehne-Garcia, Alexandre; Gautier, Mathieu; Leblois, Raphaël; Marin, Jean-Michel; Estoup, Arnaud
2014-04-15
DIYABC is a software package for a comprehensive analysis of population history using approximate Bayesian computation on DNA polymorphism data. Version 2.0 implements a number of new features and analytical methods. It allows (i) the analysis of single nucleotide polymorphism data at a large number of loci, in addition to microsatellite and DNA sequence data, (ii) efficient Bayesian model choice using linear discriminant analysis on summary statistics and (iii) the serial launching of multiple post-processing analyses. DIYABC v2.0 also includes a user-friendly graphical interface with various new options. It can be run on three operating systems: GNU/Linux, Microsoft Windows and Apple OS X. Freely available with a detailed notice document and example projects to academic users at http://www1.montpellier.inra.fr/CBGP/diyabc CONTACT: estoup@supagro.inra.fr Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Improved decoding for a concatenated coding system
DEFF Research Database (Denmark)
Paaske, Erik
1990-01-01
The concatenated coding system recommended by the CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both...
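Viterbi decoding of the inner code can be sketched on a toy rate-1/2, memory-2 code (far smaller than the memory-6 CCSDS code) to show how isolated channel errors are corrected:

```python
# Toy rate-1/2, memory-2 convolutional code with generators (7, 5) octal;
# a miniature stand-in for the memory-6 inner code in the CCSDS system.
G = (0b111, 0b101)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                      # current bit + 2 past bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    # Hard-decision Viterbi over the 4-state trellis, starting from state 0.
    n_states, INF = 4, 10**9
    metric = [0] + [INF] * 3
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] >= INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]  # trailing zeros flush the encoder
code = encode(msg)
corrupted = list(code)
corrupted[3] ^= 1
corrupted[11] ^= 1                     # two well-separated channel errors
decoded = viterbi(corrupted)
```

Both flipped bits are corrected because the surviving path's Hamming metric stays below every competitor's; the concatenated schemes in the paper then let the RS decoder clean up the error bursts Viterbi decoding leaves behind.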
FPGA Realization of Memory 10 Viterbi Decoder
DEFF Research Database (Denmark)
Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl
1997-01-01
A feasibility study for a low cost, iterative, serially concatenated coding system is performed. The system uses outer (255,223) Reed-Solomon codes and convolutional inner codes with memory 10 and rates 1/4 or 1/6. The corresponding inner decoder is a Viterbi decoder, which can operate in a forced sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGAs. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...
Li, Hongbao; Hao, Yaoyao; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Zheng, Xiaoxiang
2017-01-01
Objective. Previous studies have demonstrated that target direction information presented by the dorsal premotor cortex (PMd) during movement planning could be incorporated into neural decoder for achieving better decoding performance. It is still unknown whether the neural decoder combined with only target direction could work in more complex tasks where obstacles impeded direct reaching paths. Methods. In this study, spike activities were collected from the PMd of two monkeys when performing a delayed obstacle-avoidance task. We examined how target direction and intended movement selection were encoded in neuron population activities of the PMd during movement planning. The decoding performances of movement trajectory were compared for three neural decoders with no prior knowledge, or only target direction, or both target direction and intended movement selection integrated into a mixture of trajectory model (MTM). Results. We found that not only target direction but also intended movement selection was presented in neural activities of the PMd during movement planning. It was further confirmed by quantitative analysis. Combined with prior knowledge, the trajectory decoder achieved the best performance among three decoders. Conclusion. Recruiting prior knowledge about target direction and intended movement selection extracted from the PMd could enhance the decoding performance of hand trajectory in indirect reaching movement.
Directory of Open Access Journals (Sweden)
Yadira Chinique de Armas
Full Text Available The general lack of well-preserved juvenile skeletal remains from Caribbean archaeological sites has, in the past, prevented evaluations of juvenile dietary changes. Canímar Abajo (Cuba), with a large number of well-preserved juvenile and adult skeletal remains, provided a unique opportunity to fully assess juvenile paleodiets from an ancient Caribbean population. Ages for the start and the end of weaning and possible food sources used for weaning were inferred by combining the results of two Bayesian probability models that help to reduce some of the uncertainties inherent to bone collagen isotope based paleodiet reconstructions. Bone collagen (31 juveniles, 18 adult females) was used for carbon and nitrogen isotope analyses. The isotope results were assessed using two Bayesian probability models: Weaning Ages Reconstruction with Nitrogen isotopes and Stable Isotope Analysis in R. Breast milk seems to have been the most important protein source until two years of age, with some supplementary food such as tropical fruits and root cultigens likely introduced earlier. After two, juvenile diets were likely continuously supplemented by starch-rich foods such as root cultigens and legumes. By the age of three, the model results suggest that the weaning process was completed. Additional indications suggest that animal marine/riverine protein and maize, while part of the Canímar Abajo female diets, were likely not used to supplement juvenile diets. The combined use of both models here provided a more complete assessment of the weaning process for an ancient Caribbean population, indicating not only the start and end ages of weaning but also the relative importance of different food sources for different age juveniles.
Directory of Open Access Journals (Sweden)
Tim Reid
Full Text Available The population of flesh-footed shearwaters (Puffinus carneipes) breeding on Lord Howe Island was shown to be declining from the 1970s to the early 2000s. This was attributed to destruction of breeding habitat and fisheries mortality in the Australian Eastern Tuna and Billfish Fishery. Recent evidence suggests these impacts have ceased, presumably leading to population recovery. We used Bayesian statistical methods to combine data from the literature with more recent, but incomplete, field data to estimate population parameters and trends. This approach easily accounts for sources of variation and uncertainty while formally incorporating data and variation from different sources into the estimate. There is a 70% probability that the flesh-footed shearwater population on Lord Howe continued to decline during 2003-2009, and a number of possible reasons for this are suggested. During the breeding season, road-based mortality of adults on Lord Howe Island is likely to result in reduced adult survival, and there is evidence that breeding success is negatively impacted by marine debris. Interactions with fisheries on flesh-footed shearwater winter grounds should be further investigated.
Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert
2013-01-01
Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294
Directory of Open Access Journals (Sweden)
Stephen J Jacquemin
Full Text Available We combine evolutionary biology and community ecology to test whether two species traits, body size and geographic range, explain long term variation in local scale freshwater stream fish assemblages. Body size and geographic range are expected to influence several aspects of fish ecology, via relationships with niche breadth, dispersal, and abundance. These traits are expected to scale inversely with niche breadth or current abundance, and to scale directly with dispersal potential. However, their utility to explain long term temporal patterns in local scale abundance is not known. Comparative methods employing an existing molecular phylogeny were used to incorporate evolutionary relatedness in a test for covariation of body size and geographic range with long term (1983-2010) local scale population variation of fishes in the West Fork White River (Indiana, USA). The Bayesian model incorporating phylogenetic uncertainty and correlated predictors indicated that neither body size nor geographic range explained significant variation in population fluctuations over a 28 year period. Phylogenetic signal data indicated that body size and geographic range were less similar among taxa than expected if trait evolution followed a purely random walk. We interpret this as evidence that local scale population variation may be influenced less by species-level traits such as body size or geographic range, and instead may be influenced more strongly by a taxon's local scale habitat and biotic assemblages.
Directory of Open Access Journals (Sweden)
Julie Vercelloni
Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.
Decoding sound level in the marmoset primary auditory cortex.
Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L
2017-10-01
Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
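Population-level decoding of sound level can be sketched with Poisson maximum-likelihood decoding over a small mixed population. The tuning-curve shapes and parameters below are invented, not fitted A1 data:

```python
import numpy as np

rng = np.random.default_rng(5)
levels = np.arange(0, 80, 5)  # candidate sound levels in dB SPL

def monotonic(level, thresh):
    # Rate grows with level above a threshold (monotonic rate-level function).
    return 2 + 0.8 * np.clip(level - thresh, 0, None)

def nonmonotonic(level, best):
    # Rate peaks at a preferred ("best") level (level-tuned neuron).
    return 2 + 40 * np.exp(-0.5 * ((level - best) / 10) ** 2)

# A mixed population: monotonic neurons with varied thresholds plus
# nonmonotonic neurons with varied best levels.
tuning = [lambda l, t=t: monotonic(l, t) for t in (0, 20, 40)] + \
         [lambda l, b=b: nonmonotonic(l, b) for b in (20, 40, 60)]

rates = np.array([[f(l) for f in tuning] for l in levels])  # (levels, neurons)

def decode(spike_counts):
    # Poisson maximum-likelihood decoding over candidate levels.
    loglik = (spike_counts * np.log(rates) - rates).sum(axis=1)
    return levels[np.argmax(loglik)]

# Decode repeated noisy trials at a 40 dB stimulus (index 8 in `levels`).
errs = [abs(decode(rng.poisson(rates[8])) - 40) for _ in range(200)]
mean_err = np.mean(errs)
```

Swapping the population composition between monotonic-only, nonmonotonic-only, and mixed sets is the kind of comparison the study ran, here reduced to a toy.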
Glover, K.A.; Pertoldi, C.; Besnier, F.; Wennevik, V.; Kent, M.; Skaala, T.
2013-01-01
Background Many native Atlantic salmon populations have been invaded by domesticated escapees for three decades or longer. However, thus far, the cumulative level of gene-flow that has occurred from farmed to wild salmon has not been reported for any native Atlantic salmon population. The aim of the present study was to investigate temporal genetic stability in native populations, and, quantify gene-flow from farmed salmon that caused genetic changes where they were observed. This was achi...
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean
Interior point decoding for linear vector channels
International Nuclear Information System (INIS)
Wadayama, T
2008-01-01
In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.
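The relaxation idea can be illustrated with a much simpler toy than the paper's interior-point method: relax the binary constraint {0,1}^n to the box [0,1]^n and minimize the squared channel error by projected gradient descent. The channel matrix, noise level, iteration count, and step-size rule below are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
H = rng.standard_normal((n, n))          # toy linear vector channel matrix
x_true = rng.integers(0, 2, n).astype(float)
y = H @ x_true + 0.05 * rng.standard_normal(n)

# Relax MLD over {0,1}^n to the box [0,1]^n and solve by projected gradient.
x = np.full(n, 0.5)
step = 0.5 / np.linalg.norm(H, 2) ** 2   # safe step size for this quadratic
for _ in range(500):
    grad = H.T @ (H @ x - y)             # gradient of 0.5 * ||Hx - y||^2
    x = np.clip(x - step * grad, 0.0, 1.0)   # projection onto the box

x_hat = (x > 0.5).astype(float)          # round the relaxed solution to bits
print("relaxed solution:", np.round(x, 2))
print("rounded bits:    ", x_hat.astype(int))
```

An interior-point solver, as in the paper, would handle the same relaxed problem far more efficiently; the projection step here merely makes the relaxation explicit.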
On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes
DEFF Research Database (Denmark)
Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde
2013-01-01
a new application of the Wu list decoder by decoding irreducible binary Goppa codes up to the binary Johnson radius. Finally, we point out a connection between the governing equations of the Wu algorithm and the Guruswami–Sudan algorithm, immediately leading to equality in the decoding range...
Application of Beyond Bound Decoding for High Speed Optical Communications
DEFF Research Database (Denmark)
Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José
2013-01-01
This paper studies the application of the beyond-bound decoding method to high-speed optical communications. This hard-decision decoding method outperforms the traditional minimum-distance decoding method, with a total net coding gain of 10.36 dB....
On minimizing the maximum broadcast decoding delay for instantly decodable network coding
Douik, Ahmed S.
2014-09-01
In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy achieves a good balance among all the delay aspects in all situations and outperforms the sum decoding delay policy at minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
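The greedy packet-selection step can be sketched as a greedy maximum-weight clique heuristic. The graph, the vertex weights, and the tie-breaking below are made up for illustration; they are not the paper's IDNC graph construction or delay-increment weights:

```python
# Greedy maximum-weight clique: repeatedly add the heaviest vertex that is
# adjacent to everything already selected (hypothetical weights and edges).
def greedy_max_weight_clique(weights, adj):
    clique = []
    candidates = set(weights)
    while candidates:
        v = max(candidates, key=weights.get)   # heaviest remaining vertex
        clique.append(v)
        # keep only vertices adjacent to v, so the selection stays a clique
        candidates = {u for u in candidates if u in adj[v] and u != v}
    return clique

weights = {"a": 5, "b": 4, "c": 3, "d": 2}
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
print(greedy_max_weight_clique(weights, adj))  # → ['a', 'b', 'c']
```

In the paper's setting, each vertex would represent a packet/receiver combination and the weight would reflect the maximum-decoding-delay increment; the greedy structure is the same.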
A class of Sudan-decodable codes
DEFF Research Database (Denmark)
Nielsen, Rasmus Refslund
2000-01-01
In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based...
Deep Space Network turbo decoder implementation
Berner, Jeff B.; Andrews, Kenneth S.; Bryant, Scott H.
2001-01-01
A new decoder is being developed by the Jet Propulsion Laboratory for NASA's Deep Space Network. This unit will decode the new turbo codes, which have recently been approved by the Consultative Committee for Space Data Systems (CCSDS). Turbo codes provide up to 0.8 dB improvement in Eb/No over the current best codes used by deep space missions.
Directory of Open Access Journals (Sweden)
Jorrit Steven Montijn
2014-06-01
Full Text Available The primary visual cortex is an excellent model system for investigating how neuronal populations encode information, because of well-documented relationships between stimulus characteristics and neuronal activation patterns. We used two-photon calcium imaging data to relate the performance of different methods for studying population coding (population vectors, template matching, and Bayesian decoding algorithms) to their underlying assumptions. We show that the variability of neuronal responses may hamper the decoding of population activity, and that a normalization to correct for this variability may be of critical importance for correct decoding of population activity. Second, by comparing noise correlations and stimulus tuning we find that these properties have dissociated anatomical correlates, even though noise correlations have been previously hypothesized to reflect common synaptic input. We hypothesize that noise correlations arise from large nonspecific increases in spiking activity acting on many weak synapses simultaneously, while neuronal stimulus response properties are dependent on more reliable connections. Finally, this paper provides practical guidelines for further research on population coding and shows that population coding cannot be approximated by a simple summation of inputs, but is heavily influenced by factors such as input reliability and noise correlation structure.
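One of the compared methods, template matching with a normalization step to absorb response variability, can be sketched as follows. The tuning model, cell count, and trial numbers are arbitrary assumptions standing in for the calcium-imaging data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_stim, n_train, n_test = 30, 4, 20, 50

# Hypothetical tuning: each stimulus evokes a distinct mean population pattern.
means = rng.uniform(2, 10, (n_stim, n_neurons))

def responses(stim, n):
    return rng.poisson(means[stim], (n, n_neurons)).astype(float)

# Templates are mean training responses, z-scored per neuron to correct for
# response variability (the normalization step discussed in the abstract).
train = np.stack([responses(s, n_train).mean(axis=0) for s in range(n_stim)])
mu, sd = train.mean(axis=0), train.std(axis=0) + 1e-9
templates = (train - mu) / sd

correct = 0
for s in range(n_stim):
    for r in responses(s, n_test):
        z = (r - mu) / sd
        # decode as the template with the highest correlation to the response
        scores = [np.corrcoef(z, t)[0, 1] for t in templates]
        correct += int(np.argmax(scores) == s)
acc = correct / (n_stim * n_test)
print(f"template-matching accuracy: {acc:.2f}")
```

Population-vector and Bayesian decoders would replace the correlation score with a weighted vector sum or a posterior over stimuli, respectively.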
Multiscale brain-machine interface decoders.
Han-Lin Hsieh; Shanechi, Maryam M
2016-08-01
Brain-machine interfaces (BMI) have largely relied on a single scale of neural activity, e.g., spikes or electrocorticography (ECoG), as their control signal. New technology allows for simultaneous recording of multiple scales of neural activity, from spikes to local field potentials (LFP) and ECoG. These advances introduce the new challenge of modeling and decoding multiple scales of neural activity jointly. Such multi-scale decoding is challenging for two reasons. First, spikes are discrete-valued and ECoG/LFP are continuous-valued, resulting in fundamental differences in statistical characteristics. Second, the time-scales of these signals are different, with spikes having a millisecond time-scale and ECoG/LFP having much slower time-scales on the order of tens of milliseconds. Here we develop a new multiscale modeling and decoding framework that addresses these challenges. Our multiscale decoder extracts information from ECoG/LFP in addition to spikes, while operating at the fast time-scale of the spikes. The multiscale decoder specializes to a Kalman filter (KF) or to a point process filter (PPF) when no spikes or ECoG/LFP are available, respectively. Using closed-loop BMI simulations, we show that compared to PPF decoding of spikes alone or KF decoding of LFP/ECoG alone, the multiscale decoder significantly improves the accuracy and error performance of BMI control and runs at the fast millisecond time-scale of the spikes. This new multiscale modeling and decoding framework has the potential to improve BMI control using simultaneous multiscale neural activity.
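The core idea, fusing a fast noisy observation stream with a slower, cleaner one in a single recursive estimator, can be sketched with a scalar Kalman filter. The AR(1) dynamics, the noise variances, and the 10:1 rate ratio below are illustrative stand-ins, not the paper's point-process/Kalman hybrid:

```python
import numpy as np

rng = np.random.default_rng(3)
T, slow_every = 200, 10      # fast steps; slow stream arrives every 10 steps
a, q = 0.99, 0.01            # state dynamics: x_t = a * x_{t-1} + noise
r_fast, r_slow = 1.0, 0.2    # fast stream is noisy, slow stream is cleaner

xs, x = [], 0.0
for t in range(T):           # simulate the latent state (e.g., intended move)
    x = a * x + np.sqrt(q) * rng.standard_normal()
    xs.append(x)

m, P = 0.0, 1.0              # Kalman posterior mean and variance
err = 0.0
for t, x in enumerate(xs):
    m, P = a * m, a * a * P + q                      # predict at the fast scale
    z = x + np.sqrt(r_fast) * rng.standard_normal()  # fast observation
    K = P / (P + r_fast)
    m, P = m + K * (z - m), (1 - K) * P
    if t % slow_every == 0:                          # slow observation arrives
        z2 = x + np.sqrt(r_slow) * rng.standard_normal()
        K = P / (P + r_slow)
        m, P = m + K * (z2 - m), (1 - K) * P
    err += (m - x) ** 2
rmse = np.sqrt(err / T)
print(f"RMSE with both streams: {rmse:.3f}")
```

The paper's decoder replaces the fast Gaussian update with a point-process likelihood for spikes; the structure of predicting at the fast scale and updating whenever a slower modality arrives is the same.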
Introduction to Bayesian statistics
Bolstad, William M
2017-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for a Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...
Three phase full wave dc motor decoder
Studer, P. A. (Inventor)
1977-01-01
A three phase decoder for dc motors is disclosed which employs an extremely simple six-transistor circuit to derive six properly phased output signals for full-wave operation of dc motors. Six decoding transistors are coupled at their base-emitter junctions across a resistor network arranged in a delta configuration. Each point of the delta configuration is coupled to one of three position sensors which sense the rotational position of the motor. A second embodiment of the invention is disclosed in which photo-optical isolators are used in place of the decoding transistors.
Decoding Algorithms for Random Linear Network Codes
DEFF Research Database (Denmark)
Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank
2011-01-01
We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random and relatively sparse, using the binary finite field as an example. The goal is to decode the data using fewer operations to potentially achieve a high coding throughput and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
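The on-the-fly Gauss-Jordan baseline can be sketched over GF(2) with rows stored as integer bitmasks, so row elimination is a single XOR. This shows only the forward-elimination half (back-substitution to recover the original payloads is omitted), and the packet format is a toy assumption:

```python
# On-the-fly Gaussian elimination over GF(2): rows are reduced as coded
# packets arrive; each row is stored as a Python int bitmask for XOR speed.
def decoder(n):
    pivots = {}                      # pivot bit position -> (coeff, payload)
    def receive(coeff, payload):
        while coeff:
            lead = coeff.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = (coeff, payload)
                return len(pivots) == n   # True once full rank is reached
            pc, pp = pivots[lead]
            coeff ^= pc                   # eliminate using the stored pivot
            payload ^= pp
        return len(pivots) == n           # linearly dependent packet: no gain
    return receive

recv = decoder(3)
# coding vectors over GF(2) as bitmasks, payloads as small ints (toy data)
print(recv(0b110, 5))   # → False
print(recv(0b011, 3))   # → False
print(recv(0b110, 5))   # dependent duplicate → False
print(recv(0b111, 7))   # → True (rank 3 reached)
```

The improvements studied in the paper reduce how often such eliminations are needed; the data structure above is just a convenient baseline representation.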
An overview on Approximate Bayesian computation
Directory of Open Access Journals (Sweden)
Baragatti Meïli
2014-01-01
Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
Decoding small surface codes with feedforward neural networks
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
Bayesian artificial intelligence
Korb, Kevin B
2003-01-01
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate all of Bayesian net technology and learning Bayesian net technology and apply them both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente
Low-Power Bitstream-Residual Decoder for H.264/AVC Baseline Profile Decoding
Directory of Open Access Journals (Sweden)
Xu Ke
2009-01-01
Full Text Available We present the design and VLSI implementation of a novel low-power bitstream-residual decoder for the H.264/AVC baseline profile. It comprises a syntax parser, a parameter decoder, and an Inverse Quantization Inverse Transform (IQIT) decoder. The syntax parser detects and decodes each incoming codeword in the bitstream under the control of a hierarchical Finite State Machine (FSM); the IQIT decoder performs inverse transform and quantization with pipelining and parallelism. Various power reduction techniques, such as data-driven operation based on statistical results, nonuniform partition, precomputation, guarded evaluation, hierarchical FSM decomposition, the TAG method, zero-block skipping, and clock gating, are adopted and integrated throughout the bitstream-residual decoder. With its innovative architecture, the proposed design is able to decode QCIF video sequences of 30 fps at a clock rate as low as 1.5 MHz. A prototype H.264/AVC baseline decoding chip utilizing the proposed decoder is fabricated in UMC 0.18 μm 1P6M CMOS technology. The proposed design is measured under supply voltages from 1 V to 1.8 V in 0.1 V steps. It dissipates 76 μW at 1 V and 253 μW at 1.8 V.
Decoding the Disciplines as a Hermeneutic Practice
Yeo, Michelle
2017-01-01
This chapter argues that expert practice is an inquiry that surfaces a hermeneutic relationship between theory, practice, and the world, with implications for new lines of questioning in the Decoding interview.
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
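The likelihood-free recipe, simulate from the prior and keep draws whose simulated data resemble the observations, can be sketched with rejection ABC for a binomial proportion. The prior, the tolerance, and the data sizes below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
observed = rng.binomial(100, 0.3)        # "observed" data: 100 coin flips

# Rejection ABC: sample parameters from the prior, simulate data, and keep
# parameters whose simulated summary is close to the observed summary.
def abc_rejection(n_draws=50_000, tol=2):
    theta = rng.uniform(0, 1, n_draws)           # uniform prior on p
    simulated = rng.binomial(100, theta)         # no likelihood evaluated
    return theta[np.abs(simulated - observed) <= tol]

posterior = abc_rejection()
print(f"posterior mean ~ {posterior.mean():.2f}, "
      f"accepted {posterior.size} of 50000 draws")
```

Here the summary statistic is the raw count and the distance is absolute difference; choosing summaries, distances, and tolerances well is exactly where the approximations discussed in the abstract need care.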
Directory of Open Access Journals (Sweden)
Hoeschele Ina
2007-05-01
Full Text Available Background Requirements for successful implementation of multivariate animal threshold models including phenotypic and genotypic information are not known yet. Here simulated horse data were used to investigate the properties of multivariate estimators of genetic parameters for categorical, continuous and molecular genetic data in the context of important radiological health traits using mixed linear-threshold animal models via Gibbs sampling. The simulated pedigree comprised 7 generations and 40000 animals per generation. Additive genetic values, residuals and fixed effects for one continuous trait and liabilities of four binary traits were simulated, resembling situations encountered in the Warmblood horse. Quantitative trait locus (QTL) effects and genetic marker information were simulated for one of the liabilities. Different scenarios with respect to recombination rate between genetic markers and QTL and polymorphism information content of genetic markers were studied. For each scenario, ten replicates were sampled from the simulated population, and within each replicate six different datasets differing in number and distribution of animals with trait records and availability of genetic marker information were generated. (Co)variance components were estimated using a Bayesian mixed linear-threshold animal model via Gibbs sampling. Residual variances were fixed to zero and a proper prior was used for the genetic covariance matrix. Results Effective sample sizes (ESS) and biases of genetic parameters differed significantly between datasets. Bias of heritability estimates was -6% to +6% for the continuous trait, -6% to +10% for the binary traits of moderate heritability, and -21% to +25% for the binary traits of low heritability. Additive genetic correlations were mostly underestimated between the continuous trait and binary traits of low heritability, under- or overestimated between the continuous trait and binary traits of moderate
Rahman, Md Shafiur; Rahman, Md Mizanur; Gilmour, Stuart; Swe, Khin Thet; Krull Abe, Sarah; Shibuya, Kenji
2018-01-01
Many countries are implementing health system reforms to achieve universal health coverage (UHC) by 2030. To understand the progress towards UHC in Bangladesh, we estimated trends in indicators of the health service and of financial risk protection. We also estimated the probability of Bangladesh's achieving of UHC targets of 80% essential health-service coverage and 100% financial risk protection by 2030. We estimated the coverage of UHC indicators-13 prevention indicators and four treatment indicators-from 19 nationally representative population-based household surveys done in Bangladesh from Jan 1, 1991, to Dec 31, 2014. We used a Bayesian regression model to estimate the trend and to predict the coverage of UHC indicators along with the probabilities of achieving UHC targets of 80% coverage of health services and 100% coverage of financial risk protection from catastrophic and impoverishing health payments by 2030. We used the concentration index and relative index of inequality to assess wealth-based inequality in UHC indicators. If the current trends remain unchanged, we estimated that coverage of childhood vaccinations, improved water, oral rehydration treatment, satisfaction with family planning, and non-use of tobacco will achieve the 80% target by 2030. However, coverage of four antenatal care visits, facility-based delivery, skilled birth attendance, postnatal checkups, care seeking for pneumonia, exclusive breastfeeding, non-overweight, and adequate sanitation were not projected to achieve the target. Quintile-specific projections showed wide wealth-based inequality in access to antenatal care, postnatal care, delivery care, adequate sanitation, and care seeking for pneumonia, and this inequality was projected to continue for all indicators. The incidence of catastrophic health expenditure and impoverishment were projected to increase from 17% and 4%, respectively, in 2015, to 20% and 9%, respectively, by 2030. Inequality analysis suggested that
Modified Decoding Algorithm of LLR-SPA
Directory of Open Access Journals (Sweden)
Zhongxun Wang
2014-09-01
Full Text Available In wireless sensor networks, energy consumption occurs mainly in the stage of information transmission. Low Density Parity Check codes can make full use of the channel information to save energy. This paper proposes a modification of the widely used LLR-SPA (Sum-Product Algorithm in the log-likelihood domain) decoding algorithm for Low Density Parity Check codes to improve decoding accuracy. In the modified algorithm, a piecewise linear function is used to approximate the complicated Jacobi correction term of LLR-SPA decoding: tangent lines are constructed at tangency points on the Jacobi correction function using a first-order Taylor series. In this way, the proposed piecewise linear approximation offers almost a perfect match to the Jacobi correction term while avoiding logarithmic operations, which makes it more suitable for practical implementation. Simulation results show that the proposed algorithm improves decoding accuracy considerably without a noticeable increase in computational complexity.
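The tangent construction can be sketched as follows. The tangency points chosen below are hypothetical, not the ones selected in the paper, but the construction is the same idea: first-order Taylor tangents to the Jacobi correction term log(1 + e^{-|x|}), combined piecewise (the term is convex in |x|, so tangents lie below it and their maximum is the piecewise approximation):

```python
import numpy as np

# Jacobi correction term used in exact LLR-SPA box-plus computation
def jacobi(x):
    return np.log1p(np.exp(-np.abs(x)))

# Piecewise linear stand-in built from first-order Taylor tangents at a few
# hypothetical tangency points, clipped at zero for large |x|.
def jacobi_pwl(x):
    x = np.abs(x)
    tangents = []
    for a in (0.5, 1.5, 3.0):
        fa = np.log1p(np.exp(-a))                  # f(a)
        slope = -np.exp(-a) / (1 + np.exp(-a))     # f'(a)
        tangents.append(fa + slope * (x - a))      # tangent line at a
    # the function is convex, so the max of its tangents approximates it
    return np.maximum(np.max(tangents, axis=0), 0.0)

xs = np.linspace(0, 6, 601)
err = np.max(np.abs(jacobi(xs) - jacobi_pwl(xs)))
print(f"max approximation error on [0, 6]: {err:.3f}")
```

Adding more tangency points shrinks the error further; the hardware win is that each segment needs only a multiply-add instead of a logarithm.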
Symbol synchronization for the TDRSS decoder
Costello, D. J., Jr.
1983-01-01
Each 8 bits out of the Viterbi decoder correspond to one symbol of the R/S code. Synchronization must be maintained here so that each 8-bit symbol delivered to the R/S decoder corresponds to an 8-bit symbol from the R/S encoder. Lack of synchronization would cause an error in almost every R/S symbol, since even a 1-bit sync slip shifts every bit in each 8-bit symbol by one position, thereby confusing the mapping between 8-bit sequences and symbols. The error correcting capability of the R/S code would be exceeded. Possible ways to correct this condition include: (1) designing the R/S decoder to recognize the overload and shifting the output sequence of the inner decoder to establish a different sync state; (2) using the characteristics of the inner decoder to establish symbol synchronization for the outer code, with or without a deinterleaver and an interleaver; and (3) modifying the encoder to alternate periodically between two sets of generators.
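The effect of a single-bit slip described above is easy to demonstrate: re-framing the same bit stream one position late changes almost every 8-bit symbol. The stream length and random data below are arbitrary:

```python
# A 1-bit sync slip shifts every subsequent bit, so nearly every 8-bit
# Reed-Solomon symbol re-assembled downstream is wrong.
import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(8 * 100)]   # 100 byte symbols

def to_symbols(stream):
    # group the bit stream into consecutive 8-bit symbols
    return [tuple(stream[i:i + 8]) for i in range(0, len(stream) - 7, 8)]

aligned = to_symbols(bits)
slipped = to_symbols(bits[1:])           # same stream, one bit of sync slip
wrong = sum(a != b for a, b in zip(aligned, slipped))
print(f"{wrong} of {len(slipped)} symbols corrupted by a 1-bit slip")
```

With nearly every symbol corrupted, the R/S code's error-correcting capability is immediately exceeded, which is why the symbol-level resynchronization strategies above are needed.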
Completion time reduction in instantly decodable network coding through decoding delay control
Douik, Ahmed S.
2014-12-01
For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and a lower mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.
Neuroprosthetic Decoder Training as Imitation Learning.
Directory of Open Access Journals (Sweden)
Josh Merel
2016-05-01
Full Text Available Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
Decoding Hermitian Codes with Sudan's Algorithm
DEFF Research Database (Denmark)
Høholdt, Tom; Nielsen, Rasmus Refslund
1999-01-01
We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct...
High Speed Frame Synchronization and Viterbi Decoding
DEFF Research Database (Denmark)
Paaske, Erik; Justesen, Jørn; Larsen, Knud J.
1996-01-01
The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also, th... implementation uses a number of commercially available decoders, while the two others are completely new implementations aimed at ASICs, one for a data rate of 75 Mbit/s and the second for a data rate of 150 Mbit/s.
Word-decoding as a function of temporal processing in the visual system.
Directory of Open Access Journals (Sweden)
Steven R Holloway
Full Text Available This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits with a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that consisted of Landolt C targets randomly presented in four cardinal orientations, at 3 radial distances from a focus point, for eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.
Generalized Sudan's List Decoding for Order Domain Codes
DEFF Research Database (Denmark)
Geil, Hans Olav; Matsumoto, Ryutaroh
2007-01-01
We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than the original list decoding without multiplicity.
Tail Biting Trellis Representation of Codes: Decoding and Construction
Shao, Rose Y.; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises, one unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all the previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.
Decoding of concatenated codes with interleaved outer codes
DEFF Research Database (Denmark)
Justesen, Jørn; Thommesen, Christian; Høholdt, Tom
2004-01-01
Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
Punctured Parallel Viterbi Decoding Of Long Constraint Length ...
African Journals Online (AJOL)
This paper considers puncturing applied to the parallel Viterbi decoding of long constraint length convolutional codes involving partitioning of the basic code. It is shown that the combination of the two techniques of parallel Viterbi decoding and puncturing provides a considerable reduction in decoding complexity, which ...
Decoding speech perception from single cell activity in humans.
Ossmy, Ori; Fried, Itzhak; Mukamel, Roy
2015-08-15
Deciphering the content of continuous speech is a challenging task performed daily by the human brain. Here, we tested whether activity of single cells in auditory cortex could be used to support such a task. We recorded neural activity from auditory cortex of two neurosurgical patients while presented with a short video segment containing speech. Population spiking activity (~20 cells per patient) allowed detection of word onset and decoding the identity of perceived words with significantly high accuracy levels. Oscillation phase of local field potentials (8-12Hz) also allowed decoding word identity although with lower accuracy levels. Our results provide evidence that the spiking activity of a relatively small population of cells in human primary auditory cortex contains significant information for classification of words in ongoing speech. Given previous evidence for overlapping neural representation during speech perception and production, this may have implications for developing brain-machine interfaces for patients with deficits in speech production. Copyright © 2015 Elsevier Inc. All rights reserved.
Performance breakdown in optimal stimulus decoding
Czech Academy of Sciences Publication Activity Database
Košťál, Lubomír; Lánský, Petr; Pilarski, Stevan
2015-01-01
Roč. 12, č. 3 (2015), 036012 ISSN 1741-2560 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : decoding accuracy * Fisher information * threshold effect Subject RIV: BD - Theory of Information Impact factor: 3.493, year: 2015
Older Adults Have Difficulty in Decoding Sarcasm
Phillips, Louise H.; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley
2015-01-01
Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are…
Turbo decoder core design for system development
Chen, Xiaoyi; Yao, Qingdong; Liu, Peng
2003-04-01
Due to their near Shannon-capacity performance, turbo codes have received a considerable amount of attention since their introduction. They are particularly attractive for cellular communication systems and have been included in the specifications for both the WCDMA (UMTS) and cdma2000 third-generation cellular standards. The log-MAP decoding algorithm, together with several techniques for reducing its complexity, has been discussed in the literature. However, when turbo codes are applied to wireless communications, the decoding rate becomes the bottleneck: a software implementation is not realistic at today's DSP processing rates, so a hardware design is needed to realize the decoding. The purpose of this paper is to present a full ASIC design of a turbo decoder. Several techniques are added to the basic log-MAP algorithm to simplify the design and improve performance. In the log-MAP algorithm, the Jacobi logarithm is computed exactly as max*(x, y) = ln(exp(x) + exp(y)) = max(x, y) + fc(|y - x|). The correction function fc(|y - x|) is important because omitting it costs about 0.5 dB of SNR. A linear approximation can be used, and its parameters were selected carefully to suit hardware realization in our design. To save power while preserving performance, quantization is important in an ASIC design; we adopt a compromise scheme that saves power while maintaining good BER behavior. Many noisy frames can be corrected within a few iterations, while only the noisiest frames need to run the full number of iterations (NOI). Thus, if the decoder can stop iterating as soon as a frame becomes correct, the average NOI is reduced. There are many stopping criteria, such as CRC checks and comparison-based rules; we adopt a stopping criterion that requires significantly less computation and much less storage. For long frames, the memory for storing the forward probability α or the backward probability β over an entire frame can be very large. Available products all
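The Jacobi logarithm and its hardware-friendly linear approximation described above can be sketched as follows. The slope and threshold of the linear correction here are illustrative values, not the parameters selected in the paper:

```python
import math

def max_star_exact(x, y):
    """Jacobi logarithm: max*(x, y) = ln(e^x + e^y) = max(x, y) + fc(|y - x|)."""
    return max(x, y) + math.log1p(math.exp(-abs(y - x)))

def max_star_linear(x, y, a=0.25, t=2.5):
    """Linear approximation of the correction term: fc(d) ~= a * (t - d) for
    d < t, else 0. The slope a and threshold t are illustrative, chosen only
    to show the shape of a hardware-friendly approximation."""
    d = abs(y - x)
    fc = a * (t - d) if d < t else 0.0
    return max(x, y) + fc

# Dropping the correction entirely (plain max) is what costs ~0.5 dB of SNR;
# the linear form recovers most of it at trivial hardware cost.
for x, y in [(0.0, 0.0), (1.0, 3.0), (-2.0, 4.0)]:
    print(max_star_exact(x, y), max_star_linear(x, y))
```

For well-separated inputs the correction vanishes and both forms reduce to a plain max, which is why the approximation is cheap in hardware.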
Unsupervised adaptation of brain machine interface decoders
Directory of Open Access Journals (Sweden)
Tayfun eGürel
2012-11-01
Full Text Available The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMIs) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by use of periodical calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI, and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be for the BMI decoder to adapt continuously in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm in evaluating the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and nonstationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally also work with neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...
Kleibergen, F.R.; Kleijn, R.; Paap, R.
2000-01-01
We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
Decoding subjective mental states from fMRI activity patterns
International Nuclear Information System (INIS)
Tamaki, Masako; Kamitani, Yukiyasu
2011-01-01
In recent years, functional magnetic resonance imaging (fMRI) decoding has emerged as a powerful tool to read out detailed stimulus features from multi-voxel brain activity patterns. Moreover, the method has been extended to perform a primitive form of 'mind-reading,' by applying a decoder 'objectively' trained using stimulus features to more 'subjective' conditions. In this paper, we first introduce basic procedures for fMRI decoding based on machine learning techniques. Second, we discuss the source of information used for decoding, in particular, the possibility of extracting information from subvoxel neural structures. We next introduce two experimental designs for decoding subjective mental states: the 'objective-to-subjective design' and the 'subjective-to-subjective design.' Then, we illustrate recent studies on the decoding of a variety of mental states, such as, attention, awareness, decision making, memory, and mental imagery. Finally, we discuss the challenges and new directions of fMRI decoding. (author)
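The machine-learning procedure underlying fMRI decoding, training a classifier on labeled multi-voxel patterns and testing it on held-out trials, can be sketched on synthetic data. The data, voxel count, and the simple nearest-centroid classifier below are illustrative stand-ins, not the methods of any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-voxel patterns: 2 stimulus classes, 50 voxels,
# 40 training and 20 test trials per class (illustrative, not real fMRI data).
n_vox, n_train, n_test = 50, 40, 20
mu = {0: rng.normal(0, 1, n_vox), 1: rng.normal(0, 1, n_vox)}  # class templates

def trials(label, n):
    """Trials = class template plus independent voxel noise."""
    return mu[label] + rng.normal(0, 1.0, (n, n_vox))

X_train = np.vstack([trials(0, n_train), trials(1, n_train)])
y_train = np.array([0] * n_train + [1] * n_train)
X_test = np.vstack([trials(0, n_test), trials(1, n_test)])
y_test = np.array([0] * n_test + [1] * n_test)

# Nearest-centroid decoder: assign each test pattern to the closer class mean.
centroids = np.vstack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
print("decoding accuracy:", (pred == y_test).mean())  # well above the 0.5 chance level
</nowiki>```

The "objective-to-subjective" designs discussed in the paper amount to fitting the training step on stimulus-driven data and applying the fitted decoder to data from a subjective condition.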
Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation
Directory of Open Access Journals (Sweden)
Thierry Lestable
2008-05-01
Full Text Available This paper presents a new approach to decoding turbo codes using a nonbinary belief propagation decoder. The proposed approach can be decomposed into two main steps. First, a nonbinary Tanner graph representation of the turbo code is derived by clustering the binary parity-check matrix of the turbo code. Then, a group belief propagation decoder runs several iterations on the obtained nonbinary Tanner graph. We show in particular that a preprocessing step on the parity-check matrix of the turbo code is necessary to ensure good topological properties of the Tanner graph, and hence good iterative decoding performance. Finally, by capitalizing on the diversity that comes from the existence of distinct efficient preprocessings, we propose a new decoding strategy, called decoder diversity, that takes advantage of this diversity through collaborative decoding schemes.
SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES
Directory of Open Access Journals (Sweden)
Yogesh Beeharry
2017-05-01
Full Text Available This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.
Long-term asynchronous decoding of arm motion using electrocorticographic signals in monkey
Directory of Open Access Journals (Sweden)
Zenas C Chao
2010-03-01
Full Text Available Brain–machine interfaces (BMIs) employ the electrical activity generated by cortical neurons directly for controlling external devices and have been conceived as a means for restoring human cognitive or sensory-motor functions. The dominant approach in BMI research has been to decode motor variables based on single-unit activity (SUA). Unfortunately, this approach suffers from poor long-term stability and daily recalibration is normally required to maintain reliable performance. A possible alternative is BMIs based on electrocorticograms (ECoGs), which measure population activity and can provide more durable and stable recording. However, the level of long-term stability that ECoG-based decoding can offer remains unclear. Here we propose a novel ECoG-based decoding paradigm and show that we have successfully decoded hand positions and arm joint angles during an asynchronous food-reaching task in monkeys when explicit cues prompting the onset of movement were not required. Performance using our ECoG-based decoder was comparable to existing SUA-based systems while evincing far superior stability and durability. In addition, the same decoder could be used for months without any drift in accuracy or recalibration. These results were achieved by incorporating the spatio-spectro-temporal integration of activity across multiple cortical areas to compensate for the lower fidelity of ECoG signals. These results show the feasibility of high-performance, chronic and versatile ECoG-based neuroprosthetic devices for real-life applications. This new method provides a stable platform for investigating cortical correlates for understanding motor control, sensory perception, and high-level cognitive processes.
Long-Term Asynchronous Decoding of Arm Motion Using Electrocorticographic Signals in Monkeys
Chao, Zenas C.; Nagasaka, Yasuo; Fujii, Naotaka
2009-01-01
Brain–machine interfaces (BMIs) employ the electrical activity generated by cortical neurons directly for controlling external devices and have been conceived as a means for restoring human cognitive or sensory-motor functions. The dominant approach in BMI research has been to decode motor variables based on single-unit activity (SUA). Unfortunately, this approach suffers from poor long-term stability and daily recalibration is normally required to maintain reliable performance. A possible alternative is BMIs based on electrocorticograms (ECoGs), which measure population activity and may provide more durable and stable recording. However, the level of long-term stability that ECoG-based decoding can offer remains unclear. Here we propose a novel ECoG-based decoding paradigm and show that we have successfully decoded hand positions and arm joint angles during an asynchronous food-reaching task in monkeys when explicit cues prompting the onset of movement were not required. Performance using our ECoG-based decoder was comparable to existing SUA-based systems while evincing far superior stability and durability. In addition, the same decoder could be used for months without any drift in accuracy or recalibration. These results were achieved by incorporating the spatio-spectro-temporal integration of activity across multiple cortical areas to compensate for the lower fidelity of ECoG signals. These results show the feasibility of high-performance, chronic and versatile ECoG-based neuroprosthetic devices for real-life applications. This new method provides a stable platform for investigating cortical correlates for understanding motor control, sensory perception, and high-level cognitive processes. PMID:20407639
Neural decoding of visual imagery during sleep.
Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y
2013-05-03
Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
A fast and accurate decoder for underwater acoustic telemetry.
Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A
2014-07-01
The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.
On Lattice Sequential Decoding for The Unconstrained AWGN Channel
Abediseid, Walid
2012-10-01
In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.
On Lattice Sequential Decoding for The Unconstrained AWGN Channel
Abediseid, Walid
2013-04-04
In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.
Novel LDPC Decoder via MLP Neural Networks
Karami, Alireza; Attari, Mahmoud Ahmadian
2014-01-01
In this paper, a new method for decoding Low Density Parity Check (LDPC) codes, based on Multi-Layer Perceptron (MLP) neural networks, is proposed. Because all procedures in neural networks are processed in parallel, this method can be considered a viable alternative to the Message Passing Algorithm (MPA), which has high computational complexity. Our proposed algorithm runs with a soft criterion while not using probabilistic quantities to decide what the estimated codeword is...
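The role of the parity-check matrix H in LDPC-style decoding can be illustrated with a much simpler relative of MPA, Gallager's hard-decision bit-flipping decoder. This toy sketch uses the (7,4) Hamming code as a stand-in for an LDPC code and is not the paper's MLP method:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (a toy stand-in for an LDPC code).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iters=10):
    """Gallager-style hard-decision bit flipping: repeatedly flip the bit
    involved in the most unsatisfied parity checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                      # all parity checks satisfied
        fails = syndrome @ H              # per-bit count of failed checks
        r[fails.argmax()] ^= 1            # flip the worst offender
    return r

codeword = np.zeros(7, dtype=int)         # all-zero codeword
received = codeword.copy()
received[2] ^= 1                          # single channel bit error
print(bit_flip_decode(received, H))       # recovers the all-zero codeword
```

MPA replaces these hard bit counts with soft probabilistic messages exchanged along the edges of H's Tanner graph, which is the computation the paper's MLP aims to parallelize.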
Bayesian data analysis for newcomers.
Kruschke, John K; Liddell, Torrin M
2018-02-01
This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
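The core workflow the article describes, combining a prior with a likelihood to obtain a directly interpretable posterior, can be shown with a grid approximation for a coin's unknown bias. The data (7 heads in 10 flips) and the flat prior are illustrative choices:

```python
from math import comb

# Grid approximation of a Bayesian update: coin with unknown bias theta,
# flat prior over a grid of theta values, binomial likelihood for 7/10 heads.
grid = [i / 200 for i in range(1, 200)]
prior = [1.0 for _ in grid]                          # flat prior
like = [comb(10, 7) * t**7 * (1 - t)**3 for t in grid]
unnorm = [p * l for p, l in zip(prior, like)]
z = sum(unnorm)
posterior = [u / z for u in unnorm]                  # normalized over the grid

post_mean = sum(t * p for t, p in zip(grid, posterior))
print(round(post_mean, 3))  # near the conjugate answer (7+1)/(10+2) ~ 0.667
```

Every summary of interest (mean, credible interval, probability that theta exceeds 0.5) is read straight off the posterior, which is the direct interpretability the article emphasizes.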
Sequential decoders for large MIMO systems
Ali, Konpal S.
2014-05-01
Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. Numerical results are done that show moderate bias values result in a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR have an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.
Bayesian methods for data analysis
Carlin, Bradley P.
2009-01-01
Approaches for statistical inference: Introduction; Motivating Vignettes; Defining the Approaches; The Bayes-Frequentist Controversy; Some Basic Bayesian Models. The Bayes approach: Introduction; Prior Distributions; Bayesian Inference; Hierarchical Modeling; Model Assessment; Nonparametric Methods. Bayesian computation: Introduction; Asymptotic Methods; Noniterative Monte Carlo Methods; Markov Chain Monte Carlo Methods. Model criticism and selection: Bayesian Modeling; Bayesian Robustness; Model Assessment; Bayes Factors via Marginal Density Estimation; Bayes Factors
Statistics: a Bayesian perspective
National Research Council Canada - National Science Library
Berry, Donald A
1996-01-01
...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the significant process, it develops ideas...
Noncausal Bayesian Vector Autoregression
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...
Granade, Christopher; Combes, Joshua; Cory, D. G.
2016-03-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Variational Bayesian Filtering
Czech Academy of Sciences Publication Activity Database
Šmídl, Václav; Quinn, A.
2008-01-01
Roč. 56, č. 10 (2008), s. 5020-5030 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian filtering * particle filtering * Variational Bayes Subject RIV: BC - Control Systems Theory Impact factor: 2.335, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/smidl-variational bayesian filtering.pdf
Bayesian Networks An Introduction
Koski, Timo
2009-01-01
Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained and feature exercises throughout. Features include:.: An introduction to Dirichlet Distribution, Exponential Families and their applications.; A detailed description of learni
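The kind of probabilistic query a Bayesian network answers can be shown with inference by enumeration on the classic rain/sprinkler/wet-grass network. All conditional probabilities below are made-up illustrative numbers:

```python
# Tiny Bayesian network: Rain -> WetGrass <- Sprinkler.
# Query: P(Rain = True | WetGrass = True), by enumeration.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}          # independent of rain here
P_wet = {  # P(WetGrass = True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    """Joint probability factorizes along the network structure."""
    pw = P_wet[(r, s)] if w else 1 - P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * pw

# Sum out Sprinkler, then condition on the evidence WetGrass = True.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))  # prints 0.457: observing wet grass raises P(Rain) from 0.2
```

Enumeration is exponential in the number of variables; the efficient inference algorithms the abstract mentions (e.g. junction-tree methods) exploit the network structure to avoid this blow-up.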
Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding
Douik, Ahmed
2016-06-27
For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to anticipate accurately the decoding delay increase at each user. The paper formulates the problem in this environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time compared to the best-known heuristics for completion time reduction under perfect and imperfect feedback. The gap in performance becomes more significant as the erasure rate of the channel increases.
Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex
Hao, Yaoyao; Zhang, Qiaosheng; Controzzi, Marco; Cipriani, Christian; Li, Yue; Li, Juncheng; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Chiara Carrozza, Maria; Zheng, Xiaoxiang
2014-12-01
Objective. Recent studies have shown that dorsal premotor cortex (PMd), a cortical area in the dorsomedial grasp pathway, is involved in grasp movements. However, the neural ensemble firing property of PMd during grasp movements and the extent to which it can be used for grasp decoding are still unclear. Approach. To address these issues, we used multielectrode arrays to record both spike and local field potential (LFP) signals in PMd in macaque monkeys performing reaching and grasping of one of four differently shaped objects. Main results. Single and population neuronal activity showed distinct patterns during execution of different grip types. Cluster analysis of neural ensemble signals indicated that the grasp related patterns emerged soon (200-300 ms) after the go cue signal, and faded away during the hold period. The timing and duration of the patterns varied depending on the behavior of individual monkeys. Applying a support vector machine model to the stable activity patterns revealed classification accuracies of 94% and 89% for the two monkeys, indicating a robust, decodable grasp pattern encoded in the PMd. Grasp decoding using LFPs, especially the high-frequency bands, also produced high decoding accuracies. Significance. This study is the first to specify the neuronal population encoding of grasp over the time course of the grasp. We demonstrate high grasp decoding performance in PMd. These findings, combined with previous evidence for reach-related modulation, suggest that PMd may play an important role in the generation and maintenance of grasp actions and may be a suitable locus for brain-machine interface applications.
Deep Learning Methods for Improved Decoding of Linear Codes
Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair
2018-02-01
The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
Barnaud, Marie-Lou; Bessière, Pierre; Diard, Julien; Schwartz, Jean-Luc
2017-12-11
While neurocognitive data provide clear evidence for the involvement of the motor system in speech perception, its precise role and the way motor information is involved in perceptual decision remain unclear. In this paper, we discuss some recent experimental results in light of COSMO, a Bayesian perceptuo-motor model of speech communication. COSMO enables us to model both speech perception and speech production with probability distributions relating phonological units with sensory and motor variables. Speech perception is conceived as a sensory-motor architecture combining an auditory and a motor decoder thanks to a Bayesian fusion process. We propose the sketch of a neuroanatomical architecture for COSMO, and we capitalize on properties of the auditory vs. motor decoders to address three neurocognitive studies of the literature. Altogether, this computational study reinforces functional arguments supporting the role of a motor decoding branch in the speech perception process. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
A quantum algorithm for Viterbi decoding of classical convolutional codes
Grice, Jon R.; Meyer, David A.
2014-01-01
We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions. In this paper the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint length $Q$ and short decode frames $N$. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...
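The classical Viterbi algorithm that the QVA builds on can be sketched for a small convolutional code. The rate-1/2 code with octal generators (7, 5) and the injected single-bit error below are illustrative choices, not from the paper:

```python
# Hard-decision Viterbi decoding of a rate-1/2 convolutional code with
# generators (7, 5) octal, constraint length 3 - a minimal illustrative sketch.
G = [0b111, 0b101]

def encode(bits):
    """Shift each input bit into a 3-bit register; emit one parity per generator."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.extend([bin(state & g).count("1") % 2 for g in G])
    return out

def viterbi_decode(received, n_bits):
    """Track the minimum-Hamming-distance path through the 4-state trellis."""
    INF = float("inf")
    metrics, paths = {0: 0.0}, {0: []}
    for t in range(n_bits):
        new_metrics, new_paths = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):
                full = ((state << 1) | b) & 0b111
                expect = [bin(full & g).count("1") % 2 for g in G]
                dist = sum(e != r for e, r in zip(expect, received[2 * t:2 * t + 2]))
                ns = full & 0b011              # next state = last two input bits
                if m + dist < new_metrics.get(ns, INF):
                    new_metrics[ns] = m + dist
                    new_paths[ns] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[min(metrics, key=metrics.get)]  # survivor with best metric

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                     # inject one channel bit error
print(viterbi_decode(rx, len(msg)) == msg)     # prints True: error corrected
```

The trellis survivor computation at each time step is the part whose structure the QVA exploits for speedup.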
Interpolation decoding method with variable parameters for fractal image compression
International Nuclear Information System (INIS)
He Chuanjiang; Li Gaoping; Shen Xiaona
2007-01-01
The interpolation fractal decoding method, introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], generates the decoded image progressively by means of an iterative interpolation procedure with a constant parameter. It is well known that the majority of image details are added in the first iterations of conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
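The iteration-dependent scheme can be illustrated with a scalar contraction standing in for the fractal transform. The real transform operates on images; the map below, with fixed point 4, and the particular parameter schedule are purely illustrative.

```python
def T(x):
    # Stand-in for the contractive fractal transform (fixed point x* = 4).
    return 0.5 * x + 2.0

def interpolation_decode(x0, alphas):
    # x_{n+1} = (1 - a_n) x_n + a_n T(x_n), with iteration-dependent a_n.
    # Small a_n slows the iteration; a_n = 1 recovers plain iteration of T.
    x = x0
    for a in alphas:
        x = (1.0 - a) * x + a * T(x)
    return x

# Slow down the first iterations (small a), then accelerate (a -> 1).
schedule = [0.2] * 5 + [1.0] * 20
x_final = interpolation_decode(0.0, schedule)
```

With a constant small parameter the same accuracy would require far more iterations, which is the trade-off the variable-parameter scheme resolves.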
Sub-quadratic decoding of one-point hermitian codes
DEFF Research Database (Denmark)
Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter
2015-01-01
We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimization algorithms from computer algebra, yielding similar asymptotic complexities.
Joint Decoding of Concatenated VLEC and STTC System
Directory of Open Access Journals (Sweden)
Chen Huijun
2008-01-01
Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth-efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both STTC and VLEC to improve overall decoding performance. Issues such as the inseparable systematic information at the symbol level, the asymmetric trellis structure of VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between STTC and VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher-order modulation of STTC, the joint decoding yields better performance, though with increased complexity.
Bayesian modeling of unknown diseases for biosurveillance.
Shen, Yanna; Cooper, Gregory F
2009-11-14
This paper investigates Bayesian modeling of unknown causes of events in the context of disease-outbreak detection. We introduce a Bayesian approach that models and detects both (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which suggest that this modeling method can improve the detection of new disease outbreaks in a population. A key contribution of this paper is that it introduces a Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has broad applicability in medical informatics, where the space of known causes of outcomes of interest is seldom complete.
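The core idea, combining informative priors for known causes with a relatively non-informative catch-all for unknown ones, can be sketched with hypothetical numbers. All priors and likelihood values below are invented for illustration.

```python
# Hypothetical causes: two known diseases plus one catch-all "unknown".
priors = {"influenza": 0.65, "anthrax": 0.05, "unknown": 0.30}

# P(observed symptom pattern | cause). Known diseases get informative,
# peaked likelihoods; the unknown cause gets a flat, non-informative one.
likelihoods = {"influenza": 0.40, "anthrax": 0.02, "unknown": 0.10}

def posterior(priors, likelihoods):
    # Bayes rule over the discrete set of candidate causes.
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

post = posterior(priors, likelihoods)
```

A surge of the "unknown" posterior across many cases is then the signal that a novel outbreak may be under way.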
Encoding and Decoding Models in Cognitive Electrophysiology.
Holdgraf, Christopher R; Rieger, Jochem W; Micheli, Cristiano; Martin, Stephanie; Knight, Robert T; Theunissen, Frederic E
2017-01-01
Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of "Encoding" models, in which stimulus features are used to model brain activity, and "Decoding" models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best practices in conducting these analyses.
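A minimal encoding-model example in the spirit of this tutorial: simulated stimulus features and neural responses, with ridge regression estimating a linear "receptive field". The dimensions, noise level, and regularization strength are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoding model: neural response = linear function of stimulus features.
n_trials, n_features = 200, 10
X = rng.standard_normal((n_trials, n_features))       # stimulus features
w_true = rng.standard_normal(n_features)              # true "receptive field"
y = X @ w_true + 0.1 * rng.standard_normal(n_trials)  # simulated response

# Ridge-regularized estimate of the receptive field (encoding model).
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
```

The corresponding decoding model would simply swap the roles of `X` and `y`, regressing stimulus features on neural responses.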
Sudan-decoding generalized geometric Goppa codes
DEFF Research Database (Denmark)
Heydtmann, Agnes Eileen
2003-01-01
Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed.
Bayesian phylogeography finds its roots.
Directory of Open Access Journals (Sweden)
Philippe Lemey
2009-09-01
Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.
Bayesian Exploratory Factor Analysis
DEFF Research Database (Denmark)
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...
Decoding bipedal locomotion from the rat sensorimotor cortex
Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.
2015-01-01
Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower
Analysis and Design of Binary Message-Passing Decoders
DEFF Research Database (Denmark)
Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard
2012-01-01
Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results i...
Illustrative examples in a bilingual decoding dictionary: An (un ...
African Journals Online (AJOL)
Abstract: The article discusses the principles underlying the inclusion of illustrative examples in a decoding English–Slovene dictionary. The inclusion of examples in decoding bilingual dictionaries can be justified by taking into account the semantic and grammatical differences between the source and the target languages ...
EXIT Chart Analysis of Binary Message-Passing Decoders
DEFF Research Database (Denmark)
Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard
2007-01-01
Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can...
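A simple hard-decision bit-flipping decoder, a close relative of the Gallager B algorithm mentioned above, can be sketched on a toy parity-check matrix. The matrix and the flip-one-bit-per-iteration rule are illustrative choices, not taken from the paper.

```python
import numpy as np

# Small toy parity-check matrix (LDPC-like, 3 checks on 6 bits).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iter=10):
    # Hard-decision bit flipping: repeatedly flip the bit that participates
    # in the most unsatisfied parity checks until the syndrome is zero.
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                      # valid codeword reached
        counts = H.T @ syndrome           # unsatisfied checks touching each bit
        y[np.argmax(counts)] ^= 1
    return y

codeword = np.zeros(6, dtype=int)         # all-zero codeword
received = codeword.copy()
received[2] ^= 1                          # single bit error on the channel
decoded = bit_flip_decode(received, H)
```

Gallager B proper exchanges per-edge binary messages rather than flipping a single bit globally, but the majority logic on unsatisfied checks is the same ingredient the EXIT-chart analysis tracks.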
Does Linguistic Comprehension Support the Decoding Skills of Struggling Readers?
Blick, Michele; Nicholson, Tom; Chapman, James; Berman, Jeanette
2017-01-01
This study investigated the contribution of linguistic comprehension to the decoding skills of struggling readers. Participants were 36 children aged between eight and 12 years, all below average in decoding but differing in linguistic comprehension. The children read passages from the Neale Analysis of Reading Ability and their first 25 miscues…
Interim Manual for the DST: Decoding Skills Test.
Richardson, Ellis; And Others
The Decoding Skills Test (DST) was developed to provide a detailed measurement of decoding skills which could be used in research on developmental dyslexia. Another purpose of the test is to provide a diagnostic-prescriptive instrument to be used in the evaluation of, and program planning for, children needing remedial reading. The test is…
Word Processing in Dyslexics: An Automatic Decoding Deficit?
Yap, Regina; Van Der Leu, Aryan
1993-01-01
Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)
Decoding suprathreshold stochastic resonance with optimal weights
International Nuclear Information System (INIS)
Xu, Liyan; Vladusich, Tony; Duan, Fabing; Gunn, Lachlan J.; Abbott, Derek; McDonnell, Mark D.
2015-01-01
We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response.
Highlights:
• A weighted summing array of independently noisy binary comparators is investigated.
• We present an optimal linearly weighted decoding scheme for combining the comparator responses.
• We solve for the optimal weights by applying least squares regression to simulated data.
• We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses.
• For some parameter regions, the decrease in MSE distortion due to weighting is negligible.
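The least-squares weighting described in the highlights can be sketched as follows. The array parameters (15 comparators, identical zero thresholds, Gaussian noise) are illustrative choices, not those of the paper, which also studies non-identical threshold distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Suprathreshold stochastic resonance setup: N noisy binary comparators.
N, n_samples, noise_std = 15, 5000, 0.5
x = rng.uniform(-1, 1, n_samples)                    # analog input signal
thresholds = np.zeros(N)                             # identical thresholds
noise = noise_std * rng.standard_normal((n_samples, N))
Y = (x[:, None] + noise > thresholds).astype(float)  # comparator outputs

# Weighted decoding: least-squares regression of x on the responses.
A = np.column_stack([Y, np.ones(n_samples)])         # responses + intercept
w, *_ = np.linalg.lstsq(A, x, rcond=None)
x_hat_weighted = A @ w

# Unweighted decoding: naive linear rescale of the summed responses.
x_hat_unweighted = 2 * Y.mean(axis=1) - 1

mse_w = np.mean((x - x_hat_weighted) ** 2)
mse_u = np.mean((x - x_hat_unweighted) ** 2)
```

Since the unweighted estimator is itself an affine function of the comparator responses, the least-squares solution can never do worse on the fitted data, matching the paper's observation that the gain is sometimes negligible.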
Unsupervised learning of facial emotion decoding skills
Directory of Open Access Journals (Sweden)
Jan Oliver Huelle
2014-02-01
Full Text Available Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practise without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practise effects often observed in cognitive tasks.
Berliner, M.
2017-12-01
Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, organizing and communicating our analyses. Bayesian hierarchical modeling is valuable in quantifying and managing uncertainty in such cases. I review some aspects of the idea emphasizing statistical model development and use in the context of sea-level rise.
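The decision-theoretic template can be illustrated with a toy sea-level example in which we choose an action minimizing posterior expected loss. All numbers (the posterior, costs, and candidate actions) are hypothetical.

```python
# Posterior over sea-level rise in meters, from some earlier analysis
# (hypothetical values for illustration).
posterior = {0.3: 0.5, 0.6: 0.3, 1.0: 0.2}

def expected_loss(height, posterior, build_cost_per_m=1.0, flood_cost=10.0):
    # Loss = cost of building a wall of the given height, plus the
    # expected cost of flooding whenever the rise exceeds the wall.
    loss = build_cost_per_m * height
    loss += sum(p * flood_cost for rise, p in posterior.items() if rise > height)
    return loss

actions = [0.0, 0.5, 0.8, 1.2]
best = min(actions, key=lambda h: expected_loss(h, posterior))
```

In realistic settings the posterior itself would come from a Bayesian hierarchical model, which is precisely where the formalization difficulties the abstract mentions arise.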
Bayesian Exploratory Factor Analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517
Squires, Katie Ellen
2013-01-01
This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…
Directory of Open Access Journals (Sweden)
Jiri eHammer
2013-11-01
Full Text Available In neuronal population signals, including the electroencephalogram (EEG) and electrocorticogram (ECoG), the low-frequency component (LFC) is particularly informative about motor behavior and can be used for decoding movement parameters for brain-machine interface (BMI) applications. An idea previously expressed, but not yet quantitatively tested, is that it is the LFC phase that is the main source of decodable information. To test this issue, we analyzed human ECoG recorded during a game-like, one-dimensional, continuous motor task with a novel decoding method suitable for unfolding magnitude and phase explicitly into a complex-valued, time-frequency signal representation, enabling quantification of the decodable information within the temporal, spatial and frequency domains and allowing disambiguation of the phase contribution from that of the spectral magnitude. The decoding accuracy based only on phase information was substantially (at least 2-fold) and significantly higher than that based only on magnitudes for position, velocity and acceleration. The frequency profile of movement-related information in the ECoG data matched well with the frequency profile expected when assuming a close time-domain correlate of movement velocity in the ECoG, e.g., a (noisy) copy of hand velocity. No such match was observed with the frequency profiles expected when assuming a copy of either hand position or acceleration. There was also no indication of additional magnitude-based mechanisms encoding movement information in the LFC range. Thus, our study contributes to elucidating the nature of the informative low-frequency component of motor cortical population activity and may hence contribute to improve decoding strategies and BMI performance.
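The phase-versus-magnitude comparison can be caricatured with synthetic data in which the phase of a complex time-frequency coefficient tracks velocity while the magnitude carries only noise. All signal parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A complex coefficient whose PHASE encodes velocity and whose
# MAGNITUDE is uninformative noise (synthetic illustration).
n = 1000
velocity = rng.standard_normal(n)
phase = (np.pi / 2) * np.tanh(velocity)          # phase tracks velocity
magnitude = 1.0 + 0.1 * rng.standard_normal(n)   # magnitude: pure noise
z = magnitude * np.exp(1j * phase)

def decoding_accuracy(features, target):
    # Linear decoding; accuracy = correlation of prediction with target.
    A = np.column_stack([features, np.ones(len(target))])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.corrcoef(A @ w, target)[0, 1]

# Phase-based features (real/imaginary parts) vs. magnitude-only features.
acc_phase = decoding_accuracy(np.column_stack([z.real, z.imag]), velocity)
acc_mag = decoding_accuracy(np.abs(z)[:, None], velocity)
```

Representing the coefficient by its real and imaginary parts, rather than by magnitude alone, is the disambiguation step the paper's complex-valued decoding method performs on real ECoG data.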
Bayesian methods for hackers: probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice, freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
Bayesian logistic regression analysis
Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.
2012-01-01
In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an
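A minimal Bayesian logistic regression, using a grid approximation to the posterior over a single slope on simulated data. This is a generic illustration of the model class; the paper's Jacobian-based derivation is not reproduced here, and all settings (prior, grid, data) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate 1-D logistic data (intercept fixed at 0 for simplicity).
x = rng.standard_normal(100)
beta_true = 1.5
y = rng.binomial(1, 1 / (1 + np.exp(-beta_true * x)))

# Grid approximation to the posterior over the slope.
betas = np.linspace(-5, 5, 1001)
log_prior = -0.5 * betas ** 2 / 4.0              # N(0, 2^2) prior
logits = np.outer(betas, x)                      # (grid, data) logits
log_lik = (y * logits - np.log1p(np.exp(logits))).sum(axis=1)
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())         # unnormalized, stabilized
post /= post.sum()                               # normalize on the grid

beta_mean = (betas * post).sum()                 # posterior mean of the slope
```

The same grid posterior can be pushed through the inverse-logit to obtain the posterior distribution of the event probability at any covariate value, which is the quantity the paper focuses on.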
Bayesian statistical inference
Directory of Open Access Journals (Sweden)
Bruno De Finetti
2017-04-01
Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian Statistical Inference is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.
Bayesian optimization for materials science
Packwood, Daniel
2017-01-01
This book provides a short and concise introduction to Bayesian optimization specifically for experimental and computational materials scientists. After explaining the basic idea behind Bayesian optimization and some applications to materials science in Chapter 1, the mathematical theory of Bayesian optimization is outlined in Chapter 2. Finally, Chapter 3 discusses an application of Bayesian optimization to a complicated structure optimization problem in computational surface science. Bayesian optimization is a promising global optimization technique that originates in the field of machine learning and is starting to gain attention in materials science. For the purpose of materials design, Bayesian optimization can be used to predict new materials with novel properties without extensive screening of candidate materials. For the purpose of computational materials science, Bayesian optimization can be incorporated into first-principles calculations to perform efficient, global structure optimizations. While re...
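A toy one-dimensional Bayesian optimization loop in the spirit of the book's Chapter 1, with a simple RBF-kernel Gaussian process and the expected-improvement acquisition. The objective stands in for a hypothetical material property; all settings (kernel length scale, grid, initial points) are illustrative.

```python
import numpy as np
from math import erf, sqrt, pi

def kernel(a, b, ls=0.3):
    # Squared-exponential (RBF) covariance on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian process posterior mean and std at test points Xs.
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(kernel(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))   # normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)         # normal PDF
    return (mu - best) * Phi + sigma * phi

def objective(x):
    # Hypothetical "material property" to maximize; optimum at x = 0.7.
    return -(x - 0.7) ** 2

grid = np.linspace(0, 1, 101)
X = np.array([0.1, 0.5, 0.9])           # initial experiments
y = objective(X)
for _ in range(10):                      # Bayesian optimization loop
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

x_best = X[np.argmax(y)]
```

Each loop iteration plays the role of one (expensive) experiment or first-principles calculation, which is why so few evaluations are needed compared with exhaustive screening.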
Decoding the mechanisms of Antikythera astronomical device
Lin, Jian-Liang
2016-01-01
This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...
Academic Training - Bioinformatics: Decoding the Genome
Chris Jones
2006-01-01
ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
O2-GIDNC: Beyond instantly decodable network coding
Aboutorab, Neda
2013-06-01
In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decoding. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to the traditional GIDNC scheme. The completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.
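The receiver-side classification at the heart of this scheme can be sketched as follows: a coded packet is instantly decodable when it contains exactly one missing packet, while the O2 extension additionally stores packets with exactly two missing components. Packet names are hypothetical.

```python
def classify(coded_packets, has):
    # coded_packets: set of packet ids XOR-ed together by the sender.
    # has: set of packet ids the receiver already holds.
    missing = coded_packets - has
    if len(missing) == 1:
        return ("decode", missing.pop())   # instantly decodable (GIDNC)
    if len(missing) == 2:
        return ("store", missing)          # NIDP kept for later (O2-GIDNC)
    return ("discard", None)               # too many unknowns to be useful

# Receiver holds p1 and p2; sender broadcasts p1 XOR p3.
action = classify({"p1", "p3"}, {"p1", "p2"})   # -> ("decode", "p3")
```

A stored two-missing packet becomes decodable as soon as one of its two components arrives by another transmission, which is the extra decoding opportunity the O2-GIDNC graph captures.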
Encoder-decoder optimization for brain-computer interfaces.
Directory of Open Access Journals (Sweden)
Josh Merel
2015-06-01
Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
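A toy linear version of the joint encoder-decoder optimization: a "user" encoder and a decoder alternately re-fit to each other by least squares. The dimensions and the alternation rule are illustrative stand-ins for the paper's framework, not its actual numerical methods.

```python
import numpy as np

rng = np.random.default_rng(4)

# Intended 2-D cursor targets X; "neural" activity Z = X @ E;
# decoded output X_hat = Z @ D.
n, d_task, d_neural = 500, 2, 8
X = rng.standard_normal((n, d_task))
E = rng.standard_normal((d_task, d_neural))    # user's initial encoding

for _ in range(5):                             # co-adaptation loop
    Z = X @ E                                  # activity under current encoder
    D, *_ = np.linalg.lstsq(Z, X, rcond=None)  # decoder adapts to the user
    E = np.linalg.pinv(D)                      # user adapts to the decoder

err = np.mean((X @ E @ D - X) ** 2)            # residual cursor error
```

In this noiseless linear setting the alternation converges to an encoder/decoder pair with zero error, consistent with the paper's point that a well-matched fixed decoder plus user learning already achieves the optimum.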
Encoding and Decoding Models in Cognitive Electrophysiology
Directory of Open Access Journals (Sweden)
Christopher R. Holdgraf
2017-09-01
Full Text Available Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.
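The core of a linear encoding model of the kind the tutorial covers is a regularized regression from stimulus features to a recording channel. The sketch below uses simulated data and an arbitrary ridge penalty; it only illustrates the direction of the stimulus-to-brain mapping, not the paper's notebooks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy encoding-model fit: the neural response is modeled as a weighted
# sum of stimulus features, and the weights (the "receptive field") are
# estimated with ridge regression. Sizes and noise are illustrative.
n_t, n_feat = 1000, 16
X = rng.normal(size=(n_t, n_feat))           # stimulus features over time
w_true = rng.normal(size=n_feat)             # ground-truth receptive field
y = X @ w_true + 0.5 * rng.normal(size=n_t)  # simulated neural response

lam = 1.0                                    # ridge penalty (arbitrary here)
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

corr = np.corrcoef(w_true, w_hat)[0, 1]
print(f"receptive-field recovery correlation: {corr:.3f}")
```

A decoding model simply reverses the direction of the regression, predicting stimulus features from neural responses with the same machinery.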
Turbo decoder architecture for beyond-4G applications
Wong, Cheng-Chi
2013-01-01
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond 4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respec
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine... in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization.
Arregui, Iñigo
2018-01-01
In contrast to the situation in a laboratory, the study of the solar atmosphere has to be pursued without direct access to the physical conditions of interest. Information is therefore incomplete and uncertain, and inference methods need to be employed to diagnose the physical conditions and processes. One such method, solar atmospheric seismology, makes use of observed and theoretically predicted properties of waves to infer plasma and magnetic field properties. A recent development in solar atmospheric seismology is the use of inversion and model comparison methods based on Bayesian analysis. In this paper, the philosophy and methodology of Bayesian analysis are first explained. Then, we provide an account of what has been achieved so far from the application of these techniques to solar atmospheric seismology and a prospect of possible future extensions.
Mørup, Morten; Schmidt, Mikkel N
2012-09-01
Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not capture this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability, across psychological aspects of formulating subjective probability statements and abstract measure-theoretical considerations, to contributions to theoretical statistics an...
Energy Technology Data Exchange (ETDEWEB)
Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-15
These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H₀.
Bayesian networks in reliability
Energy Technology Data Exchange (ETDEWEB)
Langseth, Helge [Department of Mathematical Sciences, Norwegian University of Science and Technology, N-7491 Trondheim (Norway)]. E-mail: helgel@math.ntnu.no; Portinale, Luigi [Department of Computer Science, University of Eastern Piedmont ' Amedeo Avogadro' , 15100 Alessandria (Italy)]. E-mail: portinal@di.unipmn.it
2007-01-15
Over the last decade, Bayesian networks (BNs) have become a popular tool for modelling many kinds of statistical problems. We have also seen a growing interest for using BNs in the reliability analysis community. In this paper we will discuss the properties of the modelling framework that make BNs particularly well suited for reliability applications, and point to ongoing research that is relevant for practitioners in reliability.
DEFF Research Database (Denmark)
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.
2015-01-01
A large literature suggests that many individuals do not apply Bayes' Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental economics, with careful controls for the confounding effects of risk aversion. Our results show that risk aversion significantly alters inferences on deviations from Bayes' Rule.
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Bayesian theory and applications
Dellaportas, Petros; Polson, Nicholas G; Stephens, David A
2013-01-01
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...
Learning from Decoding across Disciplines and within Communities of Practice
Miller-Young, Janice; Boman, Jennifer
2017-01-01
This final chapter synthesizes the findings and implications derived from applying the Decoding the Disciplines model across disciplines and within communities of practice. We make practical suggestions for teachers and researchers who wish to apply and extend this work.
Effect of video decoder errors on video interpretability
Young, Darrell L.
2014-06-01
Advances in video compression technology can result in more sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.
Partially blind instantly decodable network codes for lossy feedback environment
Sorour, Sameh
2014-09-01
In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such a lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
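The defining property of IDNC can be shown in a few lines: a coded combination (an XOR of source packets) is instantly decodable for a receiver iff it covers at most one of that receiver's missing packets. The helper names and the set-based semantics below are a minimal sketch of this idea, not the paper's formulation.

```python
# Minimal sketch (assumed semantics): each receiver is described by the
# set of packet ids it already has; a transmission is the XOR of a set
# of packet ids. The combination is instantly decodable for a receiver
# iff it covers at most one of that receiver's missing packets.
def instantly_decodable(combination, has):
    missing = combination - has
    return len(missing) <= 1

def served_receivers(combination, receivers):
    """Receivers that decode a NEW packet from this transmission
    (exactly one packet of the combination is missing for them)."""
    return [i for i, has in enumerate(receivers)
            if len(combination - has) == 1]

receivers = [{1, 2}, {2, 3}, {1, 3}]       # "has" sets over packets {1, 2, 3}
combo = {1, 3}                             # send p1 XOR p3
print(served_receivers(combo, receivers))  # → [0, 1]
```

Receiver 2 already has both packets of the combination, so it decodes nothing new; the selection problem the paper studies is choosing the combination that serves the most receivers under uncertain feedback.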
Evaluation framework for K-best sphere decoders
Shen, Chungan
2010-08-01
While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching type decoder structures such as the Sphere decoder and K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature with widely varying performance-to-area and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding techniques and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare it to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
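The K-best algorithm the paper surveys is a breadth-first tree search: after a QR decomposition of the channel, symbols are detected layer by layer while only the K lowest-cost partial candidates survive each level. The following is a sketch of the algorithmic idea on a small noiseless BPSK example, not a VLSI-style implementation; the channel matrix is a toy, well-conditioned choice.

```python
import numpy as np

def k_best_decode(H, y, symbols=(-1.0, 1.0), K=4):
    """Breadth-first K-best tree search over the symbol alphabet
    for the real-valued model y = H s + n (sketch only)."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                       # y = H s + n  =>  z = R s + Q^T n
    partials = [((), 0.0)]            # (symbols for rows level..n-1, cost)
    for level in range(n - 1, -1, -1):
        expanded = []
        for cand, cost in partials:
            for s in symbols:
                full = (s,) + cand
                resid = z[level] - R[level, level:] @ np.array(full)
                expanded.append((full, cost + resid ** 2))
        partials = sorted(expanded, key=lambda c: c[1])[:K]  # keep K best
    return np.array(partials[0][0])

# Noiseless 4x4 BPSK example with a diagonally dominant channel.
H = 2.0 * np.eye(4) + 0.1
s = np.array([1.0, -1.0, -1.0, 1.0])
y = H @ s
print(np.array_equal(k_best_decode(H, y), s))  # → True
```

Shrinking K trades detection performance for a smaller, fixed search effort per level, which is exactly the complexity/performance trade-off the paper benchmarks.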
Bayesian analysis in plant pathology.
Mila, A L; Carriquiry, A L
2004-09-01
Bayesian methods are currently much discussed and applied in several disciplines from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments and analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.
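The process described (fit a probability model, summarize via posterior distributions) can be made concrete with the simplest conjugate example; the disease-incidence numbers below are illustrative, not from the paper.

```python
# A minimal Bayesian update (conjugate Beta-Binomial): infer a disease
# incidence rate p after observing k infected plants out of n sampled.
a, b = 1, 1          # uniform Beta(1, 1) prior on p
k, n = 7, 20         # hypothetical data: 7 of 20 plants infected

a_post, b_post = a + k, b + n - k          # Beta(a + k, b + n - k) posterior
post_mean = a_post / (a_post + b_post)
print(f"posterior mean of p: {post_mean:.3f}")  # → 0.364
```

The whole posterior distribution, not just its mean, is available for prediction and decision making, which is the summarization step the abstract emphasizes.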
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of high-complexity decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoding complexity to approximately one hundredth of that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
Decoding P4-ATPase substrate interactions.
Roland, Bartholomew P; Graham, Todd R
Cellular membranes display a diversity of functions that are conferred by the unique composition and organization of their proteins and lipids. One important aspect of lipid organization is the asymmetric distribution of phospholipids (PLs) across the plasma membrane. The unequal distribution of key PLs between the cytofacial and exofacial leaflets of the bilayer creates physical surface tension that can be used to bend the membrane and, like Ca²⁺, a chemical gradient that can be used to transduce biochemical signals. PL flippases in the type IV P-type ATPase (P4-ATPase) family are the principal transporters used to set and repair this PL gradient, and the asymmetric organization of these membranes is encoded by the substrate specificity of these enzymes. Thus, understanding the mechanisms of P4-ATPase substrate specificity will help reveal their role in membrane organization and cell biology. Further, decoding the structural determinants of substrate specificity provides investigators the opportunity to mutationally tune this specificity to explore the role of particular PL substrates in P4-ATPase cellular functions. This work reviews the role of P4-ATPases in membrane biology, presents our current understanding of P4-ATPase substrate specificity, and discusses how these fundamental aspects of P4-ATPase enzymology may be used to enhance our knowledge of cellular membrane biology.
Rate Aware Instantly Decodable Network Codes
Douik, Ahmed
2016-02-26
This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such a scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms uncoded transmission strategies and the classical IDNC scheme. © 2015 IEEE.
Fast mental states decoding in mixed reality.
Directory of Open Access Journals (Sweden)
Daniele eDe Massari
2014-11-01
Full Text Available The combination of Brain-Computer Interface technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information on ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during a mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single-trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest the applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in mixed reality.
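The single-trial classification step can be sketched with Fisher's LDA on windowed features; everything below (feature dimensions, class means, noise) is synthetic stand-in data, not the study's EEG, and the windowing itself is abstracted away.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical "mental states" producing band-power-like feature
# vectors with different means (illustrative numbers only).
n_feat, n_train = 8, 400
mean0, mean1 = np.zeros(n_feat), 0.8 * np.ones(n_feat)
X0 = mean0 + rng.normal(size=(n_train, n_feat))
X1 = mean1 + rng.normal(size=(n_train, n_feat))

# Fisher's linear discriminant: w = Sw^{-1} (m1 - m0), threshold midway.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)          # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2

def classify(x):
    return int(w @ x > threshold)

# Single-trial evaluation on fresh windows.
X_test = np.vstack([mean0 + rng.normal(size=(100, n_feat)),
                    mean1 + rng.normal(size=(100, n_feat))])
y_true = np.array([0] * 100 + [1] * 100)
preds = np.array([classify(x) for x in X_test])
acc = np.mean(preds == y_true)
print(f"single-trial accuracy: {acc:.2f}")
```

In the sliding-window scheme the abstract describes, `classify` would be applied to features from each 5 s window, re-evaluated every 200 ms.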
Decoding reality the universe as quantum information
Vedral, Vlatko
2010-01-01
In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...
DECODING OF STRUCTURALLY AND LOGICAL CODES
Directory of Open Access Journals (Sweden)
Yu. D. Ivanov
2016-01-01
Full Text Available The article describes the main points of structural and logical coding and the features of SLC codes. It presents the basic steps of the generalized algorithm for decoding SLC, which is based on the method of perfect matrix arrangement (PMA) of the n-dimensional cube vertices for adequate representation and transformation of Boolean functions, and on a method of generating sequences of variables for building the maximum coverage of the cube vertices. Structural and logical codes (SLC) use the natural logic redundancy of the infimum disjunctive normal forms (IDNF) of Boolean functions, which forms the basis for building SLC codes and correcting the errors that occur during data transfer in real discrete channels with independent errors. The main task is to define the basic relations between the logical redundancy of the implemented SLC codes and the boundary values of the multiplicity of the independent errors which are corrected. The principal difference between SLC codes and well-known correcting codes is that the redundancy needed to correct errors when converting discrete information is not introduced as an additional code sequence but arises naturally during the construction of the SLC codewords.
Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping
2017-01-01
An extensive literature has shown approaches for decoding upper limb kinematics or muscle activity from multichannel cortical spike recordings toward brain machine interface (BMI) applications. However, similar topics regarding the lower limb remain relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity, characterized by electromyography (EMG) signals, during monkey stand/squat movements can be accurately decoded from neural spike trains in primary motor cortex (M1). Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted in 8 right leg/thigh muscles. With ample data collected from neurons over a large brain area, we performed a spike-triggered average (SpTA) analysis and obtained a series of density contours which revealed the spatial distributions of different muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural data recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and the actual data were achieved for both EMG signals and kinematics in both monkeys. Higher decoding accuracy and a faster convergence rate could be achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat can be accurately decoded from a group of M1 neurons with the proposed
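The standard Kalman filter used as the paper's baseline decoder can be sketched on simulated data: a latent kinematic/EMG-like state evolves linearly and neural activity is a noisy linear readout of it. All matrices, sizes, and noise levels below are illustrative, not fitted to the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate: x_t = A x_{t-1} + w_t (state), y_t = C x_t + v_t (neurons).
T, n_state, n_units = 200, 2, 30
A = np.array([[0.99, 0.0], [0.0, 0.95]])           # state dynamics
C = rng.normal(size=(n_units, n_state))            # linear neural tuning
Q, R = 0.01 * np.eye(n_state), 0.5 * np.eye(n_units)

x = np.zeros(n_state)
xs, ys = [], []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(n_state), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(n_units), R)
    xs.append(x.copy())
    ys.append(y)

# Standard Kalman filter: predict with (A, Q), correct with (C, R).
x_hat, P = np.zeros(n_state), np.eye(n_state)
est = []
for y in ys:
    x_hat, P = A @ x_hat, A @ P @ A.T + Q          # predict
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)                 # Kalman gain
    x_hat = x_hat + K @ (y - C @ x_hat)            # correct
    P = (np.eye(n_state) - K @ C) @ P
    est.append(x_hat.copy())

xs, est = np.array(xs), np.array(est)
cc = np.corrcoef(xs[:, 0], est[:, 0])[0, 1]
print(f"correlation, state dim 0: {cc:.2f}")
```

The unscented Kalman filter variant replaces the linear observation model `C x` with a nonlinear tuning function, propagating sigma points instead of means and covariances through it.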
Congdon, Peter
2014-01-01
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU
Bayesian nonparametric data analysis
Müller, Peter; Jara, Alejandro; Hanson, Tim
2015-01-01
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.
Singer product apertures—A coded aperture system with a fast decoding algorithm
Energy Technology Data Exchange (ETDEWEB)
Byard, Kevin, E-mail: kevin.byard@aut.ac.nz [School of Economics, Faculty of Business, Economics and Law, Auckland University of Technology, Auckland 1142 (New Zealand); Shutler, Paul M.E. [National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616 (Singapore)
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
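The "standard decoding" baseline in this comparison is cyclic cross-correlation of the shadowgram with a balanced decoding array. The sketch below illustrates that generic baseline on a toy 1-D difference-set mask; it is not the Singer-product construction or the paper's induction/direct-vector methods.

```python
import numpy as np

# Toy (7,3,1) cyclic difference-set mask and a single point source.
aperture = np.array([1, 0, 1, 1, 0, 0, 0])       # open positions {0, 2, 3}
obj = np.zeros(7)
obj[2] = 5.0                                     # point source at position 2

# Each source position casts a shifted copy of the mask onto the detector.
shadow = sum(obj[j] * np.roll(aperture, -j) for j in range(7))

# Balanced decoding array: +1 where open, -k/(v-k) where closed.
k, v = aperture.sum(), aperture.size
G = np.where(aperture == 1, 1.0, -k / (v - k))

# Standard decoding: correlate the shadowgram with the decoding array.
decoded = np.array([np.roll(G, -m) @ shadow for m in range(7)])
print(int(np.argmax(decoded)))  # → 2 (the source position is recovered)
```

This per-shift dot product is what makes standard decoding slow for large apertures; the induction and direct vector methods in the paper reach the same reconstruction orders of magnitude faster.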
Singer product apertures—A coded aperture system with a fast decoding algorithm
International Nuclear Information System (INIS)
Byard, Kevin; Shutler, Paul M.E.
2017-01-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
Classification using Bayesian neural nets
J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)
1995-01-01
Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to
Bayesian Data Analysis (lecture 1)
CERN. Geneva
2018-01-01
framework but we will also go into more detail and discuss for example the role of the prior. The second part of the lecture will cover further examples and applications that heavily rely on the bayesian approach, as well as some computational tools needed to perform a bayesian analysis.
Bayesian Data Analysis (lecture 2)
CERN. Geneva
2018-01-01
framework but we will also go into more detail and discuss for example the role of the prior. The second part of the lecture will cover further examples and applications that heavily rely on the bayesian approach, as well as some computational tools needed to perform a bayesian analysis.
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
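The failure mode motivating the paper is easy to demonstrate: with n < d the sample covariance is singular, while even simple shrinkage restores positive definiteness. The linear shrinkage toward a scaled identity below is a generic illustration of why shrinkage is needed, not the BCLASSO sampler itself (which is MCMC-based).

```python
import numpy as np

rng = np.random.default_rng(5)

# Non-full-rank regime: fewer samples (n) than dimensions (d).
d, n = 50, 30
X = rng.normal(size=(n, d))
S = np.cov(X.T)                      # rank <= n - 1 < d, hence singular

# Simple linear shrinkage toward a scaled identity target.
lam = 0.3                            # shrinkage weight (illustrative)
target = np.trace(S) / d * np.eye(d)
S_shrunk = (1 - lam) * S + lam * target

print(np.linalg.matrix_rank(S) < d)              # → True (singular)
print(np.all(np.linalg.eigvalsh(S_shrunk) > 0))  # → True (positive definite)
```

Bayesian approaches achieve the same effect through the prior: the posterior estimate of the precision matrix is automatically regularized away from the singular maximum likelihood solution.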
Bayesian inference with ecological applications
Link, William A
2009-01-01
This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject. Examples drawn from ecology and wildlife research. An essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference. Companion website with analyt...
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
Full Text Available The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration with a couple of simplified examples.
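In the spirit of the article's simplified examples, the Bayesian Monte Carlo machinery can be shown with a minimal Metropolis sampler for the mean of Gaussian data; real gravitational wave analyses sample waveform parameters against detector noise models, whereas the data and target here are purely synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: 100 observations with unknown mean mu (true value 2.0).
data = rng.normal(2.0, 1.0, size=100)

def log_post(mu):
    # Flat prior on mu => log posterior = Gaussian log-likelihood (+ const).
    return -0.5 * np.sum((data - mu) ** 2)

# Random-walk Metropolis sampler.
mu, samples = 0.0, []
for _ in range(20000):
    prop = mu + 0.2 * rng.normal()                     # propose a step
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                      # accept
    samples.append(mu)

post = np.array(samples[5000:])                        # discard burn-in
print(f"posterior mean of mu: {post.mean():.2f}")      # close to the data mean
```

The same accept/reject logic scales to the multi-parameter posteriors of gravitational wave models, where the log-posterior is a match between a template waveform and the detector data.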
Venkataraman, Archana; Duncan, James S.; Yang, Daniel Y.-J.; Pelphrey, Kevin A.
2015-01-01
Resting-state functional magnetic resonance imaging (rsfMRI) studies reveal a complex pattern of hyper- and hypo-connectivity in children with autism spectrum disorder (ASD). Whereas rsfMRI findings tend to implicate the default mode network and subcortical areas in ASD, task fMRI and behavioral experiments point to social dysfunction as a unifying impairment of the disorder. Here, we leverage a novel Bayesian framework for whole-brain functional connectomics that aggregates population differences in connectivity to localize a subset of foci that are most affected by ASD. Our approach is entirely data-driven and does not impose spatial constraints on the region foci or dictate the trajectory of altered functional pathways. We apply our method to data from the openly shared Autism Brain Imaging Data Exchange (ABIDE) and pinpoint two intrinsic functional networks that distinguish ASD patients from typically developing controls. One network involves foci in the right temporal pole, left posterior cingulate cortex, left supramarginal gyrus, and left middle temporal gyrus. Automated decoding of this network by the Neurosynth meta-analytic database suggests high-level concepts of “language” and “comprehension” as the likely functional correlates. The second network consists of the left banks of the superior temporal sulcus, right posterior superior temporal sulcus extending into temporo-parietal junction, and right middle temporal gyrus. Associated functionality of these regions includes “social” and “person”. The abnormal pathways emanating from the above foci indicate that ASD patients simultaneously exhibit reduced long-range or inter-hemispheric connectivity and increased short-range or intra-hemispheric connectivity. Our findings reveal new insights into ASD and highlight possible neural mechanisms of the disorder. PMID:26106561
Can natural selection encode Bayesian priors?
Ramírez, Juan Camilo; Marshall, James A R
2017-08-07
The evolutionary success of many organisms depends on their ability to make decisions based on estimates of the state of their environment (e.g., predation risk) from uncertain information. These decision problems have optimal solutions and individuals in nature are expected to evolve the behavioural mechanisms to make decisions as if using the optimal solutions. Bayesian inference is the optimal method to produce estimates from uncertain data, thus natural selection is expected to favour individuals with the behavioural mechanisms to make decisions as if they were computing Bayesian estimates in typically-experienced environments, although this does not necessarily imply that favoured decision-makers do perform Bayesian computations exactly. Each individual should evolve to behave as if updating a prior estimate of the unknown environment variable to a posterior estimate as it collects evidence. The prior estimate represents the decision-maker's default belief regarding the environment variable, i.e., the individual's default 'worldview' of the environment. This default belief has been hypothesised to be shaped by natural selection and represent the environment experienced by the individual's ancestors. We present an evolutionary model to explore how accurately Bayesian prior estimates can be encoded genetically and shaped by natural selection when decision-makers learn from uncertain information. The model simulates the evolution of a population of individuals that are required to estimate the probability of an event. Every individual has a prior estimate of this probability and collects noisy cues from the environment in order to update its prior belief to a Bayesian posterior estimate with the evidence gained. The prior is inherited and passed on to offspring. Fitness increases with the accuracy of the posterior estimates produced. Simulations show that prior estimates become accurate over evolutionary time. In addition to these 'Bayesian' individuals, we also
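The evolutionary encoding of priors described above can be sketched as a toy simulation. The Beta-Bernoulli model, population size, selection scheme and mutation scale below are all illustrative assumptions, not the authors' actual model:

```python
import random

def posterior_mean(alpha, beta, successes, trials):
    """Posterior mean of a Beta(alpha, beta) prior after observing
    `successes` out of `trials` Bernoulli draws."""
    return (alpha + successes) / (alpha + beta + trials)

def simulate_generation(priors, true_p, n_cues, rng):
    """Score each inherited prior (alpha, beta) by the accuracy of the
    posterior estimate it yields after observing noisy cues."""
    scores = []
    for alpha, beta in priors:
        successes = sum(rng.random() < true_p for _ in range(n_cues))
        estimate = posterior_mean(alpha, beta, successes, n_cues)
        scores.append(-abs(estimate - true_p))  # higher score = fitter
    return scores

rng = random.Random(0)
priors = [(rng.uniform(0.5, 5), rng.uniform(0.5, 5)) for _ in range(50)]
for gen in range(200):
    scores = simulate_generation(priors, true_p=0.2, n_cues=5, rng=rng)
    # truncation selection: the fittest half reproduces with small mutation
    ranked = [p for _, p in sorted(zip(scores, priors), reverse=True)]
    parents = ranked[: len(ranked) // 2]
    priors = [(max(0.1, a + rng.gauss(0, 0.1)), max(0.1, b + rng.gauss(0, 0.1)))
              for a, b in parents for _ in range(2)]

# Under selection, the mean inherited prior estimate alpha/(alpha+beta)
# should drift toward the environment's true probability.
mean_prior = sum(a / (a + b) for a, b in priors) / len(priors)
```

The point of the sketch is that selection acts only on posterior accuracy, yet it indirectly shapes the heritable prior, mirroring the paper's argument.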
Error-correction coding and decoding bounds, codes, decoders, analysis and applications
Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak
2017-01-01
This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
Bayesian nonparametric hierarchical modeling.
Dunson, David B
2009-04-01
In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
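The Dirichlet process mixture idea of avoiding a fixed finite number of classes can be illustrated with scikit-learn's truncated variational approximation. The synthetic data, truncation level and weight threshold below are illustrative choices, not part of the reviewed methods:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic data: two well-separated clusters; the truncated DP mixture
# is given far more components than needed and should leave extras empty.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-4.0, 0.5, size=(200, 1)),
               rng.normal(+4.0, 0.5, size=(200, 1))])

dpmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Effective number of clusters: components with non-negligible weight.
effective = int(np.sum(dpmm.weights_ > 0.05))
```

Unlike a finite mixture with a fixed number of classes, the stick-breaking prior lets the data determine how many components carry appreciable weight.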
DEFF Research Database (Denmark)
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...
Bayesian supervised dimensionality reduction.
Gönen, Mehmet
2013-12-01
Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.
Bayesian Geostatistical Design
DEFF Research Database (Denmark)
Diggle, Peter; Lophaven, Søren Nymand
2006-01-01
This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of interpoint distances should be included in the design, and the widely used regular design is often not the best choice.
Lexical decoder for continuous speech recognition: sequential neural network approach
International Nuclear Information System (INIS)
Iooss, Christine
1991-01-01
The work presented in this dissertation concerns the study of a connectionist architecture for treating sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used, and its abilities and limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study is the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is performed from lattices of phonemes obtained after an acoustic-phonetic decoding stage relying on a K-nearest-neighbours search technique. Tests are carried out on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take sequentiality into account at the input level, to memorize context and to treat noisy or erroneous inputs. (author) [fr]
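As a rough sketch of the Elman architecture evaluated in this dissertation (the dimensions and weight scales are arbitrary assumptions, and no training loop is shown):

```python
import numpy as np

class ElmanRNN:
    """Minimal Elman network: the hidden layer feeds back to itself
    through context units, giving the model a memory of past inputs."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context weights
        self.W_hy = rng.normal(0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # the new hidden state mixes the current input with the previous context
        self.h = np.tanh(self.W_xh @ x + self.W_hh @ self.h)
        return self.W_hy @ self.h

# Feed one-hot "phoneme" vectors; because the context units carry history,
# the same input at different positions in a sequence yields different outputs.
net = ElmanRNN(n_in=4, n_hidden=8, n_out=3)
x = np.eye(4)[0]
y1 = net.step(x)
y2 = net.step(x)   # identical input, different context
```

This context-dependence is what lets such a network exploit sequentiality in a phoneme lattice rather than classifying each frame in isolation.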
Analysis of Minimal LDPC Decoder System on a Chip Implementation
Directory of Open Access Journals (Sweden)
T. Palenik
2015-09-01
Full Text Available This paper presents a practical method for the potential replacement of several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes with a single one, with the intention of saving as much as possible of the memory required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC). The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for use in a Software Defined Radio (SDR) platform. Besides an analysis of the effects of the necessary variable-node value fixation during the Belief Propagation (BP) decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with that of the original system structure by means of simulation.
[Efficacy of decoding training for children with difficulty reading hiragana].
Uchiyama, Hitoshi; Tanaka, Daisuke; Seki, Ayumi; Wakamiya, Eiji; Hirasawa, Noriko; Iketani, Naotake; Kato, Ken; Koeda, Tatsuya
2013-05-01
The present study aimed to clarify the efficacy of decoding training focusing on the correspondence between written symbols and their readings for children with difficulty reading hiragana (the Japanese syllabary). Thirty-five children with difficulty reading hiragana were selected from among 367 first-grade elementary school students using a reading-aloud test and were then divided into intervention (n=15) and control (n=20) groups. The intervention comprised 5 minutes of decoding training each day for a period of 3 weeks using an original program on a personal computer. Reading time and number of reading errors in the reading-aloud test were compared between the groups. The intervention group showed a significant shortening of reading time (F(1,33)=5.40, p<.05), and a significant reduction in the number of reading errors was observed (t=2.863, p<.05). Decoding training was found to be effective for improving both reading time and reading errors in children with difficulty reading hiragana.
Real-time MPEG-1 software decoding on HP workstations
Bhaskaran, Vasudev; Konstantinides, Konstantinos; Lee, Ruby B.
1995-04-01
An MPEG-1 codec capable of real-time playback of video and audio on HP's RISC-based workstations has been developed. Real-time playback is achieved by examining the complete MPEG-1 decoding pipeline and optimizing the algorithms used for the various stages of the video and audio decompression process. For video decompression, efficient implementations are derived by examining the Huffman decoder, inverse quantizer and inverse DCT as a single system. For audio decompression, by viewing the subband synthesis function in MPEG-1 layer I and layer II decoding as a DCT, a fast 32-point DCT suitable for use in MPEG-1 audio decompression has been developed. Besides algorithmic enhancements, minor changes to the CPU architecture and the display pipeline architecture were needed to achieve real-time performance. The integration of algorithmic and architectural enhancements results in real-time playback of MPEG-1 video and audio on HP's RISC-based workstations.
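The key audio optimization, replacing a direct subband synthesis matrix product with a fast DCT, can be illustrated in miniature. This sketch does not reproduce the MPEG-1 matrixing coefficients; it only demonstrates that a direct 32-point DCT-II matrix product and a fast factored DCT agree, which is the substitution such an optimization relies on:

```python
import numpy as np
from scipy.fft import dct

N = 32
x = np.random.default_rng(0).normal(size=N)

# Direct O(N^2) DCT-II, written to match SciPy's unnormalized convention:
#   y[k] = 2 * sum_n x[n] * cos(pi * k * (2n + 1) / (2N))
n = np.arange(N)
k = n[:, None]
C = 2.0 * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
y_direct = C @ x

# Fast factored transform, O(N log N) operations instead of O(N^2).
y_fast = dct(x, type=2)
```

For a fixed small N = 32 the asymptotic gain matters less than the reduced multiply count per output block, which is what made software real-time decoding feasible on mid-1990s CPUs.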
Soft-decision Viterbi decoding with diversity combining
Sakai, T.; Kobayashi, K.; Kato, S.
1990-01-01
Diversity combining methods for convolutionally coded, soft-decision Viterbi decoded channels in mobile satellite communications systems are evaluated; computer simulation shows that pre-Viterbi-decoding maximal ratio combining performs better than the other methods in Rician fading channels. A novel practical technique for maximal ratio combining is proposed, in which the weighting coefficients are derived from the soft-decision demodulated signals only. The proposed diversity combining method with soft-decision Viterbi decoding requires simple hardware and shows satisfactory performance, with a slight degradation of 0.3 dB in Rician fading channels compared with an ideal weighting scheme. Furthermore, this diversity method is applied to trellis-coded modulation, and a significant improvement in error probability (Pe) performance is achieved.
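A minimal baseband sketch of maximal ratio combining on a BPSK stream, assuming known branch gains and noise levels (the paper's contribution, deriving the weights from the soft-decision demodulated signals only, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, n_branches = 5000, 2
bits = rng.integers(0, 2, n_bits)
symbols = 1.0 - 2.0 * bits                     # BPSK: 0 -> +1, 1 -> -1

# Each diversity branch sees an independent gain and noise level.
gains = np.array([1.0, 0.8])
sigmas = np.array([0.5, 0.5])
received = gains[:, None] * symbols + rng.normal(0, sigmas[:, None],
                                                 (n_branches, n_bits))

# Maximal ratio combining: weight each branch by gain / noise variance,
# then hand the combined soft value to the (soft-decision) decoder.
weights = gains / sigmas**2
combined = weights @ received
decided = (combined < 0).astype(int)

ber_mrc = np.mean(decided != bits)
ber_single = np.mean(((received[0] < 0).astype(int)) != bits)
```

Combining before the Viterbi decoder preserves soft information from both branches, which is why the pre-decoding placement evaluated in the paper outperforms post-decoding selection.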
Individual Differences in the Cognitive Processes of Reading: I. Word Decoding.
Stanovich, Keith E.
1982-01-01
The importance of word decoding in accounting for individual differences in reading comprehension is discussed. Research on individual differences in the cognitive processes that mediate word decoding is reviewed. (Author)
Joint Estimation and Decoding of Space-Time Trellis Codes
Directory of Open Access Journals (Sweden)
Zhang Jianqiu
2002-01-01
Full Text Available We explore the possibility of using an emerging tool in statistical signal processing, sequential importance sampling (SIS), for joint estimation and decoding of space-time trellis codes (STTC). First, we provide background on SIS, and then we discuss its application to space-time trellis code (STTC) systems. It is shown through simulations that SIS is suitable for joint estimation and decoding of STTC with time-varying flat-fading channels when phase ambiguity is avoided. To that end, we use a design criterion for STTCs and temporally correlated channels that combats phase ambiguity without pilot signaling, and we show by simulations that the design is valid.
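Sequential importance sampling itself can be sketched on a toy problem: tracking an AR(1) latent gain (a stand-in for a time-varying flat-fading coefficient) from noisy observations, with a bootstrap proposal and resampling. The model and parameters are illustrative, not those of the STTC system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent fading gain modeled as an AR(1) process, observed in noise.
T, n_particles = 100, 500
a, q, r = 0.95, 0.1, 0.3
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

particles = rng.normal(0, 1, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
estimates = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.normal(size=n_particles)  # prior proposal
    # importance weights: likelihood of the new observation under each particle
    weights *= np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    weights /= weights.sum()
    estimates[t] = weights @ particles
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights.fill(1.0 / n_particles)

rmse_sis = np.sqrt(np.mean((estimates - x) ** 2))
rmse_raw = np.sqrt(np.mean((y - x) ** 2))
```

In the joint estimation-and-decoding setting, the particle state would additionally carry trellis-path hypotheses, but the weight-update-and-resample skeleton is the same.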
Efficient decoding of random errors for quantum expander codes
Fawzi , Omar; Grospellier , Antoine; Leverrier , Anthony
2017-01-01
We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottes...
Melodic intonation therapy in the verbal decoding of aphasics.
Popovici, M
1995-01-01
Melodic Intonation Therapy (MIT) is a well-known method for exploring the verbal encoding of aphasics, but within this study it was used to investigate verbal decoding (auditory comprehension). Two separate groups of 240 cases each were investigated, the former with MIT and the latter with other therapy methods. Each group included three subgroups according to the three frequent types of aphasia (Wernicke, Broca and anomia). The method of semantic fields was additionally used to treat the second group, since it is the method usually applied in the treatment of aphasics with auditory decoding disturbances. All patients were tested twice, before and after therapy.
Bayesian adaptive methods for clinical trials
Berry, Scott M; Muller, Peter
2010-01-01
Already popular in the analysis of medical device trials, adaptive Bayesian designs are increasingly being used in drug development for a wide variety of diseases and conditions, from Alzheimer's disease and multiple sclerosis to obesity, diabetes, hepatitis C, and HIV. Written by leading pioneers of Bayesian clinical trial designs, Bayesian Adaptive Methods for Clinical Trials explores the growing role of Bayesian thinking in the rapidly changing world of clinical trial analysis. The book first summarizes the current state of clinical trial design and analysis and introduces the main ideas and potential benefits of a Bayesian alternative. It then gives an overview of basic Bayesian methodological and computational tools needed for Bayesian clinical trials. With a focus on Bayesian designs that achieve good power and Type I error, the next chapters present Bayesian tools useful in early (Phase I) and middle (Phase II) clinical trials as well as two recent Bayesian adaptive Phase II studies: the BATTLE and ISP...
Algebraic decoder specification: coupling formal-language theory and statistical machine translation
Büchse, Matthias
2015-01-01
The specification of a decoder, i.e., a program that translates sentences from one natural language into another, is an intricate process, driven by the application and lacking a canonical methodology. The practical nature of decoder development inhibits the transfer of knowledge between theory and application, which is unfortunate because many contemporary decoders are in fact related to formal-language theory. This thesis proposes an algebraic framework where a decoder is specified by an ex...
A bayesian approach to classification criteria for spectacled eiders
Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.
1996-01-01
To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize the errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield essentially the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.
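The core of such a Bayesian trend analysis, a posterior for the population growth rate plus a decision rule on the posterior probability of decline, can be sketched on synthetic survey indices. The data, flat prior, normal approximation and 0.95 threshold below are illustrative assumptions, not the recovery team's actual criteria:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey indices: a population declining ~5% per year,
# observed with lognormal survey error. The slope of the log-linear
# trend is the (log) population growth rate.
years = np.arange(15)
log_counts = np.log(1000.0) - 0.05 * years + rng.normal(0, 0.15, years.size)

# Posterior for the slope under a flat prior: ordinary least squares gives
# the posterior mean, and its standard error the posterior standard
# deviation (normal approximation).
X = np.column_stack([np.ones_like(years, dtype=float), years])
beta_hat, *_ = np.linalg.lstsq(X, log_counts, rcond=None)
resid = log_counts - X @ beta_hat
s2 = resid @ resid / (years.size - 2)
cov = s2 * np.linalg.inv(X.T @ X)
slope_mean, slope_sd = beta_hat[1], np.sqrt(cov[1, 1])

# Decision rule in the spirit of the paper: classify as at risk when the
# posterior probability of decline exceeds a preset threshold.
draws = rng.normal(slope_mean, slope_sd, 100_000)
p_decline = np.mean(draws < 0.0)
classify_endangered = p_decline > 0.95
```

Note how naturally the Bayesian output ("the probability the population is declining is p") maps onto a classification threshold, which is the explainability advantage the paper highlights over frequentist decision analysis.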
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on
47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.
2010-10-01
EMERGENCY ALERT SYSTEM (EAS), General, § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention... Attention Signal decoder will no longer be required and the two-tone Attention Signal will be used to...
Directory of Open Access Journals (Sweden)
Liu Weiliang
2007-01-01
Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.
Lazar, Aurel A; Slutskiy, Yevgeniy B
2015-02-01
We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies as well as students in advanced undergraduate statistics.. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
Bayesian image restoration, using configurations
Thorarinsdottir, Thordis
2006-01-01
In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...
Decoding English Alphabet Letters Using EEG Phase Information
Wang, YiYan; Wang, Pingxiao; Yu, Yuguo
2018-01-01
Increasing evidence indicates that the phase pattern and power of low-frequency oscillations in brain electroencephalograms (EEG) contain significant information during human cognition of sensory signals such as auditory and visual stimuli. Here, we investigate whether and how the letters of the alphabet can be directly decoded from EEG phase and power data. In addition, we investigate how different band oscillations contribute to the classification and determine the critical time periods. An English letter recognition task was assigned, and statistical analyses were conducted to decode the EEG signal corresponding to each letter visualized on a computer screen. We applied a support vector machine (SVM) trained with a gradient descent method to learn the potential features for classification. It was observed that the EEG phase signals have a higher decoding accuracy than the oscillation power information. Low-frequency theta and alpha oscillations have phase information with higher accuracy than do other bands. The decoding performance was best when the analysis period began 180 to 380 ms after stimulus presentation, especially in the lateral occipital and posterior temporal scalp regions (PO7 and PO8). These results may provide a new approach for brain-computer interface (BCI) techniques and may deepen our understanding of EEG oscillations in cognition. PMID:29467615
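A hedged sketch of the SVM decoding step on synthetic phase features; the class prototypes, von Mises noise and (cos, sin) phase encoding are illustrative assumptions, not the study's actual EEG pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the setup: each trial is a vector of
# low-frequency phase values; each "letter" class has its own preferred
# phase pattern plus circular (von Mises) noise.
n_classes, n_trials, n_channels = 4, 60, 16
prototypes = rng.uniform(-np.pi, np.pi, (n_classes, n_channels))
labels = np.repeat(np.arange(n_classes), n_trials)
phases = prototypes[labels] + rng.vonmises(0, 4.0, (labels.size, n_channels))

# Represent each phase as (cos, sin) so the classifier sees circular
# geometry rather than raw angles with a wrap-around discontinuity.
features = np.hstack([np.cos(phases), np.sin(phases)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, features, labels, cv=5).mean()
```

The (cos, sin) embedding is one standard way to feed phase data to a Euclidean classifier; comparing such phase features against band-power features is the kind of contrast the study draws.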
The Interactions of Vocabulary, Phonemic Awareness, Decoding, and Reading Comprehension
Carlson, Elaine; Jenkins, Frank; Li, Tiandong; Brownell, Mary
2013-01-01
The authors used data from a large, national sample to examine the interaction of various literacy measures among young children with disabilities. Using structural equation modeling, they examined the relationships among measures of phonemic awareness, decoding, vocabulary, and reading comprehension. Child and family factors, including sex,…
The Development of the Graphics-Decoding Proficiency Instrument
Lowrie, Tom; Diezmann, Carmel M.; Kay, Russell
2011-01-01
The graphics-decoding proficiency (G-DP) instrument was developed as a screening test for the purpose of measuring students' (aged 8-11 years) capacity to solve graphics-based mathematics tasks. These tasks include number lines, column graphs, maps and pie charts. The instrument was developed within a theoretical framework which highlights the…
Manchester encoder-decoder for optical data communication links.
Al-Sammak, A J; Al-Ruwaihi, K; Al-Kooheji, I
1992-04-20
A new encoder-decoder (CODEC) design of a Manchester coding scheme suitable for optical data communication links is presented. The design is simple and uses off-the-shelf digital electronic components and subsystems. The CODEC can be used for high data rate transmissions, typical of optical-fiber systems and local area networks. The decoder is insensitive to variations in the clock rates within the range of +/-33%, whereas the encoder, which is a simple XOR logic gate, is not affected by clock variations. During high-frequency operation (e.g., at 100 MHz), the CODEC can be operated over a wide range of frequencies (from 66.6 to 133.3 MHz) without modification to the CODEC circuitry. Furthermore, the CODEC can be made to operate at any data rate by a simple change of a single capacitor or a single resistor in the decoder circuit. The CODEC was built in the laboratory using transistor-transistor logic integrated circuits. It was experimentally found that with this decoder the transmitted data, as well as the clock, can be recovered from the Manchester coded signal without being affected by clock variations within the designed range.
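The encoder-as-XOR idea is easy to sketch in software. The chip ordering below (0 → 01, 1 → 10) is one common convention; the opposite mapping is equally valid, and this sketch ignores the analog clock-recovery behaviour the paper measures:

```python
def manchester_encode(bits, clock=(0, 1)):
    """Encode each data bit as data XOR clock over one bit period.
    With clock pattern (0, 1), a 0 becomes 01 and a 1 becomes 10."""
    return [b ^ c for b in bits for c in clock]

def manchester_decode(chips):
    """Recover data from half-bit samples; every valid bit period must
    contain a mid-bit transition, so equal halves flag a coding violation."""
    bits = []
    for first, second in zip(chips[::2], chips[1::2]):
        if first == second:
            raise ValueError("coding violation: no mid-bit transition")
        bits.append(first)  # with clock (0, 1): 01 -> 0, 10 -> 1
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
line = manchester_encode(data)
assert manchester_decode(line) == data
```

The guaranteed mid-bit transition is what makes the clock recoverable from the line signal itself, the property the hardware decoder exploits.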
Transfer of decoding skills in early foreign language reading
Ven, M.A.M. van de; Voeten, M.J.M.; Steenbeek-Planting, E.G.; Verhoeven, L.T.W.
2018-01-01
This longitudinal study investigated cross-linguistic transfer from native to foreign language decoding abilities in 787 Dutch first-year students in two different tracks (high vs. low) of secondary education. On two occasions, with a six-month interval, we tested the students' word and
Decoding by dynamic chunking for statistical machine translation
Yahyaei, S.; Monz, C.; Gerber, L.; Isabelle, P.; Kuhn, R.; Bemish, N.; Dillinger, M.; Goulet, M.-J.
2009-01-01
In this paper we present an extension of a phrase-based decoder that dynamically chunks, reorders, and applies phrase translations in tandem. A maximum entropy classifier is trained based on the word alignments to find the best positions to chunk the source sentence. No language specific or
A quantum algorithm for Viterbi decoding of classical convolutional codes
Grice, Jon R.; Meyer, David A.
2015-07-01
We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the frame length is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much less than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
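For contrast with the quantum variant, a classical hard-decision Viterbi decoder for a standard rate-1/2, constraint-length-3 convolutional code (generator polynomials 7 and 5 in octal) can be sketched as follows; the code choice is illustrative:

```python
def conv_encode(bits, state=0):
    """Rate-1/2, K=3 convolutional encoder; state holds the last two inputs."""
    out = []
    for b in bits:
        reg = (b << 2) | state                         # register [b, s1, s0]
        out.append(bin(reg & 0b111).count("1") % 2)    # g0 = 111 (7 octal)
        out.append(bin(reg & 0b101).count("1") % 2)    # g1 = 101 (5 octal)
        state = reg >> 1                               # shift in the new bit
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding over the 4-state trellis, using a
    Hamming branch metric and keeping one survivor path per state."""
    INF = float("inf")
    metrics, paths = {0: 0.0}, {0: []}
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metrics, new_paths = {}, {}
        for state, m in metrics.items():
            for b in (0, 1):                           # fanout: 2 transitions/state
                reg = (b << 2) | state
                expected = [bin(reg & 0b111).count("1") % 2,
                            bin(reg & 0b101).count("1") % 2]
                cand = m + sum(e != x for e, x in zip(expected, r))
                nxt = reg >> 1
                if cand < new_metrics.get(nxt, INF):
                    new_metrics[nxt] = cand
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return paths[best]
```

Each time step touches every state times the fanout; it is this fanout, rather than the full state count, that the quantum algorithm's cost depends on.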
EEG source imaging assists decoding in a face recognition task
DEFF Research Database (Denmark)
Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai
2017-01-01
of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...
Music models aberrant rule decoding and reward valuation in dementia.
Clark, Camilla N; Golden, Hannah L; McCallion, Oliver; Nicholas, Jennifer M; Cohen, Miriam H; Slattery, Catherine F; Paterson, Ross W; Fletcher, Phillip D; Mummery, Catherine J; Rohrer, Jonathan D; Crutch, Sebastian J; Warren, Jason D
2018-02-01
Aberrant rule- and reward-based processes underpin abnormalities of socio-emotional behaviour in major dementias. However, these processes remain poorly characterized. Here we used music to probe rule decoding and reward valuation in patients with frontotemporal dementia (FTD) syndromes and Alzheimer's disease (AD) relative to healthy age-matched individuals. We created short melodies that were either harmonically resolved ('finished') or unresolved ('unfinished'); the task was to classify each melody as finished or unfinished (rule processing) and rate its subjective pleasantness (reward valuation). Results were adjusted for elementary pitch and executive processing; neuroanatomical correlates were assessed using voxel-based morphometry. Relative to healthy older controls, patients with behavioural variant FTD showed impairments of both musical rule decoding and reward valuation, while patients with semantic dementia showed impaired reward valuation but intact rule decoding, patients with AD showed impaired rule decoding but intact reward valuation and patients with progressive non-fluent aphasia performed comparably to healthy controls. Grey matter associations with task performance were identified in anterior temporal, medial and lateral orbitofrontal cortices, previously implicated in computing diverse biological and non-biological rules and rewards. The processing of musical rules and reward distils cognitive and neuroanatomical mechanisms relevant to complex socio-emotional dysfunction in major dementias.
Name that tune: Decoding music from the listening brain
Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.
2011-01-01
In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven
Real Time Decoding of Color Symbol for Optical Positioning System
Directory of Open Access Journals (Sweden)
Abdul Waheed Malik
2015-01-01
Full Text Available This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor having a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
Adaptive Combined Source and Channel Decoding with Modulation ...
African Journals Online (AJOL)
In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. Huffman code is used as the source code and Convolutional code is used for error control. The adaptive scheme employs a family of Convolutional codes of different rates ...
Peeling Decoding of LDPC Codes with Applications in Compressed Sensing
Directory of Open Access Journals (Sweden)
Weijun Zeng
2016-01-01
Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis in the context of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: below this threshold, the recovery algorithm is successful; otherwise, it fails. Simulation results are also provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding algorithms, which focuses on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance with different parameters of measurement matrices and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
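The peeling idea itself can be sketched in a few lines. The toy recovery routine below assumes a nonnegative sparse signal and a 0/1 sparse measurement matrix (given as a list of index sets): it repeatedly resolves zero-valued measurements (all member variables must be zero) and degree-one checks (the single unknown equals the residual), mirroring the iterative algorithm analyzed above. The matrix, signal, and stopping rule are illustrative assumptions, not the paper's construction.

```python
def peeling_recover(A, y, max_iter=100):
    """Peeling recovery of a nonnegative sparse signal x from y = A @ x.

    A: measurement matrix given as a list of index sets (the 0/1 support
       of each row); y: measurement values. Nonnegativity of x is assumed,
       which justifies the zero-measurement shortcut below.
    """
    n = max(j for row in A for j in row) + 1
    x = [None] * n                       # None marks an unresolved element
    y = list(y)
    for _ in range(max_iter):
        progress = False
        for i, row in enumerate(A):
            unknown = [j for j in row if x[j] is None]
            if not unknown:
                continue
            if y[i] == 0:                # zero check: all members are zero
                for j in unknown:
                    x[j] = 0.0
                progress = True
            elif len(unknown) == 1:      # degree-one check: value determined
                j = unknown[0]
                x[j] = y[i]
                for i2, row2 in enumerate(A):
                    if j in row2:        # peel the resolved value off
                        y[i2] -= x[j]
                progress = True
        if not progress:
            break
    return [0.0 if v is None else v for v in x]
```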
Decoding the Disciplines: An Approach to Scientific Thinking
Pinnow, Eleni
2016-01-01
The Decoding the Disciplines methodology aims to teach students to think like experts in discipline-specific tasks. The central aspect of the methodology is to identify a bottleneck in the course content: a particular topic that a substantial number of students struggle to master. The current study compared the efficacy of standard lecture and…
Precoder and decoder prediction in time-varying MIMO channel
DEFF Research Database (Denmark)
Nguyen, Tuan Hung; Leus, Geert; Khaled, Nadia
2005-01-01
the performance of a prediction scheme for multiple input multiple output (MIMO) systems that apply spatial multiplexing. We aim at predicting the future precoder/decoder directly without going through the prediction of the channel matrix. The results show that in a slowly time varying channel an increase...
On Lattice Sequential Decoding for Large MIMO Systems
Ali, Konpal S.
2014-04-01
Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are reached at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed: 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing; 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes
Automatic detection and decoding of honey bee waggle dances.
Directory of Open Access Journals (Sweden)
Fernando Wario
Full Text Available The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer's movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (±7.37°), well within the range of human error. To evaluate and exemplify the system's performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance.
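The polar code of the dance, direction relative to the sun and distance proportional to waggle duration, can be sketched as a simple coordinate mapping. In the sketch below the duration-to-distance calibration constant is a placeholder (the true calibration is colony and site dependent), and the function name is hypothetical, not from the paper.

```python
import math

def dance_to_field(waggle_angle_deg, waggle_duration_s,
                   sun_azimuth_deg, meters_per_second=750.0):
    """Map a decoded waggle dance to field coordinates relative to the hive.

    waggle_angle_deg: waggle-run direction relative to vertical on the comb.
    sun_azimuth_deg:  compass bearing of the sun at dance time.
    meters_per_second: assumed duration-to-distance calibration; 750 m per
    second of waggling is a placeholder, not a measured value.
    Returns (east, north) displacement in meters.
    """
    # angle to vertical on the comb encodes angle to the sun's azimuth
    bearing = (sun_azimuth_deg + waggle_angle_deg) % 360.0
    distance = waggle_duration_s * meters_per_second
    theta = math.radians(90.0 - bearing)   # compass bearing -> math angle
    return distance * math.cos(theta), distance * math.sin(theta)
```

For example, a vertical waggle run (0°) with the sun due south points the foragers straight toward the sun's azimuth.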
Decoding bipedal locomotion from the rat sensorimotor cortex
Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.
2015-10-01
Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds
Volitional and Real-Time Control Cursor Based on Eye Movement Decoding Using a Linear Decoding Model
Directory of Open Access Journals (Sweden)
Jinhua Zhang
2016-01-01
Full Text Available The aim of this study is to build a linear decoding model that reveals the relationship between the movement information and the EOG (electrooculogram) data to online control a cursor continuously with blinks and eye pursuit movements. First of all, a blink detection method is proposed to reject voluntary single-blink or double-blink information from the EOG. Then, a linear decoding model of time series is developed to predict the position of gaze, and the model parameters are calibrated by the RLS (Recursive Least Squares) algorithm; besides, decoding accuracy is assessed through a cross-validation procedure. Additionally, subsection processing, increment control, and online calibration are presented to realize the online control. Finally, the technology is applied to the volitional and online control of a cursor to hit multiple predefined targets. Experimental results show that the blink detection algorithm performs well, with a voluntary blink detection rate over 95%. Through combining the merits of blinks and smooth pursuit movements, the movement information of the eyes can be decoded in good conformity, with an average Pearson correlation coefficient of up to 0.9592, and all signal-to-noise ratios greater than 0. The novel system allows people to successfully and economically control a cursor online with a hit rate of 98%.
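A linear decoding model calibrated by RLS, as described above, boils down to a simple recursive update of a weight vector and an inverse-correlation matrix. The sketch below shows one generic RLS step with a forgetting factor; the variable names and the synthetic calibration loop in the usage test are illustrative, not the paper's implementation.

```python
import numpy as np

def rls_update(w, P, x, d, lam=1.0):
    """One Recursive Least Squares step for a linear model d ≈ w @ x.

    w:   current weight vector
    P:   current inverse-correlation matrix estimate
    x:   input feature vector (e.g., recent EOG samples)
    d:   desired output (e.g., gaze position)
    lam: forgetting factor in (0, 1]; lam = 1 gives ordinary least squares
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori prediction error
    w_new = w + k * e
    P_new = (P - np.outer(k, Px)) / lam
    return w_new, P_new
```

Initializing P to a large multiple of the identity makes the first few updates behave like a weak ridge prior; with noiseless data and lam = 1, the weights converge to the least-squares solution.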
Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.
Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup
2011-09-01
The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes named 'multi-functional CRM', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.
On the prior probabilities for two-stage Bayesian estimates
International Nuclear Information System (INIS)
Kohut, P.
1992-01-01
The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique
A Bayesian approach to combining animal abundance and demographic data
Directory of Open Access Journals (Sweden)
Brooks, S. P.
2004-06-01
Full Text Available In studies of wild animals, one frequently encounters both count and mark-recapture-recovery data. Here, we consider an integrated Bayesian analysis of ring-recovery and count data using a state-space model. We then impose a Leslie-matrix-based model on the true population counts describing the natural birth-death and age transition processes. We focus upon the analysis of both count and recovery data collected on British lapwings (Vanellus vanellus) combined with records of the number of frost days each winter. We demonstrate how the combined analysis of these data provides a more robust inferential framework and discuss how the Bayesian approach using MCMC allows us to remove the potentially restrictive normality assumptions commonly assumed for analyses of this sort. It is shown how WinBUGS may be used to perform the Bayesian analysis. WinBUGS code is provided and its performance is critically discussed.
Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.
Mollica, Cristina; Tardella, Luca
2017-06-01
The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
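The Plackett-Luce model at the core of this mixture has a simple sequential-choice likelihood: each rank position is filled by choosing among the remaining items with probability proportional to their positive worth parameters. A minimal sketch (the item worths in the test are illustrative):

```python
import math

def plackett_luce_logprob(ranking, support):
    """Log-probability of a full ranking under the Plackett-Luce model.

    ranking: items ordered from most to least preferred.
    support: dict mapping each item to its positive worth parameter.
    The ranking is built sequentially: each position is a choice among the
    remaining items with probability proportional to its worth.
    """
    logp = 0.0
    remaining = list(ranking)
    for item in ranking:
        total = sum(support[j] for j in remaining)
        logp += math.log(support[item] / total)
        remaining.remove(item)
    return logp
```

With equal worths the model reduces to the uniform distribution over permutations, and the probabilities of all full rankings always sum to one.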
Design of FBG En/decoders in Coherent 2-D Time-polarization OCDMA Systems
Hou, Fen-fei; Yang, Ming
2012-12-01
A novel fiber Bragg grating (FBG)-based en/decoder for the two-dimensional (2-D) time-spreading and polarization multiplexer optical coding is proposed. Compared with other 2-D en/decoders, the proposed en/decoding for an optical code-division multiple-access (OCDMA) system uses a single phase-encoded FBG and coherent en/decoding. Furthermore, combined with reconstruction-equivalent-chirp technology, such en/decoders can be realized with a conventional simple fabrication setup. Experimental results of such en/decoders and the corresponding system test at a data rate of 5 Gbit/s demonstrate that this kind of 2-D FBG-based en/decoders could improve the performances of OCDMA systems.
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
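The explicit Gaussian posterior underlying this inversion is the standard conjugate result for a linear Gaussian model. The sketch below computes the posterior mean and covariance for a generic model d = Gm + e with prior m ~ N(mu_m, Sigma_m) and noise e ~ N(0, Sigma_e); G here is a generic placeholder, not the AVO-specific wavelet/reflectivity operator, and the dense-inverse formulation is for clarity rather than efficiency.

```python
import numpy as np

def gaussian_posterior(G, d, mu_m, Sigma_m, Sigma_e):
    """Closed-form posterior for the linear Gaussian model d = G m + e.

    Returns the posterior mean and covariance of m, from which exact
    prediction intervals for the inverted parameters can be computed.
    """
    Sigma_e_inv = np.linalg.inv(Sigma_e)
    Sigma_m_inv = np.linalg.inv(Sigma_m)
    post_cov = np.linalg.inv(Sigma_m_inv + G.T @ Sigma_e_inv @ G)
    post_mean = post_cov @ (Sigma_m_inv @ mu_m + G.T @ Sigma_e_inv @ d)
    return post_mean, post_cov
```

As the noise covariance shrinks toward zero, the posterior mean approaches the data-consistent model, matching the abstract's observation that parameters are almost perfectly retrieved in the noise-free limit.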
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
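The "default method" that BMD is compared against, thresholding a robustly normalized eye velocity, can be sketched as follows. The median-based scale estimate and the threshold multiplier are illustrative choices in the spirit of common velocity-threshold detectors; this is the baseline, not BMD itself.

```python
import numpy as np

def detect_microsaccades(pos, dt, threshold_factor=6.0):
    """Velocity-threshold detection of candidate microsaccade samples.

    pos: (T, 2) array of eye positions; dt: sample interval in seconds.
    Flags samples whose 2-D velocity exceeds an elliptic threshold of
    threshold_factor times a median-based (outlier-robust) scale estimate.
    """
    vel = np.gradient(pos, dt, axis=0)
    # robust per-axis scale of the velocity distribution
    med = np.median(vel, axis=0)
    sigma = np.sqrt(np.maximum(np.median(vel**2, axis=0) - med**2, 1e-24))
    # normalized squared radius > 1 means outside the threshold ellipse
    r2 = ((vel / (threshold_factor * sigma)) ** 2).sum(axis=1)
    return r2 > 1.0
```

Because the scale estimate uses medians, a handful of high-velocity samples barely shifts the threshold, which is what makes this baseline usable on noisy trackers.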
Decoding the dynamic representation of musical pitch from human brain activity.
Sankaran, N; Thompson, W F; Carlile, S; Carlson, T A
2018-01-16
In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using Magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate Pattern Analysis (MVPA) was applied to "decode" the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain's representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to their differences in perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, the Bayesian approach is known to require high computational cost for high-dimensional data and large numbers of samples, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy-Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves noise-reduction ability even in high-dimensional spaces. The simulation experiments show that KBA exhibits better self-organizing capability than BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
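The Correntropy-Induced Metric mentioned above is easy to compute. The sketch below uses an unnormalized Gaussian kernel, a common simplification (conventions vary across papers, so the constant factor is an assumption); note how the metric saturates for large differences, which is the source of correntropy's robustness to outliers.

```python
import math

def cim(x, y, sigma=1.0):
    """Correntropy-Induced Metric between two equal-length vectors.

    Uses the Gaussian kernel kappa(e) = exp(-e^2 / (2 sigma^2)) without a
    normalizing constant. CIM behaves like a Euclidean distance for small
    differences but saturates (toward 1 here) for large ones.
    """
    assert len(x) == len(y)
    corr = sum(math.exp(-(a - b) ** 2 / (2 * sigma ** 2))
               for a, b in zip(x, y)) / len(x)
    return math.sqrt(1.0 - corr)
```

A single wildly outlying component therefore contributes a bounded amount to the distance, unlike in the squared Euclidean metric.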
Bayesian analysis of CCDM models
Energy Technology Data Exchange (ETDEWEB)
Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fit or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
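The AIC and BIC used for this model comparison are one-line formulas over the maximized log-likelihood; lower values are better, and BIC penalizes extra parameters more heavily as the sample grows. A minimal sketch (the log-likelihoods and parameter counts in the test are made up for illustration):

```python
import math

def aic(loglike, k):
    """Akaike Information Criterion: 2k - 2 ln(L_max)."""
    return 2 * k - 2 * loglike

def bic(loglike, k, n):
    """Bayesian Information Criterion: k ln(n) - 2 ln(L_max)."""
    return k * math.log(n) - 2 * loglike
```

A simpler model with a slightly worse fit can still win on both criteria, which is exactly the "penalizing excess complexity" behavior the abstract describes.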
Bayesian modeling using WinBUGS
Ntzoufras, Ioannis
2009-01-01
A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS as well as R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis Linda
2006-01-01
In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary...... configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...
Bayesian variable selection in regression
Energy Technology Data Exchange (ETDEWEB)
Mitchell, T.J.; Beauchamp, J.J.
1987-01-01
This paper is concerned with the selection of subsets of "predictor" variables in a linear regression model for the prediction of a "dependent" variable. We take a Bayesian approach and assign a probability distribution to the dependent variable through a specification of prior distributions for the unknown parameters in the regression model. The appropriate posterior probabilities are derived for each submodel and methods are proposed for evaluating the family of prior distributions. Examples are given that show the application of the Bayesian methodology. 23 refs., 3 figs.
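Mitchell and Beauchamp derive posterior probabilities for each submodel from explicit prior distributions. As a rough stand-in for that machinery, the sketch below enumerates predictor subsets and converts a BIC score into approximate posterior model probabilities under a uniform model prior; the BIC approximation and all names are our own assumptions, not the paper's spike-and-slab prior.

```python
import itertools, math
import numpy as np

def submodel_posteriors(X, y, names):
    """Score every predictor subset with a BIC-based approximation to its
    marginal likelihood (a crude stand-in for conjugate-prior marginals)
    and normalize to posterior model probabilities (uniform model prior)."""
    n = len(y)
    scores = {}
    for k in range(len(names) + 1):
        for subset in itertools.combinations(range(len(names)), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = float(np.sum((y - Z @ beta) ** 2))
            bic = n * math.log(rss / n) + Z.shape[1] * math.log(n)
            scores[tuple(names[j] for j in subset)] = -0.5 * bic
    m = max(scores.values())
    Z0 = sum(math.exp(s - m) for s in scores.values())
    return {k: math.exp(s - m) / Z0 for k, s in scores.items()}

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)   # only x1 matters
post = submodel_posteriors(X, y, ["x1", "x2", "x3"])
best = max(post, key=post.get)   # highest-posterior subset; should contain "x1"
```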
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Bayesian methods for proteomic biomarker development
Directory of Open Access Journals (Sweden)
Belinda Hernández
2015-12-01
In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
Decoding Signal Processing at the Single-Cell Level
Energy Technology Data Exchange (ETDEWEB)
Wiley, H. Steven
2017-12-01
The ability of cells to detect and decode information about their extracellular environment is critical to generating an appropriate response. In multicellular organisms, cells must decode dozens of signals from their neighbors and extracellular matrix to maintain tissue homeostasis while still responding to environmental stressors. How cells detect and process information from their surroundings through a surprisingly limited number of signal transduction pathways is one of the most important questions in biology. Despite many decades of research, many of the fundamental principles that underlie cell signal processing remain obscure. However, in this issue of Cell Systems, Gillies et al. present compelling evidence that the early response gene circuit can act as a linear signal integrator, thus providing significant insight into how cells handle fluctuating signals and noise in their environment.
Reaction Decoder Tool (RDT): extracting features from chemical reactions.
Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M
2016-07-01
Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool which performs this task with high accuracy. It supports standard chemical input/output exchange formats, i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids the analysis of metabolic pathways and enables comparative studies of chemical reactions based on these features. The software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder. Contact: asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.
DECODING OF ACADEMIC CONTENT BY THE 1st GRADE STUDENTS
Directory of Open Access Journals (Sweden)
Kamil Błaszczyński
2017-07-01
Full Text Available In the paper, a comparative study conducted on 1st grade students of sociology and pedagogy is discussed. The study focused on the language skills of students, the most important being the ability to decode academic content. The study shows that the students have very poor skills in decoding academic content at every level of its complexity, and that they have noticeable problems defining basic academic terms. The significance of the obtained results is high because of the innovative topic and character of the study, the first of its kind conducted on students of a Polish university. The results are also valuable for academic teachers interested in problems such as effective communication with students.
Bayesian variable order Markov models: Towards Bayesian predictive state representations
Dimitrakakis, C.
2009-01-01
We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more
The humble Bayesian : Model checking from a fully Bayesian perspective
Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N.
Gelman and Shalizi (2012) criticize what they call the usual story in Bayesian statistics: that the distribution over hypotheses or models is the sole means of statistical inference, thus excluding model checking and revision, and that inference is inductivist rather than deductivist. They present
Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel
Abediseid, Walid
2013-06-01
In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter - the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.
Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi
2017-12-01
We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a "hard decision decoder" which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
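The "standard hard decision decoder" referred to here picks the codeword at minimum Hamming distance from the hard-detected word. A toy illustration of that metric (our own, using a repetition code rather than the staircase codes of the paper):

```python
def hamming(a, b):
    """Hamming distance: number of positions in which two words differ."""
    return sum(x != y for x, y in zip(a, b))

def hd_decode(r, codebook):
    """Standard hard decision decoding: return the codeword at minimum
    Hamming distance from the hard-detected word r."""
    return min(codebook, key=lambda c: hamming(r, c))

# (3,1) repetition code: any single bit flip is corrected.
codebook = [(0, 0, 0), (1, 1, 1)]
assert hd_decode((1, 0, 1), codebook) == (1, 1, 1)
```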
STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2012-03-01
Full Text Available The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied for decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.
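The stack search over a code tree can be sketched for a small systematic block code. The toy Python version below uses a (7,4) Hamming code rather than the paper's (8,4)/(16,11) extended Hamming codes, and a biased BPSK-correlation path metric as a simplification of the Fano metric; the generator matrix, bias value, and all names are our own choices.

```python
import heapq
import numpy as np

# Systematic (7,4) Hamming code, G = [I | P]; codeword c = u G (mod 2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def stack_decode(r, G, bias=0.3):
    """Toy stack (ZJ) search on the code tree of a systematic block code.
    Paths are prefixes of information bits; the metric is soft BPSK
    correlation minus a per-symbol bias so that paths of unequal depth
    can be compared (a simplification of the Fano metric)."""
    k, n = G.shape
    heap = [(-0.0, ())]                  # (negated metric, info-bit prefix)
    while heap:
        neg, u = heapq.heappop(heap)
        if len(u) == k:                  # first full path popped wins
            return np.array(u)
        for b in (0, 1):
            u2 = u + (b,)
            c = (np.array(u2) @ G[:len(u2)]) % 2
            # Parity symbols are only final once all k info bits are fixed;
            # before that, score only the systematic prefix.
            depth = len(u2) if len(u2) < k else n
            s = 1 - 2 * c[:depth]        # BPSK: bit 0 -> +1, bit 1 -> -1
            metric = float(np.dot(r[:depth], s)) - bias * depth
            heapq.heappush(heap, (-metric, u2))
    return None

u = np.array([1, 0, 1, 1])
tx = 1.0 - 2.0 * ((u @ G) % 2)           # noiseless BPSK channel output
decoded = stack_decode(tx, G)            # recovers u
```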
Hierarchical multi-resolution mesh networks for brain decoding.
Onal Ertugrul, Itir; Ozay, Mete; Yarman Vural, Fatos T
2017-10-04
The human brain is thought to process information in multiple frequency bands. Therefore, we can extract diverse information from functional Magnetic Resonance Imaging (fMRI) data by processing it at multiple resolutions. We propose a framework, called Hierarchical Multi-resolution Mesh Networks (HMMNs), which establishes a set of brain networks at multiple resolutions of the fMRI signal to represent the underlying cognitive process. Our framework first decomposes the fMRI signal into various frequency subbands using the wavelet transform. Then, a brain network is formed at each subband by ensembling a set of local meshes. Arc weights of each local mesh are estimated by ridge regression. Finally, adjacency matrices of mesh networks obtained at different subbands are used to train classifiers in an ensemble learning architecture, called fuzzy stacked generalization (FSG). Our decoding performances on the Human Connectome Project task-fMRI dataset show that HMMNs can successfully discriminate tasks with 99% accuracy, across 808 subjects. The diversity of information embedded in the mesh networks of multiple subbands enables the ensemble of classifiers to collaborate with each other for brain decoding. The suggested HMMNs decode the cognitive tasks better than a single classifier applied to any subband. Mesh networks also have a better representation power compared to pairwise correlations or average voxel time series. Moreover, fusion of diverse information using FSG outperforms fusion with majority voting. We conclude that fMRI data, recorded during a cognitive task, provide diverse information in multi-resolution mesh networks. Our framework fuses this complementary information and boosts the brain decoding performances obtained at individual subbands.
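The arc-weight step (a node's time series regressed on its neighbors' with a ridge penalty) has a closed form. A minimal sketch on synthetic data; the function name, regularization value, and data are our own assumptions, not the paper's pipeline.

```python
import numpy as np

def mesh_arc_weights(node_ts, neighbor_ts, lam=1.0):
    """Ridge-regression arc weights for one local mesh: express a node's
    time series as a regularized linear combination of its neighbors',
    via the closed form w = (A^T A + lam I)^{-1} A^T y."""
    A = neighbor_ts.T                     # shape (time, n_neighbors)
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ node_ts)

rng = np.random.default_rng(3)
neigh = rng.normal(size=(4, 120))         # 4 neighbors, 120 time points
node = 0.8 * neigh[0] - 0.5 * neigh[2] + 0.05 * rng.normal(size=120)
w = mesh_arc_weights(node, neigh, lam=0.1)
# w should be close to [0.8, 0, -0.5, 0]
```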
A Novel Modified Algorithm with Reduced Complexity LDPC Code Decoder
Song Yang; Bao Nanhai; Cai Chaoshi
2014-01-01
A novel efficient decoding algorithm that reduces the complexity of the sum-product algorithm (SPA) for LDPC codes is proposed. Based on the hyperbolic tangent rule, the check node update is modified into two horizontal processes with similar calculations. Motivated by the finding that the min-sum (MS) algorithm reduces complexity by approximating the horizontal process, the method simplifies the part of the update carrying little information weight. Compared with the existing approximations, the proposed method is less co...
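The check node update at the heart of this trade-off can be written down directly. Below is a minimal sketch (our own illustration, not the paper's modified algorithm) of the exact tanh-rule SPA update next to its min-sum approximation, on log-likelihood ratios:

```python
import math

def spa_check(llrs):
    """Exact sum-product (tanh rule) check node update:
    tanh(L_out / 2) = prod_i tanh(L_i / 2)."""
    p = 1.0
    for L in llrs:
        p *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(p)

def minsum_check(llrs):
    """Min-sum approximation: magnitude = min |L_i|, sign = product of signs."""
    sign = 1.0
    for L in llrs:
        sign *= 1.0 if L >= 0 else -1.0
    return sign * min(abs(L) for L in llrs)

incoming = [2.5, -1.2, 4.0]
exact = spa_check(incoming)      # ≈ -0.94
approx = minsum_check(incoming)  # = -1.2: min-sum overestimates the magnitude
```

The min-sum output needs no transcendental functions, which is exactly the complexity saving such modified algorithms exploit, at the cost of an optimistic magnitude.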
Decoding material-specific memory reprocessing during sleep in humans
Schönauer, M.; Alizadeh, S.; Jamalabadi, H.; Abraham, A.; Pawlizki, A.; Gais, S.
2017-01-01
Neuronal learning activity is reactivated during sleep but the dynamics of this reactivation in humans are still poorly understood. Here we use multivariate pattern classification to decode electrical brain activity during sleep and determine what type of images participants had viewed in a preceding learning session. We find significant patterns of learning-related processing during rapid eye movement (REM) and non-REM (NREM) sleep, which are generalizable across subjects. This processing oc...
Design and Implementation of Viterbi Decoder Using VHDL
Thakur, Akash; Chattopadhyay, Manju K.
2018-03-01
A digital design conversion of Viterbi decoder for ½ rate convolutional encoder with constraint length k = 3 is presented in this paper. The design is coded with the help of VHDL, simulated and synthesized using XILINX ISE 14.7. Synthesis results show a maximum frequency of operation for the design is 100.725 MHz. The requirement of memory is less as compared to conventional method.
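The same rate-1/2, K = 3 decoder can be sketched in software to make the trellis search concrete. The Python version below (our own illustration, not the paper's VHDL design) assumes the common generator pair 7/5 octal and hard-decision Hamming metrics:

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length K=3
    (generators 7 and 5 in octal)."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & gi).count("1") % 2 for gi in g]
        state = reg >> 1
    return out

def viterbi_decode(rx, nbits, g=(0b111, 0b101)):
    """Hard-decision Viterbi over the 4-state trellis; path metric is the
    accumulated Hamming distance to the received bits."""
    INF = float("inf")
    metric, paths = {0: 0}, {0: []}
    for t in range(nbits):
        new_metric, new_paths = {}, {}
        for s, m in metric.items():
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & gi).count("1") % 2 for gi in g]
                ns = reg >> 1
                d = m + sum(e != r for e, r in zip(exp, rx[2 * t:2 * t + 2]))
                if d < new_metric.get(ns, INF):
                    new_metric[ns], new_paths[ns] = d, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

msg = [1, 0, 1, 1, 0, 0]          # two tail zeros flush the encoder
tx = conv_encode(msg)
rx = list(tx)
rx[3] ^= 1                        # one channel bit error
decoded = viterbi_decode(rx, len(msg))
# decoded == msg: the single error is corrected (free distance 5)
```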
Approximate MAP Decoding on Tail-Biting Trellises
Madhu, A. S.; Shankar, Priti
2005-01-01
We propose two approximate algorithms for MAP decoding on tail-biting trellises. The algorithms work on a judiciously selected subset of nodes of the tail-biting trellis. We report the results of simulations on an AWGN channel using the approximate algorithms on tail-biting trellises for the (24, 12) extended Golay code and a rate-1/2 convolutional code with memory 6.
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Bayesian models in cognitive neuroscience: A tutorial
O'Reilly, J.X.; Mars, R.B.
2015-01-01
This chapter provides an introduction to Bayesian models and their application in cognitive neuroscience. The central feature of Bayesian models, as opposed to other classes of models, is that Bayesian models represent the beliefs of an observer as probability distributions, allowing them to
A Bayesian framework for risk perception
van Erp, H.R.N.
2017-01-01
We present here a Bayesian framework of risk perception. This framework encompasses plausibility judgments, decision making, and question asking. Plausibility judgments are modeled by way of Bayesian probability theory, decision making is modeled by way of a Bayesian decision theory, and relevancy
Bayesian inference from count data using discrete uniform priors.
Comoglio, Federico; Fracchia, Letizia; Rinaldi, Maurizio
2013-01-01
We consider a set of sample counts obtained by sampling arbitrary fractions of a finite volume containing a homogeneously dispersed population of identical objects. We report a Bayesian derivation of the posterior probability distribution of the population size using a binomial likelihood and non-conjugate, discrete uniform priors under sampling with or without replacement. Our derivation yields a computationally feasible formula that can prove useful in a variety of statistical problems involving absolute quantification under uncertainty. We implemented our algorithm in the R package dupiR and compared it with a previously proposed Bayesian method based on a Gamma prior. As a showcase, we demonstrate that our inference framework can be used to estimate bacterial survival curves from measurements characterized by extremely low or zero counts and rather high sampling fractions. All in all, we provide a versatile, general purpose algorithm to infer population sizes from count data, which can find application in a broad spectrum of biological and physical problems.
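The with-replacement core of this setting fits in a few lines: a binomial likelihood Binom(k; N, f) for the observed count k when a fraction f of the volume is sampled, and a discrete uniform prior on the population size N. The sketch below is our own toy version of that idea, not the dupiR implementation; the truncation bound n_max is our choice.

```python
from math import comb

def posterior_population(k, f, n_max):
    """Posterior over population size N given count k from sampling a
    fraction f of the volume: binomial likelihood Binom(k; N, f) times a
    discrete uniform prior on {0, ..., n_max}, normalized."""
    like = [comb(N, k) * f**k * (1 - f) ** (N - k) if N >= k else 0.0
            for N in range(n_max + 1)]
    Z = sum(like)
    return [L / Z for L in like]

post = posterior_population(k=3, f=0.1, n_max=200)
mean_N = sum(N * p for N, p in enumerate(post))
mode_N = max(range(len(post)), key=post.__getitem__)
# Untruncated posterior mean is k + (k + 1)(1 - f) / f = 39 here.
```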
Decoding Visual Object Categories in Early Somatosensory Cortex
Smith, Fraser W.; Goodale, Melvyn A.
2015-01-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
Local Mode Analysis: Decoding IR Spectra by Visualizing Molecular Details.
Massarczyk, M; Rudack, T; Schlitter, J; Kuhne, J; Kötting, C; Gerwert, K
2017-04-20
Integration of experimental and computational approaches to investigate chemical reactions in proteins has proven to be very successful. Experimentally, time-resolved FTIR difference-spectroscopy monitors chemical reactions at atomic detail. To decode detailed structural information encoded in IR spectra, QM/MM calculations are performed. Here, we present a novel method which we call local mode analysis (LMA) for calculating IR spectra and assigning spectral IR-bands on the basis of movements of nuclei and partial charges from just a single QM/MM trajectory. Through LMA the decoding of IR spectra no longer requires several simulations or optimizations. The novel approach correlates the motions of atoms of a single simulation with the corresponding IR bands and provides direct access to the structural information encoded in IR spectra. Either the contributions of a particular atom or atom group to the complete IR spectrum of the molecule are visualized, or an IR-band is selected to visualize the corresponding structural motions. Thus, LMA decodes the detailed information contained in IR spectra and provides an intuitive approach for structural biologists and biochemists. The unique feature of LMA is the bidirectional analysis connecting structural details to spectral features and vice versa spectral features to molecular motions.
Robustifying Bayesian nonparametric mixtures for count data.
Canale, Antonio; Prünster, Igor
2017-03-01
Our motivating application stems from surveys of natural populations and is characterized by large spatial heterogeneity in the counts, which makes parametric approaches to modeling local animal abundance too restrictive. We adopt a Bayesian nonparametric approach based on mixture models and innovate with respect to the popular Dirichlet process mixture of Poisson kernels by increasing model flexibility at the level of both the kernel and the nonparametric mixing measure. This allows us to derive accurate and robust estimates of the distribution of local animal abundance and of the corresponding clusters. The application and a simulation study for different scenarios also yield some general methodological implications. Adding flexibility solely at the level of the mixing measure does not improve inferences, since its impact is severely limited by the rigidity of the Poisson kernel, with considerable consequences in terms of bias. However, once a kernel more flexible than the Poisson is chosen, inferences can be robustified by choosing a prior more general than the Dirichlet process. Therefore, to improve the performance of Bayesian nonparametric mixtures for count data one has to enrich the model simultaneously at both levels, the kernel and the mixing measure. © 2016, The International Biometric Society.
Bayesian automated cortical segmentation for neonatal MRI
Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha
2017-11-01
Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting whole-brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows a promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, further improving the reliability and processing time of neonatal MR images. Future improvements will include a larger dataset of training images acquired from different manufacturers.
Decoding noises in HIV computational genotyping.
Jia, MingRui; Shaw, Timothy; Zhang, Xing; Liu, Dong; Shen, Ye; Ezeamama, Amara E; Yang, Chunfu; Zhang, Ming
2017-11-01
Lack of a consistent and reliable genotyping system can critically impede HIV genomic research on pathogenesis, fitness, virulence, drug resistance, and genomic-based healthcare and treatment. At present, mis-genotyping, i.e., background noises in molecular genotyping, and its impact on epidemic surveillance is unknown. For the first time, we present a comprehensive assessment of HIV genotyping quality. HIV sequence data were retrieved from worldwide published records, and subjected to a systematic genotyping assessment pipeline. Results showed that mis-genotyped cases occurred at 4.6% globally, with some regional and high-risk population heterogeneities. Results also revealed a consistent mis-genotyping pattern in gp120 in all studied populations except the group of men who have sex with men. Our study also suggests novel virus diversities in the mis-genotyped cases. Finally, this study reemphasizes the importance of implementing a standardized genotyping pipeline to avoid genotyping disparity and to advance our understanding of virus evolution in various epidemiological settings. Copyright © 2017 Elsevier Inc. All rights reserved.
International Planned Parenthood Federation, London (England).
In an effort to help meet the growing interest and concern about the problems created by the rapid growth of population, The International Planned Parenthood Federation has prepared this booklet with the aim of assisting the study of the history and future trends of population growth and its impact on individual and family welfare, national,…
Differentiated Bayesian Conjoint Choice Designs
Z. Sándor (Zsolt); M. Wedel (Michel)
2003-01-01
Previous conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about
Bayesian networks in levee reliability
Roscoe, K.; Hanea, A.
2015-01-01
We applied a Bayesian network to a system of levees for which the results of traditional reliability analysis showed high failure probabilities, which conflicted with the intuition and experience of those managing the levees. We made use of forty proven strength observations - high water levels with
Bayesian Classification of Image Structures
DEFF Research Database (Denmark)
Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert
2009-01-01
In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensionality. The built classifier is tested on standard and non-standard images...
Computational Neuropsychology and Bayesian Inference.
Parr, Thomas; Rees, Geraint; Friston, Karl J
2018-01-01
Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
Bayesian Alternation During Tactile Augmentation
Directory of Open Access Journals (Sweden)
Caspar Mathias Goeke
2016-10-01
Full Text Available A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study we built a tactile augmentation device. We then compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the higher angular rotation. For the analysis, we fitted the participants' responses with a probit model and calculated the just noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²red = 1.67) than the Bayesian integration model (χ²red = 4.34). Slightly higher accuracy was shown by a non-Bayesian winner-takes-all model (χ²red = 1.64), which used either only native or only augmented values per subject for prediction. However, the performance of the Bayesian alternation model could be substantially improved (χ²red = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in
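The competing predictions being compared here (integration vs. alternation of cues) reduce to simple formulas for the bimodal JND. A minimal sketch with hypothetical unimodal JNDs; the numbers and names are our own, not the study's data:

```python
import math

def bimodal_jnd_integration(jnd_a, jnd_b):
    """Bayesian (maximum-likelihood) cue integration: precisions add,
    1/sigma_bi^2 = 1/sigma_a^2 + 1/sigma_b^2, and JND scales with sigma."""
    return (jnd_a ** -2 + jnd_b ** -2) ** -0.5

def bimodal_jnd_alternation(jnd_a, jnd_b, w=0.5):
    """Alternation model: on each trial only one cue is used, so responses
    follow a mixture whose variance (for equal means) is the weighted mean
    of the unimodal variances."""
    return math.sqrt(w * jnd_a ** 2 + (1 - w) * jnd_b ** 2)

# Hypothetical unimodal JNDs for the native (rotation) and augmented
# (tactile) cues:
native, augmented = 4.0, 6.0
print(bimodal_jnd_integration(native, augmented))   # ≈ 3.33, below both cues
print(bimodal_jnd_alternation(native, augmented))   # ≈ 5.10, between the two
```

The signature of true integration is a bimodal JND below the better unimodal one; alternation predicts a value between the two, which is what distinguishes the models empirically.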
Hellmuth, O.; Falch, C.; Herre, J.; Hilpert, J.; Ridderbusch, F.; Terentiev, L.
2010-01-01
An audio signal decoder for providing an upmix signal representation in dependence on a downmix signal representation and an object-related parametric information comprises an object separator configured to decompose the downmix signal representation, to provide a first audio information describing a first set of one or more audio objects of a first audio object type and a second audio information describing a second set of one or more audio objects of a second audio object type, in dependenc...
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
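The maximum entropy principle reviewed above can be illustrated with Jaynes' classic dice problem: among all distributions on the faces {1, …, 6} with a prescribed mean, the maximum-entropy distribution is exponential in the face value, p_i ∝ exp(λ·i), with λ fixed by the mean constraint. A minimal sketch (not from the paper), solving for λ by bisection:

```python
import math

def maxent_dice(target_mean, faces=6, tol=1e-10):
    """Maximum-entropy distribution on {1..faces} with a fixed mean:
    p_i ∝ exp(lam * i), with lam chosen so the mean matches."""
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z
    lo, hi = -10.0, 10.0          # mean(lam) is monotone increasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_dice(4.5)
print([round(x, 3) for x in p])   # tilted toward high faces; mean ≈ 4.5
```

For target mean 3.5 the procedure recovers the uniform distribution (λ ≈ 0), as the principle requires when the constraint carries no extra information.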
Bayesian analysis of rare events
Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
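The BUS framework rests on a rejection-sampling reading of Bayes' rule: draw a parameter value from the prior and accept it with probability L(θ)/c, where c bounds the likelihood from above. A minimal sketch of that reinterpretation (the reliability model, data, and likelihood bound below are hypothetical illustrations, not the paper's FORM/IS/SuS machinery):

```python
import random

random.seed(1)

def bus_posterior_samples(prior_sampler, likelihood, n_samples):
    """Rejection-sampling view of Bayesian updating (the idea behind BUS):
    draw theta from the prior, accept with probability L(theta)/c."""
    # Crude upper bound on the likelihood via prior sampling (illustration only).
    c = max(likelihood(prior_sampler()) for _ in range(10000)) * 1.1
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sampler()
        if random.random() < likelihood(theta) / c:
            accepted.append(theta)
    return accepted

# Hypothetical setting: uncertain failure rate theta with a uniform prior;
# monitoring data: 2 failures observed in 1000 demands (binomial likelihood).
prior = lambda: random.uniform(0.0, 0.02)
lik = lambda th: (th ** 2) * ((1 - th) ** 998)
post = bus_posterior_samples(prior, lik, 2000)
print(sum(post) / len(post))   # posterior mean failure rate, ~0.003
```

In BUS proper, this accept/reject event is recast as a structural-reliability limit-state function, which is what lets FORM, importance sampling, or Subset Simulation do the sampling efficiently for genuinely rare events.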
Polytomies and Bayesian phylogenetic inference.
Lewis, Paul O; Holder, Mark T; Holsinger, Kent E
2005-04-01
Bayesian phylogenetic analyses are now very popular in systematics and molecular evolution because they allow the use of much more realistic models than currently possible with maximum likelihood methods. There are, however, a growing number of examples in which large Bayesian posterior clade probabilities are associated with very short branch lengths and low values for non-Bayesian measures of support such as nonparametric bootstrapping. For the four-taxon case when the true tree is the star phylogeny, Bayesian analyses become increasingly unpredictable in their preference for one of the three possible resolved tree topologies as data set size increases. This leads to the prediction that hard (or near-hard) polytomies in nature will cause unpredictable behavior in Bayesian analyses, with arbitrary resolutions of the polytomy receiving very high posterior probabilities in some cases. We present a simple solution to this problem involving a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that allows exploration of all of tree space, including unresolved tree topologies with one or more polytomies. The reversible-jump MCMC approach allows prior distributions to place some weight on less-resolved tree topologies, which eliminates misleadingly high posteriors associated with arbitrary resolutions of hard polytomies. Fortunately, assigning some prior probability to polytomous tree topologies does not appear to come with a significant cost in terms of the ability to assess the level of support for edges that do exist in the true tree. Methods are discussed for applying arbitrary prior distributions to tree topologies of varying resolution, and an empirical example showing evidence of polytomies is analyzed and discussed.
Bayesian methods for measures of agreement
Broemeling, Lyle D
2009-01-01
Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process. The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software. Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
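Of the agreement measures the book covers, the Kappa coefficient is the most widely used. The quantity itself can be computed in a few lines; note this is the plain point estimate for two raters, not the book's WinBUGS-based Bayesian inference, and the ratings below are invented for illustration:

```python
def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters:
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    cats = sorted(set(ratings_a) | set(ratings_b))
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n   # observed
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in cats)                                      # by chance
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters classifying 10 scans as positive/negative.
a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "neg", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "neg"]
print(round(cohen_kappa(a, b), 3))   # → 0.583
```

A Bayesian treatment, as in the book, would instead place a prior on the cell probabilities of the raters' joint table and report a posterior distribution for kappa rather than a single number.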
On the Properties of Neural Machine Translation: Encoder-Decoder Approaches
Cho, Kyunghyun; van Merrienboer, Bart; Bahdanau, Dzmitry; Bengio, Yoshua
2014-01-01
Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and...
Design of optical decoders using a polarization-encoded optical shadow-casting scheme
Rizvi, Rizwan A.; Zaheer, K.; Zubairy, M. Zuhail
1988-12-01
The design of various decoders in the polarization-encoded optical shadow-casting scheme is presented. A general algorithm is used to design these multioutput combinational logic units with separate and simultaneous generation of outputs. In particular, the design of a binary-coded-decimal (BCD)-to-octal decoder and a BCD-to-Excess-3 code converter is presented for a fixed source plane as well as a fixed decoding mask. The results have been verified experimentally.
Decoding temporally encoded sensory input by cortical oscillations and thalamic phase comparators
Ahissar, Ehud; Haidarliu, Sebastian; Zacksenhouse, Miriam
1997-01-01
The temporally encoded information obtained by vibrissal touch could be decoded “passively,” involving only input-driven elements, or “actively,” utilizing intrinsically driven oscillators. A previous study suggested that the trigeminal somatosensory system of rats does not obey the bottom-up order of activation predicted by passive decoding. Thus, we have tested whether this system obeys the predictions of active decoding. We have studied cortical single units in ...
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications, adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion-probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
Convergence Analysis of Turbo Decoding of Serially Concatenated Block Codes and Product Codes
Directory of Open Access Journals (Sweden)
Krause Amir
2005-01-01
Full Text Available The geometric interpretation of turbo decoding has founded a framework, and provided tools for the analysis of parallel-concatenated code decoding. In this paper, we extend this analytical basis to the decoding of serially concatenated codes, and focus on serially concatenated product codes (SCPC), i.e., product codes with checks on checks. For this case, at least one of the component (i.e., row/column) decoders should calculate the extrinsic information not only for the information bits, but also for the check bits. We refer to such a component decoder as a serial decoding module (SDM). We extend the framework accordingly, derive the update equations for a general turbo decoder of SCPC, and the expressions for the main analysis tools: the Jacobian and stability matrices. We explore the stability of the SDM. Specifically, for high SNR, we prove that the maximal eigenvalue of the SDM's stability matrix approaches a limit set by the minimum Hamming distance of the component code. Hence, for practical codes, the SDM is unstable. Further, we analyze the two turbo decoding schemes, proposed by Benedetto and Pyndiah, by deriving the corresponding update equations and by demonstrating the structure of their stability matrices for the repetition code and an SCPC code with information bits. Simulation results for the Hamming and Golay codes are presented, analyzed, and compared to the theoretical results and to simulations of turbo decoding of parallel concatenation of the same codes.
Multiple-Symbol Noncoherent Decoding of Uncoded and Convolutionally Coded Continuous Phase Modulation
Divsalar, D.; Raphaeli, D.
2000-01-01
Recently, a method for combined noncoherent detection and decoding of trellis-codes (noncoherent coded modulation) has been proposed, which can practically approach the performance of coherent detection.
VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders
Directory of Open Access Journals (Sweden)
Georgios Passas
2012-01-01
Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit, and α-, β-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures, together with a proposed parameter set, lead to defining equations and a comparison between those architectures.
Evolution in Mind: Evolutionary Dynamics, Cognitive Processes, and Bayesian Inference.
Suchow, Jordan W; Bourgin, David D; Griffiths, Thomas L
2017-07-01
Evolutionary theory describes the dynamics of population change in settings affected by reproduction, selection, mutation, and drift. In the context of human cognition, evolutionary theory is most often invoked to explain the origins of capacities such as language, metacognition, and spatial reasoning, framing them as functional adaptations to an ancestral environment. However, evolutionary theory is useful for understanding the mind in a second way: as a mathematical framework for describing evolving populations of thoughts, ideas, and memories within a single mind. In fact, deep correspondences exist between the mathematics of evolution and of learning, with perhaps the deepest being an equivalence between certain evolutionary dynamics and Bayesian inference. This equivalence permits reinterpretation of evolutionary processes as algorithms for Bayesian inference and has relevance for understanding diverse cognitive capacities, including memory and creativity. Copyright © 2017 Elsevier Ltd. All rights reserved.
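The equivalence claimed above is easy to state concretely: one step of discrete replicator dynamics, with each type's fitness set equal to the likelihood of the data under that type, is exactly Bayes' rule over the population of hypotheses. A minimal check (numbers are arbitrary illustrations):

```python
def replicator_step(freqs, fitness):
    """Discrete replicator dynamics: x_i' = x_i f_i / sum_j x_j f_j."""
    total = sum(x * f for x, f in zip(freqs, fitness))
    return [x * f / total for x, f in zip(freqs, fitness)]

def bayes_update(prior, likelihood):
    """Bayes' rule: p(h|d) = p(h) p(d|h) / sum_h' p(h') p(d|h')."""
    total = sum(p * l for p, l in zip(prior, likelihood))
    return [p * l / total for p, l in zip(prior, likelihood)]

prior = [0.5, 0.3, 0.2]        # population frequencies of hypotheses/types
likelihood = [0.9, 0.5, 0.1]   # fitness of each type = P(data | h)
print(replicator_step(prior, likelihood) == bayes_update(prior, likelihood))  # True
```

The two functions are term-by-term identical once fitness is read as likelihood, which is the formal content of the evolution-inference correspondence the abstract describes.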
Bayesian Model Averaging for Propensity Score Analysis.
Kaplan, David; Chen, Jianshen
2014-01-01
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
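Bayesian model averaging weights each candidate model's estimate by its posterior model probability. The article works with the R package BMA and full MCMC; as a rough stand-in for intuition, the common BIC approximation to posterior model weights (under equal prior model odds) can be sketched as follows, with all numbers hypothetical:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    p(M_k | data) ∝ exp(-BIC_k / 2) under equal prior model odds."""
    best = min(bics)
    w = [math.exp(-(b - best) / 2) for b in bics]   # shift for stability
    z = sum(w)
    return [wi / z for wi in w]

def bma_estimate(estimates, bics):
    """Model-averaged estimate: weight each model's estimate by its
    (approximate) posterior model probability."""
    return sum(w * e for w, e in zip(bma_weights(bics), estimates))

# Hypothetical: three propensity-score models and their treatment effects.
effects = [0.42, 0.37, 0.55]
bics = [1012.3, 1010.1, 1019.8]
print(round(bma_estimate(effects, bics), 3))   # ≈ 0.383
```

The fully Bayesian approach in the article replaces these BIC weights with posterior model probabilities obtained by MCMC, which also propagates within-model parameter uncertainty into the averaged causal estimate.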
Bayesian mapping QTL for fruit and growth phenological traits in ...
African Journals Online (AJOL)
STORAGESEVER
2009-01-19
Jan 19, 2009 ... [Abstract not available in this record; the indexed excerpt from Afr. J. Biotechnol. reproduces Table 4: summary statistics for epistatic effects obtained with Bayesian model selection on fruit and growth phenological traits in the F2 population and F3 lines, reported per linkage-group interval (e.g., LG5[88.5, 163.4] × LG6[60.7, 78.6]) with heritability, epistatic effect components (aa, dd, ad, da), and 2lnBF values.]
Pedestrian dynamics via Bayesian networks
Venkat, Ibrahim; Khader, Ahamad Tajudin; Subramanian, K. G.
2014-06-01
Studies on pedestrian dynamics have vital applications in crowd control management relevant to organizing safer large scale gatherings including pilgrimages. Reasoning about pedestrian motion via computational intelligence techniques can be posed as a potential research problem within the realms of Artificial Intelligence. In this contribution, we propose a "Bayesian Network Model for Pedestrian Dynamics" (BNMPD) to reason about the vast uncertainty imposed by pedestrian motion. With reference to key findings from the literature, which include simulation studies, we systematically identify: What are the various factors that could contribute to the prediction of crowd flow status? The proposed model unifies these factors in a cohesive manner using Bayesian Networks (BNs) and serves as a sophisticated probabilistic tool to simulate vital cause and effect relationships entailed in the pedestrian domain.
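As a toy illustration of the kind of query such a network answers (the structure and all probabilities below are invented, not taken from the BNMPD paper), a two-edge chain Event → Density → Flow can be evaluated by summing over the hidden node:

```python
# Minimal Bayesian-network inference by enumeration for a hypothetical
# pedestrian model: Event -> Density -> Flow.
p_event = {True: 0.3, False: 0.7}             # P(Event)
p_density_given_event = {True: 0.8, False: 0.2}   # P(Density=high | Event)
p_congested_given_density = {True: 0.9, False: 0.1}  # P(Flow=congested | Density=high?)

def p_congested_given(event):
    """P(Flow=congested | Event): sum over the hidden Density node."""
    return sum(
        (p_density_given_event[event] if high else 1 - p_density_given_event[event])
        * p_congested_given_density[high]
        for high in (True, False)
    )

def p_congested():
    """Marginal P(Flow=congested), summing over Event as well."""
    return sum(p_event[e] * p_congested_given(e) for e in (True, False))

print(p_congested_given(True))   # 0.8*0.9 + 0.2*0.1 ≈ 0.74
print(p_congested())             # ≈ 0.404
```

A full BNMPD would have many more factor nodes (and would be learned or elicited rather than hand-set), but the inference primitive is the same marginalization.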
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples...
BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS
Directory of Open Access Journals (Sweden)
Thordis Linda Thorarinsdottir
2011-05-01
Full Text Available In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt-and-pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations, and examples of the performance of the procedure are given.
Bayesian Inference on Proportional Elections
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
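The core computation described above can be sketched generically: perturb poll shares in a Monte Carlo loop and re-run a highest-averages (D'Hondt) seat allocation, counting how often a party wins at least one seat. This is a simplified illustration of the idea, not the paper's Brazil-specific methodology, and all shares and noise levels are invented:

```python
import random

def dhondt(votes, seats):
    """D'Hondt highest-averages allocation (used, with variations, in
    Brazilian proportional elections)."""
    alloc = [0] * len(votes)
    for _ in range(seats):
        quotients = [v / (s + 1) for v, s in zip(votes, alloc)]
        alloc[quotients.index(max(quotients))] += 1
    return alloc

def prob_any_seat(party, mean_shares, seats, n_sim=5000, noise=0.03):
    """Monte Carlo: perturb poll shares, reallocate, count wins."""
    random.seed(0)
    wins = 0
    for _ in range(n_sim):
        shares = [max(1e-9, s + random.gauss(0, noise)) for s in mean_shares]
        if dhondt(shares, seats)[party] > 0:
            wins += 1
    return wins / n_sim

# Hypothetical poll shares for four parties contesting 10 seats.
polls = [0.45, 0.30, 0.18, 0.07]
print(prob_any_seat(3, polls, seats=10))  # chance the smallest party is seated
```

At the baseline shares the smallest party gets no seat (the allocation is [5, 3, 2, 0]), so its probability of representation comes entirely from the simulated polling uncertainty.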
Bayesian analyses of cognitive architecture.
Houpt, Joseph W; Heathcote, Andrew; Eidels, Ami
2017-06-01
The question of cognitive architecture-how cognitive processes are temporally organized-has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Deep Learning and Bayesian Methods
Directory of Open Access Journals (Sweden)
Prosper Harrison B.
2017-01-01
Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.
Bayesian inference on proportional elections.
Directory of Open Access Journals (Sweden)
Gabriel Hideki Vatanabe Brunello
Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
Relating the Structure of Noise Correlations in Macaque Primary Visual Cortex to Decoder Performance
Directory of Open Access Journals (Sweden)
Or P. Mendels
2018-03-01
Full Text Available Noise correlations in neuronal responses can have a strong influence on the information available in large populations. In addition, the structure of noise correlations may have a great impact on the ability of different algorithms to extract this information; this impact may depend on the specific algorithm, and hence may affect our understanding of population codes in the brain. Thus, a better understanding of the structure of noise correlations and their interplay with different readout algorithms is required. Here we use eigendecomposition to investigate the structure of noise correlations in populations of about 50–100 simultaneously recorded neurons in the primary visual cortex of anesthetized monkeys, and we relate this structure to the performance of two common decoders: the population vector and the optimal linear estimator. Our analysis reveals a non-trivial correlation structure, in which the eigenvalue spectrum is composed of several distinct large eigenvalues that represent different shared modes of fluctuation extending over most of the population, and a semi-continuous tail. The largest eigenvalue represents a uniform collective mode of fluctuation. The second and third eigenvalues typically show either a clear functional structure (i.e., dependent on the preferred orientation of the neurons) or a spatial structure (i.e., dependent on the physical position of the neurons). We find that the number of shared modes increases with the population size, being roughly 10% of that size. Furthermore, we find that the noise in each of these collective modes grows linearly with the population. This linear growth of correlated noise power can have limiting effects on the utility of averaging neuronal responses across large populations, depending on the readout. Specifically, the collective modes of fluctuation limit the accuracy of the population vector but not of the optimal linear estimator.
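The linear growth of the uniform collective mode can be reproduced in a toy simulation: one shared gain fluctuation added to every neuron, plus private noise, gives a trial-to-trial covariance whose top eigenvalue scales roughly linearly with population size (all parameters below are illustrative, not fitted to the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_covariance_spectrum(n_neurons, n_trials, shared_sd=1.0, private_sd=1.0):
    """Simulate responses with one uniform shared fluctuation mode plus
    private noise, then eigendecompose the trial-to-trial covariance."""
    shared = rng.normal(0.0, shared_sd, size=(n_trials, 1))   # collective mode
    private = rng.normal(0.0, private_sd, size=(n_trials, n_neurons))
    responses = shared + private      # every neuron carries the shared mode
    cov = np.cov(responses, rowvar=False)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues

for n in (50, 100, 200):
    top = noise_covariance_spectrum(n, n_trials=2000)[0]
    print(n, round(float(top), 1))   # leading eigenvalue grows ~linearly with n
```

Analytically, the covariance here is σ²_private·I plus σ²_shared times the all-ones matrix, so the leading eigenvalue is about n·σ²_shared + σ²_private while the rest of the spectrum stays near σ²_private, which is the limiting behavior the abstract attributes to the uniform collective mode.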
Decoding spatial attention with EEG and virtual acoustic space.
Dong, Yue; Raif, Kaan E; Determan, Sarah C; Gai, Yan
2017-11-01
Decoding spatial attention based on brain signals has wide applications in brain-computer interface (BCI). Previous BCI systems mostly relied on visual patterns or auditory stimulation (e.g., loudspeakers) to evoke synchronous brain signals. It is difficult to cover a large range of spatial locations with such a stimulation protocol. The present study explored the possibility of using virtual acoustic space and a visual-auditory matching paradigm to overcome this issue. The technique has the flexibility of generating sound stimulation from virtually any spatial location. Brain signals of eight human subjects were obtained with a 32-channel electroencephalogram (EEG). Two amplitude-modulated noise or speech sentences carrying distinct spatial information were presented concurrently. Each sound source was tagged with a unique modulation phase so that the phase of the recorded EEG signals indicated the sound being attended to. The phase-tagged sound was further filtered with head-related transfer functions to create the sense of virtual space. Subjects were required to pay attention to the sound source that best matched the location of a visual target. For all the subjects, the phase of a single sound could be accurately reflected over the majority of electrodes based on EEG responses of 90 s or less. Fewer electrodes provided significant decoding of auditory attention, and such decoding may require longer EEG responses. The reliability and efficiency of decoding with a single electrode varied with subjects. Overall, the virtual acoustic space protocol has the potential of being used in practical BCI systems. © 2017 Saint Louis University. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
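The phase-tagging idea can be sketched in a few lines: each source carries the same modulation frequency but a distinct phase, and the attended source is identified by comparing the phase of the recorded signal at the tagging frequency with each modulator's phase. The demo below uses toy sinusoids, not real EEG, and a single DFT bin in place of the study's full pipeline:

```python
import math, cmath

def modulation_phase(signal, fs, f_mod):
    """Phase of a signal at the tagging frequency (single DFT bin)."""
    acc = sum(s * cmath.exp(-2j * math.pi * f_mod * k / fs)
              for k, s in enumerate(signal))
    return cmath.phase(acc)

def decode_attended(eeg, modulators, fs, f_mod):
    """Pick the source whose modulator phase is circularly closest to the
    phase of the recorded signal at the tagging frequency."""
    ph = modulation_phase(eeg, fs, f_mod)
    def circ_dist(a, b):
        return abs(cmath.phase(cmath.exp(1j * (a - b))))
    dists = [circ_dist(ph, modulation_phase(m, fs, f_mod)) for m in modulators]
    return dists.index(min(dists))

# Toy demo: two sources tagged with opposite phases at 40 Hz; the simulated
# response follows source 1 (plus a DC offset the 40-Hz bin ignores).
fs, f_mod, n = 250, 40.0, 500     # 2 s of data, integer number of cycles
mod0 = [math.sin(2 * math.pi * f_mod * k / fs) for k in range(n)]
mod1 = [math.sin(2 * math.pi * f_mod * k / fs + math.pi) for k in range(n)]
eeg = [0.8 * m + 0.3 for m in mod1]
print(decode_attended(eeg, [mod0, mod1], fs, f_mod))   # 1
```

Using the modulator waveforms themselves as phase references keeps the DFT phase convention consistent between the stimulus and the recording, which is the essence of the tagging trick.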
Space Shuttle RTOS Bayesian Network
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
Multiview Bayesian Correlated Component Analysis
DEFF Research Database (Denmark)
Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai
2015-01-01
are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.
Reliability analysis with Bayesian networks
Zwirglmaier, Kilian Martin
2017-01-01
Bayesian networks (BNs) represent a probabilistic modeling tool with large potential for reliability engineering. While BNs have been successfully applied to reliability engineering, there are remaining issues, some of which are addressed in this work. Firstly a classification of BN elicitation approaches is proposed. Secondly two approximate inference approaches, one of which is based on discretization and the other one on sampling, are proposed. These approaches are applicable to hybrid/con...
Interim Bayesian Persuasion: First Steps
Perez, Eduardo
2015-01-01
This paper makes a first attempt at building a theory of interim Bayesian persuasion. I work in a minimalist model where a low or high type sender seeks validation from a receiver who is willing to validate high types exclusively. After learning her type, the sender chooses a complete conditional information structure for the receiver from a possibly restricted feasible set. I suggest a solution to this game that takes into account the signaling potential of the sender's choice.
Bayesian Sampling using Condition Indicators
DEFF Research Database (Denmark)
Faber, Michael H.; Sørensen, John Dalsgaard
2002-01-01
This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion. It is shown how half-cell potential measurements may be utilized to update the probability of excessive repair after 50 years.
Computational Neuropsychology and Bayesian Inference
Directory of Open Access Journals (Sweden)
Thomas Parr
2018-02-01
Full Text Available Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine ‘prior’ beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology – optimal inference with suboptimal priors – and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient’s behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input; it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
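The position-velocity Kalman filter that serves as the baseline comparison above can be sketched in a few lines of Python. This is a minimal 1-D linear sketch, not the paper's UKF2; the noise parameters `q` and `r` are illustrative values, not those of the study.

```python
# Minimal 1-D position-velocity Kalman filter, a linear sketch of the
# "p-v Kalman" baseline mentioned above (the UKF2's richer nonlinear
# encoding model is not reproduced here; q and r are invented values).
def kalman_pv(observations, dt=1.0, q=0.01, r=0.25):
    """Track position and velocity from noisy position observations."""
    x = [observations[0], 0.0]          # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    estimates = []
    for z in observations:
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        S = P[0][0] + r                   # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(x[0])
    return estimates
```

Fed a steadily advancing position trace, the filter learns the velocity and tracks the target with shrinking lag.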
A Novel Modified Algorithm with Reduced Complexity LDPC Code Decoder
Directory of Open Access Journals (Sweden)
Song Yang
2014-10-01
Full Text Available A novel efficient decoding algorithm that reduces the complexity of the sum-product algorithm (SPA) for LDPC codes is proposed. Based on the hyperbolic tangent rule, the check-node update is reorganized into two horizontal processes with similar calculations. Motivated by the finding that the min-sum (MS) algorithm reduces complexity by approximating the horizontal process, the proposed method simplifies the part of the message carrying little information weight. Compared with existing approximations, the proposed method has lower computational complexity than the SPA. Simulation results show that the proposed algorithm achieves performance very close to the SPA.
Multiple LDPC decoding for distributed source coding and video coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Luong, Huynh Van; Huang, Xin
2011-01-01
Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... results on DVC show that the LDPCA performance implies a loss compared to the conditional entropy, but also that the proposed scheme reduces the DVC bit rate up to 3.9% and improves the rate-distortion (RD) performance of a Transform Domain Wyner-Ziv (TDWZ) video codec....
Construction and decoding of a class of algebraic geometry codes
DEFF Research Database (Denmark)
Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd
1989-01-01
A class of codes derived from algebraic plane curves is constructed. The concepts and results from algebraic geometry that were used are explained in detail; no further knowledge of algebraic geometry is needed. Parameters, generator and parity-check matrices are given. The main result is a decod......
Decoding the auditory brain with canonical component analysis
DEFF Research Database (Denmark)
de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M
2018-01-01
The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP......) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus...
Synthesizer for decoding a coded short wave length irradiation
International Nuclear Information System (INIS)
1976-01-01
The system uses a point irradiation source, typically an X-ray emitter, which illuminates a three-dimensional object consisting of a set of parallel planes, each of which acts as a source of coded information. The secondary source images are superimposed on a common flat screen. The decoding system comprises an input light-screen detector, a picture screen amplifier, a beam deflector, an output picture screen, an optical focussing unit including three lenses, a masking unit, an output light-screen detector and a video signal reproduction unit of cathode ray tube form, or similar, to create a three-dimensional image of the object. (G.C.)
Decoding the organization of spinal circuits that control locomotion
Kiehn, Ole
2016-01-01
Unravelling the functional operation of neuronal networks and linking cellular activity to specific behavioural outcomes are among the biggest challenges in neuroscience. In this broad field of research, substantial progress has been made in studies of the spinal networks that control locomotion. Through united efforts using electrophysiological and molecular genetic network approaches and behavioural studies in phylogenetically diverse experimental models, the organization of locomotor networks has begun to be decoded. The emergent themes from this research are that the locomotor networks have a modular organization with distinct transmitter and molecular codes and that their organization is reconfigured with changes to the speed of locomotion or changes in gait. PMID:26935168
Bayesian methods applied to GWAS.
Fernando, Rohan L; Garrick, Dorian
2013-01-01
Bayesian multiple-regression methods are being successfully used for genomic prediction and selection. These regression models simultaneously fit many more markers than the number of observations available for the analysis. Thus, Bayes' theorem is used to combine prior beliefs of marker effects, which are expressed in terms of prior distributions, with information from data for inference. Often, the analyses are too complex for closed-form solutions and Markov chain Monte Carlo (MCMC) sampling is used to draw inferences from posterior distributions. This chapter describes how these Bayesian multiple-regression analyses can be used for GWAS. In most GWAS, false positives are controlled by limiting the genome-wise error rate, which is the probability of one or more false-positive results, to a small value. As the number of tests in GWAS is very large, this results in very low power. Here we show how in Bayesian GWAS false positives can be controlled by limiting the proportion of false-positive results among all positives to some small value. The advantage of this approach is that the power of detecting associations is not inversely related to the number of markers.
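The idea of limiting the proportion of false positives among all positives can be sketched as a greedy selection rule over posterior inclusion probabilities (PIPs). The helper name and the numbers below are illustrative, not from the chapter.

```python
# Greedy Bayesian false-discovery control: rank markers by posterior
# inclusion probability (PIP) and keep adding them while the expected
# proportion of false positives among those selected stays below alpha.
# (A sketch of the general idea above; names and thresholds are invented.)
def select_by_posterior_fdr(pips, alpha=0.05):
    order = sorted(range(len(pips)), key=lambda i: pips[i], reverse=True)
    selected, false_sum = [], 0.0
    for i in order:
        # 1 - PIP is the posterior probability that marker i is a false positive
        if (false_sum + 1.0 - pips[i]) / (len(selected) + 1) <= alpha:
            selected.append(i)
            false_sum += 1.0 - pips[i]
        else:
            break
    return sorted(selected)

# Example: PIPs of 0.99 and 0.98 pass at alpha = 0.05; 0.6 and 0.2 do not.
```

Because the markers are scanned in decreasing PIP order, the running average of expected false positives only grows, so stopping at the first violation is safe.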
ABCtoolbox: a versatile toolkit for approximate Bayesian computations.
Wegmann, Daniel; Leuenberger, Christoph; Neuenschwander, Samuel; Excoffier, Laurent
2010-03-04
The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms allowing one to obtain parameter posterior distributions based on simulations not requiring likelihood computations. Here we present ABCtoolbox, a series of open source programs to perform Approximate Bayesian Computations (ABC). It implements various ABC algorithms including rejection sampling, MCMC without likelihood, a Particle-based sampler and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can also interact with most simulation and summary statistics computation programs. The usability of the ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, data simulations, computation of summary statistics, estimation of posterior distributions, model choice, validation of the estimation procedure, and visualization of the results.
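The simplest of the ABC algorithms listed above, plain rejection sampling, can be sketched as follows: draw a parameter from the prior, simulate data under the model, and keep the draw when a summary statistic lands close to the observed one. The Bernoulli model, tolerance, and sample sizes are invented for illustration and are not ABCtoolbox's interface.

```python
import random

# ABC rejection sampling sketch: no likelihood is ever computed; draws
# are accepted when simulated data resemble the observation (invented
# Bernoulli toy model, not an ABCtoolbox workflow).
def abc_rejection(observed_mean, n_draws=20000, tol=0.05, seed=1):
    random.seed(seed)
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(0.0, 1.0)                    # prior: Uniform(0, 1)
        sim = [random.random() < theta for _ in range(50)]  # simulate the model
        if abs(sum(sim) / 50.0 - observed_mean) <= tol:     # compare summaries
            accepted.append(theta)
    return accepted
```

The accepted draws approximate the posterior: with an observed mean of 0.7, their average concentrates near 0.7, and shrinking `tol` tightens the approximation at the cost of fewer acceptances.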
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
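The Viterbi algorithm named above, maximum-likelihood search for the best path through a trellis, can be sketched on a toy two-state hidden Markov model. The trellis here is illustrative (the classic weather example) and is unrelated to the block codes of the later chapters.

```python
# Viterbi algorithm on a toy two-state trellis: at each stage, every
# state keeps only the most probable path reaching it, so the search
# is linear in sequence length (a sketch of trellis-based MLD).
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, path = max(
                (V[t - 1][prev][0] * trans_p[prev][s] * emit_p[s][obs[t]],
                 V[t - 1][prev][1] + [s])
                for prev in states)
            V[t][s] = (prob, path)
    return max(V[-1].values())[1]     # most likely state sequence
```

For decoding a rate-1/2 convolutional code the states would be encoder shift-register contents and the "emissions" branch metrics, but the survivor-path bookkeeping is exactly the same.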
12th Brazilian Meeting on Bayesian Statistics
Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo
2015-01-01
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...
Gupta, Neha; Parihar, Priyanka; Neema, Vaibhav
2018-04-01
Researchers have proposed many circuit techniques to reduce leakage power dissipation in memory cells. To reduce the overall power of the memory system, however, the input circuitry of the memory architecture, i.e. the row and column decoders, must also be addressed. In this research work, a low-leakage, high-speed row and column decoder for memory array application is designed and four new techniques are proposed: cluster DECODER, body bias DECODER, source bias DECODER, and source coupling DECODER designs are implemented and compared for memory array application. Simulation for the comparative analysis of the different DECODER design parameters is performed at the 180 nm GPDK technology node using the CADENCE tool. Simulation results show that the proposed source bias DECODER circuit technique decreases the leakage current by 99.92% and static energy by 99.92% at a supply voltage of 1.2 V. The proposed circuit also improves dynamic power dissipation by 5.69%, dynamic PDP/EDP by 65.03% and delay by 57.25% at 1.2 V supply voltage.
Multi-Trial Guruswami–Sudan Decoding for Generalised Reed–Solomon Codes
DEFF Research Database (Denmark)
Nielsen, Johan Sebastian Rosenkilde; Zeh, Alexander
2013-01-01
An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich’s module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list...
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
Multiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding
DEFF Research Database (Denmark)
Luong, Huynh Van; Huang, Xin; Forchhammer, Søren
2011-01-01
codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter bitplane correlation to enhance the bitplane decoding performance. Experimental results...
Tingerthal, John Steven
2013-01-01
Using case study methodology and autoethnographic methods, this study examines a process of curricular development known as "Decoding the Disciplines" (Decoding) by documenting the experience of its application in a construction engineering mechanics course. Motivated by the call to integrate what is known about teaching and learning…
Using Exit Charts to Develop an Early Stopping Criterion for Turbo Decoding
Potman, J.; Hoeksema, F.W.; Slump, Cornelis H.
2004-01-01
Early stopping criteria are used to determine when additional decoder iterations result in little or no improvement in the bit- error-rate (BER) at the output of a Turbo decoder. This paper proposes a stopping criterion based on Extrinsic Information Transfer (EXIT) charts. The generation and
Directory of Open Access Journals (Sweden)
Hong Gi Yeom
2014-01-01
Full Text Available Decoding neural signals into control outputs has been a key to the development of brain-computer interfaces (BCIs. While many studies have identified neural correlates of kinematics or applied advanced machine learning algorithms to improve decoding performance, relatively less attention has been paid to optimal design of decoding models. For generating continuous movements from neural activity, design of decoding models should address how to incorporate movement dynamics into models and how to select a model given specific BCI objectives. Considering nonlinear and independent speed characteristics, we propose a hybrid Kalman filter to decode the hand direction and speed independently. We also investigate changes in performance of different decoding models (the linear and Kalman filters when they predict reaching movements only or predict both reach and rest. Our offline study on human magnetoencephalography (MEG during point-to-point arm movements shows that the performance of the linear filter or the Kalman filter is affected by including resting states for training and predicting movements. However, the hybrid Kalman filter consistently outperforms others regardless of movement states. The results demonstrate that better design of decoding models is achieved by incorporating movement dynamics into modeling or selecting a model according to decoding objectives.
Application of MTR soft-decision decoding in multiple-head ...
Indian Academy of Sciences (India)
known as an extrinsic information, is present between decoders and detectors in such systems. The known MTR codes are found on the hard-decision basis, producing on their output only symbols 0/1, without any other information about the decision itself. Therefore, our idea was to somehow implement soft-decision decoding ...
A Fully Parallel VLSI-implementation of the Viterbi Decoding Algorithm
DEFF Research Database (Denmark)
Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik
1989-01-01
In this paper we describe the implementation of a K = 7, R = 1/2 single-chip Viterbi decoder intended to operate at 10-20 Mbit/sec. We propose a general, regular and area efficient floor-plan that is also suitable for implementation of decoders for codes with different generator polynomials or di...
The Contribution of Attentional Control and Working Memory to Reading Comprehension and Decoding
Arrington, C. Nikki; Kulesz, Paulina A.; Francis, David J.; Fletcher, Jack M.; Barnes, Marcia A.
2014-01-01
Little is known about how specific components of working memory, namely, attentional processes including response inhibition, sustained attention, and cognitive inhibition, are related to reading decoding and comprehension. The current study evaluated the relations of reading comprehension, decoding, working memory, and attentional control in…
Early Word Decoding Ability as a Longitudinal Predictor of Academic Performance
Nordström, Thomas; Jacobson, Christer; Söderberg, Pernilla
2016-01-01
This study, using a longitudinal design with a Swedish cohort of young readers, investigates if children's early word decoding ability in second grade can predict later academic performance. In an effort to estimate the unique effect of early word decoding (grade 2) with academic performance (grade 9), gender and non-verbal cognitive ability were…
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \\primula\\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...... and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose {\\primula}--generated propositional instances have thousands of variables, and whose jointrees have clusters...
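The core operation described above, evaluating and differentiating an arithmetic circuit in time linear in its size, can be sketched as follows. The node encoding is invented here for illustration and is not the Primula tool's actual representation.

```python
# Toy arithmetic circuit evaluated in one forward sweep and
# differentiated in one backward sweep, each linear in circuit size
# (a sketch of compiled inference; the node layout is invented here).
# Nodes are topologically ordered tuples: ('leaf', value_index),
# ('+', [child_ids]) or ('*', [child_ids]); the last node is the root.
def evaluate(circuit, values):
    vals = []
    for op, args in circuit:
        if op == 'leaf':
            vals.append(values[args])
        elif op == '+':
            vals.append(sum(vals[i] for i in args))
        else:  # '*'
            p = 1.0
            for i in args:
                p *= vals[i]
            vals.append(p)
    return vals

def differentiate(circuit, vals):
    """Partial derivative of the root with respect to every node."""
    grads = [0.0] * len(circuit)
    grads[-1] = 1.0
    for n in range(len(circuit) - 1, -1, -1):
        op, args = circuit[n]
        if op == '+':
            for i in args:
                grads[i] += grads[n]
        elif op == '*':
            for i in args:
                contrib = grads[n]
                for j in args:
                    if j != i:
                        contrib *= vals[j]
                grads[i] += contrib
    return grads
```

In compiled Bayesian inference, the leaves hold network parameters and evidence indicators, so these derivatives directly yield the posterior marginals.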
Bayesian Posterior Distributions Without Markov Chains
Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.
2012-01-01
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
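Rejection sampling of a posterior without MCMC, the approach the article illustrates, can be sketched for a binomial likelihood under a uniform prior: sample from the prior and accept each draw with probability proportional to its likelihood. The counts below are invented, not the article's case-control data.

```python
import random

# Rejection sampling for a posterior without Markov chains: draw from
# the prior, accept with probability likelihood / max-likelihood
# (binomial toy example with invented counts, not the article's data).
def rejection_sample_posterior(successes, trials, n_draws=50000, seed=7):
    random.seed(seed)
    mode = successes / trials                    # likelihood peaks here
    lik_max = mode**successes * (1 - mode)**(trials - successes)
    accepted = []
    for _ in range(n_draws):
        p = random.random()                      # Uniform(0, 1) prior
        lik = p**successes * (1 - p)**(trials - successes)
        if random.random() < lik / lik_max:      # accept w.p. lik / lik_max
            accepted.append(p)
    return accepted
```

With 7 successes in 10 trials, the accepted draws approximate a Beta(8, 4) posterior, whose mean is 8/12; the transparency of the accept/reject step is exactly the pedagogical point the article makes.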
Iterative Decoding for an Optical CDMA based Laser communication System
Energy Technology Data Exchange (ETDEWEB)
Kim, Jin Young; Kim, Eun Cheol [Kwangwoon Univ., Seoul (Korea, Republic of); Cha, Jae Sang [Seoul National Univ. of Technology, Seoul (Korea, Republic of)
2008-11-15
An optical CDMA (code division multiple access) based Laser communication system has attracted much attention since it requires minimal optical Laser signal processing and is virtually delay free, while from the theoretical point of view, its performance depends on the auto- and cross-correlation properties of the employed sequences. Various kinds of channel coding schemes for optical CDMA based Laser communication systems have been proposed and analyzed to compensate for nonideal channel and receiver conditions in impaired photon channels. In this paper, we propose and analyze iterative decoding of optical CDMA based Laser communication signals for both shot-noise-limited and thermal-noise-limited systems. It is assumed that the optical channel is an intensity modulated (IM) channel and that a direct detection scheme is employed to detect the received optical signal. The performance is evaluated in terms of bit error probability and throughput. It is demonstrated that the BER and throughput performance is substantially improved with interleaver length for a fixed code rate and with the alphabet size of PPM (pulse position modulation). Also, the BER and throughput performance is significantly enhanced with the number of iterations of the decoding process. The results in this paper can be applied to the optical CDMA based Laser communication network with multiple access applications.
Iterative Decoding for an Optical CDMA based Laser communication System
International Nuclear Information System (INIS)
Kim, Jin Young; Kim, Eun Cheol; Cha, Jae Sang
2008-01-01
An optical CDMA (code division multiple access) based Laser communication system has attracted much attention since it requires minimal optical Laser signal processing and is virtually delay free, while from the theoretical point of view, its performance depends on the auto- and cross-correlation properties of the employed sequences. Various kinds of channel coding schemes for optical CDMA based Laser communication systems have been proposed and analyzed to compensate for nonideal channel and receiver conditions in impaired photon channels. In this paper, we propose and analyze iterative decoding of optical CDMA based Laser communication signals for both shot-noise-limited and thermal-noise-limited systems. It is assumed that the optical channel is an intensity modulated (IM) channel and that a direct detection scheme is employed to detect the received optical signal. The performance is evaluated in terms of bit error probability and throughput. It is demonstrated that the BER and throughput performance is substantially improved with interleaver length for a fixed code rate and with the alphabet size of PPM (pulse position modulation). Also, the BER and throughput performance is significantly enhanced with the number of iterations of the decoding process. The results in this paper can be applied to the optical CDMA based Laser communication network with multiple access applications.
Frame Decoder for Consultative Committee for Space Data Systems (CCSDS)
Reyes, Miguel A. De Jesus
2014-01-01
GNU Radio is a free and open source development toolkit that provides signal processing blocks to implement software radios. It can be used with low-cost external RF hardware to create software-defined radios, or without hardware in a simulation-like environment. GNU Radio applications are primarily written in Python and C++. The Universal Software Radio Peripheral (USRP) is a computer-hosted software radio designed by Ettus Research. The USRP connects to a host computer via high-speed Gigabit Ethernet. Using the open source Universal Hardware Driver (UHD), we can run GNU Radio applications using the USRP. An SDR is a "radio in which some or all physical layer functions are software defined" (IEEE definition). A radio is any kind of device that wirelessly transmits or receives radio frequency (RF) signals. An SDR is a radio communication system where components that have typically been implemented in hardware are implemented in software. GNU Radio has a generic packet decoder block that is not optimized for CCSDS frames. Using this generic packet decoder adds bytes to the CCSDS frames and does not permit bit error correction using Reed-Solomon. The CCSDS frames consist of 256 bytes, including a 32-bit sync marker (0x1ACFFC1D). These frames are generated by the Space Data Processor, and GNU Radio performs the modulation and framing operations, including frame synchronization.
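The frame-synchronization step described above can be sketched as a byte-aligned scan for the 32-bit sync marker followed by fixed-length slicing. Real CCSDS streams may require bit-level sliding correlation, derandomization, and Reed-Solomon correction, all of which this sketch omits.

```python
# Byte-aligned CCSDS frame extraction sketch: find each occurrence of
# the 32-bit attached sync marker 0x1ACFFC1D and slice out a 256-byte
# frame (real decoders also handle bit slips and Reed-Solomon, omitted here).
SYNC = bytes.fromhex("1ACFFC1D")
FRAME_LEN = 256          # bytes, including the 4-byte sync marker

def extract_frames(stream):
    frames, i = [], 0
    while True:
        i = stream.find(SYNC, i)
        if i < 0 or i + FRAME_LEN > len(stream):
            return frames                      # no more complete frames
        frames.append(stream[i:i + FRAME_LEN])
        i += FRAME_LEN   # jump past this frame before searching again
```

This is the behavior the generic GNU Radio packet decoder lacks for CCSDS frames: it locks to the known marker rather than adding its own framing bytes.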
A method for decoding the neurophysiological spike-response transform.
Stern, Estee; García-Crescioni, Keyla; Miller, Mark W; Peskin, Charles S; Brezina, Vladimir
2009-11-15
Many physiological responses elicited by neuronal spikes-intracellular calcium transients, synaptic potentials, muscle contractions-are built up of discrete, elementary responses to each spike. However, the spikes occur in trains of arbitrary temporal complexity, and each elementary response not only sums with previous ones, but can itself be modified by the previous history of the activity. A basic goal in system identification is to characterize the spike-response transform in terms of a small number of functions-the elementary response kernel and additional kernels or functions that describe the dependence on previous history-that will predict the response to any arbitrary spike train. Here we do this by developing further and generalizing the "synaptic decoding" approach of Sen et al. (1996). Given the spike times in a train and the observed overall response, we use least-squares minimization to construct the best estimated response and at the same time best estimates of the elementary response kernel and the other functions that characterize the spike-response transform. We avoid the need for any specific initial assumptions about these functions by using techniques of mathematical analysis and linear algebra that allow us to solve simultaneously for all of the numerical function values treated as independent parameters. The functions are such that they may be interpreted mechanistically. We examine the performance of the method as applied to synthetic data. We then use the method to decode real synaptic and muscle contraction transforms.
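A linear special case of the kernel-estimation idea above, recovering a purely summating elementary response kernel by least squares, can be sketched as follows. The history-dependence kernels of the full method are omitted, and the spike train in the example is invented.

```python
# Least-squares recovery of the elementary response kernel, assuming the
# response is a pure sum of per-spike kernels (a linear sketch of the
# decoding approach; history-dependence terms are omitted).
def estimate_kernel(spikes, response, K):
    # Design matrix X[t][k] = spikes[t - k]; solve (X^T X) h = X^T r.
    T = len(spikes)
    X = [[spikes[t - k] if t >= k else 0 for k in range(K)] for t in range(T)]
    A = [[sum(X[t][i] * X[t][j] for t in range(T)) for j in range(K)]
         for i in range(K)]
    b = [sum(X[t][i] * response[t] for t in range(T)) for i in range(K)]
    # Gaussian elimination with partial pivoting on the K x K system.
    for col in range(K):
        piv = max(range(col, K), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, K):
            f = A[row][col] / A[col][col]
            for c in range(col, K):
                A[row][c] -= f * A[col][c]
            b[row] -= f * b[col]
    h = [0.0] * K
    for i in range(K - 1, -1, -1):
        h[i] = (b[i] - sum(A[i][j] * h[j] for j in range(i + 1, K))) / A[i][i]
    return h
```

When the response really is a noiseless sum of kernels, the kernel is recovered exactly; the full method extends the same least-squares machinery with extra functions describing how each elementary response depends on prior activity.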
Mapping of MPEG-4 decoding on a flexible architecture platform
van der Tol, Erik B.; Jaspers, Egbert G.
2001-12-01
In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computers technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes amongst others a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.
Decoding covert shifts of attention induced by ambiguous visuospatial cues
Directory of Open Access Journals (Sweden)
Romain eTrachel
2015-06-01
Full Text Available Simple and unambiguous visual cues (e.g. an arrow) can be used to trigger covert shifts of visual attention away from the center of gaze. The processing of visual stimuli is enhanced at the attended location. Covert shifts of attention modulate the power of cerebral oscillations in the alpha band over parietal and occipital regions. These modulations are sufficiently robust to be decoded on a single trial basis from electro-encephalography (EEG) signals. It is often assumed that covert attention shifts are under voluntary control, and also occur in more natural and complex environments, but there is no direct evidence to support this assumption. We address this important issue by using random-dot stimuli to cue one of two opposite locations, where a visual target is presented. We contrast two conditions in which the random-dot motion is either predictive of the target location or contains ambiguous information. Behavioral results show attention shifts in anticipation of the visual target, in both conditions. In addition, these attention shifts involve similar neural sources, and the EEG can be decoded on a single trial basis. These results shed a new light on the behavioral and neural correlates of visuospatial attention, with implications for Brain-Computer Interfaces (BCI) based on covert attention shifts.
Bayesian Contact Tracing for Communicable Respiratory Disease
Shalaby, Ayman; Lizotte, Daniel
2013-01-01
: Symptoms are ambiguous or easily misdiagnosed (e.g. 2009 H1N1 influenza outbreak shared many symptoms with many other influenza like illnesses)When the symptoms emerge after the individual becomes infectious. Methods We developed a dynamic Bayesian network model to process sensor information from users’ cellphones together with (possibly incomplete) diagnosis information to track the spread of disease in a population. Our model tracks real-time proximity contacts and can provide public health agencies with the probability of infection for each individual in the model. For testing our algorithm, we used a real-world mobile sensor dataset with 120 individuals collected over a period of 9 months, and we simulated an outbreak. Results We ran several experiments where different sub-populations were “infected” and “diagnosed.” By using the contact information, our model was able to automatically identify individuals in the population who were likely to be infected even though they were not directly “diagnosed” with an illness. Conclusions Automatic contact tracing for respiratory pathogens is a powerful idea, however we have identified several implementation challenges. The first challenge is scalability: we note that a contact tracing system with a hundred thousand individuals requires a Bayesian model with a billion nodes. Bayesian inference on models of this scale is an open problem and an active area of research. The second challenge is privacy protection: although the test data were collected in an academic setting, deploying any system will require appropriate safeguards for user privacy. Nonetheless, our work illustrates the potential for broader use of contact tracing for modeling and controlling disease transmission.
3rd Bayesian Young Statisticians Meeting
Lanzarone, Ettore; Villalobos, Isadora; Mattei, Alessandra
2017-01-01
This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).
Learning dynamic Bayesian networks with mixed variables
DEFF Research Database (Denmark)
Bøttcher, Susanne Gammelgaard
This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned. An automated procedure for specifying prior distributions for the parameters in a dynamic Bayesian network is presented. It is a simple extension of the procedure for ordinary Bayesian networks. Finally, Wölfer's sunspot numbers are analyzed.
Locating and decoding barcodes in fuzzy images captured by smart phones
Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
With the growing commercial use of barcodes, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones affects decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length, together with a decoding algorithm with a high fault-tolerance rate. Unlike existing approaches, the location algorithm is based on the edge segment length of EAN-13 barcodes, while our decoding algorithm tolerates fuzzy regions in the barcode image. Experiments on damaged, contaminated and scratched digital images give quite promising results for EAN-13 barcode location and decoding.
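One standard, well-documented part of EAN-13 decoding is check-digit verification, which any decoder (fault-tolerant or not) must perform on its candidate digit string. The following is a minimal implementation of the standard EAN-13 checksum, independent of the paper's locating and decoding algorithms.

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit: weight digits 1,3,1,3,... left to right."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def ean13_is_valid(code: str) -> bool:
    """A 13-digit string is valid if its last digit matches the checksum."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[12]))

# A commonly cited valid EAN-13 example code.
print(ean13_is_valid("4006381333931"))  # -> True
```

A fuzzy-image decoder such as the one described can use this checksum to reject digit strings reconstructed from ambiguous bar regions.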
Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms
Directory of Open Access Journals (Sweden)
Ahmed Azouaoui
2012-01-01
Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature, which use the code itself. Hence, this new approach reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
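The two baseline crossover operators the abstract compares against are standard GA building blocks and can be sketched directly. The paper's domain-specific operator is not reproduced here; these are just the generic uniform and two-point variants, operating on codeword-length gene lists.

```python
import random

def two_point_crossover(p1, p2, rng):
    """Two-point crossover: swap the segment between two random cut points."""
    i, j = sorted(rng.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def uniform_crossover(p1, p2, rng):
    """Uniform crossover: each gene comes from either parent with p = 0.5."""
    c1, c2 = [], []
    for a, b in zip(p1, p2):
        if rng.random() < 0.5:
            c1.append(a); c2.append(b)
        else:
            c1.append(b); c2.append(a)
    return c1, c2

rng = random.Random(42)
p1, p2 = [0] * 8, [1] * 8
print(two_point_crossover(p1, p2, rng))
print(uniform_crossover(p1, p2, rng))
```

Both operators preserve, position by position, the multiset of parental genes; a domain-specific operator like the paper's would instead bias gene choice using code-structure information.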
Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes
DEFF Research Database (Denmark)
Justesen, Jørn; Høholdt, Tom
2007-01-01
Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed-form approximation to the distribution of errors in each step of the decoding of finite-length codes.
Directory of Open Access Journals (Sweden)
Marcos Rodovalho
2014-11-01
Full Text Available The objective of this study was to examine genetic parameters of popping expansion and grain yield in a trial of 169 half-sib families using a Bayesian approach. The independence chain algorithm with informative priors for the components of residual and family variance (inverse-gamma prior distribution) was used. Popping expansion was found to be moderately heritable, with a posterior mode of h2 of 0.34 and a 90% Bayesian confidence interval of 0.22 to 0.44. The heritability of grain yield (family level) was moderate (h2 = 0.4), with a Bayesian confidence interval of 0.28 to 0.49. The target population contains sufficient genetic variability for subsequent breeding cycles, and the Bayesian approach is a useful alternative for scientific inference in the genetic evaluation of popcorn.
Directory of Open Access Journals (Sweden)
Ricardo Luis dos Reis
2008-08-01
Full Text Available In this study, Bayesian methodology was used to estimate the inbreeding coefficient and the outcrossing rate of a diploid population using Cockerham's random model for allelic frequencies. A data-simulation scheme was set up to validate the methodology. The Gibbs sampler algorithm was implemented in the R software to obtain samples from the posterior marginal distributions of the inbreeding coefficient and the outcrossing rate. The Bayesian method proved efficient for estimating the parameters, since the parametric values used in the simulation fell within the 95% credible interval in all scenarios considered. Convergence of the Gibbs sampler was verified, thereby validating the results obtained.
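The algorithmic skeleton of the Gibbs sampler used in the study can be illustrated on a simpler target where the full conditionals are known in closed form. The bivariate normal below is a stand-in chosen for clarity; the paper's sampler targets the posterior of the inbreeding coefficient and outcrossing rate, but the alternate-and-draw structure is the same.

```python
import random
import statistics

# Gibbs sampler for a bivariate standard normal with correlation rho.
# Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y.
rho = 0.8
rng = random.Random(1)

x, y = 0.0, 0.0
xs = []
for it in range(20000):
    x = rng.gauss(rho * y, (1 - rho**2) ** 0.5)
    y = rng.gauss(rho * x, (1 - rho**2) ** 0.5)
    if it >= 2000:          # discard burn-in, as one would before inference
        xs.append(x)

print(f"marginal mean of x ~ {statistics.fmean(xs):.3f} (target 0)")
print(f"marginal sd of x   ~ {statistics.stdev(xs):.3f} (target 1)")
```

As in the study, convergence would normally be checked (e.g., by comparing multiple chains) before summarizing the retained draws into point estimates and credible intervals.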
Bayesian flood forecasting methods: A review
Han, Shasha; Coulibaly, Paulin
2017-08-01
Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation; it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been
To P or Not to P: Backing Bayesian Statistics.
Buchinsky, Farrel J; Chadha, Neil K
2017-12-01
In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
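The "working forward" update the abstract describes has a textbook closed form in the beta-binomial case: a Beta prior on a success rate combined with binomial data yields a Beta posterior. The prior and data counts below are invented for illustration.

```python
import random

random.seed(0)
a_prior, b_prior = 2.0, 2.0          # weakly informative Beta(2, 2) prior
successes, failures = 14, 6          # illustrative new study data

# Conjugate update: posterior is Beta(a_prior + successes, b_prior + failures)
a_post, b_post = a_prior + successes, b_prior + failures
post_mean = a_post / (a_post + b_post)

# "How confidently should we believe" the rate exceeds 50%? Estimate
# P(rate > 0.5 | data) by Monte Carlo draws from the posterior.
draws = [random.betavariate(a_post, b_post) for _ in range(100000)]
p_gt_half = sum(d > 0.5 for d in draws) / len(draws)

print(f"posterior mean: {post_mean:.3f}")
print(f"P(success rate > 0.5 | data): {p_gt_half:.3f}")
```

Unlike a P value, the final quantity is a direct probability statement about the hypothesis given the data, which is the contrast the article draws with null hypothesis significance testing.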
Cross-view gait recognition using joint Bayesian
Li, Chao; Sun, Shouqian; Chen, Xiaoyu; Min, Xin
2017-07-01
Human gait, as a soft biometric, helps to recognize people by the way they walk. To further improve recognition performance under cross-view conditions, we propose Joint Bayesian to model the view variance. We evaluated our proposed method on the largest population (OULP) dataset, which makes our results statistically reliable. As a result, we confirmed that our proposed method significantly outperformed state-of-the-art approaches for both identification and verification tasks. Finally, a sensitivity analysis on the number of training subjects was conducted; we find that Joint Bayesian can achieve competitive results even with a small subset of training subjects (100 subjects). For further comparison, experimental results, learning models, and test codes are available.
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
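For intuition about the conditional intensity function central to the first inference approach, a Hawkes process can be simulated with Ogata's thinning algorithm. The parameter values below are invented for illustration; the paper's MCMC methods would go the other way, inferring such parameters from an observed point pattern.

```python
import math
import random

# Hawkes process with exponentially decaying self-excitation:
# lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
mu, alpha, beta = 0.5, 0.8, 1.2   # subcritical since alpha / beta < 1
T = 100.0
rng = random.Random(7)

def intensity(t, events):
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)

# Ogata's thinning: the intensity only decays between events, so its
# current value bounds it until the next event; sample candidates at
# that rate and accept with probability lambda(candidate) / bound.
events, t = [], 0.0
while t < T:
    lam_bar = intensity(t, events)
    t += rng.expovariate(lam_bar)
    if t < T and rng.random() <= intensity(t, events) / lam_bar:
        events.append(t)

print(f"simulated {len(events)} events on [0, {T:.0f}]")
```

The clustering visible in such simulations (each event transiently raising the intensity) is exactly the branching structure that the paper's second inference approach makes explicit.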
Attention in a bayesian framework
DEFF Research Database (Denmark)
Whiteley, Louise Emma; Sahani, Maneesh
2012-01-01
, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...... settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its...
Developmental origins of recoding and decoding in memory.
Kibbe, Melissa M; Feigenson, Lisa
2014-12-01
Working memory is severely limited in both adults and children, but one way that adults can overcome this limit is through the process of recoding. Recoding happens when representations of individual items are chunked together into a higher-order representation, and the chunk is assigned a label. That label can then be decoded to retrieve the individual items from long-term memory. Whereas this ability has been extensively studied in adults (as, for example, in classic studies of memory in chess), little is known about recoding's developmental origins. Here we asked whether 2- to 3-year-old children also can recode; that is, can they restructure representations of individual objects into a higher-order chunk, assign this new representation a verbal label, and then later decode the label to retrieve the represented individuals from memory. In Experiments 1 and 2, we showed children identical blocks that could be connected to make tools. Children learned a novel name for a tool that could be built from two blocks, and for a tool that could be built from three blocks. Later we told children that one of the tools was hidden in a box, with no visual information provided. Children were allowed to search the box and retrieve varying numbers of blocks. Critically, the retrieved blocks were identical and unconnected, so the only way children could know whether any blocks remained was by using the verbal label to recall how many objects made up each tool (or chunk). We found that even children who could not yet count adjusted their searching of the box depending on the label they had heard. This suggests that they had recoded representations of individual blocks into higher-order chunks, attached labels to the chunks, and then later decoded the labels to infer how many blocks were hidden. In Experiments 3 and 4 we asked whether recoding also can expand the number of individual objects children could remember, as in the classic studies with adults. We found that when no
Bayesian analysis of stress thallium-201 scintigraphy
International Nuclear Information System (INIS)
The variation of the diagnostic value of stress Tl-201 scintigraphy with the prevalence of coronary heart disease (CHD) in the population has been investigated using Bayesian reasoning. From scintigraphic and arteriographic data obtained in 100 consecutive patients presenting with chest pain, the sensitivity of stress Tl-201 scintigraphy for the detection of significant CHD was 90% and the specificity was 88%. From Bayes' theorem, the posterior probability of having CHD for a given test result was calculated for prevalences of CHD ranging from 1% to 99%. The discriminant value of stress Tl-201 scintigraphy was best when the prevalence of CHD lay between 30% and 70%, and maximum for a prevalence of 52%. Thus, stress Tl-201 scintigraphy would be an unsuitable diagnostic test where the prior probability of CHD is low, e.g., in population screening programmes, and would add little where the clinical probability of CHD is already high; where the clinical probability of having CHD is intermediate, stress Tl-201 scintigraphy may provide valuable diagnostic information. (orig.)
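The posterior calculation the abstract describes follows directly from Bayes' theorem using the reported sensitivity (90%) and specificity (88%); the function below reproduces that computation for a positive scan across a range of prevalences.

```python
def posterior_probability(prevalence, sensitivity=0.90, specificity=0.88):
    """P(CHD | positive scan) via Bayes' theorem, using the study's figures."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

for prev in (0.01, 0.30, 0.52, 0.70, 0.99):
    print(f"prevalence {prev:4.0%} -> P(CHD | +) = {posterior_probability(prev):.3f}")
```

At 1% prevalence, a positive scan still leaves the posterior probability below 10%, which is why the test is unsuitable for population screening, while at intermediate prevalences the scan shifts the probability substantially.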
Efficient Bayesian inference under the structured coalescent.
Vaughan, Timothy G; Kühnert, Denise; Popinga, Alex; Welch, David; Drummond, Alexei J
2014-08-15
Population structure significantly affects evolutionary dynamics. Such structure may be due to spatial segregation, but may also reflect any other gene-flow-limiting aspect of a model. In combination with the structured coalescent, this fact can be used to inform phylogenetic tree reconstruction, as well as to infer parameters such as migration rates and subpopulation sizes from annotated sequence data. However, conducting Bayesian inference under the structured coalescent is impeded by the difficulty of constructing Markov Chain Monte Carlo (MCMC) sampling algorithms (samplers) capable of efficiently exploring the state space. In this article, we present a new MCMC sampler capable of sampling from posterior distributions over structured trees: timed phylogenetic trees in which lineages are associated with the distinct subpopulation in which they lie. The sampler includes a set of MCMC proposal functions that offer significant mixing improvements over a previously published method. Furthermore, its implementation as a BEAST 2 package ensures maximum flexibility with respect to model and prior specification. We demonstrate the usefulness of this new sampler by using it to infer migration rates and effective population sizes of H3N2 influenza between New Zealand, New York and Hong Kong from publicly available hemagglutinin (HA) gene sequences under the structured coalescent. The sampler has been implemented as a publicly available BEAST 2 package that is distributed under version 3 of the GNU General Public License at http://compevol.github.io/MultiTypeTree. © The Author 2014. Published by Oxford University Press.
Robust bayesian inference of generalized Pareto distribution ...
African Journals Online (AJOL)
Abstract. In this work, robust Bayesian estimation of the generalized Pareto distribution is proposed. The methodology is presented in terms of oscillation of posterior risks of the Bayesian estimators. By using a Monte Carlo simulation study, we show that, under a suitable generalized loss function, we can obtain a robust ...
Bayesian Decision Theoretical Framework for Clustering
Chen, Mo
2011-01-01
In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…
Using Bayesian belief networks in adaptive management.
J.B. Nyberg; B.G. Marcot; R. Sulyma
2006-01-01
Bayesian belief and decision networks are relatively new modeling methods that are especially well suited to adaptive-management applications, but they appear not to have been widely used in adaptive management to date. Bayesian belief networks (BBNs) can serve many purposes for practitioners of adaptive management, from illustrating system relations conceptually to...
Calibration in a Bayesian modelling framework
Jansen, M.J.W.; Hagenaars, T.H.J.
2004-01-01
Bayesian statistics may constitute the core of a consistent and comprehensive framework for the statistical aspects of modelling complex processes that involve many parameters whose values are derived from many sources. Bayesian statistics holds great promise for model calibration, provides the
Particle identification in ALICE: a Bayesian approach
Adam, J.; Adamova, D.; Aggarwal, M. M.; Rinella, G. Aglieri; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anticic, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshaeuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badala, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnafoeldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Bathen, B.; Batigne, G.; Camejo, A. Batista; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Benacek, P.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielcik, J.; Bielcikova, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boggild, H.; Boldizsar, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossu, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Diaz, L. 
Calero; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castellanos, J. Castillo; Castro, A. J.; Casula, E. A. R.; Sanchez, C. Ceballos; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Barroso, V. Chibante; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Balbastre, G. Conesa; del Valle, Z. Conesa; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Morales, Y. Corrales; Cortes Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Denes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divia, R.; Djuvsland, O.; Dobrin, A.; Gimenez, D. Domenicis; Doenigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernandez Tellez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. 
G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhoje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glaessel, P.; Gomez Coral, D. M.; Ramirez, A. Gomez; Gonzalez, A. S.; Gonzalez, V.; Gonzalez-Zamora, P.; Gorbunov, S.; Goerlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Haake, R.; Haaland, O.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbaer, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacazio, N.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Bustamante, R. T. Jimenez; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, M. Mohisin; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, J. 
S.; Kim, M.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein-Boesing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kostarakis, P.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Kralik, I.; Kravcakova, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kucera, V.; Kuijer, P. G.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; Monzon, I. Leon; Leon Vargas, H.; Leoncino, M.; Levai, P.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; Torres, E. Lopez; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mares, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marin, A.; Markert, C.; Marquard, M.; Martin, N. A.; Blanco, J. Martin; Martinengo, P.; Martinez, M. I.; Garcia, G. Martinez; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Perez, J. Mercado; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. 
N.; Miskowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montano Zetina, L.; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Muehlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paic, G.; Pal, S. K.; Pan, J.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Da Costa, H. Pereira; Peresunko, D.; Lara, C. E. Perez; Lezama, E. Perez; Peskov, V.; Pestov, Y.; Petracek, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Ploskon, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Raesaenen, S. S.; Rascanu, B. 
T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodriguez Cahuantzi, M.; Manso, A. Rodriguez; Roed, K.; Rogochaya, E.; Rohr, D.; Roehrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Montero, A. J. Rubio; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Safarik, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Sefcik, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; de Souza, R. D.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. 
P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Sumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Munoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thaeder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Palomo, L. Valencia; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vyvre, P. Vande; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limon, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Baillie, O. Villalobos; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Voelkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrlakova, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Watanabe, D.; Watanabe, Y.; Weiser, D. F.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yano, S.; Yasin, Z.; Yokoyama, H.; Yoo, I. -K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Zavada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, C.; Zhao, C.; Zhigareva, N.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.; Collaboration, ALICE
2016-01-01
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian
Bayesian Network for multiple hypothesis tracking
Zajdel, W.P.; Kröse, B.J.A.; Blockeel, H.; Denecker, M.
2002-01-01
For flexible camera-to-camera tracking of multiple objects, we model the objects' behavior with a Bayesian network and combine it with the multiple hypothesis framework that associates observations with objects. Bayesian networks offer a possibility to factor complex, joint distributions into a
Bayesian learning theory applied to human cognition.
Jacobs, Robert A; Kruschke, John K
2011-01-01
Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.
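The inference operation described in this abstract reduces to Bayes' rule: multiply a prior over hypotheses by the likelihood of the observed data and renormalize. A minimal sketch (the two-hypothesis setup and all numbers are illustrative, not from the article):

```python
import numpy as np

def posterior(prior, likelihood):
    """Bayes' rule: P(h | d) is proportional to P(d | h) * P(h), normalized over hypotheses."""
    unnorm = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
    return unnorm / unnorm.sum()

# Illustrative numbers: two hypotheses ("mastered" vs "not mastered") and the
# likelihood of observing a correct answer under each hypothesis.
prior = [0.5, 0.5]
likelihood = [0.9, 0.2]
post = posterior(prior, likelihood)   # -> [9/11, 2/11]
```

The same normalization underlies inference in larger Bayesian networks; exact computation there factorizes over the network structure rather than enumerating joint hypotheses.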
Properties of the Bayesian Knowledge Tracing Model
van de Sande, Brett
2013-01-01
Bayesian Knowledge Tracing is used very widely to model student learning. It comes in two different forms: The first form is the Bayesian Knowledge Tracing "hidden Markov model" which predicts the probability of correct application of a skill as a function of the number of previous opportunities to apply that skill and the model…
Plug & Play object oriented Bayesian networks
DEFF Research Database (Denmark)
Bangsø, Olav; Flores, J.; Jensen, Finn Verner
2003-01-01
and secondly, to gain efficiency during modification of an object oriented Bayesian network. To accomplish these two goals we have exploited a mechanism allowing local triangulation of instances to develop a method for updating the junction trees associated with object oriented Bayesian networks in highly...
Using Bayesian Networks to Improve Knowledge Assessment
Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra
2013-01-01
In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…
Bayesian models: A statistical primer for ecologists
Hobbs, N. Thompson; Hooten, Mevin B.
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. Presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians. Covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more. Deemphasizes computer coding in favor of basic principles. Explains how to write out properly factored statistical expressions representing Bayesian models.
Modeling Diagnostic Assessments with Bayesian Networks
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego
2007-01-01
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Flexible Bayesian Human Fecundity Models.
Kim, Sungduk; Sundaram, Rajeshwari; Buck Louis, Germaine M; Pyper, Cecilia
2012-12-01
Human fecundity is an issue of considerable interest for both epidemiological and clinical audiences, and is dependent upon a couple's biologic capacity for reproduction coupled with behaviors that place a couple at risk for pregnancy. Bayesian hierarchical models have been proposed to better model the conception probabilities by accounting for the acts of intercourse around the day of ovulation, i.e., during the fertile window. These models can be viewed in the framework of a generalized nonlinear model with an exponential link. However, a fixed choice of link function may not always provide the best fit, leading to potentially biased estimates for probability of conception. Motivated by this, we propose a general class of models for fecundity by relaxing the choice of the link function under the generalized nonlinear model framework. We use a sample from the Oxford Conception Study (OCS) to illustrate the utility and fit of this general class of models for estimating human conception. Our findings reinforce the need for attention to be paid to the choice of link function in modeling conception, as it may bias the estimation of conception probabilities. Various properties of the proposed models are examined and a Markov chain Monte Carlo sampling algorithm was developed for implementing the Bayesian computations. The deviance information criterion measure and logarithm of pseudo marginal likelihood are used for guiding the choice of links. The supplemental material section contains technical details of the proof of the theorem stated in the paper, and contains further simulation results and analysis.
Bayesian Nonparametric Longitudinal Data Analysis.
Quintana, Fernando A; Johnson, Wesley O; Waetjen, Elaine; Gold, Ellen
2016-01-01
Practical Bayesian nonparametric methods have been developed across a wide variety of contexts. Here, we develop a novel statistical model that generalizes standard mixed models for longitudinal data to include flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures. AR structure is often specified through the use of a Gaussian process (GP) with covariance functions that allow longitudinal data to be more correlated if they are observed closer in time than if they are observed farther apart. We allow for AR structure by considering a broader class of models that incorporates a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. We are able to take advantage of modern Bayesian statistical methods in making full predictive inferences about characteristics of longitudinal profiles and their differences across covariate combinations. We also take advantage of the generality of our model, which provides for estimation of a variety of covariance structures. We observe that models that fail to incorporate CS or AR structure can result in very poor estimation of a covariance or correlation matrix. In our illustration using hormone data observed on women through the menopausal transition, biology dictates the use of a generalized family of sigmoid functions as a model for time trends across subpopulation categories.
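The combined CS + AR covariance structure described above can be sketched concretely: compound symmetry contributes a constant covariance between every pair of time points, while the AR/GP part decays with the time gap. A minimal illustration (the function name and all variance parameters are illustrative, not from the paper):

```python
import numpy as np

def cs_ar_cov(times, var_cs, var_ar, rho_ar, var_noise):
    """Combined compound-symmetry + AR(1)-style covariance matrix.
    CS adds a constant covariance between all pairs; the AR part decays
    with the time gap, as with a GP exponential covariance function."""
    t = np.asarray(times, dtype=float)
    gaps = np.abs(t[:, None] - t[None, :])        # |t_i - t_j| for all pairs
    cov = var_cs + var_ar * rho_ar ** gaps        # CS term + decaying AR term
    cov[np.diag_indices_from(cov)] += var_noise   # measurement-error nugget
    return cov

# Irregularly spaced visits, as in longitudinal hormone data.
K = cs_ar_cov(times=[0, 1, 2, 4], var_cs=0.5, var_ar=1.0, rho_ar=0.6, var_noise=0.1)
```

Observations one unit apart (`K[0, 1]`) are more correlated than those four units apart (`K[0, 3]`), which is the AR behavior the paper exploits; the DPM layer in the paper then mixes over such covariance parameters.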
BELM: Bayesian extreme learning machine.
Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J
2011-03-01
The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need for computationally intensive methods such as the bootstrap; and it presents high generalization capability. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, a reduced probability of model overfitting, and the use of a priori knowledge.
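The key combination in this abstract is a random (untrained) hidden layer plus a Bayesian linear output layer, which yields predictive confidence intervals without bootstrapping. A self-contained sketch under standard conjugate-Gaussian assumptions (all function names, the toy data, and the precision values `alpha`/`beta` are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, n_hidden=50):
    """Random (untrained) hidden layer of an ELM with tanh activation."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

def bayesian_linear_fit(H, y, alpha=1.0, beta=25.0):
    """Conjugate Bayesian linear regression on the hidden features.
    alpha: prior precision of the output weights; beta: noise precision
    (both assumed known here). Returns the posterior mean and covariance."""
    A = alpha * np.eye(H.shape[1]) + beta * H.T @ H
    S = np.linalg.inv(A)          # posterior covariance of output weights
    m = beta * S @ H.T @ y        # posterior mean of output weights
    return m, S

def predict(Hnew, m, S, beta=25.0):
    """Predictive mean and standard deviation (the source of the CIs)."""
    mean = Hnew @ m
    var = 1.0 / beta + np.einsum('ij,jk,ik->i', Hnew, S, Hnew)
    return mean, np.sqrt(var)

# Toy 1-D regression problem.
X = np.linspace(-1, 1, 80)[:, None]
y = np.sin(3 * X[:, 0]) + 0.2 * rng.standard_normal(80)
H = elm_features(X)
m, S = bayesian_linear_fit(H, y)
yhat, ystd = predict(H, m, S)
```

The posterior covariance `S` is what classical ELM discards; it is exactly the term that turns point predictions into calibrated intervals.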
Venesjärvi, Riikka
2012-01-01
The amount of operated oil transports continues to increase in the Gulf of Finland and in the case of an accident hazardous amounts of oil may be spilled into the sea. The oil accident may be harmful for the common guillemot and long-tailed duck populations. In this study expert knowledge regarding the behaviour and population dynamics of common guillemot and long-tailed duck in the Gulf of Finland was used to build a model to assess the impacts of an oil spill on the mortality and population...
Directory of Open Access Journals (Sweden)
L.S. Vismara
2007-12-01
Full Text Available The dynamics of weed populations can be described as a system of equations relating the produced seed and seedling densities in crop areas. The model parameter values can be either directly inferred from experimentation and statistical analysis or obtained from the literature. The objective of this work was to estimate the weed population density model parameters based on experimental field data from the experimental area of Embrapa Milho e Sorgo, Sete Lagoas, MG, using classical and Bayesian inference.
2nd Bayesian Young Statisticians Meeting
Bitto, Angela; Kastner, Gregor; Posekany, Alexandra
2015-01-01
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...
Bayesian natural language semantics and pragmatics
Zeevat, Henk
2015-01-01
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
Crystal structure prediction accelerated by Bayesian optimization
Yamashita, Tomoki; Sato, Nobuya; Kino, Hiori; Miyake, Takashi; Tsuda, Koji; Oguchi, Tamio
2018-01-01
We propose a crystal structure prediction method based on Bayesian optimization. Our method is classified as a selection-type algorithm which is different from evolution-type algorithms such as an evolutionary algorithm and particle swarm optimization. Crystal structure prediction with Bayesian optimization can efficiently select the most stable structure from a large number of candidate structures with a lower number of searching trials using a machine learning technique. Crystal structure prediction using Bayesian optimization combined with random search is applied to known systems such as NaCl and Y2Co17 to discuss the efficiency of Bayesian optimization. These results demonstrate that Bayesian optimization can significantly reduce the number of searching trials required to find the global minimum structure by 30-40% in comparison with pure random search, which leads to much less computational cost.
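The selection loop the authors describe, a surrogate model scoring a fixed pool of candidate structures and picking the next one to evaluate, can be sketched end to end. The snippet below is a hand-rolled, minimal stand-in: a 1-D quadratic "energy" replaces DFT energies, the Gaussian-process surrogate and expected-improvement acquisition are written from scratch, and every name and number is illustrative rather than from the paper:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, ls=1.0, noise=1e-6):
    """GP posterior mean and std at candidate points Xs given data (X, y)."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs, ls) - Ks.T @ Kinv @ Ks).clip(min=1e-12)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected amount by which f beats the current best."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))   # standard normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)         # standard normal PDF
    return (best - mu) * Phi + sigma * phi

# Toy "energy landscape" over a discrete candidate pool (1-D stand-in).
cands = np.linspace(-3, 3, 121)
energy = lambda x: (x - 1.3) ** 2       # global minimum at x = 1.3
X = np.array([-2.0, 0.0, 2.5])          # initial random evaluations
y = energy(X)
for _ in range(10):                     # selection-type BO loop
    mu, sig = gp_posterior(X, y, cands)
    x_next = cands[np.argmax(expected_improvement(mu, sig, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, energy(x_next))
best_x = X[np.argmin(y)]
```

The point of the paper is precisely this economy: only the candidates the acquisition function selects are ever "evaluated" (in their case, with expensive first-principles calculations), rather than the whole pool.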
Generalized instantly decodable network coding for relay-assisted networks
Elmahdy, Adel M.
2013-09-01
In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.
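The defining property of IDNC is that a coded packet must be immediately useful: a XOR combination is instantly decodable for a receiver exactly when it contains one packet the receiver still wants and only packets it already holds otherwise. A brute-force sketch of choosing the combination that serves the most receivers (function names and the tiny three-receiver example are illustrative; real IDNC schedulers use graph models rather than enumeration):

```python
from itertools import combinations

def instantly_decodable(combo, has, wants):
    """A XOR combination is instantly decodable for a receiver iff exactly one
    packet in it is still missing, and that missing packet is wanted."""
    missing = [p for p in combo if p not in has]
    return len(missing) == 1 and missing[0] in wants

def best_combination(n_packets, receivers):
    """Enumerate XOR combinations and return the one instantly decodable for
    the largest number of receivers (exponential in n_packets; toy scale only)."""
    best, best_score = None, -1
    packets = range(n_packets)
    for r in range(1, n_packets + 1):
        for combo in combinations(packets, r):
            score = sum(instantly_decodable(combo, h, w) for h, w in receivers)
            if score > best_score:
                best, best_score = combo, score
    return best, best_score

# Three receivers over packets {0, 1, 2}: (packets held, packets still wanted).
receivers = [({0}, {1, 2}), ({1}, {0, 2}), ({0, 1}, {2})]
combo, served = best_combination(3, receivers)
```

Here every receiver still wants packet 2, so the uncoded packet `(2,)` serves all three at once; richer "Has" sets are what make nontrivial XOR combinations pay off.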
Decoding of thermal data in Fleischmann and Pons paper
International Nuclear Information System (INIS)
Ohashi, Hiroshi; Morozumi, Takashi
1989-01-01
The recent 'discovery' of cold nuclear fusion by Fleischmann and Pons (abbreviated to F and P below) contradicts the general concept of nuclear fusion between deuterium nuclei. Since they were not able to confirm a relation between the heat evolution and the production of neutrons, tritium and helium, they claimed that a new type of nuclear fusion process took place, and that the only evidence was a large amount of heat evolution. Their claim that the heat evolution exceeded breakeven is suspicious, as there are many flaws and contradictions in their data and logic. The following shows the results of our decoding of the complicated thermal data in the F and P paper, and concludes that the heat evolution reported by the F and P paper can be adequately explained by considering a recombination of hydrogen and oxygen evolved during the experiment. (author)
Remote Control and Testing of the Interactive TV-Decoder
Directory of Open Access Journals (Sweden)
K. Vlcek
1995-12-01
Full Text Available The article deals with the assembly and application of a complex sequential-circuit VHDL (VHSIC (Very High-Speed Integrated Circuit) Hardware Description Language) model. The circuit model is the core of a cryptographic device for the signal encoding and decoding of discreet transmissions over a TV-cable network. The cryptographic algorithm is changeable according to the user's wishes. The principles of creation and example implementations are presented in the article. The behavioural model is used to minimize mistakes in the ASICs (Application Specific Integrated Circuits). The circuit implementation uses FPGA (Field Programmable Gate Array) technology. The diagnostics of the circuit are based on remote testing per IEEE Std 1149.1-1990. The VHDL model of the diagnostic subsystem is created as a model orthogonal to the cryptographic circuit VHDL model.
Low Bit Rate Motion Video Coder/Decoder For Teleconferencing
Koga, T.; Niwa, K.; Iijima, Y.; Iinuma, K.
1987-07-01
This paper describes motion video compression transmission for teleconferencing at a subprimary rate, i.e., at 384 kbits/s, including audio signal through the integrated services digital network (ISDN) HO channel. A subprimary rate video coder/decoder (codec), NETEC-XV, is available commercially that can operate at any bit rate (in multiples of 64 kbits/s) from 384 to 2048 kbits/s. In this paper, new algorithms are described that have been very useful in lowering the bit rate to 384 kbits/s. These algorithms are (1) separation of moving and still parts, followed by encoding of the two parts using different sets of parameters, and (2) scene change detection and its application to encoding parameter control. According to a brief subjective evaluation, the codec provides good picture quality even at a transmission bit rate of 384 kbits/s.
Exact performance analysis of decode-and-forward opportunistic relaying
Tourki, Kamel
2010-06-01
In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme where the source may or may not be able to communicate directly with the destination. In our study, we consider a regenerative relaying scheme in which the decision to cooperate takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive an exact closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact statistics of each hop. Unlike existing works where the analysis focused on high signal-to-noise ratio (SNR) regime, such results are important to enable the designers to take decisions regarding practical systems that operate at low SNR regime. We show that performance simulation results coincide with our analytical results.
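The quantity analyzed above, the BER of BPSK on a fading/noisy hop, is easy to validate numerically. A minimal single-hop Monte Carlo check against the standard AWGN closed form (a DF relay chains such hops; the simulation setup and parameter values here are illustrative, not the paper's dual-hop derivation):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)

def bpsk_ber_sim(snr_db, n_bits=200_000):
    """Monte Carlo BER of BPSK over a single AWGN hop."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    tx = 2 * bits - 1                            # map {0,1} -> {-1,+1}
    noise = rng.standard_normal(n_bits) / sqrt(2 * snr)   # N0/2 per dimension
    rx_bits = (tx + noise > 0).astype(int)
    return np.mean(rx_bits != bits)

def bpsk_ber_theory(snr_db):
    """Closed form: BER = Q(sqrt(2*SNR)) = 0.5 * erfc(sqrt(SNR))."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * erfc(sqrt(snr))

sim = bpsk_ber_sim(4.0)
theory = bpsk_ber_theory(4.0)
```

The paper's contribution is the exact end-to-end expression obtained by composing the per-hop statistics, including the possibility of the relay forwarding an erroneous decision; the sketch only verifies the per-hop building block.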
Outage analysis of opportunistic decode-and-forward relaying
Tourki, Kamel
2010-09-01
In this paper, we investigate a dual-hop opportunistic decode-and-forward relaying scheme where the source may or may not be able to communicate directly with the destination. We first derive statistics based on the exact probability density function (PDF) of each hop. Then, the PDFs are used to determine a closed-form outage probability expression for a transmission rate R. Furthermore, we evaluate the asymptotic outage performance and deduce the diversity order. Unlike existing works where the analysis focused on the high signal-to-noise ratio (SNR) regime, such results are important to enable designers to take decisions regarding practical systems that operate in the low SNR regime. We show that performance simulation results coincide with our analytical results under the practical assumption of unbalanced hops. © 2010 IEEE.
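For a single Rayleigh-fading hop, the per-hop outage building block used in analyses like this has a simple closed form: with instantaneous SNR exponentially distributed, P[log2(1 + SNR) < R] = 1 - exp(-(2^R - 1)/avg_snr). A quick sanity check of that formula by simulation (single hop only; the paper's dual-hop, best-relay expressions are more involved, and all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_theory(avg_snr, rate):
    """Single Rayleigh hop: P[log2(1 + SNR) < R] with SNR ~ Exp(mean avg_snr)."""
    return 1.0 - np.exp(-(2 ** rate - 1) / avg_snr)

def outage_sim(avg_snr, rate, n=200_000):
    snr = rng.exponential(avg_snr, n)    # Rayleigh fading -> exponential SNR
    return np.mean(np.log2(1 + snr) < rate)

p_t = outage_theory(avg_snr=10.0, rate=2.0)
p_s = outage_sim(avg_snr=10.0, rate=2.0)
```

A dual-hop DF link is then in outage if either hop is, so under independence the end-to-end outage composes as 1 - (1 - p1)(1 - p2); opportunistic relay selection improves on this by picking the best of several such paths.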
Decoding Computer Games: Studying “Special Operation 85”
Directory of Open Access Journals (Sweden)
Bahareh Jalalzadeh
2009-11-01
Full Text Available Like other media, computer games convey messages with two aspects: explicit and implicit. Studying computer games semiologically and comparing them with narrative structures, the present study attempts to uncover the messages they convey. We therefore studied and decoded “Special Operation 85” as a semiological text. Results show that the game's features, such as its naming, the interests and motivations of the people involved, and the events narrated, all serve the producers' goal of introducing and publicizing Iranian-Islamic cultural values. Although this feature makes “Special Operation 85” a unique game, it fails in its attempt to produce a mythical personage in the Iranian-Islamic cultural context.
Decoding of thermal data in Fleischmann and Pons paper
Energy Technology Data Exchange (ETDEWEB)
Ohashi, Hiroshi; Morozumi, Takashi (Hokkaido Univ., Sapporo (Japan). Faculty of Engineering)
1989-07-01
The recent 'discovery' of cold nuclear fusion by Fleischmann and Pons (abbreviated to F and P below) contradicts the general concept of nuclear fusion between deuterium nuclei. Since they were not able to confirm a relation between the heat evolution and the production of neutrons, tritium and helium, they claimed that a new type of nuclear fusion process took place, and that the only evidence was a large amount of heat evolution. Their claim that the heat evolution exceeded breakeven is suspicious, as there are many flaws and contradictions in their data and logic. The following shows the results of our decoding of the complicated thermal data in the F and P paper, and concludes that the heat evolution reported by the F and P paper can be adequately explained by considering a recombination of hydrogen and oxygen evolved during the experiment. (author).
Cellular automaton decoders of topological quantum memories in the fault tolerant setting
International Nuclear Information System (INIS)
Herold, Michael; Eisert, Jens; Kastoryano, Michael J; Campbell, Earl T
2017-01-01
Active error decoding and correction of topological quantum codes—in particular the toric code—remains one of the most viable routes to large scale quantum information processing. In contrast, passive error correction relies on the natural physical dynamics of a system to protect encoded quantum information. However, the search is ongoing for a completely satisfactory passive scheme applicable to locally interacting two-dimensional systems. Here, we investigate dynamical decoders that provide passive error correction by embedding the decoding process into local dynamics. We propose a specific discrete time cellular-automaton decoder in the fault tolerant setting and provide numerical evidence showing that the logical qubit has a survival time extended by several orders of magnitude over that of a bare unencoded qubit. We stress that (asynchronous) dynamical decoding gives rise to a Markovian dissipative process. We hence equate cellular-automaton decoding to a fully dissipative topological quantum memory, which removes errors continuously. In this sense, uncontrolled and unwanted local noise can be corrected for by a controlled local dissipative process. We analyze the required resources, commenting on additional polylogarithmic factors beyond those incurred by an ideal constant resource dynamical decoder. (paper)
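The paper's decoder acts on the toric code, which is beyond a short sketch, but the core principle (purely local dynamics that dissipate sparse errors without any global decoder) can be illustrated with a much simpler stand-in: a 1-D majority-vote cellular automaton protecting a repetition-code-like array. Everything below is an illustrative analogue, not the authors' construction:

```python
import numpy as np

def ca_majority_step(bits):
    """One synchronous cellular-automaton step: each cell adopts the majority
    of itself and its two neighbours (periodic boundary). This local rule
    dissipates sparse, isolated bit-flip errors."""
    left = np.roll(bits, 1)
    right = np.roll(bits, -1)
    return ((left + bits + right) >= 2).astype(int)

n = 200
codeword = np.zeros(n, dtype=int)   # logical 0 stored redundantly
noisy = codeword.copy()
noisy[[5, 40, 90, 150]] ^= 1        # four isolated bit-flip errors

state = noisy.copy()
for _ in range(5):                  # run the local dynamics for a few steps
    state = ca_majority_step(state)
```

As in the paper's dissipative picture, no cell ever sees more than its neighbourhood, yet the encoded logical value survives; the hard part the authors address is achieving the analogous behaviour for topologically ordered 2-D systems with measurement noise.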
Multiscale decoding for reliable brain-machine interface performance over time.
Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M
2017-07-01
Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems
Directory of Open Access Journals (Sweden)
Wen Ji
2010-01-01
Full Text Available Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements, so power-efficient control has become a critical design concern for devices integrating complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design, does not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom considers the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD couples decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output under energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
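Utility-driven allocation under an energy budget, the third ingredient above, is at heart a knapsack-style trade-off. A greedy utility-per-energy sketch (the subfunction names, energies, and utilities are invented placeholders; the paper's actual utility model and decoder subfunctions differ):

```python
def allocate(energy_budget, tasks):
    """Greedy allocation: take decoding subfunctions in order of utility per
    unit energy until the budget is exhausted (illustrative knapsack stand-in)."""
    order = sorted(tasks, key=lambda t: t["utility"] / t["energy"], reverse=True)
    chosen, used = [], 0.0
    for t in order:
        if used + t["energy"] <= energy_budget:
            chosen.append(t["name"])
            used += t["energy"]
    return chosen, used

# Hypothetical decoder subfunctions with made-up costs and quality utilities.
tasks = [
    {"name": "deblocking", "energy": 3.0, "utility": 2.0},
    {"name": "full_idct",  "energy": 4.0, "utility": 5.0},
    {"name": "mc_interp",  "energy": 2.0, "utility": 3.0},
]
chosen, used = allocate(6.0, tasks)
```

The greedy ratio rule captures why an energy-scalable decoder degrades gracefully: as the budget shrinks, the subfunctions with the worst utility-per-joule are shed first.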
Cellular automaton decoders of topological quantum memories in the fault tolerant setting
Herold, Michael; Kastoryano, Michael J.; Campbell, Earl T.; Eisert, Jens
2017-06-01
Active error decoding and correction of topological quantum codes—in particular the toric code—remains one of the most viable routes to large scale quantum information processing. In contrast, passive error correction relies on the natural physical dynamics of a system to protect encoded quantum information. However, the search is ongoing for a completely satisfactory passive scheme applicable to locally interacting two-dimensional systems. Here, we investigate dynamical decoders that provide passive error correction by embedding the decoding process into local dynamics. We propose a specific discrete time cellular-automaton decoder in the fault tolerant setting and provide numerical evidence showing that the logical qubit has a survival time extended by several orders of magnitude over that of a bare unencoded qubit. We stress that (asynchronous) dynamical decoding gives rise to a Markovian dissipative process. We hence equate cellular-automaton decoding to a fully dissipative topological quantum memory, which removes errors continuously. In this sense, uncontrolled and unwanted local noise can be corrected for by a controlled local dissipative process. We analyze the required resources, commenting on additional polylogarithmic factors beyond those incurred by an ideal constant resource dynamical decoder.
Low Complexity List Decoding for Polar Codes with Multiple CRC Codes
Directory of Open Access Journals (Sweden)
Jong-Hwan Kim
2017-04-01
Full Text Available Polar codes are the first family of error-correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of 5G New Radio of the 3rd Generation Partnership Project. However, decoder implementation remains one of the big practical problems, and low-complexity decoding has been studied. This paper addresses low-complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only a complexity reduction from early stopping of decoding, but also an additional reduction from the reduced number of decoding paths.
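The CRC's role in list decoding is as a screen: the successive cancellation list decoder produces several candidate codeword paths, and only those whose embedded CRC checks out survive. A minimal sketch of just that screening step, using CRC-32 from the standard library in place of the short CRCs actually used with polar codes (the candidate list here is constructed by hand, not produced by a real polar decoder):

```python
import zlib

def attach_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 so a list decoder can screen candidates."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def crc_select(candidates):
    """CRC-aided selection: return the first candidate whose trailing CRC-32
    matches its payload (stand-in for picking from a decoder's path list)."""
    for cand in candidates:
        payload, tag = cand[:-4], cand[-4:]
        if zlib.crc32(payload).to_bytes(4, "big") == tag:
            return payload
    return None

tx = attach_crc(b"hello polar")
# A toy candidate list: two corrupted paths and the correct one.
cand_list = [bytes([tx[0] ^ 1]) + tx[1:],       # flipped payload bit
             tx[:-1] + bytes([tx[-1] ^ 4]),     # flipped CRC bit
             tx]                                # correct path
chosen = crc_select(cand_list)
```

Distributing several shorter CRCs along the codeword, as in the paper, lets the decoder run this check early and prune or stop before the full block is decoded, which is where the complexity savings come from.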
Improved HDRG decoders for qudit and non-Abelian quantum error correction
Hutter, Adrian; Loss, Daniel; Wootton, James R.
2015-03-01
Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
Directory of Open Access Journals (Sweden)
Andrew Young Paek
2014-03-01
Full Text Available We investigated how well repetitive finger tapping movements can be decoded from scalp electroencephalography (EEG) signals. A linear decoder with memory was used to infer continuous index finger angular velocities from the low-pass filtered fluctuations of the amplitude of a plurality of EEG signals distributed across the scalp. To evaluate the accuracy of the decoder, the Pearson's correlation coefficient (r) between the observed and predicted trajectories was calculated in a 10-fold cross-validation scheme. We also assessed attempts to decode finger kinematics from EEG data that was cleaned with independent component analysis (ICA), EEG data from peripheral sensors, and EEG data from rest periods. A genetic algorithm was used to select combinations of EEG channels that maximized decoding accuracies. Our results (lower quartile r = 0.20, median r = 0.36, upper quartile r = 0.52) show that delta-band EEG signals contain useful information that can be used to infer finger kinematics. Further, the highest decoding accuracies were characterized by highly correlated delta-band EEG activity mostly localized to the contralateral central areas of the scalp. Spectral analysis in the alpha (8-13 Hz) and beta (20-30 Hz) EEG bands, respectively, also showed focused bilateral alpha event-related desynchronizations (ERDs) over central scalp areas and contralateral beta event-related synchronizations (ERSs) localized to central scalp areas. Overall, this study demonstrates the feasibility of decoding finger kinematics from scalp EEG signals.
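A "linear decoder with memory" of the kind described above amounts to regressing the kinematic trace on current and lagged channel amplitudes, then scoring with Pearson's r on held-out data. A self-contained sketch on synthetic stand-in data (the lag count, ridge penalty, and channel counts are illustrative; real pipelines band-pass the EEG and use 10-fold cross-validation rather than a single split):

```python
import numpy as np

rng = np.random.default_rng(3)

def lagged(X, n_lags):
    """Stack current and past samples so the linear decoder has memory."""
    n, c = X.shape
    out = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        out[k:, k * c:(k + 1) * c] = X[:n - k]
    return out

def ridge_fit(F, y, lam=1.0):
    """Regularized least squares: w = (F'F + lam I)^-1 F'y."""
    A = F.T @ F + lam * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ y)

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

# Synthetic stand-in: 8 "EEG" channels whose lagged mixture drives a velocity trace.
n, c = 2000, 8
X = rng.standard_normal((n, c))
true_w = rng.standard_normal(c * 3)
vel = lagged(X, 3) @ true_w + 0.5 * rng.standard_normal(n)

F = lagged(X, 3)
w = ridge_fit(F[:1500], vel[:1500])        # train on the first 1500 samples
r = pearson_r(F[1500:] @ w, vel[1500:])    # Pearson r on held-out samples
```

The same scaffold extends directly to the study's setup: swap the synthetic channels for low-pass filtered delta-band amplitudes and wrap the fit/score pair in a 10-fold loop.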
Decoding individual natural scene representations during perception and imagery
Directory of Open Access Journals (Sweden)
Matthew Robert Johnson
2014-02-01
Full Text Available We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI) data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA) using a support vector machine (SVM) classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA) and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes.
Decoding the Locus of Covert Visuospatial Attention from EEG Signals.
Directory of Open Access Journals (Sweden)
Thomas Thiery
Full Text Available Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is not clear whether we may obtain a more precise spatial localization of the focus of attention based on the EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visual spatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate between four different locations of the focus of attention. We then used a multi-class support vector machine (SVM) and a leave-one-out cross-validation framework to evaluate the decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions showed a statistically significant valid prediction of the location of the focus of visuospatial attention (DA = 57%, p < .001, chance level 25%). The mean distance between the predicted and the true focus of attention was 0.62 letter positions, which represented a mean error of 0.55 degrees of visual angle. In addition, ERP responses also successfully predicted whether spatial attention was allocated or not to a given location with an accuracy of 79% (p < .001). These findings are discussed in terms of their implications for visuospatial attention decoding, and future paths for research are proposed.
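The leave-one-out decoding-accuracy computation can be sketched as follows; a nearest-centroid classifier stands in for the study's multi-class SVM, and the feature vectors below are hypothetical stand-ins for ERP features.

```python
import numpy as np

def loo_decoding_accuracy(X, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier
    (a simple stand-in for a multi-class SVM). Chance level for n
    equiprobable classes is 1 / n, i.e. 25% for four attended locations."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    correct = 0
    for i in range(len(X)):
        mask = np.ones(len(X), bool)
        mask[i] = False  # hold out trial i
        classes = np.unique(labels[mask])
        cents = np.array([X[mask & (labels == c)].mean(axis=0)
                          for c in classes])
        pred = classes[((cents - X[i]) ** 2).sum(axis=1).argmin()]
        correct += int(pred == labels[i])
    return correct / len(X)
```

Comparing the returned accuracy against the 1/n chance level (with a permutation or binomial test) is what licenses claims like "DA = 57%, chance 25%".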
Bayesian adaptive survey protocols for resource management
Halstead, Brian J.; Wylie, Glenn D.; Coates, Peter S.; Casazza, Michael L.
2011-01-01
Transparency in resource management decisions requires a proper accounting of uncertainty at multiple stages of the decision-making process. As information becomes available, periodic review and updating of resource management protocols reduces uncertainty and improves management decisions. One of the most basic steps to mitigating anthropogenic effects on populations is determining if a population of a species occurs in an area that will be affected by human activity. Species are rarely detected with certainty, however, and falsely declaring a species absent can cause improper conservation decisions or even extirpation of populations. We propose a method to design survey protocols for imperfectly detected species that accounts for multiple sources of uncertainty in the detection process, is capable of quantitatively incorporating expert opinion into the decision-making process, allows periodic updates to the protocol, and permits resource managers to weigh the severity of consequences if the species is falsely declared absent. We developed our method using the giant gartersnake (Thamnophis gigas), a threatened species precinctive to the Central Valley of California, as a case study. Survey date was negatively related to the probability of detecting the giant gartersnake, and water temperature was positively related to the probability of detecting the giant gartersnake at a sampled location. Reporting sampling effort, timing and duration of surveys, and water temperatures would allow resource managers to evaluate the probability that the giant gartersnake occurs at sampled sites where it is not detected. This information would also allow periodic updates and quantitative evaluation of changes to the giant gartersnake survey protocol. Because it naturally allows multiple sources of information and is predicated upon the idea of updating information, Bayesian analysis is well-suited to solving the problem of developing efficient sampling protocols for species of
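The core occupancy update behind such protocols can be sketched directly (an illustrative fragment with hypothetical prior and detection values; the actual protocol additionally models covariates such as survey date and water temperature):

```python
def prob_present_given_no_detection(psi, p, n):
    """Posterior probability the species occupies a site after n surveys
    with no detections, by Bayes' rule, given prior occupancy psi and a
    constant per-survey detection probability p."""
    miss_if_present = (1.0 - p) ** n
    num = psi * miss_if_present
    return num / (num + 1.0 - psi)
```

With psi = 0.5 and p = 0.5, a single non-detection drops the posterior to 1/3, and five non-detections drop it below 5%; a manager can invert this calculation to choose how much survey effort is required before declaring probable absence at an acceptable risk.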
Bayesian Approach to Inverse Problems
2008-01-01
Many scientific, medical or engineering problems raise the issue of recovering some physical quantities from indirect measurements; for instance, detecting or quantifying flaws or cracks within a material from acoustic or electromagnetic measurements at its surface is an essential problem of non-destructive evaluation. The concept of inverse problems precisely originates from the idea of inverting the laws of physics to recover a quantity of interest from measurable data. Unfortunately, most inverse problems are ill-posed, which means that precise and stable solutions are not easy to devise. Regularization is the key concept to solve inverse problems. The goal of this book is to deal with inverse problems and regularized solutions using the Bayesian statistical tools, with a particular view to signal and image estimation.
Bayesian modelling of fusion diagnostics
Fischer, R.; Dinklage, A.; Pasch, E.
2003-07-01
Integrated data analysis of fusion diagnostics is the combination of different, heterogeneous diagnostics in order to improve physics knowledge and reduce the uncertainties of results. One example is the validation of profiles of plasma quantities. Integration of different diagnostics requires systematic and formalized error analysis for all uncertainties involved. The Bayesian probability theory (BPT) allows a systematic combination of all information entering the measurement descriptive model that considers all uncertainties of the measured data, calibration measurements, physical model parameters and measurement nuisance parameters. A sensitivity analysis of model parameters allows crucial uncertainties to be found, which has an impact on both diagnostic improvement and design. The systematic statistical modelling within the BPT is used for reconstructing electron density and electron temperature profiles from Thomson scattering data from the Wendelstein 7-AS stellarator. The inclusion of different diagnostics and first-principle information is discussed in terms of improvements.
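One elementary building block of such integrated analysis is precision-weighted combination of independent measurements of the same quantity; a minimal sketch (independent Gaussian errors and a flat prior are simplifying assumptions, far short of the full BPT treatment of calibration and nuisance parameters):

```python
def combine_gaussian(measurements):
    """Bayesian fusion of independent Gaussian measurements (value, sigma)
    of one quantity under a flat prior: the posterior is Gaussian with a
    precision-weighted mean and summed precisions."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    total = sum(weights)
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    return mean, total ** -0.5  # posterior mean and standard deviation
```

The same structure shows why a very uncertain diagnostic barely moves the combined estimate, which is the quantitative content of "reducing the uncertainties of results" by integration.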
Bayesian networks in educational assessment
Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M
2015-01-01
Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented on model construction and verification, modeling techniques and tricks, learning...
On Bayesian System Reliability Analysis
Energy Technology Data Exchange (ETDEWEB)
Soerensen Ringi, M.
1995-05-01
The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs.
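The "reliability changes as the state of knowledge changes" view is conveniently illustrated with a conjugate Beta-Binomial update (an illustrative fragment only, not the thesis's dependent-failure model):

```python
def update_reliability(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior on a component's per-demand success
    probability, updated with observed demand outcomes. The posterior
    mean is the current best reliability estimate, and it shifts every
    time new failure data arrive."""
    a, b = alpha + successes, beta + failures
    return a, b, a / (a + b)  # posterior parameters and posterior mean
```

Starting from a uniform Beta(1, 1) prior, nine successes and one failure move the reliability estimate to 10/12, and further data keep updating it in the same way.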
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
Nonparametric Bayesian (BNP) approaches play an ever-expanding role in biostatistical inference, from proteomics to clinical trials. As chapters in this book demonstrate, BNP has important uses in the clinical sciences and in inference for issues like unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
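The rounded-Gaussian kernel at the heart of this approach is easy to write down; the threshold choice below (Y = 0 iff the latent draw falls below 1, Y = j iff it falls in [j, j+1)) is one common option and not necessarily the paper's exact parameterization:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) when a latent N(mu, sigma^2) draw X is rounded to a count:
    Y = 0 if X < 1, else Y = j for j <= X < j + 1."""
    hi = norm_cdf((j + 1 - mu) / sigma)
    lo = 0.0 if j == 0 else norm_cdf((j - mu) / sigma)
    return hi - lo
```

Unlike a Poisson kernel, a small sigma concentrates mass on a single count, so the induced variance can be far smaller than the mean — exactly the underdispersion a mixture of Poissons cannot capture.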
A Bayesian Reflection on Surfaces
Directory of Open Access Journals (Sweden)
David R. Wolf
1999-10-01
Full Text Available Abstract: The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Within this paper several problems are solved: the maximally informative inference of continuous-basis fields, that is, where the basis for the field is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation in terms of information learned, and memory or storage capacity in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; and an information theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and denoted the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, and new data, to new knowledge at any other scale. An application example instance, the inference of continuous surfaces from measurements (for example, camera image data), is presented.
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.
Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J
2010-12-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km² hexagons), can increase the relevance of habitat models to multispecies
Shen, Yanna; Cooper, Gregory F
2012-09-01
This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete.
Robust bayesian analysis of an autoregressive model with ...
African Journals Online (AJOL)
In this work, robust Bayesian analysis of the Bayesian estimation of an autoregressive model with exponential innovations is performed. Using a Bayesian robustness methodology, we show that, using a suitable generalized quadratic loss, we obtain optimal Bayesian estimators of the parameters corresponding to the ...
BEAST: Bayesian evolutionary analysis by sampling trees.
Drummond, Alexei J; Rambaut, Andrew
2007-11-08
The evolutionary analysis of molecular sequence variation is a statistical enterprise. This is reflected in the increased use of probabilistic models for phylogenetic inference, multiple sequence alignment, and molecular population genetics. Here we present BEAST: a fast, flexible software architecture for Bayesian analysis of molecular sequences related by an evolutionary tree. A large number of popular stochastic models of sequence evolution are provided and tree-based models suitable for both within- and between-species sequence data are implemented. BEAST version 1.4.6 consists of 81000 lines of Java source code, 779 classes and 81 packages. It provides models for DNA and protein sequence evolution, highly parametric coalescent analysis, relaxed clock phylogenetics, non-contemporaneous sequence data, statistical alignment and a wide range of options for prior distributions. BEAST source code is object-oriented, modular in design and freely available at http://beast-mcmc.googlecode.com/ under the GNU LGPL license. BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. It also provides a resource for the further development of new models and statistical methods of evolutionary analysis.
Bayesian models a statistical primer for ecologists
Hobbs, N Thompson
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability...
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark
2006-01-01
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks whose PRIMULA-generated propositional instances have thousands of variables, and whose jointrees have clusters...
An efficient algorithm for encoding and decoding of raptor codes over the binary erasure channel
Zhang, Ya-Hang; Cheng, Bo-Wen; Zou, Guang-Nan; Wen, Wei-Ping; Qing, Si-Han
2009-12-01
As the most advanced rateless fountain codes, systematic Raptor codes have been adopted by the 3GPP standard as a forward error correction scheme in Multimedia Broadcast/Multicast Services (MBMS). They have been shown to be an efficient channel coding technique which guarantees high symbol diversity in overlay networks. The 3GPP standard outlined a time-efficient maximum-likelihood (ML) decoding scheme that can be implemented using Gaussian elimination. But when the number of encoding symbols grows large, Gaussian elimination must deal with a large matrix using O(K^3) binary arithmetic operations, so the larger K becomes, the worse the ML decoding scheme performs. This paper presents a more time-efficient encoding and decoding scheme that maintains the same symbol-recovery performance; this scheme is named the Rapid Raptor code. It will be shown that the proposed Rapid Raptor code scheme significantly improves traditional Raptor codes' efficiency while maintaining the same performance.
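The ML step amounts to solving a linear system over GF(2); a compact sketch of that O(K^3) Gaussian elimination follows (the tiny constraint matrix in the usage example is a toy stand-in, not a real Raptor precode/LT structure):

```python
def gf2_solve(A, s):
    """Gaussian elimination over GF(2): solve A x = s for one solution,
    or return None if the system is inconsistent. Cost is cubic in the
    matrix size, which is why ML decoding degrades as K grows."""
    M = [row[:] + [b] for row, b in zip(A, s)]  # augmented matrix
    n = len(A[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]          # bring pivot row up
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]  # XOR-eliminate
        pivots.append(c)
        r += 1
    if any(row[-1] and not any(row[:-1]) for row in M):
        return None  # 0 = 1 row: not enough independent symbols received
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = M[i][-1]
    return x
```

A `None` return corresponds to the receiver not yet holding enough linearly independent encoded symbols to recover the source block.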
DS-OCDMA Encoder/Decoder Performance Analysis Using Optical Low-Coherence Reflectometry
Fsaifes, Ihsan; Lepers, Catherine; Obaton, Anne-Francoise; Gallion, Philippe
2006-08-01
Direct-sequence optical code-division multiple-access (DS-OCDMA) encoder/decoder based on sampled fiber Bragg gratings (S-FBGs) is characterized using phase-sensitive optical low-coherence reflectometry (OLCR). The OLCR technique allows localized measurements of FBG wavelength and physical length inside one S-FBG. This paper shows how the discrepancies between specifications and measurements of the different FBGs have some impact on spectral and temporal pulse responses of the OCDMA encoder/decoder. The FBG physical lengths lower than the specified ones are shown to affect the mean optical power reflected by the OCDMA encoder/decoder. The FBG wavelengths that are detuned from each other induce some modulations of S-FBG reflectivity resulting in encoder/decoder sensitivity to laser wavelength drift of the OCDMA system. Finally, highlighted by this OLCR study, some solutions to overcome limitations in performance with the S-FBG technology are suggested.
Optimal coding-decoding for systems controlled via a communication channel
Yi-wei, Feng; Guo, Ge
2013-12-01
In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Different from previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla-parameter network architecture. We find that the optimal coder and decoder can be realised for the different network configurations. The results are useful in determining the minimum channel capacity needed in order to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.
Multiple-Symbol Decision-Feedback Space-Time Differential Decoding in Fading Channels
Directory of Open Access Journals (Sweden)
Wang Xiaodong
2002-01-01
Full Text Available Space-time differential coding (STDC) is an effective technique for exploiting transmitter diversity that does not require channel state information at the receiver. However, like conventional differential modulation schemes, it exhibits an error floor in fading channels. In this paper, we develop an STDC decoding technique based on multiple-symbol detection and decision feedback, which makes use of the second-order statistics of the fading processes and has a very low computational complexity. This decoding method can significantly lower the error floor of the conventional STDC decoding algorithm, especially in fast fading channels. The application of the proposed multiple-symbol decision-feedback STDC decoding technique in orthogonal frequency-division multiplexing (OFDM) systems is also discussed.
DEFF Research Database (Denmark)
Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R
2007-01-01
MOTIVATION: Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. RESULTS: We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst-case asymptotic space and time are the same as for standard Viterbi, but in practice, Treeterbi optimally decodes arbitrarily long sequences with generalized HMMs in bounded memory without increasing running time. Parallel Treeterbi uses the same ideas to split optimal decoding across processors, dividing latency...
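For reference, the memory cost being attacked here comes from the standard Viterbi back-pointer table, which is T x N; a minimal baseline sketch (a toy HMM, not TWINSCAN's generalized HMM):

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, obs):
    """Standard Viterbi decoding. The back-pointer table `back` is T x N,
    so memory grows linearly with the sequence length T -- the cost that
    Treeterbi bounds in practice by pruning the back-pointer tree."""
    T, N = len(obs), len(log_init)
    delta = log_init + log_emit[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[:, obs[t]]
    state = int(delta.argmax())
    path = [state]
    for t in range(T - 1, 0, -1):                # trace back-pointers
        state = int(back[t, state])
        path.append(state)
    return path[::-1]
```

The trace-back at the end is precisely what forces `back` to be kept for the whole sequence, motivating bounded-memory alternatives.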
Zhang, Wuhong; Chen, Lixiang
2016-06-15
Digital spiral imaging has been demonstrated as an effective optical tool to encode optical information and retrieve topographic information of an object. Here we develop a conceptually new and concise scheme for optical image encoding and decoding toward free-space digital spiral imaging. We experimentally demonstrate that the optical lattices with ℓ=±50 orbital angular momentum superpositions and a clover image with nearly 200 Laguerre-Gaussian (LG) modes can be well encoded and successfully decoded. It is found that an image encoded/decoded with a two-index LG spectrum (considering both azimuthal and radial indices, ℓ and p) possesses much higher fidelity than that with a one-index LG spectrum (only considering the ℓ index). Our work provides an alternative tool for the image encoding/decoding scheme toward free-space optical communications.
Lenzoni, G.; Liu, J.; Knight, M.R.
2018-01-01
Calcium plays a key role in determining the specificity of a vast array of signalling pathways in plants. Cellular calcium elevations with different characteristics (calcium signatures) carry information on the identity of the primary stimulus, ensuring appropriate downstream responses. However, the mechanism for decoding calcium signatures is unknown. To determine this, decoding of the salicylic acid (SA)-mediated plant immunity signalling network controlling gene expression was examined. ...
Decoder calibration with ultra small current sample set for intracortical brain-machine interface.
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Different from most transfer learning studies, the target data were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in
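PCA-based domain adaptation of this flavor can be sketched as follows; this is an illustrative fragment in the spirit of the abstract, not the paper's exact PDA procedure, and the alignment-by-mean-shift step is an assumption:

```python
import numpy as np

def pca_align(historical, current, k):
    """Project a large historical set and a small current set onto the
    top-k principal axes of the historical data, then shift the historical
    scores so the two domain means coincide (illustrative sketch only)."""
    H, C = np.asarray(historical, float), np.asarray(current, float)
    mu = H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
    P = Vt[:k].T                                  # top-k principal axes
    Hs, Cs = (H - mu) @ P, (C - mu) @ P
    Hs = Hs + (Cs.mean(axis=0) - Hs.mean(axis=0))  # close the domain gap
    return Hs, Cs
```

After alignment, the many historical trials can augment the five-trials-per-class current set when refitting the decoder, which is what shortens recalibration.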
Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants
Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo
2017-01-01
Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directio...
Generalized Simplified Variable-Scaled Min Sum LDPC decoder for irregular LDPC Codes
Emran, Ahmed A.; Elsabrouty, Maha
2015-01-01
In this paper, we propose a novel low complexity scaling strategy of min-sum decoding algorithm for irregular LDPC codes. In the proposed method, we generalize our previously proposed simplified Variable Scaled Min-Sum (SVS-min-sum) by replacing the sub-optimal starting value and heuristic update for the scaling factor sequence by optimized values. Density evolution and Nelder-Mead optimization are used offline, prior to the decoding, to obtain the optimal starting point and per iteration upd...
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration demands a relatively large sample of the most current data. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on a movement and a sensory paradigm. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data sets, and the decoding performance was compared with that of three other calibration methods for evaluation. Main results. The PDA method effectively closed the gap between historical and current data, making it possible to exploit large historical data sets when recalibrating the decoder for current data. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved better and more robust performance in all sessions than with the three other calibration methods, in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data here form an ultra-small sample set and are transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training sets, this new method may facilitate the application of intracortical brain-machine interfaces...
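The abstract does not spell out the PDA algorithm, but its core idea — learn a low-dimensional subspace from the large historical data set and map the tiny current sample set into it so that a historically trained decoder remains usable — can be sketched roughly as follows. The function names and the simple moment-matching alignment step are illustrative assumptions, not the authors' method.

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions of X (rows = trials, columns = features)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                       # shape (features, k)

def align_to_historical(X_hist, X_cur, k=3):
    """Project both sets onto historical PCs, then moment-match the small
    current set to the historical mean/scale in that subspace (an assumed,
    simplified stand-in for the paper's domain adaptation step)."""
    W = pca_basis(X_hist, k)
    Zh = (X_hist - X_hist.mean(0)) @ W
    Zc = (X_cur - X_cur.mean(0)) @ W
    # rescale current scores so their statistics match the historical ones
    Zc = (Zc - Zc.mean(0)) / (Zc.std(0) + 1e-9) * Zh.std(0) + Zh.mean(0)
    return Zh, Zc
```

A decoder fitted on `Zh` (with historical labels) could then be applied directly to the aligned `Zc`, which is how a large historical set would reduce the demand for fresh calibration trials.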
Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor
2005-01-01
We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
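Bit-flipping decoding itself is a standard hard-decision procedure; a toy sketch over a (7,4) Hamming parity-check matrix (not the Euclidean Geometry codes analyzed in the paper) looks like this:

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit flipping: while some parity checks fail, flip the
    bit(s) involved in the largest number of failing checks.
    H: (m, n) binary parity-check matrix; y: received binary word."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2          # 1 marks an unsatisfied check
        if not syndrome.any():
            break                     # valid codeword reached
        votes = H.T @ syndrome        # failing checks touching each bit
        x ^= (votes == votes.max()).astype(x.dtype)   # flip worst bits
    return x
```

The decodable error patterns of such an algorithm depend on the structure of H, which is exactly the geometric connection the abstract refers to.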
Distributed-phase OCDMA encoder-decoders based on fiber Bragg gratings
Zhang, Zhaowei; Tian, C.; Petropoulos, P.; Richardson, D.J.; Ibsen, M.
2007-01-01
We propose and demonstrate new optical code-division multiple-access (OCDMA) encoder-decoders having a continuous phase-distribution. With the same spatial refractive index distribution as the reconfigurable optical phase encoder-decoders, they are inherently suitable for the application in reconfigurable OCDMA systems. Furthermore, compared with conventional discrete-phase devices, they also have additional advantages of being more tolerant to input pulse width and, therefore, have the poten...
ABCtoolbox: a versatile toolkit for approximate Bayesian computations
Directory of Open Access Journals (Sweden)
Neuenschwander Samuel
2010-03-01
Full Text Available Abstract Background The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations that do not require likelihood computations. Results Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computations (ABC). It implements various ABC algorithms including rejection sampling, MCMC without likelihood, a particle-based sampler and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates, and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice and validation of the estimation procedure, to visualization of the results.
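The rejection-sampling flavor of ABC that such toolkits implement fits in a few lines: draw a parameter from the prior, simulate data, and keep the draw only if the simulated summary statistic lands within a tolerance of the observed one. The toy Gaussian-mean problem below is purely illustrative and uses nothing from ABCtoolbox itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_draws=20000):
    """Plain ABC rejection: keep prior draws whose simulated data lie
    within eps of the observed data in summary-statistic space."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# toy problem: infer the mean of a Gaussian with known sd = 1
true_mu = 2.0
data = rng.normal(true_mu, 1.0, size=100)
obs_stat = data.mean()                      # summary statistic

post = abc_rejection(
    observed=obs_stat,
    simulate=lambda mu: rng.normal(mu, 1.0, size=100).mean(),
    prior_draw=lambda: rng.uniform(-5, 5),  # flat prior on mu
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
```

The accepted draws in `post` approximate the posterior; shrinking `eps` sharpens the approximation at the cost of a lower acceptance rate, which is the basic trade-off the more sophisticated ABC algorithms (MCMC without likelihood, ABC-GLM) are designed to soften.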
Bayesian estimation and modeling: Editorial to the second special issue on Bayesian data analysis.
Chow, Sy-Miin; Hoijtink, Herbert
2017-12-01
This editorial accompanies the second special issue on Bayesian data analysis published in this journal. The emphases of this issue are on Bayesian estimation and modeling. In this editorial, we outline the basics of current Bayesian estimation techniques and some notable developments in the statistical literature, as well as adaptations and extensions by psychological researchers to better tailor to the modeling applications in psychology. We end with a discussion on future outlooks of Bayesian data analysis in psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding
Douik, Ahmed S.
2013-10-01
In this paper, we study the effect of intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of the different receivers, and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression of the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared with the two blind graph-update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves a tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.
Adaptive decoding using local field potentials in a brain-machine interface.
Rosa So; Libedinsky, Camilo; Kai Keng Ang; Wee Chiek Clement Lim; Kyaw Kyar Toe; Cuntai Guan
2016-08-01
Brain-machine interface (BMI) systems have the potential to restore function to people who suffer from paralysis due to a spinal cord injury. However, in order to achieve long-term use, BMI systems have to overcome two challenges: signal degeneration over time, and non-stationarity of signals. Effects of loss in spike signals over time can be mitigated by using local field potential (LFP) signals for decoding, and a solution to address signal non-stationarity is to use adaptive methods for periodic recalibration of the decoding model. We implemented a BMI system in a nonhuman primate model that allows brain-controlled movement of a robotic platform. Using this system, we showed that LFP signals alone can be used for decoding in a closed-loop brain-controlled BMI. Further, we performed offline analysis to assess the potential implementation of an adaptive decoding method that does not presume knowledge of the target location. Our results show that with periodic signal and channel selection adaptation, decoding accuracy using LFP alone can be improved by 5-50%. These results demonstrate the feasibility of implementing unsupervised adaptive methods during asynchronous decoding of LFP signals for long-term usage in a BMI system.
Word Decoding Development during Phonics Instruction in Children at Risk for Dyslexia.
Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo
2017-05-01
In the present study, we examined the early word decoding development of 73 children at genetic risk of dyslexia and 73 matched controls. We conducted monthly curriculum-embedded word decoding measures during the first 5 months of phonics-based reading instruction followed by standardized word decoding measures halfway and by the end of first grade. In kindergarten, vocabulary, phonological awareness, lexical retrieval, and verbal and visual short-term memory were assessed. The results showed that the children at risk were less skilled in phonemic awareness in kindergarten. During the first 5 months of reading instruction, children at risk were less efficient in word decoding and the discrepancy increased over the months. In subsequent months, the discrepancy prevailed for simple words but increased for more complex words. Phonemic awareness and lexical retrieval predicted the reading development in children at risk and controls to the same extent. It is concluded that children at risk are behind their typical peers in word decoding development starting from the very beginning. Furthermore, it is concluded that the disadvantage increased during phonics instruction and that the same predictors underlie the development of word decoding in the two groups of children. Copyright © 2017 John Wiley & Sons, Ltd.
Efficient decoding with steady-state Kalman filter in neural interface systems.
Malik, Wasim Q; Truccolo, Wilson; Brown, Emery N; Hochberg, Leigh R
2011-02-01
The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5±0.5 s (mean ±s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25±3 single units by a factor of 7.0±0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems.
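The offline computation this abstract relies on — iterating the discrete-time Riccati recursion until the error covariance stops changing, then freezing the resulting gain for real-time use — can be sketched generically as follows (a minimal sketch of the standard construction, not the clinical trial's decoder; matrix names follow the usual state-space convention):

```python
import numpy as np

def steady_state_gain(A, C, Q, R, tol=1e-12, max_iter=100000):
    """Iterate the discrete-time Riccati recursion offline until the
    predicted error covariance P converges; return the fixed Kalman gain."""
    P = Q.copy()
    K = None
    for _ in range(max_iter):
        S = C @ P @ C.T + R               # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)    # gain for the current P
        P_new = A @ (P - K @ C @ P) @ A.T + Q
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = P_new
    return K

def decode_step(x, y, A, C, K):
    """One real-time update using the precomputed steady-state gain:
    no covariance propagation or matrix inversion per sample."""
    x_pred = A @ x
    return x_pred + K @ (y - C @ x_pred)
```

Because `decode_step` contains no per-sample matrix inversion, its cost scales far better with ensemble size, which is the source of the roughly 7x runtime saving the study reports.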
Complexity Analysis of Reed-Solomon Decoding over GF(2^m) without Using Syndromes
Directory of Open Access Journals (Sweden)
Zhiyuan Yan
2008-06-01
Full Text Available There has been renewed interest recently in decoding Reed-Solomon (RS) codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis focuses on RS codes over characteristic-2 fields, for which some multiplicative FFT techniques are not applicable. Due to the moderate block lengths of RS codes in practice, our analysis is complete, without big-O notation. In addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes with moderate lengths. For high-rate RS codes, when compared to syndrome-based decoding algorithms, syndromeless decoding algorithms not only require more field operations regardless of implementation, but also lead to decoder architectures with higher hardware costs and lower throughput when implemented directly. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm.
How Major Depressive Disorder affects the ability to decode multimodal dynamic emotional stimuli
Directory of Open Access Journals (Sweden)
FILOMENA SCIBELLI
2016-09-01
Full Text Available Most studies investigating the processing of emotions in depressed patients report impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions) which do not reflect the way people experience emotions in everyday life. For this reason, this work investigates the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDDs) using dynamic audio/video stimuli. RMDDs' performance is compared with the performance of patients with Adjustment Disorder with Depressed Mood (ADs) and healthy control subjects (HCs). The experiments involve 27 RMDDs (16 with acute depression, RMDD-A, and 11 in a compensation phase, RMDD-C), 16 ADs and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio) and audio/video clips. The results show that AD patients are significantly less accurate than HCs in decoding fear, anger, happiness, surprise and sadness. RMDD-As are significantly less accurate than HCs in decoding happiness, sadness and surprise. Finally, no significant differences were found between HCs and RMDD-Cs. The different communication channels and the types of emotion play a significant role in limiting the decoding accuracy.
Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes
Directory of Open Access Journals (Sweden)
Luca Fanucci
2009-01-01
Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Signal adaptive processing in MPEG-2 decoders with embedded resizing for interlaced video
Zhong, Zhun; Chen, Yingwei; Lan, Tse-Hua
2002-01-01
Video decoding at reduced resolution with resizing embedded in the decoding loop saves computational resources such as memory, memory bandwidth and CPU cycles. Key to such embedded resizing is proper filtering/scaling of DCT data, and motion compensation at the reduced resolution. Although MPEG-2 video decoding with embedded resizing has been investigated in the past, little work has been reported on solving problems associated with interlaced video undergoing decoding with embedded resizing. In particular, annoying artifacts may occur in moving areas of interlaced video due to improper scaling or motion compensation. In this paper, we introduce the notion of the Local Interlacing Property for interlaced moving areas and propose algorithms to detect and process data with the Local Interlacing Property properly in the context of decoding with embedded resizing. Specifically, we demonstrate that 1) vertical high frequency in interlaced moving areas should be preserved during downscaling, and 2) phase shift must be added for motion compensation in interlaced moving areas under certain circumstances. Experimental results show that our method effectively removes artifacts in interlaced moving areas, making MPEG-2 video decoding with embedded resizing a practical tradeoff for interlaced video.
A lossy graph model for delay reduction in generalized instantly decodable network coding
Douik, Ahmed S.
2014-06-01
The problem of minimizing the decoding delay in Generalized Instantly Decodable Network Coding (G-IDNC) for both perfect and lossy feedback scenarios is formulated as a maximum weight clique problem over the G-IDNC graph. In this letter, we introduce a new lossy G-IDNC graph (LG-IDNC) model to further minimize the decoding delay in lossy feedback scenarios. Whereas the G-IDNC graph represents only packets that are unambiguously combinable, the LG-IDNC graph also represents uncertain packet combinations, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all lossy feedback situations and achieves significant improvement in the decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability. © 2012 IEEE.
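Both of the IDNC abstracts above reduce delay minimization to a maximum-weight clique search over a coding graph; since that problem is NP-hard, greedy heuristics of the following flavor are typical. This is a generic sketch of greedy clique selection, not the specific algorithm of [1] or [2]; vertices stand for candidate packet combinations and weights for their expected delay benefit.

```python
def greedy_max_weight_clique(adj, w):
    """Greedy heuristic: repeatedly add the heaviest vertex that is still
    adjacent to every vertex already chosen.
    adj[i][j]: True if vertices i and j can be served by one coded packet."""
    candidates = set(range(len(w)))
    clique = []
    while candidates:
        v = max(candidates, key=lambda i: w[i])
        clique.append(v)
        # keep only vertices compatible with everything chosen so far
        candidates = {u for u in candidates if u != v and adj[v][u]}
    return clique
```

A single greedy pass runs in roughly O(n^2) time, which is why such heuristics are preferred over exact clique search at the sender's per-transmission time scale.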
Decoding hindlimb movement for a brain machine interface after a complete spinal transection.
Manohar, Anitha; Flint, Robert D; Knudsen, Eric; Moxon, Karen A
2012-01-01
Stereotypical locomotor movements can be made without input from the brain after a complete spinal transection. However, the restoration of functional gait requires descending modulation of spinal circuits to independently control the movement of each limb. To evaluate whether a brain-machine interface (BMI) could be used to regain conscious control over the hindlimb, rats were trained to press a pedal and the encoding of hindlimb movement was assessed using a BMI paradigm. Off-line, information encoded by neurons in the hindlimb sensorimotor cortex was assessed. Next, neural population functions, or weighted representations of the neuronal activity, were used to replace the hindlimb movement as a trigger for reward in real-time (on-line decoding) in three conditions: while the animal could still press the pedal, after the pedal was removed, and after a complete spinal transection. A novel representation of the motor program was learned when the animals used neural control to achieve water reward (e.g. more information was conveyed faster). After complete spinal transection, the ability of these neurons to convey information was reduced by more than 40%. However, this BMI representation was relearned over time despite a persistent reduction in the neuronal firing rate during the task. Therefore, neural control is a general feature of the motor cortex, not restricted to forelimb movements, and can be regained after spinal injury.
Neural overlap of L1 and L2 semantic representations in speech: A decoding approach.
Van de Putte, Eowyn; De Baene, Wouter; Brass, Marcel; Duyck, Wouter
2017-11-15
Although research has now converged towards a consensus that both languages of a bilingual are represented in at least partly shared systems for language comprehension, it remains unclear whether both languages are represented in the same neural populations for production. We investigated the neural overlap between L1 and L2 semantic representations of translation equivalents using a production task in which the participants had to name pictures in L1 and L2. Using a decoding approach, we tested whether brain activity during the production of individual nouns in one language allowed predicting the production of the same concepts in the other language. Because both languages only share the underlying semantic representation (sensory and lexical overlap was maximally avoided), this would offer very strong evidence for neural overlap in semantic representations of bilinguals. Based on the brain activation for the individual concepts in one language in the bilateral occipito-temporal cortex and the inferior and the middle temporal gyrus, we could accurately predict the equivalent individual concepts in the other language. This indicates that these regions share semantic representations across L1 and L2 word production. Copyright © 2017 Elsevier Inc. All rights reserved.
Bayesian analysis of log Gaussian Cox processes for disease mapping
DEFF Research Database (Denmark)
Benes, Viktor; Bodlák, Karel; Møller, Jesper
We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis, and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we consider a Bayesian analysis for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods...
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given.
Bayesian analysis for the social sciences
Jackman, Simon
2009-01-01
Bayesian methods are increasingly being used in the social sciences, as the problems encountered lend themselves so naturally to the subjective qualities of Bayesian methodology. This book provides an accessible introduction to Bayesian methods, tailored specifically for social science students. It contains lots of real examples from political science, psychology, sociology, and economics, exercises in all chapters, and detailed descriptions of all the key concepts, without assuming any background in statistics beyond a first course. It features examples of how to implement the methods using WinBUGS - the most-widely used Bayesian analysis software in the world - and R - an open-source statistical software. The book is supported by a Website featuring WinBUGS and R code, and data sets.
Bayesian optimization for computationally extensive probability distributions.
Tamura, Ryo; Hukushima, Koji
2018-01-01
An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently when combined with the steepest descent method, and is thus a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
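The general loop the abstract describes — fit a Gaussian-process surrogate to the points evaluated so far, maximize an acquisition function, and evaluate the expensive objective there — can be sketched with plain NumPy on a 1-D grid. The kernel, upper-confidence-bound acquisition, and toy objective below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-6):
    """GP regression posterior mean and sd on a 1-D grid."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_te, x_tr)
    mu = Ks @ np.linalg.solve(K, y_tr)
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.sum(Ks * v.T, axis=1), 0.0, None)
    return mu, np.sqrt(var)

def bayes_opt(f, grid, n_init=3, n_iter=20, kappa=2.0, seed=0):
    """Maximize f over a grid: GP surrogate + upper-confidence-bound
    acquisition (mu + kappa * sd) chooses each new evaluation point."""
    rng = np.random.default_rng(seed)
    x = list(rng.choice(grid, size=n_init, replace=False))
    y = [f(v) for v in x]
    for _ in range(n_iter):
        mu, sd = gp_posterior(np.array(x), np.array(y), grid)
        x_next = grid[np.argmax(mu + kappa * sd)]
        x.append(x_next)
        y.append(f(x_next))
    return x[int(np.argmax(y))]
```

Here `f` would play the role of the (expensive, unnormalized) log posterior; the loop spends its small evaluation budget where the surrogate suggests a maximum may lie, rather than sampling at random.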
新家, 健精
1991-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.
Implementing the Bayesian paradigm in risk analysis
International Nuclear Information System (INIS)
Aven, T.; Kvaloey, J.T.
2002-01-01
The Bayesian paradigm comprises a unified and consistent framework for analyzing and expressing risk. Yet we see rather few applications in which the full Bayesian setting has been adopted, with specification of priors for unknown parameters. In this paper, we discuss some of the practical challenges of implementing Bayesian thinking and methods in risk analysis, emphasizing the introduction of probability models and parameters and the associated uncertainty assessments. We conclude that a pragmatic view is needed in order to 'successfully' apply the Bayesian approach, so that some probability assignments can be made without the rather sophisticated procedure of specifying prior distributions of parameters. A simple risk analysis example is presented to illustrate these ideas.
A Bayesian concept learning approach to crowdsourcing
DEFF Research Database (Denmark)
Viappiani, P.; Zilles, S.; Hamilton, H.J.
2011-01-01
We develop a Bayesian approach to concept learning for crowdsourcing applications. A probabilistic belief over possible concept definitions is maintained and updated according to (noisy) observations from experts, whose behaviors are modeled using discrete types. We propose recommendation...
An Intuitive Dashboard for Bayesian Network Inference
International Nuclear Information System (INIS)
Reddy, Vikas; Farr, Anna Charisse; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K D V
2014-01-01
Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on cause-and-effect relationships, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
A Bayesian Network Approach to Ontology Mapping
National Research Council Canada - National Science Library
Pan, Rong; Ding, Zhongli; Yu, Yang; Peng, Yun
2005-01-01
.... In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies are treated as evidential reasoning between the two translated BNs...
Learning Bayesian networks for discrete data
Liang, Faming
2009-02-01
Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learning Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. First, it possesses a self-adjusting mechanism and thus essentially avoids the local-trap problem suffered by conventional MCMC simulation-based approaches to learning Bayesian networks. Second, it falls into the class of dynamic importance sampling algorithms: the network features can be inferred by dynamically weighted averaging of the samples generated in the learning process, and the resulting estimates can have much lower variation than single-model-based estimates. The numerical results indicate that our approach can mix much faster over the space of Bayesian networks than conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.
Genome scans for detecting footprints of local adaptation using a Bayesian factor model.
Duforet-Frebourg, Nicolas; Bazin, Eric; Blum, Michael G B
2014-09-01
There is a considerable impetus in population genomics to pinpoint loci involved in local adaptation. A powerful approach to find genomic regions subject to local adaptation is to genotype numerous molecular markers and look for outlier loci. One of the most common approaches for selection scans is based on statistics that measure population differentiation such as FST. However, there are important caveats with approaches related to FST because they require grouping individuals into populations and they additionally assume a particular model of population structure. Here, we implement a more flexible individual-based approach based on Bayesian factor models. Factor models capture population structure with latent variables called factors, which can describe clustering of individuals into populations or isolation-by-distance patterns. Using hierarchical Bayesian modeling, we both infer population structure and identify outlier loci that are candidates for local adaptation. In order to identify outlier loci, the hierarchical factor model searches for loci that are atypically related to population structure as measured by the latent factors. In a model of population divergence, we show that it can achieve a 2-fold or more reduction of false discovery rate compared with the software BayeScan or with an FST approach. We show that our software can handle large data sets by analyzing the single nucleotide polymorphisms of the Human Genome Diversity Project. The Bayesian factor model is implemented in the open-source PCAdapt software. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Bayesian inference from count data using discrete uniform priors.
Directory of Open Access Journals (Sweden)
Federico Comoglio
Full Text Available We consider a set of sample counts obtained by sampling arbitrary fractions of a finite volume containing a homogeneously dispersed population of identical objects. We report a Bayesian derivation of the posterior probability distribution of the population size using a binomial likelihood and non-conjugate, discrete uniform priors under sampling with or without replacement. Our derivation yields a computationally feasible formula that can prove useful in a variety of statistical problems involving absolute quantification under uncertainty. We implemented our algorithm in the R package dupiR and compared it with a previously proposed Bayesian method based on a Gamma prior. As a showcase, we demonstrate that our inference framework can be used to estimate bacterial survival curves from measurements characterized by extremely low or zero counts and rather high sampling fractions. All in all, we provide a versatile, general purpose algorithm to infer population sizes from count data, which can find application in a broad spectrum of biological and physical problems.
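The setup the abstract describes, a binomial likelihood for the observed count combined with a discrete uniform prior on the population size, can be written down directly. A minimal sketch of the idea (not the dupiR implementation), assuming sampling with replacement of a fraction `f` of the volume:

```python
from math import comb

def posterior_population_size(k, f, n_max):
    """Posterior over population size n, given a count k obtained
    by sampling a fraction f of the volume. Binomial likelihood
    Binom(k; n, f), discrete uniform prior on 0..n_max."""
    like = [comb(n, k) * f**k * (1 - f)**(n - k) if n >= k else 0.0
            for n in range(n_max + 1)]
    z = sum(like)  # normalising constant over the prior's support
    return [l / z for l in like]

# Observed 3 objects after sampling half of the volume
post = posterior_population_size(k=3, f=0.5, n_max=20)
map_n = max(range(len(post)), key=post.__getitem__)
```

With k = 3 and f = 0.5 the posterior mode sits near n = 2k, as one would expect from scaling the count up by the sampling fraction.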
Bayesian networks for management of industrial risk
International Nuclear Information System (INIS)
Munteanu, P.; Debache, G.; Duval, C.
2008-01-01
This article presents the outlines of Bayesian network modelling and argues for its value in probabilistic studies of industrial risk and reliability. A practical case representative of this type of study is presented in support of the argument. The article concludes with some research directions aimed at improving the performance of methods relying on Bayesian networks and at widening their application area in risk management. (authors)
MCMC for parameters estimation by bayesian approach
International Nuclear Information System (INIS)
Ait Saadi, H.; Ykhlef, F.; Guessoum, A.
2011-01-01
This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution, in order to accelerate convergence, is one of the main issues of MCMC simulation.
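The Metropolis-Hastings algorithm mentioned in the abstract can be sketched in a few lines. This is a generic random-walk variant with a Gaussian proposal whose standard deviation `step` is the calibration knob the abstract refers to; the target here is an illustrative standard normal, not the paper's dynamic-system posterior:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5):
    """Random-walk Metropolis-Hastings: draws samples from a density
    known only up to a constant, via its log `log_post`. Symmetric
    Gaussian proposal, so the acceptance ratio needs no correction."""
    x, samples = x0, []
    for _ in range(n_steps):
        cand = x + random.gauss(0.0, step)
        # Accept with probability min(1, p(cand)/p(x)), in log space
        if math.log(random.random()) < log_post(cand) - log_post(x):
            x = cand
        samples.append(x)
    return samples

# Target: standard normal (log density up to an additive constant)
random.seed(0)
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(draws) / len(draws)  # should be near 0
```

A `step` that is too small yields high acceptance but slow exploration; one that is too large rejects most proposals. Both extremes slow convergence, which is exactly the calibration issue raised in the abstract.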
Fully probabilistic design of hierarchical Bayesian models
Czech Academy of Sciences Publication Activity Database
Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine
2016-01-01
Roč. 369, č. 1 (2016), s. 532-547 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Fully probabilistic design * Ideal distribution * Minimum cross- entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf
Capturing Business Cycles from a Bayesian Viewpoint
大鋸, 崇
2011-01-01
This paper is a survey of empirical studies analyzing business cycles from the perspective of Bayesian econometrics. Kim and Nelson (1998) use a hybrid model combining the dynamic factor model of Stock and Watson (1989) with the Markov switching model of Hamilton (1989). From this point of view, dealing with non-linear and non-Gaussian econometric models has recently become more important. Although classical econometric approaches have difficulty with such models, Bayesian methods handle them easily. The fact leads heavy u...
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
A Bayesian classifier for symbol recognition
Barrat , Sabine; Tabbone , Salvatore; Nourrissier , Patrick
2007-01-01
URL : http://www.buyans.com/POL/UploadedFile/134_9977.pdf; International audience; We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, we present a descriptor combination method that significantly improves the recognition rate compared to the rates obtained by each descriptor alone. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...
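The descriptor combination the abstract describes follows from the naive Bayes independence assumption: each class models every descriptor independently, so per-descriptor log scores simply add up. A sketch with 1-D Gaussian descriptor models and made-up parameter values (the paper's actual descriptors and distributions are not specified here):

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of a 1-D Gaussian, used as the per-descriptor model."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def naive_bayes_classify(features, params, priors):
    """Naive Bayes over several descriptors: log-likelihoods of the
    independent descriptor models add, which is how the classifier
    combines descriptors into a single score per class."""
    scores = {}
    for cls, prior in priors.items():
        score = math.log(prior)
        for value, (mu, sigma) in zip(features, params[cls]):
            score += gaussian_logpdf(value, mu, sigma)
        scores[cls] = score
    return max(scores, key=scores.get)

# Two symbol classes, two descriptors each, (mean, std): assumed values
params = {"arrow":  [(1.0, 0.5), (3.0, 1.0)],
          "circle": [(4.0, 0.5), (0.5, 1.0)]}
priors = {"arrow": 0.5, "circle": 0.5}
label = naive_bayes_classify([1.2, 2.8], params, priors)
# Both descriptors point to "arrow", so the combined score does too
```

Because the scores are summed in log space, a descriptor that discriminates poorly contributes little, while agreeing descriptors reinforce each other, which is why combining them can beat any single descriptor.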
On the role of articulatory prosodies in German message decoding.
Kohler, Klaus J; Niebuhr, Oliver
2011-01-01
A theoretical framework for speech reduction is outlined in which 'coarticulation' and 'articulatory control' operate on sequences of 'opening-closing gestures' in linguistic and communicative settings, leading to suprasegmental properties - 'articulatory prosodies' - in the acoustic output. In linking this gestalt perspective in speech production to the role of phonetic detail in speech understanding, this paper reports on perception experiments that test listeners' reactions to varying extension of an 'articulatory prosody of palatality' in message identification. The point of departure for the experimental design was the German utterance ich kann Ihnen das ja mal sagen 'I can mention this to you' from the Kiel Corpus of Spontaneous Speech, which contains the palatalized stretch [k̟(h)ε̈n(j)n(j)əs] for the sequence of function words /kan i.n(kə)n das/ kann Ihnen das. The utterance also makes sense without the personal pronoun Ihnen. Systematic experimental variation has shown that the extent of palatality has a highly significant influence on the decoding of Ihnen and that the effect of nasal consonant duration depends on the extension of palatality. These results are discussed in a plea to base future speech perception research on a paradigm that makes the traditional segment-prosody divide more permeable, and moves away from the generally practised phoneme orientation. Copyright © 2011 S. Karger AG, Basel.