WorldWideScience

Sample records for single word decoding

  1. IQ Predicts Word Decoding Skills in Populations with Intellectual Disabilities

    Science.gov (United States)

    Levy, Yonata

    2011-01-01

    This is a study of word decoding in adolescents with Down syndrome and in adolescents with Intellectual Deficits of unknown etiology. It was designed as a replication of studies of word decoding in English speaking and in Hebrew speaking adolescents with Williams syndrome ([0230] and [0235]). Participants' IQ was matched to IQ in the groups with…

  2. Role of Gender and Linguistic Diversity in Word Decoding Development

    Science.gov (United States)

    Verhoeven, Ludo; van Leeuwe, Jan

    2011-01-01

    The purpose of the present study was to investigate the role of gender and linguistic diversity in the growth of Dutch word decoding skills throughout elementary school for a representative sample of children living in the Netherlands. Following a longitudinal design, the children's decoding abilities for (1) regular CVC words, (2) complex…

  3. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

Yap, Regina; Van der Leij, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  4. Word-Decoding Skill Interacts with Working Memory Capacity to Influence Inference Generation during Reading

    Science.gov (United States)

    Hamilton, Stephen; Freed, Erin; Long, Debra L.

    2016-01-01

    The aim of this study was to examine predictions derived from a proposal about the relation between word-decoding skill and working memory capacity, called verbal efficiency theory. The theory states that poor word representations and slow decoding processes consume resources in working memory that would otherwise be used to execute high-level…

  5. Word Decoding Development during Phonics Instruction in Children at Risk for Dyslexia.

    Science.gov (United States)

    Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo

    2017-05-01

In the present study, we examined the early word decoding development of 73 children at genetic risk of dyslexia and 73 matched controls. We conducted monthly curriculum-embedded word decoding measures during the first 5 months of phonics-based reading instruction followed by standardized word decoding measures halfway and by the end of first grade. In kindergarten, vocabulary, phonological awareness, lexical retrieval, and verbal and visual short-term memory were assessed. The results showed that the children at risk were less skilled in phonemic awareness in kindergarten. During the first 5 months of reading instruction, children at risk were less efficient in word decoding and the discrepancy increased over the months. In subsequent months, the discrepancy prevailed for simple words but increased for more complex words. Phonemic awareness and lexical retrieval predicted the reading development in children at risk and controls to the same extent. It is concluded that children at risk are behind their typical peers in word decoding development starting from the very beginning. Furthermore, it is concluded that the disadvantage increased during phonics instruction and that the same predictors underlie the development of word decoding in the two groups of children. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
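
    As an illustration of the bit-flipping idea this record refers to, the sketch below decodes a received word for a generic binary code given by a parity-check matrix. The (7,4) Hamming matrix and the flip-everything-at-the-maximum rule are illustrative assumptions, not the Euclidean Geometry codes analyzed in the paper.

        import numpy as np

        def bit_flip_decode(H, r, max_iters=50):
            """Hard-decision bit flipping: repeatedly flip the bits involved in
            the largest number of unsatisfied parity checks."""
            r = r.copy() % 2
            for _ in range(max_iters):
                syndrome = H @ r % 2                      # which checks fail
                if not syndrome.any():
                    return r                              # valid codeword reached
                fail_counts = syndrome @ H                # failing checks per bit
                r[fail_counts == fail_counts.max()] ^= 1  # flip the worst bits
            return r                                      # may still contain errors

        # Illustrative parity-check matrix of the (7,4) Hamming code (an assumption,
        # not one of the Euclidean Geometry codes from the paper).
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        received = np.zeros(7, dtype=int)
        received[2] ^= 1                                  # single bit error
        print(bit_flip_decode(H, received))               # recovers the all-zero codeword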

  7. The Effects of Word Box Instruction on Acquisition, Generalization, and Maintenance of Decoding and Spelling Skills for First Graders

    Science.gov (United States)

    Alber-Morgan, Sheila R.; Joseph, Laurice M.; Kanotz, Brittany; Rouse, Christina A.; Sawyer, Mary R.

    2016-01-01

    This study examined the effects of implementing word boxes as a supplemental instruction method on the acquisition, maintenance, and generalization of word identification and spelling. Word box intervention consists of using manipulatives to learn phonological decoding skills. The participants were three African-American urban first graders…

  8. The attentional blink is related to phonemic decoding, but not sight-word recognition, in typically reading adults.

    Science.gov (United States)

    Tyson-Parry, Maree M; Sailah, Jessica; Boyes, Mark E; Badcock, Nicholas A

    2015-10-01

This research investigated the relationship between the attentional blink (AB) and reading in typical adults. The AB is a deficit in the processing of the second of two rapidly presented targets when it occurs in close temporal proximity to the first target. Specifically, this experiment examined whether the AB was related to both phonological and sight-word reading abilities, and whether the relationship was mediated by accuracy on a single-target rapid serial visual presentation task (single-target accuracy). Undergraduate university students completed a battery of tests measuring reading ability, non-verbal intelligence, and rapid automatised naming, in addition to rapid serial visual presentation tasks in which they were required to identify either two (AB task) or one (single-target task) targets (outlined shapes: circle, square, diamond, cross, and triangle) in a stream of random-dot distractors. The duration of the AB was related to phonological reading (n=41, β=-0.43): participants who exhibited longer ABs had poorer phonemic decoding skills. The AB was not related to sight-word reading. Single-target accuracy did not mediate the relationship between the AB and reading, but was significantly related to AB depth (non-linear fit, R² = .50): depth reflects the maximal cost in T2 reporting accuracy in the AB. The differential relationship between the AB and phonological versus sight-word reading implicates common resources used for phonemic decoding and target consolidation, which may be involved in cognitive control. The relationship between single-target accuracy and the AB is discussed in terms of cognitive preparation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Importance of speech production for phonological awareness and word decoding: the case of children with cerebral palsy.

    NARCIS (Netherlands)

    Peeters, M.; Verhoeven, L.; Moor, J.M.H. de; Balkom, H. van

    2009-01-01

    The goal of this longitudinal study was to investigate the precursors of early reading development in 52 children with cerebral palsy at kindergarten level in comparison to 65 children without disabilities. Word Decoding was measured to investigate early reading skills, while Phonological Awareness,

  10. Importance of Speech Production for Phonological Awareness and Word Decoding: The Case of Children with Cerebral Palsy

    Science.gov (United States)

    Peeters, Marieke; Verhoeven, Ludo; de Moor, Jan; van Balkom, Hans

    2009-01-01

    The goal of this longitudinal study was to investigate the precursors of early reading development in 52 children with cerebral palsy at kindergarten level in comparison to 65 children without disabilities. Word Decoding was measured to investigate early reading skills, while Phonological Awareness, Phonological Short-term Memory (STM), Speech…

  11. Decoding Signal Processing at the Single-Cell Level

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, H. Steven

    2017-12-01

The ability of cells to detect and decode information about their extracellular environment is critical to generating an appropriate response. In multicellular organisms, cells must decode dozens of signals from their neighbors and extracellular matrix to maintain tissue homeostasis while still responding to environmental stressors. How cells detect and process information from their surroundings through a surprisingly limited number of signal transduction pathways is one of the most important questions in biology. Despite many decades of research, many of the fundamental principles that underlie cell signal processing remain obscure. However, in this issue of Cell Systems, Gillies et al. present compelling evidence that the early response gene circuit can act as a linear signal integrator, thus providing significant insight into how cells handle fluctuating signals and noise in their environment.

  12. The role of short-term memory impairment in nonword repetition, real word repetition, and nonword decoding: A case study.

    Science.gov (United States)

    Peter, Beate

    2018-01-01

    In a companion study, adults with dyslexia and adults with a probable history of childhood apraxia of speech showed evidence of difficulty with processing sequential information during nonword repetition, multisyllabic real word repetition and nonword decoding. Results suggested that some errors arose in visual encoding during nonword reading, all levels of processing but especially short-term memory storage/retrieval during nonword repetition, and motor planning and programming during complex real word repetition. To further investigate the role of short-term memory, a participant with short-term memory impairment (MI) was recruited. MI was confirmed with poor performance during a sentence repetition and three nonword repetition tasks, all of which have a high short-term memory load, whereas typical performance was observed during tests of reading, spelling, and static verbal knowledge, all with low short-term memory loads. Experimental results show error-free performance during multisyllabic real word repetition but high counts of sequence errors, especially migrations and assimilations, during nonword repetition, supporting short-term memory as a locus of sequential processing deficit during nonword repetition. Results are also consistent with the hypothesis that during complex real word repetition, short-term memory is bypassed as the word is recognized and retrieved from long-term memory prior to producing the word.

Fast decoding techniques for extended single- and double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. This paper presents some special high-speed decoding techniques for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
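
    As a toy illustration of finding the error location and value directly from the syndrome, the sketch below corrects a single symbol error in a short Reed-Solomon-style code over the prime field GF(7); the field, length, and generator polynomial are illustrative choices, not the extended byte-oriented codes described in the record.

        p = 7            # prime field size (illustrative)
        alpha = 3        # primitive element of GF(7)
        n = p - 1        # code length

        def poly_eval(coeffs, x):
            """Evaluate a polynomial with coefficients coeffs[i] * x**i over GF(p)."""
            return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

        def correct_single_error(r):
            """Correct at most one symbol error using the two syndromes directly."""
            s1 = poly_eval(r, alpha)             # r(alpha)
            s2 = poly_eval(r, pow(alpha, 2, p))  # r(alpha^2)
            if s1 == 0 and s2 == 0:
                return list(r)                   # no error detected
            X = s2 * pow(s1, p - 2, p) % p       # alpha**i, the error locator
            e = s1 * s1 * pow(s2, p - 2, p) % p  # error value
            i = next(k for k in range(n) if pow(alpha, k, p) == X)
            fixed = list(r)
            fixed[i] = (fixed[i] - e) % p
            return fixed

        # Build a codeword as m(x) * g(x) with g(x) = (x - alpha)(x - alpha^2).
        g = [6, 2, 1]                        # 6 + 2x + x^2 == (x - 3)(x - 2) mod 7
        m = [1, 4, 0, 2]                     # arbitrary message polynomial
        c = [0] * n
        for i, mi in enumerate(m):
            for j, gj in enumerate(g):
                c[i + j] = (c[i + j] + mi * gj) % p

        r = list(c)
        r[4] = (r[4] + 5) % p                # inject a single symbol error
        print(correct_single_error(r) == c)  # True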

  14. Decoding speech perception by native and non-native speakers using single-trial electrophysiological data.

    Directory of Open Access Journals (Sweden)

    Alex Brandmeyer

Full Text Available Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary, or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
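
    A minimal sketch of single-trial multivariate pattern classification in the spirit of this record, using scikit-learn on synthetic data; the array shapes, the regularized logistic-regression classifier, and the cross-validation scheme are assumptions rather than the authors' pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic stand-in for single-trial EEG: 200 trials x (32 channels * 50 samples),
        # with a small class-dependent offset so the two phoneme categories are separable.
        n_trials, n_features = 200, 32 * 50
        labels = rng.integers(0, 2, n_trials)            # phoneme category per trial
        trials = rng.normal(size=(n_trials, n_features))
        trials[labels == 1, :100] += 0.3                 # weak "categorical" signal

        # Regularized linear classifier evaluated with stratified cross-validation.
        clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.01, max_iter=1000))
        scores = cross_val_score(clf, trials, labels, cv=5)
        print("mean single-trial decoding accuracy:", scores.mean())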

  15. Morphology and Vocabulary Acquisition: Using Visual Cues from Word Parts to Enhance Recall and Decode Newly Encountered Words

    Science.gov (United States)

    Bellomo, Tom

    2012-01-01

    An enhanced replication of an original quasi-experiment (Tom Bellomo, 2009b) was conducted to quantify the extent of long term retention of word parts and vocabulary. Such were introduced as part of a vocabulary acquisition strategy in a developmental reading course at one southeast four-year college. Aside from incorporating changes to the test…

  16. Iterative List Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree of codewords at minimal distance from the received vector, we also obtain new information about the code.

  17. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach.

    Science.gov (United States)

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF based metrics), our approach focuses on variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task decoding based metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space.
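
    The two-step logic described here (synergy extraction followed by single-trial task decoding from synergy activations) can be sketched as below; the synthetic EMG, scikit-learn's NMF, and linear discriminant analysis as the decoder are assumptions, not the authors' exact decomposition or classifier.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)

        n_trials, n_muscles, n_tasks, n_synergies = 120, 12, 4, 3
        tasks = rng.integers(0, n_tasks, n_trials)

        # Synthetic non-negative "EMG" built from task-dependent synergy activations.
        true_synergies = rng.uniform(size=(n_synergies, n_muscles))
        activations = rng.uniform(size=(n_trials, n_synergies)) + np.eye(n_tasks)[tasks][:, :n_synergies]
        emg = activations @ true_synergies + 0.05 * rng.uniform(size=(n_trials, n_muscles))

        # Step 1: extract putative synergies (rows of nmf.components_) from the EMG matrix.
        nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000, random_state=0)
        trial_activations = nmf.fit_transform(emg)   # per-trial synergy recruitment

        # Step 2: decode task identity from single-trial synergy activations.
        scores = cross_val_score(LinearDiscriminantAnalysis(), trial_activations, tasks, cv=5)
        print("task decoding accuracy from synergy activations:", scores.mean())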

  18. Decoding sequence learning from single-trial intracranial EEG in humans.

    Directory of Open Access Journals (Sweden)

    Marzia De Lucia

Full Text Available We propose and validate a multivariate classification algorithm for characterizing changes in human intracranial electroencephalographic data (iEEG) after learning motor sequences. The algorithm is based on a Hidden Markov Model (HMM) that captures spatio-temporal properties of the iEEG at the level of single trials. Continuous intracranial iEEG was acquired during two sessions (one before and one after a night of sleep) in two patients with depth electrodes implanted in several brain areas. They performed a visuomotor sequence (serial reaction time task, SRTT) using the fingers of their non-dominant hand. Our results show that the decoding algorithm correctly classified single iEEG trials from the trained sequence as belonging to either the initial training phase (day 1, before sleep) or a later consolidated phase (day 2, after sleep), whereas it failed to do so for trials belonging to a control condition (pseudo-random sequence). Accurate single-trial classification was achieved by taking advantage of the distributed pattern of neural activity. However, across all the contacts the hippocampus contributed most significantly to the classification accuracy for both patients, and one fronto-striatal contact for one patient. Together, these human intracranial findings demonstrate that a multivariate decoding approach can detect learning-related changes at the level of single-trial iEEG. Because it allows an unbiased identification of brain sites contributing to a behavioral effect (or experimental condition) at the level of single subject, this approach could be usefully applied to assess the neural correlates of other complex cognitive functions in patients implanted with multiple electrodes.

  19. Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG

    Science.gov (United States)

    O'Sullivan, James A.; Power, Alan J.; Mesgarani, Nima; Rajaram, Siddharth; Foxe, John J.; Shinn-Cunningham, Barbara G.; Slaney, Malcolm; Shamma, Shihab A.; Lalor, Edmund C.

    2015-01-01

    How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain–computer interfaces. PMID:24429136
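
    A minimal sketch of the stimulus-reconstruction approach outlined above: a ridge-regression decoder maps time-lagged EEG to a speech envelope, and the attended speaker is taken to be the one whose envelope correlates best with the reconstruction. The synthetic data, lag range, and regularization are arbitrary illustrative choices, not the authors' parameters.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(2)
        fs, n_ch, n_sec = 64, 16, 60                 # sampling rate, channels, trial length
        t = fs * n_sec

        # Two synthetic speech envelopes; the EEG "tracks" the attended one (speaker A).
        env_a, env_b = rng.uniform(size=t), rng.uniform(size=t)
        mixing = rng.normal(size=n_ch)
        eeg = np.outer(env_a, mixing) + 0.5 * rng.normal(size=(t, n_ch))

        def lagged(x, max_lag=16):
            """Stack time-lagged copies of every channel (wrap-around edges ignored)."""
            return np.hstack([np.roll(x, lag, axis=0) for lag in range(max_lag + 1)])

        X = lagged(eeg)

        # Train the reconstruction filter on the first half, test attention on the second.
        half = t // 2
        decoder = Ridge(alpha=1.0).fit(X[:half], env_a[:half])
        recon = decoder.predict(X[half:])

        corr_a = np.corrcoef(recon, env_a[half:])[0, 1]
        corr_b = np.corrcoef(recon, env_b[half:])[0, 1]
        print("attended speaker:", "A" if corr_a > corr_b else "B")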

  20. Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Science.gov (United States)

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou

    2008-01-01

The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantages of the sparse NTF-based feature extraction approach lie in its capability to yield components common across the space, time, and frequency domains yet discriminative across different conditions without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ the support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing the sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515

  1. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

    Science.gov (United States)

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-04-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework

  2. The effects of video self-modeling on the decoding skills of children at risk for reading disabilities

    OpenAIRE

    Ayala, SM; O'Connor, R

    2013-01-01

    Ten first grade students who had responded poorly to a Tier 2 reading intervention in a response to intervention (RTI) model received an intervention of video self-modeling to improve decoding skills and sight word recognition. Students were video recorded blending and segmenting decodable words and reading sight words. Videos were edited and viewed a minimum of four times per week. Data were collected twice per week using curriculum-based measures. A single subject multiple baseline across p...

  3. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  4. Decoding subtle forearm flexions using fractal features of surface electromyogram from single and multiple sensors.

    Science.gov (United States)

    Arjunan, Sridhar Poosapadi; Kumar, Dinesh Kant

    2010-10-21

Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexion, overcoming the earlier shortcomings. The sEMG signal was recorded when the participant maintained pre-specified wrist and finger flexion movements for a period of time. Various established sEMG signal parameters such as root mean square (RMS), mean absolute value (MAV), variance (VAR) and waveform length (WL), and the proposed fractal features, fractal dimension (FD) and maximum fractal length (MFL), were computed. Multivariate analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationships between each of these parameters with the wrist and finger flexions. Classification accuracy was also computed using the trained artificial neural network (ANN) classifier to decode the desired subtle movements. The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001 while that of various combinations of the five established features ranged between 0.009 - 0.0172. From the accuracy of classification by the ANN, the average accuracy in identifying the wrist and finger flexions using the proposed feature set of single channel sEMG was 90%, while the average accuracy when using a combination of other features ranged between 58% and 73
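
    The established time-domain features listed in this record (RMS, MAV, VAR, WL) together with a fractal-dimension estimate can be computed as in the sketch below; Higuchi's method is used here as a stand-in for the paper's fractal features, and the signal window is synthetic.

        import numpy as np

        def higuchi_fd(x, k_max=8):
            """Higuchi fractal dimension: slope of log curve length vs. log(1/scale)."""
            n = len(x)
            log_lk, log_inv_k = [], []
            for k in range(1, k_max + 1):
                lengths = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    segments = len(idx) - 1
                    if segments < 1:
                        continue
                    lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (segments * k * k))
                log_lk.append(np.log(np.mean(lengths)))
                log_inv_k.append(np.log(1.0 / k))
            slope, _ = np.polyfit(log_inv_k, log_lk, 1)
            return slope

        def emg_features(x):
            """Established time-domain sEMG features plus a fractal-dimension estimate."""
            return {
                "RMS": np.sqrt(np.mean(x ** 2)),
                "MAV": np.mean(np.abs(x)),
                "VAR": np.var(x),
                "WL": np.sum(np.abs(np.diff(x))),   # waveform length
                "FD": higuchi_fd(x),                # stand-in fractal dimension
            }

        rng = np.random.default_rng(3)
        window = rng.normal(scale=0.05, size=1024)  # synthetic low-level sEMG window
        print(emg_features(window))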

  5. Decoding subtle forearm flexions using fractal features of surface electromyogram from single and multiple sensors

    Directory of Open Access Journals (Sweden)

    Kumar Dinesh

    2010-10-01

Full Text Available Abstract Background Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexion, overcoming the earlier shortcomings. Methods The sEMG signal was recorded when the participant maintained pre-specified wrist and finger flexion movements for a period of time. Various established sEMG signal parameters such as root mean square (RMS), mean absolute value (MAV), variance (VAR) and waveform length (WL), and the proposed fractal features, fractal dimension (FD) and maximum fractal length (MFL), were computed. Multivariate analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationships between each of these parameters with the wrist and finger flexions. Classification accuracy was also computed using the trained artificial neural network (ANN) classifier to decode the desired subtle movements. Results The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001 while that of various combinations of the five established features ranged between 0.009 - 0.0172. From the accuracy of classification by the ANN, the average accuracy in identifying the wrist and finger flexions using the proposed feature set of single channel sEMG was 90%, while the average accuracy when using a combination

  6. Multi- and Unisensory Decoding of Words and Nonwords Result in Differential Brain Responses in Dyslexic and Nondyslexic Adults

    Science.gov (United States)

    Kast, Monika; Bezzola, Ladina; Jancke, Lutz; Meyer, Martin

    2011-01-01

    The present functional magnetic resonance imaging (fMRI) study was designed, in order to investigate the neural substrates involved in the audiovisual processing of disyllabic German words and pseudowords. Twelve dyslexic and 13 nondyslexic adults performed a lexical decision task while stimuli were presented unimodally (either aurally or…

  7. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  8. Production Variability and Single Word Intelligibility in Aphasia and Apraxia of Speech

    Science.gov (United States)

    Haley, Katarina L.; Martin, Gwenyth

    2011-01-01

    This study was designed to estimate test-retest reliability of orthographic speech intelligibility testing in speakers with aphasia and AOS and to examine its relationship to the consistency of speaker and listener responses. Monosyllabic single word speech samples were recorded from 13 speakers with coexisting aphasia and AOS. These words were…

  9. Decoding Dyslexia, a Common Learning Disability

    Science.gov (United States)

  10. Bounded-Angle Iterative Decoding of LDPC Codes

    Science.gov (United States)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  11. Fast Reed-Solomon Decoder

    Science.gov (United States)

    Liu, K. Y.

    1986-01-01

    High-speed decoder intended for use with Reed-Solomon (RS) codes of long code length and high error-correcting capability. Design based on algorithm that includes high-radix Fermat transform procedure, which is most efficient for high speeds. RS code in question has code-word length of 256 symbols, of which 224 are information symbols and 32 are redundant.

  12. LDPC Decoding on GPU for Mobile Device

    Directory of Open Access Journals (Sweden)

    Yiqin Lu

    2016-01-01

Full Text Available A flexible software LDPC decoder that exploits data parallelism for simultaneous multi-codeword decoding on the mobile device is proposed in this paper, supported by multithreading on OpenCL based graphics processing units. By dividing the check matrix into several parts to make full use of both the local memory and private memory on the GPU and properly modifying the code capacity each time, our implementation on a mobile phone shows throughputs above 100 Mbps and a decoding delay of less than 1.6 milliseconds, which makes high-speed communication like video calling possible. To realize efficient software LDPC decoding on the mobile device, the LDPC decoding feature on the communication baseband chip should be replaced to save cost and make it easier to upgrade the decoder to be compatible with a variety of channel access schemes.

  13. On Decoding Interleaved Chinese Remainder Codes

    DEFF Research Database (Denmark)

    Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail with a probability dependent on the number of errors, and we give an upper bound for this. Simulation results indicate that the bound is close to the truth. We apply the proposed decoding algorithm for decoding a single CR code using the idea of "Power" decoding, suggested for Reed-Solomon codes. A combination of these two methods can be used to decode low-rate Interleaved Chinese Remainder codes.

  14. English Word-Level Decoding and Oral Language Factors as Predictors of Third and Fifth Grade English Language Learners' Reading Comprehension Performance

    Science.gov (United States)

    Landon, Laura L.

    2017-01-01

    This study examines the application of the Simple View of Reading (SVR), a reading comprehension theory focusing on word recognition and linguistic comprehension, to English Language Learners' (ELLs') English reading development. This study examines the concurrent and predictive validity of two components of the SVR, oral language and word-level…

  15. Advance Planning of Form Properties in the Written Production of Single and Multiple Words

    Science.gov (United States)

    Damian, Markus F.; Stadthagen-Gonzalez, Hans

    2009-01-01

    Three experiments investigated the scope of advance planning in written production. Experiment 1 manipulated phonological factors in single word written production, and Experiments 2 and 3 did the same in the production of adjective-noun utterances. In all three experiments, effects on latencies were found which mirrored those previously…

  16. Investigation of the Functional Neuroanatomy of Single Word Reading and Its Development

    Science.gov (United States)

    Palmer, Erica D.; Brown, Timothy T.; Petersen, Steven E.; Schlaggar, Bradley L.

    2004-01-01

    An understanding of the processing underlying single word reading will provide insight into how skilled reading is achieved, with important implications for reading education and impaired reading. Investigation of the functional neuroanatomy of both the mature and the developing systems will be critical for reaching this understanding. To this…

Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
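
    The ML-versus-MAP distinction drawn in this record can be made concrete with a brute-force example over a binary symmetric channel: word-level MLD picks the single most likely codeword, while bitwise MAP sums posterior probability over all codewords and outputs per-bit soft information. The (7,4) code and crossover probability below are illustrative assumptions, not the trellis-based algorithms of the report.

        from itertools import product

        # Illustrative systematic (7,4) Hamming-style code: all 16 codewords from G.
        G = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 1, 0, 1],
             [0, 0, 1, 0, 0, 1, 1],
             [0, 0, 0, 1, 1, 1, 1]]
        codewords = [[sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
                     for m in product([0, 1], repeat=4)]

        def decode(received, p=0.1):
            """Word-level MLD and bitwise MAP for a binary symmetric channel with
            crossover probability p, assuming equiprobable codewords."""
            def likelihood(c):
                flips = sum(ci != ri for ci, ri in zip(c, received))
                return (p ** flips) * ((1 - p) ** (len(c) - flips))

            # MLD: the single most likely codeword (hard decision).
            ml_word = max(codewords, key=likelihood)

            # Bitwise MAP: posterior probability that each bit equals 1 (soft output).
            total = sum(likelihood(c) for c in codewords)
            bit_posteriors = [sum(likelihood(c) for c in codewords if c[j] == 1) / total
                              for j in range(7)]
            return ml_word, bit_posteriors

        received = [1, 0, 0, 0, 1, 1, 1]   # codeword for message 1000 with one flipped bit
        ml, post = decode(received)
        print("MLD codeword:      ", ml)
        print("bitwise posteriors:", [round(q, 3) for q in post])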

  18. Single-word multiple-bit upsets in static random access devices

    International Nuclear Information System (INIS)

    Koga, R.; Pinkerton, S.D.; Lie, T.J.; Crawford, K.B.

    1993-01-01

    Energetic ions and protons can cause single event upsets (SEUs) in static random access memory (SRAM) cells. In some cases multiple bits may be upset as the result of a single event. Space-borne electronics systems incorporating high-density SRAM are vulnerable to single-word multiple-bit upsets (SMUs). The authors review here recent observations of SMU, present the results of a systematic investigation of the physical cell arrangements employed in several currently available SRAM device types, and discuss implications for the occurrence and mitigation of SMU

  19. Auditory comprehension: from the voice up to the single word level

    OpenAIRE

    Jones, Anna Barbara

    2016-01-01

    Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...

  20. The Effects of Video Self-Modeling on the Decoding Skills of Children At Risk for Reading Disabilities

    OpenAIRE

    Ayala, Sandra M

    2010-01-01

    Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum instruction. Individual videos were recorded and edited to show students successfully and accurately decoding words and practicing sight word recognition. Each...

  1. A Few Words about Words | Poster

    Science.gov (United States)

By Ken Michaels, Guest Writer In Shakespeare's play "Hamlet," Polonius inquires of the prince, "What do you read, my lord?" Not at all pleased with what he's reading, Hamlet replies, "Words, words, words."1 I have previously described the communication model in which a sender encodes a message and then sends it via some channel (or medium) to a receiver, who decodes the message

  2. The Influence of Visual Word Form in Reading: Single Case Study of an Arabic Patient with Deep Dyslexia

    Science.gov (United States)

    Boumaraf, Assia; Macoir, Joël

    2016-01-01

    Deep dyslexia is a written language disorder characterized by poor reading of non-words, and advantage for concrete over abstract words with production of semantic, visual and morphological errors. In this single case study of an Arabic patient with input deep dyslexia, we investigated the impact of graphic features of Arabic on manifestations of…

  3. Illustrative examples in a bilingual decoding dictionary: An (un ...

    African Journals Online (AJOL)

Keywords: Illustrative Examples, Bilingual Decoding Dictionary, Semantic Differences Between Source Language (SL) And Target Language (TL), Grammatical Differences Between SL And TL, Translation Of Examples, Transposition, Context-Dependent Translation, One-Word Equivalent, Zero Equivalent, Idiomatic ...

  4. Psychometric characteristics of single-word tests of children's speech sound production.

    Science.gov (United States)

    Flipsen, Peter; Ogiela, Diane A

    2015-04-01

    Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984). The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource that clinicians may use to help them make test selection decisions for their particular client populations. Ten tests published since 1990 were reviewed to determine whether they met the 10 criteria set out by McCauley and Swisher (1984), as well as 7 additional criteria. All of the tests reviewed met at least 3 of McCauley and Swisher's (1984) original criteria, and 9 of 10 tests met at least 5 of them. Most of the tests met some of the additional criteria as well. The state of the art for single-word tests of speech sound production in children appears to have improved in the last 30 years. There remains, however, room for improvement.

  5. Selectivity of N170 for visual words in the right hemisphere: Evidence from single-trial analysis.

    Science.gov (United States)

    Yang, Hang; Zhao, Jing; Gaspar, Carl M; Chen, Wei; Tan, Yufei; Weng, Xuchu

    2017-08-01

    Neuroimaging and neuropsychological studies have identified the involvement of the right posterior region in the processing of visual words. Interestingly, in contrast, ERP studies of the N170 typically demonstrate selectivity for words more strikingly over the left hemisphere. Why is right hemisphere selectivity for words during the N170 epoch typically not observed, despite the clear involvement of this region in word processing? One possibility is that amplitude differences measured on averaged ERPs in previous studies may have been obscured by variation in peak latency across trials. This study examined this possibility by using single-trial analysis. Results show that words evoked greater single-trial N170s than control stimuli in the right hemisphere. Additionally, we observed larger trial-to-trial variability on N170 peak latency for words as compared to control stimuli over the right hemisphere. Results demonstrate that, in contrast to much of the prior literature, the N170 can be selective to words over the right hemisphere. This discrepancy is explained in terms of variability in trial-to-trial peak latency for responses to words over the right hemisphere. © 2017 Society for Psychophysiological Research.

  6. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
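
    A minimal sketch of the double-difference idea in this record, taking the cross-delta between two correlated data sets and then an adjacent-delta along the result (the abstract also allows the reverse order), together with the inverse post-decoding step; the synthetic bands below are illustrative.

        import numpy as np

        def double_difference(band_a, band_b):
            """Cross-delta between two correlated data sets, then adjacent-delta
            along the result. The residual stream is typically easier to entropy-code."""
            cross = band_b - band_a                 # removes band-to-band correlation
            return np.diff(cross, prepend=0)        # removes sample-to-sample correlation

        def undo_double_difference(band_a, dd):
            """Inverse post-decoding step: integrate, then add back the first band."""
            return band_a + np.cumsum(dd)

        rng = np.random.default_rng(4)
        base = np.cumsum(rng.integers(-3, 4, size=256))      # smooth synthetic signal
        band_a = base + rng.integers(-1, 2, size=256)        # two highly correlated "bands"
        band_b = base + 10 + rng.integers(-1, 2, size=256)

        dd = double_difference(band_a, band_b)
        assert np.array_equal(undo_double_difference(band_a, dd), band_b)
        print("original range:", np.ptp(band_b), " double-difference range:", np.ptp(dd))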

  7. Decoding intention at sensorimotor timescales.

    Directory of Open Access Journals (Sweden)

    Mathew Salvaris

Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds; much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.

  8. Decoding vigilance with NIRS.

    Science.gov (United States)

    Bogler, Carsten; Mehnert, Jan; Steinbrink, Jens; Haynes, John-Dylan

    2014-01-01

Sustained, long-term cognitive workload is associated with variations and decrements in performance. Such fluctuations in vigilance can be a risk factor especially during dangerous attention demanding activities. Functional MRI studies have shown that attentional performance is correlated with BOLD-signals, especially in parietal and prefrontal cortical regions. An interesting question is whether these BOLD-signals could be measured in real-world scenarios, say to warn in a dangerous workplace whenever a subject's vigilance is low. Because fMRI lacks the mobility needed for such applications, we tested whether the monitoring of vigilance might be possible using Near-Infrared Spectroscopy (NIRS). NIRS is a highly mobile technique that measures hemodynamics in the surface of the brain. We demonstrate that non-invasive NIRS signals correlate with vigilance. These signals carry enough information to decode subjects' reaction times at a single trial level.

  9. Decoding vigilance with NIRS.

    Directory of Open Access Journals (Sweden)

    Carsten Bogler

Full Text Available Sustained, long-term cognitive workload is associated with variations and decrements in performance. Such fluctuations in vigilance can be a risk factor especially during dangerous attention demanding activities. Functional MRI studies have shown that attentional performance is correlated with BOLD-signals, especially in parietal and prefrontal cortical regions. An interesting question is whether these BOLD-signals could be measured in real-world scenarios, say to warn in a dangerous workplace whenever a subject's vigilance is low. Because fMRI lacks the mobility needed for such applications, we tested whether the monitoring of vigilance might be possible using Near-Infrared Spectroscopy (NIRS). NIRS is a highly mobile technique that measures hemodynamics in the surface of the brain. We demonstrate that non-invasive NIRS signals correlate with vigilance. These signals carry enough information to decode subjects' reaction times at a single trial level.

  10. Single Trial Decoding of Belief Decision Making from EEG and fMRI Data Using ICA Features

    Directory of Open Access Journals (Sweden)

    Pamela eDouglas

    2013-07-01

Full Text Available The complex task of assessing the veracity of a statement is thought to activate uniquely distributed brain regions based on whether a subject believes or disbelieves a given assertion. In the current work, we present parallel machine learning methods for predicting a subject's decision response to a given propositional statement based on independent component (IC) features derived from EEG and fMRI data. Our results demonstrate that IC features outperformed features derived from event-related spectral perturbations in any single spectral band, yet were similar in accuracy to features from all spectral bands combined. We compared our diagnostic IC spatial maps with our conventional general linear model (GLM) results, and found that informative ICs had significant spatial overlap with our GLM results, yet also revealed unique regions like amygdala that were not statistically significant in GLM analyses. Overall, these results suggest that ICs may yield a parsimonious feature set that can be used along with a decision tree structure for interpretation of features used in classifying complex cognitive processes such as belief and disbelief across both fMRI and EEG neuroimaging modalities.
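
    A minimal sketch of the feature path described here, independent component analysis on trial data followed by a decision-tree classifier on the per-trial IC activations; the synthetic data, FastICA, and the tree depth are assumptions, not the authors' pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)

        n_trials, n_channels, n_sources = 160, 20, 5
        labels = rng.integers(0, 2, n_trials)            # belief vs. disbelief trial

        # Synthetic trial-by-channel data driven by a few latent sources, one of which
        # is modulated by the decision label.
        sources = rng.normal(size=(n_trials, n_sources))
        sources[:, 0] += 1.5 * labels
        mixing = rng.normal(size=(n_sources, n_channels))
        data = sources @ mixing + 0.5 * rng.normal(size=(n_trials, n_channels))

        # IC features: per-trial activations of the unmixed components.
        ica = FastICA(n_components=n_sources, random_state=0, max_iter=1000)
        ic_features = ica.fit_transform(data)

        # Decision tree on IC features, as a simple interpretable decoder.
        scores = cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0),
                                 ic_features, labels, cv=5)
        print("mean decoding accuracy from IC features:", scores.mean())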

  11. Learning to Read Words: Theory, Findings, and Issues

    Science.gov (United States)

    Ehri, Linnea C.

    2005-01-01

    Reading words may take several forms. Readers may utilize decoding, analogizing, or predicting to read unfamiliar words. Readers read familiar words by accessing them in memory, called sight word reading. With practice, all words come to be read automatically by sight, which is the most efficient, unobtrusive way to read words in text. The process…

  12. Using Serial and Discrete Digit Naming to Unravel Word Reading Processes.

    Science.gov (United States)

    Altani, Angeliki; Protopapas, Athanassios; Georgiou, George K

    2018-01-01

    During reading acquisition, word recognition is assumed to undergo a developmental shift from slow serial/sublexical processing of letter strings to fast parallel processing of whole word forms. This shift has been proposed to be detected by examining the size of the relationship between serial- and discrete-trial versions of word reading and rapid naming tasks. Specifically, a strong association between serial naming of symbols and single word reading suggests that words are processed serially, whereas a strong association between discrete naming of symbols and single word reading suggests that words are processed in parallel as wholes. In this study, 429 Grade 1, 3, and 5 English-speaking Canadian children were tested on serial and discrete digit naming and word reading. Across grades, single word reading was more strongly associated with discrete naming than with serial naming of digits, indicating that short high-frequency words are processed as whole units early in the development of reading ability in English. In contrast, serial naming was not a unique predictor of single word reading across grades, suggesting that within-word sequential processing was not required for the successful recognition for this set of words. Factor mixture analysis revealed that our participants could be clustered into two classes, namely beginning and more advanced readers. Serial naming uniquely predicted single word reading only among the first class of readers, indicating that novice readers rely on a serial strategy to decode words. Yet, a considerable proportion of Grade 1 students were assigned to the second class, evidently being able to process short high-frequency words as unitized symbols. We consider these findings together with those from previous studies to challenge the hypothesis of a binary distinction between serial/sublexical and parallel/lexical processing in word reading. We argue instead that sequential processing in word reading operates on a continuum

  13. Comparison of single-word and adjective-noun phrase production using event-related brain potentials

    DEFF Research Database (Denmark)

    Lange, Violaine Michel

    2015-01-01

stimuli varying in complexity (black and white line drawings, coloured line drawings, and arrays of drawings) in participants producing single nouns. Whilst naming latencies were similar for single noun production between visual stimuli conditions, ERPs differed between drawing arrays and single drawings in a time-window extending beyond early visual analysis. In a second experiment, different participants were asked to produce either single noun or adjective-noun dual-word phrases to black-and-white and coloured line drawings, respectively. Adjective-noun phrase production (2W) resulted in naming latencies...

  14. Single dose antidepressant administration modulates the neural processing of self-referent personality trait words

    DEFF Research Database (Denmark)

    Miskowiak, Kamilla; Papadatou-Pastou, Marietta; Cowen, Philip J

    2007-01-01

    categorisation and recognition of self-referent personality trait words were assessed using event-related functional Magnetic Resonance Imaging (fMRI). Reboxetine had no effect on neuronal response during self-referent categorisation of positive or negative personality trait words. However, in a subsequent...

  15. The effect of fine and grapho-motor skill demands on preschoolers' decoding skill.

    Science.gov (United States)

    Suggate, Sebastian; Pufke, Eva; Stoeger, Heidrun

    2016-01-01

    Previous correlational research has found indications that fine motor skills (FMS) link to early reading development, but the work has not demonstrated causality. We manipulated 51 preschoolers' FMS while children learned to decode letters and nonsense words in a within-participants, randomized, and counterbalanced single-factor design with pre- and posttesting. In two conditions, children wrote with a pencil that had a conical shape fitted to the end filled with either steel (impaired writing condition) or polystyrene (normal writing condition). In a third control condition, children simply pointed at the letters with the light pencil as they learned to read the words (pointing condition). Results indicate that children learned the most decoding skills in the normal writing condition, followed by the pointing and impaired writing conditions. In addition, working memory, phonemic awareness, and grapho-motor skills were generally predictors of decoding skill development. The findings provide experimental evidence that having lower FMS is disadvantageous for reading development. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Optimization of MPEG decoding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1999-01-01

    MPEG-2 video decoding is examined. A unified approach to quality improvement, chrominance upsampling, de-interlacing and superresolution is presented. The information over several frames is combined as part of the processing....

  17. Deep generative learning of location-invariant visual word recognition.

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words-which was the model's learning objective

  18. Words, Words, Words: English, Vocabulary.

    Science.gov (United States)

    Lamb, Barbara

    The Quinmester course on words gives the student the opportunity to increase his proficiency by investigating word origins, word histories, morphology, and phonology. The course includes the following: dictionary skills and familiarity with the "Oxford,""Webster's Third," and "American Heritage" dictionaries; word…

  19. Grasp movement decoding from premotor and parietal cortex.

    Science.gov (United States)

    Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg

    2011-10-05

    Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.
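
    The classification step described here can be pictured with a small stand-in: a Bayesian classifier over trial-wise firing-rate vectors. The sketch below uses synthetic multiunit counts and scikit-learn's Gaussian naive Bayes; the unit counts, tuning structure, and class layout are invented for illustration and are not the recording setup of the study.

```python
# Toy sketch of Bayesian decoding of grasp condition from multiunit firing rates.
# Synthetic data only; feature dimensions and class structure are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_units, n_trials_per_class = 40, 30
grips, orientations = 2, 5                      # power/precision x 5 wrist angles
n_classes = grips * orientations                # 10 conditions, chance = 10%

# Each condition gets its own mean firing-rate vector ("tuning"), plus noise.
class_means = rng.gamma(shape=2.0, scale=5.0, size=(n_classes, n_units))
X = np.vstack([rng.poisson(m, size=(n_trials_per_class, n_units)) for m in class_means])
y = np.repeat(np.arange(n_classes), n_trials_per_class)

decoder = GaussianNB()                          # simple Bayesian classifier
acc = cross_val_score(decoder, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance 0.10)")
```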

  20. Teaching Formulaic Sequences: The Same as or Different from Teaching Single Words?

    Science.gov (United States)

    Alali, Fatima A.; Schmitt, Norbert

    2012-01-01

    Formulaic language is an important component of discourse and needs to be addressed in teaching pedagogy. Unfortunately, there has been little research into the most effective ways of teaching formulaic language. In this study, Kuwaiti students were taught words and idioms using the same teaching methodologies, and their learning was measured. The…

  2. Decoding communities in networks

    Science.gov (United States)

    Radicchi, Filippo

    2018-02-01

    According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.
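
    The channel metaphor is easy to make concrete: memberships are the message, the observed edges and non-edges are its noisy encoding, and any detection algorithm plays the decoder. The sketch below plants two communities, generates edges with different within- and between-group probabilities, and "decodes" the memberships with a simple spectral rule; the spectral step is only a stand-in for the coding-theoretic decoders used in the paper.

```python
# Sketch: planted two-community network, "decoded" from its noisy edges.
# Memberships play the role of information bits; each node pair is a noisy
# observation (an edge is more likely inside a community than across).
import numpy as np

rng = np.random.default_rng(1)
n, p_in, p_out = 200, 0.10, 0.02
sigma = np.repeat([1, -1], n // 2)              # planted memberships (the "message")

prob = np.where(np.equal.outer(sigma, sigma), p_in, p_out)
A = (rng.random((n, n)) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric adjacency, no self-loops

# Decoder stand-in: sign of the leading eigenvector of the centered adjacency.
B = A - A.mean()
vals, vecs = np.linalg.eigh(B)
guess = np.sign(vecs[:, -1])
acc = max(np.mean(guess == sigma), np.mean(-guess == sigma))   # labels are defined up to a swap
print(f"fraction of memberships recovered: {acc:.2f}")
```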

  4. Single-Word Predictions of Upcoming Language During Comprehension: Evidence from the Cumulative Semantic Interference Task

    Science.gov (United States)

    Kleinman, Daniel; Runnqvist, Elin; Ferreira, Victor S.

    2015-01-01

    Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one…”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence. PMID:25917550

  5. Toward a universal decoder of linguistic meaning from brain activation.

    Science.gov (United States)

    Pereira, Francisco; Lou, Bin; Pritchett, Brianna; Ritter, Samuel; Gershman, Samuel J; Kanwisher, Nancy; Botvinick, Matthew; Fedorenko, Evelina

    2018-03-06

    Prior work decoding linguistic meaning from imaging data has been largely limited to concrete nouns, using similar stimuli for training and testing, from a relatively small number of semantic categories. Here we present a new approach for building a brain decoding system in which words and sentences are represented as vectors in a semantic space constructed from massive text corpora. By efficiently sampling this space to select training stimuli shown to subjects, we maximize the ability to generalize to new meanings from limited imaging data. To validate this approach, we train the system on imaging data of individual concepts, and show it can decode semantic vector representations from imaging data of sentences about a wide variety of both concrete and abstract topics from two separate datasets. These decoded representations are sufficiently detailed to distinguish even semantically similar sentences, and to capture the similarity structure of meaning relationships between sentences.
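
    A minimal version of such a decoder can be sketched as a regularized regression from voxel patterns to semantic vectors, with held-out items identified by nearest-neighbour search in the semantic space. Everything below (data sizes, noise level, the ridge penalty) is synthetic and illustrative, not the authors' pipeline.

```python
# Sketch of a regression-style brain decoder: map voxel patterns to semantic
# vectors, then identify the stimulus by nearest-neighbour search in that space.
# All data here are synthetic stand-ins for fMRI patterns and text-derived vectors.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_concepts, n_voxels, dim = 120, 500, 50
semantic = rng.standard_normal((n_concepts, dim))            # word/sentence vectors
W_true = rng.standard_normal((dim, n_voxels)) / np.sqrt(dim)
brain = semantic @ W_true + 0.5 * rng.standard_normal((n_concepts, n_voxels))

train, test = np.arange(0, 100), np.arange(100, 120)
decoder = Ridge(alpha=10.0).fit(brain[train], semantic[train])
pred = decoder.predict(brain[test])

# Identification: is the true vector the closest one among the held-out items?
dists = np.linalg.norm(pred[:, None, :] - semantic[test][None, :, :], axis=-1)
top1 = np.mean(dists.argmin(axis=1) == np.arange(len(test)))
print(f"held-out top-1 identification accuracy: {top1:.2f} (chance {1/len(test):.2f})")
```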

  6. Brain-to-text: Decoding spoken phrases from phone representations in the brain

    Directory of Open Access Journals (Sweden)

    Christian eHerff

    2015-06-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system achieved word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step towards human-machine communication based on imagined speech.
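
    Setting the ASR machinery aside, the frame-to-phone-to-text direction of such a system can be illustrated with a toy pipeline: classify each neural frame into a phone class, then collapse consecutive repeats into a phone string. The feature model, phone set, and classifier below are invented stand-ins; the actual Brain-To-Text system models phones with ASR-style decoding rather than independent frames.

```python
# Toy stand-in for phone-level decoding of neural frames: classify each frame,
# then collapse consecutive repeats into a phone sequence.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
phones = ["h", "e", "l", "o"]
templates = rng.standard_normal((len(phones), 16))             # mean feature per phone

def frames_for(seq, n_rep=8, noise=0.6):
    """Simulate a run of noisy feature frames for each phone in seq."""
    idx = np.repeat([phones.index(p) for p in seq], n_rep)
    return templates[idx] + noise * rng.standard_normal((len(idx), 16)), idx

X_train, y_train = frames_for("helo" * 20)                      # training material
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_test, _ = frames_for("helo")                                  # a new "utterance"
framewise = clf.predict(X_test)                                 # one phone label per frame
decoded = "".join(phones[i] for i, _ in itertools.groupby(framewise))
print("decoded phone sequence:", decoded)
```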

  7. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
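
    For reference, hard-decision Viterbi decoding of a short convolutional code, the baseline such syndrome-based decoders are compared against, fits in a few lines. The sketch below uses the common rate-1/2, constraint-length-3 code with generators 7 and 5 (octal); the adaptive syndrome decoder itself is not reproduced.

```python
# Hard-decision Viterbi decoding of a rate-1/2 convolutional code with
# constraint length 3 and generators 7, 5 (octal). Pure-Python reference sketch.
G = [(1, 1, 1), (1, 0, 1)]                         # taps on (input, s1, s2)

def encode(bits):
    state, out = (0, 0), []
    for b in list(bits) + [0, 0]:                  # two flushing bits end in state 0
        window = (b,) + state
        out += [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi(received):
    INF = 10 ** 9
    metric = [0, INF, INF, INF]                    # state index = 2*s1 + s2, start in 0
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * 4, [[]] * 4
        for s in range(4):
            if metric[s] >= INF:
                continue
            s1, s2 = s >> 1, s & 1
            for b in (0, 1):
                branch = [sum(w * g for w, g in zip((b, s1, s2), gen)) % 2 for gen in G]
                cost = metric[s] + sum(x != y for x, y in zip(branch, r))
                nxt = (b << 1) | s1
                if cost < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = cost, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0][:-2]                           # survivor ending in state 0, minus flush bits

message = [1, 0, 1, 1, 0, 0, 1]
received = encode(message)
received[3] ^= 1                                   # one channel bit flipped
print("decoded:", viterbi(received), "| matches message:", viterbi(received) == message)
```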

  8. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  9. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  10. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. Particularly, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of a dB loss in signal to noise ratio at a bit error rate (BER) of 10^-6.

  11. The Multisyllabic Word Dilemma: Helping Students Build Meaning, Spell, and Read "Big" Words.

    Science.gov (United States)

    Cunningham, Patricia M.

    1998-01-01

    Looks at what is known about multisyllabic words, which is a lot more than educators knew when the previous generation of multisyllabic word instruction was created. Reviews the few studies that have carried out instructional approaches to increase students' ability to decode big words. Outlines a program of instruction, based on what is currently…

  12. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.

  13. The Fluid Reading Primer: Animated Decoding Support for Emergent Readers.

    Science.gov (United States)

    Zellweger, Polle T.; Mackinlay, Jock D.

    A prototype application called the Fluid Reading Primer was developed to help emergent readers with the process of decoding written words into their spoken forms. The Fluid Reading Primer is part of a larger research project called Fluid Documents, which is exploring the use of interactive animation of typography to show additional information in…

  14. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Ai-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a

  15. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to that of the decoder based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.
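
    The message-passing loop that such hardware accelerates can be illustrated in software, without any of the power-saving machinery, by Gallager-style bit flipping on a small sparse parity-check matrix. The matrix and the error below are toy choices for illustration.

```python
# Toy LDPC-style decoding: bit flipping on a small, sparse parity-check matrix
# (every bit participates in 2 checks, every check covers 3 bits). The paper
# above is about a low-power hardware architecture for storing the exchanged
# messages; this sketch only shows the iterative decoding loop itself.
import numpy as np

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

def bit_flip_decode(word, max_iters=10):
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            break                                  # all parity checks satisfied
        votes = H.T @ syndrome                     # failing checks per bit
        word[np.argmax(votes)] ^= 1                # flip the most-suspected bit
    return word

received = np.array([0, 0, 0, 0, 1, 0])            # all-zero codeword + 1 bit error
decoded = bit_flip_decode(received)
print("decoded:", decoded, "| valid:", not (H @ decoded % 2).any())
```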

  16. Design of FBG En/decoders in Coherent 2-D Time-polarization OCDMA Systems

    Science.gov (United States)

    Hou, Fen-fei; Yang, Ming

    2012-12-01

    A novel fiber Bragg grating (FBG)-based en/decoder for the two-dimensional (2-D) time-spreading and polarization multiplexer optical coding is proposed. Compared with other 2-D en/decoders, the proposed en/decoding for an optical code-division multiple-access (OCDMA) system uses a single phase-encoded FBG and coherent en/decoding. Furthermore, combined with reconstruction-equivalent-chirp technology, such en/decoders can be realized with a conventional simple fabrication setup. Experimental results of such en/decoders and the corresponding system test at a data rate of 5 Gbit/s demonstrate that this kind of 2-D FBG-based en/decoders could improve the performances of OCDMA systems.

  17. Bayesian population decoding of spiking neurons.

    Science.gov (United States)

    Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias

    2009-01-01

    The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
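
    The core computation, a posterior over the stimulus given observed spiking, can be sketched on a grid with a Poisson count likelihood standing in for the paper's leaky integrate-and-fire likelihood. Tuning curves, population size, and the stimulus below are invented for illustration.

```python
# Sketch of Bayesian population decoding: posterior over a static stimulus value
# given spike counts from a small population with known tuning curves. A Poisson
# count likelihood stands in for the paper's leaky integrate-and-fire model.
import numpy as np

rng = np.random.default_rng(4)
stim_grid = np.linspace(-2, 2, 401)                 # candidate stimulus values
prefs = np.linspace(-2, 2, 12)                      # preferred stimuli of 12 neurons

def rates(s):                                       # Gaussian tuning curves (Hz)
    return 5 + 40 * np.exp(-0.5 * ((s - prefs) / 0.6) ** 2)

true_s, T = 0.7, 0.5                                # stimulus and time window (s)
counts = rng.poisson(rates(true_s) * T)

# Posterior on the grid: flat prior times the product of Poisson likelihoods.
lam = rates(stim_grid[:, None]) * T                 # (grid, neurons) expected counts
log_post = (counts * np.log(lam) - lam).sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean = (stim_grid * post).sum()
sd = np.sqrt(((stim_grid - mean) ** 2 * post).sum())
print(f"true {true_s:.2f}, posterior mean {mean:.2f} +/- {sd:.2f}")
```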

  18. Bayesian population decoding of spiking neurons

    Directory of Open Access Journals (Sweden)

    Sebastian Gerwinn

    2009-10-01

    The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.

  19. Decoding the human genome

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit; Antonerakis, S E

    2002-01-01

    Decoding the human genome is a very topical subject, raising several questions beyond the purely scientific, in view of the two competing teams (public and private), the ethics of using the results, and the fact that the project apparently went faster and more easily than expected. The lecture series will address the following chapters: Scientific basis and challenges. Ethical and social aspects of genomics.

  20. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  1. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis.

    Science.gov (United States)

    Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M

    2016-11-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect path between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
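
    The direct/indirect decomposition at the heart of a path analysis can be illustrated with two ordinary regressions on synthetic data: one for the mediator, one for the outcome. The coefficients below are invented and only echo the constructs named in the abstract (visual attention span, pseudo-word reading, comprehension).

```python
# Minimal sketch of a direct-vs-indirect (mediated) effect decomposition of the
# kind tested in a path analysis. Synthetic data; variable names only echo the
# constructs in the abstract, not the study's measures or estimates.
import numpy as np

rng = np.random.default_rng(5)
n = 300
vas = rng.standard_normal(n)                              # visual attention span
pseudo = 0.6 * vas + 0.8 * rng.standard_normal(n)         # mediator: pseudo-word reading
compr = 0.3 * vas + 0.5 * pseudo + 0.7 * rng.standard_normal(n)

def ols(y, *cols):                                        # slopes via least squares
    X = np.column_stack(cols + (np.ones(n),))
    return np.linalg.lstsq(X, y, rcond=None)[0][:-1]      # drop the intercept

a = ols(pseudo, vas)[0]                                   # path: VAS -> mediator
b, direct = ols(compr, pseudo, vas)                       # mediator path and direct path
print(f"direct effect {direct:.2f}, indirect (a*b) {a * b:.2f}, total {direct + a * b:.2f}")
```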

  2. An English-French-German-Spanish Word Frequency Dictionary: A Correlation of the First Six Thousand Words in Four Single-Language Frequency Lists.

    Science.gov (United States)

    Eaton, Helen S., Comp.

    This semantic frequency list for English, French, German, and Spanish correlates 6,474 concepts represented by individual words in an order of diminishing occurrence. Designed as a research tool, the work is segmented into seven comparative "Thousand Concepts" lists with 115 sectional subdivisions, each of which begins with the key English word…

  3. Decoding Facial Expressions: A New Test with Decoding Norms.

    Science.gov (United States)

    Leathers, Dale G.; Emigh, Ted H.

    1980-01-01

    Describes the development and testing of a new facial meaning sensitivity test designed to determine how specialized are the meanings that can be decoded from facial expressions. Demonstrates the use of the test to measure a receiver's current level of skill in decoding facial expressions. (JMF)

  4. How Many Pages in a Single Word: Alternative Typo-poetics of Surrealist Magazines

    Directory of Open Access Journals (Sweden)

    Biljana Andonovska

    2013-07-01

    The paper examines the experimental design, typography and editorial strategies of the rare avant-garde publication Four Pages - Onanism of Death - And So On (1930), published by Oskar Davičo, Đorđe Kostić and Đorđe Jovanović, probably the first Surrealist Edition of the Belgrade surrealist group. Starting from its unconventional format and the way the authors (re)shape and (mis)direct each page in an autonomous fashion, I further analyze the intrinsic interaction between the text, its graphic embodiment and surrounding para-textual elements (illustrations, body text, titles, folding, dating, margins, comments). Special attention is given to the concepts of depersonalization, free association and automatic writing as primary poetical sources for the delinearisation of the reading process and 'emancipation' of the text, its content and syntax as well as its position, direction, and visual materiality on the page. Resisting conventional classifications and simplified distinctions between established print media and genres, this surrealist single-issue placard magazine mixes elements of the poster, magazine, and booklet. Its ambiguous nature leads us toward theoretical discussion of the avant-garde magazine as an autonomous literary genre and original, self-sufficient artwork, as was already suggested by the theory of Russian formalism.

  5. Lexical decoder for continuous speech recognition: sequential neural network approach

    International Nuclear Information System (INIS)

    Iooss, Christine

    1991-01-01

    The work presented in this dissertation concerns the study of a connectionist architecture to treat sequential inputs. In this context, the model proposed by J.L. Elman, a recurrent multilayer network, is used. Its abilities and its limits are evaluated. Modifications are made in order to treat erroneous or noisy sequential inputs and to classify patterns. The application context of this study concerns the realisation of a lexical decoder for analytical multi-speaker continuous speech recognition. Lexical decoding is completed from lattices of phonemes which are obtained after an acoustic-phonetic decoding stage relying on a K Nearest Neighbors search technique. Tests are done on sentences formed from a lexicon of 20 words. The results obtained show the ability of the proposed connectionist model to take into account the sequentiality at the input level, to memorize the context and to treat noisy or erroneous inputs. (author) [fr]

  6. The Role of Accessibility of Semantic Word Knowledge in Monolingual and Bilingual Fifth-Grade Reading

    Science.gov (United States)

    Cremer, M.; Schoonen, R.

    2013-01-01

    The influences of word decoding, availability, and accessibility of semantic word knowledge on reading comprehension were investigated for monolingual (n = 65) and bilingual children (n = 70). Despite equal decoding abilities, monolingual children outperformed bilingual children with regard to reading comprehension and…

  7. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes....... to solve such using module minimisation, or using our new Demand–Driven algorithm which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we...

  8. Some words on Word

    NARCIS (Netherlands)

    Janssen, Maarten; Visser, A.

    In many disciplines, the notion of a word is of central importance. For instance, morphology studies le mot comme tel, pris isolément (Mel'čuk, 1993 [74]). In the philosophy of language the word was often considered to be the primary bearer of meaning. Lexicography has as its fundamental role

  9. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.

  10. Astrophysics Decoding the cosmos

    CERN Document Server

    Irwin, Judith A

    2007-01-01

    Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.

  11. Disruption of Spelling-to-Sound Correspondence Mapping during Single-Word Reading in Patients with Temporal Lobe Epilepsy

    Science.gov (United States)

    Ledoux, Kerry; Gordon, Barry

    2011-01-01

    Processing and/or hemispheric differences in the neural bases of word recognition were examined in patients with long-standing, medically-intractable epilepsy localized to the left (N = 18) or right (N = 7) temporal lobe. Participants were asked to read words that varied in the frequency of their spelling-to-sound correspondences. For the right…

  12. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
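
    The underlying idea, learning the map from measured syndromes to corrections from examples, can be shown with a much smaller stand-in: a decision tree trained on the single-bit errors of a Hamming code. This is not the Boltzmann-machine decoder or the toric code of the paper, only the learn-to-decode pattern.

```python
# Toy "learned decoder" in the spirit of the paper: learn the map from error
# syndromes to corrections from examples. A decision tree stands in for the
# Boltzmann machine, and a (7,4) Hamming code stands in for the toric code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

H = np.array([[1, 0, 1, 0, 1, 0, 1],               # Hamming(7,4) parity checks
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Training set: the no-error case plus every single-bit error pattern.
errors = np.vstack([np.zeros(7, dtype=int), np.eye(7, dtype=int)])
syndromes = errors @ H.T % 2

decoder = DecisionTreeClassifier().fit(syndromes, errors)

e = np.zeros(7, dtype=int)
e[5] = 1                                            # a single-bit error to correct
s = H @ e % 2                                       # measured syndrome
correction = decoder.predict([s])[0]
print("suggested correction:", correction, "| fixes error:", np.array_equal(correction, e))
```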

  13. A Fully Parallel VLSI-implementation of the Viterbi Decoding Algorithm

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1989-01-01

    In this paper we describe the implementation of a K = 7, R = 1/2 single-chip Viterbi decoder intended to operate at 10-20 Mbit/sec. We propose a general, regular and area efficient floor-plan that is also suitable for implementation of decoders for codes with different generator polynomials...

  14. Fast decoding algorithms for geometric coded apertures

    International Nuclear Information System (INIS)

    Byard, Kevin

    2015-01-01

    Fast decoding algorithms are described for the class of coded aperture designs known as geometric coded apertures which were introduced by Gourlay and Stephen. When compared to the direct decoding method, the algorithms significantly reduce the number of calculations required when performing the decoding for these apertures and hence speed up the decoding process. Experimental tests confirm the efficacy of these fast algorithms, demonstrating a speed up of approximately two to three orders of magnitude over direct decoding.
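
    Direct (correlation) decoding of a coded aperture is easy to sketch in one dimension: encode a sparse scene with a mask whose periodic autocorrelation is flat, then correlate the detector image with a balanced decoding pattern, here via the FFT. The Legendre-sequence mask below is a standard textbook choice, not one of the geometric apertures discussed in the paper.

```python
# Sketch of coded-aperture imaging with correlation ("direct") decoding, using a
# 1-D Legendre-sequence mask; the FFT is one standard way to make the decoding
# correlation fast. Noise-free and purely illustrative.
import numpy as np

p = 19                                              # prime with p % 4 == 3
residues = {(i * i) % p for i in range(1, p)}
mask = np.array([1 if i in residues else 0 for i in range(p)])   # open/closed cells
dec = 2 * mask - 1                                  # balanced decoding pattern

scene = np.zeros(p)
scene[3], scene[11] = 5.0, 2.0                      # two point sources

# Detector image: circular convolution of the scene with the mask.
detector = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

# Decoding: circular cross-correlation with the decoding pattern, via the FFT.
recon = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(dec))))

print("true sources at", [3, 11], "-> brightest decoded bins:",
      sorted(np.argsort(recon)[-2:].tolist()))
```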

  15. Coding and decoding with dendrites.

    Science.gov (United States)

    Papoutsi, Athanasia; Kastellakis, George; Psarrou, Maria; Anastasakis, Stelios; Poirazi, Panayiota

    2014-02-01

    Since the discovery of complex, voltage dependent mechanisms in the dendrites of multiple neuron types, great effort has been devoted in search of a direct link between dendritic properties and specific neuronal functions. Over the last few years, new experimental techniques have allowed the visualization and probing of dendritic anatomy, plasticity and integrative schemes with unprecedented detail. This vast amount of information has caused a paradigm shift in the study of memory, one of the most important pursuits in Neuroscience, and calls for the development of novel theories and models that will unify the available data according to some basic principles. Traditional models of memory considered neural cells as the fundamental processing units in the brain. Recent studies however are proposing new theories in which memory is not only formed by modifying the synaptic connections between neurons, but also by modifications of intrinsic and anatomical dendritic properties as well as fine tuning of the wiring diagram. In this review paper we present previous studies along with recent findings from our group that support a key role of dendrites in information processing, including the encoding and decoding of new memories, both at the single cell and the network level. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Increased motor preparation activity during fluent single word production in DS: A correlate for stuttering frequency and severity.

    Science.gov (United States)

    Vanhoutte, Sarah; Santens, Patrick; Cosyns, Marjan; van Mierlo, Pieter; Batens, Katja; Corthals, Paul; De Letter, Miet; Van Borsel, John

    2015-08-01

    Abnormal speech motor preparation is suggested to be a neural characteristic of stuttering. One of the neurophysiological substrates of motor preparation is the contingent negative variation (CNV). The CNV is an event-related, slow negative potential that occurs between two defined stimuli. Unfortunately, CNV tasks are rarely studied in developmental stuttering (DS). Therefore, the present study aimed to evaluate motor preparation in DS by use of a CNV task. Twenty five adults who stutter (AWS) and 35 fluent speakers (FS) were included. They performed a picture naming task while an electro-encephalogram was recorded. The slope of the CNV was evaluated at frontal, central and parietal electrode sites. In addition, a correlation analysis was performed with stuttering severity and frequency measures. There was a marked increase in CNV slope in AWS as compared to FS. This increase was observed over the entire scalp with respect to stimulus onset, and only over the right hemisphere with respect to lip movement onset. Moreover, strong positive correlations were found between CNV slope and stuttering frequency and severity. As the CNV is known to reflect the activity in the basal ganglia-thalamo-cortical-network, the present findings confirm an increased activation of this loop during speech motor preparation in stuttering. The more a person stutters, the more neurons of this cortical-subcortical network seem to be activated. Because this increased CNV slope was observed during fluent single word production, it is discussed whether or not this observation refers to a successful compensation strategy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Neural Response After a Single ECT Session During Retrieval of Emotional Self-Referent Words in Depression

    DEFF Research Database (Denmark)

    Miskowiak, Kamilla W; Macoveanu, Julian; Jørgensen, Martin B

    2018-01-01

    of their electroconvulsive therapy course in a double-blind, between-groups design. The following day, patients were given a self-referential emotional word categorization test and a free recall test. This was followed by an incidental word recognition task during whole-brain functional magnetic resonance imaging at 3T...... response may reflect early facilitation of memory for positive self-referent information, which could contribute to improvements in depressive symptoms including feelings of self-worth with repeated treatments....

  18. Visual perception as retrospective Bayesian decoding from high- to low-level features.

    Science.gov (United States)

    Ding, Stephanie; Cueva, Christopher J; Tsodyks, Misha; Qian, Ning

    2017-10-24

    When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. Published under the PNAS license.

  19. Where one hand meets the other: limb-specific and action-dependent movement plans decoded from preparatory signals in single human frontoparietal brain areas.

    Science.gov (United States)

    Gallivan, Jason P; McLean, D Adam; Flanagan, J Randall; Culham, Jody C

    2013-01-30

    Planning object-directed hand actions requires successful integration of the movement goal with the acting limb. Exactly where and how this sensorimotor integration occurs in the brain has been studied extensively with neurophysiological recordings in nonhuman primates, yet to date, because of limitations of non-invasive methodologies, the ability to examine the same types of planning-related signals in humans has been challenging. Here we show, using a multivoxel pattern analysis of functional MRI (fMRI) data, that the preparatory activity patterns in several frontoparietal brain regions can be used to predict both the limb used and hand action performed in an upcoming movement. Participants performed an event-related delayed movement task whereby they planned and executed grasp or reach actions with either their left or right hand toward a single target object. We found that, although the majority of frontoparietal areas represented hand actions (grasping vs reaching) for the contralateral limb, several areas additionally coded hand actions for the ipsilateral limb. Notable among these were subregions within the posterior parietal cortex (PPC), dorsal premotor cortex (PMd), ventral premotor cortex, dorsolateral prefrontal cortex, presupplementary motor area, and motor cortex, a region more traditionally implicated in contralateral movement generation. Additional analyses suggest that hand actions are represented independently of the intended limb in PPC and PMd. In addition to providing a unique mapping of limb-specific and action-dependent intention-related signals across the human cortical motor system, these findings uncover a much stronger representation of the ipsilateral limb than expected from previous fMRI findings.
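
    The decoding logic itself, cross-validated classification of the planned condition from trial-wise voxel patterns within a region, can be sketched with synthetic data and a linear classifier. Pattern sizes, effect sizes, and noise below are invented; only the grasp/reach and left/right labels echo the study.

```python
# Sketch of the multivoxel pattern analysis (MVPA) logic: cross-validated
# classification of planned action (grasp vs reach) and acting limb (left vs
# right) from trial-wise voxel patterns of one region. Data here are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials_per_cond, n_voxels = 40, 120
conds = [(action, limb) for action in ("grasp", "reach") for limb in ("L", "R")]

# Each condition evokes a weak, distributed pattern on top of trial-by-trial noise.
patterns = {c: 0.4 * rng.standard_normal(n_voxels) for c in conds}
X = np.vstack([patterns[c] + rng.standard_normal((n_trials_per_cond, n_voxels))
               for c in conds])
labels = np.repeat(np.arange(len(conds)), n_trials_per_cond)
action = np.array([conds[i][0] for i in labels])
limb = np.array([conds[i][1] for i in labels])

clf = SVC(kernel="linear")
for name, y in [("action (grasp vs reach)", action), ("limb (left vs right)", limb)]:
    acc = cross_val_score(clf, X, y, cv=8).mean()
    print(f"decoding {name}: {acc:.2f} (chance 0.50)")
```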

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. Decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.

  1. Contributions of Phonological Awareness, Phonological Short-Term Memory, and Rapid Automated Naming, toward Decoding Ability in Students with Mild Intellectual Disability

    Science.gov (United States)

    Soltani, Amanallah; Roslan, Samsilah

    2013-01-01

    Reading decoding ability is a fundamental skill for acquiring the word-specific orthographic information necessary for skilled reading. Decoding ability and its underlying phonological processing skills have been heavily investigated among typically developing students. However, the issue has rarely been noticed among students with intellectual…

  2. Tracking Perceptual and Memory Decisions by Decoding Brain Activity

    NARCIS (Netherlands)

    van Vugt, Marieke; Brandt, Armin; Schulze-Bonhage, Andreas

    2017-01-01

    Decision making is thought to involve a process of evidence accumulation, modelled as a drifting diffusion process. This modeling framework suggests that all single-stage decisions involve a similar evidence accumulation process. In this paper we use decoding by machine learning classifiers on

  3. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...

  4. A restricted test of single word intelligibility in 3-year-old children with and without cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Poulsen, Mads

    2012-01-01

    Abstract Objective: In a previous study, children with cleft palate with hard palate closure at 12 months of age showed more typical phonological development than children with an unrepaired hard palate at 36 months of age. This finding was based on narrow transcription of word initial target...... hard palate closure at either12 months (HPR (hard palate repaired)) or 36 months (HPU (hard palate unrepaired)), were compared to data obtained from 14 age-matched, typically developing, control children. Methods: Video recordings of the children naming target words were shown to 84 naïve listeners...... consonants obtained from a simple naming test. To evaluate the relevance of this finding, we investigated how well the children's target words were understood by 84 naïve listeners. Design: A cross-sectional study. Participants: Data obtained from twenty-eight children with UCLP, 3 years of age, who received...

  5. Decoding ensemble activity from neurophysiological recordings in the temporal cortex.

    Science.gov (United States)

    Kreiman, Gabriel

    2011-01-01

    We study subjects with pharmacologically intractable epilepsy who undergo semi-chronic implantation of electrodes for clinical purposes. We record physiological activity from tens to more than one hundred electrodes implanted in different parts of neocortex. These recordings provide higher spatial and temporal resolution than non-invasive measures of human brain activity. Here we discuss our efforts to develop hardware and algorithms to interact with the human brain by decoding ensemble activity in single trials. We focus our discussion on decoding visual information during a variety of visual object recognition tasks but the same technologies and algorithms can also be directly applied to other cognitive phenomena.

  6. Orientation decoding: Sense in spirals?

    Science.gov (United States)

    Clifford, Colin W G; Mannion, Damien J

    2015-04-15

    The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias (the radial bias toward local orientations that are collinear with the centre of gaze) and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson's (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain's processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Signal Words

    Science.gov (United States)

    SIGNAL WORDS TOPIC FACT SHEET NPIC fact sheets are designed to answer questions that are commonly asked by the ... making decisions about pesticide use. What are Signal Words? Signal words are found on pesticide product labels, ...

  8. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new... decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both... decoders perform repeated decoding trials and decoding information is exchanged between them...

  9. SWIPT in Multiuser MIMO Decode-and-Forward Relay Broadcasting Channel with Energy Harvesting Relays

    KAUST Repository

    Benkhelifa, Fatma; Salem, Ahmed Sultan; Alouini, Mohamed-Slim

    2017-01-01

    In this paper, we consider a multiuser multiple- input multiple-output (MIMO) decode-and-forward (DF) relay broadcasting channel (BC) with single source, multiple energy harvesting relays and multiple destinations. Since the end-to-end sum rate

  10. Soft-decision decoding of RS codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of RS codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when two or more...

  11. Toric Codes, Multiplicative Structure and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    Long linear codes constructed from toric varieties over finite fields, their multiplicative structure and decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for decoding, resembling the decoding of Reed-Solomon codes and al...

  12. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGA's. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...

  13. Error Recovery Properties and Soft Decoding of Quasi-Arithmetic Codes

    Directory of Open Access Journals (Sweden)

    Christine Guillemot

    2007-08-01

    Full Text Available This paper first introduces a new set of aggregated state models for soft-input decoding of quasi-arithmetic (QA) codes with a termination constraint. The decoding complexity with these models is linear in the sequence length. The aggregation parameter controls the tradeoff between decoding performance and complexity. It is shown that close-to-optimal decoding performance can be obtained with low values of the aggregation parameter, that is, with a complexity which is significantly reduced with respect to optimal QA bit/symbol models. The choice of the aggregation parameter depends on the synchronization recovery properties of the QA codes. This paper thus describes a method to estimate the probability mass function (PMF) of the gain/loss of symbols following a single bit error (i.e., of the difference between the number of encoded and decoded symbols). The entropy of the gain/loss turns out to be the average amount of information conveyed by a length constraint on both the optimal and aggregated state models. This quantity allows us to choose the value of the aggregation parameter that will lead to close-to-optimal decoding performance. It is shown that the optimum position for the length constraint is not the last time instant of the decoding process. This observation leads to the introduction of a new technique for robust decoding of QA codes with redundancy which turns out to outperform techniques based on the concept of the forbidden symbol.
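
    The entropy of the gain/loss PMF mentioned above is straightforward to compute once the PMF has been estimated. A minimal Python sketch, using a made-up PMF purely for illustration (the actual distribution must be estimated for the QA code at hand):

        import numpy as np

        # Hypothetical PMF of the gain/loss of decoded symbols after a single bit error.
        # Keys are (decoded - encoded) symbol counts; values must sum to 1.
        pmf = {-2: 0.05, -1: 0.20, 0: 0.50, 1: 0.20, 2: 0.05}

        def entropy_bits(probabilities):
            """Shannon entropy in bits of a discrete PMF."""
            p = np.array([v for v in probabilities if v > 0.0])
            return float(-(p * np.log2(p)).sum())

        # Per the abstract, this entropy approximates the information conveyed by a
        # length constraint and can guide the choice of the aggregation parameter.
        print(f"H(gain/loss) = {entropy_bits(pmf.values()):.3f} bits")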

  14. Selectivity of lexical-semantic disorders in Polish-speaking patients with aphasia: evidence from single-word comprehension.

    Science.gov (United States)

    Jodzio, Krzysztof; Biechowska, Daria; Leszniewska-Jodzio, Barbara

    2008-09-01

    Several neuropsychological studies have shown that patients with brain damage may demonstrate selective category-specific deficits of auditory comprehension. The present paper reports on an investigation of aphasic patients' preserved ability to perform a semantic task on spoken words despite severe impairment in auditory comprehension, as shown by failure in matching spoken words to pictured objects. Twenty-six aphasic patients (11 women and 15 men) with impaired speech comprehension due to a left-hemisphere ischaemic stroke were examined; all were right-handed and native speakers of Polish. Six narrowly defined semantic categories for which dissociations have been reported are colors, body parts, animals, food, objects (mostly tools), and means of transportation. An analysis using one-way ANOVA with repeated measures in conjunction with Wilks' lambda test revealed significant discrepancies among these categories in aphasic patients, who had much more difficulty comprehending names of colors than they did comprehending names of other objects (F(5,21) = 13.15; p < ...). An explanation in terms of word frequency and/or visual complexity was ruled out. Evidence from the present study supports the position that so-called "global" aphasia is an imprecise term and should be redefined. These results are discussed within the connectionist and modular perspectives on category-specific deficits in aphasia.

  15. Dynamics of intracellular information decoding

    International Nuclear Information System (INIS)

    Kobayashi, Tetsuya J; Kamimura, Atsushi

    2011-01-01

    A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity

  16. Dynamics of intracellular information decoding.

    Science.gov (United States)

    Kobayashi, Tetsuya J; Kamimura, Atsushi

    2011-10-01

    A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity.

  17. Human Genome Research: Decoding DNA

    Science.gov (United States)

    Human Genome Research: Decoding DNA. Resources ... of the DNA double helix during April 2003. James D. Watson, Francis Crick, and Maurice Wilkins were ... company Celera announced the completion of a "working draft" reference DNA sequence of the human ...

  18. Word Translation Entropy

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard

    This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied. In p...

  19. Nine Words - Nine Columns

    DEFF Research Database (Denmark)

    Trempe Jr., Robert B.; Buthke, Jan

    2016-01-01

    of computational and mechanical processes towards an anesthetic. Each team received a single word, translating and evolving that word first into a double-curved computational surface, next a ruled computational surface, and then a physically shaped foam mold via a 6-axis robot. The foam molds then operated...

  20. Orthographic Context Sensitivity in Vowel Decoding by Portuguese Monolingual and Portuguese-English Bilingual Children

    Science.gov (United States)

    Vale, Ana Paula

    2011-01-01

    This study examines the pronunciation of the first vowel in decoding disyllabic pseudowords derived from Portuguese words. Participants were 96 Portuguese monolinguals and 52 Portuguese-English bilinguals of equivalent Portuguese reading levels. The results indicate that sensitivity to vowel context emerges early, both in monolinguals and in…

  1. Elegant grapheme-phoneme correspondence: a periodic chart and singularity generalization unify decoding.

    Science.gov (United States)

    Gates, Louis

    2017-12-11

    The accompanying article introduces highly transparent grapheme-phoneme relationships embodied within a Periodic table of decoding cells, which arguably presents the quintessential transparent decoding elements. The study then folds these cells into one highly transparent but simply stated singularity generalization; this generalization unifies the decoding cells (97% transparency). Further, the periodic table and singularity generalization together highlight the connectivity of the periodic cells. Moreover, these interrelated cells, coupled with the singularity generalization, clarify teaching targets and enable efficient learning of the letter-sound code. This singularity generalization, in turn, serves as a model for creating unified but easily stated subordinate generalizations for any one of the transparent cells or groups of cells shown within the tables. The article then expands the periodic cells into two tables of teacher-ready sample word lists: one table includes sample words for the basic and phonogram vowel cells, and the other table embraces word samples for the transparent consonant cells. The paper concludes with suggestions for teaching the cellular transparency embedded within recurring isolated words and running text to promote decoding automaticity of the periodic cells.

  2. The Effects of Musical Training on the Decoding Skills of German-Speaking Primary School Children

    Science.gov (United States)

    Rautenberg, Iris

    2015-01-01

    This paper outlines the results of a long-term study of 159 German-speaking primary school children. The correlations between musical skills (perception and differentiation of rhythmical and tonal/melodic patterns) and decoding skills, and the effects of musical training on word-level reading abilities were investigated. Cognitive skills and…

  3. Fast and Flexible Successive-Cancellation List Decoders for Polar Codes

    Science.gov (United States)

    Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.

    2017-11-01

    Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between the error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
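
    Fast-SSCL and Fast-SSCL-SPC concern list management and special-node handling, but they build on the standard successive-cancellation recursion. The following Python sketch shows only the textbook min-sum f/g LLR updates of that recursion, not the authors' architecture:

        import numpy as np

        def f_minsum(a, b):
            """Check-node (upper-branch) LLR update, min-sum approximation."""
            return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

        def g_update(a, b, u_hat):
            """Variable-node (lower-branch) LLR update given the partial-sum bit u_hat."""
            return b + (1 - 2 * u_hat) * a

        # Example: combining two channel LLRs for a length-2 polar transform.
        llr = np.array([1.2, -0.4])
        u0_llr = f_minsum(llr[0], llr[1])       # decode u0 first
        u0 = int(u0_llr < 0)                    # hard decision (a frozen bit would be forced to 0)
        u1_llr = g_update(llr[0], llr[1], u0)   # then decode u1 given u0
        print(u0_llr, u1_llr)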

  4. Speed and automaticity of word recognition - inseparable twins?

    DEFF Research Database (Denmark)

    Poulsen, Mads; Asmussen, Vibeke; Elbro, Carsten

    'Speed and automaticity' of word recognition is a standard collocation. However, it is not clear whether speed and automaticity (i.e., effortlessness) make independent contributions to reading comprehension. In theory, both speed and automaticity may save cognitive resources for comprehension...... processes. Hence, the aim of the present study was to assess the unique contributions of word recognition speed and automaticity to reading comprehension while controlling for decoding speed and accuracy. Method: 139 Grade 5 students completed tests of reading comprehension and computer-based tests of speed...... of decoding and word recognition together with a test of effortlessness (automaticity) of word recognition. Effortlessness was measured in a dual task in which participants were presented with a word enclosed in an unrelated figure. The task was to read the word and decide whether the figure was a triangle...

  5. Neural Response After a Single ECT Session During Retrieval of Emotional Self-Referent Words in Depression: A Randomized, Sham-Controlled fMRI Study

    Science.gov (United States)

    Miskowiak, Kamilla W; Macoveanu, Julian; Jørgensen, Martin B; Støttrup, Mette M; Ott, Caroline V; Jensen, Hans M; Jørgensen, Anders; Harmer, J; Paulson, Olaf B; Kessing, Lars V; Siebner, Hartwig R

    2018-01-01

    Background: Negative neurocognitive bias is a core feature of depression that is reversed by antidepressant drug treatment. However, it is unclear whether modulation of neurocognitive bias is a common mechanism of distinct biological treatments. This randomized controlled functional magnetic resonance imaging study explored the effects of a single electroconvulsive therapy session on self-referent emotional processing. Methods: Twenty-nine patients with treatment-resistant major depressive disorder were randomized to one active or sham electroconvulsive therapy session at the beginning of their electroconvulsive therapy course in a double-blind, between-groups design. The following day, patients were given a self-referential emotional word categorization test and a free recall test. This was followed by an incidental word recognition task during whole-brain functional magnetic resonance imaging at 3T. Mood was assessed at baseline, on the functional magnetic resonance imaging day, and after 6 electroconvulsive therapy sessions. Data were complete and analyzed for 25 patients (electroconvulsive therapy: n = 14, sham: n = 11). The functional magnetic resonance imaging data were analyzed using the FMRIB Software Library randomize algorithm, and the Threshold-Free Cluster Enhancement method was used to identify significant clusters (corrected at P < ...) ... words. However, electroconvulsive therapy reduced the retrieval-specific neural response for positive words in the left frontopolar cortex. This effect occurred in the absence of differences between groups in behavioral performance or mood symptoms. Conclusions: The observed effect of electroconvulsive therapy on prefrontal response may reflect early facilitation of memory for positive self-referent information, which could contribute to improvements in depressive symptoms including feelings of self-worth with repeated treatments. PMID:29718333

  6. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
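
    For readers new to flipping-type decoders, the sketch below shows the classic binary bit-flipping baseline from which weighted and non-binary symbol-flipping variants such as the one above depart; it is a generic Python illustration, not the proposed algorithm, and the parity-check matrix and received word are toy assumptions:

        import numpy as np

        def bit_flipping_decode(H, y, max_iters=50):
            """Plain binary bit-flipping decoding of a hard-decision word y
            with parity-check matrix H (both numpy arrays of 0/1)."""
            x = y.copy()
            for _ in range(max_iters):
                syndrome = H.dot(x) % 2
                if not syndrome.any():
                    return x, True                       # all checks satisfied
                # count, for every bit, how many unsatisfied checks it participates in
                unsat_counts = H.T.dot(syndrome)
                # flip the bit(s) involved in the most unsatisfied checks
                x[unsat_counts == unsat_counts.max()] ^= 1
            return x, False

        # Toy example: (7,4) Hamming code used as a stand-in for an LDPC code.
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])
        received = np.array([1,0,0,0,0,0,0])             # all-zero codeword with an error in bit 0
        decoded, ok = bit_flipping_decode(H, received)
        print(decoded, ok)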

  7. Scaffolding Students’ Independent Decoding of Unfamiliar Text with a Prototype of an eBook-feature

    Directory of Open Access Journals (Sweden)

    Stig T Gissel

    2015-10-01

    Full Text Available This study was undertaken to design, evaluate and refine an eBook-feature that supports students’ decoding of unfamiliar text. The feature supports students’ independent reading of eBooks with text-to-speech, graded support in the form of syllabification and rhyme analogy, and by dividing the word material into different categories based on the frequency and regularity of the word or its constituent parts. The eBook-feature is based on connectionist models of reading and reading acquisition and the theory of scaffolding. Students are supported in mapping between spelling and sound, in identifying the relevant spelling patterns and in generalizing, in order to strengthen their decoding skills. The prototype was evaluated with Danish students in the second grade to see how and under what circumstances students can use the feature in ways that strengthen their decoding skills and support them in reading unfamiliar text. It was found that most students could interact with the eBook-material in ways that the envisioned learning trajectory in the study predicts are beneficial in strengthening their decoding skills. The study contributes both principles for designing digital learning material with supportive features for decoding unfamiliar text and a concrete design proposal. The perspectives for making reading acquisition more differentiated and meaningful for second graders in languages with irregular spelling are discussed.

  8. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of non-binary low-density parity check codes (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER) of 10 -5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  9. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Directory of Open Access Journals (Sweden)

    Jiahui Meng

    2018-01-01

    Full Text Available In order to improve the performance of non-binary low-density parity check codes (LDPC hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at the bit error rate (BER of 10−5 over an additive white Gaussian noise (AWGN channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  10. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

    Low Density Parity Check (LDPC) code approaches Shannon–limit performance for binary field and long code lengths. However, performance of binary LDPC code is degraded when the code word length is small. An optimized min-sum algorithm for LDPC code is proposed in this paper. In this algorithm unlike other decoding methods, an optimization factor has been introduced in both check node and bit node of the Min-sum algorithm. The optimization factor is obtained before decoding program, and the sam...
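
    The check-node computation that such optimization (normalization) factors modify is compact. A Python sketch of a normalized min-sum check-node update for binary LDPC codes, with a single scaling factor standing in for the paper's separately optimized check-node and bit-node factors:

        import numpy as np

        def check_node_update(incoming_llrs, alpha=0.8):
            """Normalized min-sum check-node update for binary LDPC decoding.

            incoming_llrs: LLR messages arriving at one check node from its variable nodes.
            alpha:         normalization ("optimization") factor, 0 < alpha <= 1.
            Returns the outgoing extrinsic message toward each variable node.
            """
            llrs = np.asarray(incoming_llrs, dtype=float)
            signs = np.sign(llrs)
            mags = np.abs(llrs)
            total_sign = np.prod(signs)
            # For each edge, the outgoing magnitude is the minimum over the *other* edges.
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            out_mags = np.where(np.arange(len(mags)) == order[0], min2, min1)
            return alpha * total_sign * signs * out_mags

        print(check_node_update([1.5, -0.7, 2.3, 0.9]))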

  11. Synthetic phonics and decodable instructional reading texts: How far do these support poor readers?

    Science.gov (United States)

    Price-Mohr, Ruth Maria; Price, Colin Bernard

    2018-05-01

    This paper presents data from a quasi-experimental trial with paired randomisation that emerged during the development of a reading scheme for children in England. This trial was conducted with a group of 12 children, aged 5-6, and considered to be falling behind their peers in reading ability and a matched control group. There were two intervention conditions (A: using mixed teaching methods and a high percentage of non-phonically decodable vocabulary; P: using mixed teaching methods and low percentage of non-decodable vocabulary); allocation to these was randomised. Children were assessed at pre- and post-test on standardised measures of receptive vocabulary, phoneme awareness, word reading, and comprehension. Two class teachers in the same school each selected 6 children, who they considered to be poor readers, to participate (n = 12). A control group (using synthetic phonics only and phonically decodable vocabulary) was selected from the same 2 classes based on pre-test scores for word reading (n = 16). Results from the study show positive benefits for poor readers from using both additional teaching methods (such as analytic phonics, sight word vocabulary, and oral vocabulary extension) in addition to synthetic phonics, and also non-decodable vocabulary in instructional reading text. Copyright © 2018 John Wiley & Sons, Ltd.

  12. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based...... on algebraic-geometry codes and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made....

  13. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  14. Interior point decoding for linear vector channels

    International Nuclear Information System (INIS)

    Wadayama, T

    2008-01-01

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem

  15. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.
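
    The idea of relaxing maximum likelihood decoding to a convex program can be illustrated in a much simpler setting than the vector channels treated above: Feldman-style LP decoding of a single parity check over a memoryless channel. A Python sketch (the LLR values are arbitrary assumptions):

        import itertools
        import numpy as np
        from scipy.optimize import linprog

        # Single parity-check code of length 4: valid codewords have even weight.
        n = 4
        check = [0, 1, 2, 3]

        # Hypothetical channel LLRs, gamma_i = log P(y_i|x_i=0) - log P(y_i|x_i=1);
        # ML decoding minimizes sum_i gamma_i * x_i over codewords.
        gamma = np.array([1.1, -0.3, 0.4, -2.0])

        # Relaxation: replace the codeword set by linear "forbidden set" inequalities:
        # for every odd-size subset S of the check, sum_S x_i - sum_{not S} x_i <= |S| - 1.
        A_ub, b_ub = [], []
        for r in range(1, len(check) + 1, 2):
            for S in itertools.combinations(check, r):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in check if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)

        res = linprog(c=gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        print(res.x)   # a vertex of the relaxed polytope; here the even-weight word [0, 1, 0, 1]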

  16. Co-lateralized bilingual mechanisms for reading in single and dual language contexts: evidence from visual half-field processing of action words in proficient bilinguals

    Directory of Open Access Journals (Sweden)

    Marlena eKrefta

    2015-08-01

    Full Text Available When reading, proficient bilinguals seem to engage the same cognitive circuits regardless of the language in use. Yet, whether or not such ‘bilingual’ mechanisms would be lateralized in the same way in distinct – single or dual – language contexts is a question for debate. To fill this gap, we tested 18 highly proficient Polish (L1 – English (L2 childhood bilinguals whose task was to read aloud one of the two laterally presented action verbs, one stimulus per visual half field. While in the single-language blocks only L1 or L2 words were shown, in the subsequent mixed-language blocks words from both languages were concurrently displayed. All stimuli were presented for 217 ms followed by masks in which letters were replaced with hash marks. Since in non-simultaneous bilinguals the control of language, skilled actions (including reading, and representations of action concepts are typically left lateralized, the vast majority of our participants showed the expected, significant right visual field advantage for L1 and L2, both for accuracy and response times. The observed effects were nevertheless associated with substantial variability in the strength of the lateralization of the mechanisms involved. Moreover, although it could be predicted that participants’ performance should be better in a single-language context, accuracy was significantly higher and response times were significantly shorter in a dual-language context, irrespective of the language tested. Finally, for both accuracy and response times, there were significant positive correlations between the laterality indices (LIs of both languages independent of the context, with a significantly greater left-sided advantage for L1 vs. L2 in the mixed-language blocks, based on LIs calculated for response times. Thus, despite similar representations of the two languages in the bilingual brain, these results also point to the functional separation of L1 and L2 in the dual

  17. Video encoder/decoder for encoding/decoding motion compensated images

    NARCIS (Netherlands)

    1996-01-01

    Video encoder and decoder, provided with a motion compensator for motion-compensated video coding or decoding in which a picture is coded or decoded in blocks in alternately horizontal and vertical steps. The motion compensator is provided with addressing means (160) and controlled multiplexers

  18. Decoding bipedal locomotion from the rat sensorimotor cortex

    Science.gov (United States)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds

  19. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan; Eltawil, Ahmed M.; Salama, Khaled N.

    2010-01-01

    or receive antennas. Tree-searching type decoder structures such as Sphere decoder and K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in literature

  20. Short-term retention of a single word relies on retrieval from long-term memory when both rehearsal and refreshing are disrupted.

    Science.gov (United States)

    Rose, Nathan S; Buchsbaum, Bradley R; Craik, Fergus I M

    2014-07-01

    Many working memory (WM) models propose that the focus of attention (or primary memory) has a capacity limit of one to four items, and therefore, that performance on WM tasks involves retrieving some items from long-term (or secondary) memory (LTM). In the present study, we present evidence suggesting that recall of even one item on a WM task can involve retrieving it from LTM. The WM task required participants to make a deep (living/nonliving) or shallow ("e"/no "e") level-of-processing (LOP) judgment on one word and to recall the word after a 10-s delay on each trial. During the delay, participants either rehearsed the word or performed an easy or a hard math task. When the to-be-remembered item could be rehearsed, recall was fast and accurate. When it was followed by a math task, recall was slower, error-prone, and benefited from a deeper LOP at encoding, especially for the hard math condition. The authors suggest that a covert-retrieval mechanism may have refreshed the item during easy math, and that the hard math condition shows that even a single item cannot be reliably held in WM during a sufficiently distracting task--therefore, recalling the item involved retrieving it from LTM. Additionally, performance on a final free recall (LTM) test was better for items recalled following math than following rehearsal, suggesting that initial recall following math involved elaborative retrieval from LTM, whereas rehearsal did not. The authors suggest that the extent to which performance on WM tasks involves retrieval from LTM depends on the amounts of disruption to both rehearsal and covert-retrieval/refreshing maintenance mechanisms.

  1. Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter

    Science.gov (United States)

    Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.

    2016-01-01

    Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
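
    The Bayesian step is easiest to see in the conventional sorted-unit setting that the authors compare against: with Poisson spike counts and known rate maps, position is decoded by maximizing the posterior over position bins. The Python sketch below is that conventional decoder with toy numbers, not the marked point process filter itself:

        import numpy as np

        def decode_position(counts, rate_maps, dt, prior=None):
            """Bayesian MAP decoding of position from spike counts of sorted units.

            counts:    (n_units,) spike counts observed in a window of length dt
            rate_maps: (n_units, n_positions) expected firing rate of each unit at each position
            Returns the index of the most probable position bin.
            """
            expected = rate_maps * dt                               # expected counts per bin
            # Poisson log likelihood summed over units (count-factorial terms dropped,
            # as they do not depend on position).
            log_like = (counts[:, None] * np.log(expected + 1e-12) - expected).sum(axis=0)
            log_post = log_like if prior is None else log_like + np.log(prior + 1e-12)
            return int(np.argmax(log_post))

        # Toy example with 3 units and 5 position bins (rates in Hz, hypothetical).
        rate_maps = np.array([[20,  5,  1,  1,  1],
                              [ 1, 15, 20,  5,  1],
                              [ 1,  1,  5, 15, 20]], dtype=float)
        counts = np.array([0, 4, 1])            # spikes observed in a 250 ms window
        print(decode_position(counts, rate_maps, dt=0.25))   # -> 2 (the middle position bin)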

  2. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder...

  3. Application of Beyond Bound Decoding for High Speed Optical Communications

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José

    2013-01-01

    This paper studies the application of the beyond bound decoding method for high speed optical communications. This hard-decision decoding method outperforms the traditional minimum distance decoding method, with a total net coding gain of 10.36 dB.

  4. Encoding and Decoding Models in Cognitive Electrophysiology

    Directory of Open Access Journals (Sweden)

    Christopher R. Holdgraf

    2017-09-01

    Full Text Available Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.
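
    As a minimal concrete counterpart of the encoding/decoding distinction described above, the following Python sketch fits ridge-regression encoding and decoding models on simulated data (all sizes and noise levels are arbitrary assumptions; the tutorial's own notebooks should be consulted for real analyses):

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Simulated data: 500 time points, 10 stimulus features, 8 recording channels.
        n_samples, n_features, n_channels = 500, 10, 8
        stim = rng.normal(size=(n_samples, n_features))              # stimulus feature matrix
        true_weights = rng.normal(size=(n_features, n_channels))     # hypothetical receptive fields
        neural = stim @ true_weights + 0.5 * rng.normal(size=(n_samples, n_channels))

        X_train, X_test, y_train, y_test = train_test_split(stim, neural, random_state=0)

        # Encoding model: predict each channel's activity from stimulus features.
        encoder = Ridge(alpha=1.0).fit(X_train, y_train)
        print("average encoding R^2 across channels:", encoder.score(X_test, y_test))

        # A simple decoding model reverses the mapping: predict stimulus features from activity.
        decoder = Ridge(alpha=1.0).fit(y_train, X_train)
        print("decoding R^2:", decoder.score(y_test, X_test))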

  5. Multiformat decoder for a DSP-based IP set-top box

    Science.gov (United States)

    Pescador, F.; Garrido, M. J.; Sanz, C.; Juárez, E.; Samper, D.; Antoniello, R.

    2007-05-01

    Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have been recently introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for PC platform was fully tested and ported to the DSP. Using this code an optimization process was started achieving a 90% speedup. This process allows real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated in an IP STB and tested in a real environment using DVD movies and TV channels with excellent results.

  6. Discrete versus multiple word displays: A re-analysis of studies comparing dyslexic and typically developing children

    Directory of Open Access Journals (Sweden)

    Pierluigi eZoccolotti

    2015-10-01

    Full Text Available The study examines whether impairments in reading a text can be explained by a deficit in word decoding alone, or whether an additional deficit in the processes governing the integration of reading subcomponents (including eye movement programming and pronunciation) should also be postulated. We report a re-analysis of data from eleven previous experiments conducted in our lab in which reading performance on single, discrete word displays as well as on multiple displays (texts and, in a few cases, word lists) was investigated in groups of dyslexic children and typically developing readers. The analysis focuses on measures of time, not accuracy. Across experiments, dyslexic children are slower and more variable than typically developing readers both in reading texts and in vocal RTs to singly presented words; the lack of homogeneity in variability between groups points to the inappropriateness of standard measures of effect size (such as Cohen's d) and suggests the use of the ratio between the groups' performance. The mean ratio for text reading is 1.95 across experiments. The mean ratio for vocal RTs to singly presented words is considerably smaller (1.52). Furthermore, this latter value is probably an overestimation, as considering total reading times (i.e., a measure that also includes the pronunciation component) considerably reduces the group difference in vocal RTs (1.19 according to Martelli et al., 2014). The ratio difference between single and multiple displays does not depend upon the presence of a semantic context in the case of texts, as large ratios are also observed with lists of unrelated words (though studies testing this aspect were few). We conclude that, if care is taken in using appropriate comparisons, the deficit in reading texts or lists of words is appreciably greater than that revealed with discrete word presentations. Thus, reading multiple stimuli presents a specific, additional challenge to dyslexic children, indicating that models of reading should...

  7. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy gives a good agreement among all the delay aspects in all situations and outperforms the sum decoding delay policy to effectively minimize the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
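
    The packet-selection step can be pictured as a greedy weighted-clique heuristic over the IDNC graph. A generic Python sketch follows; the vertex weights and adjacency are placeholder inputs, not the authors' exact delay-based weighting:

        def greedy_max_weight_clique(weights, adjacency):
            """Greedy heuristic for the maximum-weight clique problem.

            weights:   dict vertex -> weight (e.g., derived from decoding-delay statistics)
            adjacency: dict vertex -> set of neighbouring vertices (the IDNC graph)
            Returns a clique built by repeatedly adding the heaviest vertex that is
            adjacent to everything already selected.
            """
            clique = set()
            candidates = set(weights)
            while candidates:
                v = max(candidates, key=lambda u: weights[u])
                clique.add(v)
                # keep only vertices adjacent to every member of the current clique
                candidates = {u for u in candidates if u != v and u in adjacency[v]}
            return clique

        # Toy IDNC-style graph with hypothetical weights.
        w = {"p1": 3.0, "p2": 2.5, "p3": 2.0, "p4": 1.0}
        adj = {"p1": {"p2", "p3"}, "p2": {"p1"}, "p3": {"p1", "p4"}, "p4": {"p3"}}
        print(greedy_max_weight_clique(w, adj))   # {'p1', 'p2'}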

  8. Best linear decoding of random mask images

    International Nuclear Information System (INIS)

    Woods, J.W.; Ekstrom, M.P.; Palmieri, T.M.; Twogood, R.E.

    1975-01-01

    In 1968 Dicke proposed coded imaging of x and γ rays via random pinholes. Since then, many authors have agreed with him that this technique can offer significant image improvement. A best linear decoding of the coded image is presented, and its superiority over the conventional matched filter decoding is shown. Experimental results in the visible light region are presented. (U.S.)
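
    The conventional matched-filter (correlation) decoding that the best linear estimator is compared against takes only a few lines under the usual cyclic-convolution idealization. A Python sketch with a hypothetical random mask and point-like object:

        import numpy as np

        rng = np.random.default_rng(1)

        # Random pinhole mask (1 = open pinhole) and a hypothetical point-like object.
        mask = (rng.random((64, 64)) < 0.5).astype(float)
        obj = np.zeros((64, 64))
        obj[20, 30] = 1.0
        obj[40, 12] = 0.6

        # Idealized coded image: cyclic convolution of the object with the mask, plus noise.
        coded = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(mask)))
        coded += 0.01 * rng.normal(size=coded.shape)

        # Matched-filter decoding: cyclic cross-correlation of the coded image with the mask.
        decoded = np.real(np.fft.ifft2(np.fft.fft2(coded) * np.conj(np.fft.fft2(mask))))
        print(np.unravel_index(np.argmax(decoded), decoded.shape))   # peaks at the brighter source, (20, 30)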

  9. Oppositional Decoding as an Act of Resistance.

    Science.gov (United States)

    Steiner, Linda

    1988-01-01

    Argues that contributors to the "No Comment" feature of "Ms." magazine are engaging in oppositional decoding and speculates on why this is a satisfying group process. Also notes such decoding presents another challenge to the idea that mass media has the same effect on all audiences. (SD)

  10. Don't words come easy? A psychophysical exploration of word superiority

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup

    2013-01-01

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, ...... and visual short term memory capacity. So, even if single words come easy, there is a limit to the word superiority effect....

  11. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also...... components. Node synchronization performed within a Viterbi decoder is discussed, and algorithms for frame synchronization are described and analyzed. We present a list of system configurations that we find potentially useful. Further, the high level architecture of units that contain frame synchronization...... and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder and another for placement after the decoder. The high level architectures of three possible implementations of Viterbi decoders are described: The first...
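
    For reference, the add-compare-select recursion at the heart of a Viterbi decoder is compact in software, even though the hardware issues above (long-memory codes, node and frame synchronization) are the difficult part. A hard-decision Python sketch for a toy rate-1/2, memory-2 code, unrelated to the memory-6 code of the space systems discussed here:

        import numpy as np

        G = [0b111, 0b101]           # generator polynomials (7, 5) octal, memory 2, rate 1/2

        def conv_encode(bits):
            """Encode a bit sequence (list of 0/1) with the rate-1/2 code, zero-terminated."""
            state, out = 0, []
            for b in list(bits) + [0, 0]:                        # flush with memory zeros
                reg = (b << 2) | state
                out += [bin(reg & g).count("1") % 2 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received):
            """Hard-decision Viterbi decoding (minimum Hamming-distance path)."""
            n_states = 4
            metrics = [0] + [np.inf] * (n_states - 1)            # encoder starts in state 0
            paths = [[] for _ in range(n_states)]
            for i in range(0, len(received), 2):
                r = received[i:i + 2]
                new_metrics = [np.inf] * n_states
                new_paths = [None] * n_states
                for state in range(n_states):
                    for b in (0, 1):                             # hypothesized input bit
                        reg = (b << 2) | state
                        expected = [bin(reg & g).count("1") % 2 for g in G]
                        nxt = reg >> 1
                        metric = metrics[state] + sum(x != y for x, y in zip(r, expected))
                        if metric < new_metrics[nxt]:
                            new_metrics[nxt] = metric
                            new_paths[nxt] = paths[state] + [b]
                metrics, paths = new_metrics, new_paths
            return paths[0][:-2]                                 # survivor ending in state 0, flush bits dropped

        msg = [1, 0, 1, 1, 0, 0, 1]
        coded = conv_encode(msg)
        coded[3] ^= 1                                            # inject a single channel error
        print(viterbi_decode(coded) == msg)                      # True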

  12. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1998-01-01

    The study has been divided into two phases. The purpose of Phase 1 of the study was to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. After selection of which specific...... potentially useful.Algorithms for frame synchronization are described and analyzed. Further, the high level architecture of units that contain frame synchronization and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder...... towards a realization in an FPGA.Node synchronization performed within a Viterbi decoder is discussed, and the high level architectures of three possible implementations of Viterbi decoders are described: The first implementation uses a number of commercially available decoders while the the two others...

  13. Codes on the Klein quartic, ideals, and decoding

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    1987-01-01

    descriptions as left ideals in the group-algebra GF(2^{3})[G]. This description allows for easy decoding. For instance, in the case of the single error correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra.......A sequence of codes with particular symmetries and with large rates compared to their minimal distances is constructed over the field GF(2^{3}). In the sequence there is, for instance, a code of length 21 and dimension 10 with minimal distance 9, and a code of length 21 and dimension 16 with minimal... distance 3. The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic. This curve has the maximal number of rational points over GF(2^{3}) allowed by Serre...

  14. Word form Encoding in Chinese Word Naming and Word Typing

    Science.gov (United States)

    Chen, Jenn-Yeu; Li, Cheng-Yi

    2011-01-01

    The process of word form encoding was investigated in primed word naming and word typing with Chinese monosyllabic words. The target words shared or did not share the onset consonants with the prime words. The stimulus onset asynchrony (SOA) was 100 ms or 300 ms. Typing required the participants to enter the phonetic letters of the target word,…

  15. Hybrid EEG-fNIRS-Based Eight-Command Decoding for BCI: Application to Quadcopter Control.

    Science.gov (United States)

    Khan, Muhammad Jawad; Hong, Keum-Shik

    2017-01-01

    In this paper, a hybrid electroencephalography-functional near-infrared spectroscopy (EEG-fNIRS) scheme to decode eight active brain commands from the frontal brain region for brain-computer interface is presented. A total of eight commands are decoded by fNIRS, as positioned on the prefrontal cortex, and by EEG, around the frontal, parietal, and visual cortices. Mental arithmetic, mental counting, mental rotation, and word formation tasks are decoded with fNIRS, in which the selected features for classification and command generation are the peak, minimum, and mean ΔHbO values within a 2-s moving window. In the case of EEG, two eyeblinks, three eyeblinks, and eye movement in the up/down and left/right directions are used for four-command generation. The features in this case are the number of peaks and the mean of the EEG signal during 1 s window. We tested the generated commands on a quadcopter in an open space. An average accuracy of 75.6% was achieved with fNIRS for four-command decoding and 86% with EEG for another four-command decoding. The testing results show the possibility of controlling a quadcopter online and in real-time using eight commands from the prefrontal and frontal cortices via the proposed hybrid EEG-fNIRS interface.

  16. Hybrid EEG–fNIRS-Based Eight-Command Decoding for BCI: Application to Quadcopter Control

    Science.gov (United States)

    Khan, Muhammad Jawad; Hong, Keum-Shik

    2017-01-01

    In this paper, a hybrid electroencephalography–functional near-infrared spectroscopy (EEG–fNIRS) scheme to decode eight active brain commands from the frontal brain region for brain–computer interface is presented. A total of eight commands are decoded by fNIRS, as positioned on the prefrontal cortex, and by EEG, around the frontal, parietal, and visual cortices. Mental arithmetic, mental counting, mental rotation, and word formation tasks are decoded with fNIRS, in which the selected features for classification and command generation are the peak, minimum, and mean ΔHbO values within a 2-s moving window. In the case of EEG, two eyeblinks, three eyeblinks, and eye movement in the up/down and left/right directions are used for four-command generation. The features in this case are the number of peaks and the mean of the EEG signal during 1 s window. We tested the generated commands on a quadcopter in an open space. An average accuracy of 75.6% was achieved with fNIRS for four-command decoding and 86% with EEG for another four-command decoding. The testing results show the possibility of controlling a quadcopter online and in real-time using eight commands from the prefrontal and frontal cortices via the proposed hybrid EEG–fNIRS interface. PMID:28261084
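
    The fNIRS features described (peak, minimum, and mean ΔHbO within a 2 s window) are simple to extract. A Python sketch using consecutive rather than sliding windows for brevity; the sampling rate and signal are hypothetical:

        import numpy as np

        def window_features(delta_hbo, fs, window_s=2.0):
            """Peak, minimum and mean of a Delta-HbO trace in consecutive windows.

            delta_hbo: 1-D array of oxygenated-haemoglobin concentration changes
            fs:        sampling rate in Hz (hypothetical; set by the fNIRS device)
            Returns an (n_windows, 3) array of [peak, minimum, mean] per window.
            """
            step = int(window_s * fs)
            n_windows = len(delta_hbo) // step
            feats = []
            for k in range(n_windows):
                seg = delta_hbo[k * step:(k + 1) * step]
                feats.append([seg.max(), seg.min(), seg.mean()])
            return np.array(feats)

        # Simulated 10 s trace sampled at 10 Hz (values are arbitrary).
        rng = np.random.default_rng(0)
        trace = np.cumsum(rng.normal(scale=0.01, size=100))
        print(window_features(trace, fs=10).shape)      # (5, 3)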

  17. Real-time inference of word relevance from electroencephalogram and eye gaze

    Science.gov (United States)

    Wenzel, M. A.; Bogojeski, M.; Blankertz, B.

    2017-10-01

    Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real-time. Approach. Readers looked for words belonging to one out of five semantic categories, while a stream of words passed at different locations on the screen. It was estimated in real-time which words and thus which semantic category interested each reader based on the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which can potentially be used in novel types of adaptive software, which enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to the word meaning was necessary in contrast to the usual practice in brain-computer interfacing where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related with eye tracking to the words, which were scanned without restrictions on the eye movements.

  18. Optical RAM row access using WDM-enabled all-passive row/column decoders

    Science.gov (United States)

    Papaioannou, Sotirios; Alexoudi, Theoni; Kanellos, George T.; Miliou, Amalia; Pleros, Nikos

    2014-03-01

    Towards achieving a functional RAM organization that reaps the advantages offered by optical technology, a complete set of optical peripheral modules, namely the Row (RD) and Column Decoder (CD) units, is required. In this perspective, we demonstrate an all-passive 2×4 optical RAM RD with row access operation and subsequent all-passive column decoding to control the access of WDM-formatted words in optical RAM rows. The 2×4 RD exploits a WDM-formatted 2-bit-long memory WordLine address along with its complementary value, all of them encoded on four different wavelengths and broadcasted to all RAM rows. The RD relies on an all-passive wavelength-selective filtering matrix (λ-matrix) that ensures a logical `0' output only at the selected RAM row. Subsequently, the RD output of each row drives the respective SOA-MZI-based Row Access Gate (AG) to grant/block the entry of the incoming data words to the whole memory row. In case of a selected row, the data word exits the row AG and enters the respective CD that relies on an allpassive wavelength-selective Arrayed Waveguide Grating (AWG) for decoding the word bits into their individual columns. Both RD and CD procedures are carried out without requiring any active devices, assuming that the memory address and data word bits as well as their inverted values will be available in their optical form by the CPU interface. Proof-of-concept experimental verification exploiting cascaded pairs of AWGs as the λ-matrix is demonstrated at 10Gb/s, providing error-free operation with a peak power penalty lower than 0.2dB for all optical word channels.

  19. Efficient decoding with steady-state Kalman filter in neural interface systems.

    Science.gov (United States)

    Malik, Wasim Q; Truccolo, Wilson; Brown, Emery N; Hochberg, Leigh R

    2011-02-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5±0.5 s (mean ±s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25±3 single units by a factor of 7.0±0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems.
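
    Computing the steady-state gain offline amounts to iterating the discrete Riccati recursion until the Kalman gain converges, after which real-time decoding reduces to one fixed linear update per frame. A Python sketch with placeholder system matrices (not the trial's fitted parameters):

        import numpy as np

        def steady_state_gain(A, C, Q, R, iters=1000, tol=1e-10):
            """Iterate the discrete-time Riccati recursion until the Kalman gain converges."""
            P = Q.copy()
            K_prev = None
            for _ in range(iters):
                P_pred = A @ P @ A.T + Q                       # predict covariance
                S = C @ P_pred @ C.T + R                       # innovation covariance
                K = P_pred @ C.T @ np.linalg.inv(S)            # Kalman gain
                P = (np.eye(A.shape[0]) - K @ C) @ P_pred      # update covariance
                if K_prev is not None and np.max(np.abs(K - K_prev)) < tol:
                    break
                K_prev = K
            return K

        # Placeholder model: 2-D velocity state observed through 5 noisy "firing rate" channels.
        rng = np.random.default_rng(0)
        A = 0.95 * np.eye(2)                                   # state (velocity) dynamics
        C = rng.normal(size=(5, 2))                            # observation (tuning) matrix
        Q = 0.01 * np.eye(2)
        R = 0.5 * np.eye(5)
        K_ss = steady_state_gain(A, C, Q, R)

        # Real-time decoding with the precomputed gain: x <- A x + K_ss (y - C A x)
        x = np.zeros(2)
        y = rng.normal(size=5)                                 # one frame of observed rates
        x = A @ x + K_ss @ (y - C @ (A @ x))
        print(K_ss.shape, x)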

  20. Decoding face information in time, frequency and space from direct intracranial recordings of the human brain.

    Directory of Open Access Journals (Sweden)

    Naotsugu Tsuchiya

    Full Text Available Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained a higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulation between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.

  1. Fast decoders for qudit topological codes

    International Nuclear Information System (INIS)

    Anwar, Hussain; Brown, Benjamin J; Campbell, Earl T; Browne, Dan E

    2014-01-01

    Qudit toric codes are a natural higher-dimensional generalization of the well-studied qubit toric code. However, standard methods for error correction of the qubit toric code are not applicable to them. Novel decoders are needed. In this paper we introduce two renormalization group decoders for qudit codes and analyse their error correction thresholds and efficiency. The first decoder is a generalization of a ‘hard-decisions’ decoder due to Bravyi and Haah (arXiv:1112.3252). We modify this decoder to overcome a percolation effect which limits its threshold performance for many-level quantum systems. The second decoder is a generalization of a ‘soft-decisions’ decoder due to Poulin and Duclos-Cianci (2010 Phys. Rev. Lett. 104 050504), with a small cell size to optimize the efficiency of implementation in the high dimensional case. In each case, we estimate thresholds for the uncorrelated bit-flip error model and provide a comparative analysis of the performance of both these approaches to error correction of qudit toric codes. (paper)

  2. Iterative Decoding of Concatenated Codes: A Tutorial

    Directory of Open Access Journals (Sweden)

    Phillip A. Regalia

    2005-05-01

    Full Text Available The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the “turbo principle” remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a “consensus” property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  3. Effects of an iPad-Supported Phonics Intervention on Decoding Performance and Time On-Task

    Science.gov (United States)

    Larabee, Kaitlyn M.; Burns, Matthew K.; McComas, Jennifer J.

    2014-01-01

    Despite their recent popularity in schools, there is minimal consensus in the educational literature regarding the use of mobile devices for reading intervention. The word box intervention (Joseph "Read Teach" 52:348-356, 1998) has been consistently associated with improvements in student decoding performance. This early efficacy study…

  4. Word classes

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    2007-01-01

    in grammatical descriptions of some 50 languages, which together constitute a representative sample of the world’s languages (Hengeveld et al. 2004: 529). It appears that there are both quantitative and qualitative differences between word class systems of individual languages. Whereas some languages employ...... a parts-of-speech system that includes the categories Verb, Noun, Adjective and Adverb, other languages may use only a subset of these four lexical categories. Furthermore, quite a few languages have a major word class whose members cannot be classified in terms of the categories Verb – Noun – Adjective...... – Adverb, because they have properties that are strongly associated with at least two of these four traditional word classes (e.g. Adjective and Adverb). Finally, this article discusses some of the ways in which word class distinctions interact with other grammatical domains, such as syntax and morphology....

  5. Emotion Words Affect Eye Fixations during Reading

    Science.gov (United States)

    Scott, Graham G.; O'Donnell, Patrick J.; Sereno, Sara C.

    2012-01-01

    Emotion words are generally characterized as possessing high arousal and extreme valence and have typically been investigated in paradigms in which they are presented and measured as single words. This study examined whether a word's emotional qualities influenced the time spent viewing that word in the context of normal reading. Eye movements…

  6. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
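
    To make the classification framing above concrete, the following sketch (an illustration, not the authors' implementation) trains a small feedforward network to map measured syndrome bit-vectors to one of a few logical correction classes; the number of stabilizer measurements, the class labels and the randomly generated training data are placeholders, and in practice the labels would come from simulating the error channel.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_syndrome_bits = 8                  # assumed number of stabilizer measurements (e.g. a small code)
        n_classes = 4                        # assumed logical correction classes (I, X, Z, Y)

        # Placeholder training set; real data would pair simulated error syndromes with the
        # logical class of the residual error left by a fixed "simple" decoder.
        X_train = rng.integers(0, 2, size=(5000, n_syndrome_bits))
        y_train = rng.integers(0, n_classes, size=5000)

        decoder_net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
        decoder_net.fit(X_train, y_train)

        # At run time, decoding one measured syndrome is a single forward pass.
        predicted_class = decoder_net.predict(X_train[:1])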

  7. The serial message-passing schedule for LDPC decoding algorithms

    Science.gov (United States)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the Layered Decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They can improve the LBP algorithm's decoding speed while maintaining good decoding performance.

  8. Fast decoding algorithms for coded aperture systems

    International Nuclear Information System (INIS)

    Byard, Kevin

    2014-01-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques

  9. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random, relatively sparse, and use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially...... achieve a high coding throughput, and reduce energy consumption.We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
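
    A minimal sketch of the kind of on-the-fly Gauss-Jordan elimination described above, restricted to the binary field, is given below; the class and method names are illustrative rather than taken from the paper, and none of the paper's operation-count optimizations are reproduced. Coding vectors and payloads are assumed to be NumPy uint8 arrays so that XOR implements addition over GF(2).

        import numpy as np

        class OnTheFlyGF2Decoder:
            """Incrementally reduce received coded packets over GF(2) (Gauss-Jordan form)."""

            def __init__(self, generation_size):
                self.generation_size = generation_size
                self.rows = {}                      # pivot column -> (coding vector, payload)

            def add_packet(self, coeffs, payload):
                coeffs, payload = coeffs.copy(), payload.copy()
                # Forward elimination against every pivot seen so far.
                for pivot, (pc, pp) in self.rows.items():
                    if coeffs[pivot]:
                        coeffs ^= pc
                        payload ^= pp
                if not coeffs.any():
                    return False                    # linearly dependent packet, nothing new
                pivot = int(np.argmax(coeffs))      # first remaining nonzero becomes the pivot
                # Backward elimination keeps the stored rows in reduced (Gauss-Jordan) form.
                for p, (pc, pp) in list(self.rows.items()):
                    if pc[pivot]:
                        self.rows[p] = (pc ^ coeffs, pp ^ payload)
                self.rows[pivot] = (coeffs, payload)
                return len(self.rows) == self.generation_size   # True once fully decodable

            def source_packets(self):
                """After full rank is reached, the row with pivot i holds the i-th source packet."""
                return [self.rows[i][1] for i in sorted(self.rows)]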

  10. Three phase full wave dc motor decoder

    Science.gov (United States)

    Studer, P. A. (Inventor)

    1977-01-01

    A three phase decoder for dc motors is disclosed which employs an extremely simple six transistor circuit to derive six properly phased output signals for fullwave operation of dc motors. Six decoding transistors are coupled at their base-emitter junctions across a resistor network arranged in a delta configuration. Each point of the delta configuration is coupled to one of three position sensors which sense the rotational position of the motor. A second embodiment of the invention is disclosed in which photo-optical isolators are used in place of the decoding transistors.

  11. Cross-Lingual Dependency Parsing with Late Decoding for Truly Low-Resource Languages

    OpenAIRE

    Schlichtkrull, Michael Sejr; Søgaard, Anders

    2017-01-01

    In cross-lingual dependency annotation projection, information is often lost during transfer because of early decoding. We present an end-to-end graph-based neural network dependency parser that can be trained to reproduce matrices of edge scores, which can be directly projected across word alignments. We show that our approach to cross-lingual dependency parsing is not only simpler, but also achieves an absolute improvement of 2.25% averaged across 10 languages compared to the previous state...

  12. Improved Power Decoding of One-Point Hermitian Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Bouw, Irene; Rosenkilde, Johan Sebastian Heesemann

    2017-01-01

    We propose a new partial decoding algorithm for one-point Hermitian codes that can decode up to the same number of errors as the Guruswami–Sudan decoder. Simulations suggest that it has a similar failure probability as the latter one. The algorithm is based on a recent generalization of the power...... decoding algorithm for Reed–Solomon codes and does not require an expensive root-finding step. In addition, it promises improvements for decoding interleaved Hermitian codes....

  13. Decoding of interleaved Reed-Solomon codes using improved power decoding

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde ne Nielsen, Johan

    2017-01-01

    We propose a new partial decoding algorithm for m-interleaved Reed-Solomon (IRS) codes that can decode, with high probability, a random error of relative weight 1 − R^(m/(m+1)) at all code rates R, in time polynomial in the code length n. For m > 2, this is an asymptotic improvement over the previous...... state-of-the-art for all rates, and the first improvement for R > 1/3 in the last 20 years. The method combines collaborative decoding of IRS codes with power decoding up to the Johnson radius....

  14. Low-Power Bitstream-Residual Decoder for H.264/AVC Baseline Profile Decoding

    Directory of Open Access Journals (Sweden)

    Xu Ke

    2009-01-01

    Full Text Available Abstract We present the design and VLSI implementation of a novel low-power bitstream-residual decoder for H.264/AVC baseline profile. It comprises a syntax parser, a parameter decoder, and an Inverse Quantization Inverse Transform (IQIT) decoder. The syntax parser detects and decodes each incoming codeword in the bitstream under the control of a hierarchical Finite State Machine (FSM); the IQIT decoder performs inverse transform and quantization with pipelining and parallelism. Various power reduction techniques, such as data-driven based on statistic results, nonuniform partition, precomputation, guarded evaluation, hierarchical FSM decomposition, TAG method, zero-block skipping, and clock gating, are adopted and integrated throughout the bitstream-residual decoder. With innovative architecture, the proposed design is able to decode QCIF video sequences of 30 fps at a clock rate as low as 1.5 MHz. A prototype H.264/AVC baseline decoding chip utilizing the proposed decoder is fabricated in UMC 0.18 µm 1P6M CMOS technology. The proposed design is measured under a 1 V to 1.8 V supply with 0.1 V steps. It dissipates 76 µW at 1 V and 253 µW at 1.8 V.

  15. Transcranial direct current stimulation (tDCS) modulation of picture naming and word reading: A meta-analysis of single session tDCS applied to healthy participants.

    Science.gov (United States)

    Westwood, Samuel J; Romani, Cristina

    2017-09-01

    Recent reviews quantifying the effects of single sessions of transcranial direct current stimulation (or tDCS) in healthy volunteers find only minor effects on cognition despite the popularity of this technique. Here, we wanted to quantify the effects of tDCS on language production tasks that measure word reading and picture naming. We reviewed 14 papers measuring tDCS effects across a total of 96 conditions to a) quantify effects of conventional stimulation on language regions (i.e., left hemisphere anodal tDCS administered to temporal/frontal areas) under normal conditions or under conditions of cognitive (semantic) interference; b) identify parameters which may moderate the size of the tDCS effect within conventional stimulation protocols (e.g., online vs offline, high vs. low current densities, and short vs. long durations), as well as within types of stimulation not typically explored by previous reviews (i.e., right hemisphere anodal tDCS or left/right hemisphere cathodal tDCS). In all analyses there was no significant effect of tDCS, but we did find a small but significant effect of the timing and duration of stimulation, with stronger effects for offline stimulation and for shorter durations. These findings are discussed in relation to tDCS and its poor efficacy in healthy participants. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Mapping of MPEG-4 decoding on a flexible architecture platform

    Science.gov (United States)

    van der Tol, Erik B.; Jaspers, Egbert G.

    2001-12-01

    In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes amongst others a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.

  17. Decoding rule search domain in the left inferior frontal gyrus

    Science.gov (United States)

    Babcock, Laura; Vallesi, Antonino

    2018-01-01

    Traditionally, the left hemisphere has been thought to extract mainly verbal patterns of information, but recent evidence has shown that the left Inferior Frontal Gyrus (IFG) is active during inductive reasoning in both the verbal and spatial domains. We aimed to understand whether the left IFG supports inductive reasoning in a domain-specific or domain-general fashion. To do this we used Multi-Voxel Pattern Analysis to decode the representation of domain during a rule search task. Thirteen participants were asked to extract the rule underlying streams of letters presented in different spatial locations. Each rule was either verbal (letters forming words) or spatial (positions forming geometric figures). Our results show that domain was decodable in the left prefrontal cortex, suggesting that this region represents domain-specific information, rather than processes common to the two domains. A replication study with the same participants tested two years later confirmed these findings, though the individual representations changed, providing evidence for the flexible nature of representations. This study extends our knowledge on the neural basis of goal-directed behaviors and on how information relevant for rule extraction is flexibly mapped in the prefrontal cortex. PMID:29547623

  18. Fast mental states decoding in mixed reality.

    Directory of Open Access Journals (Sweden)

    Daniele eDe Massari

    2014-11-01

    Full Text Available The combination of Brain-Computer Interface technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in mixed reality.

  19. Fast mental states decoding in mixed reality.

    Science.gov (United States)

    De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F M J; Birbaumer, Niels; Caria, Andrea

    2014-01-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain states discrimination during mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently reacts to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.
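
    The sliding-window classification described in this record can be sketched roughly as below; the band-power features, sampling rate, channel count and the randomly generated data are placeholders, and the actual feature extraction used in the study is not reproduced. The sketch simply classifies 5 s windows of EEG with an LDA model, advancing the window by 200 ms.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def sliding_windows(eeg, fs, win_s=5.0, step_s=0.2):
            """Yield (start_sample, window) pairs from a channels x samples EEG array."""
            win, step = int(win_s * fs), int(step_s * fs)
            for start in range(0, eeg.shape[1] - win + 1, step):
                yield start, eeg[:, start:start + win]

        def band_power_features(window, fs):
            """Toy feature vector: log band power per channel in theta, alpha and beta bands."""
            power = np.abs(np.fft.rfft(window, axis=1)) ** 2
            freqs = np.fft.rfftfreq(window.shape[1], 1.0 / fs)
            feats = [np.log(power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) + 1e-12)
                     for lo, hi in [(4, 8), (8, 13), (13, 30)]]
            return np.concatenate(feats)

        fs, n_channels = 256, 32                                    # assumed recording parameters
        train_windows = [np.random.randn(n_channels, int(5 * fs)) for _ in range(60)]   # placeholder EEG
        train_labels = np.random.randint(0, 3, 60)                  # navigation / reading / calculation
        X = np.array([band_power_features(w, fs) for w in train_windows])
        clf = LinearDiscriminantAnalysis().fit(X, train_labels)

        # Online use: classify each 5 s window as it slides forward by 200 ms.
        test_eeg = np.random.randn(n_channels, int(30 * fs))
        predictions = [(start / fs, int(clf.predict([band_power_features(w, fs)])[0]))
                       for start, w in sliding_windows(test_eeg, fs)]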

  20. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    Science.gov (United States)

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    Summary The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272

  1. Multiuser Random Coding Techniques for Mismatched Decoding

    OpenAIRE

    Scarlett, Jonathan; Martinez, Alfonso; Guillén i Fàbregas, Albert

    2016-01-01

    This paper studies multiuser random coding techniques for channel coding with a given (possibly suboptimal) decoding rule. For the mismatched discrete memoryless multiple-access channel, an error exponent is obtained that is tight with respect to the ensemble average, and positive within the interior of Lapidoth's achievable rate region. This exponent proves the ensemble tightness of the exponent of Liu and Hughes in the case of maximum-likelihood decoding. An equivalent dual form of Lapidoth...

  2. Periodic words connected with the Fibonacci words

    Directory of Open Access Journals (Sweden)

    G. M. Barabash

    2016-06-01

    Full Text Available In this paper we introduce two families of periodic words (FLP-words of type 1 and FLP-words of type 2) that are connected with the Fibonacci words and investigate their properties.

  3. Learning words

    DEFF Research Database (Denmark)

    Jaswal, Vikram K.; Hansen, Mikkel

    2006-01-01

    Children tend to infer that when a speaker uses a new label, the label refers to an unlabeled object rather than one they already know the label for. Does this inference reflect a default assumption that words are mutually exclusive? Or does it instead reflect the result of a pragmatic reasoning...... process about what the speaker intended? In two studies, we distinguish between these possibilities. Preschoolers watched as a speaker pointed toward (Study 1) or looked at (Study 2) a familiar object while requesting the referent for a new word (e.g. 'Can you give me the blicket?'). In both studies......, despite the speaker's unambiguous behavioral cue indicating an intent to refer to a familiar object, children inferred that the novel label referred to an unfamiliar object. These results suggest that children expect words to be mutually exclusive even when a speaker provides some kinds of pragmatic...

  4. A novel parallel pipeline structure of VP9 decoder

    Science.gov (United States)

    Qin, Huabiao; Chen, Wu; Yi, Sijun; Tan, Yunfei; Yi, Huan

    2018-04-01

    To improve the efficiency of the VP9 decoder, a novel parallel pipeline structure for VP9 decoding is presented in this paper. According to the decoding workflow, the VP9 decoder can be divided into sub-modules, which include entropy decoding, inverse quantization, inverse transform, intra prediction, inter prediction, deblocking and pixel adaptive compensation. By analyzing the computing time of each module, hotspot modules are located and the causes of the decoder's low efficiency can be found. A novel pipeline decoder structure is then designed using mixed parallel decoding methods of data division and function division. The experimental results show that this structure can greatly improve the decoding efficiency of VP9.

  5. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  6. Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface

    Science.gov (United States)

    Sachs, Nicholas A.; Ruiz-Torres, Ricardo; Perreault, Eric J.; Miller, Lee E.

    2016-02-01

    Objective. It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. Approach. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. Main results. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor’s proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. Significance. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the
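
    A rough sketch of the blending idea described in this record is given below (illustrative only, not the authors' implementation): two linear, Wiener-style read-out matrices, one fit for movement and one for posture, are combined in proportion to a classifier's estimate of the probability of being in the movement state. The feature layout, the use of scikit-learn's LDA and the convention that class 1 means 'movement' are all assumptions; in practice the two filters would be fit separately on movement and posture epochs and supplied here as fixed matrices.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        class DualStateDecoder:
            """Blend movement- and posture-optimized linear decoders by classifier state probability."""

            def __init__(self, W_move, W_posture, state_classifier):
                self.W_move = W_move                        # (n_outputs, n_features) movement filter
                self.W_posture = W_posture                  # (n_outputs, n_features) posture filter
                self.state_classifier = state_classifier    # e.g. an LDA fit on the same neural features

            def decode(self, features):
                # Probability of the 'movement' state (class 1 by assumption).
                p_move = self.state_classifier.predict_proba(features.reshape(1, -1))[0, 1]
                v_move = self.W_move @ features
                v_posture = self.W_posture @ features
                # Convex combination of the two filter outputs, weighted by the state likelihood.
                return p_move * v_move + (1.0 - p_move) * v_posture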

  7. Words Do Come Easy (Sometimes)

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe Allerup

    multiple stimuli are presented simultaneously: Are words treated as units or wholes in visual short term memory? Using methods based on a Theory of Visual Attention (TVA), we measured perceptual threshold, visual processing speed and visual short term memory capacity for words and letters, in two simple...... a different pattern: Letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity. So even if single words do come easy, they seem to enjoy no advantage in visual short term memory....

  8. Does "Word Coach" Coach Words?

    Science.gov (United States)

    Cobb, Tom; Horst, Marlise

    2011-01-01

    This study reports on the design and testing of an integrated suite of vocabulary training games for Nintendo[TM] collectively designated "My Word Coach" (Ubisoft, 2008). The games' design is based on a wide range of learning research, from classic studies on recycling patterns to frequency studies of modern corpora. Its general usage…

  9. Integration of lexical and sublexical processing in the spelling of regular words: a multiple single-case study in Italian dysgraphic patients.

    Science.gov (United States)

    Laiacona, Marcella; Capitani, Erminio; Zonca, Giusy; Scola, Ilaria; Saletta, Paola; Luzzatti, Claudio

    2009-01-01

    In this study we investigated 12 cases of "mixed dysgraphia", a spelling impairment where regular words are spelt better than either ambiguous words or regular non-words. Two explanations of mixed dysgraphia were formerly offered by Luzzatti et al. (1998): (i) a double functional lesion of the orthographic output lexicon (or damage to its access) and of the acoustic-to-phonological conversion; and (ii) some kind of interaction/summation between lexical and sublexical spelling routes when processing regular words. We first analysed whether a double functional lesion was sufficient to explain the mixed dysgraphia, checking acoustic-to-phonological conversion by means of the repetition of words and non-words: the answer was positive in five cases and uncertain in three. We tested the remaining four cases to see if there was an interaction between lexical and sublexical processing of regular words, quantifying for each patient, on a probabilistic basis, the separate contribution of the residual lexical and sublexical resources. We investigated whether the processing along these routes was simultaneous but independent ("independent cooperation") or if instead there was "interaction", i.e., the simultaneous activity led to an added increase of efficiency over and above the mere combination of separate success probabilities. For one case the processing along the two routes was independent, in the other three cases an interaction resulted. Following the same approach, we found that for the five cases with a double functional lesion, the observed success on regular word spelling was higher than that expected on a probabilistic basis, but the interpretation of this finding was different.

  10. Word wheels

    CERN Document Server

    Clark, Kathryn

    2013-01-01

    Targeting the specific problems learners have with language structure, these multi-sensory exercises appeal to all age groups including adults. Exercises use sight, sound and touch and are also suitable for English as an Additional Language and Basic Skills students. Word Wheels includes off-the-shelf resources including lesson plans and photocopiable worksheets, an interactive CD with practice exercises, and support material for the busy teacher or non-specialist staff, as well as homework activities.

  11. FPGA implementation of low complexity LDPC iterative decoder

    Science.gov (United States)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance. It greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
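
    The min-sum approximation mentioned above replaces the exact sum-product check-node rule with a sign-and-minimum computation, which is what makes low-complexity hardware implementations practical. The sketch below illustrates the two check-node update rules; this is generic textbook material, not the article's simplified message-passing algorithm, and the scaling factor is an assumption.

        import numpy as np

        def check_node_sum_product(llrs):
            """Exact rule: for each edge, 2 * atanh of the product of tanh(L/2) over the other edges."""
            out = np.empty(len(llrs))
            for i in range(len(llrs)):
                others = np.delete(llrs, i)
                out[i] = 2.0 * np.arctanh(np.prod(np.tanh(others / 2.0)))
            return out

        def check_node_min_sum(llrs, scale=0.75):
            """Min-sum approximation: product of signs times the smallest magnitude (optionally scaled)."""
            out = np.empty(len(llrs))
            for i in range(len(llrs)):
                others = np.delete(llrs, i)
                out[i] = scale * np.prod(np.sign(others)) * np.min(np.abs(others))
            return out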

  12. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.

  13. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming

    2017-10-20

    Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Performance breakdown in optimal stimulus decoding.

    Science.gov (United States)

    Lubomir Kostal; Lansky, Petr; Pilarski, Stevan

    2015-06-01

    One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
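
    For reference, the Cramér-Rao lower bound invoked above states that any unbiased estimator of the stimulus parameter satisfies (standard form, not taken from the paper):

        \operatorname{Var}\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
        \qquad
        I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\ln p(\mathbf{r}\mid\theta)\right)^{2}\right],

    where θ is the stimulus parameter, r the population response and I(θ) the Fisher information. For conditionally independent neurons the Fisher information is additive across the population, which is why decoding precision is normally expected to improve smoothly with population size and why the threshold-like breakdown described above is notable.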

  15. Symbol synchronization for the TDRSS decoder

    Science.gov (United States)

    Costello, D. J., Jr.

    1983-01-01

    Each 8 bits out of the Viterbi decoder correspond to one symbol of the R/S code. Synchronization must be maintained here so that each 8-bit symbol delivered to the R/S decoder corresponds to an 8-bit symbol from the R/S encoder. Lack of synchronization would cause an error in almost every R/S symbol, since even a 1-bit sync slip shifts every bit in each 8-bit symbol by one position, thereby confusing the mapping between 8-bit sequences and symbols. The error correcting capability of the R/S code would be exceeded. Possible ways to correct this condition include: (1) designing the R/S decoder to recognize the overload and shifting the output sequence of the inner decoder to establish a different sync state; (2) using the characteristics of the inner decoder to establish symbol synchronization for the outer code, with or without a deinterleaver and an interleaver; and (3) modifying the encoder to alternate periodically between two sets of generators.

  16. Modified Decoding Algorithm of LLR-SPA

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2014-09-01

    Full Text Available In wireless sensor networks, energy consumption mainly occurs in the stage of information transmission. The Low Density Parity Check code can make full use of the channel information to save energy. Starting from the widely used decoding algorithm for Low Density Parity Check codes, this paper proposes a new decoding algorithm based on the LLR-SPA (Sum-Product Algorithm in the Log-Likelihood domain) to improve decoding accuracy. In the modified algorithm, a piecewise linear function is used to approximate the complicated Jacobi correction term of the LLR-SPA decoding algorithm. The tangent at the tangency point of the Jacobi correction term is constructed, based on a first-order Taylor series. In this way, the proposed piecewise linear approximation offers an almost perfect match to the Jacobi correction term. Moreover, the piecewise linear approximation avoids logarithmic operations, which makes it more suitable for practical application. The simulation results show that the proposed algorithm improves the decoding accuracy greatly without a noticeable change in computational complexity.
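
    In the spirit of the tangent-based approximation described above, the sketch below shows a generic log-domain check update ('box-plus') with the exact Jacobi correction log(1 + e^{-|x|}) and a clipped first-order (tangent at x = 0) stand-in for it; the breakpoint and slope are illustrative and are not the values derived in the paper.

        import numpy as np

        def boxplus_exact(a, b):
            """Exact log-domain check update with the two Jacobi correction terms."""
            core = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
            return core + np.log1p(np.exp(-abs(a + b))) - np.log1p(np.exp(-abs(a - b)))

        def jacobi_tangent(x):
            """First-order (tangent at x = 0) stand-in for log(1 + e^{-x}), clipped at zero."""
            return max(0.0, np.log(2.0) - 0.5 * x)

        def boxplus_piecewise_linear(a, b):
            core = np.sign(a) * np.sign(b) * min(abs(a), abs(b))
            return core + jacobi_tangent(abs(a + b)) - jacobi_tangent(abs(a - b))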

  17. Coding and decoding in a point-to-point communication using the polarization of the light beam.

    Science.gov (United States)

    Kavehvash, Z; Massoumian, F

    2008-05-10

    A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by detecting the received beam's polarization. By linking the concept of coding to polarization, each of these concepts can be used to measure the other, which offers practical advantages. In this paper the construction of a simple point-to-point communication link, where coding and decoding are done through polarization, is discussed.

  18. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
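
    The template-matching step described above can be sketched as follows (an illustrative reimplementation of the standard longest-common-subsequence dynamic program, not the paper's code); spike sequences are assumed to be lists of discrete feature-neuron labels in firing order, and the normalization by the longer sequence length is an assumption.

        def lcs_length(seq_a, seq_b):
            """Length of the longest common subsequence via the classic O(|a|*|b|) dynamic program."""
            dp = [[0] * (len(seq_b) + 1) for _ in range(len(seq_a) + 1)]
            for i, a in enumerate(seq_a, 1):
                for j, b in enumerate(seq_b, 1):
                    dp[i][j] = dp[i - 1][j - 1] + 1 if a == b else max(dp[i - 1][j], dp[i][j - 1])
            return dp[-1][-1]

        def lcs_similarity(test_seq, template_seq):
            """Similarity in [0, 1]; normalizing by the longer sequence is an illustrative choice."""
            if not test_seq or not template_seq:
                return 0.0
            return lcs_length(test_seq, template_seq) / max(len(test_seq), len(template_seq))

        def recognize(test_seq, templates):
            """Return the word whose template spike sequence best matches the test sequence."""
            return max(templates, key=lambda word: lcs_similarity(test_seq, templates[word]))

        # Example: templates = {"one": [3, 7, 7, 1], "two": [2, 5, 9]}; recognize([3, 7, 1], templates)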

  19. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  20. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2014-01-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  1. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  2. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  3. Generalized Sudan's List Decoding for Order Domain Codes

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh

    2007-01-01

    We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than the original list decoding without multiplicity....

  4. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh; Douik, Ahmed S.; Valaee, Shahrokh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2014-01-01

    an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both

  5. Neural signatures of attention: insights from decoding population activity patterns.

    Science.gov (United States)

    Sapountzis, Panagiotis; Gregoriou, Georgia G

    2018-01-01

    Understanding brain function and the computations that individual neurons and neuronal ensembles carry out during cognitive functions is one of the biggest challenges in neuroscientific research. To this end, invasive electrophysiological studies have provided important insights by recording the activity of single neurons in behaving animals. To average out noise, responses are typically averaged across repetitions and across neurons that are usually recorded on different days. However, the brain makes decisions on short time scales based on limited exposure to sensory stimulation by interpreting responses of populations of neurons on a moment to moment basis. Recent studies have employed machine-learning algorithms in attention and other cognitive tasks to decode the information content of distributed activity patterns across neuronal ensembles on a single trial basis. Here, we review results from studies that have used pattern-classification decoding approaches to explore the population representation of cognitive functions. These studies have offered significant insights into population coding mechanisms. Moreover, we discuss how such advances can aid the development of cognitive brain-computer interfaces.

  6. Neuroprosthetic Decoder Training as Imitation Learning.

    Science.gov (United States)

    Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P

    2016-05-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.

  7. Neuroprosthetic Decoder Training as Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2016-05-01

    Full Text Available Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.

  8. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  9. Binary Systematic Network Coding for Progressive Packet Decoding

    OpenAIRE

    Jones, Andrew L.; Chatzigeorgiou, Ioannis; Tassi, Andrea

    2015-01-01

    We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve ...
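
    As a rough illustration of the progressive, Gaussian-elimination-style decoding described above (not the paper's exact algorithm), the sketch below tracks received systematic and coded packets as GF(2) coefficient rows, keeps them in reduced row-echelon form, and reports which source packets are already recoverable after each reception. Packet sizes, loss probability, and the code parameters are illustrative assumptions.

      import numpy as np

      K = 6  # number of source packets in the message

      def receive(A, payloads, coeffs, payload):
          """Add one received (systematic or coded) packet and keep the GF(2) system in
          reduced row-echelon form, so source packets become decodable progressively."""
          row, pay = coeffs.copy(), payload.copy()
          for r, p in zip(A, payloads):                  # eliminate existing pivots from the new row
              piv = int(np.argmax(r))
              if row[piv]:
                  row ^= r
                  pay ^= p
          if row.any():                                  # innovative packet: back-substitute and insert
              piv = int(np.argmax(row))
              for i in range(len(A)):
                  if A[i][piv]:
                      A[i] ^= row
                      payloads[i] ^= pay
              A.append(row)
              payloads.append(pay)
          return sorted(int(np.argmax(r)) for r in A if r.sum() == 1)   # indices decoded so far

      rng = np.random.default_rng(1)
      source = [rng.integers(0, 2, 8, dtype=np.uint8) for _ in range(K)]
      A, payloads = [], []

      # Systematic phase (plain packets, some lost), then random GF(2) combinations as repair.
      for i in range(K):
          if rng.random() < 0.4:                         # simulate an erasure
              continue
          unit = np.zeros(K, dtype=np.uint8)
          unit[i] = 1
          print("decoded:", receive(A, payloads, unit, source[i]))
      while len(A) < K:
          coeffs = rng.integers(0, 2, K, dtype=np.uint8)
          selected = [s for s, c in zip(source, coeffs) if c]
          if not selected:
              continue
          payload = np.bitwise_xor.reduce(selected)
          print("decoded:", receive(A, payloads, coeffs, payload))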

  10. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct...

  11. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ^3 n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  12. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    Full Text Available Abstract This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  13. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
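
    The example designs above target the Gilbert-Elliott discrete channel model; the snippet below simulates that two-state channel so the bursty error pattern a joint decoder/state estimator has to cope with can be inspected. The transition and error probabilities are arbitrary placeholders, not values from the paper.

      import numpy as np

      def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.1, e_good=1e-3, e_bad=0.1, seed=0):
          """Simulate a two-state Gilbert-Elliott channel: returns the hidden state
          sequence and the bit-error pattern seen by the decoder."""
          rng = np.random.default_rng(seed)
          states = np.empty(n_bits, dtype=int)      # 0 = good, 1 = bad
          s = 0
          for t in range(n_bits):
              states[t] = s
              # from the good state, move to bad with prob p_gb; from bad, stay bad with prob 1 - p_bg
              s = rng.random() < (p_gb if s == 0 else 1 - p_bg)
          err_prob = np.where(states == 0, e_good, e_bad)
          errors = rng.random(n_bits) < err_prob
          return states, errors

      states, errors = gilbert_elliott(10_000)
      print("fraction of time in bad state:", states.mean(), "raw bit error rate:", errors.mean())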

  14. Decoding algorithm for vortex communications receiver

    Science.gov (United States)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
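
    The abstract compares the proposed decoder against a Pearson-correlation baseline; the sketch below implements only that baseline, with randomly generated detector-matrix templates standing in for the real LG-mode intensity patterns.

      import numpy as np

      rng = np.random.default_rng(2)
      n_pixels, n_symbols = 64, 8

      # Hypothetical templates: mean detector-matrix intensity pattern for each LG-mode symbol.
      templates = rng.random((n_symbols, n_pixels))

      def decode_symbol(measurement, templates):
          """Pick the symbol whose template has the highest Pearson correlation
          with the measured intensity pattern."""
          m = measurement - measurement.mean()
          corrs = [np.dot(m, t - t.mean()) / (np.linalg.norm(m) * np.linalg.norm(t - t.mean()))
                   for t in templates]
          return int(np.argmax(corrs))

      # Simulate transmission of symbol 3 at a given SNR and decode it.
      snr_linear = 10.0
      tx = 3
      rx = templates[tx] + rng.normal(scale=templates[tx].std() / np.sqrt(snr_linear), size=n_pixels)
      print("decoded symbol:", decode_symbol(rx, templates))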

  15. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid; Alouini, Mohamed-Slim

    2013-01-01

    channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement

  16. Image transmission system using adaptive joint source and channel decoding

    Science.gov (United States)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
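
    The exact weighting rule is not given in the abstract, so the snippet below is only a guess at the mechanism it describes: bit positions flagged by the source decoder as correct or erroneous have their LLRs scaled (and, for known errors, sign-flipped) before the next sum-product iteration, with the scaling factor chosen according to channel SNR. All positions and values are hypothetical.

      import numpy as np

      def reweight_llrs(llrs, known_correct, known_error, factor):
          """Feed source-decoder side information back into the channel decoder by
          scaling the log-likelihood ratios of flagged bit positions before the
          next iteration (a larger factor would be used at lower channel SNR)."""
          out = llrs.copy()
          out[known_correct] *= factor              # reinforce bits the source decoder confirmed
          out[known_error] *= -factor               # flip and reinforce bits known to be wrong
          return out

      # Toy example with hypothetical positions fed back from the JPEG2000 error-resilience checks.
      llrs = np.array([0.8, -1.2, 0.3, -0.1, 2.0, -0.4])
      print(reweight_llrs(llrs, known_correct=[0, 4], known_error=[3], factor=3.0))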

  17. Decoding and Encoding Facial Expressions in Preschool-Age Children.

    Science.gov (United States)

    Zuckerman, Miron; Przewuzman, Sylvia J.

    1979-01-01

    Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding…

  18. Word Domain Disambiguation via Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.

    2006-06-04

    Word subject domains have been widely used to improve the perform-ance of word sense disambiguation al-gorithms. However, comparatively little effort has been devoted so far to the disambiguation of word subject do-mains. The few existing approaches have focused on the development of al-gorithms specific to word domain dis-ambiguation. In this paper we explore an alternative approach where word domain disambiguation is achieved via word sense disambiguation. Our study shows that this approach yields very strong results, suggesting that word domain disambiguation can be ad-dressed in terms of word sense disam-biguation with no need for special purpose algorithms.

  19. Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate

    Science.gov (United States)

    Padmanaban, Subash; Baker, Justin; Greger, Bradley

    2018-01-01

    Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded due to problems arising when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques (Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization) on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements—similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36% respectively. The reduction in decoding accuracy between using 100% of the features and 10% of features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features respectively. MIM had the best performance compared to other feature selection methods. Significance: These results suggest improved decoding performance can be achieved by using optimally selected features. The results based on clinically relevant performance metrics also suggest that the decoding
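
    A minimal reproduction of the decoding pipeline described above can be put together with scikit-learn; the data here are synthetic Poisson "firing rates", so the printed accuracies are meaningless and only the mechanics (rank features by mutual information, keep the top 10%, cross-validate a linear SVM) mirror the study.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      n_trials, n_units = 300, 100
      X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)   # stand-in firing-rate features
      y = rng.integers(0, 5, size=n_trials)                          # which finger moved (5 classes)

      # Rank features by mutual information with the class label (MIM) and keep the top 10%.
      mi = mutual_info_classif(X, y, random_state=0)
      keep = np.argsort(mi)[::-1][: n_units // 10]

      acc_all = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
      acc_mim = cross_val_score(SVC(kernel="linear"), X[:, keep], y, cv=5).mean()
      print(f"all features: {acc_all:.2f}, top-10% MIM features: {acc_mim:.2f}")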

  20. Exploring the word superiority effect using TVA

    DEFF Research Database (Denmark)

    Starrfelt, Randi

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. It is unclear, however, if this is due to a lower threshold for perception of words, or a higher speed of processing for words than letters. We have investigated the WSE using methods based on a Theory of Visual Attention. In an experiment using single stimuli (words or letters) presented centrally, we show that the classical WSE is specifically reflected in perceptual processing speed. When several stimuli are presented simultaneously we find a different pattern: in a whole report experiment with six stimuli (letters or words), letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity.

  1. Visual word learning in adults with dyslexia

    Directory of Open Access Journals (Sweden)

    Rosa Kit Wan Kwok

    2014-05-01

    Full Text Available We investigated word learning in university and college students with a diagnosis of dyslexia and in typically-reading controls. Participants read aloud short (4-letter) and longer (7-letter) nonwords as quickly as possible. The nonwords were repeated across 10 blocks, using a different random order in each block. Participants returned 7 days later and repeated the experiment. Accuracy was high in both groups. The dyslexics were substantially slower than the controls at reading the nonwords throughout the experiment. They also showed a larger length effect, indicating less effective decoding skills. Learning was demonstrated by faster reading of the nonwords across repeated presentations and by a reduction in the difference in reading speeds between shorter and longer nonwords. The dyslexics required more presentations of the nonwords before the length effect became non-significant, only showing convergence in reaction times between shorter and longer items in the second testing session where controls achieved convergence part-way through the first session. Participants also completed a psychological test battery assessing reading and spelling, vocabulary, phonological awareness, working memory, nonverbal ability and motor speed. The dyslexics performed at a similar level to the controls on nonverbal ability but significantly less well on all the other measures. Regression analyses found that decoding ability, measured as the speed of reading aloud nonwords when they were presented for the first time, was predicted by a composite of word reading and spelling scores (‘literacy’). Word learning was assessed in terms of the improvement in naming speeds over 10 blocks of training. Learning was predicted by vocabulary and working memory scores, but not by literacy, phonological awareness, nonverbal ability or motor speed. The results show that young dyslexic adults have problems both in pronouncing novel words and in learning new written words.

  2. The Separability of Morphological Processes from Semantic Meaning and Syntactic Class in Production of Single Words: Evidence from the Hebrew Root Morpheme

    Science.gov (United States)

    Deutsch, Avital

    2016-01-01

    In the present study we investigated to what extent the morphological facilitation effect induced by the derivational root morpheme in Hebrew is independent of semantic meaning and grammatical information of the part of speech involved. Using the picture-word interference paradigm with auditorily presented distractors, Experiment 1 compared the…

  3. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We gi...... and a duality in the choice of parameters needed for decoding, both in the case of GRS codes and in the case of Goppa codes....

  4. Faster 2-regular information-set decoding

    NARCIS (Netherlands)

    Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.

    2011-01-01

    Fix positive integers B and w. Let C be a linear code over F 2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and

  5. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

  6. 47 CFR 11.33 - EAS Decoder.

    Science.gov (United States)

    2010-10-01

    ..., satellite, public switched telephone network, or any other source that uses the EAS protocol. (2) Valid..., analog radio and television broadcast stations, analog cable systems and wireless cable systems may... program data must be retained even with power removed. (7) Outputs. Decoders shall have the following...

  7. Older Adults Have Difficulty in Decoding Sarcasm

    Science.gov (United States)

    Phillips, Louise H.; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley

    2015-01-01

    Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are…

  8. Long-Term Asynchronous Decoding of Arm Motion Using Electrocorticographic Signals in Monkeys

    Science.gov (United States)

    Chao, Zenas C.; Nagasaka, Yasuo; Fujii, Naotaka

    2009-01-01

    Brain–machine interfaces (BMIs) employ the electrical activity generated by cortical neurons directly for controlling external devices and have been conceived as a means for restoring human cognitive or sensory-motor functions. The dominant approach in BMI research has been to decode motor variables based on single-unit activity (SUA). Unfortunately, this approach suffers from poor long-term stability and daily recalibration is normally required to maintain reliable performance. A possible alternative is BMIs based on electrocorticograms (ECoGs), which measure population activity and may provide more durable and stable recording. However, the level of long-term stability that ECoG-based decoding can offer remains unclear. Here we propose a novel ECoG-based decoding paradigm and show that we have successfully decoded hand positions and arm joint angles during an asynchronous food-reaching task in monkeys when explicit cues prompting the onset of movement were not required. Performance using our ECoG-based decoder was comparable to existing SUA-based systems while evincing far superior stability and durability. In addition, the same decoder could be used for months without any drift in accuracy or recalibration. These results were achieved by incorporating the spatio-spectro-temporal integration of activity across multiple cortical areas to compensate for the lower fidelity of ECoG signals. These results show the feasibility of high-performance, chronic and versatile ECoG-based neuroprosthetic devices for real-life applications. This new method provides a stable platform for investigating cortical correlates for understanding motor control, sensory perception, and high-level cognitive processes. PMID:20407639

  9. Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex

    Directory of Open Access Journals (Sweden)

    Kenji Ibayashi

    2018-04-01

    Full Text Available Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated an electrode the size of which was 7 by 13 mm containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested if the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs.

  10. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.

  11. Intra-dance variation among waggle runs and the design of efficient protocols for honey bee dance decoding

    Directory of Open Access Journals (Sweden)

    Margaret J. Couvillon

    2012-03-01

    Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run 1 to >100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle run. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness-of-fit between the decoded vectors from our subsamples with the vectors from the entire dances.
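
    Following the recommendation above, a decoding routine only needs the circular mean of four consecutive mid-dance waggle runs; the sketch below does exactly that on a made-up dance, with distance taken as proportional to mean waggle duration (a common calibration assumption, not something established in this abstract).

      import numpy as np

      def decode_dance(angles_rad, durations_s):
          """Decode a single foraging vector from a dance: drop the first and last
          waggle runs, take four consecutive runs from the middle, and average.
          Direction uses a circular mean; distance scales with mean duration."""
          mid_a, mid_d = np.asarray(angles_rad[1:-1]), np.asarray(durations_s[1:-1])
          start = max(0, len(mid_a) // 2 - 2)
          a, d = mid_a[start:start + 4], mid_d[start:start + 4]
          direction = np.arctan2(np.sin(a).mean(), np.cos(a).mean())
          return direction, d.mean()

      # Hypothetical dance with 9 waggle runs (angles relative to vertical, durations in seconds).
      angles = np.deg2rad([40, 33, 35, 34, 36, 33, 35, 32, 45])
      durations = [1.9, 1.5, 1.6, 1.55, 1.6, 1.5, 1.58, 1.52, 1.2]
      direction, mean_dur = decode_dance(angles, durations)
      print(f"direction: {np.rad2deg(direction):.1f} deg, mean waggle duration: {mean_dur:.2f} s")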

  12. Priming effect on word reading and recall

    OpenAIRE

    Faria, Isabel Hub; Luegi, Paula

    2008-01-01

    This study focuses on priming as a function of exposure to bimodal stimuli of European Portuguese screen centred single words and isolated pictures inserted at the screen’s right upper corner, with four kinds of word-picture relation. The eye movements of 18 Portuguese native university students were registered while reading four sets of ten word-picture pairs, and their respective oral recall lists of words or pictures were kept. The results reveal a higher phonological primin...

  13. Decoding Pigeon Behavior Outcomes Using Functional Connections among Local Field Potentials.

    Science.gov (United States)

    Chen, Yan; Liu, Xinyu; Li, Shan; Wan, Hong

    2018-01-01

    Recent studies indicate that the local field potential (LFP) carries information about an animal's behavior, but issues regarding whether there are any relationships between the LFP functional networks and behavior tasks as well as whether it is possible to employ LFP network features to decode the behavioral outcome in a single trial remain unresolved. In this study, we developed a network-based method to decode the behavioral outcomes in pigeons by using the functional connectivity strength values among LFPs recorded from the nidopallium caudolaterale (NCL). In our method, the functional connectivity strengths were first computed based on the synchronization likelihood. Second, the strength values were unwrapped into row vectors and their dimensions were then reduced by principal component analysis. Finally, the behavioral outcomes in single trials were decoded using leave-one-out combined with the k-nearest neighbor method. The results showed that the LFP functional network based on the gamma-band was related to the goal-directed behavior of pigeons. Moreover, the accuracy of the network features (74 ± 8%) was significantly higher than that of the power features (61 ± 12%). The proposed method provides a powerful tool for decoding animal behavior outcomes using a neural functional network.
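
    The pipeline described above (connectivity matrix per trial, unwrap, PCA, leave-one-out k-NN) can be sketched as follows; note that plain channel-by-channel correlation is used here as a stand-in for synchronization likelihood, and the LFP data and labels are random placeholders.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(4)
      n_trials, n_channels, n_samples = 60, 16, 500
      lfp = rng.standard_normal((n_trials, n_channels, n_samples))   # stand-in gamma-band LFP
      labels = rng.integers(0, 2, size=n_trials)                     # behavioural outcome per trial

      # Build one connectivity matrix per trial, unwrap the upper triangle into a feature row.
      iu = np.triu_indices(n_channels, k=1)
      features = np.array([np.corrcoef(trial)[iu] for trial in lfp])  # correlation as a stand-in
                                                                      # for synchronization likelihood

      clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
      acc = cross_val_score(clf, features, labels, cv=LeaveOneOut()).mean()
      print(f"leave-one-out decoding accuracy: {acc:.2f}")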

  14. Discrete decoding based ultrafast multidimensional nuclear magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Wei, Zhiliang; Lin, Liangjie; Ye, Qimiao; Li, Jing; Cai, Shuhui; Chen, Zhong

    2015-01-01

    The three-dimensional (3D) nuclear magnetic resonance (NMR) spectroscopy constitutes an important and powerful tool in analyzing chemical and biological systems. However, the abundant 3D information arrives at the expense of long acquisition times lasting hours or even days. Therefore, there has been a continuous interest in developing techniques to accelerate recordings of 3D NMR spectra, among which the ultrafast spatiotemporal encoding technique supplies impressive acquisition speed by compressing a multidimensional spectrum in a single scan. However, it tends to suffer from tradeoffs among spectral widths in different dimensions, which deteriorates in cases of NMR spectroscopy with more dimensions. In this study, the discrete decoding is proposed to liberate the ultrafast technique from tradeoffs among spectral widths in different dimensions by focusing decoding on signal-bearing sites. For verifying its feasibility and effectiveness, we utilized the method to generate two different types of 3D spectra. The proposed method is also applicable to cases with more than three dimensions, which, based on the experimental results, may widen applications of the ultrafast technique

  15. Discrete decoding based ultrafast multidimensional nuclear magnetic resonance spectroscopy

    Science.gov (United States)

    Wei, Zhiliang; Lin, Liangjie; Ye, Qimiao; Li, Jing; Cai, Shuhui; Chen, Zhong

    2015-07-01

    The three-dimensional (3D) nuclear magnetic resonance (NMR) spectroscopy constitutes an important and powerful tool in analyzing chemical and biological systems. However, the abundant 3D information arrives at the expense of long acquisition times lasting hours or even days. Therefore, there has been a continuous interest in developing techniques to accelerate recordings of 3D NMR spectra, among which the ultrafast spatiotemporal encoding technique supplies impressive acquisition speed by compressing a multidimensional spectrum in a single scan. However, it tends to suffer from tradeoffs among spectral widths in different dimensions, which deteriorates in cases of NMR spectroscopy with more dimensions. In this study, the discrete decoding is proposed to liberate the ultrafast technique from tradeoffs among spectral widths in different dimensions by focusing decoding on signal-bearing sites. For verifying its feasibility and effectiveness, we utilized the method to generate two different types of 3D spectra. The proposed method is also applicable to cases with more than three dimensions, which, based on the experimental results, may widen applications of the ultrafast technique.

  16. Decoding subjective mental states from fMRI activity patterns

    International Nuclear Information System (INIS)

    Tamaki, Masako; Kamitani, Yukiyasu

    2011-01-01

    In recent years, functional magnetic resonance imaging (fMRI) decoding has emerged as a powerful tool to read out detailed stimulus features from multi-voxel brain activity patterns. Moreover, the method has been extended to perform a primitive form of 'mind-reading,' by applying a decoder 'objectively' trained using stimulus features to more 'subjective' conditions. In this paper, we first introduce basic procedures for fMRI decoding based on machine learning techniques. Second, we discuss the source of information used for decoding, in particular, the possibility of extracting information from subvoxel neural structures. We next introduce two experimental designs for decoding subjective mental states: the 'objective-to-subjective design' and the 'subjective-to-subjective design.' Then, we illustrate recent studies on the decoding of a variety of mental states, such as, attention, awareness, decision making, memory, and mental imagery. Finally, we discuss the challenges and new directions of fMRI decoding. (author)

  17. SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES

    Directory of Open Access Journals (Sweden)

    Yogesh Beeharry

    2017-05-01

    Full Text Available This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.

  18. Belief propagation decoding of quantum channels by passing quantum messages

    International Nuclear Information System (INIS)

    Renes, Joseph M

    2017-01-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical–quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels. (fast track communication)

  19. Belief propagation decoding of quantum channels by passing quantum messages

    Science.gov (United States)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.

  20. Evidence for the involvement of a nonlexical route in the repetition of familiar words: A comparison of single and dual route models of auditory repetition.

    Science.gov (United States)

    Hanley, J Richard; Dell, Gary S; Kay, Janice; Baron, Rachel

    2004-03-01

    In this paper, we attempt to simulate the picture naming and auditory repetition performance of two patients reported by Hanley, Kay, and Edwards (2002), who were matched for picture naming score but who differed significantly in their ability to repeat familiar words. In Experiment 1, we demonstrate that the model of naming and repetition put forward by Foygel and Dell (2000) is better able to accommodate this pattern of performance than the model put forward by Dell, Schwartz, Martin, Saffran, and Gagnon (1997). Nevertheless, Foygel and Dell's model underpredicted the repetition performance of both patients. In Experiment 2, we attempt to simulate their performance using a new dual route model of repetition in which Foygel and Dell's model is augmented by an additional nonlexical repetition pathway. The new model provided a more accurate fit to the real-word repetition performance of both patients. It is argued that the results provide support for dual route models of auditory repetition.

  1. IV. NIH Toolbox Cognition Battery (CB): measuring language (vocabulary comprehension and reading decoding).

    Science.gov (United States)

    Gershon, Richard C; Slotkin, Jerry; Manly, Jennifer J; Blitz, David L; Beaumont, Jennifer L; Schnipke, Deborah; Wallner-Allen, Kathleen; Golinkoff, Roberta Michnick; Gleason, Jean Berko; Hirsh-Pasek, Kathy; Adams, Marilyn Jager; Weintraub, Sandra

    2013-08-01

    Mastery of language skills is an important predictor of daily functioning and health. Vocabulary comprehension and reading decoding are relatively quick and easy to measure and correlate highly with overall cognitive functioning, as well as with success in school and work. New measures of vocabulary comprehension and reading decoding (in both English and Spanish) were developed for the NIH Toolbox Cognition Battery (CB). In the Toolbox Picture Vocabulary Test (TPVT), participants hear a spoken word while viewing four pictures, and then must choose the picture that best represents the word. This approach tests receptive vocabulary knowledge without the need to read or write, removing the literacy load for children who are developing literacy and for adults who struggle with reading and writing. In the Toolbox Oral Reading Recognition Test (TORRT), participants see a letter or word onscreen and must pronounce or identify it. The examiner determines whether it was pronounced correctly by comparing the response to the pronunciation guide on a separate computer screen. In this chapter, we discuss the importance of language during childhood and the relation of language and brain function. We also review the development of the TPVT and TORRT, including information about the item calibration process and results from a validation study. Finally, the strengths and weaknesses of the measures are discussed. © 2013 The Society for Research in Child Development, Inc.

  2. A One-Pass Real-Time Decoder Using Memory-Efficient State Network

    Science.gov (United States)

    Shao, Jian; Li, Ta; Zhang, Qingqing; Zhao, Qingwei; Yan, Yonghong

    This paper presents our developed decoder which adopts the idea of statically optimizing part of the knowledge sources while handling the others dynamically. The lexicon, phonetic contexts and acoustic model are statically integrated to form a memory-efficient state network, while the language model (LM) is dynamically incorporated on the fly by means of extended tokens. The novelties of our approach for constructing the state network are (1) introducing two layers of dummy nodes to cluster the cross-word (CW) context dependent fan-in and fan-out triphones, (2) introducing a so-called “WI layer” to store the word identities and putting the nodes of this layer in the non-shared mid-part of the network, (3) optimizing the network at state level by a sufficient forward and backward node-merge process. The state network is organized as a multi-layer structure for distinct token propagation at each layer. By exploiting the characteristics of the state network, several techniques including LM look-ahead, LM cache and beam pruning are specially designed for search efficiency. Especially in beam pruning, a layer-dependent pruning method is proposed to further reduce the search space. The layer-dependent pruning takes account of the neck-like characteristics of WI layer and the reduced variety of word endings, which enables tighter beam without introducing much search errors. In addition, other techniques including LM compression, lattice-based bookkeeping and lattice garbage collection are also employed to reduce the memory requirements. Experiments are carried out on a Mandarin spontaneous speech recognition task where the decoder involves a trigram LM and CW triphone models. A comparison with HDecode of HTK toolkits shows that, within 1% performance deviation, our decoder can run 5 times faster with half of the memory footprint.

  3. Abelian primitive words

    OpenAIRE

    Domaratzki, Michael; Rampersad, Narad

    2011-01-01

    We investigate Abelian primitive words, which are words that are not Abelian powers. We show that unlike classical primitive words, the set of Abelian primitive words is not context-free. We can determine whether a word is Abelian primitive in linear time. Also different from classical primitive words, we find that a word may have more than one Abelian root. We also consider enumeration problems and the relation to the theory of codes. Peer reviewed

  4. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  5. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  6. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a technique in video coding to decrease the power consumption at a standard video decoder. Coupled with a small dedicated video internal memory cache on a decoder, the technique can substantially decrease the amount of data traffic to the external memory at the decoder. A decrease in data traffic to the external memory at decoder will result in multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders for power savings with the best quality-power cost trade off. Our simulation results show that with a relatively small amount of dedicated video internal memory cache, the technique may decrease the traffic between CPU and external memory over 50%.

  7. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  8. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplification. The decoder is constructed using a single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.

  9. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
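
    Application (1), MAP stimulus decoding under a Poisson encoding model with a concave log-likelihood, can be illustrated in a few lines; the encoding filters, Gaussian prior, and stimulus used here are hypothetical, and a generic optimizer stands in for the authors' tractable algorithm.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      T, n_neurons, prior_var = 50, 20, 1.0
      K = rng.normal(scale=0.1, size=(n_neurons, T))           # hypothetical linear encoding filters
      stim_true = np.sin(np.linspace(0, 4 * np.pi, T))
      counts = rng.poisson(np.exp(K @ stim_true))              # Poisson GLM forward model

      def neg_log_posterior(s):
          lam = np.exp(K @ s)
          return -(counts @ (K @ s)) + lam.sum() + 0.5 * s @ s / prior_var

      def grad(s):
          return -K.T @ (counts - np.exp(K @ s)) + s / prior_var

      # The log posterior is concave in s, so this finds the MAP stimulus estimate.
      s_map = minimize(neg_log_posterior, np.zeros(T), jac=grad, method="L-BFGS-B").x
      print("correlation with true stimulus:", round(np.corrcoef(s_map, stim_true)[0, 1], 2))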

  10. Cortical Decoding of Individual Finger and Wrist Kinematics for an Upper-Limb Neuroprosthesis

    Science.gov (United States)

    Aggarwal, Vikram; Tenore, Francesco; Acharya, Soumyadipta; Schieber, Marc H.; Thakor, Nitish V.

    2010-01-01

    Previous research has shown that neuronal activity can be used to continuously decode the kinematics of gross movements involving arm and hand trajectory. However, decoding the kinematics of fine motor movements, such as the manipulation of individual fingers, has not been demonstrated. In this study, single unit activities were recorded from task-related neurons in M1 of two trained rhesus monkeys as they performed individuated movements of the fingers and wrist. The primates’ hand was placed in a manipulandum, and strain gauges at the tips of each finger were used to track the digit’s position. Both linear and non-linear filters were designed to simultaneously predict kinematics of each digit and the wrist, and their performance compared using mean squared error and correlation coefficients. All models had high decoding accuracy, but the feedforward ANN (R=0.76–0.86, MSE=0.04–0.05) and Kalman filter (R=0.68–0.86, MSE=0.04–0.07) performed better than a simple linear regression filter (0.58–0.81, 0.05–0.07). These results suggest that individual finger and wrist kinematics can be decoded with high accuracy, and be used to control a multi-fingered prosthetic hand in real-time. PMID:19964645
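
    A stripped-down version of one of the compared decoders (the Kalman filter, here for a single synthetic digit trace rather than all digits and the wrist) might look as follows; the observation and state models are fit on a training half of the data and used to filter the rest.

      import numpy as np

      rng = np.random.default_rng(6)
      T, n_units = 400, 40

      # Hypothetical ground truth: a smooth 1-D flexion trace and firing rates driven by it.
      true_pos = np.cumsum(rng.normal(scale=0.1, size=T)); true_pos -= true_pos.mean()
      H_true = rng.normal(size=(n_units, 1))
      rates = (H_true @ true_pos[None, :]).T + rng.normal(scale=1.0, size=(T, n_units))

      # Fit the observation and state models on the training half, decode the rest.
      train = slice(0, T // 2)
      sol = np.linalg.lstsq(true_pos[train, None], rates[train], rcond=None)[0]
      H = sol.T                                                          # (n_units, 1) observation matrix
      A = np.array([[np.corrcoef(true_pos[1:], true_pos[:-1])[0, 1]]])   # AR(1) state model
      Q = np.array([[np.var(true_pos[1:] - A[0, 0] * true_pos[:-1])]])
      R = np.cov((rates[train] - true_pos[train, None] @ H.T).T)

      x, P = np.zeros((1, 1)), np.eye(1)
      decoded = []
      for z in rates[T // 2:]:
          x, P = A @ x, A @ P @ A.T + Q                              # predict
          S = H @ P @ H.T + R
          Kg = P @ H.T @ np.linalg.inv(S)                            # Kalman gain
          x = x + Kg @ (z[:, None] - H @ x)                          # update with firing rates
          P = (np.eye(1) - Kg @ H) @ P
          decoded.append(x[0, 0])
      print("decoding correlation:", round(np.corrcoef(decoded, true_pos[T // 2:])[0, 1], 2))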

  11. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    Science.gov (United States)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm, in an FPGA with an embedded processor, invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor (design) performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help theorize the full capacity and performance of an FPGA-based coprocessor.

  12. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
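
    One way to read the core idea is that a kernel on spike-derived features and a kernel on LFP features are combined by elementwise multiplication into a single Gram matrix; the sketch below does this with RBF kernels and kernel ridge regression on synthetic data, which is only loosely analogous to the paper's kernel adaptive filtering setup.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(7)
      n_trials = 200
      spikes = rng.poisson(3.0, size=(n_trials, 25)).astype(float)   # stand-in spike-count features
      lfp = rng.standard_normal((n_trials, 8))                       # stand-in LFP band-power features
      target = spikes[:, 0] * 0.3 + lfp[:, 0] + 0.1 * rng.standard_normal(n_trials)

      # Tensor-product kernel: elementwise product of a kernel on each signal modality,
      # so one Gram matrix combines the heterogeneous spike and LFP information.
      def tensor_product_gram(spk_a, lfp_a, spk_b=None, lfp_b=None):
          spk_b = spk_a if spk_b is None else spk_b
          lfp_b = lfp_a if lfp_b is None else lfp_b
          return rbf_kernel(spk_a, spk_b, gamma=0.05) * rbf_kernel(lfp_a, lfp_b, gamma=0.5)

      train, test = slice(0, 150), slice(150, n_trials)
      model = KernelRidge(alpha=1.0, kernel="precomputed")
      model.fit(tensor_product_gram(spikes[train], lfp[train]), target[train])
      pred = model.predict(tensor_product_gram(spikes[test], lfp[test], spikes[train], lfp[train]))
      print("test correlation:", round(np.corrcoef(pred, target[test])[0, 1], 2))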

  13. Decoding spikes in a spiking neuronal network

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, University of Sussex, Brighton BN1 9QH (United Kingdom); Ding, Mingzhou [Department of Mathematics, Florida Atlantic University, Boca Raton, FL 33431 (United States)

    2004-06-04

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs.

  14. Decoding spikes in a spiking neuronal network

    International Nuclear Information System (INIS)

    Feng Jianfeng; Ding, Mingzhou

    2004-01-01

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs

  15. Neural decoding of visual imagery during sleep.

    Science.gov (United States)

    Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y

    2013-05-03

    Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
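
    The key move in the abstract is training a decoder on stimulus-induced activity and applying it to sleep-onset activity; the toy example below mimics that transfer with simulated voxel patterns (two categories, weaker and noisier during "sleep"), using a plain logistic-regression decoder rather than the paper's full pipeline.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(8)
      n_voxels, n_train, n_sleep = 500, 120, 40

      # Stand-in "stimulus-induced" patterns for two visual categories (awake training data)...
      means = rng.normal(scale=0.5, size=(2, n_voxels))
      X_train = np.vstack([means[c] + rng.standard_normal((n_train // 2, n_voxels)) for c in (0, 1)])
      y_train = np.repeat([0, 1], n_train // 2)

      # ...and weaker, noisier versions of the same patterns during sleep onset.
      y_sleep = rng.integers(0, 2, n_sleep)
      X_sleep = 0.4 * means[y_sleep] + rng.standard_normal((n_sleep, n_voxels))

      # Train on perception, test on sleep: the "objective-to-subjective" transfer.
      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("sleep-imagery decoding accuracy:", round(clf.score(X_sleep, y_sleep), 2))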

  16. Decoding of intended saccade direction in an oculomotor brain-computer interface

    Science.gov (United States)

    Jia, Nan; Brincat, Scott L.; Salazar-Gómez, Andrés F.; Panko, Mikhail; Guenther, Frank H.; Miller, Earl K.

    2017-08-01

    Objective. To date, invasive brain-computer interface (BCI) research has largely focused on replacing lost limb functions using signals from the hand/arm areas of motor cortex. However, the oculomotor system may be better suited to BCI applications involving rapid serial selection from spatial targets, such as choosing from a set of possible words displayed on a computer screen in an augmentative and alternative communication (AAC) application. Here we aimed to demonstrate the feasibility of a BCI utilizing the oculomotor system. Approach. We developed a chronic intracortical BCI in monkeys to decode intended saccadic eye movement direction using activity from multiple frontal cortical areas. Main results. Intended saccade direction could be decoded in real time with high accuracy, particularly at contralateral locations. Accurate decoding was evident even at the beginning of the BCI session; no extensive BCI experience was necessary. High-frequency (80-500 Hz) local field potential magnitude provided the best performance, even over spiking activity, thus simplifying future BCI applications. Most of the information came from the frontal and supplementary eye fields, with relatively little contribution from dorsolateral prefrontal cortex. Significance. Our results support the feasibility of high-accuracy intracortical oculomotor BCIs that require little or no practice to operate and may be ideally suited for ‘point and click’ computer operation as used in most current AAC systems.

  17. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

    Full Text Available It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Conversely, there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model’s learning objective – is largely based on letter-level information.

  18. Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes

    Science.gov (United States)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian

    2018-05-01

    We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57–1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
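
    The shaping idea can be illustrated independently of the staircase FEC: PAS draws the PAM amplitudes from a Maxwell–Boltzmann distribution whose parameter is tuned to hit a target rate. The sketch below shows only that distribution-matching step for an assumed 8-ASK amplitude set and an illustrative target entropy; the specific values are not taken from the paper.

```python
import numpy as np

def maxwell_boltzmann(amplitudes, nu):
    """P(a) proportional to exp(-nu * a^2) over the PAM amplitude magnitudes."""
    p = np.exp(-nu * amplitudes.astype(float) ** 2)
    return p / p.sum()

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def nu_for_rate(amplitudes, target_entropy, lo=0.0, hi=10.0, iters=60):
    """Bisect the shaping parameter nu so the amplitude entropy (in bits)
    matches a target; larger nu means stronger shaping (lower entropy)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        h = entropy_bits(maxwell_boltzmann(amplitudes, mid))
        lo, hi = (lo, mid) if h < target_entropy else (mid, hi)
    return 0.5 * (lo + hi)

amps = np.array([1, 3, 5, 7])        # 8-ASK amplitude magnitudes (signs are carried by FEC parity bits)
nu = nu_for_rate(amps, target_entropy=1.75)
p = maxwell_boltzmann(amps, nu)
print("nu:", round(nu, 3), "probabilities:", np.round(p, 3), "entropy:", round(entropy_bits(p), 3))
```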

  19. Selection combining for noncoherent decode-and-forward relay networks

    Directory of Open Access Journals (Sweden)

    Nguyen Ha

    2011-01-01

    Full Text Available This paper studies a new decode-and-forward relaying scheme for a cooperative wireless network composed of one source, K relays, and one destination and with binary frequency-shift keying modulation. A single threshold is employed to select retransmitting relays as follows: a relay retransmits to the destination if its decision variable is larger than the threshold; otherwise, it remains silent. The destination then performs selection combining for the detection of transmitted information. The average end-to-end bit-error-rate (BER) is analytically determined in a closed-form expression. Based on the derived BER, the problem of choosing an optimal threshold or jointly optimal threshold and power allocation to minimize the end-to-end BER is also investigated. Both analytical and simulation results reveal that the obtained optimal threshold scheme or jointly optimal threshold and power-allocation scheme can significantly improve the BER performance compared to a previously proposed scheme.
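
    A Monte Carlo sketch of the threshold idea is below: each relay forwards its hard decision only when its decision variable exceeds the threshold, and the destination applies selection combining across the direct and relay branches. To keep the sketch short it uses coherent BPSK over Rayleigh fading rather than the paper's noncoherent BFSK, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ber(snr_db, threshold, n_relays=2, n_bits=100_000):
    """End-to-end BER of threshold-based decode-and-forward with selection combining."""
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1.0 / (2.0 * snr))
    bits = rng.integers(0, 2, n_bits)
    x = 1.0 - 2.0 * bits                                  # BPSK symbols (+1 for bit 0)

    def rayleigh(n):                                      # Rayleigh fading amplitude
        return np.abs(rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

    # Direct source -> destination branch.
    branches = [rayleigh(n_bits) * x + rng.normal(scale=sigma, size=n_bits)]

    for _ in range(n_relays):
        y_sr = rayleigh(n_bits) * x + rng.normal(scale=sigma, size=n_bits)
        relay_sym = np.sign(y_sr)                         # relay's hard decision
        active = np.abs(y_sr) > threshold                 # retransmit only if above threshold
        y_rd = rayleigh(n_bits) * relay_sym + rng.normal(scale=sigma, size=n_bits)
        branches.append(np.where(active, y_rd, 0.0))      # a silent relay contributes nothing

    y = np.stack(branches)
    best = np.argmax(np.abs(y), axis=0)                   # selection combining at the destination
    decisions = y[best, np.arange(n_bits)] < 0
    return float(np.mean(decisions != bits.astype(bool)))

for thr in (0.0, 0.5, 1.0, 2.0):
    print(f"threshold={thr:.1f}  BER={simulate_ber(10.0, thr):.4f}")
```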

  20. Generalized instantly decodable network coding for relay-assisted networks

    KAUST Repository

    Elmahdy, Adel M.

    2013-09-01

    In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.

  1. Power decoding Reed-Solomon codes up to the Johnson radius

    DEFF Research Database (Denmark)

    Rosenkilde, Johan Sebastian Heesemann

    2018-01-01

    Power decoding, or "decoding using virtual interleaving" is a technique for decoding Reed-Solomon codes up to the Sudan radius. Since the method's inception, it has been an open question if it is possible to use this approach to decode up to the Johnson radius - the decoding radius of the Guruswami...

  2. Resource Efficient LDPC Decoders for Multimedia Communication

    OpenAIRE

    Chandrasetty, Vikram Arkalgud; Aziz, Syed Mahfuzul

    2013-01-01

    Achieving high image quality is an important aspect in an increasing number of wireless multimedia applications. These applications require resource efficient error correction hardware to detect and correct errors introduced by the communication channel. This paper presents an innovative flexible architecture for error correction using Low-Density Parity-Check (LDPC) codes. The proposed partially-parallel decoder architecture utilizes a novel code construction technique based on multi-level H...

  3. Decoding divergent series in nonparaxial optics.

    Science.gov (United States)

    Borghi, Riccardo; Gori, Franco; Guattari, Giorgio; Santarsiero, Massimo

    2011-03-15

    A theoretical analysis aimed at investigating the divergent character of perturbative series involved in the study of free-space nonparaxial propagation of vectorial optical beams is proposed. Our analysis predicts a factorial divergence for such series and provides a theoretical framework within which the results of recently published numerical experiments concerning nonparaxial propagation of vectorial Gaussian beams find a meaningful interpretation in terms of the decoding operated on such series by the Weniger transformation.

  4. Convergent and diagnostic validity of STAVUX, a word and pseudoword spelling test for adults.

    Science.gov (United States)

    Östberg, Per; Backlund, Charlotte; Lindström, Emma

    2016-10-01

    Few comprehensive spelling tests are available in Swedish, and none have been validated in adults with reading and writing disorders. The recently developed STAVUX test includes word and pseudoword spelling subtests with high internal consistency and adult norms stratified by education. This study evaluated the convergent and diagnostic validity of STAVUX in adults with dyslexia. Forty-six adults, 23 with dyslexia and 23 controls, took STAVUX together with a standard word-decoding test and a self-rated measure of spelling skills. STAVUX subtest scores showed moderate to strong correlations with word-decoding scores and predicted self-rated spelling skills. Word and pseudoword subtest scores both predicted dyslexia status. Receiver-operating characteristic (ROC) analysis showed excellent diagnostic discriminability. Sensitivity was 91% and specificity 96%. In conclusion, the results of this study support the convergent and diagnostic validity of STAVUX.

  5. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values result in a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR show an interesting trend that leaves room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.

  6. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  7. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
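
    A stripped-down sketch of the kernel temporal-difference idea follows: the value function is represented as a kernel expansion over visited states, and each TD error adds a new kernel center. This is kernel TD(0) on a toy random walk, not the authors' full KTD(λ) with eligibility traces, and all parameter values are illustrative.

```python
import numpy as np

class KernelTD:
    """V(x) = sum_i alpha_i * k(x, c_i) with a Gaussian kernel; TD(0) updates."""

    def __init__(self, sigma=0.5, lr=0.2, gamma=0.9):
        self.sigma, self.lr, self.gamma = sigma, lr, gamma
        self.centers, self.alphas = [], []

    def kernel(self, x, c):
        return np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))

    def value(self, x):
        return sum(a * self.kernel(x, c) for a, c in zip(self.alphas, self.centers))

    def update(self, x, reward, x_next, terminal=False):
        target = reward + (0.0 if terminal else self.gamma * self.value(x_next))
        td_error = target - self.value(x)
        self.centers.append(np.asarray(x, dtype=float))    # grow the kernel dictionary
        self.alphas.append(self.lr * td_error)              # functional-gradient TD step
        return td_error

# Toy usage: a 5-state random walk with reward 1 at the right end.
rng = np.random.default_rng(0)
agent = KernelTD()
for _ in range(300):
    s = 2
    while 0 < s < 4:
        s_next = s + int(rng.choice([-1, 1]))
        r = 1.0 if s_next == 4 else 0.0
        agent.update(np.array([s], float), r, np.array([s_next], float), terminal=s_next in (0, 4))
        s = s_next
print([round(agent.value(np.array([s], float)), 2) for s in range(5)])
```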

  8. Syllabic Length Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Roya Ranjbar Mohammadi

    2014-07-01

    Full Text Available Studies on visual word recognition have resulted in different and sometimes contradictory proposals such as the Multi-Trace Memory Model (MTM), the Dual-Route Cascaded Model (DRC), and the Parallel Distributed Processing Model (PDP). The role of the number of syllables in word recognition was examined by the use of five groups of English words and non-words. The reaction time of the participants to these words was measured using reaction-time measurement software. The results indicated that there was a syllabic effect on recognition of both high and low frequency words. The pattern was incremental in terms of syllable number. This pattern prevailed in high and low frequency words and non-words except for one-syllable words. In general, the results are in line with the PDP model, which claims that a single processing mechanism is used in both word and non-word recognition. In other words, the findings suggest that lexical items are mainly processed via a lexical route. A pedagogical implication of the findings would be that reading in English as a foreign language involves analytical processing of the syllables of words.

  9. BioWord: A sequence manipulation suite for Microsoft Word

    Directory of Open Access Journals (Sweden)

    Anzaldi Laura J

    2012-06-01

    Full Text Available Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.

  10. BioWord: A sequence manipulation suite for Microsoft Word

    Science.gov (United States)

    2012-01-01

    Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms. PMID:22676326

  11. BioWord: a sequence manipulation suite for Microsoft Word.

    Science.gov (United States)

    Anzaldi, Laura J; Muñoz-Fernández, Daniel; Erill, Ivan

    2012-06-07

    The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms.

  12. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
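
    As a point of reference for what the neural decoders start from, here is a plain (unweighted) min-sum decoder for a small code; the neural belief-propagation variants keep this message schedule but attach learnable multiplicative weights to the messages. The (7,4) Hamming parity-check matrix stands in for an LDPC/BCH code to keep the example small.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (small stand-in for an LDPC/BCH code).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_decode(llr, H, iterations=10):
    """Unweighted min-sum belief propagation with early stopping on valid codewords."""
    m, n = H.shape
    V2C = np.where(H == 1, llr, 0.0)                      # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(iterations):
        C2V = np.zeros_like(V2C)
        for c in range(m):                                # check-node update
            idx = np.flatnonzero(H[c])
            msgs = V2C[c, idx]
            signs = np.where(msgs < 0, -1.0, 1.0)
            for k, v in enumerate(idx):
                others = np.delete(np.arange(len(idx)), k)
                C2V[c, v] = np.prod(signs[others]) * np.min(np.abs(msgs[others]))
        total = llr + C2V.sum(axis=0)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):                      # all parity checks satisfied
            break
        for v in range(n):                                # variable-node update
            for c in np.flatnonzero(H[:, v]):
                V2C[c, v] = llr[v] + C2V[np.flatnonzero(H[:, v]), v].sum() - C2V[c, v]
    return hard

# Send the all-zero codeword over a BPSK/AWGN channel and decode from channel LLRs.
rng = np.random.default_rng(1)
sigma = np.sqrt(1.0 / (2 * 10 ** (2.0 / 10)))             # 2 dB SNR
received = 1.0 + sigma * rng.normal(size=H.shape[1])      # +1 is the BPSK symbol for bit 0
print("decoded bits:", min_sum_decode(2.0 * received / sigma ** 2, H))
```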

  13. The Cognitive Correlates of Third-Grade Skill in Arithmetic, Algorithmic Computation, and Arithmetic Word Problems

    Science.gov (United States)

    Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Powell, Sarah R.; Seethaler, Pamela M.; Capizzi, Andrea M.; Schatschneider, Christopher; Fletcher, Jack M.

    2006-01-01

    The purpose of this study was to examine the cognitive correlates of third-grade skill in arithmetic, algorithmic computation, and arithmetic word problems. Third graders (N = 312) were measured on language, nonverbal problem solving, concept formation, processing speed, long-term memory, working memory, phonological decoding, and sight word…

  14. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    Science.gov (United States)

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  15. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario in which uncertainties at the sender affect its ability to anticipate accurately the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time as compared to the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the erasure of the channel increases.

  16. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario in which uncertainties at the sender affect its ability to anticipate accurately the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time as compared to the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the erasure of the channel increases.

  17. Observations on Polar Coding with CRC-Aided List Decoding

    Science.gov (United States)

    2016-09-01

    Technical Report 3041, September 2016 (David Wasserman; approved for public release). SSC [...] described in [2, 3]. In FY15 and FY16 we used cyclic redundancy check (CRC)-aided polar list decoding [4]. Section 2 describes the basics of polar coding and gives details of the encoders and decoders we used. In the course of our research, we performed simulations of polar codes in hundreds of cases

  18. Polar Coding with CRC-Aided List Decoding

    Science.gov (United States)

    2015-08-01

    Technical Report 2087, August 2015 (David Wasserman; approved for public release). [...] list decoding. RESULTS: Our simulation results show that polar coding can produce results very similar to the FEC used in the Digital Video Broadcasting (DVB-S2) standard. RECOMMENDATIONS: In any application for which the DVB-S2 FEC is considered, polar coding with CRC-aided list decoding with N = 65536

  19. A quantum algorithm for Viterbi decoding of classical convolutional codes

    OpenAIRE

    Grice, Jon R.; Meyer, David A.

    2014-01-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper the proposed algorithm is applied to decoding classical convolutional codes with, for instance, large constraint length Q and short decode frames N. Other applications of the classical Viterbi algorithm where Q is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...
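
    For readers unfamiliar with the classical baseline, the following is a textbook hard-decision Viterbi decoder for a small rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal); this trellis search is the computation a quantum Viterbi algorithm would aim to speed up. The example code and error pattern are illustrative.

```python
# Rate-1/2 convolutional code, constraint length 3, generators (7, 5) octal.
G = [(1, 1, 1), (1, 0, 1)]

def conv_encode(bits):
    state, out = (0, 0), []
    for b in bits:
        window = (b,) + state
        out += [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding over the 4-state trellis (Hamming branch metric)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # encoder starts in state 00
    paths = {s: [] for s in states}
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for b in (0, 1):
                window = (b,) + s
                out = [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
                ns = (b, s[0])
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[ns]:                           # keep the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

msg = [1, 0, 1, 1, 0, 0, 1, 0] + [0, 0]      # two flush bits drive the encoder back to state 00
coded = conv_encode(msg)
coded[3] ^= 1                                 # inject a single channel error
print(viterbi_decode(coded)[:8])              # recovers the original 8 message bits
```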

  20. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach a whole VLSI decoder is designed and characterized.

  1. Spatial attention in written word perception

    Directory of Open Access Journals (Sweden)

    Veronica eMontani

    2014-02-01

    Full Text Available The role of attention in visual word recognition and reading aloud is a long debated issue. Studies of both developmental and acquired reading disorders provide growing evidence that spatial attention is critically involved in word reading, in particular for the phonological decoding of unfamiliar letter strings. However, studies on healthy participants have produced contrasting results. The aim of this study was to investigate how the allocation of spatial attention may influence the perception of letter strings in skilled readers. High frequency words, low frequency words and pseudowords were briefly and parafoveally presented either in the left or the right visual field. Attentional allocation was modulated by the presentation of a spatial cue before the target string. Accuracy in reporting the target string was modulated by the spatial cue but this effect varied with the type of string. For unfamiliar strings, processing was facilitated when attention was focused on the string location and hindered when it was diverted from the target. This finding is consistent with the assumptions of the CDP+ model of reading aloud, as well as with familiarity-sensitivity models that argue for a flexible use of attention according to the specific requirements of the string. Moreover, we found that processing of high-frequency words was facilitated by an extra-large focus of attention. The latter result is consistent with the hypothesis that a broad distribution of attention is the default mode during reading of familiar words because it might optimally engage the broad receptive fields of the highest detectors in the hierarchical system for visual word recognition.

  2. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, which was introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves generating the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iterations in conventional fractal decoding; hence the constant parameter for the interpolation decoding method must be set to a smaller value in order to achieve better progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
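
    The scheme's core recursion is easy to state in isolation: x_{k+1} = (1 - λ_k)·x_k + λ_k·T(x_k), where T is the contractive fractal transform and λ_k now varies with the iteration. The toy sketch below replaces T with a simple affine contraction so the effect of a slow-then-fast λ schedule is visible; it is not the paper's image codec.

```python
import numpy as np

def interpolated_iteration(T, x0, lambdas):
    """x_{k+1} = (1 - l_k) * x_k + l_k * T(x_k), with an iteration-dependent parameter l_k."""
    x, history = x0, [x0.copy()]
    for lam in lambdas:
        x = (1.0 - lam) * x + lam * T(x)
        history.append(x.copy())
    return history

rng = np.random.default_rng(0)
target = rng.normal(size=8)
T = lambda x: 0.5 * (x - target) + target        # toy contraction with fixed point `target`

x0, n_iter = np.zeros(8), 20
schedules = {
    "constant": [0.3] * n_iter,                   # small constant parameter: slow throughout
    "variable": [0.3] * 5 + [1.0] * (n_iter - 5), # slow start, then full steps to accelerate
}
for name, sched in schedules.items():
    errs = [np.linalg.norm(h - target) for h in interpolated_iteration(T, x0, sched)]
    print(f"{name:8s} error after 5 iters: {errs[5]:.3f}   after {n_iter}: {errs[-1]:.2e}")
```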

  3. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Chen Huijun

    2008-01-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both STTC and VLEC to improve overall decoding performance. Issues such as the inseparable systematic information in the symbol level, the asymmetric trellis structure of VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between STTC and VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher order modulation of STTC, the joint decoding yields better performance, though with increased complexity.

  4. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Huijun Chen

    2008-07-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both STTC and VLEC to improve overall decoding performance. Issues such as the inseparable systematic information in the symbol level, the asymmetric trellis structure of VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between STTC and VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher order modulation of STTC, the joint decoding yields better performance, though with increased complexity.

  5. Sub-quadratic decoding of one-point hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimization algorithms from computer algebra, yielding similar asymptotic complexities.

  6. Population coding and decoding in a neural field: a computational study.

    Science.gov (United States)

    Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki

    2002-05-01

    This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further (wider than √2 times the effective width of the tuning function), the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. It shows that when the correlation covers a nonlocal range of population (excepting the uniform correlation and when the noise is extremely small), the MLI type of method, whose decoding error satisfies the Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
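
    For the Gaussian-noise version of such a model, the Fisher information reduces to I(θ) = f′(θ)ᵀ C⁻¹ f′(θ), where f′ collects the tuning-curve derivatives and C is the (distance-dependent) noise covariance. The sketch below evaluates this quantity for an assumed population of Gaussian tuning curves; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def fisher_information(theta, n_neurons, corr_width, corr_strength=0.3,
                       tuning_width=0.3, noise_var=0.1):
    """I(theta) = f'(theta)^T C^{-1} f'(theta) for Gaussian tuning curves with additive
    Gaussian noise whose correlation decays with the difference in preferred stimuli."""
    prefs = np.linspace(-1.0, 1.0, n_neurons)
    diff = theta - prefs
    f_prime = -diff / tuning_width ** 2 * np.exp(-diff ** 2 / (2 * tuning_width ** 2))
    d = prefs[:, None] - prefs[None, :]
    C = noise_var * ((1 - corr_strength) * np.eye(n_neurons)
                     + corr_strength * np.exp(-d ** 2 / (2 * corr_width ** 2)))
    return float(f_prime @ np.linalg.solve(C, f_prime))

for n in (20, 50, 100, 200):
    narrow = fisher_information(0.0, n, corr_width=0.05)
    wide = fisher_information(0.0, n, corr_width=0.5)
    print(f"N={n:3d}   I(narrow corr.)={narrow:9.1f}   I(wide corr.)={wide:9.1f}")
```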

  7. Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex

    Science.gov (United States)

    Hao, Yaoyao; Zhang, Qiaosheng; Controzzi, Marco; Cipriani, Christian; Li, Yue; Li, Juncheng; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Chiara Carrozza, Maria; Zheng, Xiaoxiang

    2014-12-01

    Objective. Recent studies have shown that dorsal premotor cortex (PMd), a cortical area in the dorsomedial grasp pathway, is involved in grasp movements. However, the neural ensemble firing property of PMd during grasp movements and the extent to which it can be used for grasp decoding are still unclear. Approach. To address these issues, we used multielectrode arrays to record both spike and local field potential (LFP) signals in PMd in macaque monkeys performing reaching and grasping of one of four differently shaped objects. Main results. Single and population neuronal activity showed distinct patterns during execution of different grip types. Cluster analysis of neural ensemble signals indicated that the grasp-related patterns emerged soon (200-300 ms) after the go cue signal, and faded away during the hold period. The timing and duration of the patterns varied depending on the behaviors of individual monkeys. Application of a support vector machine model to stable activity patterns revealed classification accuracies of 94% and 89% for each of the two monkeys, indicating a robust, decodable grasp pattern encoded in the PMd. Grasp decoding using LFPs, especially the high-frequency bands, also produced high decoding accuracies. Significance. This study is the first to specify the neuronal population encoding of grasp during the time course of grasp. We demonstrate high grasp decoding performance in PMd. These findings, combined with previous evidence for reach-related modulation, suggest that PMd may play an important role in generation and maintenance of grasp action and may be a suitable locus for brain-machine interface applications.

  8. Competition between multiple words for a referent in cross-situational word learning

    Science.gov (United States)

    Benitez, Viridiana L.; Yurovsky, Daniel; Smith, Linda B.

    2016-01-01

    Three experiments investigated competition between word-object pairings in a cross-situational word-learning paradigm. Adults were presented with One-Word pairings, where a single word labeled a single object, and Two-Word pairings, where two words labeled a single object. In addition to measuring learning of these two pairing types, we measured competition between words that refer to the same object. When the word-object co-occurrences were presented intermixed in training (Experiment 1), we found evidence for direct competition between words that label the same referent. Separating the two words for an object in time eliminated any evidence for this competition (Experiment 2). Experiment 3 demonstrated that adding a linguistic cue to the second label for a referent led to different competition effects between adults who self-reported different language learning histories, suggesting both distinctiveness and language learning history affect competition. Finally, in all experiments, competition effects were unrelated to participants’ explicit judgments of learning, suggesting that competition reflects the operating characteristics of implicit learning processes. Together, these results demonstrate that the role of competition between overlapping associations in statistical word-referent learning depends on time, the distinctiveness of word-object pairings, and language learning history. PMID:27087742

  9. Narrative-Based Intervention for Word-Finding Difficulties: A Case Study

    Science.gov (United States)

    Marks, Ian; Stokes, Stephanie F.

    2010-01-01

    Background: Children with word-finding difficulties manifest a high frequency of word-finding characteristics in narrative, yet word-finding interventions have concentrated on single-word treatments and outcome measures. Aims: This study measured the effectiveness of a narrative-based intervention in improving single-word picture-naming and…

  10. Clinical Strategies for Sampling Word Recognition Performance.

    Science.gov (United States)

    Schlauch, Robert S; Carney, Edward

    2018-04-17

    Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
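
    The core of such a simulation is small: treat each list score as a binomial sample at the client's true ability and, for the PB max procedure, keep only the best of several lists. The sketch below reproduces that idea with illustrative numbers (including an assumed 8-point critical difference, which is not the value derived in the article).

```python
import numpy as np

rng = np.random.default_rng(0)

def pb_max_scores(true_p, list_len, n_lists, n_sims=20_000):
    """Percent-correct scores as binomial samples; keep the best list (PB max)."""
    scores = rng.binomial(list_len, true_p, size=(n_sims, n_lists)) / list_len * 100
    return scores.max(axis=1)

for n_lists in (1, 3, 5):
    s = pb_max_scores(true_p=0.60, list_len=25, n_lists=n_lists)
    print(f"{n_lists} x 25-word lists: mean PB max = {s.mean():5.1f}%, SD = {s.std():4.1f}")

# How often do single 25-word lists flag a true decline from 60% to 40%?
first = pb_max_scores(0.60, 25, 1)
second = pb_max_scores(0.40, 25, 1)
print("decline flagged in", round(100 * float(np.mean(second < first - 8))), "% of simulations")
```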

  11. The Interpretability of the Word “Soxan” in Ferdowsi’s Shahnameh

    Directory of Open Access Journals (Sweden)

    F Vejdani

    2014-02-01

    Other aims of this research, which is unprecedented among studies of the Shahnameh, are to foreground the status of this word in the linguistic structure of the work, which paves the way for the interpretability of the text, and to represent the poet's personal style in using such a word, which asks the reader to decode its meaning and interpret it for himself.

  12. Recurrent Partial Words

    Directory of Open Access Journals (Sweden)

    Francine Blanchet-Sadri

    2011-08-01

    Full Text Available Partial words are sequences over a finite alphabet that may contain wildcard symbols, called holes, which match or are compatible with all letters; partial words without holes are said to be full words (or simply words). Given an infinite partial word w, the number of distinct full words over the alphabet that are compatible with factors of w of length n, called subwords of w, is a measure of the complexity of infinite partial words, the so-called subword complexity. This measure is of particular interest because we can construct partial words with subword complexities not achievable by full words. In this paper, we consider the notion of recurrence over infinite partial words, that is, we study whether all of the finite subwords of a given infinite partial word appear infinitely often, and we establish connections between subword complexity and recurrence in this more general framework.

  13. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed.

  14. Memory-efficient decoding of LDPC codes

    Science.gov (United States)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.

  15. [Modulation of Metacognition with Decoded Neurofeedback].

    Science.gov (United States)

    Koizumi, Ai; Cortese, Aurelio; Amano, Kaoru; Kawato, Mitsuo; Lau, Hakwan

    2017-12-01

    Humans often assess their confidence in their own perception, e.g., feeling "confident" or "certain" of having seen a friend, or feeling "uncertain" about whether the phone rang. The neural mechanism underlying the metacognitive function that reflects subjective perception still remains under debate. We have previously used decoded neurofeedback (DecNef) to demonstrate that manipulating the multivoxel activation patterns in the frontoparietal network modulates perceptual confidence without affecting perceptual performance. The results provided clear evidence for a dissociation between perceptual confidence and performance and suggested a distinct role of the frontoparietal network in metacognition.

  16. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
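
    A minimal sketch of the underlying receiver structure is below: the destination correlates the received complex samples against each hypothesized phase-coded waveform and picks the hypothesis with the largest statistic. This is a plain correlation (maximum-likelihood) receiver with made-up codes; the patented decoder additionally forms MAP estimates of the random phase perturbations before correlating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothesized phase-coded signals (phase chips in radians); illustrative codes only.
codes = {
    "A": np.array([0, 0, np.pi, 0, np.pi, np.pi, 0, np.pi]),
    "B": np.array([0, np.pi, 0, np.pi, 0, np.pi, 0, np.pi]),
}

def transmit(name, noise_std=0.8, jitter_std=0.3):
    """Complex baseband samples of one code with per-chip phase jitter and additive noise."""
    phase = codes[name] + rng.normal(scale=jitter_std, size=len(codes[name]))
    noise = noise_std * (rng.normal(size=len(phase)) + 1j * rng.normal(size=len(phase))) / np.sqrt(2)
    return np.exp(1j * phase) + noise

def classify(samples):
    """Correlate against each hypothesized waveform and pick the largest magnitude."""
    stats = {name: abs(np.vdot(np.exp(1j * ph), samples)) for name, ph in codes.items()}
    return max(stats, key=stats.get)

correct = sum(classify(transmit("A")) == "A" for _ in range(1000))
print(f"correct detections of signal A: {correct}/1000")
```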

  17. Spatial attention in written word perception.

    Science.gov (United States)

    Montani, Veronica; Facoetti, Andrea; Zorzi, Marco

    2014-01-01

    The role of attention in visual word recognition and reading aloud is a long debated issue. Studies of both developmental and acquired reading disorders provide growing evidence that spatial attention is critically involved in word reading, in particular for the phonological decoding of unfamiliar letter strings. However, studies on healthy participants have produced contrasting results. The aim of this study was to investigate how the allocation of spatial attention may influence the perception of letter strings in skilled readers. High frequency words (HFWs), low frequency words and pseudowords were briefly and parafoveally presented either in the left or the right visual field. Attentional allocation was modulated by the presentation of a spatial cue before the target string. Accuracy in reporting the target string was modulated by the spatial cue but this effect varied with the type of string. For unfamiliar strings, processing was facilitated when attention was focused on the string location and hindered when it was diverted from the target. This finding is consistent with the assumptions of the CDP+ model of reading aloud, as well as with familiarity-sensitivity models that argue for a flexible use of attention according to the specific requirements of the string. Moreover, we found that processing of HFWs was facilitated by an extra-large focus of attention. The latter result is consistent with the hypothesis that a broad distribution of attention is the default mode during reading of familiar words because it might optimally engage the broad receptive fields of the highest detectors in the hierarchical system for visual word recognition.

  18. Universal Lyndon Words

    OpenAIRE

    Carpi, Arturo; Fici, Gabriele; Holub, Stepan; Oprsal, Jakub; Sciortino, Marinella

    2014-01-01

    A word w over an alphabet Σ is a Lyndon word if there exists an order defined on Σ for which w is lexicographically smaller than all of its conjugates (other than itself). We introduce and study universal Lyndon words, which are words over an n-letter alphabet that have length n! and such that all the conjugates are Lyndon words. We show that universal Lyndon words exist for every n and exhibit combinatorial and structural properties of these words. We then defi...

  19. Vectorization of Reed Solomon decoding and mapping on the EVP

    NARCIS (Netherlands)

    Kumar, A.; Berkel, van C.H.

    2008-01-01

    Reed Solomon (RS) codes are used in a variety of (wireless) communication systems. Although commonly implemented in dedicated hardware, this paper explores the mapping of high-throughput RS decoding on vector DSPs. The four modules of such a decoder, viz. Syndrome Computation, Key Equation Solver,

  20. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  1. Decoding bipedal locomotion from the rat sensorimotor cortex

    NARCIS (Netherlands)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-01-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower

  2. Knowledge inhibition and N400: a study with words that look like common words.

    Science.gov (United States)

    Debruille, J B

    1998-04-01

    In addition to their own representations, low frequency words, such as BRIBE, can covertly activate the representations of higher frequency words they look like (e.g., BRIDE). Hence, look-alike words can activate knowledge that is incompatible with the knowledge corresponding to accurate representations. Comparatively, eccentric words, that is, low frequency words that do not look as much like higher frequency words, are less likely to activate incompatible knowledge. This study focuses on the hypothesis that the N400 component of the event-related potential reflects the inhibition of incompatible knowledge. This hypothesis predicts that look-alike words elicit N400s of greater amplitudes than eccentric words in conditions where incompatible knowledge is inhibited. Results from a single item lexical decision experiment are reported which support the inhibition hypothesis. Copyright 1998 Academic Press.

  3. Don’t words come easy? A psychophysical exploration of word superiority

    Directory of Open Access Journals (Sweden)

    Randi eStarrfelt

    2013-09-01

    Full Text Available Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, to explore the extent and limits of the WSE. Using a carefully controlled list of three-letter words, we show that a word superiority effect can be revealed in vocal reaction times even to undegraded stimuli. With a novel combination of psychophysics and mathematical modelling, we further show that the typical WSE is specifically reflected in perceptual processing speed: single words are simply processed faster than single letters. Intriguingly, when multiple stimuli are presented simultaneously, letters are perceived more easily than words, and this is reflected both in perceptual processing speed and visual short term memory capacity. So, even if single words come easy, there is a limit to the word superiority effect.

  4. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  5. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
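
    Rice (Golomb–Rice) decoding itself is simple enough to show in a few lines: each value is a unary quotient followed by a k-bit remainder. The scalar sketch below illustrates the algorithm that the report maps onto GPGPU threads; bit-ordering conventions vary between implementations, so this is illustrative rather than the exact mission format.

```python
def rice_encode(values, k):
    """Encode non-negative integers: unary quotient (ones, then a zero) + k remainder bits."""
    bits = []
    for v in values:
        bits += [1] * (v >> k) + [0]
        bits += [(v >> b) & 1 for b in reversed(range(k))]
    return bits

def rice_decode(bits, k):
    """Invert rice_encode: read the unary quotient, then k remainder bits, per value."""
    values, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == 1:          # unary part
            q += 1
            i += 1
        i += 1                       # skip the terminating 0
        r = 0
        for _ in range(k):           # k-bit binary remainder, most significant bit first
            r = (r << 1) | bits[i]
            i += 1
        values.append((q << k) | r)
    return values

data = [3, 0, 7, 12, 1]
assert rice_decode(rice_encode(data, k=2), k=2) == data
print(rice_encode(data, 2))
```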

  6. Decoding magnetoencephalographic rhythmic activity using spectrospatial information.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Parkkonen, Lauri; Hari, Riitta; Hyvärinen, Aapo

    2013-12-01

    We propose a new data-driven decoding method called Spectral Linear Discriminant Analysis (Spectral LDA) for the analysis of magnetoencephalography (MEG). The method allows investigation of changes in rhythmic neural activity as a result of different stimuli and tasks. The introduced classification model only assumes that each "brain state" can be characterized as a combination of neural sources, each of which shows rhythmic activity at one or several frequency bands. Furthermore, the model allows the oscillation frequencies to be different for each such state. We present decoding results from 9 subjects in a four-category classification problem defined by an experiment involving randomly alternating epochs of auditory, visual and tactile stimuli interspersed with rest periods. The performance of Spectral LDA was very competitive compared with four alternative classifiers based on different assumptions concerning the organization of rhythmic brain activity. In addition, the spectral and spatial patterns extracted automatically on the basis of trained classifiers showed that Spectral LDA offers a novel and interesting way of analyzing spectrospatial oscillatory neural activity across the brain. All the presented classification methods and visualization tools are freely available as a Matlab toolbox. © 2013.

  7. Unsupervised learning of facial emotion decoding skills

    Directory of Open Access Journals (Sweden)

    Jan Oliver Huelle

    2014-02-01

    Full Text Available Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practise without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear and sadness was shown in each clip. Although no external information about the correctness of the participant’s response or the sender’s true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practise effects often observed in cognitive tasks.

  8. Unsupervised learning of facial emotion decoding skills.

    Science.gov (United States)

    Huelle, Jan O; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke

    2014-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practice effects often observed in cognitive tasks.

  9. Decoding suprathreshold stochastic resonance with optimal weights

    International Nuclear Information System (INIS)

    Xu, Liyan; Vladusich, Tony; Duan, Fabing; Gunn, Lachlan J.; Abbott, Derek; McDonnell, Mark D.

    2015-01-01

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response. - Highlights: • A weighted summing array of independently noisy binary comparators is investigated. • We present an optimal linearly weighted decoding scheme for combining the comparator responses. • We solve for the optimal weights by applying least squares regression to simulated data. • We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses. • For some parameter regions, the decrease in MSE distortion due to weighting is negligible
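
    A minimal simulation of the weighted-decoding idea, under assumed signal, noise and threshold choices rather than the paper's settings: least-squares weights for the comparator outputs are compared against plain unweighted summation in terms of mean square error.

```python
# Sketch of the weighted decoding idea: an array of noisy binary comparators
# quantizes an analog input; weights for combining their outputs are fit by
# least squares on simulated data and compared with plain unweighted summation.
# Thresholds, noise level and signal distribution are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_devices, noise_sd = 20000, 15, 0.6
x = rng.standard_normal(n_samples)                     # analog input signal
thresholds = np.linspace(-1.0, 1.0, n_devices)         # threshold levels

# Each device outputs 1 when (signal + its own noise) exceeds its threshold.
noise = noise_sd * rng.standard_normal((n_samples, n_devices))
responses = (x[:, None] + noise > thresholds).astype(float)

# Unweighted decoding: rescale the mean response to the signal's range.
unweighted = responses.mean(axis=1)
A = np.column_stack([unweighted, np.ones(n_samples)])
scale = np.linalg.lstsq(A, x, rcond=None)[0]
mse_unweighted = np.mean((A @ scale - x) ** 2)

# Optimal linear weights (plus bias) fit directly to each device's output.
B = np.column_stack([responses, np.ones(n_samples)])
w = np.linalg.lstsq(B, x, rcond=None)[0]
mse_weighted = np.mean((B @ w - x) ** 2)

print(f"MSE unweighted: {mse_unweighted:.4f}, weighted: {mse_weighted:.4f}")
```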

  10. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In the photoelectrical tracking system, Bayer images are decompressed with a traditional, CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU) which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallel part, and the last is a data-parallel part including inverse quantization, inverse discrete wavelet transform (IDWT) as well as image post-processing. For reducing the execution time, the task-parallel part is optimized by OpenMP techniques. The data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the CPU serial method.
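
    The key restructuring mentioned above is that a separable 2D IDWT decomposes into independent 1D inverse transforms, which is what maps naturally onto GPU threads. The CPU sketch below illustrates that structure with a single-level orthonormal Haar pair; the paper does not state which wavelet it uses.

```python
# CPU sketch of the separable structure that makes the IDWT data-parallel:
# a single-level 2D inverse transform is two passes of independent 1D inverse
# transforms (columns, then rows), so each row/column can map to a GPU thread.
# An orthonormal Haar pair is used purely for illustration.
import numpy as np

def haar_1d(x):
    """Forward 1D Haar step along the last axis (length must be even)."""
    even, odd = x[..., 0::2], x[..., 1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def ihaar_1d(approx, detail):
    """Inverse 1D Haar step: interleave reconstructed even/odd samples."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],))
    out[..., 0::2], out[..., 1::2] = even, odd
    return out

def ihaar_2d(ll, lh, hl, hh):
    """Single-level 2D inverse: 1D passes along columns, then along rows."""
    low = ihaar_1d(ll.T, lh.T).T      # column pass, low-pass half
    high = ihaar_1d(hl.T, hh.T).T     # column pass, high-pass half
    return ihaar_1d(low, high)        # row pass

if __name__ == "__main__":
    img = np.random.default_rng(0).random((8, 8))
    rows_lo, rows_hi = haar_1d(img)           # forward row pass
    ll, lh = haar_1d(rows_lo.T)               # forward column pass (low half)
    hl, hh = haar_1d(rows_hi.T)               # forward column pass (high half)
    rec = ihaar_2d(ll.T, lh.T, hl.T, hh.T)
    print("max reconstruction error:", np.abs(rec - img).max())
```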

  11. The Differential Contributions of Auditory-Verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders

    Science.gov (United States)

    Squires, Katie Ellen

    2013-01-01

    This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…

  12. Combinatorics on words Christoffel words and repetitions in words

    CERN Document Server

    Berstel, Jean; Reutenauer, Christophe; Saliola, Franco V

    2008-01-01

    The two parts of this text are based on two series of lectures delivered by Jean Berstel and Christophe Reutenauer in March 2007 at the Centre de Recherches Mathématiques, Montréal, Canada. Part I represents the first modern and comprehensive exposition of the theory of Christoffel words. Part II presents numerous combinatorial and algorithmic aspects of repetition-free words stemming from the work of Axel Thue-a pioneer in the theory of combinatorics on words. A beginner to the theory of combinatorics on words will be motivated by the numerous examples, and the large variety of exercises, which make the book unique at this level of exposition. The clean and streamlined exposition and the extensive bibliography will also be appreciated. After reading this book, beginners should be ready to read modern research papers in this rapidly growing field and contribute their own research to its development. Experienced readers will be interested in the finitary approach to Sturmian words that Christoffel words offe...

  13. On universal partial words

    OpenAIRE

    Chen, Herman Z. Q.; Kitaev, Sergey; Mütze, Torsten; Sun, Brian Y.

    2016-01-01

    A universal word for a finite alphabet $A$ and some integer $n\geq 1$ is a word over $A$ such that every word in $A^n$ appears exactly once as a subword (cyclically or linearly). It is well-known and easy to prove that universal words exist for any $A$ and $n$. In this work we initiate the systematic study of universal partial words. These are words that in addition to the letters from $A$ may contain an arbitrary number of occurrences of a special `joker' symbol $\Diamond$ …

  14. Word 2013 for dummies

    CERN Document Server

    Gookin, Dan

    2013-01-01

    This bestselling guide to Microsoft Word is the first and last word on Word 2013 It's a whole new Word, so jump right into this book and learn how to make the most of it. Bestselling For Dummies author Dan Gookin puts his usual fun and friendly candor back to work to show you how to navigate the new features of Word 2013. Completely in tune with the needs of the beginning user, Gookin explains how to use Word 2013 quickly and efficiently so that you can spend more time working on your projects and less time trying to figure it all out. Walks you through the capabilit

  15. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decoding. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared with the traditional GIDNC scheme. The completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.
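
    For readers unfamiliar with the underlying graph model, the sketch below builds the classical order-1 IDNC graph (one vertex per receiver/missing-packet pair, edges where a single XOR serves both) and picks a coded packet with a greedy clique search; the receiver states are hypothetical and the O2-GIDNC extension with stored NIDPs is not reproduced.

```python
# Sketch of the classical (order-1) IDNC graph behind GIDNC-style schemes:
# one vertex per (receiver, missing packet); two vertices are adjacent when a
# single XOR of their packets is instantly decodable by both receivers. A
# clique then gives one coded transmission; a greedy search stands in for the
# maximum-clique step. The receiver states below are illustrative only.

has = {            # packets already held by each receiver (hypothetical state)
    "r1": {1, 2},
    "r2": {2, 3},
    "r3": {1, 3},
}
wants = {r: {1, 2, 3} - h for r, h in has.items()}   # missing packets

vertices = [(r, p) for r, ps in wants.items() for p in sorted(ps)]

def adjacent(v, u):
    (r1, p1), (r2, p2) = v, u
    if r1 == r2:
        return False               # one transmission serves a receiver at most once
    return p1 == p2 or (p1 in has[r2] and p2 in has[r1])

# Greedy clique: repeatedly add the vertex compatible with everything chosen.
clique = []
for v in vertices:
    if all(adjacent(v, u) for u in clique):
        clique.append(v)

coded_packets = sorted({p for _, p in clique})
print("serve", [r for r, _ in clique], "with XOR of packets", coded_packets)
```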

  16. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  17. Encoder-decoder optimization for brain-computer interfaces.

    Science.gov (United States)

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
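
    A toy version of the "optimal fixed decoder for a given encoding model" idea (not the authors' framework or numerical methods): neural rates are simulated as a linear encoding of intended 2-D cursor velocity plus noise, and the mean-squared-error-optimal linear decoder is obtained by least squares on training data.

```python
# Toy illustration of picking a decoder matched to a fixed encoding model
# (not the authors' framework): neural activity is simulated as a linear
# encoding of 2-D intended cursor velocity plus noise, and the decoder that
# minimizes mean squared error is obtained by least squares on training data.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_neurons = 2000, 500, 30

E = rng.standard_normal((n_neurons, 2))       # encoding model: intent -> rates

def simulate(n):
    intent = rng.standard_normal((n, 2))                          # desired velocity
    rates = intent @ E.T + 0.5 * rng.standard_normal((n, n_neurons))
    return intent, rates

intent_tr, rates_tr = simulate(n_train)
intent_te, rates_te = simulate(n_test)

# Fixed linear decoder D chosen by least squares: intent ~= rates @ D.
D = np.linalg.lstsq(rates_tr, intent_tr, rcond=None)[0]

pred = rates_te @ D
mse = np.mean((pred - intent_te) ** 2)
print(f"test MSE of the optimized fixed decoder: {mse:.4f}")
```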

  18. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

    Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  19. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
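
    The algorithm itself is beyond a short sketch, but the underlying idea of adapting a hard decoder to the actual noise can be shown on the smallest possible example: for the 3-qubit bit-flip code, the lookup table below assigns to each syndrome the error pattern that is most likely under a per-qubit flip-probability model (an assumed toy noise model, not the general Markovian channels treated in the paper).

```python
# Much-simplified illustration of likelihood-based hard decoding (not the
# paper's algorithm): for the 3-qubit bit-flip repetition code, the correction
# assigned to each syndrome is the X-error pattern that is most probable under
# a per-qubit flip-probability model, so the lookup table adapts to the noise.
from itertools import product

def syndrome(error):
    """Parity checks Z1Z2 and Z2Z3 of a bit-flip pattern (1 = flipped)."""
    return (error[0] ^ error[1], error[1] ^ error[2])

def pattern_probability(error, p):
    prob = 1.0
    for e, pi in zip(error, p):
        prob *= pi if e else (1.0 - pi)
    return prob

def build_decoder(p):
    """Map each syndrome to its most likely error pattern under flip probs p."""
    table = {}
    for error in product((0, 1), repeat=3):
        s = syndrome(error)
        if s not in table or pattern_probability(error, p) > pattern_probability(table[s], p):
            table[s] = error
    return table

if __name__ == "__main__":
    biased = (0.01, 0.01, 0.30)       # qubit 3 is much noisier
    table = build_decoder(biased)
    # Syndrome (0, 1) is triggered by either X3 alone or X1X2; with the bias
    # above the decoder correctly prefers the single flip on qubit 3.
    print(table[(0, 1)])
```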

  20. Toddlers' sensitivity to within-word coarticulation during spoken word recognition: Developmental differences in lexical competition.

    Science.gov (United States)

    Zamuner, Tania S; Moore, Charlotte; Desmeules-Trudel, Félix

    2016-12-01

    To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Understanding Medical Words

    Science.gov (United States)

    Past Issues / Summer 2009 … Medicine that teaches you about many of the words related to your health care …

  2. Decoding emotional valence from electroencephalographic rhythmic activity.

    Science.gov (United States)

    Celikkanat, Hande; Moriya, Hiroki; Ogawa, Takeshi; Kauppi, Jukka-Pekka; Kawanabe, Motoaki; Hyvarinen, Aapo

    2017-07-01

    We attempt to decode emotional valence from electroencephalographic rhythmic activity in a naturalistic setting. We employ a data-driven method developed in a previous study, Spectral Linear Discriminant Analysis, to discover the relationships between the classification task and independent neuronal sources, optimally utilizing multiple frequency bands. A detailed investigation of the classifier provides insight into the neuronal sources related with emotional valence, and the individual differences of the subjects in processing emotions. Our findings show: (1) sources whose locations are similar across subjects are consistently involved in emotional responses, with the involvement of parietal sources being especially significant, and (2) even though the locations of the involved neuronal sources are consistent, subjects can display highly varying degrees of valence-related EEG activity in the sources.

  3. Decoding the mechanisms of Antikythera astronomical device

    CERN Document Server

    Lin, Jian-Liang

    2016-01-01

    This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...

  4. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  5. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
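
    For reference, a minimal hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code is sketched below; the fixed-delay and minimal-bit-error-probability variants discussed in the abstract are not reproduced.

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, constraint-length-3
# convolutional code with generators (7, 5) in octal, shown for comparison
# with the fixed-delay / minimal-bit-error-probability ideas in the abstract.

G = (0b111, 0b101)           # generator polynomials
K = 3                        # constraint length -> 4 states

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                expected = [bin(reg & g).count("1") & 1 for g in G]
                branch = sum(x != y for x, y in zip(expected, r))
                nxt = reg >> 1
                cand = metric[state] + branch
                if cand < new_metric[nxt]:
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = encode(msg)
    coded[3] ^= 1                       # inject a single channel bit error
    assert viterbi_decode(coded, len(msg)) == msg
    print("decoded correctly despite one bit error")
```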

  6. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  7. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  8. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve rec

  9. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  10. EXIT Chart Analysis of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2007-01-01

    Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can be extended to channels with larger output alphabets. By increasing the output alphabet from hard decisions to four symbols, a gain of more than 1.0 dB is achieved using optimized codes. For this code optimization, the mixing property of EXIT functions has to be modified to the case of binary message-passing decoders.
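
    The simplest member of the hard-decision decoding family is easy to sketch (this is plain bit flipping, not Gallager B or the four-symbol variant analyzed above); the (7,4) Hamming parity-check matrix stands in for a large sparse LDPC matrix.

```python
# Small hard-decision bit-flipping decoder in the spirit of binary
# message-passing LDPC decoding (the simplest such decoder, not Gallager B).
# The parity-check matrix of the (7,4) Hamming code is used purely as a
# compact example; real LDPC matrices are large and sparse.
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                       # all parity checks satisfied
        # Count, for every bit, how many unsatisfied checks it touches.
        unsatisfied = syndrome @ H
        worst = unsatisfied.max()
        r[unsatisfied == worst] ^= 1       # flip the most suspicious bits
    return r

if __name__ == "__main__":
    codeword = np.array([0, 0, 0, 0, 0, 0, 0])   # all-zero codeword
    received = codeword.copy()
    received[2] ^= 1                              # single bit error
    print("decoded:", bit_flip_decode(received, H))
```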

  11. Turbo decoder architecture for beyond-4G applications

    CERN Document Server

    Wong, Cheng-Chi

    2013-01-01

    This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond 4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respec

  12. Behavioral decoding of working memory items inside and outside the focus of attention.

    Science.gov (United States)

    Mallett, Remington; Lewis-Peacock, Jarrod A

    2018-03-31

    How we attend to our thoughts affects how we attend to our environment. Holding information in working memory can automatically bias visual attention toward matching information. By observing attentional biases on reaction times to visual search during a memory delay, it is possible to reconstruct the source of that bias using machine learning techniques and thereby behaviorally decode the content of working memory. Can this be done when more than one item is held in working memory? There is some evidence that multiple items can simultaneously bias attention, but the effects have been inconsistent. One explanation may be that items are stored in different states depending on the current task demands. Recent models propose functionally distinct states of representation for items inside versus outside the focus of attention. Here, we use behavioral decoding to evaluate whether multiple memory items-including temporarily irrelevant items outside the focus of attention-exert biases on visual attention. Only the single item in the focus of attention was decodable. The other item showed a brief attentional bias that dissipated until it returned to the focus of attention. These results support the idea of dynamic, flexible states of working memory across time and priority. © 2018 New York Academy of Sciences.

  13. Study of bifurcation behavior of two-dimensional turbo product code decoders

    International Nuclear Information System (INIS)

    He Yejun; Lau, Francis C.M.; Tse, Chi K.

    2008-01-01

    Turbo codes, low-density parity-check (LDPC) codes and turbo product codes (TPCs) are high performance error-correction codes which employ iterative algorithms for decoding. Under different conditions, the behaviors of the decoders are different. While the nonlinear dynamical behaviors of turbo code decoders and LDPC decoders have been reported in the literature, the dynamical behavior of TPC decoders is relatively unexplored. In this paper, we investigate the behavior of the iterative algorithm of a two-dimensional TPC decoder when the input signal-to-noise ratio (SNR) varies. The quantity to be measured is the mean square value of the posterior probabilities of the information bits. Unlike turbo decoders or LDPC decoders, TPC decoders do not produce a clear 'waterfall region'. This is mainly because the TPC decoding algorithm does not converge to 'indecisive' fixed points even at very low SNR values

  14. Study of bifurcation behavior of two-dimensional turbo product code decoders

    Energy Technology Data Exchange (ETDEWEB)

    He Yejun [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (China); Lau, Francis C.M. [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (China)], E-mail: encmlau@polyu.edu.hk; Tse, Chi K. [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (China)

    2008-04-15

    Turbo codes, low-density parity-check (LDPC) codes and turbo product codes (TPCs) are high performance error-correction codes which employ iterative algorithms for decoding. Under different conditions, the behaviors of the decoders are different. While the nonlinear dynamical behaviors of turbo code decoders and LDPC decoders have been reported in the literature, the dynamical behavior of TPC decoders is relatively unexplored. In this paper, we investigate the behavior of the iterative algorithm of a two-dimensional TPC decoder when the input signal-to-noise ratio (SNR) varies. The quantity to be measured is the mean square value of the posterior probabilities of the information bits. Unlike turbo decoders or LDPC decoders, TPC decoders do not produce a clear 'waterfall region'. This is mainly because the TPC decoding algorithm does not converge to 'indecisive' fixed points even at very low SNR values.

  15. Architecture for time or transform domain decoding of reed-solomon codes

    Science.gov (United States)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,233) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
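
    Both decoders share the initial syndrome computation S(x); the sketch below shows that stage over GF(2^8), assuming the common primitive polynomial 0x11D. The block length and parity count are illustrative choices and not taken from the patent.

```python
# Sketch of the shared first stage of both decoders: syndrome computation
# S_j = r(alpha^j) over GF(2^8). Field tables use the common primitive
# polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D); block length and parity count
# below are illustrative, not necessarily the patent's parameters.

PRIM = 0x11D
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]            # duplicate table to avoid modular reduction

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, n_parity):
    """S_j = received(alpha^j) for j = 1 .. n_parity (all zero for a codeword)."""
    out = []
    for j in range(1, n_parity + 1):
        s = 0
        for i, r in enumerate(received):
            if r:
                s ^= gf_mul(r, EXP[(i * j) % 255])
        out.append(s)
    return out

if __name__ == "__main__":
    n, n_parity = 255, 32
    received = [0] * n                 # the all-zero word is a valid RS codeword
    received[10] = 0x5A                # single symbol error at position 10
    S = syndromes(received, n_parity)
    # For a single error of magnitude m at position i, S_j = m * alpha^(i*j).
    assert all(S[j - 1] == gf_mul(0x5A, EXP[(10 * j) % 255])
               for j in range(1, n_parity + 1))
    print("nonzero syndromes reveal the error:", S[:4])
```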

  16. The Effect of Known-and-Unknown Word Combinations on Intentional Vocabulary Learning

    Science.gov (United States)

    Kasahara, Kiwamu

    2011-01-01

    The purpose of this study is to examine whether learning a known-and-unknown word combination is superior in terms of retention and retrieval of meaning to learning a single unknown word. The term "combination" in this study means a two-word collocation of a familiar word and a word that is new to the participants. Following the results of…

  17. WordPress Bible

    CERN Document Server

    Brazell, Aaron

    2010-01-01

    The WordPress Bible provides a complete and thorough guide to the largest self-hosted blogging tool. This guide starts by covering the basics of WordPress, such as installation and the principles of blogging, marketing and social media interaction, but then quickly ramps the reader up to more intermediate and advanced topics such as plugins, the WordPress Loop, themes and templates, custom fields, caching, security and more. The WordPress Bible is the only complete resource one needs to learn WordPress from beginning to end.

  18. Decoding of finger trajectory from ECoG using deep learning

    Science.gov (United States)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Objective. Conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained different stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. Approach. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and a RNN that was able to process sequential data and recognize temporal dynamics in the neural data. CNN was used as the feature extractor and LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Main results. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. Significance. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layer and used LSTM layer to capture the

  19. Construction and decoding of a class of algebraic geometry codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1989-01-01

    A class of codes derived from algebraic plane curves is constructed. The concepts and results from algebraic geometry that are used are explained in detail; no further knowledge of algebraic geometry is needed. Parameters, generator and parity-check matrices are given. The main result is a decoding algorithm which turns out to be a generalization of the Peterson algorithm for decoding BCH codes.

  20. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore, we calculate an upper bound on the probability of getting more than one codeword as output.

  1. The brain's silent messenger: using selective attention to decode human thought for brain-based communication.

    Science.gov (United States)

    Naci, Lorina; Cusack, Rhodri; Jia, Vivian Z; Owen, Adrian M

    2013-05-29

    The interpretation of human thought from brain activity, without recourse to speech or action, is one of the most provoking and challenging frontiers of modern neuroscience. In particular, patients who are fully conscious and awake, yet, due to brain damage, are unable to show any behavioral responsivity, expose the limits of the neuromuscular system and the necessity for alternate forms of communication. Although it is well established that selective attention can significantly enhance the neural representation of attended sounds, it remains, thus far, untested as a response modality for brain-based communication. We asked whether its effect could be reliably used to decode answers to binary (yes/no) questions. Fifteen healthy volunteers answered questions (e.g., "Do you have brothers or sisters?") in the fMRI scanner, by selectively attending to the appropriate word ("yes" or "no"). Ninety percent of the answers were decoded correctly based on activity changes within the attention network. The majority of volunteers conveyed their answers with less than 3 min of scanning, suggesting that this technique is suited for communication in a reasonable amount of time. Formal comparison with the current best-established fMRI technique for binary communication revealed improved individual success rates and scanning times required to detect responses. This novel fMRI technique is intuitive, easy to use in untrained participants, and reliably robust within brief scanning times. Possible applications include communication with behaviorally nonresponsive patients.

  2. Assessing neglect dyslexia with compound words.

    Science.gov (United States)

    Reinhart, Stefan; Schunck, Alexander; Schaadt, Anna Katharina; Adams, Michaela; Simon, Alexandra; Kerkhoff, Georg

    2016-10-01

    The neglect syndrome is frequently associated with neglect dyslexia (ND), which is characterized by omissions or misread initial letters of single words. ND is usually assessed with standardized reading texts in clinical settings. However, particularly in the chronic phase of ND, patients often report reading deficits in everyday situations but show (nearly) normal performances in test situations that are commonly well-structured. To date, sensitive and standardized tests to assess the severity and characteristics of ND are lacking, although reading is of high relevance for daily life and vocational settings. Several studies found modulating effects of different word features on ND. We combined those features in a novel test to enhance test sensitivity in the assessment of ND. Low-frequency words of different length that contain residual pronounceable words when the initial letter strings are neglected were selected. We compared these words in a group of 12 ND-patients suffering from right-hemispheric first-ever stroke with word stimuli containing no existing residual words. Finally, we tested whether the serially presented words are more sensitive for the diagnosis of ND than text reading. The severity of ND was modulated strongly by the ND-test words and error frequencies in single word reading of ND words were on average more than 10 times higher than in a standardized text reading test (19.8% vs. 1.8%). The novel ND-test maximizes the frequency of specific ND-errors and is therefore more sensitive for the assessment of ND than conventional text reading tasks. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. An overview of turbo decoding on fading channels

    OpenAIRE

    ATILGAN, Doğan

    2009-01-01

    A review of turbo coding and decoding has been presented in the literature [1]. In that paper, turbo coding and decoding on AWGN (Additive White Gaussian Noise) channels has been elaborated. In wireless communications, a phenomenon called multipath fading is frequently encountered. Therefore, investigation of efficient techniques to tackle the destructive effects of fading is essential. Turbo coding has been proven to be an efficient channel coding technique for AWGN channels. Some of the ...

  4. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  5. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  6. Effect of video decoder errors on video interpretability

    Science.gov (United States)

    Young, Darrell L.

    2014-06-01

    Advances in video compression technology can result in greater sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.
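
    The paper's template method is not specified in the abstract; as one hedged illustration of how block-stripe artifacts might be flagged, the sketch below compares pixel-difference energy across assumed 8-pixel block boundaries with the energy inside blocks.

```python
# Hedged sketch of one simple way to flag blocky decoder artifacts, loosely in
# the spirit of the "template" idea in the abstract (the paper's exact method
# is not reproduced): compare pixel differences across 8-pixel block
# boundaries with differences inside blocks; a large ratio suggests blocking.
import numpy as np

def blockiness_score(frame, block=8):
    """Ratio of mean column-difference energy on block boundaries vs. elsewhere."""
    diff = np.abs(np.diff(frame.astype(float), axis=1))
    cols = np.arange(diff.shape[1])
    boundary = (cols + 1) % block == 0           # differences straddling a boundary
    return diff[:, boundary].mean() / (diff[:, ~boundary].mean() + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    clean = rng.normal(128, 10, (64, 64))
    blocky = clean.copy()
    # Simulate a corrupted decode: shift the brightness of each 8x8 block.
    for r in range(0, 64, 8):
        for c in range(0, 64, 8):
            blocky[r:r + 8, c:c + 8] += rng.normal(0, 20)
    print("clean score: %.2f, blocky score: %.2f"
          % (blockiness_score(clean), blockiness_score(blocky)))
```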

  7. Electrophysiological difference between mental state decoding and mental state reasoning.

    Science.gov (United States)

    Cao, Bihua; Li, Yiyuan; Li, Fuhong; Li, Hong

    2012-06-29

    Previous studies have explored the neural mechanism of Theory of Mind (ToM), but the neural correlates of its two components, mental state decoding and mental state reasoning, remain unclear. In the present study, participants were presented with various photographs, showing an actor looking at 1 of 2 objects, either with a happy or an unhappy expression. They were asked to either decode the emotion of the actor (mental state decoding task), predict which object would be chosen by the actor (mental state reasoning task), or judge at which object the actor was gazing (physical task), while scalp potentials were recorded. Results showed that (1) the reasoning task elicited an earlier N2 peak than the decoding task did over the prefrontal scalp sites; and (2) during the late positive component (240-440 ms), the reasoning task elicited a more positive deflection than the other two tasks did at the prefrontal scalp sites. In addition, neither the decoding task nor the reasoning task showed a left/right hemisphere difference. These findings imply that mental state reasoning differs from mental state decoding early (210 ms) after stimulus onset, and that the prefrontal lobe is the neural basis of mental state reasoning. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan

    2010-08-01

    While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching type decoder structures such as the Sphere decoder and K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature with widely varying performance to area and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding techniques and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare it to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
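
    To make the tree-search trade-off concrete, the following sketch implements a breadth-first K-best detector for a toy real-valued MIMO system with ±1 symbols; the system size, K and constellation are illustrative choices, unrelated to any specific architecture surveyed in the paper.

```python
# Minimal K-best (breadth-first) detector sketch for a small real-valued MIMO
# system with +/-1 symbols. System size, K, and the constellation are toy
# choices made for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n_tx = n_rx = 4
alphabet = np.array([-1.0, 1.0])
K = 4                                   # number of surviving candidates per level

H = rng.standard_normal((n_rx, n_tx))
s_true = rng.choice(alphabet, n_tx)
y = H @ s_true + 0.1 * rng.standard_normal(n_rx)

# QR decomposition turns the metric into a causal, layer-by-layer accumulation.
Q, R = np.linalg.qr(H)
z = Q.T @ y

candidates = [([], 0.0)]                # (partial symbols from the last layer, metric)
for level in range(n_tx - 1, -1, -1):
    expanded = []
    for partial, metric in candidates:
        # Interference from already-decided layers (those below `level`).
        decided = np.array(partial[::-1])        # symbols for levels level+1 .. n_tx-1
        interf = R[level, level + 1:] @ decided if len(decided) else 0.0
        for sym in alphabet:
            inc = (z[level] - interf - R[level, level] * sym) ** 2
            expanded.append((partial + [sym], metric + inc))
    expanded.sort(key=lambda c: c[1])
    candidates = expanded[:K]           # keep only the K best partial paths

best = np.array(candidates[0][0][::-1])
print("detected:", best, " true:", s_true, " match:", np.array_equal(best, s_true))
```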

  9. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.

  10. Phoneme Awareness, Vocabulary and Word Decoding in Monolingual and Bilingual Dutch Children

    Science.gov (United States)

    Janssen, Marije; Bosman, Anna M. T.; Leseman, Paul P. M.

    2013-01-01

    The aim of this study was to investigate whether bilingually raised children in the Netherlands, who receive literacy instruction in their second language only, show an advantage on Dutch phoneme-awareness tasks compared with monolingual Dutch-speaking children. Language performance of a group of 47 immigrant first-grade children with various…

  11. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    Science.gov (United States)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration to the decoder of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.

  12. Do handwritten words magnify lexical effects in visual word recognition?

    Science.gov (United States)

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  13. Singer product apertures—A coded aperture system with a fast decoding algorithm

    International Nuclear Information System (INIS)

    Byard, Kevin; Shutler, Paul M.E.

    2017-01-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
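
    The Singer product construction itself is not reproduced here, but "standard decoding" in this context is cyclic correlation of the shadowgram with a balanced decoding array; the sketch below illustrates the idea in 1-D with a small cyclic difference-set aperture, for which the object is recovered exactly up to a known scale and offset.

```python
# Generic illustration of "standard decoding" of a coded aperture shadowgram
# by cyclic correlation with a balanced (+1/-1) decoding array. A tiny 1-D
# aperture from the (7,3,1) cyclic difference set {1,2,4} stands in for the
# Singer-based apertures of the paper, whose construction is not reproduced.
import numpy as np

N = 7
open_positions = {1, 2, 4}                       # quadratic residues mod 7
a = np.array([1 if i in open_positions else 0 for i in range(N)])
g = 2 * a - 1                                    # balanced decoding array

rng = np.random.default_rng(4)
obj = rng.integers(0, 10, N)                     # toy 1-D "object"

# Shadowgram: object circularly convolved with the aperture pattern.
s = np.array([sum(obj[k] * a[(m - k) % N] for k in range(N)) for m in range(N)])

# Standard decoding: correlate the shadowgram with the decoding array.
decoded = np.array([sum(s[m] * g[(m - j) % N] for m in range(N)) for j in range(N)])

# For this aperture, decoded = 4*obj - sum(obj), so the object is recovered
# exactly up to a known scale and offset.
recovered = (decoded + obj.sum()) // 4
print("exact recovery:", np.array_equal(recovered, obj))
```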

  14. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture based on which we implement a 9216-bit, rate-1/2, (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves BER 10−6 at 2 dB over the AWGN channel while performing a maximum of 18 decoding iterations.

  15. Singer product apertures—A coded aperture system with a fast decoding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Byard, Kevin, E-mail: kevin.byard@aut.ac.nz [School of Economics, Faculty of Business, Economics and Law, Auckland University of Technology, Auckland 1142 (New Zealand); Shutler, Paul M.E. [National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616 (Singapore)

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.

  16. Word Pocket Guide

    CERN Document Server

    Glenn, Walter

    2004-01-01

    Millions of people use Microsoft Word every day and, chances are, you're one of them. Like most Word users, you've attained a certain level of proficiency--enough to get by, with a few extra tricks and tips--but don't get the opportunity to probe much further into the real power of Word. And Word is so rich in features that regardless of your level of expertise, there's always more to master. If you've ever wanted a quick answer to a nagging question or had the thought that there must be a better way, then this second edition of Word Pocket Guide is just what you need. Updated for Word 2003

  17. Decoding reality the universe as quantum information

    CERN Document Server

    Vedral, Vlatko

    2010-01-01

    In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...

  18. Statistical coding and decoding of heartbeat intervals.

    Science.gov (United States)

    Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru

    2011-01-01

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  19. Rate Aware Instantly Decodable Network Codes

    KAUST Repository

    Douik, Ahmed

    2016-02-26

    This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms the uncoded transmissions strategies and the classical IDNC scheme. © 2015 IEEE.

  20. Encoding and decoding messages with chaotic lasers

    International Nuclear Information System (INIS)

    Alsing, P.M.; Gavrielides, A.; Kovanis, V.; Roy, R.; Thornburg, K.S. Jr.

    1997-01-01

    We investigate the structure of the strange attractor of a chaotic loss-modulated solid-state laser utilizing return maps based on a combination of intensity maxima and interspike intervals, as opposed to those utilizing Poincaré sections defined by the intensity maxima of the laser (dI/dt = 0, d²I/dt² < 0) alone. We find both experimentally and numerically that a simple, intrinsic relationship exists between an intensity maximum and the pair of preceding and succeeding interspike intervals. In addition, we numerically investigate encoding messages on the output of a chaotic transmitter laser and its subsequent decoding by a similar receiver laser. By exploiting the relationship between the intensity maxima and the interspike intervals, we demonstrate that the method utilized to encode the message is vital to the system's ability to hide the signal from unwanted deciphering. In this work alternative methods are studied in order to encode messages by modulating the magnitude of pumping of the transmitter laser and also by driving its loss modulation with more than one frequency. copyright 1997 The American Physical Society

  1. Decoding P4-ATPase substrate interactions.

    Science.gov (United States)

    Roland, Bartholomew P; Graham, Todd R

    Cellular membranes display a diversity of functions that are conferred by the unique composition and organization of their proteins and lipids. One important aspect of lipid organization is the asymmetric distribution of phospholipids (PLs) across the plasma membrane. The unequal distribution of key PLs between the cytofacial and exofacial leaflets of the bilayer creates physical surface tension that can be used to bend the membrane; and like Ca2+, a chemical gradient that can be used to transduce biochemical signals. PL flippases in the type IV P-type ATPase (P4-ATPase) family are the principal transporters used to set and repair this PL gradient, and the asymmetric organization of these membranes is encoded by the substrate specificity of these enzymes. Thus, understanding the mechanisms of P4-ATPase substrate specificity will help reveal their role in membrane organization and cell biology. Further, decoding the structural determinants of substrate specificity provides investigators the opportunity to mutationally tune this specificity to explore the role of particular PL substrates in P4-ATPase cellular functions. This work reviews the role of P4-ATPases in membrane biology, presents our current understanding of P4-ATPase substrate specificity, and discusses how these fundamental aspects of P4-ATPase enzymology may be used to enhance our knowledge of cellular membrane biology.

  2. Observing human movements helps decoding environmental forces.

    Science.gov (United States)

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-11-01

    Vision of human actions can affect several features of visual motion processing, as well as the motor responses of the observer. Here, we tested the hypothesis that action observation helps decoding environmental forces during the interception of a decelerating target within a brief time window, an intrinsically very difficult task. We employed a factorial design to evaluate the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). A button-press triggered the motion of a bullet, a piston, or a human arm. We found that the timing errors were smaller for upright scenes irrespective of gravity direction in the Bullet group, while the errors were smaller for the standard condition of normal scene and gravity in the Piston group. In the Arm group, instead, performance was better when the directions of scene and target gravity were concordant, irrespective of whether both were upright or inverted. These results suggest that the default viewer-centered reference frame is used with inanimate scenes, such as those of the Bullet and Piston protocols. Instead, the presence of biological movements in animate scenes (as in the Arm protocol) may help processing target kinematics under the ecological conditions of coherence between scene and target gravity directions.

  3. Rate Aware Instantly Decodable Network Codes

    KAUST Repository

    Douik, Ahmed; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such a scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms the uncoded transmission strategies and the classical IDNC scheme. © 2015 IEEE.

  4. Statistical coding and decoding of heartbeat intervals.

    Directory of Open Access Journals (Sweden)

    Fausto Lucena

    Full Text Available The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  5. Neuroimaging of decoding and language comprehension in young very low birth weight (VLBW) adolescents: Indications for compensatory mechanisms.

    Directory of Open Access Journals (Sweden)

    Helene van Ettinger-Veenstra

    Full Text Available In preterm children with very low birth weight (VLBW ≤ 1500 g), reading problems are often observed. Reading comprehension is dependent on word decoding and language comprehension. We investigated neural activation-within brain regions important for reading-related to components of reading comprehension in young VLBW adolescents in direct comparison to normal birth weight (NBW) term-born peers, with the use of functional magnetic resonance imaging (fMRI). We hypothesized that the decoding mechanisms will be affected by VLBW, and expect to see increased neural activity for VLBW which may be modulated by task performance and cognitive ability. The study investigated 13 (11 included in fMRI) young adolescents (ages 12 to 14 years) born preterm with VLBW and in 13 NBW controls (ages 12-14 years) for performance on the Block Design and Vocabulary subtests of the Wechsler Intelligence Scale for Children; and for semantic, orthographic, and phonological processing during an fMRI paradigm. The VLBW group showed increased phonological activation in left inferior frontal gyrus, decreased orthographic activation in right supramarginal gyrus, and decreased semantic activation in left inferior frontal gyrus. Block Design was related to altered right-hemispheric activation, and VLBW showed lower WISC Block Design scores. Left angular gyrus showed activation increase specific for VLBW with high accuracy on the semantic test. Young VLBW adolescents showed no accuracy and reaction time performance differences on our fMRI language tasks, but they did exhibit altered neural activation during these tasks. This altered activation for VLBW was observed as increased activation during phonological decoding, and as mainly decreased activation during orthographic and semantic processing. Correlations of neural activation with accuracy on the semantic fMRI task and with decreased WISC Block Design performance were specific for the VLBW group. Together, results suggest

  6. Neuroimaging of decoding and language comprehension in young very low birth weight (VLBW) adolescents: Indications for compensatory mechanisms.

    Science.gov (United States)

    van Ettinger-Veenstra, Helene; Widén, Carin; Engström, Maria; Karlsson, Thomas; Leijon, Ingemar; Nelson, Nina

    2017-01-01

    In preterm children with very low birth weight (VLBW ≤ 1500 g), reading problems are often observed. Reading comprehension is dependent on word decoding and language comprehension. We investigated neural activation-within brain regions important for reading-related to components of reading comprehension in young VLBW adolescents in direct comparison to normal birth weight (NBW) term-born peers, with the use of functional magnetic resonance imaging (fMRI). We hypothesized that the decoding mechanisms will be affected by VLBW, and expect to see increased neural activity for VLBW which may be modulated by task performance and cognitive ability. The study investigated 13 (11 included in fMRI) young adolescents (ages 12 to 14 years) born preterm with VLBW and in 13 NBW controls (ages 12-14 years) for performance on the Block Design and Vocabulary subtests of the Wechsler Intelligence Scale for Children; and for semantic, orthographic, and phonological processing during an fMRI paradigm. The VLBW group showed increased phonological activation in left inferior frontal gyrus, decreased orthographic activation in right supramarginal gyrus, and decreased semantic activation in left inferior frontal gyrus. Block Design was related to altered right-hemispheric activation, and VLBW showed lower WISC Block Design scores. Left angular gyrus showed activation increase specific for VLBW with high accuracy on the semantic test. Young VLBW adolescents showed no accuracy and reaction time performance differences on our fMRI language tasks, but they did exhibit altered neural activation during these tasks. This altered activation for VLBW was observed as increased activation during phonological decoding, and as mainly decreased activation during orthographic and semantic processing. Correlations of neural activation with accuracy on the semantic fMRI task and with decreased WISC Block Design performance were specific for the VLBW group. Together, results suggest compensatory

  7. Baby's first 10 words.

    Science.gov (United States)

    Tardif, Twila; Fletcher, Paul; Liang, Weilan; Zhang, Zhixiang; Kaciroti, Niko; Marchman, Virginia A

    2008-07-01

    Although there has been much debate over the content of children's first words, few large sample studies address this question for children at the very earliest stages of word learning. The authors report data from comparable samples of 265 English-, 336 Putonghua- (Mandarin), and 369 Cantonese-speaking 8- to 16-month-old infants whose caregivers completed MacArthur-Bates Communicative Development Inventories and reported them to produce between 1 and 10 words. Analyses of individual words indicated striking commonalities in the first words that children learn. However, substantive cross-linguistic differences appeared in the relative prevalence of common nouns, people terms, and verbs as well as in the probability that children produced even one of these word types when they had a total of 1-3, 4-6, or 7-10 words in their vocabularies. These data document cross-linguistic differences in the types of words produced even at the earliest stages of vocabulary learning and underscore the importance of parental input and cross-linguistic/cross-cultural variations in children's early word-learning.

  8. Word 2010 Bible

    CERN Document Server

    Tyson, Herb

    2010-01-01

    In-depth guidance on Word 2010 from a Microsoft MVP. Microsoft Word 2010 arrives with many changes and improvements, and this comprehensive guide from Microsoft MVP Herb Tyson is your expert, one-stop resource for it all. Master Word's new features such as a new interface and customized Ribbon, major new productivity-boosting collaboration tools, how to publish directly to blogs, how to work with XML, and much more. Follow step-by-step instructions and best practices, avoid pitfalls, discover practical workarounds, and get the very most out of your new Word 2010 with this packed guide. Coverag

  9. Activation of extrastriate and frontal cortical areas by visual words and word-like stimuli

    International Nuclear Information System (INIS)

    Petersen, S.E.; Fox, P.T.; Snyder, A.Z.; Raichle, M.E.

    1990-01-01

    Visual presentation of words activates extrastriate regions of the occipital lobes of the brain. When analyzed by positron emission tomography (PET), certain areas in the left, medial extrastriate visual cortex were activated by visually presented pseudowords that obey English spelling rules, as well as by actual words. These areas were not activated by nonsense strings of letters or letter-like forms. Thus visual word form computations are based on learned distinctions between words and nonwords. In addition, during passive presentation of words, but not pseudowords, activation occurred in a left frontal area that is related to semantic processing. These findings support distinctions made in cognitive psychology and computational modeling between high-level visual and semantic computations on single words and describe the anatomy that may underlie these distinctions

  10. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion
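
    The decoder described above emulates integrate-and-fire dynamics using only counting and simple logic. A minimal software sketch of that idea is given below; the thresholds, leak schedule, and channel wiring are invented for illustration and do not reproduce the paper's architecture.

```python
# Sketch: decoding with counters only, an integrate-and-fire style unit driven by spike events.
# Thresholds, decay schedule, and channel wiring are illustrative assumptions, not the paper's design.

def counting_decoder(spike_events, excitatory, inhibitory, threshold=8, leak_every=10):
    """spike_events: iterable of (time_step, channel); returns time steps at which the unit 'fires'."""
    count = 0
    fires = []
    t_prev = 0
    for t, ch in sorted(spike_events):
        # Leak: decrement the counter once per leak_every elapsed steps (counting only, no products).
        for _ in range((t - t_prev) // leak_every):
            if count > 0:
                count -= 1
        t_prev = t
        if ch in excitatory:
            count += 1
        elif ch in inhibitory and count > 0:
            count -= 1
        if count >= threshold:
            fires.append(t)
            count = 0  # reset after firing
    return fires

spikes = [(t, "ch3") for t in range(0, 200, 5)] + [(t, "ch7") for t in range(0, 200, 40)]
print(counting_decoder(spikes, excitatory={"ch3"}, inhibitory={"ch7"}))
```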

  11. A real-time MPEG software decoder using a portable message-passing library

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

    We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitation, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible in a general-purpose parallel machine.
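
    A message-passing decoder of this kind typically distributes independent units of the bitstream (for example, groups of pictures) across ranks and gathers the decoded frames. The skeleton below sketches that pattern with mpi4py; decode_gop is a hypothetical placeholder, and the static work split is an assumption rather than the authors' load-balancing scheme.

```python
# Skeleton of a message-passing decode loop (mpi4py), distributing independent groups of pictures.
# `decode_gop` is a hypothetical placeholder for the real MPEG decoding work.
from mpi4py import MPI

def decode_gop(gop_id):
    return f"frames of GOP {gop_id}"   # stand-in for real bitstream decoding

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    gop_ids = list(range(16))
    chunks = [gop_ids[i::size] for i in range(size)]   # simple static load balancing
else:
    chunks = None

my_gops = comm.scatter(chunks, root=0)                 # distribute work
my_frames = [decode_gop(g) for g in my_gops]           # decode locally
all_frames = comm.gather(my_frames, root=0)            # collect results for reordering/display

if rank == 0:
    print(sum(len(f) for f in all_frames), "GOPs decoded")
```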

  12. The Relationships among Cognitive Correlates and Irregular Word, Non-Word, and Word Reading

    Science.gov (United States)

    Abu-Hamour, Bashir; Urso, Annmarie; Mather, Nancy

    2012-01-01

    This study explored four hypotheses: (a) the relationships among rapid automatized naming (RAN) and processing speed (PS) to irregular word, non-word, and word reading; (b) the predictive power of various RAN and PS measures, (c) the cognitive correlates that best predicted irregular word, non-word, and word reading, and (d) reading performance of…

  13. Word of Jeremiah - Word of God

    DEFF Research Database (Denmark)

    Holt, Else Kragelund

    2007-01-01

    The article examines the relationship between God, prophet and the people in the Book of Jeremiah. The analysis shows a close connection, almost an identification, between the divine word (and consequently God himself) and the prophet, so that the prophet becomes a metaphor for God. This is done...

  14. Word translation entropy in translation

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard

    2016-01-01

    This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied....... In particular, the current study investigates the effect of these variables on early and late eye movement measures. Early eye movement measures are indicative of processes that are more automatic while late measures are more indicative of conscious processing. Most studies that found evidence of target...... language activation during source text reading in translation, i.e. co-activation of the two linguistic systems, employed late eye movement measures or reaction times. The current study therefore aims to investigate if and to what extent earlier eye movement measures in reading for translation show...

  15. Word Processing for All.

    Science.gov (United States)

    Abbott, Chris

    1991-01-01

    Pupils with special educational needs are finding that the use of word processors can give them a new confidence and pride in their own abilities. This article describes the use of such devices as the "mouse," on-screen word lists, spell checkers, and overlay keyboards. (JDD)

  16. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    Science.gov (United States)

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
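
    The decoder described above is a linear readout whose sparsity is imposed via an L1 norm, so that only a few "synaptic" weights are non-zero. A minimal sketch of such a decoder, assuming scikit-learn's Lasso on synthetic spike-count features (the stimulus and response model are invented for illustration):

```python
# Sketch: L1-penalized linear decoder (few non-zero "synaptic" weights) on synthetic spike counts.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 200

# Synthetic V1-like responses: only a small subset of neurons actually carries the stimulus.
stimulus = rng.standard_normal(n_trials)
informative = rng.choice(n_neurons, size=10, replace=False)
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
rates[:, informative] += 3.0 * stimulus[:, None]

decoder = Lasso(alpha=0.1).fit(rates, stimulus)         # L1 norm -> sparse weight vector
n_used = np.count_nonzero(decoder.coef_)
print(f"decoder uses {n_used} of {n_neurons} neurons")  # typically close to the informative subset
```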

  17. Flexible Word Classes

    DEFF Research Database (Denmark)

    • First major publication on the phenomenon • Offers cross-linguistic, descriptive, and diverse theoretical approaches • Includes analysis of data from different language families and from lesser studied languages This book is the first major cross-linguistic study of 'flexible words', i.e. words...... that cannot be classified in terms of the traditional lexical categories Verb, Noun, Adjective or Adverb. Flexible words can - without special morphosyntactic marking - serve in functions for which other languages must employ members of two or more of the four traditional, 'specialised' word classes. Thus......, flexible words are underspecified for communicative functions like 'predicating' (verbal function), 'referring' (nominal function) or 'modifying' (a function typically associated with adjectives and e.g. manner adverbs). Even though linguists have been aware of flexible word classes for more than...

  18. WordPress Bible

    CERN Document Server

    Brazell, Aaron

    2011-01-01

    Get the latest word on the biggest self-hosted blogging tool on the marketWithin a week of the announcement of WordPress 3.0, it had been downloaded over a million times. Now you can get on the bandwagon of this popular open-source blogging tool with WordPress Bible, 2nd Edition. Whether you're a casual blogger or programming pro, this comprehensive guide covers the latest version of WordPress, from the basics through advanced application development. If you want to thoroughly learn WordPress, this is the book you need to succeed.Explores the principles of blogging, marketing, and social media

  19. Vocabulary knowledge mediates the link between socioeconomic status and word learning in grade school.

    Science.gov (United States)

    Maguire, Mandy J; Schneider, Julie M; Middleton, Anna E; Ralph, Yvonne; Lopez, Michael; Ackerman, Robert A; Abel, Alyson D

    2018-02-01

    The relationship between children's slow vocabulary growth and the family's low socioeconomic status (SES) has been well documented. However, previous studies have often focused on infants or preschoolers and primarily used static measures of vocabulary at multiple time points. To date, there is no research investigating whether SES predicts a child's word learning abilities in grade school and, if so, what mediates this relationship. In this study, 68 children aged 8-15 years performed a written word learning from context task that required using the surrounding text to identify the meaning of an unknown word. Results revealed that vocabulary knowledge significantly mediated the relationship between SES (as measured by maternal education) and word learning. This was true despite the fact that the words in the linguistic context surrounding the target word are typically acquired well before 8 years of age. When controlling for vocabulary, word learning from written context was not predicted by differences in reading comprehension, decoding, or working memory. These findings reveal that differences in vocabulary growth between grade school children from low and higher SES homes are likely related to differences in the process of word learning more than knowledge of surrounding words or reading skills. Specifically, children from lower SES homes are not as effective at using known vocabulary to build a robust semantic representation of incoming text to identify the meaning of an unknown word. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Nurturing a lexical legacy: reading experience is critical for the development of word reading skill

    Science.gov (United States)

    Nation, Kate

    2017-12-01

    The scientific study of reading has taught us much about the beginnings of reading in childhood, with clear evidence that the gateway to reading opens when children are able to decode, or `sound out' written words. Similarly, there is a large evidence base charting the cognitive processes that characterise skilled word recognition in adults. Less understood is how children develop word reading expertise. Once basic reading skills are in place, what factors are critical for children to move from novice to expert? This paper outlines the role of reading experience in this transition. Encountering individual words in text provides opportunities for children to refine their knowledge about how spelling represents spoken language. Alongside this, however, reading experience provides much more than repeated exposure to individual words in isolation. According to the lexical legacy perspective, outlined in this paper, experiencing words in diverse and meaningful language environments is critical for the development of word reading skill. At its heart is the idea that reading provides exposure to words in many different contexts, episodes and experiences which, over time, sum to a rich and nuanced database about their lexical history within an individual's experience. These rich and diverse encounters bring about local variation at the word level: a lexical legacy that is measurable during word reading behaviour, even in skilled adults.

  1. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for the shape-coded structured light in which the pattern is designed as a grid shape with embedded geometrical shapes. In our decoding method, advancements are made at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature point which is the intersection of each two orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and the deep neural network technique is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a mass of pattern elements with various blurring and distortions. Third, an error correction mechanism based on epipolar constraint, coplanarity constraint and topological constraint is presented to reduce the false matches. In the experiments, several complex objects including a human hand are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.

  2. Optimal and efficient decoding of concatenated quantum block codes

    International Nuclear Information System (INIS)

    Poulin, David

    2006-01-01

    We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP-hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead.

  3. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  4. Recalling taboo and nontaboo words.

    Science.gov (United States)

    Jay, Timothy; Caldwell-Harris, Catherine; King, Krista

    2008-01-01

    People remember emotional and taboo words better than neutral words. It is well known that words that are processed at a deep (i.e., semantic) level are recalled better than words processed at a shallow (i.e., purely visual) level. To determine how depth of processing influences recall of emotional and taboo words, a levels of processing paradigm was used. Whether this effect holds for emotional and taboo words has not been previously investigated. Two experiments demonstrated that taboo and emotional words benefit less from deep processing than do neutral words. This is consistent with the proposal that memories for taboo and emotional words are a function of the arousal level they evoke, even under shallow encoding conditions. Recall was higher for taboo words, even when taboo words were cued to be recalled after neutral and emotional words. The superiority of taboo word recall is consistent with cognitive neuroscience and brain imaging research.

  5. Word learning mechanisms.

    Science.gov (United States)

    He, Angela Xiaoxue; Arunachalam, Sudha

    2017-07-01

    How do children acquire the meanings of words? Many word learning mechanisms have been proposed to guide learners through this challenging task. Despite the availability of rich information in the learner's linguistic and extralinguistic input, the word-learning task is insurmountable without such mechanisms for filtering through and utilizing that information. Different kinds of words, such as nouns denoting object concepts and verbs denoting event concepts, require to some extent different kinds of information and, therefore, access to different kinds of mechanisms. We review some of these mechanisms to examine the relationship between the input that is available to learners and learners' intake of that input-that is, the organized, interpreted, and stored representations they form. We discuss how learners segment individual words from the speech stream and identify their grammatical categories, how they identify the concepts denoted by these words, and how they refine their initial representations of word meanings. WIREs Cogn Sci 2017, 8:e1435. doi: 10.1002/wcs.1435 This article is categorized under: Linguistics > Language Acquisition Psychology > Language. © 2017 Wiley Periodicals, Inc.

  6. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
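
    The core operation of the stack algorithm is keeping partial paths sorted by metric and always extending the best one, which is exactly the sorting function the systolic priority queue implements in hardware. The sketch below shows the software skeleton of the stack algorithm with a binary heap standing in for the systolic sorter; the branch metric and the toy "received" sequence are illustrative placeholders, not a specific convolutional code.

```python
# Stack (sequential) decoding skeleton: the priority queue plays the role of the systolic sorter.
# `branch_metric` and the binary code tree are illustrative placeholders, not a specific code.
import heapq

def stack_decode(received, branch_metric, max_depth, max_expansions=10_000):
    # heapq is a min-heap, so push negated metrics to always pop the best (largest-metric) path.
    heap = [(0.0, ())]                     # (negative cumulative metric, path of decided bits)
    for _ in range(max_expansions):
        neg_metric, path = heapq.heappop(heap)   # expand the current best partial path
        if len(path) == max_depth:
            return path                          # first full-length path popped is the decision
        for bit in (0, 1):
            m = branch_metric(received, path, bit)
            heapq.heappush(heap, (neg_metric - m, path + (bit,)))
    return None                                  # give up (erasure) after too many expansions

# Toy metric: reward agreement with a noiselessly "received" bit sequence.
received = (1, 0, 1, 1, 0)
metric = lambda r, path, bit: 1.0 if r[len(path)] == bit else -3.0
print(stack_decode(received, metric, max_depth=len(received)))
```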

  7. Analysis of Minimal LDPC Decoder System on a Chip Implementation

    Directory of Open Access Journals (Sweden)

    T. Palenik

    2015-09-01

    Full Text Available This paper presents a practical method of potential replacement of several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes with one, with the intention of saving as much memory as required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC). The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for utilization in a Software Defined Radio (SDR) platform. Besides the analysis of the effects of necessary variable-node value fixation during the Belief Propagation (BP) decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with the original system structure by means of simulation.

  8. Analysis and Design of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2012-01-01

    Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results in the well-known Gallager B algorithm, and increasing the output alphabet from hard decisions to two bits yields a gain of more than 1.0 dB in the required signal to noise ratio when using optimized codes. The code optimization requires adapting the mixing property of EXIT functions to the case of binary message-passing decoders. Finally, it is shown that errors on cycles consisting only of degree two and three variable nodes cannot be corrected and a necessary and sufficient condition for the existence of a cycle-free subgraph is derived.
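
    A minimal software sketch of hard-decision (Gallager-B-style) message passing is given below; the tiny Hamming parity-check matrix and the simple-majority flip rule are illustrative choices, not the optimized designs analyzed in the paper.

```python
# Minimal Gallager-B-style hard-decision message passing on a tiny parity-check matrix.
import numpy as np

def gallager_b(H, y, max_iters=20):
    """H is a (checks x bits) 0/1 matrix, y the received hard decisions."""
    m, n = H.shape
    v2c = H * y                                       # initial variable-to-check messages = channel bits
    for _ in range(max_iters):
        row_par = v2c.sum(axis=1) % 2                 # parity of all incoming bits at each check node
        c2v = H * ((row_par[:, None] + v2c) % 2)      # extrinsic parity: exclude the edge's own bit
        # Variable-to-check update: flip the channel bit if a majority of the *other* checks disagree.
        new_v2c = np.zeros_like(v2c)
        for j in range(n):
            for i in np.flatnonzero(H[:, j]):
                others = [c2v[k, j] for k in np.flatnonzero(H[:, j]) if k != i]
                disagree = sum(b != y[j] for b in others)
                new_v2c[i, j] = (1 - y[j]) if (others and disagree > len(others) / 2) else y[j]
        v2c = new_v2c
        # Tentative decision: majority vote of the channel bit and all incoming check messages.
        x_hat = np.array([int(c2v[np.flatnonzero(H[:, j]), j].sum() + y[j] > (H[:, j].sum() + 1) / 2)
                          for j in range(n)])
        if not np.any(H @ x_hat % 2):
            return x_hat                              # all parity checks satisfied
    return x_hat

# (7,4) Hamming parity-check matrix, all-zero codeword, single bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.zeros(7, dtype=int)
y[2] ^= 1                                             # flip one bit
print(gallager_b(H, y))                               # expect the all-zero codeword back
```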

  9. An online brain-machine interface using decoding of movement direction from the human electrocorticogram

    Science.gov (United States)

    Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten

    2012-08-01

    A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
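
    The decoding step amounts to a two-class classifier on directionally tuned ECoG features. The sketch below illustrates that with a cross-validated linear discriminant on synthetic band-power features; the feature choice, channel count, and classifier are assumptions and do not reproduce the study's actual decoder.

```python
# Sketch: two-class movement-direction decoding from band-power features (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels = 120, 32

direction = rng.integers(0, 2, n_trials)                # 0 = one direction, 1 = the other
features = rng.standard_normal((n_trials, n_channels))  # e.g. log band power per ECoG channel
tuned = [3, 8, 21]                                       # a few channels carry directional information
features[:, tuned] += 1.2 * (2 * direction[:, None] - 1)

acc = cross_val_score(LinearDiscriminantAnalysis(), features, direction, cv=5)
print(f"cross-validated accuracy: {acc.mean():.2f}")
```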

  10. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  11. The fast decoding of Reed-Solomon codes using Fermat theoretic transforms and continued fractions

    Science.gov (United States)

    Reed, I. S.; Scholtz, R. A.; Welch, L. R.; Truong, T. K.

    1978-01-01

    It is shown that Reed-Solomon (RS) codes can be decoded by using a fast Fourier transform (FFT) algorithm over finite fields GF(F_n), where F_n is a Fermat prime, and continued fractions. This new transform decoding method is simpler than the standard method for RS codes. The computing time of this new decoding algorithm in software can be faster than the standard decoding method for RS codes.
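
    The attraction of working over GF(F_n) with F_n a Fermat prime is that the transform is exact integer arithmetic modulo F_n (no rounding error) and power-of-two transform lengths are available. The sketch below uses F_3 = 257 with primitive root 3 and a naive O(N^2) transform standing in for the fast butterfly version; in RS decoding, such evaluations of the received polynomial at powers of a root of unity yield the syndromes. The toy data are illustrative only.

```python
# Sketch: transforms over GF(F_3) = GF(257), a Fermat prime, use exact integer arithmetic mod 257.
# The naive O(N^2) transform below stands in for the fast (FFT-style) butterfly version.
P = 257                      # Fermat prime F_3 = 2^(2^3) + 1
ROOT = 3                     # 3 is a primitive root mod 257 (multiplicative order 256)

def ntt(a, alpha, p=P):
    """Evaluate the polynomial with coefficients a at alpha^0, alpha^1, ... (transform over GF(p))."""
    n = len(a)
    return [sum(a[j] * pow(alpha, i * j, p) for j in range(n)) % p for i in range(n)]

def inverse_ntt(A, alpha, p=P):
    n = len(A)
    inv_n = pow(n, -1, p)
    return [(x * inv_n) % p for x in ntt(A, pow(alpha, -1, p), p)]

N = 16
alpha = pow(ROOT, (P - 1) // N, P)        # element of multiplicative order N
data = [5, 0, 7, 1] + [0] * (N - 4)       # toy "codeword polynomial" coefficients
spectrum = ntt(data, alpha)               # in RS decoding these evaluations give the syndromes
assert inverse_ntt(spectrum, alpha) == data
print(spectrum[:4])
```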

  12. Improving throughput of single-relay DF channel using linear constellation precoding

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-08-01

    In this letter, we propose a transmission scheme to improve the overall throughput of a cooperative communication system with single decode-and-forward relay. Symbol error rate and throughput analysis of the new scheme are presented to facilitate the performance comparison with the existing decode-and-forward relaying schemes. Simulation results are further provided to corroborate the analytical results. © 2012 IEEE.

  13. Improving throughput of single-relay DF channel using linear constellation precoding

    KAUST Repository

    Fareed, Muhammad Mehboob; Yang, Hongchuan; Alouini, Mohamed-Slim

    2014-01-01

    In this letter, we propose a transmission scheme to improve the overall throughput of a cooperative communication system with single decode-and-forward relay. Symbol error rate and throughput analysis of the new scheme are presented to facilitate the performance comparison with the existing decode-and-forward relaying schemes. Simulation results are further provided to corroborate the analytical results. © 2012 IEEE.

  14. Joint Estimation and Decoding of Space-Time Trellis Codes

    Directory of Open Access Journals (Sweden)

    Zhang Jianqiu

    2002-01-01

    Full Text Available We explore the possibility of using an emerging tool in statistical signal processing, sequential importance sampling (SIS), for joint estimation and decoding of space-time trellis codes (STTC). First, we provide background on SIS, and then we discuss its application to space-time trellis code (STTC) systems. It is shown through simulations that SIS is suitable for joint estimation and decoding of STTC with time-varying flat-fading channels when phase ambiguity is avoided. We used a design criterion for STTCs and temporally correlated channels that combats phase ambiguity without pilot signaling. We have shown by simulations that the design is valid.

  15. Efficient decoding of random errors for quantum expander codes

    OpenAIRE

    Fawzi , Omar; Grospellier , Antoine; Leverrier , Anthony

    2017-01-01

    We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottes...

  16. Min-Max decoding for non binary LDPC codes

    OpenAIRE

    Savin, Valentin

    2008-01-01

    Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them and it has the benefit of two possible LLR domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality and a reduced c...

  17. Linear-time general decoding algorithm for the surface code

    Science.gov (United States)

    Darmawan, Andrew S.; Poulin, David

    2018-05-01

    A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.

  18. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    David G. Daut

    2007-03-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  19. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Weiliang

    2007-01-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  20. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid; Alouini, Mohamed-Slim

    2013-01-01

    channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement

  1. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs  are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance....
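
    The simplest non-trivial instance of this construction is the classical (u | u+v) Plotkin construction, which is the matrix-product code [C1 C2]·A with A = [[1,1],[0,1]]. The sketch below builds such a code from two toy nested binary codes and checks the expected minimum distance min(2·d1, d2) by brute force; the component codes are illustrative choices, not the paper's examples.

```python
# Sketch: the (u | u+v) construction as a matrix-product code [C1 C2]·A with A = [[1,1],[0,1]].
# Toy component codes: C1 = length-4 single-parity-check code, C2 = length-4 repetition code (C2 ⊆ C1).
import itertools
import numpy as np

G1 = np.array([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]])   # generator of C1 (even-weight code), d1 = 2
G2 = np.array([[1, 1, 1, 1]])                               # generator of C2 (repetition code),  d2 = 4

def codewords(G):
    k = G.shape[0]
    return [tuple(np.dot(m, G) % 2) for m in itertools.product([0, 1], repeat=k)]

# Matrix-product codeword: the row block [u v] times A, i.e. (u, u+v) over GF(2).
mp_code = {tuple(np.concatenate([u, (np.array(u) + np.array(v)) % 2]))
           for u in codewords(G1) for v in codewords(G2)}

d_min = min(sum(c) for c in mp_code if any(c))
print(len(mp_code), "codewords, minimum distance", d_min)   # expect 16 codewords, d = min(2*d1, d2) = 4
```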

  2. Adaptive decoding of MPEG-4 sprites for memory-constrained embedded systems

    NARCIS (Netherlands)

    Pastrnak, M.; Farin, D.S.; With, de P.H.N.; Cardinal, J.; Cerf, N.; Delgrnage, O.

    2005-01-01

    Background sprite decoding is an essential part of object-based video coding. The composition and rendering of a final scene involves the placing of individual video objects in a predefined way superimposed on the decoded background image. The MPEG-4 standard includes the decoding algorithm for

  3. tRNA's wobble decoding of the genome: 40 years of modification.

    Science.gov (United States)

    Agris, Paul F; Vendeix, Franck A P; Graham, William D

    2007-02-09

    The genetic code is degenerate, in that 20 amino acids are encoded by 61 triplet codes. In 1966, Francis Crick hypothesized that the cell's limited number of tRNAs decoded the genome by recognizing more than one codon. The ambiguity of that recognition resided in the third base-pair, giving rise to the Wobble Hypothesis. Post-transcriptional modifications at tRNA's wobble position 34, especially modifications of uridine 34, enable wobble to occur. The Modified Wobble Hypothesis proposed in 1991 that specific modifications of a tRNA wobble nucleoside shape the anticodon architecture in such a manner that interactions were restricted to the complementary base plus a single wobble pairing for amino acids with twofold degenerate codons. However, chemically different modifications at position 34 would expand the ability of a tRNA to read three or even four of the fourfold degenerate codons. One foundation of Crick's Wobble Hypothesis was that a near-constant geometry of canonical base-pairing be maintained in forming all three base-pairs between the tRNA anticodon and mRNA codon on the ribosome. In accepting an aminoacyl-tRNA, the ribosome requires maintenance of a specific geometry for the anticodon-codon base-pairing. However, it is the post-transcriptional modifications at tRNA wobble position 34 and purine 37, 3'-adjacent to the anticodon, that pre-structure the anticodon domain to ensure the correct codon binding. The modifications create both the architecture and the stability needed for decoding through restraints on anticodon stereochemistry and conformational space, and through selective hydrogen bonding. A physicochemical understanding of modified nucleoside contributions to the tRNA anticodon domain architecture and its decoding of the genome has advanced RNA world evolutionary theory, the principles of RNA chemistry, and the application of this knowledge to the introduction of new amino acids to proteins.

  4. Decoding Humor Experiences from Brain Activity of People Viewing Comedy Movies

    Science.gov (United States)

    Sawahata, Yasuhito; Komine, Kazuteru; Morita, Toshiya; Hiruma, Nobuyuki

    2013-01-01

    Humans naturally have a sense of humor. Experiencing humor not only encourages social interactions, but also produces positive physiological effects on the human body, such as lowering blood pressure. Recent neuro-imaging studies have shown evidence for distinct mental state changes at work in people experiencing humor. However, the temporal characteristics of these changes remain elusive. In this paper, we objectively measured humor-related mental states from single-trial functional magnetic resonance imaging (fMRI) data obtained while subjects viewed comedy TV programs. Measured fMRI data were labeled on the basis of the lag before or after the viewer’s perception of humor (humor onset) determined by the viewer-reported humor experiences during the fMRI scans. We trained multiple binary classifiers, or decoders, to distinguish between fMRI data obtained at each lag from ones obtained during a neutral state in which subjects were not experiencing humor. As a result, in the right dorsolateral prefrontal cortex and the right temporal area, the decoders showed significant classification accuracies even at two seconds ahead of the humor onsets. Furthermore, given a time series of fMRI data obtained during movie viewing, we found that the decoders with significant performance were also able to predict the upcoming humor events on a volume-by-volume basis. Taking into account the hemodynamic delay, our results suggest that the upcoming humor events are encoded in specific brain areas up to about five seconds before the awareness of experiencing humor. Our results provide evidence that there exists a mental state lasting for a few seconds before actual humor perception, as if a viewer is expecting the future humorous events. PMID:24324656
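
    The analysis described above trains a separate binary decoder for each lag relative to humor onset, discriminating that lag from a neutral state. The sketch below mimics that setup with synthetic fMRI features and a cross-validated logistic regression per lag; the shapes, lags, and injected signal are invented for illustration and do not correspond to the study's data.

```python
# Sketch: one binary decoder per lag relative to humor onset vs. a neutral state (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_events, n_voxels, lags = 60, 50, range(-3, 4)       # lags in volumes (e.g. 2 s TR), illustrative

neutral = rng.standard_normal((n_events, n_voxels))   # volumes drawn from a neutral state
accuracy = {}
for lag in lags:
    humor = rng.standard_normal((n_events, n_voxels))
    humor[:, :5] += 0.8 if lag >= -1 else 0.0         # pretend a signal emerges shortly before onset
    X = np.vstack([humor, neutral])
    y = np.r_[np.ones(n_events), np.zeros(n_events)]
    accuracy[lag] = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print({lag: round(a, 2) for lag, a in accuracy.items()})
```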

  5. Words that Pop!

    Science.gov (United States)

    Russell, Shirley

    1988-01-01

    To excite students' appreciation of language, comic book words--onomatopoeia--are a useful tool. Exercises and books are suggested. A list of books for adults and children is recommended, and a reproducible page is provided. (JL)

  6. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    Science.gov (United States)

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  7. Decorporation: officially a word.

    Science.gov (United States)

    Fisher, D R

    2000-05-01

    This note is the brief history of a word. Decorporation is a scientific term known to health physicists who have an interest in the removal of internally deposited radionuclides from the body after an accidental or inadvertent intake. Although the word decorporation appears many times in the radiation protection literature, it was only recently accepted by the editors of the Oxford English Dictionary as an entry for their latest edition.

  8. Decorporation: Officially a word

    International Nuclear Information System (INIS)

    Fisher, D.R.

    2000-01-01

    This note is the brief history of a word. Decorporation is a scientific term known to health physicists who have an interest in the removal of internally deposited radionuclides from the body after an accidental or inadvertent intake. Although the word decorporation appears many times in the radiation protection literature, it was only recently accepted by the editors of the Oxford English Dictionary as an entry for their latest edition

  9. Decorporation: Officially a word

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, D.R.

    2000-05-01

    This note is the brief history of a word. Decorporation is a scientific term known to health physicists who have an interest in the removal of internally deposited radionuclides from the body after an accidental or inadvertent intake. Although the word decorporation appears many times in the radiation protection literature, it was only recently accepted by the editors of the Oxford English Dictionary as an entry for their latest edition.

  10. Decorporation: Officially a word

    International Nuclear Information System (INIS)

    Fisher, Darrell R.

    1999-01-01

    This note is the brief history of a word. Decorporation is a scientific term known to health physicists who have an interest in the removal of internally deposited radionuclides from the body after an accidental or inadvertent intake. Although the word decorporation appears many times in the radiation protection literature, it was only recently accepted by the editors of the Oxford English Dictionary as an entry for their latest edition

  11. Sonority and early words

    DEFF Research Database (Denmark)

    Kjærbæk, Laila; Boeg Thomsen, Ditte; Lambertsen, Claus

    2015-01-01

    Syllables play an important role in children’s early language acquisition, and children appear to rely on clear syllabic structures as a key to word acquisition (Vihman 1996; Oller 2000). However, not all languages present children with equally clear cues to syllabic structure, and since...... acquisition therefore presents us with the opportunity to examine how children respond to the task of word learning when the input language offers less clear cues to syllabic structure than usually seen. To investigate the sound structure in Danish children’s lexical development, we need a model of syllable......-29 months. For the two children, the phonetic structure of the first ten words to occur is compared with that of the last ten words to occur before 30 months of age, and with that of ten words in between. Measures related to the sonority envelope, viz. sonority types and in particular sonority rises...

  12. Finding words in a language that allows words without vowels.

    Science.gov (United States)

    El Aissati, Abder; McQueen, James M; Cutler, Anne

    2012-07-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the constraint would be counter-productive in certain languages that allow stand-alone vowelless open-class words. One such language is Berber (where t is indeed a word). Berber listeners here detected words affixed to nonsense contexts with or without vowels. Length effects seen in other languages replicated in Berber, but in contrast to prior findings, word detection was not hindered by vowelless contexts. When words can be vowelless, otherwise universal constraints disfavoring vowelless words do not feature in spoken-word recognition. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Decoding sound level in the marmoset primary auditory cortex.

    Science.gov (United States)

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.

  14. Complete ML Decoding of the (73,45) PG Code

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    Our recent proof of the completeness of decoding by list bit flipping is reviewed. The proof is based on an enumeration of all cosets of low weight in terms of their minimum weight and syndrome weight. By using a geometric description of the error patterns we characterize all remaining cosets....

  15. Decoding Representations: How Children with Autism Understand Drawings

    Science.gov (United States)

    Allen, Melissa L.

    2009-01-01

    Young typically developing children can reason about abstract depictions if they know the intention of the artist. Children with autism spectrum disorder (ASD), who are notably impaired in social, "intention monitoring" domains, may have great difficulty in decoding vague representations. In Experiment 1, children with ASD are unable to use…

  16. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm in which the frame length is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
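
    For context, the trellis search that the QVA accelerates is the classical Viterbi algorithm. Below is a minimal, illustrative hard-decision Viterbi decoder in Python for a toy rate-1/2 convolutional code (constraint length 3, generators 7 and 5 in octal); the code, parameters, and test message are assumptions for illustration and are not taken from the paper.

        # Illustrative classical Viterbi decoder (hard decision) for a rate-1/2
        # convolutional code with constraint length 3 (generators 7 and 5, octal).

        def conv_encode(bits, gens=(0b111, 0b101)):
            reg = 0
            out = []
            for b in bits:
                reg = ((reg << 1) | b) & 0b111            # 3-bit shift register
                out.extend(bin(reg & g).count("1") % 2 for g in gens)
            return out

        def viterbi_decode(received, n_bits, gens=(0b111, 0b101)):
            n_states = 4                                   # 2^(K-1), K = 3
            INF = float("inf")
            metric = [0.0] + [INF] * (n_states - 1)        # encoder starts in state 0
            paths = [[] for _ in range(n_states)]
            for t in range(n_bits):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * n_states
                new_paths = [None] * n_states
                for s in range(n_states):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = ((s << 1) | b) & 0b111
                        nxt = reg & 0b011                  # next state = last two inputs
                        expected = [bin(reg & g).count("1") % 2 for g in gens]
                        cost = metric[s] + sum(e != x for e, x in zip(expected, r))
                        if cost < new_metric[nxt]:
                            new_metric[nxt] = cost
                            new_paths[nxt] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            best = min(range(n_states), key=lambda s: metric[s])
            return paths[best]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        coded = conv_encode(msg)
        coded[3] ^= 1                                      # inject one channel error
        print(viterbi_decode(coded, len(msg)) == msg)      # expected: True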

  17. Adaptive Combined Source and Channel Decoding with Modulation ...

    African Journals Online (AJOL)

    In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. Huffman code is used as the source code and Convolutional code is used for error control. The adaptive scheme employs a family of Convolutional codes of different rates ...

  18. Peeling Decoding of LDPC Codes with Applications in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijun Zeng

    2016-01-01

    Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis in the context of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: if the density factor is below this threshold, the recovery algorithm is successful; otherwise it fails. Simulation results are also provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding algorithms, which focuses on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance for different measurement matrix parameters and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
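
    As a concrete illustration of the peeling idea described above, the sketch below recovers a sparse, real-valued signal from measurements taken with a sparse binary matrix. The matrix sizes, sparsity, and stopping rule are illustrative assumptions rather than the authors' construction; the two peeling rules (a zero measurement zeroes its neighbours, and a measurement with a single unresolved neighbour determines it) are the generic ones.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k, d = 40, 24, 4, 3          # signal length, measurements, sparsity, column weight

        # Sparse binary measurement matrix: each signal element touches d random checks.
        A = np.zeros((m, n), dtype=int)
        for j in range(n):
            A[rng.choice(m, size=d, replace=False), j] = 1

        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse signal
        y = A @ x

        x_hat = np.full(n, np.nan)         # NaN = still unresolved
        resid = y.copy()
        active = A.copy()                  # edges to unresolved signal elements
        changed = True
        while changed and np.isnan(x_hat).any():
            changed = False
            for i in range(m):
                unresolved = np.flatnonzero(active[i])
                if unresolved.size == 0:
                    continue
                if np.isclose(resid[i], 0.0):
                    # Zero residual: all unresolved neighbours are zero (accidental
                    # cancellation has probability zero for continuous values).
                    x_hat[unresolved] = 0.0
                    active[:, unresolved] = 0
                    changed = True
                elif unresolved.size == 1:
                    # Degree-one check: peel off the single remaining neighbour.
                    j = unresolved[0]
                    x_hat[j] = resid[i]
                    resid -= A[:, j] * x_hat[j]
                    active[:, j] = 0
                    changed = True

        print("fully recovered:", np.allclose(np.nan_to_num(x_hat), x))   # typically True at this sparsity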

  19. High-throughput GPU-based LDPC decoding

    Science.gov (United States)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit under the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has created demand for high-performance and flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.

  20. Real Time Decoding of Color Symbol for Optical Positioning System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2015-01-01

    Full Text Available This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal to noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor having a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
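
    The FPGA/microcontroller partitioning itself cannot be reproduced in a few lines, but the segmentation, labelling, and feature-extraction steps the abstract describes can be sketched in software. The frame, colour threshold, and array names below are hypothetical placeholders, not the authors' implementation.

        import numpy as np
        from scipy import ndimage

        # Hypothetical captured frame: H x W x 3 RGB array (uint8).
        frame = np.zeros((120, 160, 3), dtype=np.uint8)
        frame[40:80, 60:100, 0] = 220                     # a red blob standing in for the symbol

        # 1. Segment by colour threshold (the paper picks its two printed colours for
        #    maximal signal-to-noise ratio; this particular threshold is illustrative).
        mask = (frame[..., 0] > 128) & (frame[..., 1] < 100) & (frame[..., 2] < 100)

        # 2. Connected-component labelling (done on the FPGA in the paper).
        labels, n_components = ndimage.label(mask)

        # 3. Per-component features (area, centroid) -> candidate region of interest.
        idx = list(range(1, n_components + 1))
        areas = ndimage.sum(mask, labels, index=idx)
        centroids = ndimage.center_of_mass(mask, labels, index=idx)
        roi = int(np.argmax(areas))                       # largest blob as candidate symbol
        print(n_components, areas[roi], centroids[roi])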

  1. Fast decoding of codes from algebraic plane curves

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1992-01-01

    Improvement to an earlier decoding algorithm for codes from algebraic geometry is presented. For codes from an arbitrary regular plane curve the authors correct up to d*/2 − m²/8 + m/4 − 9/8 errors, where d* is the designed distance of the code and m is the degree of the curve. The complexity of finding...
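
    As a quick numerical reading of the stated bound, the number of correctable errors is ⌊d*/2 − m²/8 + m/4 − 9/8⌋. The sketch below evaluates it for made-up parameters; the values are illustrative only.

        from math import floor

        def correctable_errors(d_star: int, m: int) -> int:
            """Error-correction capability stated for a degree-m regular plane curve."""
            return floor(d_star / 2 - m ** 2 / 8 + m / 4 - 9 / 8)

        # Illustrative numbers: designed distance 20, curve of degree 4.
        print(correctable_errors(20, 4))   # 10 - 2 + 1 - 1.125 = 7.875 -> 7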

  2. Name that tune: decoding music from the listening brain.

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven

  3. Name that tune: Decoding music from the listening brain

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven

  4. Decoding English Alphabet Letters Using EEG Phase Information

    Directory of Open Access Journals (Sweden)

    YiYan Wang

    2018-02-01

    Full Text Available Increasing evidence indicates that the phase pattern and power of the low frequency oscillations of brain electroencephalograms (EEG) contain significant information during human cognition of sensory signals such as auditory and visual stimuli. Here, we investigate whether and how the letters of the alphabet can be directly decoded from EEG phase and power data. In addition, we investigate how different band oscillations contribute to the classification and determine the critical time periods. An English letter recognition task was assigned, and statistical analyses were conducted to decode the EEG signal corresponding to each letter visualized on a computer screen. We applied a support vector machine (SVM) with a gradient descent method to learn the potential features for classification. It was observed that the EEG phase signals have a higher decoding accuracy than the oscillation power information. Low-frequency theta and alpha oscillations have phase information with higher accuracy than do other bands. The decoding performance was best when the analysis period began from 180 to 380 ms after stimulus presentation, especially in the lateral occipital and posterior temporal scalp regions (PO7 and PO8). These results may provide a new approach for brain-computer interface (BCI) techniques and may deepen our understanding of EEG oscillations in cognition.
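
    A hedged sketch of the kind of pipeline the abstract describes: band-pass the epochs to the theta/alpha range, take instantaneous phase, keep the 180-380 ms window, and train a gradient-descent linear SVM. All array names, shapes, sampling rate, and hyperparameters below are placeholders rather than the authors' settings.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from sklearn.linear_model import SGDClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs = 250                                            # sampling rate in Hz (assumed)
        epochs = rng.standard_normal((260, 2, fs))          # trials x channels (e.g. PO7/PO8) x samples
        letters = rng.permutation(np.tile(np.arange(26), 10))  # 10 placeholder trials per letter

        # Band-pass to roughly theta/alpha, then take the instantaneous phase.
        b, a = butter(4, [4, 12], btype="bandpass", fs=fs)
        phase = np.angle(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1))

        # Keep the reportedly informative 180-380 ms window and flatten to features;
        # feed sine/cosine of the phase because it is a circular quantity.
        window = slice(int(0.18 * fs), int(0.38 * fs))
        feats = phase[..., window].reshape(len(letters), -1)
        X = np.hstack([np.sin(feats), np.cos(feats)])

        # Linear SVM (hinge loss) fitted by stochastic gradient descent.
        clf = SGDClassifier(loss="hinge", alpha=1e-4, max_iter=2000)
        print(cross_val_score(clf, X, letters, cv=5).mean())   # ~1/26 chance level on random data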

  5. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda; Sorour, Sameh; Sadeghi, Parastoo

    2013-01-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC

  6. LDPC-based iterative joint source-channel decoding for JPEG2000.

    Science.gov (United States)

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  7. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and error falling, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed: 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing; 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  8. The influence of social, individual and linguistic factors on children's performance in tasks of reading single words aloud / A Influência de fatores sociais, individuais e lingüísticos no desempenho de crianças na leitura em voz alta de palavras isoladas

    Directory of Open Access Journals (Sweden)

    Patrícia Silva Lúcio

    2010-01-01

    Full Text Available This study evaluates social, individual and linguistic factors in the performance of a single-word reading aloud task. A group of 1st to 4th grade school children from Belo Horizonte-MG (N = 333) read aloud 323 single words presented on a computer screen. Measures of reaction time (RT) and error scores were collected. The Generalized Estimating Equations method revealed a grapheme-phoneme and phoneme-grapheme regularity effect in reading, and showed that the number of categories of regularity had an impact on this effect. No social factor was important in explaining the results, but mothers' education was negatively correlated with the error scores. There was no gender effect. Factors other than the traditional ones were also relevant, such as the age of reading acquisition and verbal comprehension. The work raises important theoretical issues for cognitive reading assessment in Brazil.

  9. WORD LEVEL DISCRIMINATIVE TRAINING FOR HANDWRITTEN WORD RECOGNITION

    NARCIS (Netherlands)

    Chen, W.; Gader, P.

    2004-01-01

    Word level training refers to the process of learning the parameters of a word recognition system based on word level criteria functions. Previously, researchers trained lexicon-driven handwritten word recognition systems at the character level individually. These systems generally use statistical

  10. Finding words in a language that allows words without vowels

    NARCIS (Netherlands)

    El Aissati, A.; McQueen, J.M.; Cutler, A.

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring win in twin because t cannot be a word). However, the

  11. Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding.

    Science.gov (United States)

    Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M

    2017-11-08

    When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for the reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend to exclude the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
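
    The information transfer rates quoted above are conventionally computed with the Wolpaw formula, which combines the number of targets, the classification accuracy, and the time per selection. A small sketch with illustrative numbers (not the study's exact figures):

        from math import log2

        def itr_bits_per_min(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
            """Wolpaw information transfer rate: bits per selection scaled to one minute."""
            n, p = n_targets, accuracy
            if p >= 1.0:
                bits = log2(n)
            else:
                bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
            return bits * (60.0 / seconds_per_selection)

        # Illustrative only: 32 targets, 95% accuracy, 1.5 s per selection -> about 179 bits/min.
        print(round(itr_bits_per_min(32, 0.95, 1.5), 1))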

  12. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    Science.gov (United States)

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.

  13. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7 processor 2600 running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads are compared in terms of decoding speed and resource usage, including processor and memory.

  14. Modality dependency of familiarity ratings of Japanese words.

    Science.gov (United States)

    Amano, S; Kondo, T; Kakehi, K

    1995-07-01

    Familiarity ratings for a large number of aurally and visually presented Japanese words were measured for 11 subjects, in order to investigate the modality dependency of familiarity. The correlation coefficient between auditory and visual ratings was .808, which is lower than that observed for English words, suggesting that a substantial portion of the mental lexicon is modality dependent. It was shown that the modality dependency is greater for low-familiarity words than it is for medium- or high-familiarity words. This difference between the low- and the medium- or high-familiarity words has a relationship to orthography. That is, the dependency is larger in words consisting only of kanji, which may have multiple pronunciations and usually represent meaning, than it is in words consisting only of hiragana or katakana, which have a single pronunciation and usually do not represent meaning. These results indicate that the idiosyncratic characteristics of Japanese orthography contribute to the modality dependency.

  15. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

    Full Text Available The paper proposes a low complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance as compared with the classic min-sum (MS) algorithm, generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements regarding decoding throughput and BER performance compared with state-of-the-art approaches.
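
    The complexity saving comes from replacing the exact check-node update with a cheaper approximation. The sketch below contrasts the exact tanh rule, plain min-sum, and a generic scaled/offset correction standing in for a piecewise-linear approximation; the correction coefficients are placeholders, not the values proposed in the paper.

        import numpy as np

        def check_update_exact(msgs):
            """Exact BP check-node rule: 2*atanh(prod tanh(m/2)) over the other edges."""
            out = []
            for i in range(len(msgs)):
                others = np.delete(msgs, i)
                out.append(2 * np.arctanh(np.prod(np.tanh(others / 2))))
            return np.array(out)

        def check_update_min_sum(msgs):
            """Min-sum: product of signs times the smallest magnitude of the other edges."""
            out = []
            for i in range(len(msgs)):
                others = np.delete(msgs, i)
                out.append(np.prod(np.sign(others)) * np.min(np.abs(others)))
            return np.array(out)

        def check_update_corrected(msgs, alpha=0.9375, beta=0.15):
            """Scaled/offset min-sum as a stand-in for a linearly approximated update."""
            ms = check_update_min_sum(msgs)
            return np.sign(ms) * np.maximum(alpha * np.abs(ms) - beta, 0.0)

        llrs = np.array([1.2, -0.4, 2.5, 0.8])     # incoming variable-to-check LLRs
        print(check_update_exact(llrs))
        print(check_update_min_sum(llrs))
        print(check_update_corrected(llrs))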

  16. Ixpantepec Nieves Mixtec Word Prosody

    Science.gov (United States)

    Carroll, Lucien Serapio

    This dissertation presents a phonological description and acoustic analysis of the word prosody of Ixpantepec Nieves Mixtec, which involves both a complex tone system and a default stress system. The analysis of Nieves Mixtec word prosody is complicated by a close association between morphological structure and prosodic structure, and by the interactions between word prosody and phonation type, which has both contrastive and non-contrastive roles in the phonology. I contextualize these systems within the phonology of Nieves Mixtec as a whole, within the literature on other Mixtec varieties, and within the literature on cross-linguistic prosodic typology. The literature on prosodic typology indicates that stress is necessarily defined abstractly, as structured prominence realized differently in each language. Descriptions of stress in other Mixtec varieties widely report default stress on the initial syllable of the canonical bimoraic root, though some descriptions suggest final stress or mobile stress. I first present phonological evidence---from distributional restrictions, phonological processes, and loanword adaptation---that Nieves Mixtec word prosody does involve a stress system, based on trochaic feet aligned to the root. I then present an acoustic study comparing stressed syllables to unstressed syllables, for ten potential acoustic correlates of stress. The results indicate that the acoustic correlates of stress in Nieves Mixtec include segmental duration, intensity and periodicity. Building on analyses of other Mixtec tone systems, I show that the distribution of tone and the tone processes in Nieves Mixtec support an analysis in which morae may bear H, M or L tone, where M tone is underlyingly unspecified, and each morpheme may sponsor a final +H or +L floating tone. Bimoraic roots thus host up to two linked tones and one floating tone, while monomoraic clitics host just one linked tone and one floating tone, and tonal morphemes are limited to a single

  17. Electronic Word of Behavior

    DEFF Research Database (Denmark)

    Kunst, Katrine

    It is widely recognized that the transition from Word-of-mouth (WOM) to electronic word-of-mouth (eWOM) allows for a wider and faster spread of information. However, little attention has been given to how digital channels expand the types of information consumers share. In this paper, we argue that recent years have seen a social media-facilitated move from opinion-centric eWOM (e.g. reviews) to behavior-centric (e.g. information about friends’ music consumption on Spotify). A review of the concepts of WOM and eWOM and a netnographic study reveal that the current definitions and understandings of the concepts do not capture this new kind of consumer-to-consumer information transfer about products and services. Consequently, we suggest an extension of those concepts: Electronic Word of Behavior.

  18. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis

    OpenAIRE

    Chen, C.; Schneps, M.; Masyn, K.; Thomson, J.

    2016-01-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect path between visual attention span and reading comprehension whil...

  19. The Activation of Embedded Words in Spoken Word Recognition

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  20. The Activation of Embedded Words in Spoken Word Recognition.

    Science.gov (United States)

    Zhang, Xujin; Samuel, Arthur G

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster ) indexed activation of the embedded words (e.g., ham ). When the listening conditions were optimal, isolated embedded words (e.g., ham ) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster ), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions.

  1. Essential words for the TOEFL

    CERN Document Server

    Matthiesen, Steven J

    2017-01-01

    This revised book is specifically designed for ESL students preparing to take the TOEFL. Includes new words and phrases, a section on purpose words, a list of vocabulary words with definitions, sample sentences, practice exercises for 500 need-to-know words, practice test with answer key, and more.

  2. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  3. Finding Rising and Falling Words

    NARCIS (Netherlands)

    Tjong Kim Sang, E.

    2016-01-01

    We examine two different methods for finding rising words (among which neologisms) and falling words (among which archaisms) in decades of magazine texts (millions of words) and in years of tweets (billions of words): one based on correlation coefficients of relative frequencies and time, and one

  4. Word of mouth komunikacija

    Directory of Open Access Journals (Sweden)

    Žnideršić-Kovač Ružica

    2009-01-01

    Full Text Available A consumer's buying decision is a very complex, multistep process in which many factors have a significant impact. The traditional approach to the problem of communication between a company and its consumers implies the use of marketing mix instruments, mostly the promotion mix, in order to achieve a positive purchase decision. Formal communication between the company and consumers is dominant compared to informal communication, and even in the marketing literature not enough attention is paid to informal communication such as Word of Mouth. Numerous studies show that consumers emphasize the crucial impact of Word of Mouth on their buying decisions.

  5. Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.

    Science.gov (United States)

    Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup

    2011-09-01

    The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes), including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes, named 'multi-functional CRMs', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.

  6. AARP Word 2010 for dummies

    CERN Document Server

    Gookin, Dan

    2011-01-01

    It's a whole new Word - make the most of it! Here's exactly what you need to know to get going with Word 2010. From firing up Word, using the spell checker, and working with templates to formatting documents, adding images, and saving your stuff, you'll get the first and last word on Word 2010 with this fun and easy mini guide. So get ready to channel your inner writer and start creating Word files that wow! Open the book and find: tips for navigating Word with the keyboard and mouse; advice on using the Ribbon; how to edit text and undo mistakes; things to know

  7. Glycans: bioactive signals decoded by lectins.

    Science.gov (United States)

    Gabius, Hans-Joachim

    2008-12-01

    The glycan part of cellular glycoconjugates affords a versatile means to build biochemical signals. These oligosaccharides have an exceptional talent in this respect. They surpass any other class of biomolecule in coding capacity within an oligomer (code word). Four structural factors account for this property: the potential for variability of linkage points, anomeric position and ring size as well as the aptitude for branching (first and second dimensions of the sugar code). Specific intermolecular recognition is favoured by abundant potential for hydrogen/co-ordination bonds and for C-H/pi-interactions. Fittingly, an array of protein folds has developed in evolution with the ability to select certain glycans from the natural diversity. The thermodynamics of this reaction profits from the occurrence of these ligands in only a few energetically favoured conformers, comparing favourably with highly flexible peptides (third dimension of the sugar code). Sequence, shape and local aspects of glycan presentation (e.g. multivalency) are key factors to regulate the avidity of lectin binding. At the level of cells, distinct glycan determinants, a result of enzymatic synthesis and dynamic remodelling, are being defined as biomarkers. Their presence gains a functional perspective by co-regulation of the cognate lectin as effector, for example in growth regulation. The way to tie sugar signal and lectin together is illustrated herein for two tumour model systems. In this sense, orchestration of glycan and lectin expression is an efficient means, with far-reaching relevance, to exploit the coding potential of oligosaccharides physiologically and medically.

  8. Cultural Image of Animal Words

    Institute of Scientific and Technical Information of China (English)

    邓海燕

    2017-01-01

    This paper, after introducing the definition and forms of cultural image, focuses on a detailed comparison and analysis of the cultural images of animal words in English and in Chinese from four aspects: same animal word, same cultural image; same animal word, different cultural images; different animal words, same cultural image; and different animal words, different cultural images.

  9. Greater pre-stimulus effective connectivity from the left inferior frontal area to other areas is associated with better phonological decoding in dyslexic readers

    Directory of Open Access Journals (Sweden)

    Richard E Frye

    2010-12-01

    Full Text Available Functional neuroimaging studies suggest that neural networks that subserve reading are organized differently in dyslexic readers (DRs and typical readers (TRs, yet the hierarchical structure of these networks has not been well studied. We used Granger Causality (GC to examine the effective connectivity of the preparatory network that occurs prior to viewing a non-word stimulus that requires phonological decoding in 7 DRs and 10 TRs who were young adults. The neuromagnetic activity that occurred 500 ms prior to each rhyme trial was analyzed from sensors overlying the left and right inferior frontal areas (IFA, temporoparietal areas (TPA, and ventral occipitotemporal areas (VOTA within the low, medium, and high beta and gamma sub-bands. A mixed-model analysis determined whether connectivity to or from the left and right IFAs differed across connectivity direction (into vs. out of the IFAs, brain areas, reading group, and/or performance. Results indicated that greater connectivity in the low beta sub-band from the left IFA to other cortical areas was significantly related to better non-word rhyme discrimination in DRs but not TRs. This suggests that the left IFA is an important cortical area involved in compensating for poor phonological function in DRs. We suggest that the left IFA activates a wider-than usual network prior to each trial in the service of supporting otherwise effortful phonological decoding in DRs. The fact that the left IFA provides top-down activation to both posterior left hemispheres areas used by typical readers for phonological decoding and homologous right hemisphere areas is discussed. In contrast, within the high gamma sub-band, better performance was associated with decreased connectivity between the left IFA and other brain areas, in both reading groups. Overly strong gamma connectivity during the pre-stimulus period may interfere with subsequent transient activation and deactivation of sub-networks once the non-word

  10. Neural decoding of attentional selection in multi-speaker environments without access to clean sources

    Science.gov (United States)

    O'Sullivan, James; Chen, Zhuo; Herrero, Jose; McKhann, Guy M.; Sheth, Sameer A.; Mehta, Ashesh D.; Mesgarani, Nima

    2017-10-01

    Objective. People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment with which to compare with the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. Approach. We present an end-to-end system that (1) receives a single audio channel containing a mixture of speakers that is heard by a listener along with the listener’s neural signals, (2) automatically separates the individual speakers in the mixture, (3) determines the attended speaker, and (4) amplifies the attended speaker’s voice to assist the listener. Main results. Using invasive electrophysiology recordings, we identified the regions of the auditory cortex that contribute to AAD. Given appropriate electrode locations, our system is able to decode the attention of subjects and amplify the attended speaker using only the mixed audio. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Significance. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research and moves us closer to the development of cognitively controlled hearable devices for the hearing impaired.
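
    A hedged sketch of the attention-decoding step such a framework relies on: train a linear stimulus-reconstruction decoder on the neural data, reconstruct the attended speech envelope, and pick the separated speaker whose envelope correlates best with the reconstruction. All signals below are synthetic placeholders; the actual system uses invasive recordings and a speech-separation front end.

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        rng = np.random.default_rng(1)
        T, n_elec = 5000, 32
        env_a = uniform_filter1d(np.abs(rng.standard_normal(T)), size=50)   # speaker A envelope
        env_b = uniform_filter1d(np.abs(rng.standard_normal(T)), size=50)   # speaker B envelope

        # Synthetic neural data that noisily tracks the attended speaker (A here).
        mixing = rng.standard_normal((1, n_elec))
        neural = env_a[:, None] @ mixing + 0.5 * rng.standard_normal((T, n_elec))

        # Ridge-regression stimulus-reconstruction decoder fitted on a training segment.
        train, test = slice(0, 4000), slice(4000, T)
        lam = 100.0
        X, y = neural[train], env_a[train]
        w = np.linalg.solve(X.T @ X + lam * np.eye(n_elec), X.T @ y)

        # Decode attention on held-out data by correlating with each candidate envelope.
        recon = neural[test] @ w
        corr_a = np.corrcoef(recon, env_a[test])[0, 1]
        corr_b = np.corrcoef(recon, env_b[test])[0, 1]
        print("attended speaker:", "A" if corr_a > corr_b else "B")   # expect A for this toy data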

  11. Evidence for simultaneous syntactic processing of multiple words during reading.

    Directory of Open Access Journals (Sweden)

    Joshua Snell

    Full Text Available A hotly debated issue in reading research concerns the extent to which readers process parafoveal words, and how parafoveal information might influence foveal word recognition. We investigated syntactic word processing both in sentence reading and in reading isolated foveal words when these were flanked by parafoveal words. In Experiment 1 we found a syntactic parafoveal preview benefit in sentence reading, meaning that fixation durations on target words were decreased when there was a syntactically congruent preview word at the target location (n) during the fixation on the pre-target (n-1). In Experiment 2 we used a flanker paradigm in which participants had to classify foveal target words as either noun or verb, when those targets were flanked by syntactically congruent or incongruent words (stimulus on-time 170 ms). Lower response times and error rates in the congruent condition suggested that higher-order (syntactic) information can be integrated across foveal and parafoveal words. Although higher-order parafoveal-on-foveal effects have been elusive in sentence reading, results from our flanker paradigm show that the reading system can extract higher-order information from multiple words in a single glance. We propose a model of reading to account for the present findings.

  12. Wording in international law

    NARCIS (Netherlands)

    d' Aspremont, J.

    2012-01-01

    Since the demise of philosophical foundationalism and that of the Aristotelian idea of an inner meaning of words, scholarship about international law is no longer perceived as a mining activity geared towards the extraction of pre-existing meaning. Rather, international legal scholarship is in a

  13. Wording in International Law

    NARCIS (Netherlands)

    d' Aspremont, J.

    2012-01-01

    Since the demise of philosophical foundationalism and that of the Aristotelian idea of an inner meaning of words, the scholarship about international law is no longer perceived as a mining activity geared towards the extraction of pre-existing meaning. Rather, international legal scholarship is in a

  14. A Life in Words

    DEFF Research Database (Denmark)

    Siegumfeldt, Inge Birgitte; Auster, Paul

    "Paul Auster's A Life in Words--a wide-ranging dialogue between Auster and the Danish professor I.B. Siegumfeldt--is a remarkably candid and often surprising celebration of one writer's art, craft, and life. It includes many revelations that have never been shared before, such as that he doesn...

  15. Have Words, Will Understand?

    Science.gov (United States)

    James, Jon

    2013-01-01

    Shifting the focus from words to concepts--does it work? The author shares his findings from such a project with three primary schools in the UK. Many children aged 7-10 find mastering the language of science difficult and do not make the progress that they could. Encountering complex terminology in the science language causes students to become…

  16. Doing words together

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Østergaard, Svend; Raczaszek-Leonardi, Joanna

    In this paper we test the effects of social interactions in embodied problem solving by employing a Scrabble-like setting. 28 pairs of participants had to generate as many words as possible from 2 balanced sets of 7 letters, which they could manipulate, either individually or collectively...

  17. Getting the Word Out.

    Science.gov (United States)

    Brandou, Julian R.

    1982-01-01

    Suggests public relations strategies which science educators can adopt to spread the word about the importance of good science teaching. These include preparing a fact sheet summarizing a project/course/organization, tips on creating a newsworthy event (awards, displays at a mall, and others), and what to submit to the news media. (Author/JN)

  18. Word Problem Wizardry.

    Science.gov (United States)

    Cassidy, Jack

    1991-01-01

    Presents suggestions for teaching math word problems to elementary students. The strategies take into consideration differences between reading in math and reading in other areas. A problem-prediction game and four self-checking activities are included along with a magic password challenge. (SM)

  19. Using Constant Time Delay to Teach Braille Word Recognition

    Science.gov (United States)

    Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah

    2014-01-01

    Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…

  20. Translation Ambiguity but Not Word Class Predicts Translation Performance

    Science.gov (United States)

    Prior, Anat; Kroll, Judith F.; Macwhinney, Brian

    2013-01-01

    We investigated the influence of word class and translation ambiguity on cross-linguistic representation and processing. Bilingual speakers of English and Spanish performed translation production and translation recognition tasks on nouns and verbs in both languages. Words either had a single translation or more than one translation. Translation…

  1. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter - the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.

  2. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    Science.gov (United States)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
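
    As a minimal numerical illustration of why hard-decision metrics cost information, consider BPSK with hard detection: the channel becomes a binary symmetric channel and the rate achievable with a Hamming-distance metric is 1 − H_b(p) bits per bit. This is a simplification for intuition, not the paper's QAM/BICM computation.

        from math import erfc, log2, sqrt

        def hard_decision_air(snr_db: float) -> float:
            """Bits per bit achievable with Hamming-metric decoding of BPSK after hard detection."""
            snr = 10 ** (snr_db / 10)
            p = 0.5 * erfc(sqrt(snr))                  # crossover probability of the induced BSC
            hb = -p * log2(p) - (1 - p) * log2(1 - p)  # binary entropy of the crossover
            return 1.0 - hb

        for snr_db in (0, 3, 6):
            print(snr_db, "dB:", round(hard_decision_air(snr_db), 3))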

  3. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

    Full Text Available The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied for decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.
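
    A compact sketch of the stack (priority-queue) search over a block-code tree, here for an (8, 4) extended Hamming code with a hard-decision Fano metric over a BSC. The generator matrix, crossover probability, and test word are illustrative assumptions, not the codes or channel settings used in the paper.

        import heapq
        from math import log2

        # Systematic generator matrix of an (8, 4) extended Hamming code.
        G = [[1, 0, 0, 0, 0, 1, 1, 1],
             [0, 1, 0, 0, 1, 0, 1, 1],
             [0, 0, 1, 0, 1, 1, 0, 1],
             [0, 0, 0, 1, 1, 1, 1, 0]]

        def encode(info):
            return [sum(u * g for u, g in zip(info, col)) % 2 for col in zip(*G)]

        def fano_increment(rx_bit, code_bit, p=0.05, rate=0.5):
            """Per-bit Fano metric for a BSC: log2 P(r|c) - log2 P(r) - R."""
            return (log2(2 * (1 - p)) if rx_bit == code_bit else log2(2 * p)) - rate

        def stack_decode(received, p=0.05):
            # Tree nodes are prefixes of the 4 information bits; below depth 4 the
            # branches are the parity bits, so leaves are scored on the full codeword.
            stack = [(0.0, [])]                        # (negated path metric, info prefix)
            while True:
                _, prefix = heapq.heappop(stack)       # best node so far
                if len(prefix) == 4:
                    return prefix
                for b in (0, 1):
                    cand = prefix + [b]
                    cw = encode(cand + [0] * (4 - len(cand)))
                    depth = len(cand) if len(cand) < 4 else 8
                    metric = sum(fano_increment(r, c, p)
                                 for r, c in zip(received[:depth], cw[:depth]))
                    heapq.heappush(stack, (-metric, cand))

        info = [1, 0, 1, 1]
        word = encode(info)
        word[6] ^= 1                                   # one channel error
        print(stack_decode(word) == info)              # expected: True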

  4. Decoding thalamic afferent input using microcircuit spiking activity.

    Science.gov (United States)

    Sederberg, Audrey J; Palmer, Stephanie E; MacLean, Jason N

    2015-04-01

    A behavioral response appropriate to a sensory stimulus depends on the collective activity of thousands of interconnected neurons. The majority of cortical connections arise from neighboring neurons, and thus understanding the cortical code requires characterizing information representation at the scale of the cortical microcircuit. Using two-photon calcium imaging, we densely sampled the thalamically evoked response of hundreds of neurons spanning multiple layers and columns in thalamocortical slices of mouse somatosensory cortex. We then used a biologically plausible decoder to characterize the representation of two distinct thalamic inputs, at the level of the microcircuit, to reveal those aspects of the activity pattern that are likely relevant to downstream neurons. Our data suggest a sparse code, distributed across lamina, in which a small population of cells carries stimulus-relevant information. Furthermore, we find that, within this subset of neurons, decoder performance improves when noise correlations are taken into account. Copyright © 2015 the American Physiological Society.

  5. Soft decoding a self-dual (48, 24; 12) code

    Science.gov (United States)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  6. DECODING OF ACADEMIC CONTENT BY THE 1st GRADE STUDENTS

    Directory of Open Access Journals (Sweden)

    Kamil Błaszczyński

    2017-07-01

    Full Text Available The paper discusses a comparative study conducted on 1st-grade students of sociology and pedagogy. The study focused on the students' language skills; the most important skills tested were the abilities to decode academic content. The study shows that the students have very poor skills in decoding academic content at every level of its complexity. They also have noticeable problems defining basic academic terms. The significance of the obtained results is high because of the innovative topic and character of the study, which was the first of its kind conducted on students of a Polish university. The results are also valuable for academic teachers interested in problems such as effective communication with students.

  7. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats, i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids the analysis of metabolic pathways and enables comparative studies of chemical reactions based on these features. The software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder. Contact: asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  8. Fast decoder for local quantum codes using Groebner basis

    Science.gov (United States)

    Haah, Jeongwan

    2013-03-01

    Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n^2 log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.

  9. [Efficacy of decoding training for children with difficulty reading hiragana].

    Science.gov (United States)

    Uchiyama, Hitoshi; Tanaka, Daisuke; Seki, Ayumi; Wakamiya, Eiji; Hirasawa, Noriko; Iketani, Naotake; Kato, Ken; Koeda, Tatsuya

    2013-05-01

    The present study aimed to clarify the efficacy of decoding training focusing on the correspondence between written symbols and their readings for children with difficulty reading hiragana (Japanese syllabary). Thirty-five children with difficulty reading hiragana were selected from among 367 first-grade elementary school students using a reading aloud test and were then divided into intervention (n=15) and control (n=20) groups. The intervention comprised 5 minutes of decoding training each day for a period of 3 weeks using an original program on a personal computer. Reading time and number of reading errors in the reading aloud test were compared between the groups. The intervention group showed a significant shortening of reading time (F(1,33)=5.40) relative to the control group, supporting the efficacy of decoding training for children with difficulty reading hiragana.

  10. Optimal decoding and information transmission in Hodgkin-Huxley neurons under metabolic cost constraints.

    Science.gov (United States)

    Kostal, Lubomir; Kobayashi, Ryota

    2015-10-01

    Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Electro-optic spatial decoding on the spherical-wavefront Coulomb fields of plasma electron sources.

    Science.gov (United States)

    Huang, K; Esirkepov, T; Koga, J K; Kotaki, H; Mori, M; Hayashi, Y; Nakanii, N; Bulanov, S V; Kando, M

    2018-02-13

    Detection of the pulse durations and arrival timings of relativistic electron beams is an important issue in accelerator physics. Electro-optic diagnostics based on the Coulomb fields of electron beams have the advantages of being single-shot and non-destructive. We present a study introducing the electro-optic spatial decoding technique to laser wakefield acceleration. By placing an electro-optic crystal very close to a gas target, we discovered that the Coulomb field of the electron beam possessed a spherical wavefront, inconsistent with the previously widely used model. The field structure was demonstrated by experimental measurement, analytic calculations and simulations. A general temporal mapping relationship was derived for a geometry in which the signals have spherical wavefronts. This study could be helpful for applications of electro-optic diagnostics in laser plasma acceleration experiments.

  12. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    Science.gov (United States)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered among the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
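
    For context, a QC-LDPC parity-check matrix is assembled from circulant permutation blocks, which is what makes the hardware so regular. The sketch below expands a small, hypothetical base matrix into its binary form; the base matrix and lifting size are illustrative assumptions and are not the girth-10, 25%-overhead code designed in the paper.

```python
import numpy as np

def circulant_perm(z, shift):
    """z x z circulant permutation matrix: the identity with its columns cyclically shifted."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def qc_ldpc_H(base, z):
    """Expand a base (proto) matrix into a QC-LDPC parity-check matrix.
    Entry -1 -> z x z all-zero block; entry s >= 0 -> circulant with shift s."""
    rows = []
    for base_row in base:
        blocks = [np.zeros((z, z), dtype=int) if s < 0 else circulant_perm(z, s)
                  for s in base_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Hypothetical 2 x 4 base matrix lifted by z = 5 (not the code from the paper).
B = [[0, 1, -1, 2],
     [3, -1, 0, 1]]
H = qc_ldpc_H(B, z=5)
print(H.shape)   # (10, 20)
```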

  13. Sequential decoding of intramuscular EMG signals via estimation of a Markov model.

    Science.gov (United States)

    Monsifrot, Jonathan; Le Carpentier, Eric; Aoustin, Yannick; Farina, Dario

    2014-09-01

    This paper addresses the sequential decoding of intramuscular single-channel electromyographic (EMG) signals to extract the activity of individual motor neurons. A hidden Markov model is derived from the physiological generation of the EMG signal. The EMG signal is described as a sum of several action potential (wavelet) trains embedded in noise. For each train, the time interval between wavelets is modeled by a process whose parameters are linked to the muscular activity. The parameters of this process are estimated sequentially by a Bayes filter, along with the firing instants. The method was tested on simulated signals and an experimental one, for which the rates of detection and classification of action potentials were above 95% with respect to the reference decomposition. The method works sequentially in time, and is the first to address the problem of intramuscular EMG decomposition online. It has potential applications for man-machine interfacing based on motor neuron activities.

  14. SWIPT in Multiuser MIMO Decode-and-Forward Relay Broadcasting Channel with Energy Harvesting Relays

    KAUST Repository

    Benkhelifa, Fatma

    2017-02-09

    In this paper, we consider a multiuser multiple-input multiple-output (MIMO) decode-and-forward (DF) relay broadcasting channel (BC) with a single source, multiple energy harvesting relays and multiple destinations. Since the end-to-end sum rate maximization problem is intractable, we tackle a simplified problem in which we maximize the sum of the harvested energy at the relays, employ the block diagonalization (BD) procedure at the source, and mitigate the interference between the relay-destination channels. The interference mitigation at the destinations is managed in two ways: either fix the interference covariance matrices at the destinations and update them at each iteration until convergence, or cancel the interference using an algorithm similar to the BD method. We provide numerical results to show the relevance of our proposed solution.
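
    As a rough illustration of the block diagonalization step mentioned above, the sketch below confines each user's precoder to the null space of the other users' stacked channels, so the transmitter creates no inter-user interference. The antenna counts, random channels and single-stream beamforming are illustrative assumptions rather than the paper's system setup.

```python
import numpy as np

def block_diagonalization(H_list):
    """BD precoders: user k's precoder lies in the null space of the stacked
    channels of all other users, so its signal causes them no interference."""
    precoders = []
    for k, Hk in enumerate(H_list):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, s, Vh = np.linalg.svd(H_others)
        rank = int(np.sum(s > 1e-10))
        V_null = Vh.conj().T[:, rank:]             # null-space basis of the others' channels
        _, _, Vh_eff = np.linalg.svd(Hk @ V_null)  # beamform on the effective channel
        precoders.append(V_null @ Vh_eff.conj().T[:, :1])
    return precoders

# Toy example: 3 users, 6 transmit antennas, 2 receive antennas per user.
rng = np.random.default_rng(0)
H_list = [rng.standard_normal((2, 6)) + 1j * rng.standard_normal((2, 6)) for _ in range(3)]
W = block_diagonalization(H_list)
print(abs(H_list[0] @ W[1]).max())   # ~0: user 1's precoder is invisible to user 0
```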

  15. Iterative demodulation and decoding of coded non-square QAM

    Science.gov (United States)

    Li, L.; Divsalar, D.; Dolinar, S.

    2003-01-01

    Simulation results show that, with iterative demodulation and decoding, coded NS-8QAM performs 0.5 dB better than standard 8QAM and 0.7 dB better than 8PSK at a BER of 10^-5, when the FEC code is the (15, 11) Hamming code concatenated with a rate-1 accumulator code, while coded NS-32QAM performs 0.25 dB better than standard 32QAM.

  16. Design and Implementation of Viterbi Decoder Using VHDL

    Science.gov (United States)

    Thakur, Akash; Chattopadhyay, Manju K.

    2018-03-01

    A digital design of a Viterbi decoder for a rate-1/2 convolutional encoder with constraint length K = 3 is presented in this paper. The design is coded in VHDL and simulated and synthesized using Xilinx ISE 14.7. Synthesis results show that the maximum operating frequency of the design is 100.725 MHz. The memory requirement is lower than that of the conventional method.
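
    For comparison with the hardware description, a compact software model of the same rate-1/2, K = 3 Viterbi decoder is given below. The (7, 5) octal generator pair and the hard-decision Hamming branch metric are assumptions for illustration, since the abstract does not state the generator polynomials used in the design.

```python
G = [0b111, 0b101]                 # generator polynomials 7 and 5 (octal), K = 3
K = 3
NUM_STATES = 1 << (K - 1)          # 4 trellis states

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, zero-terminated with K-1 flush bits."""
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with a Hamming branch metric."""
    INF = float("inf")
    metric = [0] + [INF] * (NUM_STATES - 1)          # start in the all-zero state
    paths = [[] for _ in range(NUM_STATES)]
    for t in range(len(received) // 2):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * NUM_STATES
        new_paths = [None] * NUM_STATES
        for state in range(NUM_STATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                out = [bin(reg & g).count("1") % 2 for g in G]
                m = metric[state] + sum(o != x for o, x in zip(out, r))
                nxt = reg >> 1
                if m < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = m, paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(NUM_STATES), key=lambda s: metric[s])
    return paths[best][:-(K - 1)]                    # strip the flush bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg)
coded[3] ^= 1                                        # inject one channel bit error
assert viterbi_decode(coded) == msg
```

    A hardware implementation typically stores survivor decisions in a traceback memory instead of explicit path lists, which is the usual source of memory savings in Viterbi decoder designs.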

  17. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  18. Decoding Gimmicks of Financial Shenanigans in Telecom Sector in India

    OpenAIRE

    Sandeep GOEL

    2013-01-01

    Major corporate financial shenanigans are passed off in the name of creative accounting, but they need to be studied for lessons learned and for strategies to avoid or reduce the incidence of such frauds in the future. This is essential for shareholders, particularly small investors who have no access to the company beyond its reported financial numbers. This paper aims to decode the level of financial shenanigan practices in corporate enterprises in the telecom sector in India. The reason being is th...

  19. Fast Transform Decoding Of Nonsystematic Reed-Solomon Codes

    Science.gov (United States)

    Truong, Trieu-Kie; Cheung, Kar-Ming; Shiozaki, A.; Reed, Irving S.

    1992-01-01

    Fast, efficient Fermat number transform used to compute F'(x) analogous to computation of syndrome in conventional decoding scheme. Eliminates polynomial multiplications and reduces number of multiplications in reconstruction of F'(x) to n log (n). Euclidean algorithm used to evaluate F(x) directly, without going through intermediate steps of solving error-locator and error-evaluator polynomials. Algorithm suitable for implementation in very-large-scale integrated circuits.

  20. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n^2), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ^3), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum-weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  1. Biological 2-Input Decoder Circuit in Human Cells

    Science.gov (United States)

    2015-01-01

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells, and the characterization is done by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions. PMID:24694115
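
    For readers unfamiliar with the terminology, a 2-to-4 decoder activates exactly one of four outputs for each combination of two inputs. The toy sketch below states that truth table, with the two inputs standing in for the small-molecule inducers and the four outputs for the fluorescent reporters; it is only a logical reference point for the biological circuit described above.

```python
def decode_2to4(a, b):
    """Ideal 2-to-4 line decoder: exactly one of the four outputs is ON per input pair."""
    return [int(not a and not b),   # output 0: neither input present
            int(not a and b),       # output 1: only input B
            int(a and not b),       # output 2: only input A
            int(a and b)]           # output 3: both inputs present

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", decode_2to4(a, b))
```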

  2. An Area-Efficient Reconfigurable LDPC Decoder with Conflict Resolution

    Science.gov (United States)

    Zhou, Changsheng; Huang, Yuebin; Huang, Shuangqu; Chen, Yun; Zeng, Xiaoyang

    Based on the Turbo-Decoding Message-Passing (TDMP) and Normalized Min-Sum (NMS) algorithms, an area-efficient LDPC decoder that supports both structured and unstructured LDPC codes is proposed in this paper. We introduce a solution to the memory access conflict problem caused by the TDMP algorithm, and we arrange the main timing schedule carefully to handle the operations of our solution while avoiding much additional hardware consumption. To reduce the memory bits needed, the extrinsic message storage strategy is also optimized. Besides, the extrinsic message recovery and the accumulation operation are merged. To verify our architecture, an LDPC decoder that supports both the China Multimedia Mobile Broadcasting (CMMB) and Digital Terrestrial/Television Multimedia Broadcasting (DTMB) standards is developed using a SMIC 0.13 µm standard CMOS process. The core area is 4.75 mm² and the maximum operating clock frequency is 200 MHz. The estimated power consumption is 48.4 mW at 25 MHz for CMMB and 130.9 mW at 50 MHz for DTMB with 5 iterations and a 1.2 V supply.
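
    A minimal software model of the normalized min-sum update used by such decoders is shown below. It runs a flooding schedule rather than the layered TDMP schedule described above, and the parity-check matrix, scaling factor and LLR sign convention are assumptions chosen for illustration.

```python
import numpy as np

def nms_decode(H, llr, max_iter=20, alpha=0.75):
    """Flooding-schedule normalized min-sum decoding of a binary LDPC code.
    H: (m, n) parity-check matrix of 0/1 entries; llr: channel LLRs, positive favouring bit 0."""
    H = np.asarray(H, dtype=int)
    m, n = H.shape
    msg_vc = np.tile(llr, (m, 1)) * H              # variable-to-check messages
    msg_cv = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(max_iter):
        for i in range(m):                         # check-node update (normalized min-sum)
            cols = np.flatnonzero(H[i])
            v = msg_vc[i, cols]
            sign_all = np.prod(np.sign(v + 1e-30))
            mag = np.abs(v)
            for idx, j in enumerate(cols):
                others = np.delete(mag, idx)
                s = sign_all * np.sign(v[idx] + 1e-30)   # product of the other signs
                msg_cv[i, j] = alpha * s * others.min()
        total = llr + msg_cv.sum(axis=0)           # a-posteriori LLRs
        hard = (total < 0).astype(int)
        if not np.any(H.dot(hard) % 2):            # all parity checks satisfied
            return hard
        for j in range(n):                         # variable-node update
            for i in np.flatnonzero(H[:, j]):
                msg_vc[i, j] = total[j] - msg_cv[i, j]
    return hard

# Example: (7, 4) Hamming code, all-zero codeword, one badly corrupted channel LLR.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
llr = np.array([-1.0, 2, 2, 2, 2, 2, 2])
print(nms_decode(H, llr))                          # -> [0 0 0 0 0 0 0]
```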

  3. Biological 2-input decoder circuit in human cells.

    Science.gov (United States)

    Guinn, Michael; Bleris, Leonidas

    2014-08-15

    Decoders are combinational circuits that convert information from n inputs to a maximum of 2^n outputs. This operation is of major importance in computing systems yet it is vastly underexplored in synthetic biology. Here, we present a synthetic gene network architecture that operates as a biological decoder in human cells, converting 2 inputs to 4 outputs. As a proof-of-principle, we use small molecules to emulate the two inputs and fluorescent reporters as the corresponding four outputs. The experiments are performed using transient transfections in human kidney embryonic cells, and the characterization is done by fluorescence microscopy and flow cytometry. We show a clear separation between the ON and OFF mean fluorescent intensity states. Additionally, we adopt the integrated mean fluorescence intensity for the characterization of the circuit and show that this metric is more robust to transfection conditions when compared to the mean fluorescent intensity. To conclude, we present the first implementation of a genetic decoder. This combinational system can be valuable toward engineering higher-order circuits as well as accommodate a multiplexed interface with endogenous cellular functions.

  4. Right word making sense of the words that confuse

    CERN Document Server

    Morrison, Elizabeth

    2012-01-01

    'Affect' or 'effect'? 'Right', 'write' or 'rite'? English can certainly be a confusing language, whether you're a native speaker or learning it as a second language. 'The Right Word' is the essential reference to help people master its subtleties and avoid making mistakes. Divided into three sections, it first examines homophones - those tricky words that sound the same but are spelled differently - then looks at words that often confuse before providing a list of commonly misspelled words.

  5. Nine Words - Nine Columns

    DEFF Research Database (Denmark)

    Trempe Jr., Robert B.; Buthke, Jan

    2016-01-01

    This book records the efforts of a one-week joint workshop between Master students from Studio 2B of Arkitektskolen Aarhus and Master students from the Harbin Institute of Technology in Harbin, China. The workshop employed nine action words to instigate team-based investigation into the effects o...... as formwork for the shaping of wood veneer. The resulting columns ‘wear’ every aspect of this design pipeline process and display the power of process towards an architectural resolution....

  6. Italian Word Association Norms.

    Science.gov (United States)

    1966-07-01

    and Russell, W.A. Systematic changes in word association norms: 1910-1952. Journal of Abnormal and Social Psychology, 1960, 60, 293-303. [The remainder of this record is OCR residue from the scanned norms document: fragments of Italian stimulus and response words such as Cartone, Compiti, Disegno, Penna, Problema, Studio, Tavolo, Difficile.]

  7. Role of syllable segmentation processes in peripheral word recognition.

    Science.gov (United States)

    Bernard, Jean-Baptiste; Calabrèse, Aurélie; Castet, Eric

    2014-12-01

    Previous studies of foveal visual word recognition provide evidence for a low-level syllable decomposition mechanism occurring during the recognition of a word. We investigated whether such a decomposition mechanism also exists in peripheral word recognition. Single words were visually presented to subjects in the peripheral field using a 6° square gaze-contingent simulated central scotoma. In the first experiment, words were either unicolor or had their adjacent syllables segmented with two different colors (color/syllable congruent condition). Reaction times for correct word identification were measured for the two different conditions and for two different print sizes. Results show a significant decrease in reaction time for the color/syllable congruent condition compared with the unicolor condition. A second experiment suggests that this effect is specific to syllable decomposition and results from strategic control, presumably involving attentional factors, rather than stimulus-driven control.

  8. The basis of orientation decoding in human primary visual cortex: fine- or coarse-scale biases?

    Science.gov (United States)

    Maloney, Ryan T

    2015-01-01

    Orientation signals in human primary visual cortex (V1) can be reliably decoded from the multivariate pattern of activity as measured with functional magnetic resonance imaging (fMRI). The precise underlying source of these decoded signals (whether by orientation biases at a fine or coarse scale in cortex) remains a matter of some controversy, however. Freeman and colleagues (J Neurosci 33: 19695-19703, 2013) recently showed that the accuracy of decoding of spiral patterns in V1 can be predicted by a voxel's preferred spatial position (the population receptive field) and its coarse orientation preference, suggesting that coarse-scale biases are sufficient for orientation decoding. Whether they are also necessary for decoding remains an open question, and one with implications for the broader interpretation of multivariate decoding results in fMRI studies. Copyright © 2015 the American Physiological Society.

  9. Emotion Decoding and Incidental Processing Fluency as Antecedents of Attitude Certainty.

    Science.gov (United States)

    Petrocelli, John V; Whitmire, Melanie B

    2017-07-01

    Previous research demonstrates that attitude certainty influences the degree to which an attitude changes in response to persuasive appeals. In the current research, decoding emotions from facial expressions and incidental processing fluency during attitude formation are examined as antecedents of both attitude certainty and attitude change. In Experiment 1, participants who decoded anger or happiness during attitude formation expressed greater attitude certainty and showed more resistance to persuasion than participants who decoded sadness. By manipulating the emotion decoded, the diagnosticity of processing fluency experienced during emotion decoding, and the gaze direction of the social targets, Experiment 2 suggests that the link between emotion decoding and attitude certainty results from incidental processing fluency. Experiment 3 demonstrated that fluency in processing irrelevant stimuli influences attitude certainty, which in turn influences resistance to persuasion. Implications for appraisal-based accounts of attitude formation and attitude change are discussed.

  10. Infants Track Word Forms in Early Word-Object Associations

    Science.gov (United States)

    Zamuner, Tania S.; Fais, Laurel; Werker, Janet F.

    2014-01-01

    A central component of language development is word learning. One characterization of this process is that language learners discover objects and then look for word forms to associate with these objects (Mcnamara, 1984; Smith, 2000). Another possibility is that word forms themselves are also important, such that once learned, hearing a familiar…

  11. Effects of providing word sounds during printed word learning

    NARCIS (Netherlands)

    Reitsma, P.; Dongen, van A.J.N.; Custers, E.

    1984-01-01

    The purpose of this study was to explore the effects of the availability of the spoken sound of words along with the printed forms during reading practice. First-grade children from two normal elementary schools practised reading several unfamiliar words in print. For half of the printed words the

  12. Reading component skills in dyslexia: word recognition, comprehension and processing speed

    Directory of Open Access Journals (Sweden)

    Darlene Godoy Oliveira

    2014-11-01

    Full Text Available The cognitive model of reading comprehension posits that reading comprehension is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, like processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness of word recognition. The following study evaluated the components of the reading comprehension model and predictive skills in children and adolescents with dyslexia. 40 children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and a Control Group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school and groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary and phonological awareness were assessed. There were no group differences regarding the accuracy in oral and reading comprehension, phonological awareness, naming, and vocabulary scores. DG performed worse than the CG in word recognition (general score and orthographic confusion items) and were slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on reading comprehension tests. Data supports the importance of delimitation of different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  13. Reading component skills in dyslexia: word recognition, comprehension and processing speed.

    Science.gov (United States)

    de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C

    2014-01-01

    The cognitive model of reading comprehension (RC) posits that RC is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, like processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness of word recognition. The following study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. 40 children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school and groups were equivalent in school grade, age, gender, and IQ. Oral and RC, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences regarding the accuracy in oral and RC, phonological awareness, naming, and vocabulary scores. DG performed worse than the CG in word recognition (general score and orthographic confusion items) and were slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on RC tests. Data supports the importance of delimitation of different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  14. Neural Correlates of Task-Irrelevant First and Second Language Emotion Words — Evidence from the Face-Word Stroop Task

    Directory of Open Access Journals (Sweden)

    Lin Fan

    2016-11-01

    Full Text Available Emotionally valenced words have thus far not been empirically examined in a bilingual population with the emotional face-word Stroop paradigm. Chinese-English bilinguals were asked to identify the facial expressions of emotion with their first (L1) or second (L2) language task-irrelevant emotion words superimposed on the face pictures. We attempted to examine how the emotional content of words modulates behavioral performance and cerebral functioning in the bilinguals' two languages. The results indicated that there were significant congruency effects for both L1 and L2 emotion words, and that identifiable differences in the magnitude of the Stroop effect between the two languages were also observed, suggesting that L1 is more capable of activating the emotional response to word stimuli. For the event-related potential (ERP) data, an N350-550 effect was observed only in the L1 task, with greater negativity for incongruent than congruent trials. The size of the N350-550 effect differed across languages, whereas no identifiable language distinction was observed in the effect of the conflict slow potential (conflict SP). Finally, a more pronounced negative amplitude at 230-330 ms was observed in L1 than in L2, but only for incongruent trials. This negativity, likened to an orthographic decoding N250, may reflect the extent of attention to emotion word processing at the word-form level, while the N350-550 reflects a complicated set of processes in conflict processing. Overall, the face-word congruency effect reflected an identifiable language distinction at 230-330 and 350-550 ms, which provides supporting evidence for theoretical proposals assuming attenuated emotionality of L2 processing.

  15. Coding/decoding two-dimensional images with orbital angular momentum of light.

    Science.gov (United States)

    Chu, Jiaqi; Li, Xuefeng; Smithwick, Quinn; Chu, Daping

    2016-04-01

    We investigate encoding and decoding of two-dimensional information using the orbital angular momentum (OAM) of light. Spiral phase plates and phase-only spatial light modulators are used in encoding and decoding of OAM states, respectively. We show that off-axis points and spatial variables encoded with a given OAM state can be recovered through decoding with the corresponding complementary OAM state.
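
    A small numerical sketch of this encode/decode principle is given below: a field is multiplied by a spiral phase exp(ilθ), and applying the conjugate phase of each candidate charge restores a high overlap with the original field only for the matching charge. The grid size, beam profile and charge values are illustrative assumptions, not the experimental parameters of the paper.

```python
import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
theta = np.arctan2(Y, X)                      # azimuthal coordinate
aperture = (X**2 + Y**2) <= 1.0

# A simple 2-D field to encode (Gaussian beam truncated by a circular aperture).
field = np.exp(-(X**2 + Y**2) / 0.2) * aperture

l_enc = 3                                     # OAM charge imposed by the spiral phase plate
encoded = field * np.exp(1j * l_enc * theta)

def overlap(a, b):
    """Normalized magnitude of the inner product between two sampled fields."""
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

# Decode by applying the conjugate spiral phase of each candidate charge, as an SLM would.
for l_dec in range(6):
    decoded = encoded * np.exp(-1j * l_dec * theta)
    print(l_dec, round(overlap(decoded, field), 3))   # ~1.0 only when l_dec == l_enc
```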

  16. A Low-Complexity Joint Detection-Decoding Algorithm for Nonbinary LDPC-Coded Modulation Systems

    OpenAIRE

    Wang, Xuepeng; Bai, Baoming; Ma, Xiao

    2010-01-01

    In this paper, we present a low-complexity joint detection-decoding algorithm for nonbinary LDPC coded-modulation systems. The algorithm combines hard-decision decoding using the message-passing strategy with the signal detector in an iterative manner. It requires low computational complexity, offers good system performance and has a fast rate of decoding convergence. Compared to the q-ary sum-product algorithm (QSPA), it provides an attractive candidate for practical applications of q-ary LDP...

  17. Neural overlap of L1 and L2 semantic representations in speech: A decoding approach.

    Science.gov (United States)

    Van de Putte, Eowyn; De Baene, Wouter; Brass, Marcel; Duyck, Wouter

    2017-11-15

    Although research has now converged towards a consensus that both languages of a bilingual are represented in at least partly shared systems for language comprehension, it remains unclear whether both languages are represented in the same neural populations for production. We investigated the neural overlap between L1 and L2 semantic representations of translation equivalents using a production task in which the participants had to name pictures in L1 and L2. Using a decoding approach, we tested whether brain activity during the production of individual nouns in one language allowed predicting the production of the same concepts in the other language. Because both languages only share the underlying semantic representation (sensory and lexical overlap was maximally avoided), this would offer very strong evidence for neural overlap in semantic representations of bilinguals. Based on the brain activation for the individual concepts in one language in the bilateral occipito-temporal cortex and the inferior and the middle temporal gyrus, we could accurately predict the equivalent individual concepts in the other language. This indicates that these regions share semantic representations across L1 and L2 word production. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Single-item memory, associative memory, and the human hippocampus

    OpenAIRE

    Gold, Jeffrey J.; Hopkins, Ramona O.; Squire, Larry R.

    2006-01-01

    We tested recognition memory for items and associations in memory-impaired patients with bilateral lesions thought to be limited to the hippocampal region. In Experiment 1 (Combined memory test), participants studied words and then took a memory test in which studied words, new words, studied word pairs, and recombined word pairs were presented in a mixed order. In Experiment 2 (Separated memory test), participants studied single words and then took a memory test involving studied word and ne...

  19. The Effectiveness of Dictionary Examples in Decoding: The Case of Kuwaiti Learners of English

    Directory of Open Access Journals (Sweden)

    Hashan Al-Ajmi

    2011-10-01

    Full Text Available

    Abstract: This study tries to shed light on the role of dictionary examples in the comprehension of word meanings. An experimental procedure was devised whereby two groups of students with English as their major subject at Kuwait University were asked to provide the Arabic equivalents for ten English headwords. The first group was given a list of entries for these words copied from the Oxford Advanced Learner's Dictionary (OALD), while the second group had to read the same list but without illustrative examples. Results indicate that the students' decoding performance was negatively affected by the presence of illustrative examples in the dictionary entry.

    Keywords: ARABIC, BILINGUAL DICTIONARY, COMPREHENSION, EFL DICTIONARY,ILLUSTRATIVE EXAMPLE, MONOLINGUAL DICTIONARY, TRANSLATION

    Summary: The effectiveness of dictionary examples in decoding: the case of Kuwaiti learners of English. This study tries to shed light on the role of dictionary examples in the understanding of word meanings. An experimental method was designed whereby two groups of students with English as their major subject at Kuwait University were asked to provide Arabic equivalents for ten English headwords. The first group was given a list of entries for these words taken from the Oxford Advanced Learner's Dictionary (OALD), while the second group had to read the same list but without illustrative examples. Results indicated that the students' decoding performance was negatively affected by the presence of illustrative examples in the dictionary entry.

    Keywords: ARABIC, BILINGUAL DICTIONARY, COMPREHENSION, EFL DICTIONARY, ILLUSTRATIVE EXAMPLE, MONOLINGUAL DICTIONARY, TRANSLATION

  20. WordPress multisite administration

    CERN Document Server

    Longren, Tyler

    2013-01-01

    This is a simple, concise guide with a step-by-step approach, packed with screenshots and examples to set up and manage a network blog using WordPress. WordPress Multisite Administration is ideal for anyone wanting to familiarize themselves with WordPress Multisite. You'll need to know the basics about WordPress, and having at least a broad understanding of HTML, CSS, and PHP will help, but isn't required.