WorldWideScience

Sample records for neural efficiency hypothesis

  1. Decision-making conflict and the neural efficiency hypothesis of intelligence: a functional near-infrared spectroscopy investigation.

    Science.gov (United States)

    Di Domenico, Stefano I; Rodrigo, Achala H; Ayaz, Hasan; Fournier, Marc A; Ruocco, Anthony C

    2015-04-01

    Research on the neural efficiency hypothesis of intelligence (NEH) has revealed that the brains of more intelligent individuals consume less energy when performing easy cognitive tasks but more energy when engaged in difficult mental operations. However, previous studies testing the NEH have relied on cognitive tasks that closely resemble psychometric tests of intelligence, potentially confounding efficiency during intelligence-test performance with neural efficiency per se. The present study sought to provide a novel test of the NEH by examining patterns of prefrontal activity while participants completed an experimental paradigm that is qualitatively distinct from the contents of psychometric tests of intelligence. Specifically, participants completed a personal decision-making task (e.g., which occupation would you prefer, dancer or chemist?) in which they made a series of forced choices according to their subjective preferences. The degree of decisional conflict (i.e., choice difficulty) between the available response options was manipulated on the basis of participants' unique preference ratings for the target stimuli, which were obtained prior to scanning. Evoked oxygenation of the prefrontal cortex was measured using 16-channel continuous-wave functional near-infrared spectroscopy. Consistent with the NEH, intelligence predicted decreased activation of the right inferior frontal gyrus (IFG) during low-conflict situations and increased activation of the right-IFG during high-conflict situations. This pattern of right-IFG activity among more intelligent individuals was complemented by faster reaction times in high-conflict situations. These results provide new support for the NEH and suggest that the neural efficiency of more intelligent individuals generalizes to the performance of cognitive tasks that are distinct from intelligence tests. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Learning-Related Changes in Adolescents' Neural Networks during Hypothesis-Generating and Hypothesis-Understanding Training

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yongju

    2012-01-01

    Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…

  3. Isobars and the Efficient Market Hypothesis

    OpenAIRE

    Kristýna Ivanková

    2010-01-01

    Isobar surfaces, a method for describing the overall shape of multidimensional data, are estimated by nonparametric regression and used to evaluate the efficiency of selected markets based on returns of their stock market indices.

  4. Adaptation hypothesis of biological efficiency of ionizing radiation

    International Nuclear Information System (INIS)

    Kudritskij, Yu.K.; Georgievskij, A.B.; Karpov, V.I.

    1992-01-01

    The adaptation hypothesis of the biological efficiency of ionizing radiation rests on acknowledging that the fundamental laws and principles of biology, namely the unity of biota and environment, evolution, and adaptation, remain invariant when applied to radiobiology. The basic arguments for the validity of the adaptation hypothesis, and its correspondence to the requirements imposed on scientific hypotheses, are presented

  5. TECHNICAL ANALYSIS OF EFFICIENT MARKET HYPOTHESIS IN A FRONTIER MARKET

    OpenAIRE

    MOBEEN Ur Rehman; WAQAS Bin Khidmat

    2013-01-01

    This paper focuses on identifying the major financial indicators and ratios that play a crucial role in determining the prices of securities. Examining the volatility of security prices against the previous performance of the companies will also help us understand the applicability of the efficient market hypothesis in our emerging financial market. The scope of this paper is to investigate the weak form of market efficiency in the Karachi stock exchange. This paper will help the investo...

  6. Energy prices, multiple structural breaks, and efficient market hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chien-Chiang; Lee, Jun-De [Department of Applied Economics, National Chung Hsing University, Taichung (China)

    2009-04-15

    This paper investigates the efficient market hypothesis using the total energy price and four disaggregated energy prices - coal, oil, gas, and electricity - for OECD countries over the period 1978-2006. We employ the highly flexible panel data stationarity test of Carrion-i-Silvestre et al. [Carrion-i-Silvestre JL, Del Barrio-Castro T, Lopez-Bazo E. Breaking the panels: an application to GDP per capita. J Econometrics 2005;8:159-75], which incorporates multiple shifts in level and slope while controlling for cross-sectional dependence through bootstrap methods. Overwhelming evidence in favor of the broken stationarity hypothesis is found, implying that energy prices are not characterized by an efficient market, and thus that profitable arbitrage opportunities exist among energy prices. The estimated breaks are meaningful and coincide with the critical events that most affected energy prices. (author)

  7. Energy prices, multiple structural breaks, and efficient market hypothesis

    International Nuclear Information System (INIS)

    Lee, Chien-Chiang; Lee, Jun-De

    2009-01-01

    This paper investigates the efficient market hypothesis using the total energy price and four disaggregated energy prices - coal, oil, gas, and electricity - for OECD countries over the period 1978-2006. We employ the highly flexible panel data stationarity test of Carrion-i-Silvestre et al. [Carrion-i-Silvestre JL, Del Barrio-Castro T, Lopez-Bazo E. Breaking the panels: an application to GDP per capita. J Econometrics 2005;8:159-75], which incorporates multiple shifts in level and slope while controlling for cross-sectional dependence through bootstrap methods. Overwhelming evidence in favor of the broken stationarity hypothesis is found, implying that energy prices are not characterized by an efficient market, and thus that profitable arbitrage opportunities exist among energy prices. The estimated breaks are meaningful and coincide with the critical events that most affected energy prices. (author)

  8. TESTING THE EFFICIENT MARKET HYPOTHESIS ON THE ROMANIAN CAPITAL MARKET

    OpenAIRE

    Daniel Stefan ARMEANU; Sorin-Iulian CIOACA

    2014-01-01

    The Efficient Market Hypothesis (EMH) is one of the leading financial concepts that has dominated economic research over the last 50 years, being one of the pillars of modern economic science. This theory, developed by Eugene Fama in the 1970s, was a landmark in the development of theoretical concepts and models trying to explain the price evolution of financial assets (considering the common assumptions of the main developed theories) and also for the development of some branches in the f...

  9. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.
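
A minimal sketch of the likelihood ratio statistic for the Gaussian AR(1) case described above. This only constructs the statistic (comparing the unrestricted AR(1) fit against the unit-root restriction rho = 1); the nonstandard critical values required for inference are the paper's subject and are not reproduced here.

```python
import numpy as np

def lr_unit_root(x):
    """LR statistic for H0: rho = 1 in a Gaussian AR(1) model,
    LR = n * log(restricted RSS / unrestricted RSS)."""
    n = len(x) - 1
    y, ylag = x[1:], x[:-1]
    rho_hat = np.dot(ylag, y) / np.dot(ylag, ylag)   # unrestricted OLS/ML estimate
    sig2_unres = np.mean((y - rho_hat * ylag) ** 2)
    sig2_res = np.mean((y - ylag) ** 2)              # residual variance under rho = 1
    return n * np.log(sig2_res / sig2_unres)

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))   # true unit root: LR should be small
stationary = np.empty(500)
stationary[0] = 0.0
e = rng.normal(size=500)
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]   # mean-reverting: LR large

print(lr_unit_root(walk))        # small: the unit-root restriction fits
print(lr_unit_root(stationary))  # large: the restriction is strongly violated
```

Because OLS minimizes the residual sum of squares, the statistic is nonnegative by construction.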

  10. A neural hypothesis for stress-induced headache.

    Science.gov (United States)

    Cathcart, Stuart

    2009-12-01

    The mechanisms by which stress contributes to chronic tension-type headache (CTH) are not clearly understood. The commonly accepted notion of muscle hyper-reactivity to stress in CTH sufferers is not supported by the research data. We propose a neural model whereby stress acts supra-spinally to aggravate the already increased pain sensitivity of CTH sufferers. Indirect support for the model comes from emerging research elucidating complex supra-spinal networks through which psychological stress may contribute to and even cause pain. Similarly, emerging research demonstrates supra-spinal pain-processing abnormalities in CTH sufferers. While research with CTH sufferers offering direct support for the model is lacking at present, initial work by our group is consistent with the model's predictions, particularly that stress aggravates already increased pain sensitivity in CTH sufferers.

  11. An algorithm for testing the efficient market hypothesis.

    Directory of Open Access Journals (Sweden)

    Ioana-Andreea Boboc

    The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as the Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter, which gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).

  12. An algorithm for testing the efficient market hypothesis.

    Science.gov (United States)

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
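
A minimal sketch of the indicator signals that such a genetic algorithm would combine into trading rules. The spans and periods used here (12/26 for the EMAs feeding MACD, 14 for RSI) are conventional defaults, not the parameters the algorithm would discover, and the price series is a toy stand-in for EUR/USD data.

```python
import numpy as np

def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = np.empty(len(prices))
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA."""
    return ema(prices, fast) - ema(prices, slow)

def rsi(prices, period=14):
    """Relative Strength Index over a trailing simple-average window."""
    deltas = np.diff(prices)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    avg_gain = np.convolve(gains, np.ones(period) / period, mode="valid")
    avg_loss = np.convolve(losses, np.ones(period) / period, mode="valid")
    rs = avg_gain / np.where(avg_loss == 0, np.nan, avg_loss)
    return 100 - 100 / (1 + rs)

# Toy uptrending series: the kind of raw signals a GA could threshold and combine.
prices = np.array([1.10, 1.11, 1.12, 1.11, 1.13, 1.14, 1.13, 1.15,
                   1.16, 1.15, 1.17, 1.18, 1.17, 1.19, 1.20, 1.19])
print(macd(prices)[-1] > 0)  # positive momentum -> candidate buy signal
print(rsi(prices)[-1])       # RSI level a rule could compare to 30/70 bounds
```

The genetic algorithm's role is then to search over indicator parameters and signal-combination rules for profitability in the training window.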

  13. The efficient market hypothesis: problems with interpretations of empirical tests

    Directory of Open Access Journals (Sweden)

    Denis Alajbeg

    2012-03-01

    Despite many “refutations” in empirical tests, the efficient market hypothesis (EMH) remains the central concept of financial economics. The EMH's resistance to the results of empirical testing stems from the fact that the EMH is not a falsifiable theory. Its axiomatic definition shows how asset prices would behave under assumed conditions. Testing for this price behavior does not make much sense, as conditions in financial markets are far more complex than the simplified conditions of perfect competition, zero transaction costs and free information used in the formulation of the EMH. Some recent developments within the tradition of the adaptive market hypothesis are promising with regard to developing a falsifiable theory of price formation in financial markets, but are far from assuring that we are approaching a new formulation. The most that can be done in the meantime is to be very cautious when interpreting the empirical evidence presented as “testing” the EMH.

  14. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
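
The paper's observation that a martingale test is, in practice, a test for uncorrelated increments can be illustrated with a first-lag autocorrelation check on increments. This is a generic sketch, not the authors' procedure; the processes below are simulated stand-ins.

```python
import numpy as np

def increment_autocorr(x, lag=1):
    """Sample autocorrelation of the increments x(t+1) - x(t) at the given lag."""
    dx = np.diff(x)
    dx = dx - dx.mean()
    return np.dot(dx[:-lag], dx[lag:]) / np.dot(dx, dx)

rng = np.random.default_rng(0)

# A driftless random walk is a martingale: its increments are uncorrelated.
walk = np.cumsum(rng.normal(size=10_000))
print(increment_autocorr(walk))  # near zero, as a martingale requires

# A mean-reverting AR(1) process is not: its increments are negatively correlated
# (theoretically -(1 - rho)/2, i.e. -0.25 for rho = 0.5).
ar = np.empty(10_000)
ar[0] = 0.0
eps = rng.normal(size=10_000)
for t in range(1, 10_000):
    ar[t] = 0.5 * ar[t - 1] + eps[t]
print(increment_autocorr(ar))  # noticeably negative
```

As the abstract stresses, passing such a 2-point test does not rule out exploitable structure in higher-order correlations.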

  15. Investigating neural efficiency of elite karate athletes during a mental arithmetic task using EEG.

    Science.gov (United States)

    Duru, Adil Deniz; Assem, Moataz

    2018-02-01

    Neural efficiency is proposed as one of the neural mechanisms underlying elite athletic performance. Previous sports studies examined neural efficiency using tasks that involve motor functions. In this study we investigate the extent of neural efficiency beyond motor tasks by using a mental subtraction task. A group of elite karate athletes is compared to a matched group of non-athletes. Electroencephalography (EEG) is used to measure cognitive dynamics during rest and during periods of increased mental workload. Posterior alpha-band power of the karate players was found to be higher than that of control subjects under both conditions. Moreover, event-related synchronization/desynchronization (ERS/ERD) was computed to investigate the neural efficiency hypothesis among subjects. This is the first study to examine neural efficiency related to a cognitive task, rather than a motor task, in elite karate players using ERD/ERS analysis. The results suggest that the effect of neural efficiency in the brain is global rather than local and thus might contribute to elite athletic performance. The results are also in line with the neural efficiency hypothesis as tested in motor performance studies.
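
ERD/ERS is conventionally expressed as the percentage change of band power during a task relative to a baseline (rest) period. A minimal sketch of that computation follows; the alpha-power values are made up for illustration, not the study's data.

```python
import numpy as np

def erd_ers_percent(baseline_power, task_power):
    """ERD/ERS as percent change from baseline band power.
    Negative values indicate desynchronization (ERD),
    positive values indicate synchronization (ERS)."""
    return 100.0 * (task_power - baseline_power) / baseline_power

# Hypothetical posterior alpha-band powers (arbitrary units) for three channels.
baseline = np.array([12.0, 11.5, 13.2])            # resting period
mental_arithmetic = np.array([9.0, 8.9, 10.5])     # subtraction task

print(erd_ers_percent(baseline, mental_arithmetic))  # negative: alpha ERD under load
```

Under the neural efficiency hypothesis, more efficient brains would show a smaller task-related power change for the same performance level.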

  16. Aging and motor variability: a test of the neural noise hypothesis.

    Science.gov (United States)

    Sosnoff, Jacob J; Newell, Karl M

    2011-07-01

    Experimental tests of the neural noise hypothesis of aging, which holds that aging-related increments in motor variability are due to increases in white noise in the perceptual-motor system, were conducted. Young (20-29 years old) and old (60-69 and 70-79 years old) adults performed several perceptual-motor tasks. Older adults were progressively more variable in their performance outcome, but there was no age-related difference in white noise in the motor output. Older adults had a greater frequency-dependent structure in their motor variability that was associated with performance decrements. The findings challenge the main tenet of the neural noise hypothesis of aging in that the increased variability of older adults was due to a decreased ability to adapt to the constraints of the task rather than an increment of neural noise per se.

  17. Martingales, detrending data, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-01-01

    We discuss martingales, detrending data, and the efficient market hypothesis (EMH) for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). Beginning with x-independent drift coefficients R(t) we show that martingale stochastic processes generate uncorrelated, generally non-stationary increments. Generally, a test for a martingale is therefore a test for uncorrelated increments. A detrended process with an x-dependent drift coefficient is generally not a martingale, and so we extend our analysis to include the class of (x,t)-dependent drift coefficients of interest in finance. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. And while a Markovian market has no memory to exploit and presumably cannot be beaten systematically, it has never been shown that martingale memory cannot be exploited in 3-point or higher correlations to beat the market. We generalize our Markov scaling solutions presented earlier, and also generalize the martingale formulation of the EMH to include (x,t)-dependent drift in log returns. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama's paper on the EMH. We end with a discussion of Lévy's characterization of Brownian motion and prove that an arbitrary martingale is topologically inequivalent to a Wiener process.

  18. Mirror neurons and the social nature of language: the neural exploitation hypothesis.

    Science.gov (United States)

    Gallese, Vittorio

    2008-01-01

    This paper discusses the relevance of the discovery of mirror neurons in monkeys and of the mirror neuron system in humans to a neuroscientific account of primates' social cognition and its evolution. It is proposed that mirror neurons and the functional mechanism they underpin, embodied simulation, can ground within a unitary neurophysiological explanatory framework important aspects of human social cognition. In particular, the main focus is on language, here conceived according to a neurophenomenological perspective, grounding meaning on the social experience of action. A neurophysiological hypothesis--the "neural exploitation hypothesis"--is introduced to explain how key aspects of human social cognition are underpinned by brain mechanisms originally evolved for sensorimotor integration. It is proposed that these mechanisms were later on adapted as new neurofunctional architecture for thought and language, while retaining their original functions as well. By neural exploitation, social cognition and language can be linked to the experiential domain of action.

  19. Why would musical training benefit the neural encoding of speech? The OPERA hypothesis.

    Directory of Open Access Journals (Sweden)

    Aniruddh D. Patel

    2011-06-01

    Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The OPERA hypothesis proposes that such benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met. These are: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope), (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing, (3) Emotion: the musical activities that engage this network elicit strong positive emotion, (4) Repetition: the musical activities that engage this network are frequently repeated, and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met neural plasticity drives the networks in question to function with higher precision than needed for ordinary speech communication. Yet since speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.

  20. Why would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis.

    Science.gov (United States)

    Patel, Aniruddh D

    2011-01-01

    Mounting evidence suggests that musical training benefits the neural encoding of speech. This paper offers a hypothesis specifying why such benefits occur. The "OPERA" hypothesis proposes that such benefits are driven by adaptive plasticity in speech-processing networks, and that this plasticity occurs when five conditions are met. These are: (1) Overlap: there is anatomical overlap in the brain networks that process an acoustic feature used in both music and speech (e.g., waveform periodicity, amplitude envelope), (2) Precision: music places higher demands on these shared networks than does speech, in terms of the precision of processing, (3) Emotion: the musical activities that engage this network elicit strong positive emotion, (4) Repetition: the musical activities that engage this network are frequently repeated, and (5) Attention: the musical activities that engage this network are associated with focused attention. According to the OPERA hypothesis, when these conditions are met neural plasticity drives the networks in question to function with higher precision than needed for ordinary speech communication. Yet since speech shares these networks with music, speech processing benefits. The OPERA hypothesis is used to account for the observed superior subcortical encoding of speech in musically trained individuals, and to suggest mechanisms by which musical training might improve linguistic reading abilities.

  1. A BEHAVIORAL FINANCE PERSPECTIVE OF THE EFFICIENT MARKET HYPOTHESIS

    OpenAIRE

    Camelia Oprean

    2012-01-01

    Nowadays, a central theme in finance and economic theory is market efficiency. After several decades of research, economists have not yet reached a consensus about the existence of informationally efficient financial markets. Among the contested aspects of this subject are inquiries into the validity of the assumptions underlying the informational efficiency theory of financial markets. The emerging discipline of behavioral economics and finance has ch...

  2. A Modern Approach to the Efficient-Market Hypothesis

    OpenAIRE

    Gabriel Frahm

    2013-01-01

    Market efficiency at least requires the absence of weak arbitrage opportunities, but this is not sufficient to establish a situation where the market is sensitive, i.e., where it "fully reflects" or "rapidly adjusts to" some information flow including the evolution of asset prices. By contrast, No Weak Arbitrage together with market sensitivity is necessary and sufficient for a market to be informationally efficient.

  3. Efficient Market Hypothesis: Some Evidences from Emerging European Forex Markets

    OpenAIRE

    Anoop S Kumar; Bandi Kamaiah

    2014-01-01

    This study attempts to analyze the presence of weak-form efficiency in the forex markets of a set of selected European emerging markets, namely Bulgaria, Croatia, the Czech Republic, Hungary, Poland, Romania, Russia, Slovakia and Slovenia, using monthly NEER data ranging from Jan-1994 to Dec-2013. We employ a two-step comprehensive methodology: in the first step we test for weak-form efficiency using a family of individual and joint variance ratio tests. The results show that while the marke...

  4. Efficient Market Hypothesis: Some Evidences from Emerging European Forex Markets

    Directory of Open Access Journals (Sweden)

    Anoop S Kumar

    2014-06-01

    This study attempts to analyze the presence of weak-form efficiency in the forex markets of a set of selected European emerging markets, namely Bulgaria, Croatia, the Czech Republic, Hungary, Poland, Romania, Russia, Slovakia and Slovenia, using monthly NEER data ranging from Jan-1994 to Dec-2013. We employ a two-step comprehensive methodology: in the first step we test for weak-form efficiency using a family of individual and joint variance ratio tests. The results show that while the markets of Croatia, the Czech Republic and Bulgaria may be weak-form efficient at a shorter lag, the other six markets are not informationally efficient. In the next step, we estimate a measure of relative efficiency to show the extent to which a market is weak-form inefficient. From the results, it is found that the forex markets of Croatia, the Czech Republic and Bulgaria are the least weak-form inefficient compared to the others. The findings of the study are of relevance as they show that, even after roughly two decades of free-market economic policies, the majority of forex markets in the area remain informationally inefficient.
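
The variance-ratio family of tests used in such studies rests on the fact that, under the random-walk (weak-form efficiency) null, the variance of q-period returns is q times the variance of one-period returns, so the ratio is approximately 1. A minimal single-ratio sketch follows (not the individual-plus-joint battery the study applies, and without the test's standard errors); the series is simulated, not NEER data.

```python
import numpy as np

def variance_ratio(log_prices, q):
    """VR(q): variance of overlapping q-period returns divided by
    q times the variance of 1-period returns. Approximately 1 under
    the random-walk null; deviations suggest predictable structure."""
    r1 = np.diff(log_prices)
    rq = log_prices[q:] - log_prices[:-q]   # overlapping q-period returns
    return np.var(rq, ddof=1) / (q * np.var(r1, ddof=1))

rng = np.random.default_rng(42)
rw = np.cumsum(rng.normal(scale=0.01, size=5_000))  # simulated random-walk log rate

print(variance_ratio(rw, 2))  # close to 1
print(variance_ratio(rw, 4))  # close to 1
```

A joint test then evaluates VR(q) across several horizons q simultaneously, which is what distinguishes the methodology described in the abstract from this single-ratio check.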

  5. Experiential Learning of the Efficient Market Hypothesis: Two Trading Games

    Science.gov (United States)

    Park, Andreas

    2010-01-01

    In goods markets, an equilibrium price balances demand and supply. In a financial market, an equilibrium price also aggregates people's information to reveal the true value of a financial security. Although the underlying idea of informationally efficient markets is one of the centerpieces of capital market theory, students often have difficulties…

  6. Testing the Efficient Markets Hypothesis on the Romanian Capital Market

    Directory of Open Access Journals (Sweden)

    Dragoș Mînjină

    2013-11-01

    The informational efficiency of capital markets has been the subject of numerous empirical studies. Intensive research in the field is justified by the important practical implications of knowing a market's level of informational efficiency. Empirical studies that have tested the efficient markets hypothesis on the Romanian capital market have mostly revealed that this market is not characterised by the weak form of the efficient markets hypothesis. However, recent empirical studies have obtained results supporting the weak form of the efficient markets hypothesis. The present decline of the Romanian capital market, recorded against the background of adverse economic developments both internally and externally, will be an important test of whether the recent positive developments, manifested at the level of informational efficiency too, can continue.

  7. An adaptive workspace hypothesis about the neural correlates of consciousness: insights from neuroscience and meditation studies.

    Science.gov (United States)

    Raffone, Antonino; Srinivasan, Narayanan

    2009-01-01

    While enormous progress has been made to identify neural correlates of consciousness (NCC), crucial NCC aspects are still very controversial. A major hurdle is the lack of an adequate definition and characterization of different aspects of conscious experience and also its relationship to attention and metacognitive processes like monitoring. In this paper, we therefore attempt to develop a unitary theoretical framework for NCC, with an interdependent characterization of endogenous attention, access consciousness, phenomenal awareness, metacognitive consciousness, and a non-referential form of unified consciousness. We advance an adaptive workspace hypothesis about the NCC based on the global workspace model emphasizing transient resonant neurodynamics and prefrontal cortex function, as well as meditation-related characterizations of conscious experiences. In this hypothesis, transient dynamic links within an adaptive coding net in prefrontal cortex, especially in anterior prefrontal cortex, and between it and the rest of the brain, in terms of ongoing intrinsic and long-range signal exchanges, flexibly regulate the interplay between endogenous attention, access consciousness, phenomenal awareness, and metacognitive consciousness processes. Such processes are established in terms of complementary aspects of an ongoing transition between context-sensitive global workspace assemblies, modulated moment-to-moment by body and environment states. Brain regions associated with momentary interoceptive and exteroceptive self-awareness, or first-person experiential perspective as emphasized in open monitoring meditation, play an important modulatory role in adaptive workspace transitions.

  8. Efficient Cancer Detection Using Multiple Neural Networks.

    Science.gov (United States)

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files, collected in an IRB-approved study at the Aurora Medical Center, Milwaukee, WI, USA, was utilized as neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results, showing a sensitivity of 100% and a specificity of 100%. This implementation relied solely on statistical variation between the benign and malignant impedance data and an intricate neural network configuration. This device and BNN implementation provide a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast, non-invasive, accurate assessment of biopsied or sectioned excised tissue in various clinical settings.
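
The consensus mechanism (six networks polled, the four most confident selected, majority class returned) can be sketched with simple logistic units standing in for the backpropagation networks. Only the 47-feature dimensionality follows the abstract; the synthetic data, training scheme, and confidence-based selection rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for 47-frequency impedance spectra, two separable classes.
n, d = 360, 47
X = np.vstack([rng.normal(loc=-0.4, size=(n // 2, d)),   # "benign"
               rng.normal(loc=+0.4, size=(n // 2, d))])  # "malignant"
y = np.array([0] * (n // 2) + [1] * (n // 2))

def train_linear_net(X, y, lr=0.1, epochs=200, seed=0):
    """Single logistic unit trained by gradient descent: a stand-in for one BNN."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def consensus_predict(models, x, k=4):
    """Poll all networks, keep the k most confident, return their majority class."""
    probs = [1 / (1 + np.exp(-(x @ w + b))) for w, b in models]
    confidence = [abs(p - 0.5) for p in probs]
    top = sorted(range(len(models)), key=lambda i: -confidence[i])[:k]
    votes = sum(probs[i] > 0.5 for i in top)
    return int(votes * 2 >= k)   # class 1 if at least half of the selected vote 1

# Six differently initialized networks; four selected per prediction.
models = [train_linear_net(X, y, seed=s) for s in range(6)]
preds = np.array([consensus_predict(models, x) for x in X])
print((preds == y).mean())  # consensus accuracy on the training data
```

Selecting a confident subset before voting is one plausible reading of "autonomously selecting four of six networks"; the paper's actual tier logic may differ.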

  9. Neural mechanisms of peristalsis in the isolated rabbit distal colon: a neuromechanical loop hypothesis.

    Science.gov (United States)

    Dinning, Phil G; Wiklendt, Lukasz; Omari, Taher; Arkwright, John W; Spencer, Nick J; Brookes, Simon J H; Costa, Marcello

    2014-01-01

    Propulsive contractions of circular muscle are largely responsible for the movement of content along the digestive tract. Mechanical and electrophysiological recordings of isolated colonic circular muscle have demonstrated that localized distension activates ascending and descending interneuronal pathways, evoking contraction orally and relaxation anally. These polarized enteric reflex pathways can theoretically be activated sequentially by the mechanical stimulation of the advancing contents. Here, we test the hypothesis that the initiation and propagation of peristaltic contractions involves a neuromechanical loop: an initial gut distension activates local and oral reflex contraction and anal reflex relaxation; the subsequent movement of content then acts as a new mechanical stimulus, triggering sequential reflex contractions/relaxations at each point of the gut and resulting in a propulsive peristaltic contraction. In fluid-filled isolated rabbit distal colon, we combined spatiotemporal mapping of gut diameter and intraluminal pressure with a new analytical method, allowing us to identify when and where active (neurally driven) contraction or relaxation occurs. Our data indicate that gut dilation is associated with propagating peristaltic contractions, and that the associated level of dilation is greater than that preceding non-propagating contractions (2.7 ± 1.4 mm vs. 1.6 ± 1.2 mm), consistent with distension-dependent activation of polarized enteric circuits. These circuits produce propulsion of the bolus, which in turn activates further polarized enteric circuits anally by distension, thus closing the neuromechanical loop.

  10. Studying the mechanisms of the Somatic Marker Hypothesis in Spiking Neural Networks (SNN)

    Directory of Open Access Journals (Sweden)

    Manuel GONZÁLEZ

    2013-07-01

    Full Text Available In this paper, a mechanism of emotional bias in decision making is studied using Spiking Neural Networks to simulate the associative and recurrent networks involved. The results obtained are in line with those proposed by A. Damasio as part of the Somatic Marker Hypothesis; in particular, that in the absence of emotional input, decision making is driven by the rational input alone. Appropriate representations for the Objective and Emotional Values are also suggested, given a spike representation (code) of the information.

  11. Studying the mechanisms of the Somatic Marker Hypothesis in Spiking Neural Networks (SNN)

    Directory of Open Access Journals (Sweden)

    Alejandro JIMÉNEZ-RODRÍGUEZ

    2012-09-01

    Full Text Available In this paper, a mechanism of emotional bias in decision making is studied using Spiking Neural Networks to simulate the associative and recurrent networks involved. The results obtained are in line with those proposed by A. Damasio as part of the Somatic Marker Hypothesis; in particular, that in the absence of emotional input, decision making is driven by the rational input alone. Appropriate representations for the Objective and Emotional Values are also suggested, given a spike representation (code) of the information.

  12. Reconciling the Dispute between the Efficient Market Hypothesis and Behavioral Finance through a Neuroeconomics Perspective

    Directory of Open Access Journals (Sweden)

    Satia Nur Maharani

    2014-08-01

    Full Text Available Behavioral finance critiques of the Efficient Market Hypothesis have caused debate among scientists supporting each theory. This article describes the debate between the rational-behavior perspective of the Efficient Market Hypothesis and the irrational-behavior perspective of behavioral finance, and how neuroeconomics sheds light on these two perspectives. The article offers a richer picture of the very complex behavior of investors and encourages the growth of a new generation of capital-market theory through interdisciplinary collaboration. The findings indicate that the neuroeconomics perspective explains economic behavior through psychological functions.

  13. Efficiency turns the table on neural encoding, decoding and noise.

    Science.gov (United States)

    Deneve, Sophie; Chalk, Matthew

    2016-04-01

    Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.
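
    The fixed-decoder view can be illustrated with a toy greedy spike code (a minimal sketch under invented dimensions and weights, not the authors' model): the stimulus estimate is a fixed linear readout of spike counts, and a neuron fires only when its spike reduces the reconstruction error, so responses are variable without being noisy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed linear decoder: the stimulus estimate is x_hat = D @ r, where r
# holds spike counts. Each neuron spikes only when doing so reduces the
# reconstruction error, so the population solves a global problem.
D = rng.normal(size=(3, 20))            # decoding weights (toy sizes)
D /= np.linalg.norm(D, axis=0)          # unit-norm decoding vectors

x = rng.normal(size=3)                  # target stimulus
r = np.zeros(20)                        # spike counts

for _ in range(200):
    err = x - D @ r                     # current reconstruction error
    # Error reduction from one extra spike of neuron j:
    # ||err||^2 - ||err - d_j||^2 = 2*d_j.err - ||d_j||^2
    gains = D.T @ err - 0.5             # proportional form, since ||d_j|| = 1
    j = int(np.argmax(gains))
    if gains[j] <= 0.0:                 # no spike reduces the error: stop
        break
    r[j] += 1.0

reconstruction_error = float(np.linalg.norm(x - D @ r))
```

    Because spikes are emitted greedily against a shared error signal, which neuron fires on any trial depends on the whole population state, not on a fixed per-neuron tuning curve.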

  14. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly-based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly-based systems. Several test cases executed on the ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced from 37 seconds to less than 1 second.
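
    The core training step, an error back-propagation network learning a binary classification from rule-derived features, can be sketched as follows (all data, sizes, and learning rates here are invented toy stand-ins, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for rule-derived packet features (invented): 4 features,
# label 1 ("anomalous") when the feature sum is positive.
X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# One-hidden-layer network trained with plain error back-propagation.
W1 = 0.5 * rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                   # forward pass
    p = sigmoid(h @ W2 + b2)
    grad_out = (p - y) / len(X)                # cross-entropy output gradient
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # back-propagated hidden gradient
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(0)

accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
```

    Once trained on rule features, the same forward pass can score live packet feature vectors to produce the anomaly classification.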

  15. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  16. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include:
    - a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction;
    - implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models;
    - the MPC algorithms based on neural multi-models (inspired by the idea of predictive control);
    - the MPC algorithms with neural approximation with no on-line linearization;
    - the MPC algorithms with guaranteed stability and robustness;
    - cooperation between the MPC algorithms and set-point optimization.
    Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
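
    The successive on-line linearization idea behind such suboptimal MPC algorithms can be sketched in a few lines (a scalar toy plant standing in for a neural model; the dynamics, gains, and horizon are invented for illustration):

```python
import numpy as np

# Suboptimal MPC by successive on-line linearization (minimal scalar sketch;
# the plant x+ = 0.8*x + 2*tanh(u) is an invented stand-in for a neural
# model). Each control step re-linearizes the model in u and solves the
# resulting linear one-step tracking problem instead of the nonlinear one.
f = lambda x, u: 0.8 * x + 2.0 * np.tanh(u)

x, ref = 2.0, 0.0
for step in range(10):                         # receding-horizon outer loop
    u = 0.0
    for _ in range(3):                         # successive linearizations
        pred = f(x, u)
        slope = 2.0 * (1.0 - np.tanh(u) ** 2)  # d f / d u at the current u
        u += (ref - pred) / slope              # linear correction toward ref
    x = f(x, u)                                # apply the input to the plant

final_error = abs(x - ref)
```

    The inner loop is where the computational saving lives: each iteration solves a cheap linear problem rather than the full nonlinear optimization.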

  17. Predictability of Exchange Rates in Sri Lanka: A Test of the Efficient Market Hypothesis

    OpenAIRE

    Guneratne B Wickremasinghe

    2007-01-01

    This study examined the validity of the weak and semi-strong forms of the efficient market hypothesis (EMH) for the foreign exchange market of Sri Lanka. Monthly exchange rates for four currencies during the floating exchange rate regime were used in the empirical tests. A battery of tests indicates that the current values of the four exchange rates can be predicted from their past values. Further, the tests of semi-strong form efficiency indicate that exchange rate pa...

  18. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equivalent to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging, mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses a graph representation to model global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates best-global-hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses, avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method achieves better tracking performance than other algorithms in challenging tracking situations.
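
    The MWISP formulation can be made concrete on a toy instance (track names, weights, and conflicts below are invented). The paper replaces this brute-force enumeration with max-product belief propagation; at tiny sizes the exhaustive version shows exactly what BP is approximating:

```python
from itertools import combinations

# Global-hypothesis formation as a maximum-weight independent set problem
# (MWISP): nodes are candidate tracks with (invented) log-likelihood
# weights; an edge joins two tracks that share a measurement and therefore
# cannot both belong to one global hypothesis.
weights = {"T1": 4.0, "T2": 3.0, "T3": 2.5, "T4": 2.0}
conflicts = {("T1", "T2"), ("T2", "T3"), ("T3", "T4")}  # shared measurements

def independent(selection):
    # A selection is feasible if no pair of chosen tracks conflicts.
    return all((a, b) not in conflicts and (b, a) not in conflicts
               for a, b in combinations(selection, 2))

best_set, best_weight = (), 0.0
for r in range(1, len(weights) + 1):
    for selection in combinations(weights, r):
        w = sum(weights[t] for t in selection)
        if independent(selection) and w > best_weight:
            best_set, best_weight = selection, w
```

    Here the best global hypothesis is {T1, T3}: T1 and T3 share no measurement and jointly outweigh any other compatible set.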

  19. The Athlete’s Brain: Cross-Sectional Evidence for Neural Efficiency during Cycling Exercise

    Directory of Open Access Journals (Sweden)

    Sebastian Ludyga

    2016-01-01

    Full Text Available The “neural efficiency” hypothesis suggests that experts are characterized by more efficient cortical function in cognitive tests. Although this hypothesis has been extended to a variety of movement-related tasks in recent years, it is unclear whether neural efficiency is present in cyclists performing endurance exercise. Therefore, this study examined brain cortical activity at rest and during exercise between cyclists of higher (HIGH; n=14; 55.6 ± 2.8 mL/min/kg) and lower (LOW; n=15; 46.4 ± 4.1 mL/min/kg) maximal oxygen consumption (VO2MAX). Male and female participants performed a graded exercise test with spirometry to assess VO2MAX. After 3 to 5 days, EEG was recorded at rest with eyes closed and during cycling at the individual anaerobic threshold over a 30 min period. Possible differences in the alpha/beta ratio as well as alpha and beta power were investigated at frontal, central, and parietal sites. The statistical analysis revealed significant differences between groups (F=12.04; p=0.002), as the alpha/beta ratio was increased in HIGH compared to LOW in both the resting state (p≤0.018) and the exercise condition (p≤0.025). The present results indicate enhanced neural efficiency in subjects with high VO2MAX, possibly due to the inhibition of task-irrelevant cognitive processes.
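
    The alpha/beta ratio used here is a simple band-power quotient. A minimal sketch on a synthetic signal (invented amplitudes; real analyses would use Welch averaging and artifact rejection):

```python
import numpy as np

rng = np.random.default_rng(2)

# Computing an EEG alpha/beta power ratio from a synthetic 10 s signal.
fs = 250                                     # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)      # strong 10 Hz alpha component
       + 0.5 * np.sin(2 * np.pi * 20 * t)    # weaker 20 Hz beta component
       + 0.3 * rng.normal(size=t.size))      # broadband noise

spectrum = np.abs(np.fft.rfft(eeg)) ** 2     # periodogram
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
alpha = spectrum[(freqs >= 8) & (freqs < 13)].sum()
beta = spectrum[(freqs >= 13) & (freqs < 30)].sum()
alpha_beta_ratio = float(alpha / beta)       # higher ~ lower cortical activation
```

    Since alpha power is conventionally read as cortical idling, a higher alpha/beta ratio is interpreted as lower cortical activation, the sense in which the HIGH group is "more efficient".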

  20. Neural network fusion capabilities for efficient implementation of tracking algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
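
    The Kalman filter at the core of such an architecture can be sketched for a 1-D constant-velocity target (toy noise levels and a position-only sensor; in the described system a trained network would fuse multisensor features into this measurement stream):

```python
import numpy as np

rng = np.random.default_rng(3)

# Constant-velocity Kalman tracking filter in one dimension.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (pos, vel)
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[4.0]])                   # measurement noise variance

x = np.array([0.0, 0.0])                # state estimate
P = 10.0 * np.eye(2)                    # estimate covariance

true_pos, true_vel = 0.0, 1.5
errors = []
for k in range(50):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(scale=2.0)       # noisy position report
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)         # update with measurement
    P = (np.eye(2) - K @ H) @ P
    errors.append(abs(x[0] - true_pos))

final_error = float(np.mean(errors[-10:]))
```

    The filter's recursion is unchanged by the fusion front end, which is precisely the architectural point: richer inputs without a costlier state estimator.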

  1. The Efficient Market Hypothesis: Is It Applicable to the Foreign Exchange Market?

    OpenAIRE

    Nguyen, James

    2004-01-01

    The study analyses the applicability of the efficient market hypothesis to the foreign exchange market by testing the profitability of the filter rule on the spot market. The significance of the returns was validated by comparing them to the returns from randomly generated shuffled series via bootstrap methods. The results were surprising. For the total period (1984-2003) small filter rules could deliver significant returns indicating an inefficient foreign exchange market. However, once the ...
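
    The filter-rule test with a shuffled-series bootstrap can be sketched as follows (a minimal illustration on invented random-walk data, not the study's actual exchange rates or filter sizes):

```python
import numpy as np

rng = np.random.default_rng(4)

# x% filter rule: buy after the price rises x% above its last trough, go
# flat after it falls x% below its last peak. Significance is judged
# against the same rule applied to shuffled (bootstrap) return series.
def filter_rule_return(prices, x=0.01):
    position, peak, trough, log_ret = 0, prices[0], prices[0], 0.0
    for p_prev, p in zip(prices[:-1], prices[1:]):
        if position == 1:
            log_ret += np.log(p / p_prev)   # earn the return while long
            peak = max(peak, p)
            if p < peak * (1.0 - x):        # sell signal
                position, trough = 0, p
        else:
            trough = min(trough, p)
            if p > trough * (1.0 + x):      # buy signal
                position, peak = 1, p
    return log_ret

returns = rng.normal(0.0, 0.006, size=1000)
prices = 100.0 * np.exp(np.cumsum(returns))
actual = filter_rule_return(prices)

# Null distribution: shuffling destroys any serial dependence the rule
# could exploit while preserving the marginal return distribution.
null = [filter_rule_return(100.0 * np.exp(np.cumsum(rng.permutation(returns))))
        for _ in range(200)]
p_value = float(np.mean([n >= actual for n in null]))
```

    A small p-value would indicate that the rule's profit exceeds what the shuffled (memoryless) series can produce, i.e. evidence against weak-form efficiency.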

  2. Analisis Efficient Market Hypothesis (EMH) di Bursa Saham Syariah, 2005:1 – 2008:11

    OpenAIRE

    Cahyadin, Malik; Milandari, Devi Oktaviana

    2009-01-01

    Study of sharia stock exchanges takes an important place since the transactions and financial activities within a stock exchange determine the mode of the market itself and have an impact on economic activities in the country where the market is established. This paper focuses mainly on overseeing the sharia stock exchanges in Indonesia, the United States, Saudi Arabia, and Malaysia using the efficient market hypothesis (EMH) method. Data used in this study were collected from mon...

  4. THE EFFICIENT MARKET HYPOTHESIS REVISITED: EVIDENCE FROM THE FIVE SMALL OPEN ASEAN STOCK MARKETS

    OpenAIRE

    QAISER MUNIR; KOK SOOK CHING; FUMITAKA FUROUKA; KASIM MANSUR

    2012-01-01

    The efficient market hypothesis (EMH), which suggests that the returns of a stock market are unpredictable from historical price changes, is satisfied when stock prices are characterized by a random walk (unit root) process. A finding of a unit root implies that stock returns cannot be predicted. This paper investigates the stock price behavior of five ASEAN (Association of Southeast Asian Nations) countries, i.e., Indonesia, Malaysia, the Philippines, Singapore and Thailand, for the period from 1990:1...
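
    The random-walk (unit-root) logic can be sketched with a bare-bones Dickey-Fuller-style regression (a minimal illustration on simulated series, not the paper's test battery): regress the first difference on the lagged level; under a random walk the slope is near zero and its t-statistic is not significantly negative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Dickey-Fuller-style unit-root check via ordinary least squares.
def df_tstat(y):
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(ylag.size), ylag])   # constant + lagged level
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (dy.size - 2)                # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])               # t-statistic on the slope

random_walk = np.cumsum(rng.normal(size=2000))  # unit root: EMH-consistent
stationary = rng.normal(size=2000)              # strongly mean-reverting level
t_rw, t_st = df_tstat(random_walk), df_tstat(stationary)
```

    The stationary series produces a hugely negative t-statistic (the unit root is rejected), while the random walk does not; note that proper inference uses Dickey-Fuller critical values, not the usual t-distribution.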

  5. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique was tested on different flight/space dynamic models and showed promising results.
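
    The "weights by solving linear equations" step can be illustrated for a linear system (the matrices below are invented toy dynamics, not a real airframe model): a single linear layer reproduces x+ = A x + B u exactly, and least squares recovers its weights with no iterative training.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear "flight dynamics": next state is A @ x + B @ u.
A = np.array([[0.98, 0.10], [-0.05, 0.95]])
B = np.array([[0.0], [0.1]])

samples = rng.normal(size=(100, 3))                   # columns: x1, x2, u
targets = samples[:, :2] @ A.T + samples[:, 2:] @ B.T  # exact next states

# The linear layer's weights come from one least-squares solve.
W, *_ = np.linalg.lstsq(samples, targets, rcond=None)
max_err = float(np.max(np.abs(samples @ W - targets)))
```

    Because the map is exactly linear, the recovered weights equal the stacked system matrices (W rows are A-transpose over B-transpose), which is what makes the no-training shortcut possible.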

  6. An efficient automated parameter tuning framework for spiking neural networks.

    Science.gov (United States)

    Carlson, Kristofor D; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L

    2014-01-01

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
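
    The evolutionary-algorithm core of such a tuning framework can be shown in miniature (the fitness function below is an invented stand-in for "distance of the simulated SNN's responses from the target tuning curves"; the real framework evaluates GPU-accelerated SNN simulations):

```python
import numpy as np

rng = np.random.default_rng(7)

# Evolutionary parameter tuning in miniature.
target = np.array([0.3, -1.2, 2.0, 0.7])        # "correct" parameter vector
fitness = lambda p: -np.sum((p - target) ** 2)  # higher is better

pop = rng.normal(size=(40, 4))                  # initial random population
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]     # elitism: keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)] \
        + 0.1 * rng.normal(size=(30, 4))        # mutated copies of parents
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
error = float(np.linalg.norm(best - target))
```

    In the real framework the expensive step is the fitness evaluation (a full SNN simulation), which is why offloading it to GPUs yields the reported 65× speedup.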

  7. Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis

    Science.gov (United States)

    Střelec, Luboš

    2011-09-01

    The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, to verify the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests for financial data, which typically exhibit remote data points and additional types of deviations from
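
    The zero-breakdown point can be demonstrated directly: the Jarque-Bera statistic JB = n/6 * (S² + (K − 3)²/4) is built from sample skewness S and kurtosis K, and a single extreme observation inflates it drastically (illustrative simulation with invented data):

```python
import numpy as np

rng = np.random.default_rng(8)

# Jarque-Bera statistic from the sample skewness and kurtosis.
def jarque_bera(x):
    n, m, s = x.size, x.mean(), x.std()
    S = np.mean((x - m) ** 3) / s ** 3          # sample skewness
    K = np.mean((x - m) ** 4) / s ** 4          # sample kurtosis
    return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)

returns = rng.normal(size=500)                  # well-behaved "returns"
jb_clean = jarque_bera(returns)
jb_outlier = jarque_bera(np.append(returns, 15.0))  # add one extreme point
```

    One contaminated point out of 501 pushes the statistic far past any reasonable critical value, which is exactly why moment-based tests are fragile on financial data and motivates the robust alternatives the paper proposes.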

  8. An efficient optical architecture for sparsely connected neural networks

    Science.gov (United States)

    Hine, Butler P., III; Downie, John D.; Reid, Max B.

    1990-01-01

    An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby making use of the space-bandwidth product of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on updatable media such as thermoplastic materials in order to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.

  9. Energy efficient neural stimulation: coupling circuit design and membrane biophysics.

    Science.gov (United States)

    Foutz, Thomas J; Ackermann, D Michael; Kilgore, Kevin L; McIntyre, Cameron C

    2012-01-01

    The delivery of therapeutic levels of electrical current to neural tissue is a well-established treatment for numerous indications such as Parkinson's disease and chronic pain. While the neuromodulation medical device industry has experienced steady clinical growth over the last two decades, much of the core technology underlying implanted pulse generators remains unchanged. In this study we propose some new methods for achieving increased energy-efficiency during neural stimulation. The first method exploits the biophysical features of excitable tissue through the use of a centered-triangular stimulation waveform. Neural activation with this waveform is achieved with a statistically significant reduction in energy compared to traditional rectangular waveforms. The second method demonstrates energy savings that could be achieved by advanced circuitry design. We show that the traditional practice of using a fixed compliance voltage for constant-current stimulation results in substantial energy loss. A portion of this energy can be recuperated by adjusting the compliance voltage to real-time requirements. Lastly, we demonstrate the potential impact of axon fiber diameter on defining the energy-optimal pulse-width for stimulation. When designing implantable pulse generators for energy efficiency, we propose that the future combination of a variable compliance system, a centered-triangular stimulus waveform, and an axon diameter specific stimulation pulse-width has great potential to reduce energy consumption and prolong battery life in neuromodulation devices.
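
    The waveform comparison rests on pulse energy being the integral of i(t)² over the pulse. For the same peak amplitude and pulse width (arbitrary illustrative values below), a centered-triangular pulse deposits one third of a rectangular pulse's energy; whether it still reaches activation threshold is the biophysical question the study addresses with membrane models, not something this sketch decides:

```python
import numpy as np

# Energy comparison of rectangular vs. centered-triangular pulses of the
# same peak amplitude I and width T (energy per ohm, in A^2 * s).
T, I = 200e-6, 1e-3                          # 200 us pulse width, 1 mA peak
t = np.linspace(0.0, T, 10001)
dt = t[1] - t[0]

rect = np.full_like(t, I)                    # constant-amplitude pulse
tri = I * (1.0 - np.abs(2.0 * t / T - 1.0))  # ramps 0 -> I at T/2 -> 0

energy_rect = np.sum(rect ** 2) * dt         # ~ I^2 * T
energy_tri = np.sum(tri ** 2) * dt           # ~ I^2 * T / 3
ratio = energy_tri / energy_rect
```

    The triangular pulse delivers less charge as well, so the energy saving in practice depends on how much its activation threshold differs, which is where the membrane biophysics enters.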

  10. Energy efficient neural stimulation: coupling circuit design and membrane biophysics.

    Directory of Open Access Journals (Sweden)

    Thomas J Foutz

    Full Text Available The delivery of therapeutic levels of electrical current to neural tissue is a well-established treatment for numerous indications such as Parkinson's disease and chronic pain. While the neuromodulation medical device industry has experienced steady clinical growth over the last two decades, much of the core technology underlying implanted pulse generators remains unchanged. In this study we propose some new methods for achieving increased energy-efficiency during neural stimulation. The first method exploits the biophysical features of excitable tissue through the use of a centered-triangular stimulation waveform. Neural activation with this waveform is achieved with a statistically significant reduction in energy compared to traditional rectangular waveforms. The second method demonstrates energy savings that could be achieved by advanced circuitry design. We show that the traditional practice of using a fixed compliance voltage for constant-current stimulation results in substantial energy loss. A portion of this energy can be recuperated by adjusting the compliance voltage to real-time requirements. Lastly, we demonstrate the potential impact of axon fiber diameter on defining the energy-optimal pulse-width for stimulation. When designing implantable pulse generators for energy efficiency, we propose that the future combination of a variable compliance system, a centered-triangular stimulus waveform, and an axon diameter specific stimulation pulse-width has great potential to reduce energy consumption and prolong battery life in neuromodulation devices.

  11. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    Full Text Available The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion

  12. Attractor neural networks with resource-efficient synaptic connectivity

    Science.gov (United States)

    Pehlevan, Cengiz; Sengupta, Anirvan

    Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: (1) connectivity is sparse, because synapses are costly; (2) bidirectional connections are overrepresented; and (3) bidirectional connections are stronger, because attractor states need strong recurrence.

  13. Differential theory of learning for efficient neural network pattern recognition

    Science.gov (United States)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  14. Efficient Market Hypothesis in South Africa: Evidence from Linear and Nonlinear Unit Root Tests

    Directory of Open Access Journals (Sweden)

    Andrew Phiri

    2015-12-01

    Full Text Available This study investigates the weak form efficient market hypothesis (EMH) for five generalized stock indices on the Johannesburg Stock Exchange (JSE) using weekly data collected from 31st January 2000 to 16th December 2014. In particular, we test for weak form market efficiency using a battery of linear and nonlinear unit root testing procedures comprising the classical augmented Dickey-Fuller (ADF) tests, the two-regime threshold autoregressive (TAR) unit root tests described in Enders and Granger (1998), as well as the three-regime unit root tests described in Bec, Salem, and Carrasco (2004). Based on our empirical analysis, we demonstrate that whilst the linear unit root tests advocate for unit roots within the time series, the nonlinear unit root tests suggest that most stock indices are threshold stationary processes. These results bridge two opposing contentions from previous studies by concluding that under a linear framework the JSE stock indices offer support in favour of weak form market efficiency, whereas when nonlinearity is accounted for, a majority of the indices violate the weak form EMH.

  15. Thermodynamic efficiency of learning a rule in neural networks

    Science.gov (United States)

    Goldt, Sebastian; Seifert, Udo

    2017-11-01

    Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.

  16. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving a quadratic function in the spike response model, instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.

  17. On supertaskers and the neural basis of efficient multitasking.

    Science.gov (United States)

    Medeiros-Ward, Nathan; Watson, Jason M; Strayer, David L

    2015-06-01

    The present study used brain imaging to determine the neural basis of individual differences in multitasking, the ability to successfully perform at least two attention-demanding tasks at once. Multitasking is mentally taxing and, therefore, should recruit the prefrontal cortex to maintain task goals when coordinating attentional control and managing the cognitive load. To investigate this possibility, we used functional neuroimaging to assess neural activity in both extraordinary multitaskers (Supertaskers) and control subjects matched on working memory capacity. Participants performed a challenging dual N-back task in which auditory and visual stimuli were presented simultaneously, requiring independent and continuous maintenance, updating, and verification of the contents of verbal and spatial working memory. As task requirements and cognitive load increased with N-back, the multitasking of Supertaskers, relative to controls, was characterized by more efficient recruitment of the anterior cingulate and posterior frontopolar prefrontal cortices. Results are interpreted using neuropsychological and evolutionary perspectives on individual differences in multitasking ability and the neural correlates of attentional control.

  18. How to build VLSI-efficient neural chips

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-02-01

    This paper presents several upper and lower bounds on the number of bits required for solving a classification problem, as well as ways in which these bounds can be used to efficiently build neural network chips. The focus is on complexity aspects pertaining to neural networks: (1) size complexity and depth (size) tradeoffs, and (2) precision of weights and thresholds as well as limited interconnectivity. These bounds reveal difficult problems, namely exponential growth in either space (precision and size) and/or time (learning and depth), when using neural networks for solving general classes of problems (particular cases may enjoy better performance). The bounds on the number of bits required for solving a classification problem represent the first step of a general class of constructive algorithms, showing how the quantization of the input space can be done in O(m²n) steps, where m is the number of examples and n is the number of dimensions. The second step of the algorithm has its roots in the implementation of a class of Boolean functions using threshold gates. It is substantiated by mathematical proofs for the size O(mn/Δ) and the depth O[log(mn)/log Δ] of the resulting network (here Δ is the maximum fan-in). Using the fan-in as a parameter, a full class of solutions can be designed. The third step of the algorithm represents a reduction of the size and an increase of its generalization capabilities. Extensions using analogue comparisons allow for real inputs and increase the generalization capabilities at the expense of longer training times. Finally, several solutions which can lower the size of the resulting neural network are detailed. The interesting aspect is that they are obtained for limited, or even constant, fan-ins. In support of these claims, many simulations have been performed and are referenced.

  19. Sex differences in the neural mechanisms mediating addiction: a new synthesis and hypothesis

    Directory of Open Access Journals (Sweden)

    Becker Jill B

    2012-06-01

    Full Text Available In this review we propose that there are sex differences in how men and women enter onto the path that can lead to addiction. Males are more likely than females to engage in risky behaviors that include experimenting with drugs of abuse, and in susceptible individuals, they are drawn into the spiral that can eventually lead to addiction. Women and girls are more likely to begin taking drugs as self-medication to reduce stress or alleviate depression. For this reason women enter into the downward spiral further along the path to addiction, and so transition to addiction more rapidly. We propose that this sex difference is due, at least in part, to sex differences in the organization of the neural systems responsible for motivation and addiction. Additionally, we suggest that sex differences in these systems and their functioning are accentuated with addiction. In the current review we discuss historical, cultural, social and biological bases for sex differences in addiction with an emphasis on sex differences in the neurotransmitter systems that are implicated.

  20. Efficient compliance with prescribed bounds on operational parameters by means of hypothesis testing using reactor data

    International Nuclear Information System (INIS)

    Sermer, P.; Olive, C.; Hoppe, F.M.

    2000-01-01

    A common problem in reactor operations is complying with a requirement that certain operational parameters be constrained to lie within prescribed bounds. The fundamental issue to be addressed in any compliance description can be stated as follows: the compliance definition, compliance procedures, and allowances for uncertainties in data and accompanying methodologies should be well defined and justifiable. To this end, a mathematical framework for compliance, in which the computed or measured estimates of process parameters are considered random variables, is described in this paper. This allows a statistical formulation of the definition of compliance with licence or otherwise imposed limits. An important aspect of the proposed methodology is that the derived statistical tests are obtained by a Monte Carlo procedure using actual reactor operational data. The implementation of the methodology requires routine surveillance of the reactor core in order to perform the underlying statistical tests. The additional work required for surveillance is balanced by the fact that the resulting actions on reactor operations, implemented in station procedures, make the reactor 'safer' by increasing the operating margins. Furthermore, increased margins are also achieved by efficient solution techniques which may allow an increase in reactor power. A rigorous analysis of a compliance problem using statistical hypothesis testing, based on extreme value probability distributions and actual reactor operational data, leads to effective solutions in the areas of licensing, nuclear safety, reliability and competitiveness of operating nuclear reactors. (author)
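
    The statistical flavour of such a compliance test can be sketched as follows. The surveillance data, the licence limit, and the percentile-based decision rule below are entirely hypothetical illustrations; the actual methodology rests on extreme value distributions fitted to reactor data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surveillance data: 500 measured estimates of an operational
# parameter (arbitrary units) against a hypothetical licence limit of 7.0
measurements = rng.normal(loc=5.0, scale=0.5, size=500)
limit = 7.0

# Monte Carlo (bootstrap) distribution of the 95th-percentile estimate
boot = np.array([
    np.percentile(rng.choice(measurements, size=measurements.size), 95)
    for _ in range(2000)
])

# Declare compliance only if the upper 97.5% bootstrap bound on the
# 95th percentile still sits below the licence limit
upper_bound = float(np.percentile(boot, 97.5))
compliant = bool(upper_bound < limit)
```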

  1. A critique on efficient market hypothesis (EMH): Empirical evidence of return anomalies in 12 U.S. industry portfolios

    OpenAIRE

    Lee, Cheng Hsun George

    2006-01-01

    In this paper, two major arguments concerning the validity of the Efficient Market Hypothesis, the momentum effect and the market-learns hypothesis, are summarized. Six empirical experiments with 12 U.S. industry portfolios are conducted. They not only provide evidence against some of the EMH assumptions, but also aim to address the formation of return anomalies. Of them, three are designed to assess the validity of the EMH with different approaches (White Noise, Effectiveness, Forecastibilit...

  2. Can Intrinsic Fluctuations Increase Efficiency in Neural Information Processing?

    Science.gov (United States)

    Liljenström, Hans

    2003-05-01

    All natural processes are accompanied by fluctuations, characterized as noise or chaos. Biological systems, which have evolved during billions of years, are likely to have adapted, not only to cope with such fluctuations, but also to make use of them. We investigate how the complex dynamics of the brain, including oscillations, chaos and noise, can affect the efficiency of neural information processing. In particular, we consider the amplification and functional role of internal fluctuations. Using computer simulations of a neural network model of the olfactory cortex and hippocampus, we demonstrate how microscopic fluctuations can result in global effects at the network level. We show that the rate of information processing in associative memory tasks can be maximized for optimal noise levels, analogous to stochastic resonance phenomena. Noise can also induce transitions between different dynamical states, which could be of significance for learning and memory. A chaotic-like behavior, induced by noise or by an increase in neuronal excitability, can enhance system performance if it is transient and converges to a limit cycle memory state. We speculate that this dynamical behavior could perhaps be related to (creative) thinking.
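
    The stochastic resonance effect mentioned above can be sketched with a toy threshold detector: a subthreshold periodic signal is invisible without noise, yet moderate noise lifts it over threshold during its positive phase. The amplitude, threshold, and noise level below are illustrative values, not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 20 * np.pi, 4000)
signal = 0.5 * np.sin(t)          # subthreshold periodic input
threshold = 1.0                   # unit "fires" when input exceeds threshold

def detections(noise_level):
    """Count threshold crossings coinciding with the signal's positive phase."""
    noisy = signal + noise_level * rng.standard_normal(t.size)
    fired = noisy > threshold
    return int(np.sum(fired & (signal > 0.25)))

no_noise = detections(0.0)        # the signal alone never crosses threshold
with_noise = detections(0.4)      # moderate noise makes the signal detectable
```

    Too much noise would eventually swamp the signal again, which is why detection peaks at an optimal noise level.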

  3. Lifelong bilingualism maintains neural efficiency for cognitive control in aging.

    Science.gov (United States)

    Gold, Brian T; Kim, Chobok; Johnson, Nathan F; Kryscio, Richard J; Smith, Charles D

    2013-01-09

    Recent behavioral data have shown that lifelong bilingualism can maintain youthful cognitive control abilities in aging. Here, we provide the first direct evidence of a neural basis for the bilingual cognitive control boost in aging. Two experiments were conducted, using a perceptual task-switching paradigm, including a total of 110 participants. In Experiment 1, older adult bilinguals showed better perceptual switching performance than their monolingual peers. In Experiment 2, younger and older adult monolinguals and bilinguals completed the same perceptual task-switching experiment while functional magnetic resonance imaging (fMRI) was performed. Typical age-related performance reductions and fMRI activation increases were observed. However, like younger adults, bilingual older adults outperformed their monolingual peers while displaying decreased activation in left lateral frontal cortex and cingulate cortex. Critically, this attenuation of age-related over-recruitment associated with bilingualism was directly correlated with better task-switching performance. In addition, the lower blood oxygenation level-dependent response in frontal regions accounted for 82% of the variance in the bilingual task-switching reaction time advantage. These results suggest that lifelong bilingualism offsets age-related declines in the neural efficiency for cognitive control processes.

  4. Efficient Pruning Method for Ensemble Self-Generating Neural Networks

    Directory of Open Access Journals (Sweden)

    Hirotaka Inoue

    2003-12-01

    Full Text Available Recently, multiple classifier systems (MCS have been used for practical applications to improve classification accuracy. Self-generating neural networks (SGNN are one of the suitable base-classifiers for MCS because of their simple setting and fast learning. However, the computation cost of the MCS increases in proportion to the number of SGNN. In this paper, we propose an efficient pruning method for the structure of the SGNN in the MCS. We compare the pruned MCS with two sampling methods. Experiments have been conducted to compare the pruned MCS with an unpruned MCS, the MCS based on C4.5, and the k-nearest neighbor method. The results show that the pruned MCS can improve its classification accuracy as well as reduce the computation cost.

  5. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    textabstractArtificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  6. Design of efficient and safe neural stimulators a multidisciplinary approach

    CERN Document Server

    van Dongen, Marijn

    2016-01-01

    This book discusses the design of neural stimulator systems which are used for the treatment of a wide variety of brain disorders such as Parkinson’s, depression and tinnitus. Whereas many existing books treating neural stimulation focus on one particular design aspect, such as the electrical design of the stimulator, this book uses a multidisciplinary approach: by combining the fields of neuroscience, electrophysiology and electrical engineering a thorough understanding of the complete neural stimulation chain is created (from the stimulation IC down to the neural cell). This multidisciplinary approach enables readers to gain new insights into stimulator design, while context is provided by presenting innovative design examples. Provides a single-source, multidisciplinary reference to the field of neural stimulation, bridging an important knowledge gap among the fields of bioelectricity, neuroscience, neuroengineering and microelectronics;Uses a top-down approach to understanding the neural activation proc...

  7. DANNP: an efficient artificial neural network pruning tool

    KAUST Repository

    Alshahrani, Mona

    2017-11-06

    Background Artificial neural networks (ANNs) are a robust class of machine learning models and are a frequent choice for solving classification problems. However, determining the structure of the ANNs is not trivial as a large number of weights (connection links) may lead to overfitting the training data. Although several ANN pruning algorithms have been proposed for the simplification of ANNs, these algorithms are not able to efficiently cope with intricate ANN structures required for complex classification problems. Methods We developed DANNP, a web-based tool, that implements parallelized versions of several ANN pruning algorithms. The DANNP tool uses a modified version of the Fast Compressed Neural Network software implemented in C++ to considerably enhance the running time of the ANN pruning algorithms we implemented. In addition to the performance evaluation of the pruned ANNs, we systematically compared the set of features that remained in the pruned ANN with those obtained by different state-of-the-art feature selection (FS) methods. Results Although the ANN pruning algorithms are not entirely parallelizable, DANNP was able to speed up the ANN pruning up to eight times on a 32-core machine, compared to the serial implementations. To assess the impact of the ANN pruning by DANNP tool, we used 16 datasets from different domains. In eight out of the 16 datasets, DANNP significantly reduced the number of weights by 70%–99%, while maintaining a competitive or better model performance compared to the unpruned ANN. Finally, we used a naïve Bayes classifier derived with the features selected as a byproduct of the ANN pruning and demonstrated that its accuracy is comparable to those obtained by the classifiers trained with the features selected by several state-of-the-art FS methods. The FS ranking methodology proposed in this study allows the users to identify the most discriminant features of the problem at hand. To the best of our knowledge, DANNP (publicly
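
    Magnitude-based pruning, one of the simplest strategies in this family, can be sketched as follows. This is a generic illustration of zeroing the weakest connections, not DANNP's parallelized algorithms, and the layer size and pruning fraction are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trained weight matrix of a small fully connected layer
W = rng.standard_normal((64, 32))

def prune_by_magnitude(W, fraction):
    """Zero out the given fraction of weights with the smallest magnitude."""
    cutoff = np.quantile(np.abs(W), fraction)
    return np.where(np.abs(W) >= cutoff, W, 0.0)

W_pruned = prune_by_magnitude(W, 0.9)        # remove ~90% of the weights
sparsity = float(np.mean(W_pruned == 0.0))
```

    In practice the pruned network is then retrained (or fine-tuned) to recover accuracy, and the surviving input connections indicate the most discriminant features.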

  8. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed during the preliminary learning process of a neural network classifier. In experiments using the MNIST database of handwritten digits, the feature selection procedure allows a reduction in the number of features (from 60 000 to 7000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. FROM EFFICIENT MARKET HYPOTHESIS TO BEHAVIOURAL FINANCE: CAN BEHAVIOURAL FINANCE BE THE NEW DOMINANT MODEL FOR INVESTING?

    OpenAIRE

    Anastasios KONSTANTINIDIS; Androniki KATARACHIA; George BOROVAS; Maria Eleni VOUTSA

    2012-01-01

    The present paper reviews two fundamental investing paradigms, which have had a substantial impact on the manner in which investors tend to develop their own strategies. Specifically, the study elaborates on the efficient market hypothesis (EMH), which, despite remaining most prominent and popular until the 1990s, is considered rather controversial and often disputed, and the theory of behavioural finance, which has increasingly been implemented in financial institutions. Based on an extensive survey of b...

  10. Precision Scaling of Neural Networks for Efficient Audio Processing

    OpenAIRE

    Ko, Jong Hwan; Fromm, Josh; Philipose, Matthai; Tashev, Ivan; Zarar, Shuayb

    2017-01-01

    While deep neural networks have shown powerful performance in many audio applications, their large computation and memory demand has been a challenge for real-time processing. In this paper, we study the impact of scaling the precision of neural networks on the performance of two common audio processing tasks, namely, voice-activity detection and single-channel speech enhancement. We determine the optimal pair of weight/neuron bit precision by exploring its impact on both the performance and ...
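
    Precision scaling of network weights can be sketched with a generic uniform symmetric quantizer (an illustration of the idea, not the exact scheme used in the paper; the weight vector is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.standard_normal(1000)     # hypothetical layer weights

def quantize(w, bits):
    """Uniform symmetric quantization of weights to the given bit width."""
    levels = 2 ** (bits - 1) - 1            # e.g. 127 representable steps at 8 bits
    scale = levels / np.max(np.abs(w))
    return np.round(w * scale) / scale

w8 = quantize(w, 8)
w4 = quantize(w, 4)
err8 = float(np.max(np.abs(w - w8)))        # worst-case rounding error, 8-bit
err4 = float(np.max(np.abs(w - w4)))        # worst-case rounding error, 4-bit
```

    Lowering the bit width shrinks memory and compute cost while the quantization error grows, which is exactly the performance/precision trade-off the study explores.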

  11. Efficient and Invariant Convolutional Neural Networks for Dense Prediction

    OpenAIRE

    Gao, Hongyang; Ji, Shuiwang

    2017-01-01

    Convolutional neural networks have shown great success on feature extraction from raw input data such as images. Although convolutional neural networks are invariant to translations on the inputs, they are not invariant to other transformations, including rotation and flip. Recent attempts have been made to incorporate more invariance in image recognition applications, but they are not applicable to dense prediction tasks, such as image segmentation. In this paper, we propose a set of methods...

  12. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    textabstractBiological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  13. Hurst exponent and prediction based on weak-form efficient market hypothesis of stock markets

    Science.gov (United States)

    Eom, Cheoljun; Choi, Sunghoon; Oh, Gabjin; Jung, Woo-Sung

    2008-07-01

    We empirically investigated the relationships between the degree of efficiency and the predictability in financial time-series data. The Hurst exponent was used as the measurement of the degree of efficiency, and the hit rate calculated from the nearest-neighbor prediction method was used for the prediction of the directions of future price changes. We used 60 market indexes of various countries. We empirically discovered that the relationship between the degree of efficiency (the Hurst exponent) and the predictability (the hit rate) is strongly positive. That is, a market index with a higher Hurst exponent tends to have a higher hit rate. These results suggested that the Hurst exponent is useful for predicting future price changes. Furthermore, we also discovered that the Hurst exponent and the hit rate are useful as standards that can distinguish emerging capital markets from mature capital markets.
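
    The Hurst exponent can be estimated from the diffusive scaling of price differences, std(X[t+lag] - X[t]) ~ lag^H. The sketch below applies this to a synthetic random walk (for which H ≈ 0.5, the efficient-market benchmark) rather than to the market indexes studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical log-price series: a pure random walk, so H should be near 0.5
prices = np.cumsum(rng.standard_normal(10000))

def hurst(series, max_lag=100):
    """Estimate H as the log-log slope of std of lagged differences vs lag."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return float(slope)

H = hurst(prices)
```

    H substantially above 0.5 indicates persistence (trends), below 0.5 anti-persistence; either deviation suggests predictability and hence weaker efficiency.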

  14. Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms

    DEFF Research Database (Denmark)

    Ampazis, Nikolaos; Dounias, George; Jantzen, Jan

    2004-01-01

    In this paper we make use of two highly efficient second order neural network training algorithms, namely the LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier. The alg...

  15. On the Efficient Market Hypothesis of Stock Market Indexes: The Role of Non-synchronous Trading and Portfolio Effects

    OpenAIRE

    Ortiz, Roberto; Contreras, Mauricio; Villena, Marcelo

    2015-01-01

    In this article, the long-term behavior of the stock market index of the New York Stock Exchange is studied for the period 1950 to 2013. Specifically, the CRSP Value-Weighted and CRSP Equal-Weighted indices are analyzed in terms of market efficiency, using the standard variance ratio test over more than 1600 one-week rolling windows. For the equally weighted index, the null hypothesis of a random walk is rejected in the whole period, while for the value-weighted index, the null hypothe...

  16. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying products qualities constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in a reduction of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrate the effectiveness of the proposed method with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
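
    The bootstrap aggregation idea behind the models above can be sketched on a toy regression problem. Simple polynomial regressors stand in for the neural networks here, and the data are synthetic, so this illustrates only the aggregation step, not the distillation models themselves:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic noisy data standing in for process operation data
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

def fit_predict(xb, yb, xq):
    """One 'model': a degree-5 polynomial fit (a stand-in for one network)."""
    coeffs = np.polyfit(xb, yb, deg=5)
    return np.polyval(coeffs, xq)

# Fit each model on a bootstrap resample, then average the predictions
preds = []
for _ in range(25):
    idx = rng.integers(0, x.size, x.size)     # sample with replacement
    preds.append(fit_predict(x[idx], y[idx], x))
bagged = np.mean(preds, axis=0)

mse_bagged = float(np.mean((bagged - np.sin(2 * np.pi * x)) ** 2))
```

    Averaging over resampled fits reduces the variance of any single model, which is the accuracy and reliability benefit the abstract attributes to bootstrap aggregated networks.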

  17. FROM EFFICIENT MARKET HYPOTHESIS TO BEHAVIOURAL FINANCE: CAN BEHAVIOURAL FINANCE BE THE NEW DOMINANT MODEL FOR INVESTING?

    Directory of Open Access Journals (Sweden)

    George BOROVAS

    2012-12-01

    Full Text Available The present paper reviews two fundamental investing paradigms, which have had a substantial impact on the manner in which investors tend to develop their own strategies. Specifically, the study elaborates on the efficient market hypothesis (EMH), which, despite remaining most prominent and popular until the 1990s, is considered rather controversial and often disputed, and the theory of behavioural finance, which has increasingly been implemented in financial institutions. Based on an extensive survey of behavioural finance and EMH literature, the study demonstrates, despite any assertions, the inherent irrationality of the theory of the efficient market, and discusses the potential reasons for its recent decline, arguing in favour of its replacement or co-existence with behavioural finance. In addition, the study highlights that the theory of behavioural finance, which endorses human behavioural and psychological attitudes, should become the theoretical framework for successful and profitable investing.

  18. vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    OpenAIRE

    Rhu, Minsoo; Gimelshein, Natalia; Clemons, Jason; Zulfiqar, Arslan; Keckler, Stephen W.

    2016-01-01

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU...

  19. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs.

    Directory of Open Access Journals (Sweden)

    Jamil Ahmad

    Full Text Available Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest such as tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC descriptor, which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes, allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
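
    The final hashing step can be sketched with random-hyperplane locality sensitive hashing, a common LSH scheme for cosine similarity (the descriptor dimension, code length, and vectors below are hypothetical, not the SiNC pipeline's actual settings):

```python
import numpy as np

rng = np.random.default_rng(6)

def lsh_codes(X, n_bits, rng):
    """Random-hyperplane LSH: one bit per hyperplane, from the projection sign."""
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

# A hypothetical 256-d image descriptor, a near-duplicate, and an unrelated one
v = rng.standard_normal(256)
near = v + 0.01 * rng.standard_normal(256)
far = rng.standard_normal(256)

codes = lsh_codes(np.stack([v, near, far]), 64, rng)
ham = lambda a, b: int(np.sum(a != b))
d_near = ham(codes[0], codes[1])   # similar descriptors -> small Hamming distance
d_far = ham(codes[0], codes[2])    # unrelated descriptors -> distance near n_bits/2
```

    Because Hamming distances on short binary codes are cheap to compute, retrieval scales to large collections.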

  20. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    2001-01-01

    The architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Trends in their development and use in high-energy and superhigh-energy physics experiments are described. Comparative data are given characterizing the efficient use of neural chips for useful event selection, classification of elementary particles, reconstruction of charged-particle tracks, and the search for the hypothesized Higgs particle. The characteristics of native neural chips and accelerated neural boards are considered [ru]

  1. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show...

  2. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    Directory of Open Access Journals (Sweden)

    Poramate Manoonpong

    2013-02-01

    Full Text Available Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs and sensory feedback (afferent-based control but also on internal forward models (efference copies. They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines.
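
    The CPG component can be sketched with a two-neuron SO(2)-type oscillator of the kind often used in such neural locomotion controllers: the weight matrix is a scaled rotation, and tanh saturation turns the unstable linear dynamics into sustained oscillation. The parameters below are illustrative choices, not the ones used on the walking machine:

```python
import numpy as np

# Scaled rotation weights: alpha > 1 plus tanh saturation yields oscillation,
# with a period of roughly 2*pi/phi update steps
alpha, phi = 1.1, 0.3
W = alpha * np.array([[np.cos(phi),  np.sin(phi)],
                      [-np.sin(phi), np.cos(phi)]])

o = np.array([0.1, 0.1])          # initial neuron activities
trace = []
for _ in range(300):
    o = np.tanh(W @ o)            # discrete-time neuron update
    trace.append(o[0])

trace = np.array(trace)
sign_changes = int(np.sum(np.diff(np.sign(trace)) != 0))   # rhythmic output
```

    In a controller, the two neuron outputs would drive antagonistic leg joints, and neuromodulation of alpha and phi reshapes the gait.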

  3. Emotion processing in words: a test of the neural re-use hypothesis using surface and intracranial EEG.

    Science.gov (United States)

    Ponz, Aurélie; Montant, Marie; Liegeois-Chauvel, Catherine; Silva, Catarina; Braun, Mario; Jacobs, Arthur M; Ziegler, Johannes C

    2014-05-01

    This study investigates the spatiotemporal brain dynamics of emotional information processing during reading using a combination of surface and intracranial electroencephalography (EEG). Two competing theoretical views were contrasted. According to the standard psycholinguistic perspective, emotional responses to words are generated within the reading network itself, subsequent to semantic activation. According to the neural re-use perspective, brain regions that are involved in processing emotional information contained in other stimuli (faces, pictures, smells) might be in charge of processing the emotional information in words as well. We focused on a specific emotion, disgust, which has a clear locus in the brain: the anterior insula. Surface EEG showed differences between disgust and neutral words as early as 200 ms. Source localization suggested a cortical generator of the emotion effect in the left anterior insula. These findings were corroborated by the intracranial recordings of two epileptic patients with depth electrodes in insular and orbitofrontal areas. Both electrodes showed effects of disgust in reading as early as 200 ms. The early emotion effect in a brain region (the insula) that responds to specific emotions in a variety of situations and stimuli clearly challenges classic sequential theories of reading, in favor of the neural re-use perspective.

  4. Efficient decoding with steady-state Kalman filter in neural interface systems.

    Science.gov (United States)

    Malik, Wasim Q; Truccolo, Wilson; Brown, Emery N; Hochberg, Leigh R

    2011-02-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems.
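    The offline gain computation described above can be sketched in scalar form: iterate the Riccati recursion until the Kalman gain converges, then freeze that gain for run-time use. The coefficients a, h, q, r below are illustrative assumptions; the actual decoder operates on multichannel firing-rate vectors with matrix-valued gains.

    ```python
    # Scalar-analogue sketch of the steady-state Kalman gain.
    # Model: x_t = a*x_{t-1} + w_t (process noise var q),
    #        y_t = h*x_t + v_t     (measurement noise var r).
    a, h = 0.95, 1.0        # state transition and observation coefficients
    q, r = 0.1, 1.0         # process and measurement noise variances

    def kalman_gains(p0, n):
        """Return the per-step Kalman gains from n Riccati iterations."""
        p, gains = p0, []
        for _ in range(n):
            p_pred = a * p * a + q                     # predicted error variance
            k = p_pred * h / (h * p_pred * h + r)      # Kalman gain
            p = (1.0 - k * h) * p_pred                 # updated error variance
            gains.append(k)
        return gains

    gains = kalman_gains(p0=1.0, n=50)
    k_ss = gains[-1]        # steady-state gain, computed once offline
    ```

    At run time the estimator update then reduces to x_hat = a*x_hat + k_ss*(y - h*a*x_hat), with no per-step variance recursion, which is where the reported speedup comes from.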

  5. The efficient market hypothesis of brazilian capital market, 2000-2010: an event study of distribution of dividends

    Directory of Open Access Journals (Sweden)

    Daniel Moreira Carvalho

    2013-11-01

    In the semi-strong form of the Efficient Markets Hypothesis (EMH), developed by Fama (1970, 1991), prices reflect both the past and any information disclosed by companies, making it impossible for an investor to consistently earn abnormal returns based on this type of information. In this paper we analyze the price behavior of common shares of 87 companies listed on the BM&FBovespa around the announcements of 452 dividend distribution events that occurred between January 2000 and September 2010, in order to test the semi-strong form of the EMH in the Brazilian capital market. We used an event study, which evaluates abnormal returns of stocks relative to the market return (Ibovespa). The analysis of abnormal returns in the event window (10 days before and after the dividend distribution announcement) showed an upward trend, with significant positive abnormal returns on days t-5, t-3, and t-1 to t+1. The results are in line with other studies in the Brazilian literature and indicate that the Brazilian capital market lacks the semi-strong form of informational efficiency.
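    The abnormal-return computation at the heart of the event study can be sketched as follows; the return series here are invented for illustration, not BM&FBovespa data.

    ```python
    # Event-study sketch: the abnormal return is the stock's return minus
    # the market (index) return on the same day; summing over the event
    # window gives the cumulative abnormal return (CAR).
    stock_returns  = [0.010, 0.022, -0.005, 0.018, 0.030]   # days t-2 .. t+2
    market_returns = [0.008, 0.012,  0.001, 0.004, 0.015]   # index returns

    abnormal = [s - m for s, m in zip(stock_returns, market_returns)]
    car = sum(abnormal)   # cumulative abnormal return over the event window
    ```

    Under the semi-strong EMH, the expected abnormal return around a public announcement is zero; the study's finding of significantly positive CARs is what argues against it.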

  6. Energy-efficient neural information processing in individual neurons and neuronal networks.

    Science.gov (United States)

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to an energy-efficient neural code for processing input signals. These factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise levels, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy and space costs. Individual neurons within a network may encode independent stimulus components, allowing a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits and operate to generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  7. Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms

    DEFF Research Database (Denmark)

    Ampazis, Nikolaos; Dounias, George; Jantzen, Jan

    2004-01-01

    In this paper we make use of two highly efficient second order neural network training algorithms, namely LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier. The algorithms are methodologically similar, and are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for non-linear least squares problems, with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization...

  8. Design of artificial neural networks using a genetic algorithm to predict collection efficiency in venturi scrubbers.

    Science.gov (United States)

    Taheri, Mahboobeh; Mohebbi, Ali

    2008-08-30

    In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.

  9. The Bio-Inspired Optimization of Trading Strategies and Its Impact on the Efficient Market Hypothesis and Sustainable Development Strategies

    Directory of Open Access Journals (Sweden)

    Rafał Dreżewski

    2018-05-01

    In this paper, an evolutionary algorithm for the optimization of Forex market trading strategies is proposed. An introduction to issues related to financial markets and evolutionary algorithms precedes the main part of the paper, in which the proposed trading system is presented. The system uses the evolutionary algorithm to optimize a parameterized greedy strategy, which is then used as an investment strategy on the Forex market. In the proposed system, a model of the Forex market was developed, including all elements necessary for simulating realistic trading processes. The proposed evolutionary algorithm contains several novel mechanisms introduced to optimize the greedy strategy. The most important of these are mechanisms for maintaining population diversity, a mechanism for protecting the best individuals in the population, mechanisms preventing excessive growth of the population, mechanisms for re-initializing the population after the time window moves, and a mechanism for choosing the best strategies used for trading. The experiments, conducted on real-world Forex market data, were aimed at testing the quality of the results obtained by the proposed algorithm and comparing them with the results of the buy-and-hold strategy. Through this comparison, we attempted to verify the validity of the efficient market hypothesis. The credibility of the hypothesis would have more general implications for many areas of our lives, including future sustainable development policies.

  10. Trait anxiety and the neural efficiency of manipulation in working memory

    NARCIS (Netherlands)

    Basten, U.; Stelzel, C.; Fiebach, C.J.

    2012-01-01

    The present study investigates the effects of trait anxiety on the neural efficiency of working memory component functions (manipulation vs. maintenance) in the absence of threat-related stimuli. For the manipulation of affectively neutral verbal information held in working memory, high- and

  11. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    Science.gov (United States)

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state-space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and we demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms over conventional methods on several benchmark real-life data sets.
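    The recurrent unit underlying the proposed structures is the standard LSTM cell. A minimal scalar sketch of one forward step is given below; the gate weights are illustrative placeholders, not trained parameters, and the paper's PF/EKF training machinery is not shown.

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def lstm_step(x, h, c, w):
        """One scalar LSTM step. w maps each gate name to (input weight,
        recurrent weight, bias); returns the new hidden and cell states."""
        i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])    # input gate
        f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])    # forget gate
        o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])    # output gate
        g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])  # candidate
        c_new = f * c + i * g            # cell state: gated memory update
        h_new = o * math.tanh(c_new)     # hidden state: gated, squashed output
        return h_new, c_new

    # Illustrative call with placeholder weights shared across gates.
    w_demo = {g: (0.6, 0.4, 0.0) for g in "ifog"}
    h1, c1 = lstm_step(x=0.5, h=0.0, c=0.0, w=w_demo)
    ```

    The state-space view mentioned in the abstract treats (h, c) as the latent state and the gate weights as the parameters that the PF or EKF updates estimate online.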

  12. The role of fluid intelligence and learning in analogical reasoning: How to become neurally efficient?

    Science.gov (United States)

    Dix, Annika; Wartenburger, Isabell; van der Meer, Elke

    2016-10-01

    This study on analogical reasoning evaluates the impact of fluid intelligence (FI) on adaptive changes in neural efficiency over the course of an experiment and specifies the underlying cognitive processes. Grade 10 students (N=80) solved unfamiliar geometric analogy tasks of varying difficulty. Neural efficiency was measured by the event-related desynchronization (ERD) in the alpha band, an indicator of cortical activity, and was defined as a low amount of cortical activity accompanying high performance during problem-solving. The higher their FI, the faster and more accurately students solved the tasks. Moreover, while high FI led to greater cortical activity in the first half of the experiment, it was associated with neurally more efficient processing (i.e., better performance at the same amount of cortical activity) in the second half. Performance on difficult tasks improved over the course of the experiment for all students, while neural efficiency increased for students with higher FI but decreased for students with lower FI. Based on analyses of the alpha sub-bands, we argue that high FI was associated with a stronger investment of attentional resources in the integration of information and the encoding of relations in this unfamiliar task during the first half of the experiment (lower-2 alpha band). Students with lower FI seem to have adapted their strategies over the course of the experiment (i.e., focusing on task-relevant information; lower-1 alpha band). Thus, the initially lower cortical activity and its increase in students with lower FI might reflect the overcoming of mental overload present in the first half of the experiment. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    Science.gov (United States)

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task; e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to exploit the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Different from the traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
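    The two core ideas named in the abstract, removing most connections (pruning) and representing the survivors with binary integers, can be sketched crudely as follows. The weight list, threshold, and sign-only quantization are illustrative simplifications, not the paper's actual algorithm.

    ```python
    # Pruning + binarization sketch: drop small-magnitude weights, then
    # represent each survivor by its sign alone (a deliberately crude
    # stand-in for the paper's binary-integer representation).
    weights = [0.82, -0.03, 0.01, -0.76, 0.04, 0.65, -0.02, 0.00]
    threshold = 0.1

    pruned = [w if abs(w) >= threshold else 0.0 for w in weights]
    binary = [0 if w == 0.0 else (1 if w > 0 else -1) for w in pruned]

    sparsity = pruned.count(0.0) / len(pruned)   # fraction of removed links
    ```

    Only the nonzero entries (and their positions) need to be stored, which is what the sparse-mapping memory in the hardware design exploits.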

  14. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    Science.gov (United States)

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
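    The DWT named above can be illustrated with one level of the Haar transform, its simplest instance, applied to a toy spike snippet (the paper's actual wavelet choice and hardware mapping are not reproduced here).

    ```python
    # One level of the Haar discrete wavelet transform (DWT): pairwise
    # scaled sums give the approximation band, pairwise scaled differences
    # give the detail band.
    def haar_dwt_level(x):
        """Return (approximation, detail) coefficients for an even-length signal."""
        s = 2 ** -0.5
        approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
        detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
        return approx, detail

    snippet = [1.0, 1.0, 4.0, 2.0, -3.0, -1.0, 0.0, 0.0]
    approx, detail = haar_dwt_level(snippet)
    ```

    Energy is preserved across the transform, and flat segments yield zero detail coefficients; thresholding those small coefficients is what makes DWT-based compression of neural signals effective.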

  15. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    significantly improves the accuracy of ELM in the prediction of thermostable enzymes. ELM tends to require more hidden-layer neurons than conventional tuning-based learning algorithms. To overcome this, the proposed approach uses a GA that optimizes the structure and the parameters of the ELM. In summary, optimizing the ELM with a GA yields an efficient prediction method; numerical experiments showed that our approach yields excellent results.

  16. Extended passaging increases the efficiency of neural differentiation from induced pluripotent stem cells

    Directory of Open Access Journals (Sweden)

    Koehler Karl R

    2011-08-01

    Background: The use of induced pluripotent stem cells (iPSCs) for the functional replacement of damaged neurons and in vitro disease modeling is of great clinical relevance. Unfortunately, the capacity of iPSC lines to differentiate into neurons is highly variable, prompting the need for a reliable means of assessing the differentiation capacity of newly derived iPSC lines. Extended passaging is emerging as a method of ensuring faithful reprogramming. We adapted an established and efficient embryonic stem cell (ESC) neural induction protocol to test whether iPSCs (1) have the competence to give rise to functional neurons with similar efficiency as ESCs and (2) whether the extent of neural differentiation could be altered or enhanced by increased passaging. Results: Our gene expression and morphological analyses revealed that neural conversion was temporally delayed in iPSC lines and some iPSC lines did not properly form embryoid bodies during the first stage of differentiation. Notably, these deficits were corrected by continual passaging in an iPSC clone. iPSCs with greater than 20 passages (late-passage iPSCs) expressed higher levels of pluripotency markers and formed larger embryoid bodies than iPSCs with fewer than 10 passages (early-passage iPSCs). Moreover, late-passage iPSCs started to express neural marker genes sooner than early-passage iPSCs after the initiation of neural induction. Furthermore, late-passage iPSC-derived neurons exhibited notably greater excitability and larger voltage-gated currents than early-passage iPSC-derived neurons, although these cells were morphologically indistinguishable. Conclusions: These findings strongly suggest that the efficiency of neuronal conversion depends on the complete reprogramming of iPSCs via extensive passaging.

  17. Efficient and Rapid Derivation of Primitive Neural Stem Cells and Generation of Brain Subtype Neurons From Human Pluripotent Stem Cells

    OpenAIRE

    Yan, Yiping; Shin, Soojung; Jha, Balendu Shekhar; Liu, Qiuyue; Sheng, Jianting; Li, Fuhai; Zhan, Ming; Davis, Janine; Bharti, Kapil; Zeng, Xianmin; Rao, Mahendra; Malik, Nasir; Vemuri, Mohan C.

    2013-01-01

    This study developed a highly efficient serum-free pluripotent stem cell (PSC) neural induction medium that can induce human PSCs into primitive neural stem cells (NSCs) in 7 days, obviating the need for time-consuming, laborious embryoid body generation or rosette picking. This method of primitive NSC derivation sets the stage for the scalable production of clinically relevant neural cells for cell therapy applications in good manufacturing practice conditions.

  18. Event-driven processing for hardware-efficient neural spike sorting

    Science.gov (United States)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide an efficient new means for hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
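    Level-crossing sampling can be sketched as follows: rather than sampling at a fixed rate, an event is emitted each time the signal moves one quantization step away from the last encoded level. The signal and step size below are illustrative, not the paper's datasets.

    ```python
    # Level-crossing (event-driven) encoding sketch: output is a sparse
    # stream of (sample index, direction) events instead of dense samples.
    def level_crossing_encode(signal, delta):
        events, last = [], signal[0]
        for i, x in enumerate(signal[1:], start=1):
            # emit one event per level crossed (several per sample if needed)
            while abs(x - last) >= delta:
                direction = 1 if x > last else -1
                last += direction * delta
                events.append((i, direction))
        return events

    events = level_crossing_encode([0.0, 0.05, 0.3, 0.5, 0.45, 0.1], delta=0.2)
    ```

    Quiet stretches of signal generate no events at all, which is why the data rate scales with neural activity rather than with the sampling clock.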

  19. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    Science.gov (United States)

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  20. THE FRACTAL MARKET HYPOTHESIS

    OpenAIRE

    FELICIA RAMONA BIRAU

    2012-01-01

    In this article, the concept of capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and...

  1. On the Keyhole Hypothesis

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare B.; Kidmose, Preben; Hansen, Lars Kai

    2017-01-01

    We propose and test the keyhole hypothesis: that measurements from low-dimensional EEG, such as ear-EEG, reflect a broadly distributed set of neural processes. We formulate the keyhole hypothesis in information-theoretic terms. The experimental investigation is based on legacy data consisting of 10... simultaneously recorded scalp EEG. A cross-validation procedure was employed to ensure unbiased estimates. We present several pieces of evidence in support of the keyhole hypothesis: there is a high mutual information between data acquired at scalp electrodes and through the ear-EEG "keyhole"; furthermore we...
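    The mutual-information quantity invoked by the keyhole hypothesis can be sketched for discretized signals; the joint-count table below is a toy example, not the paper's EEG data.

    ```python
    import math

    # Mutual information I(X;Y) from a table of joint counts, in bits:
    # I = sum over (i,j) of p(x_i, y_j) * log2( p(x_i, y_j) / (p(x_i) p(y_j)) ).
    def mutual_information(joint):
        total = sum(sum(row) for row in joint)
        px = [sum(row) / total for row in joint]          # marginal of X (rows)
        py = [sum(col) / total for col in zip(*joint)]    # marginal of Y (cols)
        mi = 0.0
        for i, row in enumerate(joint):
            for j, n in enumerate(row):
                if n:
                    pxy = n / total
                    mi += pxy * math.log2(pxy / (px[i] * py[j]))
        return mi

    # Perfectly coupled binary signals: MI equals the full 1 bit of entropy.
    mi = mutual_information([[5, 0], [0, 5]])
    ```

    A high value between scalp-EEG and ear-EEG channels is the kind of evidence the abstract cites for the "keyhole" view.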

  2. Compact, Energy-Efficient High-Frequency Switched Capacitor Neural Stimulator With Active Charge Balancing.

    Science.gov (United States)

    Hsu, Wen-Yang; Schmid, Alexandre

    2017-08-01

    Safety and energy efficiency are two major concerns for implantable neural stimulators. This paper presents a novel high-frequency, switched-capacitor (HFSC) stimulation and active charge balancing scheme, which achieves high energy efficiency and well-controlled stimulation charge in the presence of large electrode impedance variations. Furthermore, the HFSC can be implemented in a compact size without any external component, enabling simultaneous multichannel stimulation by deploying multiple stimulators. The theoretical analysis shows significant benefits over the constant-current and voltage-mode stimulation methods. The proposed solution was fabricated using a 0.18 μm high-voltage technology, and occupies only 0.035 mm² for a single stimulator. The measurement results show 50% peak energy efficiency and confirm the effectiveness of active charge balancing in preventing electrode dissolution.

  3. Neural network configuration and efficiency underlies individual differences in spatial orientation ability.

    Science.gov (United States)

    Arnold, Aiden E G F; Protzner, Andrea B; Bray, Signe; Levy, Richard M; Iaria, Giuseppe

    2014-02-01

    Spatial orientation is a complex cognitive process requiring the integration of information processed in a distributed system of brain regions. Current models of the neural basis of spatial orientation are based primarily on the functional role of single brain regions, with limited understanding of how interaction among these brain regions relates to behavior. In this study, we investigated two sources of variability in the neural networks that support spatial orientation (network configuration and efficiency) and assessed whether variability in these topological properties relates to individual differences in orientation accuracy. Participants with higher accuracy were shown to express greater activity in the right supramarginal gyrus, the right precentral cortex, and the left hippocampus, over and above a core network engaged by the whole group. Additionally, high-performing individuals had increased levels of global efficiency within a resting-state network composed of brain regions engaged during orientation, and increased levels of node centrality in the right supramarginal gyrus, the right primary motor cortex, and the left hippocampus. These results indicate that individual differences in the configuration of task-related networks and their efficiency measured at rest relate to the ability to spatially orient. Our findings advance systems neuroscience models of orientation and navigation by providing insight into the role of functional integration in shaping orientation behavior.
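    The global efficiency measure referred to above is commonly defined (following Latora and Marchiori) as the mean inverse shortest-path length over all node pairs. A plain-Python sketch on toy unweighted graphs, not the study's fMRI networks:

    ```python
    from collections import deque

    # Global efficiency of an unweighted graph given as an adjacency list:
    # E = (1 / (n*(n-1))) * sum over pairs i != j of 1 / d(i, j),
    # with unreachable pairs contributing 0. Distances come from BFS.
    def global_efficiency(adj):
        n = len(adj)
        total = 0.0
        for src in range(n):
            dist = {src: 0}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            total += sum(1.0 / d for node, d in dist.items() if node != src)
        return total / (n * (n - 1))

    e_full = global_efficiency([[1, 2], [0, 2], [0, 1]])   # triangle: 1.0
    e_path = global_efficiency([[1], [0, 2], [1]])         # chain: 5/6
    ```

    Higher values mean shorter average paths, i.e. more efficient information exchange across the network, which is the property the study relates to orientation accuracy.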

  4. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    Science.gov (United States)

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in higher power compared with the unbalanced designs across different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPEs, the recommended number of replicate estimates per specimen is 2 (from a sample of at least 30 specimens), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power than NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate...
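    The balanced-versus-unbalanced comparison can be sketched with a small Monte Carlo power simulation. Everything below is an illustrative stand-in: normally distributed "severity estimates", a crude two-sample test with a fixed critical value, and arbitrary effect sizes, none of it the paper's actual simulation model.

    ```python
    import random
    import statistics

    random.seed(1)

    def reject(sample_a, sample_b, crit=2.0):
        """Rough two-sample t-style test with a fixed critical value."""
        ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
        va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
        se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
        return abs(ma - mb) / se > crit

    def power(n_a, n_b, diff=5.0, sd=8.0, trials=2000):
        """Fraction of simulated experiments in which the test rejects."""
        hits = 0
        for _ in range(trials):
            a = [random.gauss(20.0, sd) for _ in range(n_a)]
            b = [random.gauss(20.0 + diff, sd) for _ in range(n_b)]
            hits += reject(a, b)
        return hits / trials

    p_balanced   = power(15, 15)   # 30 observations, split evenly
    p_unbalanced = power(5, 25)    # same total, unbalanced split
    ```

    With the total sample size held fixed, the balanced split yields a smaller standard error of the mean difference and hence higher power, mirroring the abstract's conclusion.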

  5. THE FRACTAL MARKET HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    FELICIA RAMONA BIRAU

    2012-05-01

    In this article, the concept of capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and, of course, the manner in which they interpret that information may differ. Also, the Fractal Market Hypothesis refers to the way that liquidity and investment horizons influence the behaviour of financial investors.

  6. Consequences of Converting Graded to Action Potentials upon Neural Information Coding and Energy Efficiency

    Science.gov (United States)

    Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward

    2014-01-01

Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of the lower information rates of generator potentials, they are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital: information loss and cost inflation. PMID:24465197

  7. Combining neural networks and signed particles to simulate quantum systems more efficiently

    Science.gov (United States)

    Sellier, Jean Michel

    2018-04-01

Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact it is able to simulate in a time-dependent fashion many-body systems on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique, based on an appropriately tailored neural network combined with the signed particle formalism, which drastically reduces the computation time and memory requirement to simulate time-dependent quantum systems. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations drops radically, since the kernel no longer needs to be stored: it is computed by the neural network itself, only on the cells of the (discretized) phase-space which are occupied by particles. As is clearly shown in the final part of this paper, not only does this novel approach drastically reduce the computational time, it also remains accurate. The author believes this work opens the way towards effective design of quantum devices, with significant practical implications.

  8. The Synapse Project: Engagement in mentally challenging activities enhances neural efficiency.

    Science.gov (United States)

    McDonough, Ian M; Haber, Sara; Bischof, Gérard N; Park, Denise C

    2015-01-01

Correlational and limited experimental evidence suggests that an engaged lifestyle is associated with the maintenance of cognitive vitality in old age. However, the mechanisms underlying these engagement effects are poorly understood. We hypothesized that mental effort underlies engagement effects and used fMRI to examine the impact of high-challenge activities (digital photography and quilting) compared with low-challenge activities (socializing or performing low-challenge cognitive tasks) on neural function at pretest, posttest, and one year after the engagement program. In the scanner, participants performed a semantic-classification task with two levels of difficulty to assess the modulation of brain activity in response to task demands. The High-Challenge group, but not the Low-Challenge group, showed increased modulation of brain activity in medial frontal, lateral temporal, and parietal cortex (regions associated with attention and semantic processing), some of which was maintained a year later. This increased modulation stemmed from decreases in brain activity during the easy condition for the High-Challenge group and was associated with time committed to the program, age, and cognition. Sustained engagement in cognitively demanding activities facilitated cognition by increasing neural efficiency. Mentally challenging activities may be neuroprotective and an important element of maintaining a healthy brain into late adulthood.

  9. FPGA IMPLEMENTATION OF ADAPTIVE INTEGRATED SPIKING NEURAL NETWORK FOR EFFICIENT IMAGE RECOGNITION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Pasupathi

    2014-05-01

Image recognition is a technology which can be used in various applications such as medical image recognition systems, security, defense video tracking, and factory automation. In this paper we present a novel pipelined architecture of an adaptive integrated Artificial Neural Network for image recognition. In our proposed work, we have combined the spiking neuron concept with an ANN to achieve an efficient architecture for image recognition. The set of training images is used to train the ANN and the target output is identified. Real-time videos are captured and then converted into frames for testing, and the images are recognized. The machine can operate at up to 40 frames/sec using images acquired from the camera. The system has been implemented on an XC3S400 SPARTAN-3 Field Programmable Gate Array.

  10. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans

    Science.gov (United States)

    Efrain Humpire-Mamani, Gabriel; Arindra Adiyoso Setio, Arnaud; van Ginneken, Bram; Jacobs, Colin

    2018-04-01

Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.
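The fusion step described above, turning three single-view networks into one 3D box, can be sketched compactly under one simple reading of the combination: each view's per-slice presence predictions constrain one axis of the box. The following Python toy is hypothetical (names and the already-thresholded inputs are not the authors' code):

```python
def slice_range(presence):
    """Extent of a structure along one axis, given per-slice presence flags
    (thresholded sigmoid outputs of one single-view network)."""
    hits = [i for i, p in enumerate(presence) if p]
    return (min(hits), max(hits)) if hits else None

def fuse_views(axial, coronal, sagittal):
    """Combine per-view slice ranges into a 3D bounding box: the axial stack
    constrains z, the coronal stack y, and the sagittal stack x."""
    return {"x": slice_range(sagittal),
            "y": slice_range(coronal),
            "z": slice_range(axial)}

# tiny example: flags for one structure in a 5-slice scan per view
box = fuse_views(axial=[0, 1, 1, 1, 0],
                 coronal=[0, 0, 1, 1, 0],
                 sagittal=[1, 1, 0, 0, 0])
```

The evaluation metric in the abstract, wall distance, would then compare each face of such a predicted box against the annotated reference box.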

  11. A Hypothesis and Review of the Relationship between Selection for Improved Production Efficiency, Coping Behavior, and Domestication

    Directory of Open Access Journals (Sweden)

    Wendy M. Rauw

    2017-09-01

Coping styles in response to stressors have been described both in humans and in other animal species. Because coping styles are directly related to individual fitness, they are part of the life history strategy. Behavioral styles trade off with other life-history traits through the acquisition and allocation of resources. Domestication and subsequent artificial selection for production traits specifically focused on selection of individuals with energy sparing mechanisms for non-production traits. Domestication resulted in animals with low levels of aggression and activity, and a low hypothalamic–pituitary–adrenal (HPA) axis reactivity. In the present work, we propose that, vice versa, selection for improved production efficiency may to some extent continue to favor docile domesticated phenotypes. It is hypothesized that both domestication and selection for improved production efficiency may result in the selection of reactive style animals. Both domesticated and reactive style animals are characterized by low levels of aggression and activity, and increased serotonin neurotransmitter levels. However, whereas domestication quite consistently results in a decrease in the functional state of the HPA axis, the reactive coping style is often found to be dominated by a high HPA response. This may suggest that fearfulness and coping behavior are two independent underlying dimensions of the coping response. Although it is generally proposed that animal welfare improves with selection for calmer animals that are less fearful and reactive to novelty, animals bred to be less sensitive with fewer desires may be undesirable from an ethical point of view.

  12. Artificial neural networks: an efficient tool for modelling and optimization of biofuel production (a mini review)

    International Nuclear Information System (INIS)

    Sewsynker-Sukai, Yeshona; Faloye, Funmilayo; Kana, Evariste Bosco Gueguim

    2016-01-01

In view of the looming energy crisis as a result of depleting fossil fuel resources and environmental concerns from greenhouse gas emissions, the need for sustainable energy sources has secured global attention. Research is currently focused towards renewable sources of energy due to their availability and environmental friendliness. Biofuel production, like other bioprocesses, is controlled by several process parameters, including pH, temperature and substrate concentration; however, the improvement of biofuel production requires a robust process model that accurately relates the effect of input variables to the process output. Artificial neural networks (ANNs) have emerged as a tool for modelling complex, non-linear processes. ANNs are applied in the prediction of various processes; they are useful for virtual experimentation and can potentially enhance bioprocess research and development. In this study, recent findings on the application of ANNs for the modelling and optimization of biohydrogen, biogas, biodiesel, microbial fuel cell technology and bioethanol are reviewed. In addition, comparative studies on the modelling efficiency of ANNs and other techniques such as the response surface methodology are briefly discussed. The review highlights the efficiency of ANNs as a modelling and optimization tool in biofuel process development.

  13. The neural exploitation hypothesis and its implications for an embodied approach to language and cognition: Insights from the study of action verbs processing and motor disorders in Parkinson's disease.

    Science.gov (United States)

    Gallese, Vittorio; Cuccio, Valentina

    2018-03-01

As is widely known, Parkinson's disease is clinically characterized by motor disorders such as the loss of voluntary movement control, including resting tremor, postural instability, and bradykinesia (Bocanegra et al., 2015; Helmich, Hallett, Deuschl, Toni, & Bloem, 2012; Liu et al., 2006; Rosin, Topka, & Dichgans, 1997). In recent years, many empirical studies (e.g., Bocanegra et al., 2015; Spadacenta et al., 2012) have also shown that the processing of action verbs is selectively impaired in patients affected by this neurodegenerative disorder. In the light of these findings, it has been suggested that Parkinson's disease can be interpreted within an embodied cognition framework (e.g., Bocanegra et al., 2015). The central tenet of any embodied approach to language and cognition is that higher-order cognitive functions are grounded in the sensory-motor system. With regard to this point, Gallese (2008) proposed the neural exploitation hypothesis to account, at the phylogenetic level, for how key aspects of human language are underpinned by brain mechanisms originally evolved for sensory-motor integration. Glenberg and Gallese (2012) also applied the neural exploitation hypothesis to the ontogenetic level. On the basis of these premises, they developed a theory of language acquisition according to which sensory-motor mechanisms provide a neurofunctional architecture for the acquisition of language, while retaining their original functions as well. The neural exploitation hypothesis is here applied to interpret the profile of patients affected by Parkinson's disease. It is suggested that action semantic impairments directly tap onto motor disorders. Finally, a discussion of what theory of language is needed to account for the interactions between language and movement disorders is presented. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Feed Forward Artificial Neural Network Model to Estimate the TPH Removal Efficiency in Soil Washing Process

    Directory of Open Access Journals (Sweden)

    Hossein Jafari Mansoorian

    2017-01-01

Background & Aims of the Study: A feed-forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration, and contact time, were used to describe the removal of TPH as a dependent variable in an FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for model testing. The performance of the model was compared with linear regression and assessed using the Root Mean Square Error (RMSE) as a goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, an FFANN model with a three-layer structure of 4-3-1 and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were 2.596, 0.966, 10.70, and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the regressors assessed in the developed model. Thus, focusing on the optimization of the soil washing process with regard to shaking speed, contact time, surfactant concentration, and pH can improve TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN for the assessment of soil washing processes and the control of petroleum hydrocarbon emission into the environment.
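The 4-3-1 architecture reported above (four regressors, three hidden units, one output) is small enough to write out directly. A minimal Python sketch of such a forward pass and the RMSE criterion; the tanh activation and any weights supplied are illustrative placeholders, not the fitted model:

```python
import math

def forward(x, hidden, output):
    """Forward pass of a 4-3-1 feed-forward net: tanh hidden layer, linear output.
    hidden = (W, b) with a 3x4 weight matrix; output = (w, b) with 3 weights."""
    W, b = hidden
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
         for row, bi in zip(W, b)]
    w, b_out = output
    return sum(wi * hi for wi, hi in zip(w, h)) + b_out

def rmse(pred, obs):
    """Root Mean Square Error, the goodness-of-fit measure used in the study."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))
```

With trained weights, `forward([pH, speed, conc, time], hidden, output)` would return the predicted TPH removal efficiency, and `rmse` would compare predictions against observed removals on the held-out 15%.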

  15. Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing

    Science.gov (United States)

    Sengupta, Abhronil; Roy, Kaushik

    2017-12-01

Present day computers expend orders of magnitude more computational resources to perform various cognitive and perception related tasks that humans routinely perform every day. This has recently resulted in a seismic shift in the field of computation where research efforts are being directed to develop a neurocomputer that attempts to mimic the human brain with nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this paper attempts to provide a review of the recent developments in the field of spintronic device based neuromorphic computing. Description of various spin-transfer torque mechanisms that can be potentially utilized for realizing device structures mimicking neural and synaptic functionalities is provided. A cross-layer perspective extending from the device to the circuit and system level is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. A device-circuit-algorithm co-simulation framework calibrated to experimental results suggests that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement in comparison to state-of-the-art CMOS implementations.

  16. Efficient and rapid derivation of primitive neural stem cells and generation of brain subtype neurons from human pluripotent stem cells.

    Science.gov (United States)

    Yan, Yiping; Shin, Soojung; Jha, Balendu Shekhar; Liu, Qiuyue; Sheng, Jianting; Li, Fuhai; Zhan, Ming; Davis, Janine; Bharti, Kapil; Zeng, Xianmin; Rao, Mahendra; Malik, Nasir; Vemuri, Mohan C

    2013-11-01

Human pluripotent stem cells (hPSCs), including human embryonic stem cells and human induced pluripotent stem cells, are unique cell sources for disease modeling, drug discovery screens, and cell therapy applications. The first step in producing neural lineages from hPSCs is the generation of neural stem cells (NSCs). Current methods of NSC derivation involve the time-consuming, labor-intensive steps of embryoid body generation or coculture with stromal cell lines that result in low-efficiency derivation of NSCs. In this study, we report a highly efficient serum-free pluripotent stem cell neural induction medium that can induce hPSCs into primitive NSCs (pNSCs) in 7 days, obviating the need for time-consuming, laborious embryoid body generation or rosette picking. The pNSCs expressed the neural stem cell markers Pax6, Sox1, Sox2, and Nestin; were negative for Oct4; could be expanded for multiple passages; and could be differentiated into neurons, astrocytes, and oligodendrocytes, in addition to the brain region-specific neuronal subtypes GABAergic, dopaminergic, and motor neurons. Global gene expression of the transcripts of pNSCs was comparable to that of rosette-derived and human fetal-derived NSCs. This work demonstrates an efficient method to generate expandable pNSCs, which can be further differentiated into central nervous system neurons and glia with temporal, spatial, and positional cues of brain regional heterogeneity. This method of pNSC derivation sets the stage for the scalable production of clinically relevant neural cells for cell therapy applications in good manufacturing practice conditions.

  17. Hybrid feedback feedforward: An efficient design of adaptive neural network control.

    Science.gov (United States)

    Pan, Yongping; Liu, Yiqi; Xu, Bin; Yu, Haoyong

    2016-04-01

This paper presents an efficient hybrid feedback feedforward (HFF) adaptive approximation-based control (AAC) strategy for a class of uncertain Euler-Lagrange systems. The control structure includes a proportional-derivative (PD) control term in the feedback loop and a radial-basis-function (RBF) neural network (NN) in the feedforward loop, which mimics the human motor learning control mechanism. In the presence of discontinuous friction, a sigmoid-jump-function NN is incorporated to improve control performance. The major difference of the proposed HFF-AAC design from the traditional feedback AAC (FB-AAC) design is that only desired outputs, rather than both tracking errors and desired outputs, are applied as RBF-NN inputs. Yet, such a slight modification leads to several attractive properties of HFF-AAC, including the convenient choice of an approximation domain, the decrease of the number of RBF-NN inputs, and semiglobal practical asymptotic stability dominated by control gains. Compared with previous HFF-AAC approaches, the proposed approach possesses the following two distinctive features: (i) all above attractive properties are achieved by a much simpler control scheme; (ii) the bounds of plant uncertainties are not required to be known. Consequently, the proposed approach guarantees a minimum configuration of the control structure and a minimum requirement of plant knowledge for the AAC design, which leads to a sharp decrease of implementation cost in terms of hardware selection, algorithm realization and system debugging. Simulation results have demonstrated that the proposed HFF-AAC can perform as well as or even better than the traditional FB-AAC under much simpler control synthesis and much lower computational cost. Copyright © 2015 Elsevier Ltd. All rights reserved.
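The structural difference highlighted above (feeding the RBF network the desired output rather than the tracking error) is easy to see in a sketch. The following Python fragment is illustrative only; the gains, centers, and widths are placeholders, and the adaptive weight-update law is omitted:

```python
import math

def rbf_feedforward(y_des, centers, widths, weights):
    """RBF-NN feedforward term. Note its input is the desired output only,
    which is the HFF idea; a traditional FB design would feed it the error."""
    phi = [math.exp(-(y_des - c) ** 2 / (2.0 * s * s))
           for c, s in zip(centers, widths)]
    return sum(w * p for w, p in zip(weights, phi))

def hff_control(y_des, y, de, kp, kd, centers, widths, weights):
    """Control law: PD feedback on the tracking error plus RBF feedforward."""
    e = y_des - y
    return kp * e + kd * de + rbf_feedforward(y_des, centers, widths, weights)
```

Because the RBF inputs are the reference signals, the approximation domain is known in advance from the desired trajectory, which is one of the properties the abstract lists.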

  18. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    Science.gov (United States)

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. 
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
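The ensemble idea, pooling realizations across trials instead of over time, can be illustrated with a toy discrete estimator (binary signals, history length 1). This is a deliberate simplification of the authors' continuous, GPU-accelerated estimator:

```python
import math
import random
from collections import Counter

def transfer_entropy(trials_x, trials_y):
    """Plug-in transfer entropy X -> Y (in bits) from an ensemble of trials,
    pooling observations across trials rather than assuming stationarity
    within one long recording. Binary sequences, history length 1."""
    joint = Counter()  # counts of (y_next, y_now, x_now)
    for xs, ys in zip(trials_x, trials_y):
        for t in range(len(ys) - 1):
            joint[(ys[t + 1], ys[t], xs[t])] += 1
    n = sum(joint.values())
    p = {k: v / n for k, v in joint.items()}

    def marginal(indices):
        m = Counter()
        for k, v in p.items():
            m[tuple(k[i] for i in indices)] += v
        return m

    p_y = marginal((1,))       # p(y_now)
    p_yy = marginal((0, 1))    # p(y_next, y_now)
    p_yx = marginal((1, 2))    # p(y_now, x_now)
    te = 0.0
    for (y1, y0, x0), pj in p.items():
        te += pj * math.log2(pj * p_y[(y0,)] / (p_yy[(y1, y0)] * p_yx[(y0, x0)]))
    return te

# demo: y copies x with a one-step lag, so information flows from x to y
random.seed(1)
xs = [[random.randint(0, 1) for _ in range(200)] for _ in range(50)]
ys = [[0] + trial[:-1] for trial in xs]
te_xy = transfer_entropy(xs, ys)  # close to 1 bit
```

In the reverse direction the estimate is near zero, since the future of x is independent of y's past given x's own past.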

  19. Towards an Efficient Artificial Neural Network Pruning and Feature Ranking Tool

    KAUST Repository

    AlShahrani, Mona

    2015-01-01

Artificial Neural Networks (ANNs) are known to be among the most effective and expressive machine learning models. Their impressive abilities to learn have been reflected in many broad application domains such as image recognition, medical diagnosis, online banking, robotics, dynamic systems, and many others. ANNs with multiple layers of complex non-linear transformations (a.k.a. Deep ANNs) have recently shown successful results in the areas of computer vision and speech recognition. ANNs are parametric models that approximate unknown functions in which parameter values (weights) are adapted during training. ANN weights can be large in number and thus render the trained model more complex, with chances of “overfitting” the training data. In this study, we explore the effects of network pruning on the performance of ANNs and on the ranking of the features that describe the data. A simplified ANN model results in fewer parameters, less computation and faster training. We investigate the use of Hessian-based pruning algorithms as well as simpler ones (i.e. non Hessian-based) on nine datasets with varying numbers of input features and ANN parameters. The Hessian-based Optimal Brain Surgeon algorithm (OBS) is robust but slow, so a faster parallel Hessian approximation is provided. An additional speedup is provided using a variant we name ‘Simple n Optimal Brain Surgeon’ (SNOBS), which represents a good compromise between robustness and time efficiency. For some of the datasets, the ANN pruning experiments show on average a 91% reduction in the number of ANN parameters and about 60%-90% in the number of ANN input features, while maintaining comparable or better accuracy than the case when no pruning is applied. Finally, we show through a comprehensive comparison with seven state-of-the-art feature filtering methods that the feature selection and ranking obtained as a byproduct of the ANN pruning is comparable in accuracy to these methods.
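As a point of reference for the simpler non-Hessian pruning algorithms mentioned above, magnitude pruning can be sketched in a few lines. This is not OBS or SNOBS, which instead rank each weight by a saliency involving the inverse Hessian; the function below is an illustrative baseline only:

```python
def prune_by_magnitude(weights, fraction):
    """Zero out the smallest-magnitude fraction of a layer's weights.
    weights: list of rows (one per unit); returns a pruned copy.
    Ties at the threshold are all pruned together."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * fraction)
    if k == 0:
        return [row[:] for row in weights]
    thresh = flat[k - 1]
    return [[0.0 if abs(w) <= thresh else w for w in row] for row in weights]

# prune half of a tiny 2x2 layer: the two smallest weights are zeroed
pruned = prune_by_magnitude([[0.1, -2.0], [0.05, 3.0]], 0.5)
```

Zeroed input columns then correspond to dropped features, which is the sense in which pruning yields a feature ranking as a byproduct.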


  1. Wavelet based artificial neural network applied for energy efficiency enhancement of decoupled HVAC system

    International Nuclear Information System (INIS)

    Jahedi, G.; Ardehali, M.M.

    2012-01-01

Highlights: ► In HVAC systems, temperature and relative humidity are coupled and dynamic mathematical models are non-linear. ► A wavelet-based ANN is used in series with an infinite impulse response filter for self tuning of PD controller. ► Energy consumption is evaluated for a decoupled bi-linear HVAC system with variable air volume and variable water flow. ► Substantial enhancement in energy efficiency is realized, when the gain coefficients of PD controllers are tuned adaptively. - Abstract: Control methodologies could lower energy demand and consumption of heating, ventilating and air conditioning (HVAC) systems and, simultaneously, achieve better comfort conditions. However, the application of classical controllers is unsatisfactory, as HVAC systems are non-linear and the control variables such as temperature and relative humidity (RH) inside the thermal zone are coupled. The objective of this study is to develop and simulate a wavelet-based artificial neural network (WNN) for self tuning of a proportional-derivative (PD) controller for a decoupled bi-linear HVAC system with variable air volume and variable water flow, responsible for controlling temperature and RH of a thermal zone, where thermal comfort and energy consumption of the system are evaluated. To achieve the objective, a WNN is used in series with an infinite impulse response (IIR) filter for faster and more accurate identification of system dynamics, as needed for on-line use and off-line batch mode training. The WNN-IIR algorithm is used for self-tuning of two PD controllers for temperature and RH. The simulation results show that the WNN-IIR controller performance is superior, as compared with a classical PD controller. The enhancement in efficiency of the HVAC system is accomplished due to substantially lower consumption of energy during the transient operation, when the gain coefficients of PD controllers are tuned in an adaptive manner, as the steady state setpoints for temperature and RH are approached.

  2. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural networks.

  3. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture.

    Science.gov (United States)

    Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa

    2018-06-05

    Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events in an accurate and efficient way, there are two major challenges. One challenge is how to achieve high accuracy due to a poor signal-to-noise ratio (SNR). The other one is concerned with real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combined with convolutional neural network (CNN) and long short-term memory (LSTM) is designed and this model is trained by using previously obtained data. Once the model is fully trained, it is sent to edge components for events detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data is delivered to the data center. Based on experiment results, a high detection accuracy (over 96%) with less transmitted data (about 90%) was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.

  4. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture

    Directory of Open Access Journals (Sweden)

    Xiaopu Zhang

    2018-06-01

    Full Text Available Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. Detecting events accurately and efficiently poses two major challenges: achieving high accuracy despite a poor signal-to-noise ratio (SNR), and transmitting the data in real time. Taking these challenges into consideration, an edge-computing-based platform, named Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is designed and trained on previously obtained data. Once fully trained, the model is sent to the edge components for event detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data are delivered to the data center. In experiments on a microseismic monitoring system, the proposed approach achieved high detection accuracy (over 96%) while transmitting about 90% less data. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.

  5. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    Science.gov (United States)

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to achieve.

  6. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    Directory of Open Access Journals (Sweden)

    Stefano Curcio

    2014-01-01

    Full Text Available The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to achieve.

  7. Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation

    Directory of Open Access Journals (Sweden)

    M. Agatonović

    2012-12-01

    Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for networks in detection and estimation stage. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates the accuracy comparable to that of the super-resolution 2D MUSIC algorithm.
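    As a rough illustration of the estimation stage (an assumed setup, not the paper's networks or training data): an exact-interpolation Gaussian RBF network can map the inter-element phase difference of a two-element, half-wavelength array to the arrival angle, avoiding any spectral search at prediction time.

```python
import numpy as np

def rbf_train(x, y, sigma=0.3):
    # Exact-interpolation RBF network: solve G w = y for the output weights,
    # with a tiny ridge term for numerical safety.
    G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
    return np.linalg.solve(G + 1e-9 * np.eye(len(x)), y)

def rbf_predict(x_new, centers, w, sigma=0.3):
    G = np.exp(-((np.atleast_1d(x_new)[:, None] - centers[None, :]) ** 2)
               / (2 * sigma**2))
    return G @ w

# Half-wavelength two-element array: inter-element phase = pi * sin(theta).
theta_train = np.deg2rad(np.arange(-60, 61, 5))
phase_train = np.pi * np.sin(theta_train)
w = rbf_train(phase_train, theta_train)

theta_true = np.deg2rad(17.0)                     # unseen source direction
est = rbf_predict(np.pi * np.sin(theta_true), phase_train, w)[0]
print(np.rad2deg(est))  # close to 17 degrees
```

    Prediction is a single matrix-vector product, which is why such networks can deliver DOA estimates quickly once trained.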

  8. Efficient forward propagation of time-sequences in convolutional neural networks using Deep Shifting

    NARCIS (Netherlands)

    K.L. Groenland (Koen); S.M. Bohte (Sander)

    2016-01-01

    textabstractWhen a Convolutional Neural Network is used for on-the-fly evaluation of continuously updating time-sequences, many redundant convolution operations are performed. We propose the method of Deep Shifting, which remembers previously calculated results of convolution operations in order
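    The caching idea can be sketched for a single causal 1-D convolution over a stream (a minimal reconstruction of the principle, not the authors' implementation): when one new frame arrives, every output except the newest already exists, so the cached outputs are shifted and only one dot product is computed.

```python
import numpy as np

class ShiftConv1D:
    # Streaming causal 1-D convolution that reuses past results: shift the
    # cached outputs and compute a single dot product per incoming frame,
    # instead of re-convolving the whole window.
    def __init__(self, kernel, n_outputs):
        self.kernel = np.asarray(kernel, float)
        self.frames = np.zeros(len(kernel))    # most recent input frames
        self.outputs = np.zeros(n_outputs)     # cached convolution outputs

    def push(self, frame):
        self.frames = np.roll(self.frames, -1)
        self.frames[-1] = frame
        new_out = self.frames @ self.kernel[::-1]   # only the newest output
        self.outputs = np.roll(self.outputs, -1)
        self.outputs[-1] = new_out
        return self.outputs

kernel = np.array([1.0, 0.5, 0.25])
sc = ShiftConv1D(kernel, n_outputs=4)
stream = [1, 2, 3, 4, 5, 6, 7]
for x in stream:
    cached = sc.push(x)

# Reference: naive full recomputation over the window agrees with the cache.
full = np.convolve(np.array(stream, float), kernel, mode="valid")[-4:]
```

    The saving grows with kernel size and network depth, since every layer of a deep network can apply the same shift-and-reuse trick.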

  9. Application of neural network technology to nuclear plant thermal efficiency improvement

    International Nuclear Information System (INIS)

    Doremus, Rick; Allen Ho, S.; Bailey, James V.; Roman, Harry

    2004-01-01

    Due to the tremendous cost of building new nuclear power plants, it has become increasingly attractive to increase the power output from the existing operating power plants. There are two options that may be available to accomplish this goal. One option is to uprate the plant through licensing modification for a comfortably achievable goal of 4% to 6%. However, the licensing efforts required are no small task, vary from plant to plant, and may take years to accomplish. Some nuclear power plants may not have this option because of design, environmental, political, or geographical limitations. A second option exists that is simpler and more immediate. It focuses on improving the plant operating conditions using adaptive software that could increase the total plant output by approximately one-half percent by adjusting certain key operating parameters. No design basis analyses, hardware modifications, or licensing changes are required. In fact, this technique can be used on a plant that has already obtained licensing modification to obtain an additional one-half percent on top of the 4% to 6% increase. Public Service Electric and Gas and ARD Corporation are jointly investigating the creation of a Plant Optimization System, called POSITIVE. POSITIVE is an adaptive software tool that enables a user to analyze current plant data to identify potential problem areas and to obtain recommendations for increasing the plant's electric output. POSITIVE uses a combination of expert systems and adaptive software to analyze the thermal performance of a nuclear power plant. Historical data, obtained while the plant was above 93% power, is used to train neural networks to determine the current electric output of the plant. Once sufficiently trained, new data can be processed through the neural network. The neural network first determines the electric output associated with the current data. If the actual power matches the power predicted by the network, the neural network can be used

  10. Thymidine Kinase-Negative Herpes Simplex Virus 1 Can Efficiently Establish Persistent Infection in Neural Tissues of Nude Mice.

    Science.gov (United States)

    Huang, Chih-Yu; Yao, Hui-Wen; Wang, Li-Chiu; Shen, Fang-Hsiu; Hsu, Sheng-Min; Chen, Shun-Hua

    2017-02-15

    patients with persistent infection. However, answers to the questions as to whether TK-negative (TK⁻) HSV-1 can establish persistent infection in brains of immunocompromised hosts and whether neurons in vivo are permissive for TK⁻ HSV-1 remain elusive. Using three genetically engineered HSV-1 TK⁻ mutants and two strains of nude mice deficient in T cells, we found that all three HSV-1 TK⁻ mutants can efficiently establish persistent infection in the brain stem and trigeminal ganglion and detected glycoprotein C, a true late viral antigen, in brainstem neurons. Our study provides evidence that TK⁻ HSV-1 can persist in neural tissues and replicate in brain neurons of immunocompromised hosts. Copyright © 2017 American Society for Microbiology.

  11. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    Science.gov (United States)

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

    Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on N-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to influence the translation quality more strongly. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.

  12. The Purchasing Power Parity Hypothesis:

    African Journals Online (AJOL)

    2011-10-02

    Oct 2, 2011 ... reject the unit root hypothesis in real exchange rates may simply be due to the shortness ..... Violations of Purchasing Power Parity and Their Implications for Efficient ... Official Intervention in the Foreign Exchange Market:.

  13. Effect of task complexity on intelligence and neural efficiency in children: an event-related potential study.

    Science.gov (United States)

    Zhang, Qiong; Shi, Jiannong; Luo, Yuejia; Liu, Sainan; Yang, Jie; Shen, Mowei

    2007-10-08

    The present study investigates the effects of task complexity, intelligence and neural efficiency on children's performance on an Elementary Cognitive Task. Twenty-three children were divided into two groups on the basis of their Raven Progressive Matrix scores and were then asked to complete a choice reaction task with two test conditions. We recorded the electroencephalogram and calculated the peak latencies and amplitudes for anteriorly distributed P225, N380 and late positive component. Our results suggested shorter late positive component latencies in brighter children, possibly reflecting a higher processing speed in these individuals. Increased P225 amplitude and increased N380 amplitudes for brighter children may indicate a more efficient allocation of attention for brighter children. No moderating effect of task complexity on brain-intelligence relationship was found.

  14. An Efficient Feature Extraction Method with Pseudo-Zernike Moment in RBF Neural Network-Based Human Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ahmadi Majid

    2003-01-01

    Full Text Available This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines global and local information from the frontal view of facial images. A radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from shape information. An efficient distance measure, the facial candidate threshold (FCT), is defined to distinguish between face and nonface images. The pseudo-Zernike moment invariant (PZMI), with an efficient method for selecting the moment order, has been used. A newly defined parameter named the axis correction ratio (ACR) of images is introduced for disregarding irrelevant information in face images. The effect of these parameters on disregarding irrelevant information and improving the recognition rate is studied, and we also evaluate the effect of the order of the PZMI on the recognition rate of the proposed technique, as well as on the RBF neural network learning speed. Simulation results on the face database of the Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.

  15. Development of efficiency module of organization of Arctic sea cargo transportation with application of neural network technologies

    Science.gov (United States)

    Sobolevskaya, E. Yu; Glushkov, S. V.; Levchenko, N. G.; Orlov, A. P.

    2018-05-01

    The analysis of software intended for organizing and managing sea cargo transportation processes has been carried out. Shortcomings of existing information resources for organizing work in the Arctic and Subarctic regions of the Far East are identified: the lack of decision-support systems and the absence of factor analysis for calculating delivery time and cost. The architecture of a module for calculating the effectiveness of organizing sea cargo transportation has been developed. The simulation process, which is based on a neural network, has been considered, and the main classification factors with their weighting coefficients have been identified. A neural network architecture has been developed to calculate the efficiency of organizing sea cargo transportation in Arctic conditions, together with the architecture of an intelligent system for organizing sea cargo transportation that takes the difficult navigation conditions in the Arctic into account. Its implementation will provide the shipping company's management with predictive analytics; support decision-making; calculate the most efficient delivery route; provide an on-demand online transportation forecast; and minimize shipping cost, delays in transit, and risks to cargo safety.

  16. Applying a supervised ANN (artificial neural network) approach to the prognostication of driven wheel energy efficiency indices

    International Nuclear Information System (INIS)

    Taghavifar, Hamid; Mardani, Aref

    2014-01-01

    This paper examines the prediction of the energy efficiency indices of driven wheels (i.e. traction coefficient and tractive power efficiency) as affected by wheel load, slippage and forward velocity, each at three levels with three replicates, to form a total of 162 data points. The pertinent experiments were carried out in a soil bin testing facility. A feed-forward ANN (artificial neural network) with the standard BP (back propagation) algorithm was used to construct a supervised representation to predict the energy efficiency indices of driven wheels. It was deduced, in view of the statistical performance criteria (i.e. MSE (mean squared error) and R²), that a supervised ANN with 3-8-10-2 topology and the Levenberg–Marquardt training algorithm represented the optimal model. Modeling implementations indicated that the ANN is a powerful technique to prognosticate the stochastic energy efficiency indices as affected by soil-wheel interactions, with an MSE of 0.001194 and R² of 0.987 and 0.9772 for traction coefficient and tractive power efficiency, respectively. It was found that traction coefficient and tractive power efficiency increase with increased slippage. A similar trend holds for the influence of wheel load on the objective parameters. Whereas increased velocity led to an increment in tractive power efficiency, velocity had no significant effect on the traction coefficient. - Highlights: • Energy efficiency indices were assessed as affected by tire parameters. • ANN was applied for prognostication of the objective parameters. • A 3-8-10-2 ANN with MSE of 0.001194 and R² of 0.987 and 0.9772 was designated as the optimal model. • Optimal values of learning rate and momentum were found to be 0.9 and 0.5, respectively

  17. Motor sequence learning-induced neural efficiency in functional brain connectivity.

    Science.gov (United States)

    Karim, Helmet T; Huppert, Theodore J; Erickson, Kirk I; Wollam, Mariegold E; Sparto, Patrick J; Sejdić, Ervin; VanSwearingen, Jessie M

    2017-02-15

    Previous studies have shown the functional neural circuitry differences before and after an explicitly learned motor sequence task, but have not assessed these changes during the process of motor skill learning. Functional magnetic resonance imaging activity was measured while participants (n=13) were asked to tap their fingers to visually presented sequences in blocks that were either the same sequence repeated (learning block) or random sequences (control block). Motor learning was associated with a decrease in brain activity during learning compared to control. Lower brain activation was noted in the posterior parietal association area and bilateral thalamus during the later periods of learning (not during the control). Compared to the control condition, we found the task-related motor learning was associated with decreased connectivity between the putamen and left inferior frontal gyrus and left middle cingulate brain regions. Motor learning was associated with changes in network activity, spatial extent, and connectivity. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Efficient airport detection using region-based fully convolutional neural networks

    Science.gov (United States)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up training and testing. For lack of labeled data, we transferred the convolutional layers of ZF net, pretrained on ImageNet, to initialize the shared convolutional layers, then retrained the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in almost real time with high accuracy, which is much better than traditional methods.

  19. Efficient second order Algorithms for Function Approximation with Neural Networks. Application to Sextic Potentials

    International Nuclear Information System (INIS)

    Gougam, L.A.; Taibi, H.; Chikhi, A.; Mekideche-Chafa, F.

    2009-01-01

    The problem of determining an analytical description for a set of data arises in numerous sciences and applications and can be referred to as data modeling or system identification. Neural networks are a convenient means of representation because they are known to be universal approximators that can learn from data. The desired task is usually obtained by a learning procedure which consists in adjusting the synaptic weights. For this purpose, many learning algorithms have been proposed to update these weights. Convergence is a crucial criterion for these learning algorithms if neural networks are to be useful in different applications. The aim of the present contribution is to use a training algorithm for feed-forward wavelet networks used for function approximation. The training is based on the minimization of the least-squares cost function, performed by iterative second-order gradient-based methods. We make use of the Levenberg-Marquardt algorithm to train the architecture of the chosen network; the training procedure then starts with a simple gradient method, which is followed by a BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm. The performances of the two algorithms are then compared. Our method is then applied to determine the energy of the ground state associated with a sextic potential. In fact, the Schrödinger equation does not always admit an exact solution and one generally has to solve it numerically. To this end, the sextic potential is first approximated with the above outlined wavelet network and then implemented into a numerical scheme. Our results are in good agreement with those found in the literature.
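    A minimal Levenberg-Marquardt loop of the kind described, shown here on a toy two-parameter exponential fit rather than the paper's wavelet network, illustrates the damped second-order step that makes such training converge quickly near a minimum.

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, n_iter=50, lam=1e-2):
    # Minimal LM loop for least-squares fitting: a Gauss-Newton step
    # damped by lam, with lam adapted according to whether the step
    # actually reduced the sum of squared residuals.
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y - f(x, p)                        # residuals
        J = jac(x, p)                          # Jacobian of the model
        A = J.T @ J + lam * np.eye(len(p))     # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p + step
        if np.sum((y - f(x, p_new)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5          # accept step, trust more
        else:
            lam *= 2.0                         # reject step, damp harder
    return p

f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.stack([np.exp(p[1] * x),
                             p[0] * x * np.exp(p[1] * x)], axis=1)

x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * x)                     # noiseless toy data
p = levenberg_marquardt(f, jac, [1.0, 0.0], x, y)
print(p)  # close to [2.0, -1.5]
```

    The same loop applies to a wavelet network once `f` and `jac` are replaced by the network's output and its derivatives with respect to the weights.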

  20. Architecture and performance of neural networks for efficient A/C control in buildings

    International Nuclear Information System (INIS)

    Mahmoud, Mohamed A.; Ben-Nakhi, Abdullatif E.

    2003-01-01

    The feasibility of using neural networks (NNs) for optimizing air conditioning (AC) setback scheduling in public buildings was investigated. The main focus is on optimizing the network architecture in order to achieve best performance. To save energy, the temperature inside public buildings is allowed to rise after business hours by setting back the thermostat. The objective is to predict the time of the end of thermostat setback (EoS) such that the design temperature inside the building is restored in time for the start of business hours. State of the art building simulation software, ESP-r, was used to generate a database that covered the years 1995-1999. The software was used to calculate the EoS for two office buildings using the climate records in Kuwait. The EoS data for 1995 and 1996 were used for training and testing the NNs. The robustness of the trained NN was tested by applying them to a 'production' data set (1997-1999), which the networks have never 'seen' before. For each of the six different NN architectures evaluated, parametric studies were performed to determine the network parameters that best predict the EoS. External hourly temperature readings were used as network inputs, and the thermostat end of setback (EoS) is the output. The NN predictions were improved by developing a neural control scheme (NC). This scheme is based on using the temperature readings as they become available. For each NN architecture considered, six NNs were designed and trained for this purpose. The performance of the NN analysis was evaluated using a statistical indicator (the coefficient of multiple determination) and by statistical analysis of the error patterns, including ANOVA (analysis of variance). The results show that the NC, when used with a properly designed NN, is a powerful instrument for optimizing AC setback scheduling based only on external temperature records

  1. Direct and efficient transfection of mouse neural stem cells and mature neurons by in vivo mRNA electroporation.

    Science.gov (United States)

    Bugeon, Stéphane; de Chevigny, Antoine; Boutin, Camille; Coré, Nathalie; Wild, Stefan; Bosio, Andreas; Cremer, Harold; Beclin, Christophe

    2017-11-01

    In vivo brain electroporation of DNA expression vectors is a widely used method for lineage and gene function studies in the developing and postnatal brain. However, the transfection efficiency of DNA is limited, and adult brain tissue is refractory to electroporation. Here, we present a systematic study of mRNA as a vector for acute genetic manipulation in the developing and adult brain. We demonstrate that mRNA electroporation is far more efficient than DNA electroporation, and leads to faster and more homogeneous protein expression in vivo. Importantly, mRNA electroporation allows the manipulation of neural stem cells and postmitotic neurons in the adult brain using minimally invasive procedures. Finally, we show that this approach can be efficiently used for functional studies, as exemplified by transient overexpression of the neurogenic factor Myt1l and by stably inactivating Dicer nuclease in vivo in adult-born olfactory bulb interneurons and in fully integrated cortical projection neurons. © 2017. Published by The Company of Biologists Ltd.

  2. Neuroticism, intelligence, and intra-individual variability in elementary cognitive tasks: testing the mental noise hypothesis.

    Science.gov (United States)

    Colom, Roberto; Quiroga, Ma Angeles

    2009-08-01

    Some studies show positive correlations between intraindividual variability in elementary speed measures (reflecting processing efficiency) and individual differences in neuroticism (reflecting instability in behaviour). The so-called neural noise hypothesis assumes that higher levels of noise are related both to smaller indices of processing efficiency and greater levels of neuroticism. Here, we test this hypothesis measuring mental speed by means of three elementary cognitive tasks tapping similar basic processes but varying systematically their content (verbal, numerical, and spatial). Neuroticism and intelligence are also measured. The sample comprised 196 undergraduate psychology students. The results show that (1) processing efficiency is generally unrelated to individual differences in neuroticism, (2) processing speed and efficiency correlate with intelligence, and (3) only the efficiency index is genuinely related to intelligence when the colinearity between speed and efficiency is controlled.
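    The speed and efficiency indices referred to above are commonly operationalized as the mean and the intraindividual variability of trial-level reaction times; the exact indices used in the study are not given here, so the following is an assumed, typical formulation.

```python
import numpy as np

def speed_and_efficiency(rts_ms):
    # Processing speed as mean RT; processing efficiency / "mental noise"
    # as the intraindividual SD and its scale-free version, the
    # coefficient of variation (assumed operationalization).
    rts = np.asarray(rts_ms, float)
    mean_rt = rts.mean()
    isd = rts.std(ddof=1)            # intraindividual SD across trials
    cv = isd / mean_rt               # coefficient of variation
    return mean_rt, isd, cv

stable = speed_and_efficiency([400, 410, 395, 405, 402])
noisy  = speed_and_efficiency([400, 550, 320, 480, 260])
print(stable[2] < noisy[2])  # the noisier responder has the larger CV
```

    Under the neural noise hypothesis, the CV-like index, not mean speed alone, is the quantity expected to track both neuroticism and (inversely) intelligence.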

  3. An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.

    Science.gov (United States)

    Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman

    2018-02-27

    Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors with different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reason to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: the architecture and the learning algorithms, which are used to design the network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels achieves the effect of larger kernels with fewer parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84, respectively. The learning algorithm includes task-level parallelism; all the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases segmentation accuracy compared to previous techniques.
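    The reported accuracies use the Dice Similarity Coefficient, 2|A∩B|/(|A|+|B|), which for a pair of binary segmentation masks can be computed directly:

```python
import numpy as np

def dice(pred, truth):
    # Dice Similarity Coefficient between two binary masks:
    # 2 * |intersection| / (|pred| + |truth|).
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                   # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

truth = np.array([[0, 1, 1],
                  [0, 1, 1],
                  [0, 0, 0]])
pred  = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [0, 0, 0]])
score = dice(pred, truth)
print(score)  # 2*3 / (3+4) ≈ 0.857
```

    A score of 1.0 means the predicted and ground-truth tumor regions coincide exactly; the paper's 0.90/0.85/0.84 values are Dice scores of this kind for the three tumor regions.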

  4. Investigating the Neural Bases for Intra-Subject Cognitive Efficiency Using Functional Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Neena K. Rao

    2014-10-01

    Full Text Available Several fMRI studies have examined brain regions mediating inter-subject variability in cognitive efficiency, but none have examined regions mediating intra-subject variability in efficiency. Thus, the present study was designed to identify brain regions involved in intra-subject variability in cognitive efficiency via participant-level correlations between trial-level reaction time (RT and trial-level fMRI BOLD percent signal change on a processing speed task. On each trial, participants indicated whether a digit-symbol probe-pair was present or absent in an array of nine digit-symbol probe-pairs while fMRI data were collected. Deconvolution analyses, using RT time-series models (derived from the proportional scaling of an event-related hemodynamic response function model by trial-level RT, were used to evaluate relationships between trial-level RTs and BOLD percent signal change. Although task-related patterns of activation and deactivation were observed in regions including bilateral occipital, bilateral parietal, portions of the medial wall such as the precuneus, default mode network regions including anterior cingulate, posterior cingulate, bilateral temporal, right cerebellum, and right cuneus, RT-BOLD correlations were observed in a more circumscribed set of regions. Positive RT-related patterns, or RT-BOLD correlations where fast RTs were associated with lower BOLD percent signal change, were observed in regions including bilateral occipital, bilateral parietal, and the precuneus. RT-BOLD correlations were not observed in the default mode network indicating a smaller set of regions associated with intra-subject variability in cognitive efficiency. The results are discussed in terms of a distributed area of regions that mediate variability in the cognitive efficiency that might underlie processing speed differences between individuals.
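    At its core, the analysis relates trial-level RT to trial-level BOLD percent signal change within a participant; a simplified stand-in (synthetic data and a plain Pearson correlation rather than the RT-modulated deconvolution model) is:

```python
import numpy as np

def rt_bold_correlation(rts, bold_psc):
    # Participant-level Pearson correlation between trial-level RT and
    # trial-level BOLD percent signal change; a positive r corresponds to
    # the "positive RT-related pattern" described above (fast trials go
    # with lower signal change).
    return np.corrcoef(rts, bold_psc)[0, 1]

rng = np.random.default_rng(1)
rts = rng.uniform(400, 1200, size=60)              # trial RTs in ms
bold = 0.002 * rts + rng.normal(0, 0.2, size=60)   # synthetic RT-BOLD coupling
r = rt_bold_correlation(rts, bold)
print(r > 0)
```

    In the actual study this relationship is estimated voxel-wise via RT-scaled hemodynamic regressors, but the sign and strength of the per-participant coupling is the quantity of interest in both formulations.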

  5. TESTING THE HYPOTHESIS OF MARKET EFFICIENCY THROUGH ARTIFICIAL NEURAL NETWORKS: A CASE STUDY WITH THE TEN MAJOR IBOVESPA SHARES IN THE FIRST QUARTER OF 2011

    Directory of Open Access Journals (Sweden)

    Luiz Henrique Herling

    2013-01-01

    Full Text Available The fuel market faces political, economic, social and environmental problems that cloud the future of fossil energy sources; in light of these facts, countries are looking to hybrid and electric vehicles as part of the solution in the transportation sector, since electric vehicles use little or no fossil fuel. The objective of this article was to identify options for introducing the electric vehicle into the urban traffic of the city of São Paulo by 2020. The study used a literature review of secondary sources to present electric vehicle technologies and to identify parameters that were assessed through the morphological analysis technique. In the morphological analysis, sets of values were defined by the author for these parameters, possible combinations were structured, options clearly impractical to deploy before 2020 were discarded, and some viable solutions were analyzed in detail. These analyses concluded that there are viable options for São Paulo today, but important requirements regarding technology, politics, the market, infrastructure, and innovation in products and services still need to be addressed, which is the main reason the electric vehicle remains unnoticed by consumers as a viable option. The challenges are great, and the actors willing to solve them will find a promising market to explore.

  6. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    Science.gov (United States)

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.

  7. Fish and chips: implementation of a neural network model into computer chips to maximize swimming efficiency in autonomous underwater vehicles.

    Science.gov (United States)

    Blake, R W; Ng, H; Chan, K H S; Li, J

    2008-09-01

    Recent developments in the design and propulsion of biomimetic autonomous underwater vehicles (AUVs) have focused on boxfish as models (e.g. Deng and Avadhanula 2005 Biomimetic micro underwater vehicle with oscillating fin propulsion: system design and force measurement Proc. 2005 IEEE Int. Conf. Robot. Auto. (Barcelona, Spain) pp 3312-7). Whilst such vehicles have many potential advantages in operating in complex environments (e.g. high manoeuvrability and stability), limited battery life and payload capacity are likely functional disadvantages. Boxfish employ undulatory median and paired fins during routine swimming which are characterized by high hydromechanical Froude efficiencies (approximately 0.9) at low forward speeds. Current boxfish-inspired vehicles are propelled by a low aspect ratio, 'plate-like' caudal fin (ostraciiform tail) which can be shown to operate at a relatively low maximum Froude efficiency (approximately 0.5) and is mainly employed as a rudder for steering and in rapid swimming bouts (e.g. escape responses). Given this and the fact that bioinspired engineering designs are not obligated to wholly duplicate a biological model, computer chips were developed using a multilayer perceptron neural network model of undulatory fin propulsion in the knifefish Xenomystus nigri that would potentially allow an AUV to achieve high optimum values of propulsive efficiency at any given forward velocity, giving a minimum energy drain on the battery. We envisage that externally monitored information on flow velocity (sensory system) would be conveyed to the chips residing in the vehicle's control unit, which in turn would signal the locomotor unit to adopt kinematics (e.g. fin frequency, amplitude) associated with optimal propulsion efficiency. Power savings could prolong vehicle operational life and/or provide more power to other functions (e.g. communications).
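The envisaged control loop (flow sensor → neural network → kinematic commands) can be sketched without the trained network itself. Here, linear interpolation over a small hypothetical table of efficiency-optimal (speed, frequency, amplitude) triples stands in for the multilayer perceptron; all numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration table: for each forward speed (m/s), the fin
# frequency (Hz) and amplitude (rad) that maximize Froude efficiency.
SPEEDS = np.array([0.1, 0.3, 0.5, 0.8])
FREQS = np.array([1.0, 2.2, 3.1, 4.5])
AMPS = np.array([0.20, 0.28, 0.33, 0.40])

def optimal_kinematics(flow_speed_mps):
    """Stand-in for the trained network: map the sensed flow speed to the
    fin frequency/amplitude commands sent to the locomotor unit."""
    f = float(np.interp(flow_speed_mps, SPEEDS, FREQS))
    a = float(np.interp(flow_speed_mps, SPEEDS, AMPS))
    return f, a

freq, amp = optimal_kinematics(0.4)   # sensed flow speed of 0.4 m/s
```

A trained network would replace the table lookup with a smooth mapping learned from kinematic data, but the control-loop structure is the same.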

  8. An integrated multichannel neural recording analog front-end ASIC with area-efficient driven right leg circuit.

    Science.gov (United States)

    Tao Tang; Wang Ling Goh; Lei Yao; Jia Hao Cheong; Yuan Gao

    2017-07-01

    This paper describes an integrated multichannel neural recording analog front end (AFE) with a novel area-efficient driven right leg (DRL) circuit to improve the system common mode rejection ratio (CMRR). The proposed AFE consists of an AC-coupled low-noise programmable-gain amplifier, an area-efficient DRL block and a 10-bit SAR ADC. Compared to a conventional DRL circuit, the proposed capacitor-less DRL design achieves a 90% chip area reduction with enhanced CMRR performance, making it ideal for multichannel biomedical recording applications. The AFE circuit has been designed in a standard 0.18-μm CMOS process. Post-layout simulation results show that the AFE provides two gain settings of 54 dB/60 dB while consuming 1 μA per channel under a supply voltage of 1 V. The input-referred noise of the AFE integrated from 1 Hz to 10 kHz is only 4 μVrms and the CMRR is 110 dB.

  9. Efficient derivation of multipotent neural stem/progenitor cells from non-human primate embryonic stem cells.

    Directory of Open Access Journals (Sweden)

    Hiroko Shimada

    Full Text Available The common marmoset (Callithrix jacchus is a small New World primate that has been used as a non-human primate model for various biomedical studies. We previously demonstrated that transplantation of neural stem/progenitor cells (NS/PCs derived from mouse and human embryonic stem cells (ESCs and induced pluripotent stem cells (iPSCs promotes functional locomotor recovery in mouse spinal cord injury models. However, for the clinical application of such a therapeutic approach, we need to evaluate the efficacy and safety of pluripotent stem cell-derived NS/PCs not only by xenotransplantation, but also by allotransplantation using non-human primate models to assess immunological rejection and tumorigenicity. In the present study, we established a culture method to efficiently derive NS/PCs as neurospheres from common marmoset ESCs. Marmoset ESC-derived neurospheres could be passaged repeatedly and showed sequential generation of neurons and astrocytes, similar to that of mouse ESC-derived NS/PCs, and gave rise to functional neurons as indicated by calcium imaging. Although marmoset ESC-derived NS/PCs could not differentiate into oligodendrocytes under default culture conditions, these cells could abundantly generate oligodendrocytes by incorporating additional signals that recapitulate in vivo neural development. Moreover, principal component analysis of microarray data demonstrated that marmoset ESC-derived NS/PCs acquired gene expression profiles similar to those of fetal brain-derived NS/PCs by repeated passaging. Therefore, marmoset ESC-derived NS/PCs may be useful not only for accurate evaluation by allotransplantation of NS/PCs into non-human primate models, but are also applicable to analysis of iPSCs established from transgenic disease model marmosets.

  10. Nanoparticle-neural stem cells for targeted ovarian cancer treatment: optimization of silica nanoparticles for efficient drug loading

    Science.gov (United States)

    Patel, Z.; Berlin, J.; Abidi, W.

    2018-02-01

    One of the drugs used to treat ovarian cancer is cisplatin. However, cisplatin kills normal surrounding tissue in addition to cancer cells. To improve tumor targeting efficiency, our lab uses neural stem cells (NSCs), which migrate directly to ovarian tumors. If free cisplatin is loaded into NSCs for targeted drug delivery, it will kill the NSCs. To prevent cisplatin from killing both the NSCs and normal surrounding tissue, our lab synthesizes silica nanoparticles (SiNPs) that act as a protective carrier. The overall aim is to maximize the efficiency of tumor targeting using NSCs while minimizing toxicity to these NSCs using SiNPs. The goal of this project is to optimize the stability of SiNPs, which is important for efficient drug loading. To do this, the concentration of tetraethyl orthosilicate (TEOS), one of the main components of SiNPs, was varied. We hypothesized that more TEOS equates to more stable SiNPs because TEOS contributes carbon to SiNPs, and thus a tightly-packed chemical structure results in a stable particle. The stability of the SiNPs was then checked in cell media and phosphate buffered saline (PBS). Lastly, the SiNPs were analyzed for their porosity using transmission electron microscopy (TEM). TEM imaging showed white spots in the 200-800 μL TEOS batches and no white spots in the 1000-1800 μL TEOS batches. The white spots were pores, which indicate instability. We concluded that the ultimate factor that determines the stability of SiNPs (100 nm) is the concentration of organic substance.

  11. Efficient Market Hypothesis and Comovement Among Emerging Markets = Etkin Piyasa Hipotezi ve Gelişmekte Olan Piyasaların Birlikte Hareketi

    Directory of Open Access Journals (Sweden)

    Oktay TAŞ

    2010-06-01

    Full Text Available The main purpose of this study is to investigate stock market cointegration from the market efficiency perspective. To this end, eleven emerging stock market indices are tested using weekly data for the period January 1998-December 2008 and for the sub-period January 2002-December 2008. Comovement among the emerging market countries was analyzed through the Johansen cointegration test. The existence of two cointegrating vectors was found at the 5% significance level. However, firm evidence against market efficiency could not be established because of the low explanatory power of the results generated from the vector error correction model.

  12. Birth weight predicted baseline muscular efficiency, but not response of energy expenditure to calorie restriction: An empirical test of the predictive adaptive response hypothesis.

    Science.gov (United States)

    Workman, Megan; Baker, Jack; Lancaster, Jane B; Mermier, Christine; Alcock, Joe

    2016-07-01

    Aiming to test the evolutionary significance of relationships linking prenatal growth conditions to adult phenotypes, this study examined whether birth size predicts energetic savings during fasting. We specifically tested a Predictive Adaptive Response (PAR) model that predicts greater energetic saving among adults who were born small. Data were collected from a convenience sample of young adults living in Albuquerque, NM (n = 34). Indirect calorimetry quantified changes in resting energy expenditure (REE) and active muscular efficiency that occurred in response to a 29-h fast. Multiple regression analyses linked birth weight to baseline and post-fast metabolic values while controlling for appropriate confounders (e.g., sex, body mass). Birth weight did not moderate the relationship between body size and energy expenditure, nor did it predict the magnitude of change in REE or muscular efficiency observed from baseline to after fasting. Alternative indicators of birth size were also examined (e.g., low v. normal birth weight, comparison of tertiles), with no effects found. However, baseline muscular efficiency improved by 1.1% per 725 g (S.D.) increase in birth weight (P = 0.037). Birth size did not influence the sensitivity of metabolic demands to fasting, neither at rest nor during activity. Moreover, small birth size predicted a reduction in the efficiency with which muscles convert energy expended into work accomplished. These results do not support the ascription of adaptive function to phenotypes associated with small birth size. © 2015 Wiley Periodicals, Inc. Am. J. Hum. Biol. 28:484-492, 2016.

  13. Variability: A Pernicious Hypothesis.

    Science.gov (United States)

    Noddings, Nel

    1992-01-01

    The hypothesis of greater male variability in test results is discussed in its historical context, and reasons feminists have objected to the hypothesis are considered. The hypothesis acquires political importance if it is considered that variability results from biological, rather than cultural, differences. (SLD)

  14. Physiopathological Hypothesis of Cellulite

    Science.gov (United States)

    de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro

    2009-01-01

    A series of questions are asked concerning this condition including as regards to its name, the consensus about the histopathological findings, physiological hypothesis and treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, correct diagnosis of cellulite and the technique employed are fundamental to success. PMID:19756187

  15. Life Origination Hydrate Hypothesis (LOH-Hypothesis

    Directory of Open Access Journals (Sweden)

    Victor Ostrovskii

    2012-01-01

    Full Text Available The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs, which are N-bases, riboses, nucleosides, nucleotides, DNA- and RNA-like molecules, amino-acids, and proto-cells repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their “thermodynamic front” guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.
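The thermodynamic directedness invoked here is the standard spontaneity criterion: a chemical step at constant temperature and pressure is admissible only if it lowers the Gibbs free energy of the reacting system,

```latex
\Delta G = \Delta H - T\,\Delta S < 0 ,
```

so a chain of successive steps, each with negative \(\Delta G\), traces the "thermodynamic front" the hypothesis describes.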

  16. Designing an artificial neural network using radial basis function to model exergetic efficiency of nanofluids in mini double pipe heat exchanger

    Science.gov (United States)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-06-01

    The present study aims at predicting and optimizing the exergetic efficiency of TiO2-Al2O3/water nanofluid at different Reynolds numbers, volume fractions and twist ratios using Artificial Neural Networks (ANN) and experimental data. Central Composite Design (CCD) and a cascade Radial Basis Function (RBF) network were used to display the significance levels of the analyzed factors on the exergetic efficiency. The size of the TiO2-Al2O3/water nanocomposite was 20-70 nm. The parameters of the ANN model were adapted by a radial basis function (RBF) training algorithm with a wide range of experimental data. Total mean square error and the correlation coefficient were used to evaluate the results; the best result was obtained from a double-layer perceptron neural network with 30 neurons, for which the total Mean Square Error (MSE) and correlation coefficient (R2) were 0.002 and 0.999, respectively, indicating successful prediction by the network. Moreover, the proposed equation for predicting exergetic efficiency was extremely successful. According to the optimal curves, the optimum design parameters of the double pipe heat exchanger with inner twisted tape and nanofluid, under the constraint of an exergetic efficiency of 0.937, are found to be a Reynolds number of 2500, a twist ratio of 2.5 and a volume fraction (v/v%) of 0.05.
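The radial-basis-function idea behind such a model can be sketched in a few lines: Gaussian basis functions centered on the training points, with linear output weights obtained by solving a regularized linear system. The toy inputs below (scaled Reynolds number, volume fraction → efficiency) are invented for illustration and have nothing to do with the study's measurements.

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0, reg=1e-8):
    """Exact-interpolation RBF network: one Gaussian unit per training point."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-gamma * d2)                      # design matrix of basis outputs
    w = np.linalg.solve(Phi + reg * np.eye(len(X)), y)
    return w

def rbf_predict(Xq, X, w, gamma=1.0):
    d2 = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2) @ w

# Invented (Reynolds number / 1e4, volume fraction) -> exergetic efficiency.
X = np.array([[0.5, 0.01], [1.0, 0.03], [1.5, 0.05], [2.5, 0.05]])
y = np.array([0.71, 0.80, 0.88, 0.94])

w = rbf_fit(X, y)
pred = rbf_predict(X, X, w)   # reproduces the training data almost exactly
```

In practice the number of centers, `gamma`, and the regularization would be tuned against held-out experimental data rather than fit exactly as here.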

  17. Study of the efficiency of transplantation of human neural stem cells to rats with spinal trauma: the use of functional load tests and BBB test.

    Science.gov (United States)

    Lebedev, S V; Karasev, A V; Chekhonin, V P; Savchenko, E A; Viktorov, I V; Chelyshev, Yu A; Shaimardanova, G F

    2010-09-01

    Human ensheathing neural stem cells of the olfactory epithelium were transplanted into adult male rats immediately after contusion trauma of the spinal cord at the T9 level, rostrally and caudally to the injury. Voluntary movements (by a 21-point BBB scale), rota-rod performance, and walking along a narrowing beam were monitored weekly over 60 days. In rats receiving cell transplantation, the mean BBB score significantly increased by 11% by the end of the experiment. The mean parameters of the load tests also regularly surpassed the corresponding parameters in controls. The efficiency of transplantation (percent of animals with motor function recovery parameters surpassing the corresponding mean values in the control groups) was 62% for voluntary movements, 37% for the rota-rod test, and 32% for the narrowing beam test. Morphometry revealed considerable shrinking of the zone of traumatic damage in the spinal cord and activation of posttraumatic remyelination in animals receiving transplantation of human neural stem cells.

  18. Design of a Closed-Loop, Bidirectional Brain Machine Interface System With Energy Efficient Neural Feature Extraction and PID Control.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan

    2017-08-01

    This paper presents a bidirectional brain machine interface (BMI) microsystem designed for closed-loop neuroscience research, especially experiments in freely behaving animals. The system-on-chip (SoC) consists of 16-channel neural recording front-ends, neural feature extraction units, 16-channel programmable neural stimulator back-ends, in-channel programmable closed-loop controllers, global analog-digital converters (ADC), and peripheral circuits. The proposed neural feature extraction units include 1) an ultra low-power neural energy extraction unit enabling 64-step natural logarithmic domain frequency tuning, and 2) a current-mode action potential (AP) detection unit with a time-amplitude window discriminator. A programmable proportional-integral-derivative (PID) controller has been integrated in each channel, enabling a variety of closed-loop operations. The implemented ADCs include a 10-bit voltage-mode successive approximation register (SAR) ADC for the digitization of the neural feature outputs and/or local field potential (LFP) outputs, and an 8-bit current-mode SAR ADC for the digitization of the action potential outputs. The multi-mode stimulator can be programmed to perform monopolar or bipolar, symmetrical or asymmetrical charge-balanced stimulation with a maximum current of 4 mA in an arbitrary channel configuration. The chip has been fabricated in 0.18-μm CMOS technology, occupying a silicon area of 3.7 mm². The chip dissipates 56 μW/ch on average. A general-purpose low-power microcontroller with a Bluetooth module is integrated in the system to provide a wireless link and SoC configuration. The methods, circuit techniques and system topology proposed in this work can be used in a wide range of relevant neurophysiology research, especially closed-loop BMI experiments.

  19. Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Cowell, Andrew J [Kennewick, WA; Gregory, Michelle L [Richland, WA; Baddeley, Robert L [Richland, WA; Paulson, Patrick R [Pasco, WA; Tratz, Stephen C [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2012-03-20

    Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture are described according to some aspects. In one aspect, a hypothesis analysis method includes providing a hypothesis, providing an indicator which at least one of supports and refutes the hypothesis, using the indicator, associating evidence with the hypothesis, weighting the association of the evidence with the hypothesis, and using the weighting, providing information regarding the accuracy of the hypothesis.
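The described workflow (hypothesis → indicators → evidence association → weighting → accuracy estimate) can be mocked up as a simple weighted aggregation. The log-odds scheme and all weights below are an illustrative assumption, not the patented method.

```python
from math import exp

def hypothesis_score(evidence):
    """Aggregate (weight, supports) pairs into a probability-like score.

    supports = +1 means the indicator backs the hypothesis, -1 refutes it;
    weight in [0, 1] reflects how strongly the evidence is associated.
    """
    log_odds = sum(weight * supports for weight, supports in evidence)
    return 1.0 / (1.0 + exp(-log_odds))   # logistic squashing to (0, 1)

evidence = [(0.9, +1), (0.4, +1), (0.7, -1)]  # two supporting, one refuting
score = hypothesis_score(evidence)            # > 0.5 indicates net support
```

A real system would learn or elicit the weights rather than hand-assign them, but the scoring step reduces to this kind of weighted aggregation.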

  20. Highly efficient methods to obtain homogeneous dorsal neural progenitor cells from human and mouse embryonic stem cells and induced pluripotent stem cells.

    Science.gov (United States)

    Zhang, Meixiang; Ngo, Justine; Pirozzi, Filomena; Sun, Ying-Pu; Wynshaw-Boris, Anthony

    2018-03-15

    Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) have been widely used to generate cellular models harboring specific disease-related genotypes. Of particular importance are ESC and iPSC applications capable of producing dorsal telencephalic neural progenitor cells (NPCs) that are representative of the cerebral cortex and overcome the challenges of maintaining a homogeneous population of cortical progenitors over several passages in vitro. While previous studies were able to derive NPCs from pluripotent cell types, the fraction of dorsal NPCs in this population is small and decreases over several passages. Here, we present three protocols that are highly efficient in differentiating mouse and human ESCs, as well as human iPSCs, into a homogeneous and stable population of dorsal NPCs. These protocols will be useful for modeling cerebral cortical neurological and neurodegenerative disorders in both mouse and human as well as for high-throughput drug screening for therapeutic development. We optimized three different strategies for generating dorsal telencephalic NPCs from mouse and human pluripotent cell types through single or double inhibition of bone morphogenetic protein (BMP) and/or SMAD pathways. Mouse and human pluripotent cells were aggregated to form embryoid bodies in suspension and were treated with dorsomorphin alone (BMP inhibition) or combined with SB431542 (double BMP/SMAD inhibition) during neural induction. Neural rosettes were then selected from plated embryoid bodies to purify the population of dorsal NPCs. We tested the expression of key dorsal NPC markers as well as nonectodermal markers to confirm the efficiency of our three methods in comparison to published and commercial protocols. Single and double inhibition of BMP and/or SMAD during neural induction led to the efficient differentiation of dorsal NPCs, based on the high percentage of PAX6-positive cells and the NPC gene expression profile. 
There were no statistically

  1. Improving efficiency of two-type maximum power point tracking methods of tip-speed ratio and optimum torque in wind turbine system using a quantum neural network

    International Nuclear Information System (INIS)

    Ganjefar, Soheil; Ghassemi, Ali Akbar; Ahmadi, Mohamad Mehdi

    2014-01-01

    In this paper, a quantum neural network (QNN) is used as the controller in adaptive control structures to improve the efficiency of maximum power point tracking (MPPT) methods in a wind turbine system. For this purpose, direct and indirect adaptive control structures equipped with a QNN are used in the tip-speed ratio (TSR) and optimum torque (OT) MPPT methods. The proposed control schemes are evaluated through a battery-charging windmill system equipped with a PMSG (permanent magnet synchronous generator) at a random wind speed to demonstrate their effectiveness compared to a PID controller and a conventional neural network controller (CNNC). - Highlights: • Using a new control method to harvest the maximum power from a wind energy system. • Using an adaptive control scheme based on a quantum neural network (QNN). • Improving the MPPT-TSR method by a direct adaptive control scheme based on QNN. • Improving the MPPT-OT method by an indirect adaptive control scheme based on QNN. • Using a windmill system based on PMSG to evaluate the proposed control schemes
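Independent of the QNN controller, the tip-speed-ratio method itself is easy to sketch: measure wind speed v and rotor speed ω, form λ = ωR/v, and drive ω toward the λ that maximizes the power coefficient Cp(λ). The optimal λ, gains, and rotor radius below are illustrative assumptions, and a plain proportional law stands in for the paper's adaptive QNN controller.

```python
# Tip-speed-ratio MPPT with a proportional controller standing in for the QNN.
R = 1.5            # rotor radius (m), illustrative
LAMBDA_OPT = 7.0   # tip-speed ratio that maximizes Cp for this toy turbine
K_P = 2.0          # proportional gain on the lambda error
DT = 0.05          # control step (s)

def mppt_step(omega, wind_speed):
    """One control update: steer the tip-speed ratio toward its optimum."""
    lam = omega * R / wind_speed          # current tip-speed ratio
    d_omega = K_P * (LAMBDA_OPT - lam)    # speed up or brake the rotor
    return omega + d_omega * DT

omega, wind = 10.0, 6.0                    # initial rotor speed (rad/s), wind (m/s)
for _ in range(200):
    omega = mppt_step(omega, wind)
lam_final = omega * R / wind               # converges to LAMBDA_OPT
```

The adaptive QNN schemes in the paper replace the fixed gain with a controller that adapts online, which matters once the wind speed and turbine dynamics are no longer constant.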

  2. Prediction of geomagnetic storm using neural networks: Comparison of the efficiency of the Satellite and ground-based input parameters

    International Nuclear Information System (INIS)

    Stepanova, Marina; Antonova, Elizavieta; Munos-Uribe, F A; Gordo, S L Gomez; Torres-Sanchez, M V

    2008-01-01

    Different kinds of neural networks have established themselves as an effective tool in the prediction of different geomagnetic indices, including the Dst, the most important constituent for determining the impact of space weather on human life. Feed-forward networks with one hidden layer are used to forecast the Dst variation, using separately the solar wind parameters, the polar cap index, and the auroral electrojet index as input parameters. It was found that in all three cases the storm-time intervals were predicted much more precisely than quiet-time intervals. The majority of cross-correlation coefficients between predicted and observed Dst of strong geomagnetic storms lie between 0.8 and 0.9. Changes in the neural network architecture, including the number of nodes in the input and hidden layers and the transfer functions between them, led to improvements in network performance of up to 10%.
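The setup above — a one-hidden-layer feed-forward network mapping recent input history to the next index value, scored by the correlation between prediction and observation — can be sketched on synthetic data. For brevity the hidden layer here uses fixed random weights with a least-squares-trained readout (an extreme-learning-machine shortcut, not the paper's training scheme), and a sine series merely stands in for the Dst record.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 2000))   # stand-in for the Dst index

LAGS, HIDDEN = 4, 16
# Input: the last LAGS values; target: the next value (one-step-ahead forecast).
X = np.stack([series[i:i + LAGS] for i in range(len(series) - LAGS)])
y = series[LAGS:]

W = rng.normal(size=(LAGS, HIDDEN))                 # fixed random hidden layer
H = np.tanh(X @ W)                                  # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)        # least-squares readout

pred = H @ beta
r = np.corrcoef(pred, y)[0, 1]                      # cross-correlation skill score
```

Backpropagation-trained weights, separate train/test intervals, and real solar-wind inputs would replace the shortcuts here, but the skill metric — correlation between predicted and observed series — is computed the same way.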

  3. Efficient and Fast Differentiation of Human Neural Stem Cells from Human Embryonic Stem Cells for Cell Therapy

    Directory of Open Access Journals (Sweden)

    Xinxin Han

    2017-01-01

    Full Text Available Stem cell-based therapies have been used for repairing damaged brain tissue and helping functional recovery after brain injury. Aberrant neurogenesis is associated with brain injury, and multipotent neural stem cells from human embryonic stem (hES) cells hold great promise for cell replacement therapies. Optimized protocols for neural differentiation are necessary to produce functional human neural stem cells (hNSCs) for cell therapy. However, qualified procedures are scarce, and the detailed features of hNSCs derived from hES cells remain unclear. In this study, we developed a method to obtain hNSCs from hES cells by which we could harvest abundant hNSCs in a relatively short time. We then examined the expression of pluripotent and multipotent marker genes through immunostaining and confirmed the differentiation potential of the hNSCs. Furthermore, we analyzed the mitotic activity of these hNSCs. In this report, we provide comprehensive features of hNSCs and describe how to obtain more high-quality hNSCs from hES cells, which may help to accelerate NSC-based therapies in brain injury treatment.

  4. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  5. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  6. Vitamin E isomer δ-tocopherol enhances the efficiency of neural stem cell differentiation via L-type calcium channel.

    Science.gov (United States)

    Deng, Sihao; Hou, Guoqiang; Xue, Zhiqin; Zhang, Longmei; Zhou, Yuye; Liu, Chao; Liu, Yanqing; Li, Zhiyuan

    2015-01-12

    The effects of the vitamin E isomer δ-tocopherol on neural stem cell (NSC) differentiation have not been investigated until now. Here we investigated the effects of δ-tocopherol on NSC neural differentiation and maturation, and its possible mechanisms. Neonatal rat NSCs were grown in suspended neurosphere cultures, and were identified by their expression of nestin protein and their capacity for self-renewal. Treatment with a low concentration of δ-tocopherol induced a significant increase in the percentage of β-III-tubulin-positive cells. δ-Tocopherol also stimulated morphological maturation of neurons in culture. We further observed that δ-tocopherol stimulation increased the expression of voltage-dependent Ca(2+) channels. Moreover, the L-type-specific Ca(2+) channel blocker verapamil reduced the percentage of differentiated neurons after δ-tocopherol treatment, and blocked the effects of δ-tocopherol on NSC differentiation into neurons. Together, our study demonstrates that δ-tocopherol may act through elevation of L-type calcium channel activity to increase neuronal differentiation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Revisiting the Dutch hypothesis

    NARCIS (Netherlands)

    Postma, Dirkje S.; Weiss, Scott T.; van den Berge, Maarten; Kerstjens, Huib A. M.; Koppelman, Gerard H.

    The Dutch hypothesis was first articulated in 1961, when many novel and advanced scientific techniques were not available, such as genomics techniques for pinpointing genes, gene expression, lipid and protein profiles, and the microbiome. In addition, computed tomographic scans and advanced analysis

  8. The Lehman Sisters Hypothesis

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2014-01-01

    This article explores the Lehman Sisters Hypothesis. It reviews empirical literature about gender differences in behavioral, experimental, and neuro-economics as well as in other fields of behavioral research. It discusses gender differences along three dimensions of

  9. Fuel cell-based CHP system modelling using Artificial Neural Networks aimed at developing techno-economic efficiency maximization control systems

    International Nuclear Information System (INIS)

    Asensio, F.J.; San Martín, J.I.; Zamora, I.; Garcia-Villalobos, J.

    2017-01-01

    This paper focuses on modelling the performance of a Polymer Electrolyte Membrane Fuel Cell (PEMFC)-based cogeneration system in order to integrate it into hybrid and/or grid-connected systems and enable the optimization of the techno-economic efficiency of the system in which it is integrated. To this end, experimental tests on a PEMFC-based cogeneration system of 600 W of electrical power were performed to train an Artificial Neural Network (ANN). Once the ANN was trained, it was able to emulate real operating conditions, such as the cooling water outlet temperature and the hydrogen consumption of the PEMFC, depending on several variables, such as the electric power demanded, the temperature of the inlet water flow to the cooling circuit, the cooling water flow and the heat demanded from the CHP system. After analysing the results, it is concluded that the presented model reproduces the performance of the tested PEMFC with sufficient accuracy and precision, thus enabling the use of the model and the ANN learning methodology to model other PEMFC-based cogeneration systems and integrate them into techno-economic efficiency optimization control systems. - Highlights: • The effect of energy demand variation on the PEMFC's efficiency is predicted. • The model relies on experimental data obtained from a 600 W PEMFC. • It provides the temperature and the hydrogen consumption with good accuracy. • The range in which the global energy efficiency could be improved is provided.

  10. A CMOS power-efficient low-noise current-mode front-end amplifier for neural signal recording.

    Science.gov (United States)

    Wu, Chung-Yu; Chen, Wei-Ming; Kuo, Liang-Ting

    2013-04-01

    In this paper, a new current-mode front-end amplifier (CMFEA) for neural signal recording systems is proposed. In the proposed CMFEA, a current-mode preamplifier with an active feedback loop operated at very low frequency is designed as the first gain stage to bypass any dc offset current generated by the electrode-tissue interface and to achieve a low high-pass cutoff frequency below 0.5 Hz. No reset signal or ultra-large pseudo resistor is required. The current-mode preamplifier has a low dc operating current to enhance low-noise performance and decrease power consumption. A programmable current gain stage is adopted to provide adjustable gain for adaptive signal scaling. A following current-mode filter is designed to adjust the low-pass cutoff frequency for different neural signals. The proposed CMFEA is designed and fabricated in 0.18-μm CMOS technology and the area of the core circuit is 0.076 mm². The measured high-pass cutoff frequency is as low as 0.3 Hz and the low-pass cutoff frequency is adjustable from 1 kHz to 10 kHz. The measured maximum current gain is 55.9 dB. The measured input-referred current noise density is 153 fA/√Hz, and the power consumption is 13 μW at a 1-V power supply. The fabricated CMFEA has been successfully applied to an animal test recording the seizure ECoG of Long-Evans rats.

  11. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based for example on complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
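
    The core idea, replacing an expensive forward model with a fast surrogate whose modeling error is quantified and folded into the likelihood, can be sketched with a toy 1-D example. All models and numbers below are invented for illustration; the paper's forward is a 2-D full-waveform traveltime simulator, not the linear stand-in used here.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_accurate(m):        # stand-in for the expensive forward model
    return 2.0 * m + 0.1 * np.sin(5 * m)

def forward_surrogate(m):       # stand-in for the trained neural network
    return 2.0 * m              # fast but slightly biased

# Quantify the surrogate's modeling error on a training set
m_train = np.linspace(0, 2, 200)
err = forward_accurate(m_train) - forward_surrogate(m_train)
sigma_model = err.std()

m_true = 1.3
sigma_data = 0.05
d_obs = forward_accurate(m_true) + rng.normal(0, sigma_data)
sigma_tot2 = sigma_data**2 + sigma_model**2   # combined error variance

def log_like(m):
    r = d_obs - forward_surrogate(m)
    return -0.5 * r**2 / sigma_tot2

# Metropolis random walk with a uniform prior on [0, 2]
m, chain = 1.0, []
for _ in range(20000):
    m_prop = m + rng.normal(0, 0.1)
    if 0.0 <= m_prop <= 2.0:
        if rng.uniform() < np.exp(min(0.0, log_like(m_prop) - log_like(m))):
            m = m_prop
    chain.append(m)
post = np.array(chain[2000:])
print(post.mean())
```

    Because every likelihood evaluation calls only the cheap surrogate, the chain runs orders of magnitude faster than with the accurate forward, while the inflated variance keeps the posterior honest about the surrogate's bias.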

  12. From neuro-pigments to neural efficiency: The relationship between retinal carotenoids and behavioral and neuroelectric indices of cognitive control in childhood.

    Science.gov (United States)

    Walk, Anne M; Khan, Naiman A; Barnett, Sasha M; Raine, Lauren B; Kramer, Arthur F; Cohen, Neal J; Moulton, Christopher J; Renzi-Hammond, Lisa M; Hammond, Billy R; Hillman, Charles H

    2017-08-01

    Lutein and zeaxanthin are plant pigments known to preferentially accumulate in neural tissue. Macular Pigment Optical Density (MPOD), a non-invasive measure of retinal carotenoids and surrogate measure of brain carotenoid concentration, has been associated with disease prevention and cognitive health. Superior MPOD status in later adulthood has been shown to provide neuroprotective effects on cognition. Given that childhood is a critical period for carotenoid accumulation in the brain, it is likely that this beneficial impact would be evident during development, though the relationship has not been directly investigated. The present study investigated the relationship between MPOD and the behavioral and neuroelectric indices elicited during a cognitive control task in preadolescent children. Forty-nine participants completed a modified flanker task while event-related potentials (ERPs) were recorded to assess the P3 component of the ERP waveform. MPOD was associated with both behavioral performance and P3 amplitude, such that children with higher MPOD had more accurate performance and lower P3 amplitudes. These relationships were more pronounced for trials requiring greater amounts of cognitive control. These results indicate that children with higher MPOD may respond to cognitive tasks more efficiently, maintaining high performance while displaying neural indices indicative of lower cognitive load. These findings provide novel support for the neuroprotective influence of retinal carotenoids during preadolescence. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Safe and efficient method for cryopreservation of human induced pluripotent stem cell-derived neural stem and progenitor cells by a programmed freezer with a magnetic field.

    Science.gov (United States)

    Nishiyama, Yuichiro; Iwanami, Akio; Kohyama, Jun; Itakura, Go; Kawabata, Soya; Sugai, Keiko; Nishimura, Soraya; Kashiwagi, Rei; Yasutake, Kaori; Isoda, Miho; Matsumoto, Morio; Nakamura, Masaya; Okano, Hideyuki

    2016-06-01

    Stem cells represent a potential cellular resource in the development of regenerative medicine approaches to the treatment of pathologies in which specific cells are degenerated or damaged by genetic abnormality, disease, or injury. Securing sufficient supplies of cells suited to the demands of cell transplantation, however, remains challenging, and the establishment of safe and efficient cell banking procedures is an important goal. Cryopreservation allows the storage of stem cells for prolonged time periods while maintaining them in adequate condition for use in clinical settings. Conventional cryopreservation systems include slow-freezing and vitrification; both have advantages and disadvantages in terms of cell viability and/or scalability. In the present study, we developed an advanced slow-freezing technique using a programmed freezer with a magnetic field, called the Cells Alive System (CAS), and examined its effectiveness on human induced pluripotent stem cell-derived neural stem/progenitor cells (hiPSC-NS/PCs). This system significantly increased cell viability after thawing and had less impact on cellular proliferation and differentiation. We further found that frozen-thawed hiPSC-NS/PCs were comparable with non-frozen ones at the transcriptome level. Given these findings, we suggest that the CAS is useful for hiPSC-NS/PC banking for clinical uses involving neural disorders and may open new avenues for future regenerative medicine. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Bayesian Hypothesis Testing

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: the value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor; the value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter; this allows a likelihood ratio to be computed which is the least favorable to H0.

  15. The Drift Burst Hypothesis

    OpenAIRE

    Christensen, Kim; Oomen, Roel; Renò, Roberto

    2016-01-01

    The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms such as feedback trading. At a theoretical level, we show how to build drift bursts into the...

  16. Hypothesis in research

    Directory of Open Access Journals (Sweden)

    Eudaldo Enrique Espinoza Freire

    2018-01-01

    Full Text Available This work aims to provide material covering the fundamental contents that enable the university professor to formulate a hypothesis for the development of an investigation, taking into account the problem to be solved. For its elaboration, a search of information in primary documents was carried out, such as degree theses and reports of research results, selected on the basis of their relevance to the analyzed subject, currency and reliability; secondary documents, such as scientific articles published in journals of recognized prestige, were selected with the same criteria as the primary documents. The work presents an updated conceptualization of the hypothesis, its characterization and an analysis of the structure of the hypothesis, in which the determination of the variables is examined in depth. The involvement of the university professor in the teaching-research process currently faces some difficulties, manifested, among other aspects, in an unstable balance between teaching and research, which leads to a separation between them.

  17. Structural Reliability: An Assessment Using a New and Efficient Two-Phase Method Based on Artificial Neural Network and a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Naser Kazemi Elaki

    2016-06-01

    Full Text Available In this research, a two-phase algorithm based on an artificial neural network (ANN) and a harmony search (HS) algorithm has been developed to assess the reliability of structures with implicit limit state functions. The proposed method involves generating training datasets by finite element analysis, establishing an ANN model and using the trained ANN in the reliability assessment process as an analyzer for structures, and finally estimating the reliability index and failure probability using the HS algorithm, without requiring the explicit form of the limit state function. The proposed algorithm is investigated here, and its accuracy and efficiency are demonstrated using several numerical examples. The results obtained show that the proposed algorithm gives an appropriate estimate for assessing the reliability of structures.
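
    The second phase, a harmony search over a limit state, can be sketched as follows. This is an illustrative toy setup, not the paper's implementation: the limit state g(u) is a simple linear function standing in for the trained ANN surrogate, and the reliability index β is recovered as the distance from the origin to the limit-state surface in standard normal space, enforced by a quadratic penalty.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(u):
    g = 3.0 - u[0] - u[1]                   # limit state (ANN surrogate stand-in)
    return np.linalg.norm(u) + 1e3 * g**2   # penalize being off the surface

# Harmony search parameters: memory size, dimension, memory consideration
# rate, pitch adjustment rate, bandwidth
hms, dim, hmcr, par, bw = 30, 2, 0.9, 0.3, 0.05
memory = rng.uniform(-5, 5, (hms, dim))
costs = np.apply_along_axis(objective, 1, memory)

for _ in range(20000):
    new = np.empty(dim)
    for j in range(dim):
        if rng.uniform() < hmcr:                 # pick from harmony memory
            new[j] = memory[rng.integers(hms), j]
            if rng.uniform() < par:              # pitch adjustment
                new[j] += rng.uniform(-bw, bw)
        else:                                    # random consideration
            new[j] = rng.uniform(-5, 5)
    c = objective(new)
    worst = costs.argmax()
    if c < costs[worst]:                         # replace the worst harmony
        memory[worst], costs[worst] = new, c

beta = np.linalg.norm(memory[costs.argmin()])
print(beta)
```

    For this linear limit state the most probable failure point is at u = (1.5, 1.5), so the search should settle near β ≈ 2.12; the corresponding failure probability would follow from Φ(-β).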

  18. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency

    Directory of Open Access Journals (Sweden)

    Sérgio Luiz Gomes Antunes

    2012-03-01

    Full Text Available Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  19. Histopathological examination of nerve samples from pure neural leprosy patients: obtaining maximum information to improve diagnostic efficiency.

    Science.gov (United States)

    Antunes, Sérgio Luiz Gomes; Chimelli, Leila; Jardim, Márcia Rodrigues; Vital, Robson Teixeira; Nery, José Augusto da Costa; Corte-Real, Suzana; Hacker, Mariana Andréa Vilas Boas; Sarno, Euzenir Nunes

    2012-03-01

    Nerve biopsy examination is an important auxiliary procedure for diagnosing pure neural leprosy (PNL). When acid-fast bacilli (AFB) are not detected in the nerve sample, the value of other nonspecific histological alterations should be considered along with pertinent clinical, electroneuromyographical and laboratory data (the detection of Mycobacterium leprae DNA with polymerase chain reaction and the detection of serum anti-phenolic glycolipid 1 antibodies) to support a possible or probable PNL diagnosis. Three hundred forty nerve samples [144 from PNL patients and 196 from patients with non-leprosy peripheral neuropathies (NLN)] were examined. Both AFB-negative and AFB-positive PNL samples had more frequent histopathological alterations (epithelioid granulomas, mononuclear infiltrates, fibrosis, perineurial and subperineurial oedema and decreased numbers of myelinated fibres) than the NLN group. Multivariate analysis revealed that independently, mononuclear infiltrate and perineurial fibrosis were more common in the PNL group and were able to correctly classify AFB-negative PNL samples. These results indicate that even in the absence of AFB, these histopathological nerve alterations may justify a PNL diagnosis when observed in conjunction with pertinent clinical, epidemiological and laboratory data.

  20. Eating breakfast enhances the efficiency of neural networks engaged during mental arithmetic in school-aged children.

    Science.gov (United States)

    Pivik, R T; Tennal, Kevin B; Chapman, Stephen D; Gu, Yuyuan

    2012-06-25

    To determine the influence of a morning meal on complex mental functions in children (8-11 y), time-frequency analyses were applied to electroencephalographic (EEG) activity recorded while children solved simple addition problems after an overnight fast and again after having either eaten or skipped breakfast. Power of low-frequency EEG activity [2 Hertz (Hz) bands in the 2-12 Hz range] was determined from recordings over frontal and parietal brain regions associated with mathematical thinking during mental calculation of correctly answered problems. Analyses were adjusted for background variables known to influence or reflect the development of mathematical skills, i.e., age and measures of math competence and math fluency. Relative to fed children, those who continued to fast showed greater power increases in upper theta (6-8 Hz) and both alpha bands (8-10 Hz; 10-12 Hz) across sites. Increased theta suggests greater demands on working memory. Increased alpha may facilitate task-essential activity by suppressing non-task-essential activity. Fasting children also had greater delta (2-4 Hz) and greater lower-theta (4-6 Hz) power in left frontal recordings, indicating a region-specific emphasis on both working memory for mental calculation (theta) and activation of processes that suppress interfering activity (delta). Fed children also showed a significant increase in correct responses, while children who continued to fast did not. Taken together, the findings suggest that neural network activity involved in processing numerical information is functionally enhanced and performance is improved in children who have eaten breakfast, whereas greater mental effort is required for this mathematical thinking in children who skip breakfast. Copyright © 2012 Elsevier Inc. All rights reserved.
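
    The band-power measure described above (power in 2 Hz bands over the 2-12 Hz range) can be illustrated with a simple periodogram on synthetic data. The sampling rate, epoch length, and signal are invented for the sketch; the study's actual pipeline used time-frequency analysis of real EEG recordings.

```python
import numpy as np

fs = 256                         # sampling rate [Hz], assumed
t = np.arange(0, 4, 1 / fs)      # one 4-second epoch
rng = np.random.default_rng(3)
# Synthetic "EEG": a strong 10.5 Hz (upper alpha) component plus noise
x = 2.0 * np.sin(2 * np.pi * 10.5 * t) + rng.normal(0, 0.5, t.size)

freqs = np.fft.rfftfreq(x.size, 1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)   # one-sided periodogram

# Sum power in the 2 Hz bands used in the study: 2-4, 4-6, ..., 10-12 Hz
band_power = {}
for lo in range(2, 12, 2):
    sel = (freqs >= lo) & (freqs < lo + 2)
    band_power[(lo, lo + 2)] = psd[sel].sum()

dominant = max(band_power, key=band_power.get)
print(dominant)   # the 10-12 Hz upper alpha band dominates here
```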

  1. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate improved thermal comfort and building energy efficiency in accommodation buildings during the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required for restoring the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test results showed that the two ANN models achieved acceptable prediction accuracy when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which respectively employed a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with the two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.

  2. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
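
    The delay-estimation idea behind the estimator can be sketched with a plain LMS-trained adaptive FIR filter, a toy software analogue of the switched-current hardware described above. Signal lengths, tap count, and noise levels are assumed: the filter learns which tap best predicts the downstream sensor from the upstream one, and the dominant tap index gives the transit delay, from which flow follows.

```python
import numpy as np

rng = np.random.default_rng(4)
n, taps, true_delay = 5000, 16, 5
x = rng.normal(0, 1, n)                    # upstream temperature fluctuations
d = np.roll(x, true_delay) + rng.normal(0, 0.1, n)   # delayed downstream signal
d[:true_delay] = 0.0                       # discard wrap-around samples

w = np.zeros(taps)
mu = 0.01                                  # LMS step size (assumed)
for k in range(taps, n):
    u = x[k - taps + 1:k + 1][::-1]        # most recent sample first
    e = d[k] - w @ u                       # prediction error
    w += 2 * mu * e * u                    # LMS weight update

est_delay = int(np.argmax(np.abs(w)))
print(est_delay)   # 5, matching true_delay
```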

  3. [Dilemma of the null hypothesis in experimental tests of ecological hypotheses].

    Science.gov (United States)

    Li, Ji

    2016-06-01

    Experimental testing is one of the major methods for testing ecological hypotheses, though there are many arguments about it due to the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model from Platt (1964) and stated that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P)'s non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis could be relieved via reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in ecological hypotheses. Hence, findings and conclusions about methodological studies and experimental tests based on NHST are not always logically reliable.
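
    The two-tailed test mentioned among the remedies can be illustrated with a minimal p-value computation for a z statistic. This is a generic textbook formula, not taken from the paper:

```python
import math

def two_tailed_p(z: float) -> float:
    """P(|Z| >= |z|) for a standard normal Z, i.e. the two-tailed p-value."""
    return math.erfc(abs(z) / math.sqrt(2))

# The familiar 5% threshold corresponds to |z| = 1.96
print(round(two_tailed_p(1.96), 3))   # 0.05
```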

  4. The Bergschrund Hypothesis Revisited

    Science.gov (United States)

    Sanders, J. W.; Cuffey, K. M.; MacGregor, K. R.

    2009-12-01

    After Willard Johnson descended into the Lyell Glacier bergschrund nearly 140 years ago, he proposed that the presence of the bergschrund modulated daily air temperature fluctuations and enhanced freeze-thaw processes. He posited that glaciers, through their ability to birth bergschrunds, are thus able to induce rapid cirque headwall retreat. In subsequent years, many researchers challenged the bergschrund hypothesis on grounds that freeze-thaw events did not occur at depth in bergschrunds. We propose a modified version of Johnson’s original hypothesis: that bergschrunds maintain subfreezing temperatures at values that encourage rock fracture via ice lensing because they act as a cold air trap in areas that would otherwise be held near zero by temperate glacial ice. In support of this claim we investigated three sections of the bergschrund at the West Washmawapta Glacier, British Columbia, Canada, which sits in an east-facing cirque. During our bergschrund reconnaissance we installed temperature sensors at multiple elevations, light sensors at depth in 2 of the 3 locations and painted two 1 m2 sections of the headwall. We first emphasize bergschrunds are not wanting for ice: verglas covers significant fractions of the headwall and icicles dangle from the base of bödens or overhanging rocks. If temperature, rather than water availability, is the limiting factor governing ice-lensing rates, our temperature records demonstrate that the bergschrund provides a suitable environment for considerable rock fracture. At the three sites (north, west, and south walls), the average temperature at depth from 9/3/2006 to 8/6/2007 was -3.6, -3.6, and -2.0 °C, respectively. During spring, when we observed vast amounts of snow melt trickle in to the bergschrund, temperatures averaged -3.7, -3.8, and -2.2 °C, respectively. Winter temperatures are even lower: -8.5, -7.3, and -2.4 °C, respectively. Values during the following year were similar. During the fall, diurnal

  5. Understanding the Implications of Neural Population Activity on Behavior

    Science.gov (United States)

    Briguglio, John

    Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in Neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on simulated retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex.
This work establishes and tests

  6. An Efficient Approach for Lipase-Catalyzed Synthesis of Retinyl Laurate Nutraceutical by Combining Ultrasound Assistance and Artificial Neural Network Optimization

    Directory of Open Access Journals (Sweden)

    Shang-Ming Huang

    2017-11-01

    Full Text Available Although retinol is an important nutrient, it is highly sensitive to oxidation. At present, some ester forms of retinol are generally used in nutritional supplements because of their stability and bioavailability. However, such esters are commonly synthesized by chemical procedures which are harmful to the environment. Thus, this study utilized a green method using lipase as a catalyst with sonication assistance to produce a retinol derivative named retinyl laurate. Moreover, the process was optimized by an artificial neural network (ANN). First, a three-level-four-factor central composite design (CCD) was employed to design 27 experiments, in which the highest relative conversion was 82.64%. Further, the optimal architecture of the CCD-employing ANN was developed, including the Levenberg-Marquardt learning algorithm, the transfer function (hyperbolic tangent), iterations (10,000), and the nodes of the hidden layer (6). The best performance of the ANN was evaluated by the root mean squared error (RMSE) and the coefficient of determination (R²) from predicted and observed data, which displayed a good data-fitting property. Finally, the process performed with the optimal parameters actually obtained a relative conversion of 88.31% without long-term reactions, and the lipase showed great reusability for biosynthesis. Thus, this study utilizes green technology to efficiently produce retinyl laurate, and the bioprocess is well established by ANN-mediated modeling and optimization.

  7. An Efficient Approach for Lipase-Catalyzed Synthesis of Retinyl Laurate Nutraceutical by Combining Ultrasound Assistance and Artificial Neural Network Optimization.

    Science.gov (United States)

    Huang, Shang-Ming; Li, Hsin-Ju; Liu, Yung-Chuan; Kuo, Chia-Hung; Shieh, Chwen-Jen

    2017-11-15

    Although retinol is an important nutrient, it is highly sensitive to oxidation. At present, some ester forms of retinol are generally used in nutritional supplements because of their stability and bioavailability. However, such esters are commonly synthesized by chemical procedures which are harmful to the environment. Thus, this study utilized a green method using lipase as a catalyst with sonication assistance to produce a retinol derivative named retinyl laurate. Moreover, the process was optimized by an artificial neural network (ANN). First, a three-level-four-factor central composite design (CCD) was employed to design 27 experiments, in which the highest relative conversion was 82.64%. Further, the optimal architecture of the CCD-employing ANN was developed, including the Levenberg-Marquardt learning algorithm, the transfer function (hyperbolic tangent), iterations (10,000), and the nodes of the hidden layer (6). The best performance of the ANN was evaluated by the root mean squared error (RMSE) and the coefficient of determination (R²) from predicted and observed data, which displayed a good data-fitting property. Finally, the process performed with the optimal parameters actually obtained a relative conversion of 88.31% without long-term reactions, and the lipase showed great reusability for biosynthesis. Thus, this study utilizes green technology to efficiently produce retinyl laurate, and the bioprocess is well established by ANN-mediated modeling and optimization.
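
    The RMSE and R² criteria used to evaluate the ANN can be written out explicitly. The observed/predicted conversion values below are invented for illustration, not taken from the study:

```python
import numpy as np

obs = np.array([62.1, 70.4, 75.9, 80.2, 82.6])    # observed conversions [%]
pred = np.array([63.0, 69.8, 76.5, 79.6, 82.9])   # model predictions [%]

# Root mean squared error: average magnitude of prediction error
rmse = np.sqrt(np.mean((obs - pred) ** 2))

# Coefficient of determination: fraction of variance explained by the model
ss_res = np.sum((obs - pred) ** 2)
ss_tot = np.sum((obs - obs.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(round(rmse, 3), round(r2, 3))   # 0.629 0.993
```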

  8. Is the Aluminum Hypothesis Dead?

    Science.gov (United States)

    2014-01-01

    The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed with concern by some of the public. This review article discusses reasons that mainstream science has largely abandoned the Aluminum Hypothesis and explores a possible reason for some in the general public continuing to view aluminum with mistrust. PMID:24806729

  9. "The seven sins" of the Hebbian synapse: can the hypothesis of synaptic plasticity explain long-term memory consolidation?

    Science.gov (United States)

    Arshavsky, Yuri I

    2006-10-01

    Memorizing new facts and events means that entering information produces specific physical changes within the brain. According to the commonly accepted view, traces of memory are stored through the structural modifications of synaptic connections, which result in changes of synaptic efficiency and, therefore, in formations of new patterns of neural activity (the hypothesis of synaptic plasticity). Most of the current knowledge on learning and initial stages of memory consolidation ("synaptic consolidation") is based on this hypothesis. However, the hypothesis of synaptic plasticity faces a number of conceptual and experimental difficulties when it deals with potentially permanent consolidation of declarative memory ("system consolidation"). These difficulties are rooted in the major intrinsic self-contradiction of the hypothesis: stable declarative memory is unlikely to be based on such a non-stable foundation as synaptic plasticity. Memory that can last throughout an entire lifespan should be "etched in stone." The only "stone-like" molecules within living cells are DNA molecules. Therefore, I advocate an alternative, genomic hypothesis of memory, which suggests that acquired information is persistently stored within individual neurons through modifications of DNA, and that these modifications serve as the carriers of elementary memory traces.

  10. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural...... generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over...

  11. Active Neural Localization

    OpenAIRE

    Chaplot, Devendra Singh; Parisotto, Emilio; Salakhutdinov, Ruslan

    2018-01-01

    Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of tradition...

  12. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    , merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely recurrent neural network with rprop update algorithm and is applied for wave forecasting. Measured ocean waves off...

  13. Neural adaptations to electrical stimulation strength training

    NARCIS (Netherlands)

    Hortobagyi, Tibor; Maffiuletti, Nicola A.

    2011-01-01

    This review provides evidence for the hypothesis that electrostimulation strength training (EST) increases the force of a maximal voluntary contraction (MVC) through neural adaptations in healthy skeletal muscle. Although electrical stimulation and voluntary effort activate muscle differently, there

  14. Dynamical agents' strategies and the fractal market hypothesis

    Czech Academy of Sciences Publication Activity Database

    Vácha, Lukáš; Vošvrda, Miloslav

    2005-01-01

    Roč. 14, č. 2 (2005), s. 172-179 ISSN 1210-0455 Grant - others:GA UK(CZ) 454/2004/A EK/FSV Institutional research plan: CEZ:AV0Z10750506 Keywords : efficient market hypothesis * fractal market hypothesis * agent's investment horizons Subject RIV: AH - Economics

  15. Hypothesis Designs for Three-Hypothesis Test Problems

    OpenAIRE

    Yan Li; Xiaolong Pu

    2010-01-01

    As a helpful guide for applications, this paper designs the alternative hypotheses of three-hypothesis test problems under the required error probabilities and average sample number. The asymptotic formulas and the proposed numerical quadrature formulas are adopted, respectively, to obtain the hypothesis designs and the corresponding sequential test schemes under the Koopman-Darmois distributions. The example of the normal mean test shows that our methods are qu...

  16. The equilibrium-point hypothesis--past, present and future.

    Science.gov (United States)

    Feldman, Anatol G; Levin, Mindy F

    2009-01-01

    This chapter is a brief account of the fundamentals of the equilibrium-point hypothesis, more adequately called the threshold control theory (TCT). It also compares the TCT with other approaches to motor control. The basic notions of the TCT are reviewed with a major focus on solutions to the problems of multi-muscle and multi-degree-of-freedom redundancy. The TCT incorporates cognitive aspects by explaining how neurons recognize that internal (neural) and external (environmental) events match each other. These aspects, as well as how motor learning occurs, are subjects of further development of the TCT.

  17. Tests of the lunar hypothesis

    Science.gov (United States)

    Taylor, S. R.

    1984-01-01

    The concept that the Moon was fissioned from the Earth after core separation is the most readily testable hypothesis of lunar origin, since direct comparisons of lunar and terrestrial compositions can be made. Differences found in such comparisons introduce so many ad hoc adjustments to the fission hypothesis that it becomes untestable. Further constraints may be obtained from attempting to date the volatile-refractory element fractionation. The combination of chemical and isotopic problems suggests that the fission hypothesis is no longer viable, and separate terrestrial and lunar accretion from a population of fractionated precursor planetesimals provides a more reasonable explanation.

  18. Evaluating the Stage Learning Hypothesis.

    Science.gov (United States)

    Thomas, Hoben

    1980-01-01

    A procedure for evaluating the Genevan stage learning hypothesis is illustrated by analyzing Inhelder, Sinclair, and Bovet's guided learning experiments (in "Learning and the Development of Cognition." Cambridge: Harvard University Press, 1974). (Author/MP)

  19. The Hypothesis-Driven Physical Examination.

    Science.gov (United States)

    Garibaldi, Brian T; Olson, Andrew P J

    2018-05-01

    The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of physical examinations in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the post-test probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency, while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
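    The likelihood-ratio arithmetic behind this approach is easiest to see in odds form: post-test odds = pre-test odds × LR. A small sketch (the numbers are hypothetical, not from the article):

    ```python
    def post_test_probability(pretest_p, lr):
        """Convert a pre-test probability to a post-test probability via a
        likelihood ratio, using Bayes' theorem in odds form:
        post-test odds = pre-test odds * LR.
        """
        pretest_odds = pretest_p / (1.0 - pretest_p)
        posttest_odds = pretest_odds * lr
        return posttest_odds / (1.0 + posttest_odds)

    # Example: a finding with LR+ = 9 raises a 10% pre-test probability to 50%.
    p = post_test_probability(0.10, 9.0)
    print(round(p, 2))  # 0.5
    ```

    A maneuver with LR near 1 barely moves the estimate, which is exactly why the hypothesis-driven approach favors maneuvers whose likelihood ratios can meaningfully alter the probability of disease.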

  20. Exploring heterogeneous market hypothesis using realized volatility

    Science.gov (United States)

    Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari

    2013-04-01

    This study investigates the heterogeneous market hypothesis using high frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicated the presence of long memory behaviour and predictability elements in the financial time series which supported heterogeneous market hypothesis. Besides the common sum-of-square intraday realized volatility, we also advocated two power variation realized volatilities in forecast evaluation and risk measurement in order to overcome the possible abrupt jumps during the credit crisis. Finally, the empirical results are used in determining the market risk using the value-at-risk approach. The findings of this study have implications for informationally market efficiency analysis, portfolio strategies and risk managements.
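    The heterogeneous autoregressive (HAR) framework mentioned above regresses next-day realized volatility on daily, weekly (5-day) and monthly (22-day) average realized volatilities, one component per trading horizon. A minimal OLS sketch on synthetic data (the data and estimation details are illustrative and may differ from the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rv = np.abs(rng.standard_normal(500)) + 0.5  # synthetic daily realized volatility

    def har_regressors(rv, t):
        # Constant, plus daily, weekly and monthly average RV available at t-1.
        return [1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()]

    X = np.array([har_regressors(rv, t) for t in range(22, len(rv))])
    y = rv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # RV_t ~ c + b_d*RV_d + b_w*RV_w + b_m*RV_m
    one_step_forecast = float(np.array(har_regressors(rv, len(rv))) @ beta)
    ```

    Such one-step forecasts are what feed the value-at-risk calculations described in the abstract.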

  1. The atomic hypothesis: physical consequences

    International Nuclear Information System (INIS)

    Rivas, Martin

    2008-01-01

    The hypothesis that matter is made of some ultimate and indivisible objects, together with the restricted relativity principle, establishes a constraint on the kind of variables we are allowed to use for the variational description of elementary particles. We consider that the atomic hypothesis not only states the indivisibility of elementary particles, but also that these ultimate objects, if not annihilated, cannot be modified by any interaction so that all allowed states of an elementary particle are only kinematical modifications of any one of them. Therefore, an elementary particle cannot have excited states. In this way, the kinematical group of spacetime symmetries not only defines the symmetries of the system, but also the variables in terms of which the mathematical description of the elementary particles can be expressed in either the classical or the quantum mechanical description. When considering the interaction of two Dirac particles, the atomic hypothesis restricts the interaction Lagrangian to a kind of minimal coupling interaction

  2. RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS

    Directory of Open Access Journals (Sweden)

    Nicolae-Marius JULA

    2017-05-01

    The random walk hypothesis states that stock market prices do not follow a predictable trajectory but are simply random. Before trying to predict a random set of data, one should test for randomness, because, regardless of the power and complexity of the models used, the results cannot otherwise be trustworthy. There are several methods for testing these hypotheses, and the computational power provided by the R environment makes the researcher's work easier and cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
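    The abstract points to randomness tests implemented in R; as a language-agnostic illustration of one such test (not the paper's code), here is a Wald-Wolfowitz runs test on a sequence of price-change signs:

    ```python
    import math

    def runs_test_z(signs):
        """Wald-Wolfowitz runs test on a sequence of +1/-1 return signs.

        Under randomness the number of runs is approximately normal; a large
        |z| suggests the sign sequence is not random.
        """
        n_pos = sum(1 for s in signs if s > 0)
        n_neg = len(signs) - n_pos
        runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
        n = n_pos + n_neg
        mean = 2 * n_pos * n_neg / n + 1
        var = (mean - 1) * (mean - 2) / (n - 1)
        return (runs - mean) / math.sqrt(var)

    # A strictly alternating sequence has far more runs than chance predicts.
    z = runs_test_z([+1, -1] * 50)
    ```

    A |z| well above ~2 rejects randomness at conventional significance levels; for the alternating sequence above z is large and positive.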

  3. Multiple sclerosis: a geographical hypothesis.

    Science.gov (United States)

    Carlyle, I P

    1997-12-01

    Multiple sclerosis remains a rare neurological disease of unknown aetiology, with a unique distribution, both geographically and historically. Rare in equatorial regions, it becomes increasingly common in higher latitudes; historically, it was first clinically recognized in the early nineteenth century. A hypothesis, based on geographical reasoning, is here proposed: that the disease is the result of a specific vitamin deficiency. Different individuals suffer the deficiency in separate and often unique ways. Evidence to support the hypothesis exists in cultural considerations, in the global distribution of the disease, and in its historical prevalence.

  4. Discussion of the Porter hypothesis

    International Nuclear Information System (INIS)

    1999-11-01

    In reaction to the long-range vision of RMNO, published in 1996, the Dutch government posed the question whether a far-reaching and progressive modernization policy would lead to competitive advantages of high-quality products on partly new markets. This question is connected to the so-called Porter hypothesis: 'By stimulating innovation, strict environmental regulations can actually enhance competitiveness', from which it can be concluded that environment and economy can work together quite well. A literature study was carried out to determine under which conditions this hypothesis is endorsed in the scientific literature and policy documents. Recommendations are given for further studies.

  5. The thrifty phenotype hypothesis revisited

    DEFF Research Database (Denmark)

    Vaag, A A; Grunnet, L G; Arora, G P

    2012-01-01

    Twenty years ago, Hales and Barker along with their co-workers published some of their pioneering papers proposing the 'thrifty phenotype hypothesis' in Diabetologia (4;35:595-601 and 3;36:62-67). Their postulate that fetal programming could represent an important player in the origin of type 2 ... of the underlying molecular mechanisms. Type 2 diabetes is a multiple-organ disease, and developmental programming, with its idea of organ plasticity, is a plausible hypothesis for a common basis for the widespread organ dysfunctions in type 2 diabetes and the metabolic syndrome. Only two among the 45 known type 2 ...

  6. Intermittent reductions in respiratory neural activity elicit spinal TNF-α-independent, atypical PKC-dependent inactivity-induced phrenic motor facilitation.

    Science.gov (United States)

    Baertsch, Nathan A; Baker-Herman, Tracy L

    2015-04-15

    In many neural networks, mechanisms of compensatory plasticity respond to prolonged reductions in neural activity by increasing cellular excitability or synaptic strength. In the respiratory control system, a prolonged reduction in synaptic inputs to the phrenic motor pool elicits a TNF-α- and atypical PKC-dependent form of spinal plasticity known as inactivity-induced phrenic motor facilitation (iPMF). Although iPMF may be elicited by a prolonged reduction in respiratory neural activity, iPMF is more efficiently induced when reduced respiratory neural activity (neural apnea) occurs intermittently. Mechanisms giving rise to iPMF following intermittent neural apnea are unknown. The purpose of this study was to test the hypothesis that iPMF following intermittent reductions in respiratory neural activity requires spinal TNF-α and aPKC. Phrenic motor output was recorded in anesthetized and ventilated rats exposed to brief intermittent (5, ∼1.25 min), brief sustained (∼6.25 min), or prolonged sustained (30 min) neural apnea. iPMF was elicited following brief intermittent and prolonged sustained neural apnea, but not following brief sustained neural apnea. Unlike iPMF following prolonged neural apnea, spinal TNF-α was not required to initiate iPMF during intermittent neural apnea; however, aPKC was still required for its stabilization. These results suggest that different patterns of respiratory neural activity induce iPMF through distinct cellular mechanisms but ultimately converge on a similar downstream pathway. Understanding the diverse cellular mechanisms that give rise to inactivity-induced respiratory plasticity may lead to development of novel therapeutic strategies to treat devastating respiratory control disorders when endogenous compensatory mechanisms fail. Copyright © 2015 the American Physiological Society.

  7. Neural networks

    International Nuclear Information System (INIS)

    Denby, Bruce; Lindsey, Clark; Lyons, Louis

    1992-01-01

    The 1980s saw a tremendous renewal of interest in 'neural' information processing systems, or 'artificial neural networks', among computer scientists and computational biologists studying cognition. Since then, the growth of interest in neural networks in high energy physics, fueled by the need for new information processing technologies for the next generation of high energy proton colliders, can only be described as explosive

  8. Consumer health information seeking as hypothesis testing.

    Science.gov (United States)

    Keselman, Alla; Browne, Allen C; Kaufman, David R

    2008-01-01

    Despite the proliferation of consumer health sites, lay individuals often experience difficulty finding health information online. The present study attempts to understand users' information seeking difficulties by drawing on a hypothesis testing explanatory framework. It also addresses the role of user competencies and their interaction with internet resources. Twenty participants were interviewed about their understanding of a hypothetical scenario about a family member suffering from stable angina and then searched MedlinePlus consumer health information portal for information on the problem presented in the scenario. Participants' understanding of heart disease was analyzed via semantic analysis. Thematic coding was used to describe information seeking trajectories in terms of three key strategies: verification of the primary hypothesis, narrowing search within the general hypothesis area and bottom-up search. Compared to an expert model, participants' understanding of heart disease involved different key concepts, which were also differently grouped and defined. This understanding provided the framework for search-guiding hypotheses and results interpretation. Incorrect or imprecise domain knowledge led individuals to search for information on irrelevant sites, often seeking out data to confirm their incorrect initial hypotheses. Online search skills enhanced search efficiency, but did not eliminate these difficulties. Regardless of their web experience and general search skills, lay individuals may experience difficulty with health information searches. These difficulties may be related to formulating and evaluating hypotheses that are rooted in their domain knowledge. Informatics can provide support at the levels of health information portals, individual websites, and consumer education tools.

  9. The Stress Acceleration Hypothesis of Nightmares

    Directory of Open Access Journals (Sweden)

    Tore Nielsen

    2017-06-01

    Adverse childhood experiences can deleteriously affect future physical and mental health, increasing risk for many illnesses, including psychiatric problems, sleep disorders, and, according to the present hypothesis, idiopathic nightmares. Much like post-traumatic nightmares, which are triggered by trauma and lead to recurrent emotional dreaming about the trauma, idiopathic nightmares are hypothesized to originate in early adverse experiences that lead in later life to the expression of early memories and emotions in dream content. Accordingly, the objectives of this paper are to (1) review existing literature on sleep, dreaming and nightmares in relation to early adverse experiences, drawing upon both empirical studies of dreaming and nightmares and books and chapters by recognized nightmare experts, and (2) propose a new approach to explaining nightmares that is based upon the Stress Acceleration Hypothesis of mental illness. The latter stipulates that susceptibility to mental illness is increased by adversity occurring during a developmentally sensitive window for emotional maturation—the infantile amnesia period—that ends around age 3½. Early adversity accelerates the neural and behavioral maturation of emotional systems governing the expression, learning, and extinction of fear memories and may afford short-term adaptive value. But it also engenders long-term dysfunctional consequences including an increased risk for nightmares. Two mechanisms are proposed: (1) disruption of infantile amnesia allows normally forgotten early childhood memories to influence later emotions, cognitions and behavior, including the common expression of threats in nightmares; (2) alterations of normal emotion regulation processes of both waking and sleep lead to increased fear sensitivity and less effective fear extinction. These changes influence an affect network previously hypothesized to regulate fear extinction during REM sleep, disruption of which leads to ...

  10. The Fractal Market Hypothesis: Applications to Financial Forecasting

    OpenAIRE

    Blackledge, Jonathan

    2010-01-01

    Most financial modelling systems rely on an underlying hypothesis known as the Efficient Market Hypothesis (EMH) including the famous Black-Scholes formula for placing an option. However, the EMH has a fundamental flaw: it is based on the assumption that economic processes are normally distributed and it has long been known that this is not the case. This fundamental assumption leads to a number of shortcomings associated with using the EMH to analyse financial data which includes failure to ...

  11. Questioning the social intelligence hypothesis.

    Science.gov (United States)

    Holekamp, Kay E

    2007-02-01

    The social intelligence hypothesis posits that complex cognition and enlarged "executive brains" evolved in response to challenges that are associated with social complexity. This hypothesis has been well supported, but some recent data are inconsistent with its predictions. It is becoming increasingly clear that multiple selective agents, and non-selective constraints, must have acted to shape cognitive abilities in humans and other animals. The task now is to develop a larger theoretical framework that takes into account both inter-specific differences and similarities in cognition. This new framework should facilitate consideration of how selection pressures that are associated with sociality interact with those that are imposed by non-social forms of environmental complexity, and how both types of functional demands interact with phylogenetic and developmental constraints.

  12. Hypothesis test for synchronization: twin surrogates revisited.

    Science.gov (United States)

    Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf

    2009-03-01

    The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.
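    The rank-based logic of a surrogate-data hypothesis test (compare an observed statistic against its distribution over surrogates generated under the null) can be sketched with simple shuffle surrogates. Shuffling is far cruder than the twin-surrogate method described above, which preserves the underlying dynamics; this sketch only illustrates the test skeleton:

    ```python
    import math
    import random
    import statistics

    def lag1_autocorr(x):
        m = statistics.fmean(x)
        num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
        den = sum((a - m) ** 2 for a in x)
        return num / den

    def surrogate_p_value(x, stat=lag1_autocorr, n_surrogates=200, seed=1):
        """Rank the observed statistic among shuffled surrogates.

        Shuffling destroys temporal structure while keeping the amplitude
        distribution, so a small p-value rejects 'no temporal structure'.
        """
        rng = random.Random(seed)
        observed = stat(x)
        exceed = 0
        for _ in range(n_surrogates):
            s = x[:]
            rng.shuffle(s)
            if stat(s) >= observed:
                exceed += 1
        return (exceed + 1) / (n_surrogates + 1)

    # A slow sine wave has strong lag-1 autocorrelation; its surrogates do not.
    p = surrogate_p_value([math.sin(0.1 * t) for t in range(200)])
    ```

    In the paper's setting, the statistic is a phase-synchronization index between the two eyes and the surrogates preserve much more of the signal's structure, but the decision rule is the same ranking of observed value against the surrogate ensemble.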

  13. Whiplash and the compensation hypothesis.

    Science.gov (United States)

    Spearing, Natalie M; Connelly, Luke B

    2011-12-01

    Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general, or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important, and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.

  14. Subjective duration distortions mirror neural repetition suppression.

    Science.gov (United States)

    Pariyadath, Vani; Eagleman, David M

    2012-01-01

    Subjective duration is strongly influenced by repetition and novelty, such that an oddball stimulus in a stream of repeated stimuli appears to last longer in duration in comparison. We hypothesize that this duration illusion, called the temporal oddball effect, is a result of the difference in expectation between the oddball and the repeated stimuli. Specifically, we conjecture that the repeated stimuli contract in duration as a result of increased predictability; these duration contractions, we suggest, result from decreased neural response amplitude with repetition, known as repetition suppression. Participants viewed trials consisting of lines presented at a particular orientation (standard stimuli) followed by a line presented at a different orientation (oddball stimulus). We found that the size of the oddball effect correlates with the number of repetitions of the standard stimulus as well as the amount of deviance from the oddball stimulus; both of these results are consistent with a repetition suppression hypothesis. Further, we find that the temporal oddball effect is sensitive to experimental context--that is, the size of the oddball effect for a particular experimental trial is influenced by the range of duration distortions seen in preceding trials. Our data suggest that the repetition-related duration contractions causing the oddball effect are a result of neural repetition suppression. More generally, subjective duration may reflect the prediction error associated with a stimulus and, consequently, the efficiency of encoding that stimulus. Additionally, we emphasize that experimental context effects need to be taken into consideration when designing duration-related tasks.

  15. Efficient Transduction of Feline Neural Progenitor Cells for Delivery of Glial Cell Line-Derived Neurotrophic Factor Using a Feline Immunodeficiency Virus-Based Lentiviral Construct

    Directory of Open Access Journals (Sweden)

    X. Joann You

    2011-01-01

    Work has shown that stem cell transplantation can rescue or replace neurons in models of retinal degenerative disease. Neural progenitor cells (NPCs) modified to overexpress neurotrophic factors are one means of providing sustained delivery of therapeutic gene products in vivo. To develop a nonrodent animal model of this therapeutic strategy, we previously derived NPCs from the fetal cat brain (cNPCs). Here we use bicistronic feline lentiviral vectors to transduce cNPCs with glial cell-derived neurotrophic factor (GDNF) together with a GFP reporter gene. Transduction efficacy is assessed, together with transgene expression level and stability during induction of cellular differentiation, as well as the influence of GDNF transduction on growth and gene expression profile. We show that GDNF-overexpressing cNPCs expand in vitro, coexpress GFP, and secrete high levels of GDNF protein—before and after differentiation—all qualities advantageous for use as a cell-based approach in feline models of neural degenerative disease.

  16. An efficient approach for electric load forecasting using distributed ART (adaptive resonance theory) and HS-ARTMAP (Hyper-spherical ARTMAP network) neural network

    International Nuclear Information System (INIS)

    Cai, Yuan; Wang, Jian-zhou; Tang, Yun; Yang, Yu-chen

    2011-01-01

    This paper presents a neural network based on adaptive resonance theory, named distributed ART (adaptive resonance theory) and HS-ARTMAP (Hyper-spherical ARTMAP network), applied to the electric load forecasting problem. The distributed ART combines the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multi-layer perceptrons. The HS-ARTMAP, a hybrid of an RBF (Radial Basis Function)-network-like module, which substitutes hyper-sphere basis functions for Gaussian basis functions, and an ART-like module, provides incremental learning in function approximation problems. The HS-ARTMAP receives only the compressed distributed coding produced by the distributed ART, which addresses the category-proliferation problem that ARTMAP (adaptive resonance theory map) architectures often encounter, and still performs well in electric load forecasting. To demonstrate the performance of the methodology, data from New South Wales and Victoria in Australia are used. Results show that the developed method is much better than the traditional BP and single HS-ARTMAP neural networks. -- Research highlights: → The processing of the presented network is based on compressed distributed data, an innovation among adaptive resonance theory architectures. → The presented network mitigates the proliferation problem that Fuzzy ARTMAP architectures usually encounter. → The network forecasts electrical load on-line, accurately and stably. → Both one-period and multi-period load forecasting are executed using data from different cities.

  17. Martingales, the Efficient Market Hypothesis, and Spurious Stylized Facts

    OpenAIRE

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2007-01-01

    The condition for stationary increments, not scaling, determines long-time pair autocorrelations. An incorrect assumption of stationary increments generates spurious stylized facts, fat tails and a Hurst exponent H_s = 1/2, when the increments are nonstationary, as they are in FX markets. The nonstationarity arises from systematic unevenness in noise traders' behavior. Spurious results arise mathematically from using a log increment with a 'sliding window'. We explain why a hard-to-beat market de...

  18. A Molecular–Structure Hypothesis

    Directory of Open Access Journals (Sweden)

    Jan C. A. Boeyens

    2010-11-01

    The self-similar symmetry that occurs between atomic nuclei, biological growth structures, the solar system, globular clusters and spiral galaxies suggests that a similar pattern should characterize atomic and molecular structures. This possibility is explored in terms of the current molecular-structure hypothesis and its extension into four-dimensional space-time. It is concluded that a quantum molecule only has structure in four dimensions and that classical (Newtonian) structure, which occurs in three dimensions, cannot be simulated by quantum-chemical computation.

  19. Antiaging therapy: a prospective hypothesis

    Directory of Open Access Journals (Sweden)

    Shahidi Bonjar MR

    2015-01-01

    Mohammad Rashid Shahidi Bonjar,1 Leyla Shahidi Bonjar2 1School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran; 2Department of Pharmacology, College of Pharmacy, Kerman University of Medical Sciences, Kerman, Iran Abstract: This hypothesis proposes a new prospective approach to slow the aging process in older humans. The hypothesis could lead to developing new treatments for age-related illnesses and help humans to live longer. This hypothesis has no previous documentation in scientific media and has no protocol. Scientists have presented evidence that systemic aging is influenced by peculiar molecules in the blood. Researchers at Albert Einstein College of Medicine, New York, and Harvard University in Cambridge discovered elevated titers of aging-related molecules (ARMs) in blood, which trigger a cascade of the aging process in mice; they also indicated that the process can be reduced or even reversed. By inhibiting the production of ARMs, they could reduce age-related cognitive and physical declines. The present hypothesis offers a new approach to translate these findings into medical treatment: extracorporeal adjustment of ARMs would lead to slower rates of aging. A prospective “antiaging blood filtration column” (AABFC) is a nanotechnological device that would fulfill the central role in this approach. An AABFC would set a near-youth homeostatic titer of ARMs in the blood. In this regard, the AABFC immobilizes ARMs from the blood while blood passes through the column. The AABFC harbors antibodies against ARMs. ARM antibodies would be conjugated irreversibly to ARMs on contact surfaces of the reaction platforms inside the AABFC till near-youth homeostasis is attained. The treatment is performed with the aid of a blood-circulating pump. Similar to a renal dialysis machine, blood would circulate from the body to the AABFC and from there back to the body in a closed circuit until ARMs were sufficiently depleted from the blood. The ...

  20. Is PMI the Hypothesis or the Null Hypothesis?

    Science.gov (United States)

    Tarone, Aaron M; Sanford, Michelle R

    2017-09-01

    Over the past several decades, there have been several strident exchanges regarding whether forensic entomologists estimate the postmortem interval (PMI), minimum PMI, or something else. During that time, there has been a proliferation of terminology reflecting this concern regarding "what we do." This has been a frustrating conversation for some in the community because much of this debate appears to be centered on what assumptions are acknowledged directly and which are embedded within a list of assumptions (or ignored altogether) in the literature and in case reports. An additional component of the conversation centers on a concern that moving away from the use of certain terminology like PMI acknowledges limitations and problems that would make the application of entomology appear less useful in court, a problem for lawyers, but one that should not be problematic for scientists in the forensic entomology community, as uncertainty is part of science that should and can be presented effectively in the courtroom (e.g., population genetic concepts in forensics). Unfortunately, a consequence of the way this conversation is conducted is that even as all involved in the debate acknowledge the concerns of their colleagues, parties continue to talk past one another advocating their preferred terminology. Progress will not be made until the community recognizes that all of the terms under consideration take the form of null hypothesis statements and that thinking about "what we do" as a null hypothesis has useful legal and scientific ramifications that transcend arguments over the usage of preferred terminology. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. The Stoichiometric Divisome: A Hypothesis

    Directory of Open Access Journals (Sweden)

    Waldemar eVollmer

    2015-05-01

Full Text Available Dividing Escherichia coli cells simultaneously constrict the inner membrane, peptidoglycan layer and outer membrane to synthesize the new poles of the daughter cells. For this, more than 30 proteins localize to mid-cell where they form a large, ring-like assembly, the divisome, facilitating division. Although the precise function of most divisome proteins is unknown, it became apparent in recent years that dynamic protein-protein interactions are essential for divisome assembly and function. However, little is known about the nature of the interactions involved and the stoichiometry of the proteins within the divisome. A recent study (Li et al., 2014) used ribosome profiling to measure the absolute protein synthesis rates in E. coli. Interestingly, they observed that most proteins that participate in known multiprotein complexes are synthesized in proportion to their stoichiometry. Based on this principle, we present a hypothesis for the stoichiometry of the core of the divisome, taking into account known protein-protein interactions. From this hypothesis we infer a possible mechanism for PG synthesis during division.

  2. Neural Networks

    International Nuclear Information System (INIS)

    Smith, Patrick I.

    2003-01-01

Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals are fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. There are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle produced a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word "artificial" in that definition indicates that a neural network is a computer program that uses calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. 
The human brain contains billions of neural cells that are responsible for processing
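The "learning by representative examples" idea described above can be illustrated with a minimal sketch: a single artificial neuron trained with the classic perceptron rule. This is a generic illustration of how a network adjusts its weights from labelled examples, not the JAS3 tool kit itself.

```python
# Minimal illustration of "learning by representative examples":
# a single artificial neuron trained with the perceptron rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights mapping two binary inputs to a binary label."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Representative examples of the logical OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

After a few passes over the examples, the neuron reproduces the target labels; real detectors of course require multi-layer networks trained on far richer inputs.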

  3. Evolvable synthetic neural system

    Science.gov (United States)

    Curtis, Steven A. (Inventor)

    2009-01-01

    An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.

  4. Sigmund Freud and the Crick-Koch hypothesis. A footnote to the history of consciousness studies.

    Science.gov (United States)

    Smith, D L

    1999-06-01

    The author describes Crick and Koch's recently developed theory of the neurophysiological basis of consciousness as synchronised neural oscillations. The thesis that neural oscillations provide the neurophysiological basis for consciousness was anticipated by Sigmund Freud in his 1895 'Project for a scientific psychology'. Freud attempted to solve his neuropsychological 'problem of quality' by means of the hypothesis that information concerning conscious sensory qualities is transmitted through the mental apparatus by means of neural 'periods'. Freud believed that information carried by neural oscillations would proliferate across 'contact-barriers' (synapses) without inhibition. Freud's theory thus appears to imply that synchronised neural oscillations are an important component of the neurophysiological basis of consciousness. It is possible that Freud's thesis was developed in response to the experimental research of the American neuroscientist M. M. Garver.

  5. Neural networks for triggering

    International Nuclear Information System (INIS)

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies obtained for B's, and the rejection of background, are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab

  6. Combination of counterpropagation artificial neural networks and antioxidant activities for comprehensive evaluation of associated-extraction efficiency of various cyclodextrins in the traditional Chinese formula Xue-Zhi-Ning.

    Science.gov (United States)

    Sun, Lili; Yang, Jianwen; Wang, Meng; Zhang, Huijie; Liu, Yanan; Ren, Xiaoliang; Qi, Aidi

    2015-11-10

Xue-Zhi-Ning (XZN) is a widely used traditional Chinese medicine formula to treat hyperlipidemia. Recently, cyclodextrins (CDs) have been extensively used to minimize problems related to medicine bioavailability, such as low solubility and poor stability. The objective of this study was to determine the associated-extraction efficiency of various CDs in XZN. Three types of CDs were evaluated, including native CDs (α-CD, β-CD), hydrophilic CD derivatives (HP-β-CD and Me-β-CD), and ionic CD derivatives (SBE-β-CD and CM-β-CD). An ultra high-performance liquid chromatography (UHPLC) fingerprint was applied to determine the components in the CD extracts and the original aqueous extract (OAE). A counterpropagation artificial neural network (CP-ANN) was used to analyze the components in the different extracts and compare the selective extraction of the various CDs. Extraction efficiencies of the various CDs in terms of extracted components follow the ranking: ionic CD derivatives>hydrophilic CD derivatives>native CDs>OAE. In addition, the different types of CDs show their own extraction selectivity, and the ionic CD derivatives present the strongest associated-extraction efficiency. Antioxidant potentials of the various extracts were evaluated by determining the inhibition of spontaneous, H2O2-induced, CCl4-induced and Fe(2+)/ascorbic acid-induced lipid peroxidation (LPO) and analyzing the scavenging capacity for DPPH and hydroxyl radicals. The order of extraction efficiencies of the various CDs relative to antioxidant activities is as follows: SBE-β-CD>CM-β-CD>HP-β-CD>Me-β-CD>β-CD>α-CD. The results demonstrate that all of the CDs studied increase the extraction efficiency and that the ionic CD derivatives (SBE-β-CD and CM-β-CD) present the highest extraction capability in terms of amount extracted and antioxidant activities of extracts. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  8. Towards modeling of combined cooling, heating and power system with artificial neural network for exergy destruction and exergy efficiency prognostication of tri-generation components

    International Nuclear Information System (INIS)

    Taghavifar, Hadi; Anvari, Simin; Saray, Rahim Khoshbakhti; Khalilarya, Shahram; Jafarmadar, Samad; Taghavifar, Hamid

    2015-01-01

The current study investigates the CCHP (combined cooling, heating and power) system, with 10 input variables chosen to analyze the 10 most important output parameters. Moreover, an ANN (artificial neural network) was successfully applied to the tri-generation system on account of its capability to predict responses with great confidence. The results of a sensitivity analysis served as the foundation for selecting the most suitable and potent input parameters of the supposed cycle. Furthermore, the best ANN topology was attained based on the least MSE and number of iterations. Consequently, the trainlm (Levenberg–Marquardt) training approach with a 10-9-10 configuration was exploited for ANN modeling in order to give the best output correspondence. The maximum MRE (mean relative error) of 1.75% and minimum R² of 0.984 represent the reliability and outperformance of the developed ANN over the conventional thermodynamic analysis carried out with EES (engineering equation solver) software. - Highlights: • Exergy analysis is undertaken for CCHP components based on operative factors. • ANN tool is applied to the database obtained from the thermodynamic analyses. • The best ANN topology is detected at 10-9-10 with the trainlm learning algorithm. • The input and output layer parameters were selected based on sensitivity analysis.
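As a rough sketch of the 10-9-10 topology described above (10 inputs, 9 hidden units, 10 outputs), the forward pass of such a network can be written as follows. The weights here are random placeholders and the hidden activation is assumed to be tanh; the study itself trained the network with the Levenberg-Marquardt (trainlm) algorithm.

```python
# Sketch of a 10-9-10 feed-forward topology: 10 input variables,
# 9 hidden tanh units, 10 output targets. Weights are random
# placeholders, not the trained values from the study.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(9, 10))   # input -> hidden weights
b1 = np.zeros(9)
W2 = rng.normal(size=(10, 9))   # hidden -> output weights
b2 = np.zeros(10)

def forward(x):
    h = np.tanh(W1 @ x + b1)    # hidden-layer activations
    return W2 @ h + b2          # linear output layer

x = rng.normal(size=10)         # one vector of 10 operating variables
y = forward(x)
assert y.shape == (10,)         # 10 predicted outputs, one per target
```

Training would then adjust W1, b1, W2, b2 to minimize the MSE between these outputs and the thermodynamic targets.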

  9. Neural theory for the perception of causal actions.

    Science.gov (United States)

    Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A

    2012-07-01

    The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.

  10. Memory in astrocytes: a hypothesis

    Directory of Open Access Journals (Sweden)

    Caudle Robert M

    2006-01-01

    Full Text Available Abstract Background Recent work has indicated an increasingly complex role for astrocytes in the central nervous system. Astrocytes are now known to exchange information with neurons at synaptic junctions and to alter the information processing capabilities of the neurons. As an extension of this trend a hypothesis was proposed that astrocytes function to store information. To explore this idea the ion channels in biological membranes were compared to models known as cellular automata. These comparisons were made to test the hypothesis that ion channels in the membranes of astrocytes form a dynamic information storage device. Results Two dimensional cellular automata were found to behave similarly to ion channels in a membrane when they function at the boundary between order and chaos. The length of time information is stored in this class of cellular automata is exponentially related to the number of units. Therefore the length of time biological ion channels store information was plotted versus the estimated number of ion channels in the tissue. This analysis indicates that there is an exponential relationship between memory and the number of ion channels. Extrapolation of this relationship to the estimated number of ion channels in the astrocytes of a human brain indicates that memory can be stored in this system for an entire life span. Interestingly, this information is not affixed to any physical structure, but is stored as an organization of the activity of the ion channels. Further analysis of two dimensional cellular automata also demonstrates that these systems have both associative and temporal memory capabilities. Conclusion It is concluded that astrocytes may serve as a dynamic information sink for neurons. The memory in the astrocytes is stored by organizing the activity of ion channels and is not associated with a physical location such as a synapse. In order for this form of memory to be of significant duration it is necessary
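The abstract above does not specify the automaton rule, so as a hedged illustration the sketch below uses Conway's Game of Life, a standard example of a two-dimensional cellular automaton operating near the boundary between order and chaos. It shows the key property invoked in the hypothesis: information persists as an organization of unit activity over time rather than in any fixed structure.

```python
# A two-dimensional cellular automaton of the kind compared above to
# membrane ion channels. Conway's "Game of Life" rule is used here as
# a standard example; the paper's specific rule is not given.
import numpy as np

def step(grid):
    """One synchronous update of a 2D binary grid (toroidal edges)."""
    # Sum the 8 neighbours of every cell via array shifts.
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if (i, j) != (0, 0))
    # Life rule: survive with 2-3 neighbours, birth with exactly 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
grid[3, 3:6] = 1               # a "blinker": oscillates with period 2
after_two = step(step(grid))
assert np.array_equal(after_two, grid)   # the pattern recurs after 2 steps
```

The blinker stores its state in the ongoing pattern of activity: no individual cell holds it, yet the configuration reliably recurs, which is the sense in which such automata "remember".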

  11. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...
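The sequential probability ratio tests mentioned above have a simple classical (non-robust) form, sketched below for two simple Bernoulli hypotheses with Wald's threshold approximations for error rates α and β. The robust and distributed generalizations developed in the book build on this basic scheme.

```python
# Classical sequential probability ratio test (SPRT) between two simple
# hypotheses about a Bernoulli parameter: H0: p = 0.5 vs H1: p = 0.8.
# This is the textbook, non-robust form.
import math

def sprt(samples, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood ratio contribution of one Bernoulli sample.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

decision, n_used = sprt([1] * 20)          # a run of successes
print(decision, n_used)
```

Unlike a fixed-sample test, the SPRT stops as soon as the accumulated evidence crosses either threshold, which is what makes its robust variants interesting for sample-efficient detection.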

  12. The venom optimization hypothesis revisited.

    Science.gov (United States)

    Morgenstern, David; King, Glenn F

    2013-03-01

    Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Alien abduction: a medical hypothesis.

    Science.gov (United States)

    Forrest, David V

    2008-01-01

    In response to a new psychological study of persons who believe they have been abducted by space aliens that found that sleep paralysis, a history of being hypnotized, and preoccupation with the paranormal and extraterrestrial were predisposing experiences, I noted that many of the frequently reported particulars of the abduction experience bear more than a passing resemblance to medical-surgical procedures and propose that experience with these may also be contributory. There is the altered state of consciousness, uniformly colored figures with prominent eyes, in a high-tech room under a round bright saucerlike object; there is nakedness, pain and a loss of control while the body's boundaries are being probed; and yet the figures are thought benevolent. No medical-surgical history was apparently taken in the above mentioned study, but psychological laboratory work evaluated false memory formation. I discuss problems in assessing intraoperative awareness and ways in which the medical hypothesis could be elaborated and tested. If physicians are causing this syndrome in a percentage of patients, we should know about it; and persons who feel they have been abducted should be encouraged to inform their surgeons and anesthesiologists without challenging their beliefs.

  14. The oxidative hypothesis of senescence

    Directory of Open Access Journals (Sweden)

    Gilca M

    2007-01-01

Full Text Available The oxidative hypothesis of senescence, since its origin in 1956, has garnered significant evidence and growing support among scientists for the notion that free radicals play an important role in ageing, either as "damaging" molecules or as signaling molecules. Age-increasing oxidative injuries induced by free radicals, higher susceptibility to oxidative stress in short-lived organisms, genetic manipulations that alter both oxidative resistance and longevity, and the anti-ageing effect of caloric restriction and intermittent fasting are a few examples of accepted scientific facts that support the oxidative theory of senescence. Though not completely understood due to the complex "network" of redox regulatory systems, the implication of oxidative stress in the ageing process is now well documented. Moreover, it is compatible with other current ageing theories (e.g., those implicating mitochondrial damage, the mitochondrial-lysosomal axis, stress-induced premature senescence, biological "garbage" accumulation, etc.). This review is intended to summarize and critically discuss the redox mechanisms involved during the ageing process: sources of oxidant agents in ageing (mitochondrial: the electron transport chain and the nitric oxide synthase reaction; non-mitochondrial: the Fenton reaction, microsomal cytochrome P450 enzymes, peroxisomal β-oxidation and the respiratory burst of phagocytic cells), antioxidant changes in ageing (enzymatic: superoxide dismutase, glutathione reductase, glutathione peroxidase and catalase; non-enzymatic: glutathione, ascorbate, urate, bilirubin, melatonin, tocopherols, carotenoids and ubiquinol), alteration of oxidative damage repair mechanisms, and the role of free radicals as signaling molecules in ageing.

  15. Direct adaptive control using feedforward neural networks

    OpenAIRE

    Cajueiro, Daniel Oliveira; Hemerly, Elder Moreira

    2003-01-01

    ABSTRACT: This paper proposes a new scheme for direct neural adaptive control that works efficiently employing only one neural network, used for simultaneously identifying and controlling the plant. The idea behind this structure of adaptive control is to compensate the control input obtained by a conventional feedback controller. The neural network training process is carried out by using two different techniques: backpropagation and extended Kalman filter algorithm. Additionally, the conver...

  16. A 12-Week Physical and Cognitive Exercise Program Can Improve Cognitive Function and Neural Efficiency in Community-Dwelling Older Adults: A Randomized Controlled Trial.

    Science.gov (United States)

    Nishiguchi, Shu; Yamada, Minoru; Tanigawa, Takanori; Sekiyama, Kaoru; Kawagoe, Toshikazu; Suzuki, Maki; Yoshikawa, Sakiko; Abe, Nobuhito; Otsuka, Yuki; Nakai, Ryusuke; Aoyama, Tomoki; Tsuboyama, Tadao

    2015-07-01

To investigate whether a 12-week physical and cognitive exercise program can improve cognitive function and brain activation efficiency in community-dwelling older adults. Randomized controlled trial. Kyoto, Japan. Community-dwelling older adults (N = 48) were randomized into an exercise group (n = 24) and a control group (n = 24). Exercise group participants received a weekly dual task-based multimodal exercise class in combination with pedometer-based daily walking exercise during the 12-week intervention phase. Control group participants did not receive any intervention and were instructed to spend their time as usual during the intervention phase. The outcome measures were global cognitive function, memory function, executive function, and brain activation (measured using functional magnetic resonance imaging) associated with visual short-term memory. Exercise group participants had significantly greater postintervention improvement in memory and executive functions than the control group. These findings indicate that a 12-week physical and cognitive exercise program can improve the efficiency of brain activation during cognitive tasks in older adults, which is associated with improvements in memory and executive function. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  17. Neural Network Algorithm for Particle Loading

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2003-01-01

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given

  18. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the concept of the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
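A multi-layer perceptron of the kind discussed in the lecture can be sketched as an event classifier whose single sigmoid output is read as a signal-versus-background score. The weights below are illustrative placeholders, not a trained network, and the three input variables are purely hypothetical.

```python
# Sketch of a multi-layer perceptron used as an event classifier:
# input variables -> hidden tanh layer -> one sigmoid output node
# whose value in (0, 1) is read as a signal-vs-background score.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_score(x, W1, b1, w2, b2):
    h = np.tanh(W1 @ x + b1)          # hidden layer
    return sigmoid(w2 @ h + b2)       # scalar score in (0, 1)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # placeholder weights
w2, b2 = rng.normal(size=5), 0.0

event = np.array([0.2, -1.0, 0.7])    # three reconstructed variables
score = mlp_score(event, W1, b1, w2, b2)
assert 0.0 < score < 1.0              # usable as a classification score
```

In practice one trains the weights on simulated signal and background events and then places a cut on the score to select candidate events.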

  19. Social learning and evolution: the cultural intelligence hypothesis

    Science.gov (United States)

    van Schaik, Carel P.; Burkart, Judith M.

    2011-01-01

    If social learning is more efficient than independent individual exploration, animals should learn vital cultural skills exclusively, and routine skills faster, through social learning, provided they actually use social learning preferentially. Animals with opportunities for social learning indeed do so. Moreover, more frequent opportunities for social learning should boost an individual's repertoire of learned skills. This prediction is confirmed by comparisons among wild great ape populations and by social deprivation and enculturation experiments. These findings shaped the cultural intelligence hypothesis, which complements the traditional benefit hypotheses for the evolution of intelligence by specifying the conditions in which these benefits can be reaped. The evolutionary version of the hypothesis argues that species with frequent opportunities for social learning should more readily respond to selection for a greater number of learned skills. Because improved social learning also improves asocial learning, the hypothesis predicts a positive interspecific correlation between social-learning performance and individual learning ability. Variation among primates supports this prediction. The hypothesis also predicts that more heavily cultural species should be more intelligent. Preliminary tests involving birds and mammals support this prediction too. The cultural intelligence hypothesis can also account for the unusual cognitive abilities of humans, as well as our unique mechanisms of skill transfer. PMID:21357223

  20. Social learning and evolution: the cultural intelligence hypothesis.

    Science.gov (United States)

    van Schaik, Carel P; Burkart, Judith M

    2011-04-12

    If social learning is more efficient than independent individual exploration, animals should learn vital cultural skills exclusively, and routine skills faster, through social learning, provided they actually use social learning preferentially. Animals with opportunities for social learning indeed do so. Moreover, more frequent opportunities for social learning should boost an individual's repertoire of learned skills. This prediction is confirmed by comparisons among wild great ape populations and by social deprivation and enculturation experiments. These findings shaped the cultural intelligence hypothesis, which complements the traditional benefit hypotheses for the evolution of intelligence by specifying the conditions in which these benefits can be reaped. The evolutionary version of the hypothesis argues that species with frequent opportunities for social learning should more readily respond to selection for a greater number of learned skills. Because improved social learning also improves asocial learning, the hypothesis predicts a positive interspecific correlation between social-learning performance and individual learning ability. Variation among primates supports this prediction. The hypothesis also predicts that more heavily cultural species should be more intelligent. Preliminary tests involving birds and mammals support this prediction too. The cultural intelligence hypothesis can also account for the unusual cognitive abilities of humans, as well as our unique mechanisms of skill transfer.

  1. A double-blind, placebo-controlled study on the effects of lutein and zeaxanthin on neural processing speed and efficiency.

    Directory of Open Access Journals (Sweden)

    Emily R Bovier

Full Text Available Lutein and zeaxanthin are major carotenoids in the eye but are also found in post-receptoral visual pathways. It has been hypothesized that these pigments influence the processing of visual signals within and beyond the retina, and that increasing lutein and zeaxanthin levels within the visual system will lead to increased visual processing speeds. To test this, we measured macular pigment density (as a biomarker of lutein and zeaxanthin levels in the brain), critical flicker fusion (CFF) thresholds, and visual motor reaction time in young healthy subjects (n = 92). Changes in these outcome variables were also assessed after four months of supplementation with either placebo (n = 10), zeaxanthin only (20 mg/day; n = 29), or a mixed formulation containing 26 mg/day zeaxanthin, 8 mg/day lutein, and 190 mg/day mixed omega-3 fatty acids (n = 25). Significant correlations were found between retinal lutein and zeaxanthin (macular pigment) and CFF thresholds (p<0.01) and visual motor performance (overall p<0.01). Supplementation with zeaxanthin and the mixed formulation (considered together) produced significant (p<0.01) increases in CFF thresholds (∼12%) and visual motor reaction time (∼10%) compared to placebo. In general, increasing macular pigment density through supplementation (average increase of about 0.09 log units) resulted in significant improvements in visual processing speed, even when testing young, healthy individuals who tend to be at peak efficiency.

  2. Validity of Linder Hypothesis in Bric Countries

    Directory of Open Access Journals (Sweden)

    Rana Atabay

    2016-03-01

Full Text Available In this study, the theory of similarity in preferences (the Linder hypothesis) is introduced, and trade among the BRIC countries is examined to determine whether it is consistent with this hypothesis. Using data for the period 1996–2010, the study applies panel data analysis in order to provide evidence regarding the empirical validity of the Linder hypothesis for the BRIC countries' international trade. Empirical findings show that the trade between the BRIC countries supports the Linder hypothesis.

  3. Highly efficient simultaneous ultrasonic assisted adsorption of brilliant green and eosin B onto ZnS nanoparticles loaded activated carbon: Artificial neural network modeling and central composite design optimization

    Science.gov (United States)

    Jamshidi, M.; Ghaedi, M.; Dashtian, K.; Ghaedi, A. M.; Hajati, S.; Goudarzi, A.; Alipanahpour, E.

    2016-01-01

In this work, central composite design (CCD) combined with response surface methodology (RSM) and a desirability function approach (DFA) provided information about the operational conditions, as well as about the interactions and main effects of the variables involved in the simultaneous ultrasound-assisted removal of brilliant green (BG) and eosin B (EB) by zinc sulfide nanoparticles loaded on activated carbon (ZnS-NPs-AC). The spectral overlap between the BG and EB dyes was extensively reduced and/or eliminated by a derivative spectrophotometric method, while a multi-layer artificial neural network (ML-ANN) model trained with the Levenberg-Marquardt (LM) algorithm was used for building a predictive model of BG and EB removal. The ANN was able to efficiently forecast the simultaneous BG and EB removal, as confirmed by reasonable numerical values, i.e., an MSE of 0.0021 and R² of 0.9589, and an MSE of 0.0022 and R² of 0.9455, for the testing data set, respectively. The results reveal acceptable agreement between the experimental data and the ANN-predicted results. The Langmuir model, as the best model for fitting the experimental data on BG and EB removal, indicates a high, economic and profitable adsorption capacity (258.7 and 222.2 mg g⁻¹), which supports and confirms its applicability for wastewater treatment.

  4. Subjective duration distortions mirror neural repetition suppression.

    Directory of Open Access Journals (Sweden)

    Vani Pariyadath

    Full Text Available Subjective duration is strongly influenced by repetition and novelty, such that an oddball stimulus in a stream of repeated stimuli appears to last longer in duration by comparison. We hypothesize that this duration illusion, called the temporal oddball effect, is a result of the difference in expectation between the oddball and the repeated stimuli. Specifically, we conjecture that the repeated stimuli contract in duration as a result of increased predictability; these duration contractions, we suggest, result from decreased neural response amplitude with repetition, known as repetition suppression. Participants viewed trials consisting of lines presented at a particular orientation (standard stimuli) followed by a line presented at a different orientation (oddball stimulus). We found that the size of the oddball effect correlates with the number of repetitions of the standard stimulus as well as with the degree to which the oddball deviates from it; both of these results are consistent with a repetition suppression hypothesis. Further, we find that the temporal oddball effect is sensitive to experimental context; that is, the size of the oddball effect for a particular experimental trial is influenced by the range of duration distortions seen in preceding trials. Our data suggest that the repetition-related duration contractions causing the oddball effect are a result of neural repetition suppression. More generally, subjective duration may reflect the prediction error associated with a stimulus and, consequently, the efficiency of encoding that stimulus. Additionally, we emphasize that experimental context effects need to be taken into consideration when designing duration-related tasks.

  5. Testing the gravitational instability hypothesis?

    Science.gov (United States)

    Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.

    1994-01-01

    that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.

  6. Hypothesis Testing in the Real World

    Science.gov (United States)

    Miller, Jeff

    2017-01-01

    Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
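
    The logic Miller defends — ask how often data at least as extreme as those observed would arise if the null hypothesis were true — can be made concrete with a permutation test. This is our own illustration, not taken from the article, and the data are made up:

    ```python
    import numpy as np

    # Two small samples; the question: could the observed mean difference
    # plausibly arise if group labels were arbitrary (the null hypothesis)?
    a = np.array([4.1, 5.0, 4.7, 5.3, 4.9, 5.1])
    b = np.array([3.0, 3.4, 2.9, 3.6, 3.2, 3.1])
    observed = a.mean() - b.mean()

    rng = np.random.default_rng(1)
    pooled = np.concatenate([a, b])
    count = 0
    n_perm = 10_000
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1

    p_value = (count + 1) / (n_perm + 1)  # add-one to avoid reporting p = 0
    print(f"observed diff = {observed:.2f}, permutation p = {p_value:.4f}")
    ```

    A small p-value says only that data this extreme would rarely arise by label-shuffling alone, which is exactly the hypothesis-testing logic at issue.
    
    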

  7. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for
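
    One way to make the notion of classical error probabilities of a Bayes factor concrete is to simulate data under the null and count how often a default-style Bayes factor crosses a decision threshold. The sketch below uses a rough BIC-based approximation to a Bayes factor rather than the paper's actual default Bayes factors, and the "BF10 > 3" cutoff is an illustrative convention:

    ```python
    import numpy as np

    def bic_bayes_factor_10(x):
        """BIC-approximated Bayes factor for H1: mu != 0 vs H0: mu = 0
        for one-sample normal data (a rough stand-in for a default BF)."""
        n = len(x)
        # Maximized log-likelihoods under each model (Gaussian, MLE variance).
        s2_1 = np.mean((x - x.mean()) ** 2)        # H1: free mean
        s2_0 = np.mean(x ** 2)                     # H0: mean fixed at 0
        ll1 = -0.5 * n * (np.log(2 * np.pi * s2_1) + 1)
        ll0 = -0.5 * n * (np.log(2 * np.pi * s2_0) + 1)
        bic1 = 2 * np.log(n) - 2 * ll1             # two free params (mu, sigma)
        bic0 = 1 * np.log(n) - 2 * ll0             # one free param (sigma)
        return np.exp((bic0 - bic1) / 2)

    rng = np.random.default_rng(7)
    n_sim, n = 2000, 50
    false_alarms = sum(
        bic_bayes_factor_10(rng.normal(0, 1, n)) > 3 for _ in range(n_sim)
    )
    type_i = false_alarms / n_sim
    print(f"Estimated type I error of the 'BF10 > 3' rule: {type_i:.3f}")
    ```

    Repeating the simulation with a nonzero true mean would estimate the corresponding type II error, the other quantity the paper studies.
    
    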

  8. Reassessing the Trade-off Hypothesis

    DEFF Research Database (Denmark)

    Rosas, Guillermo; Manzetti, Luigi

    2015-01-01

    Do economic conditions drive voters to punish politicians that tolerate corruption? Previous scholarly work contends that citizens in young democracies support corrupt governments that are capable of promoting good economic outcomes, the so-called trade-off hypothesis. We test this hypothesis based...

  9. Mastery Learning and the Decreasing Variability Hypothesis.

    Science.gov (United States)

    Livingston, Jennifer A.; Gentile, J. Ronald

    1996-01-01

    This report results from studies that tested two variations of Bloom's decreasing variability hypothesis using performance on successive units of achievement in four graduate classrooms that used mastery learning procedures. Data do not support the decreasing variability hypothesis; rather, they show no change over time. (SM)

  10. Serotonin, neural markers and memory

    Directory of Open Access Journals (Sweden)

    Alfredo eMeneses

    2015-07-01

    Full Text Available Diverse neuropsychiatric disorders present dysfunctional memory, and no effective treatment exists for them, likely as a result of the absence of neural markers associated with memory. Neurotransmitter systems and signaling pathways have been implicated in memory and dysfunctional memory; however, their role is poorly understood. Hence, neural markers and cerebral functions and dysfunctions are reviewed. To our knowledge, no previous systematic works have been published addressing these issues. The interactions among behavioral tasks, control groups, and molecular changes and/or pharmacological effects are discussed. Neurotransmitter receptors and signaling pathways during normal and abnormally functioning memory are reviewed, with an emphasis on the behavioral aspects of memory. The focus is on serotonin, since it is a well-characterized neurotransmitter with multiple pharmacological tools and well-characterized downstream signaling in mammalian species. The 5-HT1A, 5-HT4, 5-HT5, 5-HT6 and 5-HT7 receptors, as well as SERT (the serotonin transporter), seem to be useful neural markers and/or therapeutic targets. Certainly, if the evidence mentioned is replicated, then the translatability from preclinical and clinical studies to neural changes might be confirmed. Hypotheses and theories might provide appropriate limits and perspectives of evidence

  11. Synchronization and phonological skills: precise auditory timing hypothesis (PATH

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature that meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  12. Trade-off between multiple constraints enables simultaneous formation of modules and hubs in neural systems.

    Directory of Open Access Journals (Sweden)

    Yuhan Chen

    Full Text Available The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between the physical cost and the functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatory optimization of wiring cost and processing efficiency constraints, using a control parameter α, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization in a broad range of α, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs in a range of α values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real
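
    The competition between the two constraints can be sketched on a toy geometric network: adding long-range shortcuts to a locally wired ring raises total wiring length while lowering the average number of processing steps, and a control parameter alpha weights the two costs. All numbers below are illustrative; this is not the paper's reconstruction procedure.

    ```python
    import numpy as np

    def avg_path_length(adj):
        """Average shortest-path length in hops (Floyd-Warshall)."""
        n = len(adj)
        d = np.where(adj, 1.0, np.inf)
        np.fill_diagonal(d, 0.0)
        for k in range(n):
            d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
        return d[np.triu_indices(n, 1)].mean()

    def wiring_cost(adj, pos):
        """Total Euclidean length of all edges."""
        i, j = np.triu_indices(len(adj), 1)
        m = adj[i, j]
        return np.linalg.norm(pos[i[m]] - pos[j[m]], axis=1).sum()

    n = 20
    theta = 2 * np.pi * np.arange(n) / n
    pos = np.c_[np.cos(theta), np.sin(theta)]          # nodes on a unit ring

    # Purely local wiring: each node connected to two neighbours per side.
    local = np.zeros((n, n), dtype=bool)
    idx = np.arange(n)
    for shift in (1, 2):
        local[idx, (idx + shift) % n] = True
        local[(idx + shift) % n, idx] = True

    # Same network plus a few long-range shortcuts: cheap in processing
    # steps, expensive in wire.
    shortcuts = local.copy()
    for i, j in [(0, 10), (3, 13), (6, 16), (9, 19)]:
        shortcuts[i, j] = shortcuts[j, i] = True

    alpha = 0.5  # trade-off weight, in the spirit of the abstract's alpha
    for name, adj in [("local only", local), ("with shortcuts", shortcuts)]:
        w, p = wiring_cost(adj, pos), avg_path_length(adj)
        print(f"{name:15s} wiring={w:6.2f}  avg hops={p:.2f}  "
              f"E(alpha)={alpha * w + (1 - alpha) * p:.2f}")
    ```

    Sweeping alpha from 0 to 1 shifts the optimum between the two wiring styles, which is the qualitative effect the reconstruction exploits.
    
    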

  13. Implications of the Bohm-Aharonov hypothesis

    International Nuclear Information System (INIS)

    Ghirardi, G.C.; Rimini, A.; Weber, T.

    1976-01-01

    It is proved that the Bohm-Aharonov hypothesis concerning widely separated subsystems of composite quantum systems implies that it is impossible to express the dynamical evolution in terms of the density operator

  14. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well
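
    The single-agent building block underlying such sequential schemes is Wald's sequential probability ratio test (SPRT): accumulate the log-likelihood ratio sample by sample and stop as soon as it crosses a threshold set by the desired error probabilities. The sketch below is our own illustration; the multi-agent measurement and disagreement costs discussed in the paper are not modeled.

    ```python
    import numpy as np

    def sprt(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
        """Wald's SPRT for Bernoulli H0: p=p0 vs H1: p=p1.
        Returns ('H0' | 'H1' | 'undecided', number of samples used)."""
        upper = np.log((1 - beta) / alpha)      # accept H1 above this
        lower = np.log(beta / (1 - alpha))      # accept H0 below this
        llr = 0.0
        for t, x in enumerate(samples, 1):
            llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "H1", t
            if llr <= lower:
                return "H0", t
        return "undecided", len(samples)

    rng = np.random.default_rng(3)
    # Data truly drawn with p = 0.7, so H1 should usually be accepted.
    decisions = [sprt(rng.random(500) < 0.7)[0] for _ in range(300)]
    rate = decisions.count("H1") / len(decisions)
    print(f"Fraction of runs correctly accepting H1: {rate:.2f}")
    ```

    The appeal of the sequential formulation is that the test typically stops after far fewer samples than a fixed-size test of the same error probabilities.
    
    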

  15. The (not so) Immortal Strand Hypothesis

    OpenAIRE

    Tomasetti, Cristian; Bozic, Ivana

    2015-01-01

    Background: Non-random segregation of DNA strands during stem cell replication has been proposed as a mechanism to minimize accumulated genetic errors in stem cells of rapidly dividing tissues. According to this hypothesis, an “immortal” DNA strand is passed to the stem cell daughter and not to the more differentiated cell, keeping the stem cell lineage replication error-free. Since the hypothesis was introduced, experimental evidence both for and against it has been presented. Principal...

  16. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming

    2010-01-01

    Modular neural network is a popular neural network model with many successful applications. In this paper, a sequential Bayesian learning (SBL) method is proposed for modular neural networks, aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight... benchmark problems have demonstrated that the proposed method can perform information aggregation efficiently in data modeling....

  17. Multiple hypothesis tracking for the cyber domain

    Science.gov (United States)

    Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.

    2011-09-01

    This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities from non-kinematic constraints, such as those imposed by cyber attacks, in a potentially dense false-alarm background. MHT is widely recognized as the premier method for avoiding the corruption of tracks by spurious data in the kinematic domain, but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-scan pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that, while no tracker is perfect, applying MHETA-INFERD allows advanced non-kinematic tracks to be captured in an automated way, performs better than non-MHT approaches, and decreases analyst response time to cyber threats.
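
    The domain-agnostic hypothesis-management core that the paper separates from track maintenance can be caricatured in a few lines: enumerate feasible measurement-to-track assignments, score each hypothesis, and prune to the best few. This toy (Gaussian scores, made-up detection and false-alarm parameters, exhaustive enumeration rather than a practical clustering scheme) is our own illustration, not MHETA's implementation:

    ```python
    import itertools
    import numpy as np

    # Two predicted track positions, three measurements (one may be a
    # false alarm). Each hypothesis assigns each track a measurement or
    # a missed detection (None).
    tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
    meas = np.array([[0.3, -0.2], [4.6, 5.1], [9.0, 1.0]])
    sigma, fa_density = 0.5, 1e-3   # assumed noise and false-alarm density

    def log_score(assignment):
        """assignment[i] = measurement index for track i, or None (missed)."""
        used, s = set(), 0.0
        for trk, m in zip(tracks, assignment):
            if m is None:
                s += np.log(0.1)            # assumed missed-detection prob.
                continue
            r2 = np.sum((meas[m] - trk) ** 2)
            s += -r2 / (2 * sigma**2) - np.log(2 * np.pi * sigma**2)
            used.add(m)
        # Unassigned measurements are explained as false alarms.
        s += (len(meas) - len(used)) * np.log(fa_density)
        return s

    options = list(range(len(meas))) + [None]
    hyps = [a for a in itertools.product(options, repeat=len(tracks))
            if None in a or a[0] != a[1]]   # a measurement feeds one track only
    best = sorted(hyps, key=log_score, reverse=True)[:3]   # prune to top 3
    for h in best:
        print(h, round(log_score(h), 1))
    ```

    Real MHT defers the pruning decision over several scans; the point here is only that the scoring and pruning machinery is independent of what the "tracks" physically are, which is what makes the cyber extension possible.
    
    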

  18. Aminoglycoside antibiotics and autism: a speculative hypothesis

    Directory of Open Access Journals (Sweden)

    Manev Hari

    2001-10-01

    Full Text Available Abstract Background Recently, it has been suspected that there is a relationship between therapy with some antibiotics and the onset of autism; but even more curious, some children benefited transiently from a subsequent treatment with a different antibiotic. Here, we speculate how aminoglycoside antibiotics might be associated with autism. Presentation We hypothesize that aminoglycoside antibiotics could (a) trigger the autism syndrome in susceptible infants by causing stop codon readthrough, i.e., a misreading of the genetic code of a hypothetical critical gene, and/or (b) improve autism symptoms by correcting the premature stop codon mutation in a hypothetical polymorphic gene linked to autism. Testing Investigate, retrospectively, whether a link exists between aminoglycoside use (which is not extensive in children) and the onset of autism symptoms (hypothesis "a"), or between aminoglycoside use and improvement of these symptoms (hypothesis "b"). Whereas a prospective study to test hypothesis "a" is not ethically justifiable, a study could be designed to test hypothesis "b". Implications It should be stressed that at this stage no direct evidence supports our speculative hypothesis and that its main purpose is to initiate development of new ideas that, eventually, would improve our understanding of the pathobiology of autism.

  19. A neural network approach to burst detection.

    Science.gov (United States)

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for the prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  20. The equilibrium point hypothesis and its application to speech motor control.

    Science.gov (United States)

    Perrier, P; Ostry, D J; Laboissière, R

    1996-04-01

    In this paper, we address a number of issues in speech research in the context of the equilibrium point hypothesis of motor control. The hypothesis suggests that movements arise from shifts in the equilibrium position of the limb or the speech articulator. The equilibrium is a consequence of the interaction of central neural commands, reflex mechanisms, muscle properties, and external loads, but it is under the control of central neural commands. These commands act to shift the equilibrium via centrally specified signals acting at the level of the motoneurone (MN) pool. In the context of a model of sagittal plane jaw and hyoid motion based on the lambda version of the equilibrium point hypothesis, we consider the implications of this hypothesis for the notion of articulatory targets. We suggest that simple linear control signals may underlie smooth articulatory trajectories. We also explore the phenomenon of intra-articulator coarticulation in jaw movement, suggesting that even when no account is taken of upcoming context, apparent anticipatory changes in movement amplitude and duration may arise from dynamics. Finally, we present a number of simulations that show in different ways how variability in measured kinematics can arise in spite of constant-magnitude speech control signals.

  1. Testing competing forms of the Milankovitch hypothesis

    DEFF Research Database (Denmark)

    Kaufmann, Robert K.; Juselius, Katarina

    2016-01-01

    We test competing forms of the Milankovitch hypothesis by estimating the coefficients and diagnostic statistics for a cointegrated vector autoregressive model that includes 10 climate variables and four exogenous variables for solar insolation. The estimates are consistent with the physical... ice volume and solar insolation. The estimated adjustment dynamics show that solar insolation affects an array of climate variables other than ice volume, each at a unique rate. This implies that previous efforts to test the strong form of the Milankovitch hypothesis by examining the relationship... that the latter is consistent with a weak form of the Milankovitch hypothesis and that it should be restated as follows: internal climate dynamics impose perturbations on glacial cycles that are driven by solar insolation. Our results show that these perturbations are likely caused by slow adjustment between land...

  2. Rejecting the equilibrium-point hypothesis.

    Science.gov (United States)

    Gottlieb, G L

    1998-01-01

    The lambda version of the equilibrium-point (EP) hypothesis as developed by Feldman and colleagues has been widely used and cited with insufficient critical understanding. This article offers a small antidote to that lack. First, the hypothesis implicitly, unrealistically assumes identical transformations of lambda into muscle tension for antagonist muscles. Without that assumption, its definitions of command variables R, C, and lambda are incompatible and an EP is not defined exclusively by R nor is it unaffected by C. Second, the model assumes unrealistic and unphysiological parameters for the damping properties of the muscles and reflexes. Finally, the theory lacks rules for two of its three command variables. A theory of movement should offer insight into why we make movements the way we do and why we activate muscles in particular patterns. The EP hypothesis offers no unique ideas that are helpful in addressing either of these questions.

  3. The linear hypothesis and radiation carcinogenesis

    International Nuclear Information System (INIS)

    Roberts, P.B.

    1981-10-01

    An assumption central to most estimations of the carcinogenic potential of low levels of ionising radiation is that the risk always increases in direct proportion to the dose received. This assumption (the linear hypothesis) has been both strongly defended and attacked on several counts. It appears unlikely that conclusive, direct evidence on the validity of the hypothesis will be forthcoming. We review the major indirect arguments used in the debate. All of them are subject to objections that can seriously weaken their case. In the present situation, retention of the linear hypothesis as the basis of extrapolations from high to low dose levels can lead to excessive fears, over-regulation and unnecessarily expensive protection measures. To offset these possibilities, support is given to suggestions urging a cut-off dose, probably some fraction of natural background, below which risks can be deemed acceptable

  4. Rayleigh's hypothesis and the geometrical optics limit.

    Science.gov (United States)

    Elfouhaily, Tanos; Hahn, Thomas

    2006-09-22

    The Rayleigh hypothesis (RH) is often invoked in the theoretical and numerical treatment of rough surface scattering in order to decouple the analytical form of the scattered field. The hypothesis stipulates that the scattered field away from the surface can be extended down onto the rough surface even though it is formed solely by up-going waves. Traditionally this hypothesis is systematically used to derive the Volterra series under the small perturbation method, which is equivalent to the low-frequency limit. In this Letter we demonstrate that the RH also carries the high-frequency or geometrical optics limit, at least to first order. This finding has never been explicitly derived in the literature. Our result supports the idea that the RH might be an exact solution under some constraints in the general case of random rough surfaces and not only in the case of small-slope deterministic periodic gratings.

  5. Equilibrium-point control hypothesis examined by measured arm stiffness during multijoint movement.

    Science.gov (United States)

    Gomi, H; Kawato

    1996-04-05

    For the last 20 years, it has been hypothesized that well-coordinated, multijoint movements are executed without complex computation by the brain, with the use of springlike muscle properties and peripheral neural feedback loops. However, it has been technically and conceptually difficult to examine this "equilibrium-point control" hypothesis directly in physiological or behavioral experiments. A high-performance manipulandum was developed and used here to measure human arm stiffness, the magnitude of which during multijoint movement is important for this hypothesis. Here, the equilibrium-point trajectory was estimated from the measured stiffness, the actual trajectory, and the generated torque. Its velocity profile differed from that of the actual trajectory. These results argue against the hypothesis that the brain sends as a motor command only an equilibrium-point trajectory similar to the actual trajectory.

  6. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics, and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and to discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular along lines that combine findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory, and cognition, thereby bridging the gap between computational neuroscience, systems biology, and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
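
    Among the computational plasticity models such a review covers, the canonical example is pair-based spike-timing-dependent plasticity (STDP), in which the sign and size of a weight change depend on the relative timing of pre- and postsynaptic spikes. A sketch with standard exponential windows; the amplitudes and time constant below are illustrative defaults, not values from the article:

    ```python
    import numpy as np

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Pair-based STDP weight change for dt = t_post - t_pre (ms).
        Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
        return np.where(dt > 0,
                        a_plus * np.exp(-dt / tau),
                        -a_minus * np.exp(dt / tau))

    dts = np.array([-40.0, -10.0, 5.0, 30.0])
    for dt, dw in zip(dts, stdp_dw(dts)):
        print(f"dt = {dt:+6.1f} ms  ->  dw = {dw:+.5f}")
    ```

    Applied repeatedly over the spike trains of a network, this simple local rule is what drives the structural self-organization the article discusses.
    
    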

  7. Language and emotions: emotional Sapir-Whorf hypothesis.

    Science.gov (United States)

    Perlovsky, Leonid

    2009-01-01

    An emotional version of the Sapir-Whorf hypothesis suggests that differences in language emotionality influence differences among cultures no less than conceptual differences do. The conceptual contents of languages and cultures are to a significant extent determined by words and their semantic differences; these can be borrowed among languages and exchanged among cultures. Emotional differences, as suggested in the paper, are related to grammar and mostly cannot be borrowed. The paper considers conceptual and emotional mechanisms of language along with their role in the mind and in cultural evolution. Language evolution from primordial undifferentiated animal cries is discussed: while conceptual content increased, emotional content was reduced. Neural mechanisms of these processes are suggested, as well as their mathematical models: the knowledge instinct, the dual model connecting language and cognition, and neural modeling fields. The mathematical results are related to cognitive science, linguistics, and psychology. Experimental evidence and theoretical arguments are discussed. The dynamics of the hierarchy-heterarchy of human minds and cultures are formulated using a mean-field approach, and approximate equations are obtained. The knowledge instinct operating in the mind heterarchy leads to mechanisms of differentiation and synthesis determining ontological development and cultural evolution. These mathematical models identify three types of cultures: "conceptual" pragmatic cultures, in which the emotionality of language is reduced and differentiation overtakes synthesis, resulting in fast evolution at the price of uncertainty of values, self-doubt, and internal crises; "traditional-emotional" cultures, where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation; and "multi-cultural" societies, combining fast cultural evolution and stability. Unsolved problems and future theoretical and experimental directions are discussed.

  8. On the generalized gravi-magnetic hypothesis

    International Nuclear Information System (INIS)

    Massa, C.

    1989-01-01

    According to a generalization of the gravi-magnetic hypothesis (GMH) any neutral mass moving in a curvilinear path with respect to an inertial frame creates a magnetic field, dependent on the curvature radius of the path. A simple astrophysical consequence of the generalized GMH is suggested considering the special cases of binary pulsars and binary neutron stars

  9. Remarks about the hypothesis of limiting fragmentation

    International Nuclear Information System (INIS)

    Chou, T.T.; Yang, C.N.

    1987-01-01

    Remarks are made about the hypothesis of limiting fragmentation. In particular, the concept of favored and disfavored fragment distribution is introduced. Also, a sum rule is proved leading to a useful quantity called energy-fragmentation fraction. (author). 11 refs, 1 fig., 2 tabs

  10. Multiple hypothesis clustering in radar plot extraction

    NARCIS (Netherlands)

    Huizing, A.G.; Theil, A.; Dorp, Ph. van; Ligthart, L.P.

    1995-01-01

    False plots and plots with inaccurate range and Doppler estimates may severely degrade the performance of tracking algorithms in radar systems. This paper describes how a multiple hypothesis clustering technique can be applied to mitigate the problems involved in plot extraction. The measures of

  11. The (not so) immortal strand hypothesis

    Directory of Open Access Journals (Sweden)

    Cristian Tomasetti

    2015-03-01

    Significance: Utilizing an approach that is fundamentally different from previous efforts to confirm or refute the immortal strand hypothesis, we provide evidence against non-random segregation of DNA during stem cell replication. Our results strongly suggest that parental DNA is passed randomly to stem cell daughters and provide new insight into the mechanism of DNA replication in stem cells.

  12. A Developmental Study of the Infrahumanization Hypothesis

    Science.gov (United States)

    Martin, John; Bennett, Mark; Murray, Wayne S.

    2008-01-01

    Intergroup attitudes in children were examined based on Leyens's "infrahumanization hypothesis". This suggests that some uniquely human emotions, such as shame and guilt (secondary emotions), are reserved for the in-group, whilst emotions that are not uniquely human and are shared with animals, such as anger and pleasure (primary…

  13. Morbidity and Infant Development: A Hypothesis.

    Science.gov (United States)

    Pollitt, Ernesto

    1983-01-01

    Results of a study conducted in 14 villages of Sui Lin Township, Taiwan, suggest the hypothesis that, under conditions of extreme economic impoverishment and among children within populations where energy protein malnutrition is endemic, there is an inverse relationship between incidence of morbidity in infancy and measures of motor and mental…

  14. Diagnostic Hypothesis Generation and Human Judgment

    Science.gov (United States)

    Thomas, Rick P.; Dougherty, Michael R.; Sprenger, Amber M.; Harbison, J. Isaiah

    2008-01-01

    Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the…

  15. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...

  16. [Resonance hypothesis of heart rate variability origin].

    Science.gov (United States)

    Sheĭkh-Zade, Iu R; Mukhambetaliev, G Kh; Cherednik, I L

    2009-09-01

    A hypothesis is advanced that heart rate variability reflects beat-to-beat regulation of cardiac cycle duration, which ensures resonant interaction between respiratory fluctuations and the intrinsic volume fluctuations of the arterial system, thereby minimizing the energy expenditure of the cardiorespiratory system. Myogenic, parasympathetic and sympathetic mechanisms of heart rate variability are described.

  17. In Defense of Chi's Ontological Incompatibility Hypothesis

    Science.gov (United States)

    Slotta, James D.

    2011-01-01

    This article responds to an article by A. Gupta, D. Hammer, and E. F. Redish (2010) that asserts that M. T. H. Chi's (1992, 2005) hypothesis of an "ontological commitment" in conceptual development is fundamentally flawed. In this article, I argue that Chi's theoretical perspective is still very much intact and that the critique offered by Gupta…

  18. Vacuum counterexamples to the cosmic censorship hypothesis

    International Nuclear Information System (INIS)

    Miller, B.D.

    1981-01-01

    In cylindrically symmetric vacuum spacetimes it is possible to specify nonsingular initial conditions such that timelike singularities will (necessarily) evolve from these conditions. Examples are given; the spacetimes are somewhat analogous to one of the spherically symmetric counterexamples to the cosmic censorship hypothesis

  19. Prospective detection of large prediction errors: a hypothesis testing approach

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
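
    The reduced test described in the abstract (raise an alarm when the empirical variance of the predictive distribution is high) can be sketched in a few lines. The function name, interface, and threshold rule below are illustrative assumptions, not taken from the paper:

```python
import random
import statistics

def large_error_alarm(samples, prediction, sigma0, ratio=2.0):
    # Under H0 the future position is a concentrated Gaussian around the
    # prediction; under H1 it is non-informative. With method-of-moments
    # estimates, the LRT reduces to a test on the empirical variance of
    # the predictive samples: alarm when the spread is too large.
    emp_var = statistics.pvariance([s - prediction for s in samples])
    return emp_var > ratio * sigma0 ** 2

rng = random.Random(0)
confident = [rng.gauss(0.0, 0.1) for _ in range(200)]     # tight around the prediction
uncertain = [rng.uniform(-2.0, 2.0) for _ in range(200)]  # spread out: unreliable
print(large_error_alarm(confident, 0.0, sigma0=0.2))  # no alarm expected
print(large_error_alarm(uncertain, 0.0, sigma0=0.2))  # alarm expected
```

    This matches the intuition stated in the abstract: high predictive uncertainty signals a potentially large prediction error, low uncertainty an accurate prediction.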

  20. A novel hypothesis splitting method implementation for multi-hypothesis filters

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Andersen, Nils Axel

    2013-01-01

    The paper presents a multi-hypothesis filter library featuring a novel method for splitting Gaussians into ones with smaller variances. The library is written in C++ for high performance and the source code is open and free1. The multi-hypothesis filters commonly approximate the distribution tran...

  1. The Income Inequality Hypothesis Revisited : Assessing the Hypothesis Using Four Methodological Approaches

    NARCIS (Netherlands)

    Kragten, N.; Rözer, J.

    The income inequality hypothesis states that income inequality has a negative effect on individuals' health, partially because it reduces social trust. This article aims to critically assess the income inequality hypothesis by comparing several analytical strategies, namely OLS regression,

  2. Einstein's Revolutionary Light-Quantum Hypothesis

    Science.gov (United States)

    Stuewer, Roger H.

    2005-05-01

    The paper in which Albert Einstein proposed his light-quantum hypothesis was the only one of his great papers of 1905 that he himself termed ``revolutionary.'' Contrary to widespread belief, Einstein did not propose his light-quantum hypothesis ``to explain the photoelectric effect.'' Instead, he based his argument for light quanta on the statistical interpretation of the second law of thermodynamics, with the photoelectric effect being only one of three phenomena that he offered as possible experimental support for it. I will discuss Einstein's light-quantum hypothesis of 1905 and his introduction of the wave-particle duality in 1909 and then turn to the reception of his work on light quanta by his contemporaries. We will examine the reasons that prominent physicists advanced to reject Einstein's light-quantum hypothesis in succeeding years. Those physicists included Robert A. Millikan, even though he provided convincing experimental proof of the validity of Einstein's equation of the photoelectric effect in 1915. The turning point came after Arthur Holly Compton discovered the Compton effect in late 1922, but even then Compton's discovery was contested both on experimental and on theoretical grounds. Niels Bohr, in particular, had never accepted the reality of light quanta and now, in 1924, proposed a theory, the Bohr-Kramers-Slater theory, which assumed that energy and momentum were conserved only statistically in microscopic interactions. Only after that theory was disproved experimentally in 1925 was Einstein's revolutionary light-quantum hypothesis generally accepted by physicists---a full two decades after Einstein had proposed it.

  3. A Dopamine Hypothesis of Autism Spectrum Disorder.

    Science.gov (United States)

    Pavăl, Denis

    2017-01-01

    Autism spectrum disorder (ASD) comprises a group of neurodevelopmental disorders characterized by social deficits and stereotyped behaviors. While several theories have emerged, the pathogenesis of ASD remains unknown. Although studies report dopamine signaling abnormalities in autistic patients, a coherent dopamine hypothesis which could link neurobiology to behavior in ASD is currently lacking. In this paper, we present such a hypothesis by proposing that autistic behavior arises from dysfunctions in the midbrain dopaminergic system. We hypothesize that a dysfunction of the mesocorticolimbic circuit leads to social deficits, while a dysfunction of the nigrostriatal circuit leads to stereotyped behaviors. Furthermore, we discuss 2 key predictions of our hypothesis, with emphasis on clinical and therapeutic aspects. First, we argue that dopaminergic dysfunctions in the same circuits should associate with autistic-like behavior in nonautistic subjects. Concerning this, we discuss the case of PANDAS (pediatric autoimmune neuropsychiatric disorder associated with streptococcal infections) which displays behaviors similar to those of ASD, presumed to arise from dopaminergic dysfunctions. Second, we argue that providing dopamine modulators to autistic subjects should lead to a behavioral improvement. Regarding this, we present clinical studies of dopamine antagonists which seem to have improving effects on autistic behavior. Furthermore, we explore the means of testing our hypothesis by using neuroreceptor imaging, which could provide comprehensive evidence for dopamine signaling dysfunctions in autistic subjects. Lastly, we discuss the limitations of our hypothesis. Along these lines, we aim to provide a dopaminergic model of ASD which might lead to a better understanding of the ASD pathogenesis. © 2017 S. Karger AG, Basel.

  4. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)]

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different than those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural network.
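
    The substitution the abstract describes (a maximum of sums in place of a sum of products) can be shown with a minimal single-layer sketch; the helper name and weight layout are assumptions for illustration:

```python
def morphological_layer(x, W):
    # Morphological computation: y[j] = max_i (W[j][i] + x[i]).
    # A traditional linear layer would instead compute sum_i (W[j][i] * x[i]),
    # so the result here is nonlinear even before any thresholding.
    return [max(w + xi for w, xi in zip(row, x)) for row in W]

x = [1.0, -2.0, 0.5]
W = [[0.0, 3.0, -1.0],   # weights into output neuron 0
     [2.0, 0.0, 0.0]]    # weights into output neuron 1
print(morphological_layer(x, W))  # [1.0, 3.0]
```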

  5. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. They happen in the ... that she is pregnant. The two most common neural tube defects are spina bifida and anencephaly. In ...

  6. The neural signature of emotional memories in serial crimes.

    Science.gov (United States)

    Chassy, Philippe

    2017-10-01

    Neural plasticity is the process whereby semantic information and emotional responses are stored in neural networks. It is hypothesized that the neural networks built over time to encode the sexual fantasies that motivate serial killers to act should display a unique, detectable activation pattern. The pathological neural watermark hypothesis posits that such networks comprise activation of brain sites that reflect four cognitive components: autobiographical memory, sexual arousal, aggression, and control over aggression. The neural sites performing these cognitive functions have been successfully identified by previous research. The key findings are reviewed to hypothesise the typical pattern of activity that serial killers should display. Through the integration of biological findings into one framework, the neural approach proposed in this paper is in stark contrast with the many theories accounting for serial killers that offer non-medical taxonomies. The pathological neural watermark hypothesis offers a new framework to understand and detect deviant individuals. The technical and legal issues are briefly discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Two social brains: neural mechanisms of intersubjectivity.

    Science.gov (United States)

    Vogeley, Kai

    2017-08-19

    It is the aim of this article to present an empirically justified hypothesis about the functional roles of the two social neural systems, namely the so-called 'mirror neuron system' (MNS) and the 'mentalizing system' (MENT, also 'theory of mind network' or 'social neural network'). Both systems are recruited during cognitive processes that are either related to interaction or communication with other conspecifics, thereby constituting intersubjectivity. The hypothesis is developed in the following steps: first, the fundamental distinction that we make between persons and things is introduced; second, communication is presented as the key process that allows us to interact with others; third, the capacity to 'mentalize' or to understand the inner experience of others is emphasized as the fundamental cognitive capacity required to establish successful communication. On this background, it is proposed that MNS serves comparably early stages of social information processing related to the 'detection' of spatial or bodily signals, whereas MENT is recruited during comparably late stages of social information processing related to the 'evaluation' of emotional and psychological states of others. This hypothesis of MNS as a social detection system and MENT as a social evaluation system is illustrated by findings in the field of psychopathology. Finally, new research questions that can be derived from this hypothesis are discussed.This article is part of the themed issue 'Physiological determinants of social behaviour in animals'. © 2017 The Author(s).

  8. Robust Adaptive Neural Control of Morphing Aircraft with Prescribed Performance

    OpenAIRE

    Wu, Zhonghua; Lu, Jingchao; Shi, Jingping; Liu, Yang; Zhou, Qing

    2017-01-01

    This study proposes a low-computational composite adaptive neural control scheme for the longitudinal dynamics of a swept-back wing aircraft subject to parameter uncertainties. To relax a constraint common in conventional neural designs, whose closed-loop stability analysis requires that neural networks (NNs) be confined to their active regions, a smooth switching function is presented. By integrating minimal learning parameter (MLP) tech...

  9. Neural tissue-spheres

    DEFF Research Database (Denmark)

    Andersen, Rikke K; Johansen, Mathias; Blaabjerg, Morten

    2007-01-01

    By combining new and established protocols we have developed a procedure for isolation and propagation of neural precursor cells from the forebrain subventricular zone (SVZ) of newborn rats. Small tissue blocks of the SVZ were dissected and propagated en bloc as free-floating neural tissue...... content, thus allowing experimental studies of neural precursor cells and their niche...

  10. Fractal Markets Hypothesis and the Global Financial Crisis: Scaling, Investment Horizons and Liquidity

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    2012-01-01

    Roč. 15, č. 6 (2012), 1250065-1-1250065-13 ISSN 0219-5259 R&D Projects: GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310; SVV(CZ) 265 504 Institutional support: RVO:67985556 Keywords : fractal markets hypothesis * scaling * fractality * investment horizons * efficient markets hypothesis Subject RIV: AH - Economics Impact factor: 0.647, year: 2012 http://library.utia.cas.cz/separaty/2012/E/kristoufek-fractal markets hypothesis and the global financial crisis scaling investment horizons and liquidity.pdf

  11. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, generalized classifier neural network suffers from convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations used for reaching the minima. The proposed method is tested on 15 different data sets and performance of logarithmic learning generalized classifier neural network is compared with that of standard one. Thanks to operation range of radial basis function included by generalized classifier neural network, proposed logarithmic approach and its derivative has continuous values. This makes it possible to adopt the advantage of logarithmic fast convergence by the proposed learning method. Due to fast convergence ability of logarithmic cost function, training time is maximally decreased to 99.2%. In addition to decrease in training time, classification performance may also be improved till 60%. According to the test results, while the proposed method provides a solution for time requirement problem of generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered as an efficient way for reducing the time requirement problem of generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
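
    The abstract does not reproduce the paper's exact cost function. As a generic, hedged illustration of why a logarithmic cost can cut iteration counts, the toy below pits gradient descent on a squared error e^2 against a logarithmic cost -ln(1 - e^2), whose gradient is steeper far from the optimum. Both cost functions and all names here are assumptions for illustration, not the paper's formulation:

```python
def iterations_to_converge(grad, e0=0.9, lr=0.1, tol=1e-3, max_iter=10_000):
    # Plain gradient descent on the error e; count steps until |e| < tol.
    e, steps = e0, 0
    while abs(e) > tol and steps < max_iter:
        e -= lr * grad(e)
        steps += 1
    return steps

# Squared cost      E(e) = e^2           -> dE/de = 2e
# Logarithmic cost  E(e) = -ln(1 - e^2)  -> dE/de = 2e / (1 - e^2), for |e| < 1
squared_steps = iterations_to_converge(lambda e: 2 * e)
log_steps = iterations_to_converge(lambda e: 2 * e / (1 - e * e))
print(squared_steps, log_steps)  # the logarithmic cost converges in fewer steps
```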

  12. Neural bases of selective attention in action video game players.

    Science.gov (United States)

    Bavelier, D; Achtman, R L; Mani, M; Föcker, J

    2012-05-15

    Over the past few years, the very act of playing action video games has been shown to enhance several different aspects of visual selective attention, yet little is known about the neural mechanisms that mediate such attentional benefits. A review of the aspects of attention enhanced in action game players suggests there are changes in the mechanisms that control attention allocation and its efficiency (Hubert-Wallander, Green, & Bavelier, 2010). The present study used brain imaging to test this hypothesis by comparing attentional network recruitment and distractor processing in action gamers versus non-gamers as attentional demands increased. Moving distractors were found to elicit lesser activation of the visual motion-sensitive area (MT/MST) in gamers as compared to non-gamers, suggestive of a better early filtering of irrelevant information in gamers. As expected, a fronto-parietal network of areas showed greater recruitment as attentional demands increased in non-gamers. In contrast, gamers barely engaged this network as attentional demands increased. This reduced activity in the fronto-parietal network that is hypothesized to control the flexible allocation of top-down attention is compatible with the proposal that action game players may allocate attentional resources more automatically, possibly allowing more efficient early filtering of irrelevant information. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Tests of the Giant Impact Hypothesis

    Science.gov (United States)

    Jones, J. H.

    1998-01-01

    The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth, meaning that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.

  14. The discovered preference hypothesis - an empirical test

    DEFF Research Database (Denmark)

    Lundhede, Thomas; Ladenburg, Jacob; Olsen, Søren Bøye

    Using stated preference methods for valuation of non-market goods is known to be vulnerable to a range of biases. Some authors claim that these so-called anomalies in effect render the methods useless for the purpose. However, the Discovered Preference Hypothesis, as put forth by Plott [31], offers...... an interpretation and explanation of biases which entails that the stated preference methods need not be completely written off. In this paper we conduct a test for the validity and relevance of the DPH interpretation of biases. In a choice experiment concerning preferences for protection of Danish nature areas...... as respondents evaluate more and more choice sets. This finding supports the Discovered Preference Hypothesis interpretation and explanation of starting point bias....

  15. MOLIERE: Automatic Biomedical Hypothesis Generation System.

    Science.gov (United States)

    Sybrandt, Justin; Shtutman, Michael; Safro, Ilya

    2017-08-01

    Hypothesis generation is becoming a crucial time-saving technique which allows biomedical researchers to quickly discover implicit connections between important concepts. Typically, these systems operate on domain-specific fractions of public medical data. MOLIERE, in contrast, utilizes information from over 24.5 million documents. At the heart of our approach lies a multi-modal and multi-relational network of biomedical objects extracted from several heterogeneous datasets from the National Center for Biotechnology Information (NCBI). These objects include but are not limited to scientific papers, keywords, genes, proteins, diseases, and diagnoses. We model hypotheses using Latent Dirichlet Allocation applied to abstracts found near shortest paths discovered within this network, and demonstrate the effectiveness of MOLIERE by performing hypothesis generation on historical data. Our network, implementation, and resulting data are all publicly available to the broad scientific community.

  16. The Method of Hypothesis in Plato's Philosophy

    Directory of Open Access Journals (Sweden)

    Malihe Aboie Mehrizi

    2016-09-01

    Full Text Available The article examines the method of hypothesis in Plato's philosophy. The method is examined in the three dialogues of Meno, Phaedo, and Republic, in which it is explicitly indicated. The article shows how Plato's attitude towards the position and use of the method of hypothesis changed over the course of his philosophy. In Meno, drawing on geometry, Plato attempts to introduce a method that can be used in the realm of philosophy. Ultimately, in Republic, Plato's special attention to the method and its importance in philosophical investigation leads him to revise it. There, Plato finally introduces the particular method of philosophy, i.e., the dialectic.

  17. Debates—Hypothesis testing in hydrology: Introduction

    Science.gov (United States)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  18. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.

  19. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
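
    The idea sketched in the abstract is to treat "is the simulation correct?" as a statistical hypothesis test rather than an exact comparison. The toy below applies a two-sided z-test to a Monte Carlo estimate of the quarter-circle area pi/4; the example and all names are illustrative, not taken from the paper:

```python
import math
import random

def mc_consistent_with(target, n=100_000, z_crit=2.576, seed=1):
    # Monte Carlo estimate of the quarter-circle area (= pi/4) by sampling
    # points uniformly in the unit square and counting hits inside the circle.
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    p_hat = hits / n
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)  # binomial standard error
    z = (p_hat - target) / se                  # approximately N(0, 1) under H0
    return abs(z) < z_crit                     # two-sided test, alpha ~ 0.01

print(mc_consistent_with(math.pi / 4))  # a correct simulation passes for almost all seeds
print(mc_consistent_with(0.80))         # a wrong reference value is reliably rejected
```

    As the abstract notes, an exact-match check would always fail for a stochastic result; the test instead bounds the rate of false alarms at the chosen significance level.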

  20. Reverse hypothesis machine learning a practitioner's perspective

    CERN Document Server

    Kulkarni, Parag

    2017-01-01

    This book introduces a paradigm of reverse hypothesis machines (RHM), focusing on knowledge innovation and machine learning. Knowledge-acquisition-based learning is constrained by large volumes of data and is time consuming, so knowledge-innovation-based learning is needed. Since under-learning results in cognitive inabilities and over-learning compromises freedom, there is a need for optimal machine learning. All existing learning techniques rely on mapping input and output and establishing mathematical relationships between them. Though methods change, the paradigm remains the same: the forward hypothesis machine paradigm, which tries to minimize uncertainty. The RHM, on the other hand, makes use of uncertainty for creative learning. The approach uses limited data to help identify new and surprising solutions. It focuses on improving learnability, unlike traditional approaches, which focus on accuracy. The book is useful as a reference book for machine learning researchers and professionals as ...

  1. An endocannabinoid hypothesis of drug reward and drug addiction.

    Science.gov (United States)

    Onaivi, Emmanuel S

    2008-10-01

    Pharmacologic treatment of drug and alcohol dependency has largely been disappointing, and new therapeutic targets and hypotheses are needed. There is accumulating evidence indicating a central role for the previously unknown but ubiquitous endocannabinoid physiological control system (EPCS) in the regulation of the rewarding effects of abused substances. Thus an endocannabinoid hypothesis of drug reward is postulated. Endocannabinoids mediate retrograde signaling in neuronal tissues and are involved in the regulation of synaptic transmission to suppress neurotransmitter release by the presynaptic cannabinoid receptors (CB-Rs). This powerful modulatory action on synaptic transmission has significant functional implications and interactions with the effects of abused substances. Our data, along with those from other investigators, provide strong new evidence for a role for EPCS modulation in the effects of drugs of abuse, and specifically for involvement of cannabinoid receptors in the neural basis of addiction. Cannabinoids and endocannabinoids appear to be involved in adding to the rewarding effects of addictive substances, including nicotine, opiates, alcohol, cocaine, and BDZs. The results suggest that the EPCS may be an important natural regulatory mechanism for drug reward and a target for the treatment of addictive disorders.

  2. A cholinergic hypothesis of the unconscious in affective disorders.

    Directory of Open Access Journals (Sweden)

    Costa eVakalopoulos

    2013-11-01

    Full Text Available The interactions between distinct pharmacological systems are proposed as a key dynamic in the formation of unconscious memories underlying rumination and mood disorder, but also reflect the plastic capacity of neural networks that can aid recovery. An inverse and reciprocal relationship is postulated between cholinergic and monoaminergic receptor subtypes. M1-type muscarinic receptor transduction facilitates encoding of unconscious, prepotent behavioural repertoires at the core of affective disorders and ADHD. Behavioural adaptation to new contingencies is mediated by the classic prototype receptor: 5-HT1A (Gi/o) and its modulation of m1-plasticity. Reversal of learning is dependent on increased phasic activation of midbrain monoaminergic nuclei and is a function of hippocampal theta. Acquired hippocampal dysfunction due to abnormal activation of the hypothalamic-pituitary-adrenal (HPA) axis predicts deficits in hippocampal-dependent memory and executive function and further impairments to cognitive inhibition. Encoding of explicit memories is mediated by Gq/11 and Gs signalling of monoamines only. A role is proposed for the phasic activation of the basal forebrain cholinergic nucleus by cortical projections from the complex consisting of the insula and claustrum. Although controversial, recent studies suggest a common ontogenetic origin of the two structures and a functional coupling. Lesions of the region result in loss of motivational behaviour and familiarity-based judgements. A major hypothesis of the paper is that these lost faculties result indirectly from reduced cholinergic tone.

  3. Water Taxation and the Double Dividend Hypothesis

    OpenAIRE

    Nicholas Kilimani

    2014-01-01

    The double dividend hypothesis contends that environmental taxes have the potential to yield multiple benefits for the economy. However, empirical evidence of the potential impacts of environmental taxation in developing countries is still limited. This paper seeks to contribute to the literature by exploring the impact of a water tax in a developing country context, with Uganda as a case study. Policy makers in Uganda are exploring ways of raising revenue by taxing environmental goods such a...

  4. [Working memory, phonological awareness and spelling hypothesis].

    Science.gov (United States)

    Gindri, Gigiane; Keske-Soares, Márcia; Mota, Helena Bolli

    2007-01-01

    To verify the relationship between working memory, phonological awareness, and spelling hypothesis in pre-school children and first graders. Participants were 90 state-school students with typical linguistic development: 40 preschoolers, with an average age of six, and 50 first graders, with an average age of seven. Participants were submitted to an evaluation of working memory abilities based on the Working Memory Model (Baddeley, 2000), involving the phonological loop. The phonological loop was evaluated using the Auditory Sequential Test, subtest 5 of the Illinois Test of Psycholinguistic Abilities (ITPA), Brazilian version (Bogossian & Santos, 1977), and the Meaningless Words Memory Test (Kessler, 1997). Phonological awareness abilities were investigated using Phonological Awareness: Instrument of Sequential Assessment (CONFIAS - Moojen et al., 2003), involving syllabic and phonemic awareness tasks. Writing was characterized according to Ferreiro & Teberosky (1999). Preschoolers repeated, on average, sequences of 4.80 digits and 4.30 syllables. Regarding phonological awareness, their performance was 19.68 at the syllabic level and 8.58 at the phonemic level. Most preschoolers demonstrated a pre-syllabic writing hypothesis. First graders repeated, on average, sequences of 5.06 digits and 4.56 syllables. These children presented phonological awareness scores of 31.12 at the syllabic level and 16.18 at the phonemic level, and demonstrated an alphabetic writing hypothesis. Performance in working memory, phonological awareness, and spelling level are inter-related, as well as being related to chronological age, development, and schooling.

  5. Privacy on Hypothesis Testing in Smart Grids

    OpenAIRE

    Li, Zuxing; Oechtering, Tobias

    2015-01-01

    In this paper, we study the problem of privacy information leakage in a smart grid. The privacy risk is assumed to be caused by an unauthorized binary hypothesis test of the consumer's behaviour based on the smart meter readings of energy supplied by the energy provider. Additional energy is supplied by an alternative energy source. A controller equipped with an energy storage device manages the energy inflows to satisfy the energy demand of the consumer. We study the optimal ener...

  6. Box-particle probability hypothesis density filtering

    OpenAIRE

    Schikora, M.; Gning, A.; Mihaylova, L.; Cremers, D.; Koch, W.

    2014-01-01

    This paper develops a novel approach for multitarget tracking, called box-particle probability hypothesis density filter (box-PHD filter). The approach is able to track multiple targets and estimates the unknown number of targets. Furthermore, it is capable of dealing with three sources of uncertainty: stochastic, set-theoretic, and data association uncertainty. The box-PHD filter reduces the number of particles significantly, which improves the runtime considerably. The small number of box-p...

  7. Quantum effects and hypothesis of cosmic censorship

    International Nuclear Information System (INIS)

    Parnovskij, S.L.

    1989-01-01

    It is shown that filamentary structures with a linear mass of less than 10^25 g/cm only slightly distort space-time at distances exceeding the Planck length. Their formation does not change the vacuum energy and does not lead to strong quantum radiation. Therefore, the problem of their occurrence can be considered within the framework of classical collapse. Quantum effects can be ignored when considering the problem of the validity of the cosmic censorship hypothesis.

  8. Neural substrates of decision-making.

    Science.gov (United States)

    Broche-Pérez, Y; Herrera Jiménez, L F; Omar-Martínez, E

    2016-06-01

    Decision-making is the process of selecting a course of action from among 2 or more alternatives by considering the potential outcomes of selecting each option and estimating its consequences in the short, medium and long term. The prefrontal cortex (PFC) has traditionally been considered the key neural structure in the decision-making process. However, new studies support the hypothesis that describes a complex neural network including both cortical and subcortical structures. The aim of this review is to summarise evidence on the anatomical structures underlying the decision-making process, considering new findings that support the existence of a complex neural network that gives rise to this complex neuropsychological process. Current evidence shows that the cortical structures involved in decision-making include the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). This process is assisted by subcortical structures including the amygdala, thalamus, and cerebellum. Findings to date show that both cortical and subcortical brain regions contribute to the decision-making process. The neural basis of decision-making is a complex neural network of cortico-cortical and cortico-subcortical connections which includes subareas of the PFC, limbic structures, and the cerebellum. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  9. The (not so) immortal strand hypothesis.

    Science.gov (United States)

    Tomasetti, Cristian; Bozic, Ivana

    2015-03-01

    Non-random segregation of DNA strands during stem cell replication has been proposed as a mechanism to minimize accumulated genetic errors in stem cells of rapidly dividing tissues. According to this hypothesis, an "immortal" DNA strand is passed to the stem cell daughter and not the more differentiated cell, keeping the stem cell lineage replication error-free. After it was introduced, experimental evidence both in favor and against the hypothesis has been presented. Using a novel methodology that utilizes cancer sequencing data we are able to estimate the rate of accumulation of mutations in healthy stem cells of the colon, blood and head and neck tissues. We find that in these tissues mutations in stem cells accumulate at rates strikingly similar to those expected without the protection from the immortal strand mechanism. Utilizing an approach that is fundamentally different from previous efforts to confirm or refute the immortal strand hypothesis, we provide evidence against non-random segregation of DNA during stem cell replication. Our results strongly suggest that parental DNA is passed randomly to stem cell daughters and provide new insight into the mechanism of DNA replication in stem cells. Copyright © 2015. Published by Elsevier B.V.

  10. A test of the orthographic recoding hypothesis

    Science.gov (United States)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  11. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and suggests the following two alternatives: (1) cope with the limitations imposed by silicon, by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  12. Neural electrical activity and neural network growth.

    Science.gov (United States)

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural system depends in part on the emergence of the correct functional connectivity in its input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in the establishment of initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented in circuits and are an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and the item in the set-valued map of the differential inclusion. In theory, the proposed network can converge to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.
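
For contrast with a continuous-time network, a minimal discrete-time sketch of the same problem class is iterative soft-thresholding (a proximal/projected-gradient method) for min ½||Ax - b||² + λ||x||₁. This is an illustration only, not the paper's network; the fixed step size below is exactly the discrete-algorithm parameter that the network approach avoids having to choose:

```python
def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: component-wise shrinkage."""
    return [(abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0
            for x in v]

def matvec(A, x):
    """Dense matrix-vector product with lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam, step, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1.
    Converges for step <= 1 / ||A^T A||."""
    At = [list(col) for col in zip(*A)]  # transpose of A
    x = [0.0] * len(A[0])
    for _ in range(iters):
        residual = [r - bi for r, bi in zip(matvec(A, x), b)]  # A x - b
        grad = matvec(At, residual)                            # A^T (A x - b)
        x = soft_threshold([xi - step * g for xi, g in zip(x, grad)],
                           step * lam)
    return x
```

With A equal to the identity, the minimizer is simply the soft-thresholded data `soft_threshold(b, lam)`, which gives a quick sanity check of the iteration.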

  14. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  15. Melatonin receptors: Current status, facts, and hypothesis

    International Nuclear Information System (INIS)

    Stankov, B.; Reiter, R.J.

    1990-01-01

    Great progress has been made in the identification of melatonin binding sites, commonly identified as melatonin receptors by many authors, in recent years. The bulk of these studies have investigated the sites using either autoradiographic or biochemical techniques, with the majority of the experiments being done on the rat, Djungarian and Syrian hamster, and sheep, although human tissue has also been employed. Many of the studies have identified melatonin binding in the central nervous system with either tritium- or iodine-labelled ligands. The latter ligand seems to provide the most reproducible and consistent data. Of the central neural tissues examined, the suprachiasmatic nuclei are most frequently mentioned as a location for melatonin binding sites, although binding seems to be widespread in the brain. The other tissue that has been prominently mentioned as a site for melatonin binding is the pars tuberalis of the anterior pituitary gland. There may be time-dependent variations in melatonin binding densities in both neural and pituitary gland tissue. Very few attempts have been made to identify melatonin binding outside of the central nervous system despite the widespread actions of melatonin. Preliminary experiments have been carried out on the intracellular second messengers which mediate the actions of melatonin.

  16. Response variability in Attention-Deficit/Hyperactivity Disorder: a neuronal and glial energetics hypothesis.

    Science.gov (United States)

    Russell, Vivienne A; Oades, Robert D; Tannock, Rosemary; Killeen, Peter R; Auerbach, Judith G; Johansen, Espen B; Sagvolden, Terje

    2006-08-23

    Current concepts of Attention-Deficit/Hyperactivity Disorder (ADHD) emphasize the role of higher-order cognitive functions and reinforcement processes attributed to structural and biochemical anomalies in cortical and limbic neural networks innervated by the monoamines, dopamine, noradrenaline and serotonin. However, these explanations do not account for the ubiquitous findings in ADHD of intra-individual performance variability, particularly on tasks that require continual responses to rapid, externally-paced stimuli. Nor do they consider attention as a temporal process dependent upon a continuous energy supply for efficient and consistent function. A consideration of this feature of intra-individual response variability, which is not unique to ADHD but is also found in other disorders, leads to a new perspective on the causes and potential remedies of specific aspects of ADHD. We propose that in ADHD, astrocyte function is insufficient, particularly in terms of its formation and supply of lactate. This insufficiency has implications both for performance and development: H1) In rapidly firing neurons there is deficient ATP production, slow restoration of ionic gradients across neuronal membranes and delayed neuronal firing; H2) In oligodendrocytes insufficient lactate supply impairs fatty acid synthesis and myelination of axons during development. These effects occur over vastly different time scales: those due to deficient ATP (H1) occur over milliseconds, whereas those due to deficient myelination (H2) occur over months and years. Collectively the neural outcomes of impaired astrocytic release of lactate manifest behaviourally as inefficient and inconsistent performance (variable response times across the lifespan, especially during activities that require sustained speeded responses and complex information processing). Multi-level and multi-method approaches are required. These include: 1) Use of dynamic strategies to evaluate cognitive performance under

  17. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and its structure and learning algorithm are designed. The multilayer feedforward neural network, diagonal recurrent neural network, and chaotic diagonal recurrent neural network are used to approach the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than the other two neural networks. (interdisciplinary physics and related areas of science and technology)

  18. Updating the lamellar hypothesis of hippocampal organization

    Directory of Open Access Journals (Sweden)

    Robert S Sloviter

    2012-12-01

    In 1971, Andersen and colleagues proposed that excitatory activity in the entorhinal cortex propagates topographically to the dentate gyrus, and on through a trisynaptic circuit lying within transverse hippocampal slices or lamellae [Andersen, Bliss, and Skrede. 1971. Lamellar organization of hippocampal pathways. Exp Brain Res 13, 222-238]. In this way, a relatively simple structure might mediate complex functions in a manner analogous to the way independent piano keys can produce a nearly infinite variety of unique outputs. The lamellar hypothesis derives primary support from the lamellar distribution of dentate granule cell axons (the mossy fibers), which innervate dentate hilar neurons and area CA3 pyramidal cells and interneurons within the confines of a thin transverse hippocampal segment. Following the initial formulation of the lamellar hypothesis, anatomical studies revealed that unlike granule cells, hilar mossy cells, CA3 pyramidal cells, and Layer II entorhinal cells all form axonal projections that are more divergent along the longitudinal axis than the clearly lamellar mossy fiber pathway. The existence of pathways with translamellar distribution patterns has been interpreted, incorrectly in our view, as justifying outright rejection of the lamellar hypothesis [Amaral and Witter. 1989. The three-dimensional organization of the hippocampal formation: a review of anatomical data. Neuroscience 31, 571-591]. We suggest that the functional implications of longitudinally-projecting axons depend not on whether they exist, but on what they do. The observation that focal granule cell layer discharges normally inhibit, rather than excite, distant granule cells suggests that longitudinal axons in the dentate gyrus may mediate "lateral" inhibition and define lamellar function, rather than undermine it. In this review, we attempt a reconsideration of the evidence that most directly impacts the physiological concept of hippocampal lamellar

  19. Hypothesis Testing as an Act of Rationality

    Science.gov (United States)

    Nearing, Grey

    2017-04-01

    Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is because we have historically failed to account for a fundamental component of basic logic - namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) logic is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. This also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism. In practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in both hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006). Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.

  20. The conscious access hypothesis: Explaining the consciousness.

    Science.gov (United States)

    Prakash, Ravi

    2008-01-01

    The phenomenon of conscious awareness or consciousness is complicated but fascinating. Although this concept has intrigued mankind since antiquity, exploration of consciousness from scientific perspectives is not very old. Among the myriad theories regarding the nature, functions and mechanism of consciousness, of late, cognitive theories have received wider acceptance. One of the most exciting hypotheses in recent times has been the "conscious access hypothesis", based on the "global workspace model of consciousness". It underscores an important property of consciousness: the global access of information in the cerebral cortex. The present article reviews the "conscious access hypothesis" in terms of its theoretical underpinnings as well as the experimental support it has received.

  1. Interstellar colonization and the zoo hypothesis

    International Nuclear Information System (INIS)

    Jones, E.M.

    1978-01-01

    Michael Hart and others have pointed out that current estimates of the number of technological civilizations to have arisen in the Galaxy since its formation are in fundamental conflict with the expectation that such a civilization could colonize and utilize the entire Galaxy in 10 to 20 million years. This dilemma can be called Hart's paradox. Resolution of the paradox requires that one or more of the following be true: we are the Galaxy's first technical civilization; interstellar travel is immensely impractical or simply impossible; technological civilizations are very short-lived; or we inhabit a wilderness preserve. The latter is the zoo hypothesis.

  2. Confluence Model or Resource Dilution Hypothesis?

    DEFF Research Database (Denmark)

    Jæger, Mads

    Studies on family background often explain the negative effect of sibship size on educational attainment by one of two theories: the Confluence Model (CM) or the Resource Dilution Hypothesis (RDH). However, as both theories - for substantively different reasons - predict that sibship size should have a negative effect on educational attainment, most studies cannot distinguish empirically between the CM and the RDH. In this paper, I use the different theoretical predictions in the CM and the RDH on the role of cognitive ability as a partial or complete mediator of the sibship size effect...

  3. Set theory and the continuum hypothesis

    CERN Document Server

    Cohen, Paul J

    2008-01-01

    This exploration of a notorious mathematical problem is the work of the man who discovered the solution. The independence of the continuum hypothesis is the focus of this study by Paul J. Cohen. It presents not only an accessible technical explanation of the author's landmark proof but also a fine introduction to mathematical logic. An emeritus professor of mathematics at Stanford University, Dr. Cohen won two of the most prestigious awards in mathematics: in 1964, he was awarded the American Mathematical Society's Bôcher Prize for analysis; and in 1966, he received the Fields Medal for Logic.

  4. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the
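
In the same spirit as the book's SAS/R recipes, here is a minimal Python sketch of a two-sided one-sample location test. It uses a normal approximation via the standard library (SAS's PROC TTEST and R's t.test use the t distribution, so p-values differ slightly for small samples); the sample data are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def one_sample_z_test(sample, mu0):
    """Two-sided test of H0: population mean == mu0, using a normal
    approximation to the sampling distribution of the sample mean."""
    n = len(sample)
    z = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
    p_value = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p_value
```

For example, testing the (made-up) sample `[5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 5.0]` against mu0 = 4.0 rejects H0 at the 5% level, while testing it against mu0 = 5.0 does not.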

  5. Evolvable Neural Software System

    Science.gov (United States)

    Curtis, Steven A.

    2009-01-01

    The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  6. How organisms do the right thing: The attractor hypothesis

    Science.gov (United States)

    Emlen, J.M.; Freeman, D.C.; Mills, A.; Graham, J.H.

    1998-01-01

    Neo-Darwinian theory is highly successful at explaining the emergence of adaptive traits over successive generations. However, there are reasons to doubt its efficacy in explaining the observed, impressively detailed adaptive responses of organisms to day-to-day changes in their surroundings. Also, the theory lacks a clear mechanism to account for both plasticity and canalization. In effect, there is a growing sentiment that the neo-Darwinian paradigm is incomplete, that something more than genetic structure, mutation, genetic drift, and the action of natural selection is required to explain organismal behavior. In this paper we extend the view of organisms as complex self-organizing entities by arguing that basic physical laws, coupled with the acquisitive nature of organisms, make adaptation all but tautological. That is, much adaptation is an unavoidable emergent property of organisms' complexity and, to a significant degree, occurs quite independently of genomic changes wrought by natural selection. For reasons that will become obvious, we refer to this assertion as the attractor hypothesis. The arguments also clarify the concept of "adaptation." Adaptation across generations, by natural selection, equates to the (game theoretic) maximization of fitness (the success with which one individual produces more individuals), while self-organization-based adaptation, within generations, equates to energetic efficiency and the matching of intake and biosynthesis to need. Finally, we discuss implications of the attractor hypothesis for a wide variety of genetical and physiological phenomena, including genetic architecture, directed mutation, genetic imprinting, paramutation, hormesis, plasticity, optimality theory, genotype-phenotype linkage and punctuated equilibrium, and present suggestions for tests of the hypothesis. © 1998 American Institute of Physics.

  7. Applying Fuzzy Artificial Neural Network OSPF to develop Smart ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Fuzzy Artificial Neural Network to create Smart Routing Protocol Algorithm ... manufactured mental aptitude strategy. The capacity to study ... Based Energy Efficiency in Wireless Sensor Networks: A Survey", International ...

  8. Sex differences in the neural mechanisms mediating addiction: a new synthesis and hypothesis

    OpenAIRE

    Becker, Jill B; Perry, Adam N; Westenbroek, Christel

    2012-01-01

    In this review we propose that there are sex differences in how men and women enter onto the path that can lead to addiction. Males are more likely than females to engage in risky behaviors that include experimenting with drugs of abuse, and in susceptible individuals, they are drawn into the spiral that can eventually lead to addiction. Women and girls are more likely to begin taking drugs as self-medication to reduce stress or alleviate depression. For this reason women enter into ...

  9. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    Science.gov (United States)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.
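
The synchronization criteria above are linear matrix inequalities; checking such a criterion for a fixed candidate matrix reduces to a matrix definiteness test. As an illustrative sketch unrelated to the paper's specific delayed system, here is a 2x2 Lyapunov-inequality check A^T P + P A < 0 with invented matrices:

```python
def lyapunov_lhs(A, P):
    """Compute A^T P + P A for 2x2 matrices given as nested lists."""
    n = 2
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    AtP, PA = matmul(At, P), matmul(P, A)
    return [[AtP[i][j] + PA[i][j] for j in range(n)] for i in range(n)]

def is_negative_definite_2x2(M):
    """For symmetric 2x2 M: negative definite iff M[0][0] < 0 and
    det(M) > 0 (Sylvester's criterion applied to -M)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] < 0 and det > 0
```

A stable diagonal A = diag(-1, -2) with P = I passes the check, while an unstable A fails it; solvers for full LMI criteria search over P rather than testing a fixed candidate.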

  10. Hypothesis-driven physical examination curriculum.

    Science.gov (United States)

    Allen, Sharon; Olson, Andrew; Menk, Jeremiah; Nixon, James

    2017-12-01

    Medical students traditionally learn physical examination skills as a rote list of manoeuvres. Alternatives like hypothesis-driven physical examination (HDPE) may promote students' understanding of the contribution of physical examination to diagnostic reasoning. We sought to determine whether first-year medical students can effectively learn to perform a physical examination using an HDPE approach, and then tailor the examination to specific clinical scenarios. Medical students traditionally learn physical examination skills as a rote list of manoeuvres CONTEXT: First-year medical students at the University of Minnesota were taught both traditional and HDPE approaches during a required 17-week clinical skills course in their first semester. The end-of-course evaluation assessed HDPE skills: students were assigned one of two cardiopulmonary cases. Each case included two diagnostic hypotheses. During an interaction with a standardised patient, students were asked to select physical examination manoeuvres in order to make a final diagnosis. Items were weighted and selection order was recorded. First-year students with minimal pathophysiology performed well. All students selected the correct diagnosis. Importantly, students varied the order when selecting examination manoeuvres depending on the diagnoses under consideration, demonstrating early clinical decision-making skills. An early introduction to HDPE may reinforce physical examination skills for hypothesis generation and testing, and can foster early clinical decision-making skills. This has important implications for further research in physical examination instruction. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  11. A default Bayesian hypothesis test for mediation.

    Science.gov (United States)

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
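
For context, the frequentist baseline that a default Bayesian mediation test replaces is often the Sobel test of the indirect effect a*b (the product of the X-to-mediator and mediator-to-Y path coefficients). A minimal sketch, with invented path estimates and standard errors, not taken from the paper:

```python
from math import sqrt
from statistics import NormalDist

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for the indirect effect a*b in a simple mediation
    model; se_ab is the first-order delta-method standard error."""
    ab = a * b
    se_ab = sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    z = ab / se_ab
    p_value = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return ab, z, p_value
```

For instance, hypothetical path estimates a = 0.5 (SE 0.1) and b = 0.4 (SE 0.1) give an indirect effect of 0.2 that is significant at the 1% level; a Bayesian test would instead report a Bayes factor for the presence of mediation.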

  12. Gaussian Hypothesis Testing and Quantum Illumination.

    Science.gov (United States)

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  13. Inoculation stress hypothesis of environmental enrichment.

    Science.gov (United States)

    Crofton, Elizabeth J; Zhang, Yafang; Green, Thomas A

    2015-02-01

    One hallmark of psychiatric conditions is the vast continuum of individual differences in susceptibility vs. resilience resulting from the interaction of genetic and environmental factors. The environmental enrichment paradigm is an animal model that is useful for studying a range of psychiatric conditions, including protective phenotypes in addiction and depression models. The major question is how environmental enrichment, a non-drug and non-surgical manipulation, can produce such robust individual differences in such a wide range of behaviors. This paper draws from a variety of published sources to outline a coherent hypothesis of inoculation stress as a factor producing the protective enrichment phenotypes. The basic tenet suggests that chronic mild stress from living in a complex environment and interacting non-aggressively with conspecifics can inoculate enriched rats against subsequent stressors and/or drugs of abuse. This paper reviews the enrichment phenotypes, mulls the fundamental nature of environmental enrichment vs. isolation, discusses the most appropriate control for environmental enrichment, and challenges the idea that cortisol/corticosterone equals stress. The intent of the inoculation stress hypothesis of environmental enrichment is to provide a scaffold with which to build testable hypotheses for the elucidation of the molecular mechanisms underlying these protective phenotypes and thus provide new therapeutic targets to treat psychiatric/neurological conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Athlete's Heart: Is the Morganroth Hypothesis Obsolete?

    Science.gov (United States)

    Haykowsky, Mark J; Samuel, T Jake; Nelson, Michael D; La Gerche, Andre

    2018-05-01

In 1975, Morganroth and colleagues reported that the increased left ventricular (LV) mass in highly trained endurance athletes versus nonathletes was primarily due to increased end-diastolic volume while the increased LV mass in resistance trained athletes was solely due to an increased LV wall thickness. Based on the divergent remodelling patterns observed, Morganroth and colleagues hypothesised that the increased "volume" load during endurance exercise may be similar to that which occurs in patients with mitral or aortic regurgitation while the "pressure" load associated with performing a Valsalva manoeuvre (VM) during resistance exercise may mimic the stress imposed on the heart by systemic hypertension or aortic stenosis. Despite widespread acceptance of the four-decade-old Morganroth hypothesis in sports cardiology, some investigators have questioned whether such a divergent "athlete's heart" phenotype exists. Given this uncertainty, the purpose of this brief review is to re-evaluate the Morganroth hypothesis regarding: i) the acute effects of resistance exercise performed with a brief VM on LV wall stress, and the patterns of LV remodelling in resistance-trained athletes; ii) the acute effects of endurance exercise on biventricular wall stress, and the time course and pattern of LV and right ventricular (RV) remodelling with endurance training; and iii) the value of comparing "loading" conditions between athletes and patients with cardiac pathology. Copyright © 2018. Published by Elsevier B.V.

  15. The Debt Overhang Hypothesis: Evidence from Pakistan

    Directory of Open Access Journals (Sweden)

    Shah Muhammad Imran

    2016-04-01

Full Text Available This study investigates the debt overhang hypothesis for Pakistan in the period 1960-2007. The study examines empirically the dynamic behaviour of GDP, debt services, the employed labour force and investment using the time series concepts of unit roots, cointegration, error correction and causality. Our findings suggest that debt-servicing has a negative impact on the productivity of both labour and capital, and that in turn has adversely affected economic growth. This burden severely constrains the country's ability to service its debt, lending support to the debt-overhang hypothesis in Pakistan. The long run relation between debt services and economic growth implies that future increases in output will drain away in the form of high debt service payments to lender countries, as external debt acts like a tax on output. More specifically, foreign creditors will benefit more from the rise in productivity than will domestic producers and labour. This suggests that domestic labour and capital are the ultimate losers from this heavy debt burden.

  16. Roots and Route of the Artification Hypothesis

    Directory of Open Access Journals (Sweden)

    Ellen Dissanayake

    2017-08-01

Full Text Available Over four decades, my ideas about the arts in human evolution have themselves evolved, from an original notion of art as a human behaviour of “making special” to a full-fledged hypothesis of artification. A summary of the gradual developmental path (or route) of the hypothesis, based on ethological principles and concepts, is given, and an argument presented in which artification is described as an exaptation whose roots lie in adaptive features of ancestral mother–infant interaction that contributed to infant survival and maternal reproductive success. I show how the interaction displays features of a ritualised behaviour whose operations (formalization, repetition, exaggeration, and elaboration) can be regarded as characteristic elements of human ritual ceremonies as well as of art (including song, dance, performance, literary language, altered surroundings, and other examples of making ordinary sounds, movement, language, environments, objects, and bodies extraordinary). Participation in these behaviours in ritual practices served adaptive ends in early Homo by coordinating brain and body states, and thereby emotionally bonding members of a group in common cause as well as reducing existential anxiety in individuals. A final section situates artification within contemporary philosophical and popular ideas of art, claiming that artifying is not a synonym for or definition of art but foundational to any evolutionary discussion of artistic/aesthetic behaviour.

  17. Hypothesis: does ochratoxin A cause testicular cancer?

    Science.gov (United States)

    Schwartz, Gary G

    2002-02-01

    Little is known about the etiology of testicular cancer, which is the most common cancer among young men. Epidemiologic data point to a carcinogenic exposure in early life or in utero, but the nature of the exposure is unknown. We hypothesize that the mycotoxin, ochratoxin A, is a cause of testicular cancer. Ochratoxin A is a naturally occurring contaminant of cereals, pigmeat, and other foods and is a known genotoxic carcinogen in animals. The major features of the descriptive epidemiology of testicular cancer (a high incidence in northern Europe, increasing incidence over time, and associations with high socioeconomic status, and with poor semen quality) are all associated with exposure to ochratoxin A. Exposure of animals to ochratoxin A via the diet or via in utero transfer induces adducts in testicular DNA. We hypothesize that consumption of foods contaminated with ochratoxin A during pregnancy and/or childhood induces lesions in testicular DNA and that puberty promotes these lesions to testicular cancer. We tested the ochratoxin A hypothesis using ecologic data on the per-capita consumption of cereals, coffee, and pigmeat, the principal dietary sources of ochratoxin A. Incidence rates for testicular cancer in 20 countries were significantly correlated with the per-capita consumption of coffee and pigmeat (r = 0.49 and 0.54, p = 0.03 and 0.01). The ochratoxin A hypothesis offers a coherent explanation for much of the descriptive epidemiology of testicular cancer and suggests new avenues for analytic research.

  18. Urbanization and the more-individuals hypothesis.

    Science.gov (United States)

    Chiari, Claudia; Dinetti, Marco; Licciardello, Cinzia; Licitra, Gaetano; Pautasso, Marco

    2010-03-01

1. Urbanization is a landscape process affecting biodiversity world-wide. Despite many urban-rural studies of bird assemblages, it is still unclear whether more species-rich communities have more individuals, regardless of the level of urbanization. The more-individuals hypothesis assumes that species-rich communities have larger populations, thus reducing the chance of local extinctions. 2. Using newly collated avian distribution data for 1 km² grid cells across Florence, Italy, we show a significantly positive relationship between species richness and assemblage abundance for the whole urban area. This richness-abundance relationship persists for the 1 km² grid cells with less than 50% of urbanized territory, as well as for the remaining grid cells, with no significant difference in the slope of the relationship. These results support the more-individuals hypothesis as an explanation of patterns in species richness, also in human modified and fragmented habitats. 3. However, the intercept of the species richness-abundance relationship is significantly lower for highly urbanized grid cells. Our study confirms that urban communities have lower species richness but counters the common notion that assemblages in densely urbanized ecosystems have more individuals. In Florence, highly inhabited areas show fewer species and lower assemblage abundance. 4. Urbanized ecosystems are an ongoing large-scale natural experiment which can be used to test ecological theories empirically.
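    The reported pattern (a common richness-abundance slope, but a lower intercept in highly urbanized cells) can be illustrated on simulated data; all effect sizes and sample sizes below are hypothetical, not taken from the Florence study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated grid cells: richness rises with log abundance at a common slope,
# but highly urbanized cells start from a lower baseline (smaller intercept).
n = 200
urbanized = rng.random(n) > 0.5
log_abund = rng.normal(4.0, 1.0, n)
richness = 5.0 + 3.0 * log_abund - 4.0 * urbanized + rng.normal(0.0, 1.0, n)

def fit_line(x, y):
    # Ordinary least squares with intercept; returns (intercept, slope).
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_low = fit_line(log_abund[~urbanized], richness[~urbanized])
b_high = fit_line(log_abund[urbanized], richness[urbanized])
print(b_low, b_high)  # similar slopes; lower intercept for urbanized cells
```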

  19. A new glaucoma hypothesis: a role of glymphatic system dysfunction.

    Science.gov (United States)

    Wostyn, Peter; Van Dam, Debby; Audenaert, Kurt; Killer, Hanspeter Esriel; De Deyn, Peter Paul; De Groot, Veva

    2015-06-29

    In a recent review article titled "A new look at cerebrospinal fluid circulation", Brinker et al. comprehensively described novel insights from molecular and cellular biology as well as neuroimaging research, which indicate that cerebrospinal fluid (CSF) physiology is much more complex than previously believed. The glymphatic system is a recently defined brain-wide paravascular pathway for CSF and interstitial fluid exchange that facilitates efficient clearance of interstitial solutes, including amyloid-β, from the brain. Although further studies are needed to substantiate the functional significance of the glymphatic concept, one implication is that glymphatic pathway dysfunction may contribute to the deficient amyloid-β clearance in Alzheimer's disease. In this paper, we review several lines of evidence suggesting that the glymphatic system may also have potential clinical relevance for the understanding of glaucoma. As a clinically acceptable MRI-based approach to evaluate glymphatic pathway function in humans has recently been developed, a unique opportunity now exists to investigate whether suppression of the glymphatic system contributes to the development of glaucoma. The observation of a dysfunctional glymphatic system in patients with glaucoma would provide support for the hypothesis recently proposed by our group that CSF circulatory dysfunction may play a contributory role in the pathogenesis of glaucomatous damage. This would suggest a new hypothesis for glaucoma, which, just like Alzheimer's disease, might be considered then as an imbalance between production and clearance of neurotoxins, including amyloid-β.

  20. The Younger Dryas impact hypothesis: A requiem

    Science.gov (United States)

    Pinter, Nicholas; Scott, Andrew C.; Daulton, Tyrone L.; Podoll, Andrew; Koeberl, Christian; Anderson, R. Scott; Ishman, Scott E.

    2011-06-01

The Younger Dryas (YD) impact hypothesis is a recent theory that suggests that a cometary or meteoritic body or bodies hit and/or exploded over North America 12,900 years ago, causing the YD climate episode, extinction of Pleistocene megafauna, demise of the Clovis archeological culture, and a range of other effects. Since gaining widespread attention in 2007, substantial research has focused on testing the 12 main signatures presented as evidence of a catastrophic extraterrestrial event 12,900 years ago. Here we present a review of the impact hypothesis, including its evolution and current variants, and of efforts to test and corroborate the hypothesis. The physical evidence interpreted as signatures of an impact event can be separated into two groups. The first group consists of evidence that has been largely rejected by the scientific community and is no longer in widespread discussion, including: particle tracks in archeological chert; magnetic nodules in Pleistocene bones; impact origin of the Carolina Bays; and elevated concentrations of radioactivity, iridium, and fullerenes enriched in ³He. The second group consists of evidence that has been active in recent research and discussions: carbon spheres and elongates, magnetic grains and magnetic spherules, byproducts of catastrophic wildfire, and nanodiamonds. Over time, however, these signatures have also seen contrary evidence rather than support. Recent studies have shown that carbon spheres and elongates do not represent extraterrestrial carbon nor impact-induced megafires, but are indistinguishable from fungal sclerotia and arthropod fecal material that are a small but common component of many terrestrial deposits. Magnetic grains and spherules are heterogeneously distributed in sediments, but reported measurements of unique peaks in concentrations at the YD onset have yet to be reproduced. The magnetic grains are certainly just iron-rich detrital grains, whereas reported YD magnetic spherules are

  1. A hypothesis on a role of oxytocin in the social mechanisms of speech and vocal learning.

    Science.gov (United States)

    Theofanopoulou, Constantina; Boeckx, Cedric; Jarvis, Erich D

    2017-08-30

    Language acquisition in humans and song learning in songbirds naturally happen as a social learning experience, providing an excellent opportunity to reveal social motivation and reward mechanisms that boost sensorimotor learning. Our knowledge about the molecules and circuits that control these social mechanisms for vocal learning and language is limited. Here we propose a hypothesis of a role for oxytocin (OT) in the social motivation and evolution of vocal learning and language. Building upon existing evidence, we suggest specific neural pathways and mechanisms through which OT might modulate vocal learning circuits in specific developmental stages. © 2017 The Authors.

  2. Approaches to informed consent for hypothesis-testing and hypothesis-generating clinical genomics research.

    Science.gov (United States)

    Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G

    2012-10-10

    Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.

  3. An introduction to neural network methods for differential equations

    CERN Document Server

    Yadav, Neha; Kumar, Manoj

    2015-01-01

This book introduces a variety of neural network methods for solving differential equations arising in science and engineering. The emphasis is placed on a deep understanding of the neural network techniques, which are presented in a mostly heuristic and intuitive manner. This approach will enable the reader to understand the working, efficiency and shortcomings of each neural network technique for solving differential equations. The objective of this book is to provide the reader with a sound understanding of the foundations of neural networks, and a comprehensive introduction to neural network methods for solving differential equations together with recent developments in the techniques and their applications. The book comprises four major sections. Section I consists of a brief overview of differential equations and the relevant physical problems arising in science and engineering. Section II illustrates the history of neural networks starting from their beginnings in the 1940s through to the renewed...

  4. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated distribution. The data are acquired with a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation.
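    The core idea, estimating a background distribution and then flagging the segment of interest as pixels improbable under it, can be sketched in a few lines. The synthetic image, the robust median/MAD background estimate, and the z > 4 cutoff are all assumptions of this sketch, not necessarily the authors' exact estimator or threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic image: Gaussian background with one bright square "particle".
img = rng.normal(loc=50.0, scale=5.0, size=(64, 64))
img[20:30, 20:30] += 40.0

# Estimate the background distribution robustly (median / MAD), so the
# foreground does not bias a plain mean/std estimate.
med = np.median(img)
mad = np.median(np.abs(img - med))
sigma = 1.4826 * mad  # MAD -> standard deviation for a normal distribution

# Flag pixels as outliers under the background model (here z > 4).
z = (img - med) / sigma
mask = z > 4.0

print(mask[20:30, 20:30].mean())  # fraction of the particle flagged
```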

  5. Recent Advances in Neural Recording Microsystems

    Directory of Open Access Journals (Sweden)

    Benoit Gosselin

    2011-04-01

Full Text Available The accelerating pace of research in neuroscience has created a considerable demand for neural interfacing microsystems capable of monitoring the activity of large groups of neurons. These emerging tools have revealed a tremendous potential for the advancement of knowledge in brain research and for the development of useful clinical applications. They can extract the relevant control signals directly from the brain enabling individuals with severe disabilities to communicate their intentions to other devices, like computers or various prostheses. Such microsystems are self-contained devices composed of a neural probe attached to an integrated circuit for extracting neural signals from multiple channels, and transferring the data outside the body. The greatest challenge facing development of such emerging devices into viable clinical systems involves addressing their small form factor and low-power consumption constraints, while providing superior resolution. In this paper, we survey the recent progress in the design and the implementation of multi-channel neural recording microsystems, with particular emphasis on the design of recording and telemetry electronics. An overview of the numerous neural signal modalities is given and the existing microsystem topologies are covered. We present energy-efficient sensory circuits to retrieve weak signals from neural probes and we compare them. We cover data management and smart power scheduling approaches, and we review advances in low-power telemetry. Finally, we conclude by summarizing the remaining challenges and by highlighting the emerging trends in the field.

  6. Adaptive competitive learning neural networks

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2013-11-01

    Full Text Available In this paper, the adaptive competitive learning (ACL neural network algorithm is proposed. This neural network not only groups similar input feature vectors together but also determines the appropriate number of groups of these vectors. This algorithm uses a new proposed criterion referred to as the ACL criterion. This criterion evaluates different clustering structures produced by the ACL neural network for an input data set. Then, it selects the best clustering structure and the corresponding network architecture for this data set. The selected structure is composed of the minimum number of clusters that are compact and balanced in their sizes. The selected network architecture is efficient, in terms of its complexity, as it contains the minimum number of neurons. Synaptic weight vectors of these neurons represent well-separated, compact and balanced clusters in the input data set. The performance of the ACL algorithm is evaluated and compared with the performance of a recently proposed algorithm in the literature in clustering an input data set and determining its number of clusters. Results show that the ACL algorithm is more accurate and robust in both determining the number of clusters and allocating input feature vectors into these clusters than the other algorithm especially with data sets that are sparsely distributed.
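    The abstract does not give the ACL update rule itself, so the sketch below uses a frequency-sensitive variant of the competitive-learning family it builds on: the winner is handicapped by its win count, which balances cluster sizes and avoids "dead" prototypes. All parameters and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Three well-separated 2-D clusters (100 points each).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
data = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in centers])
rng.shuffle(data)

# Frequency-sensitive competitive learning: the winner minimizes
# (win count) * (squared distance), so no prototype monopolizes the data.
k, lr = 3, 0.05
prototypes = rng.normal(2.5, 1.0, size=(k, 2))
counts = np.ones(k)
for epoch in range(30):
    for x in data:
        d2 = ((prototypes - x) ** 2).sum(axis=1)
        winner = np.argmin(counts * d2)
        prototypes[winner] += lr * (x - prototypes[winner])  # move winner toward x
        counts[winner] += 1

print(np.round(prototypes, 1))  # one prototype should settle near each center
```

    The ACL criterion additionally scores whole clustering structures to choose k; the sketch fixes k = 3 for brevity.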

  7. Neural Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...

  8. Alternatives to the linear risk hypothesis

    International Nuclear Information System (INIS)

    Craig, A.G.

    1976-01-01

A theoretical argument is presented which suggests that in using the linear hypothesis for all values of LET the low dose risk is overestimated for low LET but that it is underestimated for very high LET. The argument is based upon the idea that cell lesions which do not lead to cell death may in fact lead to a malignant cell. Expressions for the Surviving Fraction and the Cancer Risk based on this argument are given. An advantage of this very general approach is that it expresses cell survival and cancer risk entirely in terms of the cell lesions and avoids the rather contentious argument as to how the average number of lesions should be related to the dose. (U.K.)

  9. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.

  10. Artistic talent in dyslexia--a hypothesis.

    Science.gov (United States)

    Chakravarty, Ambar

    2009-10-01

The present article hints at a curious neurocognitive phenomenon of development of artistic talents in some children with dyslexia. The article also takes note of the phenomenon of creating in the midst of language disability as observed in the lives of such creative people as Leonardo da Vinci and Albert Einstein, who were most probably affected by developmental learning disorders. It has been hypothesised that a developmental delay in the dominant hemisphere most likely 'disinhibits' the non-dominant parietal lobe to unmask talents, artistic or otherwise, in some such individuals. The present hypothesis follows the phenomenon of paradoxical functional facilitation described earlier. It has been suggested that children with learning disorders be encouraged to develop such hidden talents to full capacity, rather than be subjected to an overemphasis on the correction of the disturbed coded symbol operations, in remedial training.

  11. Tissue misrepair hypothesis for radiation carcinogenesis

    International Nuclear Information System (INIS)

    Kondo, Sohei

    1991-01-01

Dose-response curves for chronic leukemia in A-bomb survivors and liver tumors in patients given Thorotrast (colloidal thorium dioxide) show large threshold effects. The existence of these threshold effects can be explained by the following hypothesis. A high dose of radiation causes a persistent wound in a cell-renewable tissue. Disorder of the injured cell society partly frees the component cells from territorial restraints on their proliferation, enabling them to continue development of their cellular functions toward advanced autonomy. This progression might be achieved by continued epigenetic and genetic changes as a result of occasional errors in the otherwise concerted healing action of various endogenous factors recruited for tissue repair. Carcinogenesis is not simply a single-cell problem but a cell-society problem. Therefore, it is not warranted to estimate risk at low doses by linear extrapolation from cancer data at high doses without knowledge of the mechanism of radiation carcinogenesis. (author) 57 refs

  12. Statistical hypothesis tests of some micrometeorological observations

    International Nuclear Information System (INIS)

    SethuRaman, S.; Tichler, J.

    1977-01-01

Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality.
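    A chi-square goodness-of-fit test of normality of the kind described, binning standardized data and comparing observed with expected normal counts, alongside the skewness coefficient g1, can be sketched as follows. The bin layout, the small-expected-count guard, and the degrees-of-freedom bookkeeping are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
normal_sample = rng.normal(size=2000)
skewed_sample = rng.exponential(size=2000)  # clearly non-normal

def chi2_normality(x, bins=20):
    """Chi-square goodness-of-fit test of normality on standardized data."""
    z = (x - x.mean()) / x.std()
    edges = np.linspace(-4, 4, bins + 1)
    observed, _ = np.histogram(z, edges)
    expected = len(x) * np.diff(stats.norm.cdf(edges))
    keep = expected > 1            # guard against tiny tail-bin counts
    chi2 = ((observed[keep] - expected[keep]) ** 2 / expected[keep]).sum()
    dof = keep.sum() - 3           # constraints: total, fitted mean, fitted std
    return chi2, stats.chi2.sf(chi2, dof)

_, p_normal = chi2_normality(normal_sample)
_, p_skewed = chi2_normality(skewed_sample)
g1_normal = stats.skew(normal_sample)
g1_skewed = stats.skew(skewed_sample)
print(p_normal, p_skewed, g1_normal, g1_skewed)
```

    On such data the normal sample yields a large p-value and |g1| well under 0.43, while the exponential sample is rejected with a large positive g1, mirroring the correlation between g1 and the chi-square values reported above.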

  13. The hexagon hypothesis: Six disruptive scenarios.

    Science.gov (United States)

    Burtles, Jim

    2015-01-01

    This paper aims to bring a simple but effective and comprehensive approach to the development, delivery and monitoring of business continuity solutions. To ensure that the arguments and principles apply across the board, the paper sticks to basic underlying concepts rather than sophisticated interpretations. First, the paper explores what exactly people are defending themselves against. Secondly, the paper looks at how defences should be set up. Disruptive events tend to unfold in phases, each of which invites a particular style of protection, ranging from risk management through to business continuity to insurance cover. Their impact upon any business operation will fall into one of six basic scenarios. The hexagon hypothesis suggests that everyone should be prepared to deal with each of these six disruptive scenarios and it provides them with a useful benchmark for business continuity.

  14. Novae, supernovae, and the island universe hypothesis

    International Nuclear Information System (INIS)

    Van Den Bergh, S.

    1988-01-01

Arguments in Curtis's (1917) paper related to the island universe hypothesis and the existence of novae in spiral nebulae are considered. It is noted that the maximum magnitude versus rate-of-decline relation for novae may be the best tool presently available for the calibration of the extragalactic distance scale. Light curve observations of six novae are used to determine a distance of 18.6 ± 3.5 Mpc to the Virgo cluster. Results suggest that Type Ia supernovae cannot easily be used as standard candles, and that Type II supernovae are unsuitable as distance indicators. Factors other than precursor mass are probably responsible for determining the ultimate fate of evolving stars. 83 references

  15. Extra dimensions hypothesis in high energy physics

    Directory of Open Access Journals (Sweden)

    Volobuev Igor

    2017-01-01

Full Text Available We discuss the history of the extra dimensions hypothesis and the physics and phenomenology of models with large extra dimensions with an emphasis on the Randall-Sundrum (RS) model with two branes. We argue that the Standard Model extension based on the RS model with two branes is phenomenologically acceptable only if the inter-brane distance is stabilized. Within such an extension of the Standard Model, we study the influence of the infinite Kaluza-Klein (KK) towers of the bulk fields on collider processes. In particular, we discuss the modification of the scalar sector of the theory, the Higgs-radion mixing due to the coupling of the Higgs boson to the radion and its KK tower, and the experimental restrictions on the mass of the radion-dominated states.

  16. Multiple model cardinalized probability hypothesis density filter

    Science.gov (United States)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  17. EQUITY EVALUATION OF PADDY IRRIGATION WATER DISTRIBUTION BY SOCIETY-JUSTICE-WATER DISTRIBUTION RULE HYPOTHESIS

    Science.gov (United States)

    Tanji, Hajime; Kiri, Hirohide; Kobayashi, Shintaro

When total supply is smaller than total demand, it is difficult to apply the paddy irrigation water distribution rule. The gap must be narrowed by decreasing demand. Historically, the upstream-served rule, a rotation schedule, or a central schedule weighted by irrigated area was adopted. This paper proposes the hypothesis that these rules are dependent on social justice, a hypothesis called the "Society-Justice-Water Distribution Rule Hypothesis". Justice, which means a balance of efficiency and equity of distribution, is discussed under the political philosophies of utilitarianism, liberalism (Rawls), libertarianism, and communitarianism. The upstream-served rule can be derived from libertarianism. The rotation schedule and central schedule can be derived from communitarianism. Liberalism can provide an arranged schedule to adjust supply and demand based on "the Difference Principle". The authors conclude that to achieve efficiency and equity, liberalism may provide the best solution after modernization.
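    The contrast between the upstream-served rule and an area-weighted central schedule when supply falls short of demand can be made concrete with a toy allocation; all figures are hypothetical, and demand is used here as a stand-in for irrigated area.

```python
# Hypothetical demands for four offtakes, listed upstream to downstream.
demand = [30.0, 25.0, 25.0, 20.0]   # total demand 100
supply = 60.0                       # total supply falls short

def upstream_served(demand, supply):
    # Upstream-served rule: satisfy offtakes in order until water runs out.
    alloc, left = [], supply
    for d in demand:
        a = min(d, left)
        alloc.append(a)
        left -= a
    return alloc

def proportional(demand, supply):
    # Central schedule weighted by demand (proxy for irrigated area):
    # every offtake is cut back by the same fraction.
    scale = supply / sum(demand)
    return [d * scale for d in demand]

print(upstream_served(demand, supply))  # [30.0, 25.0, 5.0, 0.0]
print(proportional(demand, supply))     # each offtake receives 60% of demand
```

    The upstream-served rule leaves the tail-end offtake dry, while the weighted schedule spreads the shortage evenly, which is the equity trade-off the hypothesis ties to the different political philosophies.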

  18. On the immunostimulatory hypothesis of cancer

    Directory of Open Access Journals (Sweden)

    Juan Bruzzo

    2011-12-01

    Full Text Available There is a rather generalized belief that the worst possible outcome for the application of immunological therapies against cancer is a null effect on tumor growth. However, a significant body of evidence summarized in the immunostimulatory hypothesis of cancer suggests that, under certain circumstances, the growth of incipient and established tumors can be accelerated rather than inhibited by the immune response supposedly mounted to limit tumor growth. In order to provide more compelling evidence for this proposition, we have explored the growth behavior of twelve murine tumors (most of them of spontaneous origin) that arose in the colony of our laboratory, in putatively immunized and control mice. Using classical immunization procedures, 8 out of 12 tumors were actually stimulated in "immunized" mice, while the remaining 4 were neither inhibited nor stimulated. Further, even these apparently non-antigenic tumors could reveal some antigenicity if immunization procedures more stringent than the classical ones were used. This possibility was suggested by the results obtained with one of these four apparently non-antigenic tumors: the LB lymphoma. In effect, upon these stringent immunization pretreatments, LB was slightly inhibited or stimulated depending on the titer of the immune reaction mounted against the tumor, with higher titers producing inhibition and lower titers producing stimulation. All the above results are consistent with the immunostimulatory hypothesis, which entails two important therapeutic implications, contrary to orthodoxy: anti-tumor vaccines may run a real risk of doing harm if the vaccine-induced immunity is too weak to move the reaction into the inhibitory part of the immune response curve, and a slight, prolonged immunodepression, rather than immunostimulation, might interfere with the progression of some tumors and thus be an aid to cytotoxic therapies.

  19. Neural Networks: Implementations and Applications

    OpenAIRE

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas.

  20. Neural networks prove effective at NOx reduction

    Energy Technology Data Exchange (ETDEWEB)

    Radl, B.J. [Pegasus Technologies, Mentor, OH (USA)

    2000-05-01

    The availability of low cost computer hardware and software is opening up possibilities for the use of artificial intelligence concepts, notably neural networks, in power plant control applications, delivering lower costs, greater efficiencies and reduced emissions. One example of a neural network system is the NeuSIGHT combustion optimisation system, developed by Pegasus Technologies, a subsidiary of KFx Inc. It can help reduce NOx emissions, improve heat rate and enable either deferral or elimination of capital expenditures on other NOx control technologies, such as low-NOx burners, SNCR and SCR. This paper illustrates these benefits using three recent case studies. 4 figs.

  1. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  2. Livermore Big Artificial Neural Network Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training: specifically, low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  3. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  4. Consciousness and neural plasticity

    DEFF Research Database (Denmark)

    changes or to abandon the strong identity thesis altogether. Were one to pursue a theory according to which consciousness is not an epiphenomenon to brain processes, consciousness may in fact affect its own neural basis. The neural correlate of consciousness is often seen as a stable structure, that is...

  5. The Matter-Gravity Entanglement Hypothesis

    Science.gov (United States)

    Kay, Bernard S.

    2018-03-01

    I outline some of my work and results (some dating back to 1998, some more recent) on my matter-gravity entanglement hypothesis, according to which the entropy of a closed quantum gravitational system is equal to the system's matter-gravity entanglement entropy. The main arguments presented are: (1) that this hypothesis is capable of resolving what I call the second-law puzzle, i.e. the puzzle as to how the entropy increase of a closed system can be reconciled with the assumption of unitary time-evolution; (2) that the black hole information loss puzzle may be regarded as a special case of this second law puzzle and that therefore the same resolution applies to it; (3) that the black hole thermal atmosphere puzzle (which I recall) can be resolved by adopting a radically different-from-usual description of quantum black hole equilibrium states, according to which they are total pure states, entangled between matter and gravity in such a way that the partial states of matter and gravity are each approximately thermal equilibrium states (at the Hawking temperature); (4) that the Susskind-Horowitz-Polchinski string-theoretic understanding of black hole entropy as the logarithm of the degeneracy of a long string (which is the weak string coupling limit of a black hole) cannot be quite correct but should be replaced by a modified understanding according to which it is the entanglement entropy between a long string and its stringy atmosphere, when in a total pure equilibrium state in a suitable box, which (in line with (3)) goes over, at strong-coupling, to a black hole in equilibrium with its thermal atmosphere. The modified understanding in (4) is based on a general result, which I also describe, which concerns the likely state of a quantum system when it is weakly coupled to an energy-bath and the total state is a random pure state with a given energy.
This result generalizes Goldstein et al.'s `canonical typicality' result to systems which are not necessarily small.

  6. The Matter-Gravity Entanglement Hypothesis

    Science.gov (United States)

    Kay, Bernard S.

    2018-05-01

    I outline some of my work and results (some dating back to 1998, some more recent) on my matter-gravity entanglement hypothesis, according to which the entropy of a closed quantum gravitational system is equal to the system's matter-gravity entanglement entropy. The main arguments presented are: (1) that this hypothesis is capable of resolving what I call the second-law puzzle, i.e. the puzzle as to how the entropy increase of a closed system can be reconciled with the assumption of unitary time-evolution; (2) that the black hole information loss puzzle may be regarded as a special case of this second law puzzle and that therefore the same resolution applies to it; (3) that the black hole thermal atmosphere puzzle (which I recall) can be resolved by adopting a radically different-from-usual description of quantum black hole equilibrium states, according to which they are total pure states, entangled between matter and gravity in such a way that the partial states of matter and gravity are each approximately thermal equilibrium states (at the Hawking temperature); (4) that the Susskind-Horowitz-Polchinski string-theoretic understanding of black hole entropy as the logarithm of the degeneracy of a long string (which is the weak string coupling limit of a black hole) cannot be quite correct but should be replaced by a modified understanding according to which it is the entanglement entropy between a long string and its stringy atmosphere, when in a total pure equilibrium state in a suitable box, which (in line with (3)) goes over, at strong-coupling, to a black hole in equilibrium with its thermal atmosphere. The modified understanding in (4) is based on a general result, which I also describe, which concerns the likely state of a quantum system when it is weakly coupled to an energy-bath and the total state is a random pure state with a given energy.
This result generalizes Goldstein et al.'s `canonical typicality' result to systems which are not necessarily small.

  7. Marginal contrasts and the Contrastivist Hypothesis

    Directory of Open Access Journals (Sweden)

    Daniel Currie Hall

    2016-12-01

    Full Text Available The Contrastivist Hypothesis (CH; Hall 2007; Dresher 2009) holds that the only features that can be phonologically active in any language are those that serve to distinguish phonemes, which presupposes that phonemic status is categorical. Many researchers, however, demonstrate the existence of gradient relations. For instance, Hall (2009) quantifies these using the information-theoretic measure of entropy (unpredictability of distribution) and shows that a pair of sounds may have an entropy between 0 (totally predictable) and 1 (totally unpredictable). We argue that the existence of such intermediate degrees of contrastiveness does not make the CH untenable, but rather offers insight into contrastive hierarchies. The existence of a continuum does not preclude categorical distinctions: a categorical line can be drawn between zero entropy (entirely predictable, and thus by the CH phonologically inactive) and non-zero entropy (at least partially contrastive, and thus potentially phonologically active). But this does not mean that intermediate degrees of surface contrastiveness are entirely irrelevant to the CH; rather, we argue, they can shed light on how deeply ingrained a phonemic distinction is in the phonological system. As an example, we provide a case study from Pulaar [ATR] harmony, which has previously been claimed to be problematic for the CH.
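    The entropy measure described above can be illustrated with a minimal sketch. This is a simplified, hypothetical calculation that pools occurrence counts of the two sounds over all environments, rather than weighting environment by environment as in Hall's (2009) fuller formulation.

```python
import math

def pair_entropy(count_a, count_b):
    """Entropy (in bits) of the choice between two sounds:
    0 = fully predictable (complementary distribution),
    1 = fully unpredictable (free binary variation)."""
    total = count_a + count_b
    h = 0.0
    for c in (count_a, count_b):
        p = c / total
        if p > 0:
            h -= p * math.log2(p)
    return h

# Complementary distribution: only one sound ever occurs -> entropy 0.
h_allophones = pair_entropy(100, 0)
# Free variation: both sounds equally likely -> maximal entropy 1.
h_phonemes = pair_entropy(50, 50)
# Marginal contrast: mostly predictable -> intermediate entropy.
h_marginal = pair_entropy(90, 10)
```

    The categorical line the authors propose falls between the first case (exactly zero entropy) and the other two (any non-zero entropy).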

  8. The Stem Cell Hypothesis of Aging

    Directory of Open Access Journals (Sweden)

    Anna Meiliana

    2010-04-01

    Full Text Available BACKGROUND: There is probably no single way to age. Indeed, so far there is no single accepted explanation of, or mechanism for, aging (although more than 300 theories have been proposed). There is an overall decline in tissue regenerative potential with age, and the question arises as to whether this is due to the intrinsic aging of stem cells or rather to the impairment of stem cell function in the aged tissue environment. CONTENT: Recent data suggest that we age, in part, because our self-renewing stem cells grow old as a result of heritable intrinsic events, such as DNA damage, as well as extrinsic forces, such as changes in their supporting niches. Mechanisms that suppress the development of cancer, such as senescence and apoptosis, which rely on telomere shortening and the activities of p53 and p16INK4a, may also induce an unwanted consequence: a decline in the replicative function of certain stem cell types with advancing age. This decreased regenerative capacity points to the stem cell hypothesis of aging. SUMMARY: Recent evidence suggests that we grow old partly because our stem cells grow old as a result of mechanisms that suppress the development of cancer over a lifetime. We believe that a further, more precise mechanistic understanding of this process will be required before this knowledge can be translated into human anti-aging therapies. KEYWORDS: stem cells, senescence, telomere, DNA damage, epigenetic, aging.

  9. Confabulation: Developing the 'emotion dysregulation' hypothesis.

    Science.gov (United States)

    Turnbull, Oliver H; Salas, Christian E

    2017-02-01

    Confabulations offer unique opportunities for establishing the neurobiological basis of delusional thinking. As regards causal factors, a review of the confabulation literature suggests that neither amnesia nor executive impairment can be the sole (or perhaps even the primary) cause of all delusional beliefs - though they may act in concert with other factors. A key perspective in the modern literature is that many delusions have an emotionally positive or 'wishful' element, that may serve to modulate or manage emotional experience. Some authors have referred to this perspective as the 'emotion dysregulation' hypothesis. In this article we review the theoretical underpinnings of this approach, and develop the idea by suggesting that the positive aspects of confabulatory states may have a role in perpetuating the imbalance between cognitive control and emotion. We draw on existing evidence from fields outside neuropsychology, to argue for three main causal factors: that positive emotions are related to more global or schematic forms of cognitive processing; that positive emotions influence the accuracy of memory recollection; and that positive emotions make people more susceptible to false memories. These findings suggest that the emotions that we want to feel (or do not want to feel) can influence the way we reconstruct past experiences and generate a sense of self - a proposition that bears on a unified theory of delusional belief states. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  10. Evolutionary hypothesis for Chiari type I malformation.

    Science.gov (United States)

    Fernandes, Yvens Barbosa; Ramina, Ricardo; Campos-Herrera, Cynthia Resende; Borges, Guilherme

    2013-10-01

    Chiari I malformation (CM-I) is classically defined as a cerebellar tonsillar herniation (≥5 mm) through the foramen magnum. A decreased posterior fossa volume, mainly due to basioccipital hypoplasia and sometimes platybasia, leads to posterior fossa overcrowding and consequently cerebellar herniation. Regardless of radiological findings, embryological or genetic hypotheses, or any other postulations, the real cause behind this malformation is not yet well elucidated and remains largely unknown. The aim of this paper is to approach CM-I from a broader and new perspective, conjoining anthropology, genetics and neurosurgery, with special focus on the substantial changes that have occurred in the posterior cranial base through human evolution. Important evolutionary allometric changes occurred during brain expansion, and genetic studies of human evolution have demonstrated an unexpectedly high rate of gene flow interchange, and possibly interbreeding, during this process. Based upon this review, we hypothesize that CM-I may be the result of an evolutionary anthropological imprint, caused by evolving species populations that eventually met each other and mingled in the last 1.7 million years. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Environmental Kuznets Curve Hypothesis. A Survey

    International Nuclear Information System (INIS)

    Dinda, Soumyananda

    2004-01-01

    The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted-U-shaped relationship between different pollutants and per capita income, i.e., environmental pressure increases up to a certain level as income goes up; after that, it decreases. An EKC actually reveals how a technically specified measurement of environmental quality changes as the fortunes of a country change. A sizeable literature on the EKC has grown in the recent period. The common point of all the studies is the assertion that environmental quality deteriorates at the early stages of economic development/growth and subsequently improves at the later stages. In other words, environmental pressure increases faster than income at early stages of development and slows down relative to GDP growth at higher income levels. This paper reviews theoretical developments and empirical studies dealing with the EKC phenomenon. Possible explanations for the EKC are seen in (1) the progress of economic development, from a clean agrarian economy to a polluting industrial economy to a clean service economy; and (2) the tendency of people with higher incomes to have a higher preference for environmental quality. Evidence for the existence of the EKC has been questioned from several corners; only some air quality indicators, especially local pollutants, show evidence of an EKC. Moreover, even where an EKC is empirically observed, there is still no agreement in the literature on the income level at which environmental degradation starts declining. This paper provides an overview of the EKC literature: background history, conceptual insights, policy implications, and the conceptual and methodological critique.
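    The reduced-form test behind most empirical EKC studies regresses a pollution measure on income and income squared; an inverted U requires a positive linear and a negative quadratic coefficient, with the turning point at the income level where degradation starts declining. A minimal sketch on synthetic data (all numbers are illustrative, not real estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: pollution follows an inverted U in log income,
# peaking at log-income = 9 by construction (illustration only).
x = rng.uniform(6, 12, 200)                          # log per-capita income
y = -(x - 9.0) ** 2 + 10 + rng.normal(0, 0.5, 200)   # pollution index

# Quadratic regression: np.polyfit returns coefficients highest degree
# first, so an inverted U shows up as b2 < 0 and b1 > 0.
b2, b1, b0 = np.polyfit(x, y, 2)

# Turning point: income level at which environmental pressure peaks.
turning_point = -b1 / (2 * b2)
```

    In the literature the same regression is typically run with country and year fixed effects; the sign pattern and turning-point formula are unchanged.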

  12. DAMPs, ageing, and cancer: The 'DAMP Hypothesis'.

    Science.gov (United States)

    Huang, Jin; Xie, Yangchun; Sun, Xiaofang; Zeh, Herbert J; Kang, Rui; Lotze, Michael T; Tang, Daolin

    2015-11-01

    Ageing is a complex and multifactorial process characterized by the accumulation of many forms of damage at the molecular, cellular, and tissue level with advancing age. Ageing increases the risk of the onset of chronic inflammation-associated diseases such as cancer, diabetes, stroke, and neurodegenerative disease. In particular, ageing and cancer share some common origins and hallmarks such as genomic instability, epigenetic alteration, aberrant telomeres, inflammation and immune injury, reprogrammed metabolism, and degradation system impairment (including within the ubiquitin-proteasome system and the autophagic machinery). Recent advances indicate that damage-associated molecular pattern molecules (DAMPs) such as high mobility group box 1, histones, S100, and heat shock proteins play location-dependent roles inside and outside the cell. These provide interaction platforms at molecular levels linked to common hallmarks of ageing and cancer. They can act as inducers, sensors, and mediators of stress through individual plasma membrane receptors, intracellular recognition receptors (e.g., advanced glycosylation end product-specific receptors, AIM2-like receptors, RIG-I-like receptors, and NOD1-like receptors, and toll-like receptors), or following endocytic uptake. Thus, the DAMP Hypothesis is novel and complements other theories that explain the features of ageing. DAMPs represent ideal biomarkers of ageing and provide an attractive target for interventions in ageing and age-associated diseases. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Identity of Particles and Continuum Hypothesis

    Science.gov (United States)

    Berezin, Alexander A.

    2001-04-01

    Why are all electrons the same? Unlike other objects, particles and atoms (of the same isotope) are forbidden to have individuality or a personal history (or to reveal their hidden variables, even if they do have them). Or at least, what we commonly call physics has so far been unable to disprove particles' sameness (Berezin and Nakhmanson, Physics Essays, 1990). Consider two opposing hypotheses: (A) particles are indeed absolutely the same, or (B) they do have individuality, but it is beyond our capacity to demonstrate it. This dilemma sounds akin to the undecidability of the Continuum Hypothesis on the existence (or not) of intermediate cardinalities between the integers and the reals (P. Cohen): both its yes and its no are consistent. Thus, the (alleged) sameness of electrons and atoms may be a physical translation (embodiment) of this fundamental Goedelian undecidability. Experiments are unlikely to help: even if we find that all electrons are the same to within 30 decimal digits, could their masses (or charges) still differ in the 100th digit? Within (B), personalized, informationally rich (infinitely rich?) digital tails (starting at, say, the 100th decimal) may carry an individual record of each particle's history. Within (A), the parameters (m, q) are indeed exactly the same in all digits, and their sameness is based on some inherent (meta)physical principle akin to Platonism or Eddington-type numerology.

  14. Environmental Kuznets Curve Hypothesis. A Survey

    Energy Technology Data Exchange (ETDEWEB)

    Dinda, Soumyananda [Economic Research Unit, Indian Statistical Institute, 203, B.T. Road, Kolkata-108 (India)

    2004-08-01

    The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted-U-shaped relationship between different pollutants and per capita income, i.e., environmental pressure increases up to a certain level as income goes up; after that, it decreases. An EKC actually reveals how a technically specified measurement of environmental quality changes as the fortunes of a country change. A sizeable literature on the EKC has grown in the recent period. The common point of all the studies is the assertion that environmental quality deteriorates at the early stages of economic development/growth and subsequently improves at the later stages. In other words, environmental pressure increases faster than income at early stages of development and slows down relative to GDP growth at higher income levels. This paper reviews theoretical developments and empirical studies dealing with the EKC phenomenon. Possible explanations for the EKC are seen in (1) the progress of economic development, from a clean agrarian economy to a polluting industrial economy to a clean service economy; and (2) the tendency of people with higher incomes to have a higher preference for environmental quality. Evidence for the existence of the EKC has been questioned from several corners; only some air quality indicators, especially local pollutants, show evidence of an EKC. Moreover, even where an EKC is empirically observed, there is still no agreement in the literature on the income level at which environmental degradation starts declining. This paper provides an overview of the EKC literature: background history, conceptual insights, policy implications, and the conceptual and methodological critique.

  15. A NONPARAMETRIC HYPOTHESIS TEST VIA THE BOOTSTRAP RESAMPLING

    OpenAIRE

    Temel, Tugrul T.

    2001-01-01

    This paper adapts an existing nonparametric hypothesis test to the bootstrap framework. The test utilizes the nonparametric kernel regression method to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test makes it possible to approximate the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.
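    The bootstrap logic can be sketched minimally as follows. For simplicity this hypothetical example substitutes a difference-in-means statistic for the paper's kernel-regression distance measure; the resampling scheme under the null is the part being illustrated.

```python
import numpy as np

def bootstrap_test(sample_a, sample_b, n_boot=5000, seed=0):
    """Bootstrap test of H0: both samples come from the same
    distribution, using |difference in means| as the distance
    statistic (a stand-in for a kernel-regression distance)."""
    rng = np.random.default_rng(seed)
    observed = abs(sample_a.mean() - sample_b.mean())
    pooled = np.concatenate([sample_a, sample_b])
    n_a = len(sample_a)
    count = 0
    for _ in range(n_boot):
        # Resample under H0: both pseudo-samples drawn from the pooled data.
        resampled = rng.choice(pooled, size=len(pooled), replace=True)
        stat = abs(resampled[:n_a].mean() - resampled[n_a:].mean())
        if stat >= observed:
            count += 1
    return (count + 1) / (n_boot + 1)   # bootstrap p-value

rng = np.random.default_rng(1)
p_same = bootstrap_test(rng.normal(0, 1, 100), rng.normal(0, 1, 100))
p_diff = bootstrap_test(rng.normal(0, 1, 100), rng.normal(1, 1, 100))
```

    The bootstrap distribution of the statistic approximates its sampling distribution under the null, which is exactly the error term the asymptotic version of the test only approximates analytically.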

  16. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN than in a classical feedforward neural network to obtain a good result. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  17. Dynamics of neural cryptography.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
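    The bidirectional synchronization described above can be sketched for two tree parity machines with the standard sizes K = 3 hidden units, N = 100 inputs per unit and weight bound L = 3. This is a minimal illustration of the Hebbian learning rule on agreement, not the full key-exchange protocol or its attack analysis.

```python
import numpy as np

K, N, L = 3, 100, 3                 # standard tree-parity-machine sizes

def tpm_output(w, x):
    """Hidden-unit outputs sigma and overall output tau = prod(sigma)."""
    sigma = np.sign(np.sum(w * x, axis=1)).astype(int)
    sigma[sigma == 0] = -1          # tie-breaking convention
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Adjust only hidden units that agree with tau; weights stay in [-L, L]."""
    for i in range(K):
        if sigma[i] == tau:
            w[i] = np.clip(w[i] + tau * x[i], -L, L)

rng = np.random.default_rng(0)
wa = rng.integers(-L, L + 1, (K, N))   # party A's secret weights
wb = rng.integers(-L, L + 1, (K, N))   # party B's secret weights

synced_at = None
for step in range(1, 100_000):
    x = rng.choice([-1, 1], size=(K, N))    # public random input
    sa, ta = tpm_output(wa, x)
    sb, tb = tpm_output(wb, x)
    if ta == tb:                            # attractive step on agreement only
        hebbian_update(wa, x, sa, ta)
        hebbian_update(wb, x, sb, tb)
    if np.array_equal(wa, wb):
        synced_at = step                    # identical weights form a shared key
        break
```

    The mutual updates act as the attractive force of the random-walk picture; an attacker who can only learn unidirectionally synchronizes much more slowly, which is what the protocol's security rests on.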

  18. Dynamics of neural cryptography

    International Nuclear Information System (INIS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-01-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible

  20. SOXE neofunctionalization and elaboration of the neural crest during chordate evolution

    Science.gov (United States)

    Tai, Andrew; Cheung, Martin; Huang, Yong-Heng; Jauch, Ralf; Bronner, Marianne E.; Cheah, Kathryn S. E.

    2016-01-01

    During chordate evolution, two genome-wide duplications facilitated the acquisition of vertebrate traits, including the emergence of neural crest cells (NCCs); neofunctionalization of the duplicated genes is thought to have facilitated development of craniofacial structures and the peripheral nervous system. How these duplicated genes evolved and acquired the ability to specify the NC and its derivatives is largely unknown. Vertebrate SoxE paralogues, most notably Sox9/10, are essential for NC induction, delamination and lineage specification. In contrast, the basal chordate amphioxus has a single SoxE gene and lacks NC-like cells. Here, we test the hypothesis that duplication and divergence of an ancestral SoxE gene may have facilitated elaboration of NC lineages. By using an in vivo expression assay to compare the effects of AmphiSoxE and vertebrate Sox9 on NC development, we demonstrate that all SOXE proteins possess similar DNA binding and homodimerization properties and can induce NCCs. However, AmphiSOXE is less efficient than SOX9 in transactivation activity and in the ability to preferentially promote glial over neuronal fate, a difference that lies within the combined properties of the amino-terminal and transactivation domains. We propose that acquisition of AmphiSoxE expression in the neural plate border led to NCC emergence, while duplication and divergence produced advantageous mutations in vertebrate homologues, promoting elaboration of NC traits. PMID:27734831

  1. Updating the mild encephalitis hypothesis of schizophrenia.

    Science.gov (United States)

    Bechter, K

    2013-04-05

    Schizophrenia seems to be a heterogeneous disorder. Emerging evidence indicates that low level neuroinflammation (LLNI) may not occur infrequently. Many infectious agents with low overall pathogenicity are risk factors for psychoses including schizophrenia and for autoimmune disorders. According to the mild encephalitis (ME) hypothesis, LLNI represents the core pathogenetic mechanism in a schizophrenia subgroup that has syndromal overlap with other psychiatric disorders. ME may be triggered by infections, autoimmunity, toxicity, or trauma. A 'late hit' and gene-environment interaction are required to explain major findings about schizophrenia, and both aspects would be consistent with the ME hypothesis. Schizophrenia risk genes stay rather constant within populations despite a resulting low number of progeny; this may result from advantages associated with risk genes, e.g., an improved immune response, which may act protectively within changing environments, although they are associated with the disadvantage of increased susceptibility to psychotic disorders. Specific schizophrenic symptoms may arise with instances of LLNI when certain brain functional systems are involved, in addition to being shaped by pre-existing liability factors. Prodrome phase and the transition to a diseased status may be related to LLNI processes emerging and varying over time. The variability in the course of schizophrenia resembles the varying courses of autoimmune disorders, which result from three required factors: genes, the environment, and the immune system. Preliminary criteria for subgrouping neurodevelopmental, genetic, ME, and other types of schizophrenias are provided. A rare example of ME schizophrenia may be observed in Borna disease virus infection. Neurodevelopmental schizophrenia due to early infections has been estimated by others to explain approximately 30% of cases, but the underlying pathomechanisms of transition to disease remain in question. LLNI (e.g. from

  2. [Psychodynamic hypothesis about suicidality in elderly men].

    Science.gov (United States)

    Lindner, Reinhard

    2010-08-01

    Old men are overrepresented among all suicides. In contrast, only very few elderly men find their way to specialised treatment facilities, and elderly people accept psychotherapy less often than younger persons. Accounts of the psychodynamics of suicidality in old men are therefore rare and mostly case-based. By means of a stepwise reconstructable qualitative case comparison of five randomly chosen elderly suicidal men with ideal types of (younger) suicidal men concerning biography, suicidal symptoms and transference, psychodynamic hypotheses of suicidality in elderly men are developed. All patients came into psychotherapy in a specialised academic out-patient clinic for psychodynamic treatment of acute and chronic suicidality. The five elderly suicidal men predominantly lived in long-term, conflict-laden sexual relationships and also had ambivalent relationships with their children. Suicidality in old age refers to lifelong intrapsychic conflicts concerning (male) identity, self-esteem and a core conflict between wishes for fusion and separation. The body takes on a central role in suicidal experience as a defensive instance modified by age and/or physical illness: it brings to consciousness aggressive and envious impulses, as well as feelings of emptiness and insecurity, which have to be warded off again by projection into the body. In the transference relationship there is, on the one hand, the regular transference and, on the other, an age-specific reversed transference, each with its countertransference reactions. The chosen methodological approach serves the systematic generation of hypotheses with a higher degree of evidence than hypotheses generated from single case studies. Georg Thieme Verlag KG Stuttgart - New York.

  3. Atopic dermatitis and the hygiene hypothesis revisited.

    Science.gov (United States)

    Flohr, Carsten; Yeo, Lindsey

    2011-01-01

    We published a systematic review on atopic dermatitis (AD) and the hygiene hypothesis in 2005. Since then, the body of literature has grown significantly. We therefore repeated our systematic review to examine the evidence from population-based studies for an association between AD risk and specific infections, childhood immunizations, the use of antibiotics and environmental exposures that lead to a change in microbial burden. Medline was searched from 1966 until June 2010 to identify relevant studies. We found an additional 49 papers suitable for inclusion. There is evidence to support an inverse relationship between AD and endotoxin, early day care, farm animal and dog exposure in early life. Cat exposure in the presence of skin barrier impairment is positively associated with AD. Helminth infection at least partially protects against AD. This is not the case for viral and bacterial infections, but consumption of unpasteurized farm milk seems protective. Routine childhood vaccinations have no effect on AD risk. The positive association between viral infections and AD found in some studies appears confounded by antibiotic prescription, which has been consistently associated with an increase in AD risk. There is convincing evidence for an inverse relationship between helminth infections and AD but not for other pathogens. The protective effect seen with early day care, endotoxin, unpasteurized farm milk and animal exposure is likely to be due to a general increase in exposure to non-pathogenic microbes. This would also explain the risk increase associated with the use of broad-spectrum antibiotics. Future studies should assess skin barrier gene mutation carriage and phenotypic skin barrier impairment, as gene-environment interactions are likely to impact on AD risk. Copyright © 2011 S. Karger AG, Basel.

  4. Research progress on neural mechanisms of primary insomnia by MRI

    Directory of Open Access Journals (Sweden)

    Man WANG

    2018-04-01

    In recent years, more and more research has focused on the neural mechanisms of primary insomnia (PI), especially with the development and application of MRI, and studies of brain structure and function related to primary insomnia have become increasingly in-depth. According to the hyperarousal hypothesis, there are structural, functional and metabolic abnormalities in certain cortical and subcortical brain regions of primary insomnia patients, including the amygdala, hippocampus, cingulate gyrus, insular lobe, frontal lobe and parietal lobe. This paper reviews the research progress on the neural mechanisms of primary insomnia using MRI. DOI: 10.3969/j.issn.1672-6731.2018.03.003

  5. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of problems that cannot be handled by programmers using algorithmic strategies. As a particular case of a Neural Net, a Multilayer Perceptron developed by programming in C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions for hardware implementations. (Author) 15 refs.

  6. ANT Advanced Neural Tool

    International Nuclear Information System (INIS)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-01-01

    This paper describes a practical introduction to the use of Artificial Neural Networks. Artificial Neural Nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, due to the high degree of difficulty of problems that cannot be handled by programmers using algorithmic strategies. As a particular case of a Neural Net, a Multilayer Perceptron developed by programming in C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions for hardware implementations. (Author) 15 refs

  7. Why Traditional Expository Teaching-Learning Approaches May Founder? An Experimental Examination of Neural Networks in Biology Learning

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yong-Ju

    2011-01-01

    Using functional magnetic resonance imaging (fMRI), this study investigates and discusses neurological explanations for, and the educational implications of, the neural network activations involved in hypothesis-generating and hypothesis-understanding for biology education. Two sets of task paradigms about biological phenomena were designed:…

  8. Introduction to neural networks with electric power applications

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    This is an introduction to the general field of neural networks with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in neural networks and to recognize those which might impact on electric power engineering. Beginning with a brief discussion of natural and artificial neurons, the characteristics of neural networks in general and how they learn, neural networks are compared with other modeling tools such as simulation and expert systems in order to provide guidance in selecting appropriate applications. In the power industry, possible applications include plant control, dispatching, and maintenance scheduling. In particular, neural networks are currently being investigated for enhancements to the Thermal Performance Advisor (TPA) which General Physics Corporation (GP) has developed to improve the efficiency of electric power generation

  9. The zinc dyshomeostasis hypothesis of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Travis J A Craddock

    Full Text Available Alzheimer's disease (AD is the most common form of dementia in the elderly. Hallmark AD neuropathology includes extracellular amyloid plaques composed largely of the amyloid-β protein (Aβ, intracellular neurofibrillary tangles (NFTs composed of hyper-phosphorylated microtubule-associated protein tau (MAP-tau, and microtubule destabilization. Early-onset autosomal dominant AD genes are associated with excessive Aβ accumulation, however cognitive impairment best correlates with NFTs and disrupted microtubules. The mechanisms linking Aβ and NFT pathologies in AD are unknown. Here, we propose that sequestration of zinc by Aβ-amyloid deposits (Aβ oligomers and plaques not only drives Aβ aggregation, but also disrupts zinc homeostasis in zinc-enriched brain regions important for memory and vulnerable to AD pathology, resulting in intra-neuronal zinc levels, which are either too low, or excessively high. To evaluate this hypothesis, we 1 used molecular modeling of zinc binding to the microtubule component protein tubulin, identifying specific, high-affinity zinc binding sites that influence side-to-side tubulin interaction, the sensitive link in microtubule polymerization and stability. We also 2 performed kinetic modeling showing zinc distribution in extra-neuronal Aβ deposits can reduce intra-neuronal zinc binding to microtubules, destabilizing microtubules. Finally, we 3 used metallomic imaging mass spectrometry (MIMS to show anatomically-localized and age-dependent zinc dyshomeostasis in specific brain regions of Tg2576 transgenic, mice, a model for AD. We found excess zinc in brain regions associated with memory processing and NFT pathology. Overall, we present a theoretical framework and support for a new theory of AD linking extra-neuronal Aβ amyloid to intra-neuronal NFTs and cognitive dysfunction. The connection, we propose, is based on β-amyloid-induced alterations in zinc ion concentration inside neurons affecting stability of

  10. The zinc dyshomeostasis hypothesis of Alzheimer's disease.

    Science.gov (United States)

    Craddock, Travis J A; Tuszynski, Jack A; Chopra, Deepak; Casey, Noel; Goldstein, Lee E; Hameroff, Stuart R; Tanzi, Rudolph E

    2012-01-01

    Alzheimer's disease (AD) is the most common form of dementia in the elderly. Hallmark AD neuropathology includes extracellular amyloid plaques composed largely of the amyloid-β protein (Aβ), intracellular neurofibrillary tangles (NFTs) composed of hyper-phosphorylated microtubule-associated protein tau (MAP-tau), and microtubule destabilization. Early-onset autosomal dominant AD genes are associated with excessive Aβ accumulation; however, cognitive impairment best correlates with NFTs and disrupted microtubules. The mechanisms linking Aβ and NFT pathologies in AD are unknown. Here, we propose that sequestration of zinc by Aβ-amyloid deposits (Aβ oligomers and plaques) not only drives Aβ aggregation, but also disrupts zinc homeostasis in zinc-enriched brain regions important for memory and vulnerable to AD pathology, resulting in intra-neuronal zinc levels which are either too low or excessively high. To evaluate this hypothesis, we 1) used molecular modeling of zinc binding to the microtubule component protein tubulin, identifying specific, high-affinity zinc binding sites that influence side-to-side tubulin interaction, the sensitive link in microtubule polymerization and stability. We also 2) performed kinetic modeling showing that zinc distribution in extra-neuronal Aβ deposits can reduce intra-neuronal zinc binding to microtubules, destabilizing microtubules. Finally, we 3) used metallomic imaging mass spectrometry (MIMS) to show anatomically-localized and age-dependent zinc dyshomeostasis in specific brain regions of Tg2576 transgenic mice, a model for AD. We found excess zinc in brain regions associated with memory processing and NFT pathology. Overall, we present a theoretical framework and support for a new theory of AD linking extra-neuronal Aβ amyloid to intra-neuronal NFTs and cognitive dysfunction. The connection, we propose, is based on β-amyloid-induced alterations in zinc ion concentration inside neurons affecting stability of polymerized

  11. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  12. Neural networks for aircraft control

    Science.gov (United States)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  13. Neural cryptography with feedback.

    Science.gov (United States)

    Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido

    2004-04-01

    Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.
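The abstract does not spell out the underlying protocol, which in this line of work is usually a pair of tree parity machines that synchronize through mutual learning. The sketch below shows the plain synchronization step (without the feedback extension the paper adds), using illustrative parameters K=3, N=10, L=3:

```python
import random

K, N, L = 3, 10, 3  # hidden units, inputs per unit, weight bound (illustrative)

def make_weights():
    return [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]

def output(w, x):
    # Each hidden unit emits the sign of its local field; the network
    # output is the product of the hidden signs.
    sigma = [1 if sum(wi * xi for wi, xi in zip(w[k], x[k])) > 0 else -1
             for k in range(K)]
    return sigma, sigma[0] * sigma[1] * sigma[2]

def update(w, x, sigma, tau):
    # Hebbian rule: only hidden units agreeing with the output move,
    # and weights are clipped to [-L, L].
    for k in range(K):
        if sigma[k] == tau:
            for i in range(N):
                w[k][i] = max(-L, min(L, w[k][i] + sigma[k] * x[k][i]))

random.seed(1)
A, B = make_weights(), make_weights()
steps = 0
while A != B and steps < 100000:
    x = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    sa, ta = output(A, x)
    sb, tb = output(B, x)
    if ta == tb:            # attractive step only when outputs agree
        update(A, x, sa, ta)
        update(B, x, sb, tb)
    steps += 1
print("synchronized after", steps, "steps")
```

The shared synchronized weights then serve as the key; the paper's contribution is a feedback mechanism that strengthens the repulsive forces acting on an attacker.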

  14. Comparison of 2D and 3D neural induction methods for the generation of neural progenitor cells from human induced pluripotent stem cells

    DEFF Research Database (Denmark)

    Chandrasekaran, Abinaya; Avci, Hasan; Ochalek, Anna

    2017-01-01

    Neural progenitor cells (NPCs) from human induced pluripotent stem cells (hiPSCs) are frequently induced using 3D culture methodologies; however, it is unknown whether spheroid-based (3D) neural induction is actually superior to monolayer (2D) neural induction. Our aim was to compare the efficiency......), cortical layer (TBR1, CUX1) and glial markers (SOX9, GFAP, AQP4). Electron microscopy demonstrated that both methods resulted in morphologically similar neural rosettes. However, quantification of NPCs derived from 3D neural induction exhibited an increase in the number of PAX6/NESTIN double positive cells...... the electrophysiological properties between the two induction methods. In conclusion, 3D neural induction increases the yield of PAX6+/NESTIN+ cells and gives rise to neurons with longer neurites, which might be an advantage for the production of forebrain cortical neurons, highlighting the potential of 3D neural...

  15. The neural correlates of beauty comparison.

    Science.gov (United States)

    Kedia, Gayannée; Mussweiler, Thomas; Mullins, Paul; Linden, David E J

    2014-05-01

    Beauty is in the eye of the beholder. How attractive someone is perceived to be depends on the individual or cultural standards to which this person is compared. But although comparisons play a central role in the way people judge the appearance of others, the brain processes underlying attractiveness comparisons remain unknown. In the present experiment, we tested the hypothesis that attractiveness comparisons rely on the same cognitive and neural mechanisms as comparisons of simple nonsocial magnitudes such as size. We recorded brain activity with functional magnetic resonance imaging (fMRI) while participants compared the beauty or height of two women or two dogs. Our data support the hypothesis of a common process underlying these different types of comparisons. First, we demonstrate that the distance effect characteristic of nonsocial comparisons also holds for attractiveness comparisons. Behavioral results indicated, for all our comparisons, longer response times for near than far distances. Second, the neural correlates of these distance effects overlapped in a frontoparietal network known for its involvement in processing simple nonsocial quantities. These results provide evidence for overlapping processes in the comparison of physical attractiveness and nonsocial magnitudes.

  16. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    The relationship between fault phenomena and fault causes is often nonlinear, which affects the accuracy of fault location, and neural networks are effective at dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and membership function are also given. Simulation results confirm the effectiveness of this method.
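The BP (backpropagation) component referred to above can be sketched in a few lines. The toy network below is an assumption-laden stand-in for the paper's model: the fuzzy rule base is omitted, the OR function serves as illustrative training data, and the 2-2-1 sigmoid architecture and learning rate are arbitrary choices.

```python
import math
import random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2-2-1 sigmoid network trained by per-sample gradient descent (toy setup)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR function
lr = 0.5

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                 # output-layer delta
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])    # hidden-layer delta
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
print("loss:", initial, "->", loss())
```

In the paper's method, the fuzzy rule base would sit alongside this network, with membership functions mapping crisp fault symptoms to the network's inputs.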

  17. Categorization of the processes contributing to ttH(H→bb) using deep neural networks with the CMS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Rath, Yannik; Erdmann, Martin; Fischer, Benjamin; Fischer, Robert; Heidemann, Fabian; Quast, Thorben; Rieger, Marcel [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In ttH(H→bb) analyses, event categorization is introduced to simultaneously constrain signal and background processes. A common procedure is to categorize events according to both their jet and b-tag multiplicities. The separation power of this approach is limited by the b-tagging efficiency. Especially ttH(H→bb) events with their high b-tag multiplicities suffer from migrations to background categories. In this presentation, we explore deep neural networks (DNNs) as a method of categorizing events according to their jet multiplicity and a DNN event class hypothesis. DNNs have the advantage of being able to learn discriminating features from low level variables, e.g. kinematic properties, and are naturally suited for multiclass classification problems. We compare the ttH signal separation achieved with the DNN method with that of a common categorization approach.

  18. Character recognition from trajectory by recurrent spiking neural networks.

    Science.gov (United States)

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  19. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
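The stochastic SRM firing that the memristor circuit mimics can be illustrated with an escape-rate model: the neuron fires probabilistically as a sigmoidal function of how far its membrane potential is above threshold. The threshold and noise-width values below are assumed for illustration only.

```python
import math
import random

random.seed(42)

def spike_prob(u, theta=1.0, delta=0.2):
    """Escape-rate firing probability of a stochastic SRM neuron.

    theta: firing threshold; delta: noise width (both illustrative).
    """
    return 1.0 / (1.0 + math.exp(-(u - theta) / delta))

def firing_rate(u, trials=10000):
    # Empirical fraction of trials on which the neuron fires at potential u.
    return sum(random.random() < spike_prob(u) for _ in range(trials)) / trials

low, high = firing_rate(0.5), firing_rate(1.5)
print("rate below threshold:", low, " rate above threshold:", high)
```

A sub-threshold potential fires rarely and a supra-threshold one fires almost always, with a graded transition in between; it is this graded randomness that enables spike-based probabilistic sampling.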

  20. The modulation of neural gain facilitates a transition between functional segregation and integration in the brain.

    Science.gov (United States)

    Shine, James M; Aburn, Matthew J; Breakspear, Michael; Poldrack, Russell A

    2018-01-29

    Cognitive function relies on a dynamic, context-sensitive balance between functional integration and segregation in the brain. Previous work has proposed that this balance is mediated by global fluctuations in neural gain by projections from ascending neuromodulatory nuclei. To test this hypothesis in silico, we studied the effects of neural gain on network dynamics in a model of large-scale neuronal dynamics. We found that increases in neural gain directed the network through an abrupt dynamical transition, leading to an integrated network topology that was maximal in frontoparietal 'rich club' regions. This gain-mediated transition was also associated with increased topological complexity, as well as increased variability in time-resolved topological structure, further highlighting the potential computational benefits of the gain-mediated network transition. These results support the hypothesis that neural gain modulation has the computational capacity to mediate the balance between integration and segregation in the brain. © 2018, Shine et al.
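The effect of neural gain can be illustrated with the common sigmoidal transfer function, where gain multiplies the input before the nonlinearity (the paper's actual model is a large-scale neuronal dynamics model, so this is only a single-unit sketch):

```python
import math

def activation(x, gain):
    """Gain-modulated sigmoid transfer function (illustrative)."""
    return 1.0 / (1.0 + math.exp(-gain * x))

# Higher gain sharpens the response around the inflection point (x = 0),
# pushing units toward more binary, strongly coupled behaviour -- the
# single-unit analogue of the network-level transition described above.
for g in (0.5, 1.0, 4.0):
    print("gain", g, "-> response at x=0.5:", round(activation(0.5, g), 3))
```

At x = 0 the output stays at 0.5 regardless of gain; away from zero, increasing gain drives the output toward 0 or 1, which is why global gain modulation can switch a network between weakly and strongly interacting regimes.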

  1. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.

    Science.gov (United States)

    Güçlü, Umut; van Gerven, Marcel A J

    2015-07-08

    Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream. Copyright © 2015 the authors 0270-6474/15/3510005-10$15.00/0.

  2. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  3. Neural dynamics in reconfigurable silicon.

    Science.gov (United States)

    Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E

    2010-10-01

    A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate and fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform for not only simulating detailed neuron dynamics but also uses the same to interface with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CAB), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-b accurate synapses in 9 mm(2).

  4. [Laughter and depression: hypothesis of pathogenic and therapeutic correlation].

    Science.gov (United States)

    Fonzi, Laura; Matteucci, Gabriella; Bersani, Giuseppe

    2010-01-01

    Laughter is a very common behaviour in everyday life; nevertheless, the scientific literature lacks studies which closely examine its nature. The aims of this study are to summarise present knowledge about laughter and its relation to depression, and to make hypotheses about its possible therapeutic function. In the first part of the review, the main existing data on the encephalic structures involved in the genesis of laughter, which show participation of cortical and subcortical regions, are reported, and the effects of laughter on the organism's physiological equilibrium, particularly on the neuroendocrine and immune systems, are described. In the second part, scientific evidence about the influence of depression on the ability to laugh is reviewed; it suggests that a reduction in laughter frequency is a symptom of the disease and that its increase may be used as a marker of clinical improvement. Finally, the main assumptions supporting the hypothesis of a therapeutic action of laughter on depression are examined: first of all, it has been demonstrated that laughter is able to improve mood directly and to moderate the negative consequences of stressful events on psychological well-being; in addition, it is possible that the stimulation of particular cerebral regions involved in the pathogenesis of depression, and the normalisation of hypothalamic-pituitary-adrenocortical system dysfunctions, both mediated by laughter, can efficiently counteract depressive symptoms; finally, the favourable effects of laughter on social relationships and physical health may have a role in influencing the ability of depressed patients to face the disease.

  5. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, or climate shifts. A plethora of methods is available for describing the hydraulic geometry of alluvial rivers in regime. However, comparisons of these methods using the same set of data seem to be lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. The notion that extremal hypotheses lacking bank stability criteria are inappropriate for use is therefore shown to be false, as both MFN and 'ME and MEDR' lack bank stability criteria. We also find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.
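The Froude number minimized by the MFN method is the standard open-channel quantity Fr = V / sqrt(gD). A small sketch for a rectangular channel, with illustrative discharge and geometry values, shows how the same discharge through the same flow area yields a lower Fr for deeper, narrower sections:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude(Q, B, D):
    """Froude number of a rectangular channel: Fr = V / sqrt(g*D).

    Q: discharge (m^3/s), B: width (m), D: depth (m) -- illustrative units.
    """
    V = Q / (B * D)            # mean velocity from continuity
    return V / math.sqrt(G * D)

# Illustrative scan: Q = 100 m^3/s through a fixed flow area of 50 m^2.
# Wider-and-shallower sections raise Fr; deeper sections lower it.
Q, area = 100.0, 50.0
for B in (10.0, 20.0, 50.0):
    D = area / B
    print("B =", B, "m, D =", D, "m -> Fr =", round(froude(Q, B, D), 3))
```

A regime method based on MFN would search candidate (B, D) pairs, subject to resistance and sediment-transport constraints, for the geometry minimizing Fr; the scan above only illustrates the objective function itself.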

  6. Neural network real time event selection for the DIRAC experiment

    CERN Document Server

    Kokkas, P; Tauscher, Ludwig; Vlachos, S

    2001-01-01

    The neural network first level trigger for the DIRAC experiment at CERN is presented. Both the neural network algorithm used and its actual hardware implementation are described. The system uses the fast plastic scintillator information of the DIRAC spectrometer. In 210 ns it selects events with two particles having low relative momentum. Such events are selected with an efficiency of more than 0.94. The corresponding rate reduction for background events is a factor of 2.5. (10 refs).

  7. Energy Threshold Hypothesis for Household Consumption

    International Nuclear Information System (INIS)

    Ortiz, Samira; Castro-Sitiriche, Marcel; Amador, Isamar

    2017-01-01

    A strong positive relationship between quality of life and electricity consumption is found in many studies of impoverished countries. However, previous work has shown that this positive relationship does not hold beyond a certain electricity consumption threshold. Consequently, there is a need to explore the possibility for communities to live with a sustainable level of energy consumption without sacrificing their quality of life. The Gallup-Healthways Report measures global citizens' wellbeing. This paper provides a new outlook using these elements to explore the relationship between the actual percentage of the population thriving in most countries and their energy consumption. A measure of efficiency is computed to determine an adjusted relative social value of energy, considering the variability in happy life years as a function of electric power consumption. The adjustment is performed so that single components do not dominate the measurement. It is interesting to note that the countries with the highest relative social value of energy are in the top 10 countries of the Gallup report.

  8. Stock returns predictability and the adaptive market hypothesis in emerging markets: evidence from India.

    Science.gov (United States)

    Hiremath, Gourishankar S; Kumari, Jyoti

    2014-01-01

This study addresses the question of whether the adaptive market hypothesis provides a better description of the behaviour of an emerging stock market such as India's. We employed linear and nonlinear methods to evaluate the hypothesis empirically. The linear tests show a cyclical pattern in linear dependence, suggesting that the Indian stock market switched between periods of efficiency and inefficiency. In contrast, the results from the nonlinear tests reveal strong evidence of nonlinearity in returns throughout the sample period, with a sign of tapering magnitude of nonlinear dependence in the recent period. The findings suggest that the Indian stock market is moving towards efficiency. The results provide additional insights into the association between financial crises, foreign portfolio investments and inefficiency. JEL classification: G14; G12; C12.

  9. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
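The consensual decision stage described in the abstract can be sketched in a few lines (an illustrative toy, not the paper's optimization-based weighting; all weights and probabilities below are invented):

```python
def consensual_decision(stage_outputs, weights):
    """Weighted combination of per-stage class-probability vectors.

    stage_outputs: one probability vector per stage network, each over
    the same classes; weights: one non-negative weight per stage.
    Returns the index of the winning class.
    """
    n_classes = len(stage_outputs[0])
    combined = [0.0] * n_classes
    for probs, w in zip(stage_outputs, weights):
        for k in range(n_classes):
            combined[k] += w * probs[k]
    return max(range(n_classes), key=lambda k: combined[k])

# Three stage networks, each classifying a different transform of the input:
stages = [[0.6, 0.3, 0.1],   # stage 1 favours class 0
          [0.2, 0.7, 0.1],   # stage 2 favours class 1
          [0.5, 0.4, 0.1]]   # stage 3 weakly favours class 0
decision = consensual_decision(stages, [1.0, 0.5, 1.0])  # class 0 wins
```

In the paper the weights themselves are tuned by an optimizer; here they are fixed by hand to keep the combination rule visible.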

  10. Extracting the Behaviorally Relevant Stimulus: Unique Neural Representation of Farnesol, a Component of the Recruitment Pheromone of Bombus terrestris.

    Directory of Open Access Journals (Sweden)

    Martin F Strube-Bloss

Full Text Available To trigger innate behavior, sensory neural networks are pre-tuned to extract biologically relevant stimuli. Many male-female or insect-plant interactions depend on this phenomenon. In particular, communication among individuals within social groups depends on innate behaviors. One example is the efficient recruitment of nest mates by successful bumblebee foragers. Returning foragers release a recruitment pheromone in the nest while they perform a 'dance' behavior to activate unemployed nest mates. A major component of this pheromone is the sesquiterpenoid farnesol. How farnesol is processed and perceived by the olfactory system has not yet been identified. It is likely that processing farnesol involves an innate mechanism for the extraction of relevant information to trigger a fast and reliable behavioral response. To test this hypothesis, we used population response analyses of 100 antennal lobe (AL) neurons recorded in live bumblebee workers under repeated stimulation with four behaviorally different, but chemically related, odorants (geraniol, citronellol, citronellal and farnesol). The analysis identified a unique neural representation of the recruitment pheromone component compared to the other odorants, which are predominantly emitted by flowers. The farnesol-induced population activity in the AL allowed a reliable separation of farnesol from all other chemically related odor stimuli we tested. We conclude that the farnesol-induced population activity may reflect a predetermined representation within the AL neural network, allowing efficient and fast extraction of a behaviorally relevant stimulus. Furthermore, the results show that population response analyses of multiple single AL units may provide a powerful tool to identify distinct representations of behaviorally relevant odors.

  11. Introducing the refined gravity hypothesis of extreme sexual size dimorphism

    Directory of Open Access Journals (Sweden)

    Corcobado Guadalupe

    2010-08-01

Full Text Available Abstract Background Explanations for the evolution of female-biased, extreme Sexual Size Dimorphism (SSD), which has puzzled researchers since Darwin, are still controversial. Here we propose an extension of the Gravity Hypothesis (i.e., the GH, which postulates a climbing advantage for small males) that, in conjunction with the fecundity hypothesis, appears to have the most general power to explain the evolution of SSD in spiders so far. In this "Bridging GH" we propose that bridging locomotion (i.e., walking upside-down under own-made silk bridges) may be behind the evolution of extreme SSD. A biomechanical model shows that there is a physical constraint for large spiders to bridge. This should lead to a trade-off between other traits and dispersal, in which bridging would favor smaller sizes and other selective forces (e.g. fecundity selection in females) would favor larger sizes. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. We predicted that both large males and females would show a lower propensity to bridge, and that SSD would be negatively correlated with sexual dimorphism in bridging propensity. To test these hypotheses we experimentally induced bridging in males and females of 13 species of spiders belonging to the two clades in which bridging locomotion has evolved independently and in which most of the cases of extreme SSD in spiders are found. Results We found that (1) as the degree of SSD increased and females became larger, females tended to bridge less relative to males, and that (2) smaller males and females show a higher propensity to bridge. Conclusions Physical constraints make bridging inefficient for large spiders. Thus, in species where bridging is a very common mode of locomotion, small males, by being more efficient at bridging, will be competitively superior and enjoy more mating opportunities.
This "Bridging GH" helps to solve the controversial question of

  12. Continuity and change in children's longitudinal neural responses to numbers.

    Science.gov (United States)

    Emerson, Robert W; Cantlon, Jessica F

    2015-03-01

    Human children possess the ability to approximate numerical quantity nonverbally from a young age. Over the course of early childhood, children develop increasingly precise representations of numerical values, including a symbolic number system that allows them to conceive of numerical information as Arabic numerals or number words. Functional brain imaging studies of adults report that activity in bilateral regions of the intraparietal sulcus (IPS) represents a key neural correlate of numerical cognition. Developmental neuroimaging studies indicate that the right IPS develops its number-related neural response profile more rapidly than the left IPS during early childhood. One prediction that can be derived from previous findings is that there is longitudinal continuity in the number-related neural responses of the right IPS over development while the development of the left IPS depends on the acquisition of numerical skills. We tested this hypothesis using fMRI in a longitudinal design with children ages 4 to 9. We found that neural responses in the right IPS are correlated over a 1-2-year period in young children whereas left IPS responses change systematically as a function of children's numerical discrimination acuity. The data are consistent with the hypothesis that functional properties of the right IPS in numerical processing are stable over early childhood whereas the functions of the left IPS are dynamically modulated by the development of numerical skills. © 2014 John Wiley & Sons Ltd.

  13. Invariant recognition drives neural representations of action sequences.

    Directory of Open Access Journals (Sweden)

    Andrea Tacchetti

    2017-12-01

Full Text Available Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human-level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain.
These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences.

  14. Men’s Perception of Raped Women: Test of the Sexually Transmitted Disease Hypothesis and the Cuckoldry Hypothesis

    Directory of Open Access Journals (Sweden)

    Prokop Pavol

    2016-06-01

Full Text Available Rape is a recurrent adaptive problem of female humans and females of a number of non-human animals. Rape has various physiological and reproductive costs to the victim. The costs of rape are furthermore exaggerated by social rejection and blaming of the victim, particularly by men. The negative perception of raped women by men has received little attention from an evolutionary perspective. Across two independent studies, we investigated whether the risk of sexually transmitted diseases (the STD hypothesis, Hypothesis 1) or paternity uncertainty (the cuckoldry hypothesis, Hypothesis 2) influences the negative perception of raped women by men. Raped women received lower attractiveness scores than non-raped women, especially in long-term mate attractiveness. The perceived attractiveness of raped women was not influenced by the presence of experimentally manipulated STD cues on the faces of putative rapists. Women raped by three men received lower attractiveness scores than women raped by one man. These results provide stronger support for the cuckoldry hypothesis (Hypothesis 2) than for the STD hypothesis (Hypothesis 1). Single men perceived raped women as more attractive than men in a committed relationship did (Hypothesis 3), suggesting that mating opportunities mediate men's perception of victims of rape. Overall, our results suggest that the risk of cuckoldry, rather than the fear of disease transmission, underlies the negative perception of victims of rape by men.

  15. Efficiency of Microfinance Institutions in Sub – Saharan Africa: A ...

    African Journals Online (AJOL)

    2016-10-02

    Oct 2, 2016 ... major constraint on the development of the microfinance industry (Helms, 2006). The efficient ..... It also enables statistical tests of hypothesis to be performed. Hence, the ...... Cost efficiency in Australian Local Government: A.

  16. Neural Architectures for Control

    Science.gov (United States)

    Peterson, James K.

    1991-01-01

The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
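CMAC learning amounts to tile coding with LMS updates on the small set of active tiles, which is what makes the on-line training above cheap. A minimal one-dimensional sketch (the tiling counts, step size, and sine target are illustrative assumptions, not taken from the report):

```python
import math, random

def cmac_features(x, n_tilings=8, tiles=10, lo=0.0, hi=1.0):
    """Index of the active tile in each of the slightly offset tilings."""
    width = (hi - lo) / tiles
    idxs = []
    for t in range(n_tilings):
        offset = t * width / n_tilings        # each tiling shifted slightly
        i = int((x - lo + offset) / width)
        idxs.append(t * (tiles + 1) + min(i, tiles))
    return idxs

def predict(w, idxs):
    # CMAC output: sum of the weights of the active tiles only.
    return sum(w[i] for i in idxs)

# On-line LMS training against a toy target, one random sample at a time:
random.seed(0)
w = [0.0] * (8 * 11)
alpha = 0.1 / 8                               # step shared by the 8 active tiles
for _ in range(5000):
    x = random.random()
    idxs = cmac_features(x)
    err = math.sin(2 * math.pi * x) - predict(w, idxs)
    for i in idxs:
        w[i] += alpha * err
```

Because each update touches only the active tiles, the cost per step is constant, which is the property that made real-time training on a PC 386 plausible.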

  17. [Distinguishing the voice of self from others: the self-monitoring hypothesis of auditory hallucination].

    Science.gov (United States)

    Asai, Tomohisa; Tanno, Yoshihiko

    2010-08-01

Auditory hallucinations (AH), a psychopathological phenomenon where a person hears non-existent voices, commonly occur in schizophrenia. Recent cognitive and neuroscience studies suggest that AH may be the misattribution of one's own inner speech. Self-monitoring through neural feedback mechanisms allows individuals to distinguish between their own and others' actions, including speech. AH may be the result of an individual's inability to discriminate between their own speech and that of others. The present paper tries to integrate the three approaches (behavioral, brain, and model) proposed to explain the self-monitoring hypothesis of AH. In addition, we investigate the lateralization of self-other representation in the brain, as suggested by recent studies, and discuss future research directions.

  18. New Hypothesis for SOFC Ceramic Oxygen Electrode Mechanisms

    DEFF Research Database (Denmark)

    Mogensen, Mogens Bjerg; Chatzichristodoulou, Christodoulos; Graves, Christopher R.

    2016-01-01

    A new hypothesis for the electrochemical reaction mechanism in solid oxide cell ceramic oxygen electrodes is proposed based on literature including our own results. The hypothesis postulates that the observed thin layers of SrO-La2O3 on top of ceramic perovskite and other Ruddlesden-Popper...

  19. Assess the Critical Period Hypothesis in Second Language Acquisition

    Science.gov (United States)

    Du, Lihong

    2010-01-01

    The Critical Period Hypothesis aims to investigate the reason for significant difference between first language acquisition and second language acquisition. Over the past few decades, researchers carried out a series of studies to test the validity of the hypothesis. Although there were certain limitations in these studies, most of their results…

  20. An Exercise for Illustrating the Logic of Hypothesis Testing

    Science.gov (United States)

    Lawton, Leigh

    2009-01-01

    Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…

  1. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA
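As a rough illustration of a default Bayesian hypothesis test for ANOVA (this is the BIC approximation to the Bayes factor, a common shortcut, not the g-prior test the article develops):

```python
import math

def bic_bayes_factor(groups):
    """Approximate BF01 (evidence for the null of equal group means)
    from the BIC difference between the grand-mean and group-means models."""
    all_y = [y for g in groups for y in g]
    n, k = len(all_y), len(groups)
    grand = sum(all_y) / n
    ss_total = sum((y - grand) ** 2 for y in all_y)
    ss_within = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
    bic_null = n * math.log(ss_total / n) + 1 * math.log(n)   # one mean
    bic_full = n * math.log(ss_within / n) + k * math.log(n)  # k means
    return math.exp((bic_full - bic_null) / 2)   # BF01 > 1 favours the null

# Well-separated groups yield strong evidence against the null:
bf01 = bic_bayes_factor([[1, 2, 1, 2, 1], [10, 11, 10, 11, 10]])
```

Unlike a p-value, BF01 quantifies evidence in both directions: identical groups push it above 1 (support for the null), separated groups push it toward 0.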

  2. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

This research paper discusses the method of testing a nonlinear hypothesis using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing a nonlinear hypothesis using the iterative NLLS estimator based on nonlinear studentized residuals has been proposed. In this research article an innovative method of testing a nonlinear hypothesis using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide very innovative methods of testing a nonlinear hypothesis using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus the nonlinear regression model with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
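The mechanics of a Wald test of one nonlinear restriction can be sketched via the delta method. Here the restriction g(theta) = theta1*theta2 - 1 = 0, and the point estimates and covariance matrix are assumed values standing in for an NLLS fit (this is generic Wald-test mechanics, not the modified statistic of the paper):

```python
# Restriction: g(theta) = theta1 * theta2 - 1 = 0.
theta_hat = (2.05, 0.49)                 # point estimates (assumed NLLS output)
V = [[0.010, -0.002],                    # estimated covariance of theta_hat
     [-0.002, 0.001]]

g = theta_hat[0] * theta_hat[1] - 1.0    # restriction evaluated at theta_hat
G = (theta_hat[1], theta_hat[0])         # gradient of g (product rule)
# Delta-method variance of g(theta_hat): G V G'.
var_g = sum(G[i] * V[i][j] * G[j] for i in range(2) for j in range(2))
W = g * g / var_g                        # Wald statistic, ~ chi^2(1) under H0
reject = W > 3.84                        # 5% critical value of chi^2(1)
```

With several restrictions, g and G become a vector and a Jacobian, and W generalizes to g'(G V G')^-1 g with a chi-square degree of freedom per restriction.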

  3. The Younger Dryas impact hypothesis: A critical review

    NARCIS (Netherlands)

    van Hoesel, A.; Hoek, W.Z.; Pennock, G.M.; Drury, Martyn

    2014-01-01

    The Younger Dryas impact hypothesis suggests that multiple extraterrestrial airbursts or impacts resulted in the Younger Dryas cooling, extensive wildfires, megafaunal extinctions and changes in human population. After the hypothesis was first published in 2007, it gained much criticism, as the

  4. Is the stock market efficient?

    Science.gov (United States)

    Malkiel, B G

    1989-03-10

    A stock market is said to be efficient if it accurately reflects all relevant information in determining security prices. Critics have asserted that share prices are far too volatile to be explained by changes in objective economic events-the October 1987 crash being a case in point. Although the evidence is not unambiguous, reports of the death of the efficient market hypothesis appear premature.
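One standard way to probe the random-walk form of market efficiency is a variance-ratio check: under a random walk, q-period returns have q times the variance of 1-period returns, so the ratio sits near 1. A minimal sketch on simulated data (an illustration of the general idea, not Malkiel's analysis):

```python
import random

def variance_ratio(prices, q):
    """Var of q-period changes over q times the var of 1-period changes."""
    r1 = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]
    rq = [prices[i + q] - prices[i] for i in range(len(prices) - q)]
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return var(rq) / (q * var(r1))

# A simulated random walk: the ratio should be close to 1.
random.seed(1)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0, 1))
vr = variance_ratio(walk, 5)
```

Persistent trends push the ratio above 1 and mean reversion pushes it below, which is why deviations from 1 are read as evidence against the random-walk benchmark.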

  5. Multiple simultaneous fault diagnosis via hierarchical and single artificial neural networks

    International Nuclear Information System (INIS)

    Eslamloueyan, R.; Shahrokhi, M.; Bozorgmehri, R.

    2003-01-01

Process fault diagnosis involves interpreting the current status of the plant given sensor readings and process knowledge. There has been considerable work done in this area, with a variety of approaches being proposed for process fault diagnosis. Neural networks have been used to solve process fault diagnosis problems in chemical processes, as they are well suited for recognizing multi-dimensional nonlinear patterns. In this work, the use of Hierarchical Artificial Neural Networks in diagnosing multiple faults of a chemical process is discussed and compared with that of Single Artificial Neural Networks. The lower efficiency of Hierarchical Artificial Neural Networks, in comparison to Single Artificial Neural Networks, in process fault diagnosis is elaborated and analyzed. Also, the concept of a multi-level selection switch is presented and developed to improve the performance of hierarchical artificial neural networks. Simulation results indicate that application of the multi-level selection switch increases the performance of the hierarchical artificial neural networks considerably.

  6. Development of an accident diagnosis system using a dynamic neural network for nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jong Hyun; Seong, Poong Hyun

    2004-01-01

In this work, an accident diagnosis system using a dynamic neural network is developed. In order to help plant operators quickly identify the problem, perform diagnosis and initiate recovery actions ensuring the safety of the plant, many operator support systems and accident diagnosis systems have been developed. Neural networks have been recognized as a good method to implement an accident diagnosis system. However, conventional accident diagnosis systems that used neural networks did not consider the time factor sufficiently. If the neural network could be trained according to time, it would be possible to perform more efficient and detailed accident analysis. Therefore, this work suggests a dynamic neural network which has different features from existing dynamic neural networks. A simple accident diagnosis system is implemented in order to validate the dynamic neural network. After training of the prototype, several accident diagnoses were performed. The results show that the prototype can detect the accidents correctly with good performance.

  7. Sacred or Neural?

    DEFF Research Database (Denmark)

    Runehov, Anne Leona Cesarine

Are religious spiritual experiences merely the product of the human nervous system? Anne L.C. Runehov investigates the potential of contemporary neuroscience to explain religious experiences. Following the footsteps of Michael Persinger, Andrew Newberg and Eugene d'Aquili, she defines the terminological boundaries of "religious experiences" and explores the relevant criteria for the proper evaluation of scientific research, with a particular focus on the validity of reductionist models. Runehov's thesis is that the perspectives looked at do not necessarily exclude each other but can be merged. The question "sacred or neural?" becomes a statement "sacred and neural". The synergies thus produced provide manifold opportunities for interdisciplinary dialogue and research.

  8. IMPLEMENTATION OF NEURAL - CRYPTOGRAPHIC SYSTEM USING FPGA

    Directory of Open Access Journals (Sweden)

    KARAM M. Z. OTHMAN

    2011-08-01

Full Text Available Modern cryptography techniques are virtually unbreakable. As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data. The designed cryptography system is a conventional one that uses a single key for both encryption and decryption. The chosen algorithm is a stream cipher, which encrypts one bit at a time. The central problem in stream-cipher cryptography is the difficulty of generating a long unpredictable sequence of binary signals from a short random key. Pseudo random number generators (PRNG) have been widely used to construct this key sequence. Here, the pseudo random number generator was designed using an Artificial Neural Network (ANN). The ANN provides the required nonlinearity properties that increase the statistical randomness of the pseudo random generator. The learning algorithm of this neural network is the backpropagation learning algorithm. The learning process was done by a software program in Matlab (software implementation) to get the efficient weights. Then, the learned neural network was implemented using a field programmable gate array (FPGA).
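The idea of stretching a short key into a long binary keystream with a neural network can be sketched as follows. This toy uses fixed, invented weights and a simple threshold on the output; the paper's generator is trained by backpropagation, and no claim about the statistical quality of this sketch is made:

```python
import math

W1 = [[0.9, -1.3, 0.7],                  # hidden-layer weights (invented)
      [-0.5, 1.1, -0.8]]
W2 = [1.7, -2.3]                         # output-layer weights (invented)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

def step(state):
    """One network evaluation: emit a bit and feed the output back in."""
    hidden = [sig(sum(w * s for w, s in zip(row, state))) for row in W1]
    out = sig(sum(w * h for w, h in zip(W2, hidden)))
    return state[1:] + [out], 1 if out > 0.5 else 0   # shift new value in

def keystream(seed, n):
    """Iterate the network over its own output to emit n bits."""
    state, bits = list(seed), []
    for _ in range(n):
        state, b = step(state)
        bits.append(b)
    return bits

stream = keystream([0.2, 0.7, 0.4], 16)  # 16 keystream bits from a 3-value seed
```

The generator is deterministic in the seed, which is the property a stream cipher needs: sender and receiver regenerate the same keystream from the same short key.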

  9. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically, walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of the ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  10. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that, thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
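The core trick, treating the membrane potential as the differentiable signal while the spike itself is a threshold event, can be illustrated with a single leaky neuron (the constants, rate-coded target, and crude weight update are invented for illustration and are far simpler than the paper's backpropagation scheme):

```python
def forward(w, inputs, thresh=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron driven through one weight w."""
    v, spikes, potentials = 0.0, [], []
    for x in inputs:
        v = leak * v + w * x                 # membrane potential: differentiable
        potentials.append(v)
        if v >= thresh:                      # spike: a threshold discontinuity
            spikes.append(1)
            v = 0.0                          # reset after the spike
        else:
            spikes.append(0)
    return spikes, potentials

# Surrogate-gradient flavour of training: pretend d(spike)/d(v) is well
# behaved near threshold, so an error signal can flow through the
# membrane potential into w (reduced here to a rate-matching update):
inputs, w, target_rate, lr = [0.4] * 4, 1.0, 0.75, 0.5
for _ in range(50):
    spikes, _ = forward(w, inputs)
    w += lr * (target_rate - sum(spikes) / len(spikes))
```

Raising w pushes the potential across threshold more often, so the firing rate is a (piecewise-constant) function of w; the surrogate view smooths that step so gradients can be propagated through deep stacks of such neurons.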

  11. Vibration monitoring with artificial neural networks

    International Nuclear Information System (INIS)

    Alguindigue, I.

    1991-01-01

Vibration monitoring of components in nuclear power plants has been used for a number of years. This technique involves the analysis of vibration data coming from vital components of the plant to detect features which reflect the operational state of machinery. The analysis leads to the identification of potential failures and their causes, and makes it possible to perform efficient preventive maintenance. Early detection is important because it can decrease the probability of catastrophic failures, reduce forced outages, maximize utilization of available assets, increase the life of the plant, and reduce maintenance costs. This paper documents our work on the design of a vibration monitoring methodology based on neural network technology. This technology provides an attractive complement to traditional vibration analysis because of the potential of neural networks to operate in real-time mode and to handle data which may be distorted or noisy. Our efforts have been concentrated on the analysis and classification of vibration signatures collected from operating machinery. Two neural network algorithms were used in our project: the Recirculation algorithm for data compression and the Backpropagation algorithm to perform the actual classification of the patterns. Although this project is in the early stages of development, it indicates that neural networks may provide a viable methodology for monitoring and diagnostics of vibrating components. Our results to date are very encouraging.

  12. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and the pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
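The matrix-inversion view of 1-D deconvolution can be made concrete: build the convolution (Toeplitz) matrix H so that y = H x, then recover x with a pseudo-inverse (a sketch of the general setup with invented data, not the report's neural-network inverse):

```python
import numpy as np

kernel = np.array([1.0, 0.5, 0.25])           # blurring kernel (invented)
x_true = np.array([0.0, 1.0, 0.0, 2.0, 0.0])  # signal to recover
y = np.convolve(x_true, kernel)               # observed data, length 7

# Full-convolution matrix: column j carries the kernel starting at row j,
# so that y == H @ x_true.
H = np.zeros((len(y), len(x_true)))
for j in range(len(x_true)):
    H[j:j + len(kernel), j] = kernel

x_hat = np.linalg.pinv(H) @ y                 # pseudo-inverse deconvolution
```

With noiseless data and a full-rank H the recovery is exact; the LMS and neural-network alternatives compared in the report approximate this inverse iteratively.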

  13. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

The note addresses introduction to signal analysis and classification based on artificial feed-forward neural networks.

  14. Efficient Learning Design

    DEFF Research Database (Denmark)

    Godsk, Mikkel

This paper presents the current approach to implementing educational technology with learning design at the Faculty of Science and Technology, Aarhus University, by introducing the concept of 'efficient learning design'. The underlying hypothesis is that implementing learning design is more than engaging educators in the design process and developing teaching and learning; it is a shift in educational practice that potentially requires a stakeholder analysis and ultimately a business model for the deployment. What is most important is to balance the institutional, educator, and student perspectives and to consider all of these in conjunction in order to obtain a sustainable, efficient learning design. The approach to deploying learning design in terms of the concept of efficient learning design, the catalyst for educational development, i.e. the learning design model, and how it is being used...

  15. The planet beyond the plume hypothesis

    Science.gov (United States)

    Smith, Alan D.; Lewis, Charles

    1999-12-01

    but not counterflow, though convergent margin geometry may still induce propagating fractures which set up melting anomalies. Lateral migration of asthenospheric domains allows the sources of Pacific intraplate volcanism to be traced back to continental mantle eroded during the breakup of Gondwana and the amalgamation of Asia in the Paleozoic. Intraplate volcanism in the South Pacific therefore has a common Gondwanan origin to intraplate volcanism in the South Atlantic and Indian Oceans, hence the DUPAL anomaly is entirely of shallow origin. Such domains constitute a second order geochemical heterogeneity superimposed on a streaky/marble-cake structure arising from remixing of subducted crust with the convecting mantle. During the Proterozoic and Phanerozoic, remixing of slabs has buffered the evolution of the depleted mantle to a rate of 2.2 ɛNd units Ga⁻¹, with fractionation of Lu from Hf in the sediment component imparting the large range in ¹⁷⁶Hf/¹⁷⁷Hf relative to ¹⁴³Nd/¹⁴⁴Nd observed in MORB. Only the high ɛNd values of some Archean komatiites are compatible with derivation from unbuffered mantle. The existence of a very depleted reservoir is attributed to stabilisation of a large early continental crust through either obduction tectonics or slab melting regimes which reduced the efficiency of crustal recycling back into the mantle. Generation of komatiite is therefore a consequence of mantle composition, and is permitted in ocean ridge environments and/or under hydrous melting conditions. Correspondingly, as intraplate volcanism depends on survival of volatile-bearing sources, its appearance in the Middle Proterozoic corresponds to the time in the Earth's thermal evolution at which minerals such as phlogopite and amphibole could survive in off-ridge environments in the shallow asthenosphere.
The geodynamic evolution of the Earth was thus determined at convergent margins, not by plumes and hotspots, with the decline in thermal regime causing both a reduction...

  16. Thermal photovoltaic solar integrated system analysis using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ashhab, S. [Hashemite Univ., Zarqa (Jordan). Dept. of Mechanical Engineering

    2007-07-01

    The energy demand in Jordan is primarily met by petroleum products. As such, the development of renewable energy systems is quite attractive. In particular, solar energy is a promising renewable energy source in Jordan and has been used for food canning, paper production, air-conditioning and sterilization. Artificial neural networks (ANNs) have received significant attention due to their capabilities in forecasting, modelling of complex nonlinear systems and control. ANNs have been used for forecasting solar energy. This paper presented a study that examined a thermal photovoltaic solar integrated system that was built in Jordan. Historical input-output system data that was collected experimentally was used to train an ANN that predicted the collector, PV module, pump and total efficiencies. The model predicted the efficiencies well and can therefore be utilized to find the operating conditions of the system that will produce the maximum system efficiencies. The paper provided a description of the photovoltaic solar system including equations for PV module efficiency; pump efficiency; and total efficiency. The paper also presented data relevant to the system performance and neural networks. The results of a neural net model were also presented based on the thermal PV solar integrated system data that was collected. It was concluded that the neural net model of the thermal photovoltaic solar integrated system set the background for achieving the best system performance. 10 refs., 6 figs.

  17. The Market Efficiency of the Stock Market in India

    OpenAIRE

    Rahman, Sahnawaz

    2011-01-01

    One of the most significant events of the twenty-first century, especially for India, has been the revolution and reform of the capital and financial markets. The Efficient Market Hypothesis has attracted a number of studies in empirical finance, particularly in determining the market efficiency of emerging financial markets, which have produced conflicting and inconclusive outcomes. This paper tests the efficiency of the Indian capital market in its semi-strong form and weak form of the Efficient Market Hypothesis (EMH)...

  18. Predictive Control of Hydronic Floor Heating Systems using Neural Networks and Genetic Algorithms

    DEFF Research Database (Denmark)

    Vinther, Kasper; Green, Torben; Østergaard, Søren

    2017-01-01

    This paper presents the use of a neural network and a micro genetic algorithm to optimize future set-points in existing hydronic floor heating systems for improved energy efficiency. The neural network can be trained to predict the impact of changes in set-points on future room temperatures. Additio...... space is not guaranteed. Evaluation of the performance of multiple neural networks is performed, using different levels of information, and optimization results are presented on a detailed house simulation model.

  19. Concerns regarding a call for pluralism of information theory and hypothesis testing

    Science.gov (United States)

    Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.

    2007-01-01

    1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider simultaneously multiple hypotheses as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
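
    In its simplest form, the I-T approach advocated here amounts to computing an information criterion such as AIC for each candidate model and converting AIC differences into Akaike weights, the kind of direct evidence measure described above. The log-likelihoods and parameter counts below are invented illustrations, not the paper's ecological example:

```python
# AIC and Akaike weights for multimodel inference (toy numbers).
import math

def aic(log_lik, k):
    """Akaike Information Criterion, AIC = 2k - 2 ln L; lower is better."""
    return 2 * k - 2 * log_lik

def akaike_weights(aics):
    """Relative evidence for each model, w_i ∝ exp(-Δ_i / 2), summing to 1."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical fits: (maximized log-likelihood, number of parameters).
models = {"habitat": (-120.3, 3), "habitat+rain": (-118.9, 4)}
aics = {name: aic(ll, k) for name, (ll, k) in models.items()}
weights = akaike_weights(list(aics.values()))
```

    The weights can be read as the relative evidence for each model given the data and the candidate set, which is what permits inference from multiple models simultaneously rather than against a single null.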

  20. Efficient market hypothesis in emerging markets: Panel data evidence with multiple breaks and cross sectional dependence

    OpenAIRE

    Abd Halim Ahmad; Siti Nurazira Mohd Daud; W.N.W. Azman-Saini

    2010-01-01

    The purpose of this paper is to re-examine whether the mean reversion property holds for 15 emerging stock markets over the period 1985 to 2006. Utilizing a panel stationarity test that is able to account for multiple structural breaks and cross-sectional dependence, we find that the emerging stock markets follow a random walk process. However, further analysis of the individual series shows that the majority of stock prices in emerging markets are governed by a mean reverting process. This result, whic...

  1. Semanticized autobiographical memory and the default - executive coupling hypothesis of aging.

    Science.gov (United States)

    Spreng, R Nathan; Lockrow, Amber W; DuPre, Elizabeth; Setton, Roni; Spreng, Karen A P; Turner, Gary R

    2018-02-01

    As we age, the architecture of cognition undergoes a fundamental transition. Fluid intellectual abilities decline while crystalized abilities remain stable or increase. This shift has a profound impact across myriad cognitive and functional domains, yet the neural mechanisms remain under-specified. We have proposed that greater connectivity between the default network and executive control regions in lateral prefrontal cortex may underlie this shift, as older adults increasingly rely upon accumulated knowledge to support goal-directed behavior. Here we provide direct evidence for this mechanism within the domain of autobiographical memory. In a large sample of healthy adult participants (n = 103 Young; n = 80 Old) the strength of default - executive coupling reliably predicted more semanticized, or knowledge-based, recollection of autobiographical memories in the older adult cohort. The findings are consistent with the default - executive coupling hypothesis of aging and identify this shift in network dynamics as a candidate neural mechanism associated with crystalized cognition in later life that may signal adaptive capacity in the context of declining fluid cognitive abilities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach.

    Directory of Open Access Journals (Sweden)

    Sofia D Karamintziou

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.

  3. Algorithmic design of a noise-resistant and efficient closed-loop deep brain stimulation system: A computational approach.

    Science.gov (United States)

    Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S

    2017-01-01

    Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.

  4. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  5. Neural correlates of consciousness

    African Journals Online (AJOL)

    neural cells.1 Under this approach, consciousness is believed to be a product of the ... possible only when the 40 Hz electrical hum is sustained among the brain circuits, ... expect the brain stem ascending reticular activating system. (ARAS) and the ... related synchrony of cortical neurons.11 Indeed, stimulation of brainstem ...

  6. Neural Networks and Micromechanics

    Science.gov (United States)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  7. Introduction to neural networks

    International Nuclear Information System (INIS)

    Pavlopoulos, P.

    1996-01-01

    This lecture is a presentation of today's research in neural computation. Neural computation is inspired by knowledge from neuro-science. It draws its methods in large degree from statistical physics and its potential applications lie mainly in computer science and engineering. Neural networks models are algorithms for cognitive tasks, such as learning and optimization, which are based on concepts derived from research into the nature of the brain. The lecture first gives an historical presentation of neural networks development and interest in performing complex tasks. Then, an exhaustive overview of data management and networks computation methods is given: the supervised learning and the associative memory problem, the capacity of networks, the Perceptron networks, the functional link networks, the Madaline (Multiple Adalines) networks, the back-propagation networks, the reduced coulomb energy (RCE) networks, the unsupervised learning and the competitive learning and vector quantization. An example of application in high energy physics is given with the trigger systems and track recognition system (track parametrization, event selection and particle identification) developed for the CPLEAR experiment detectors from the LEAR at CERN. (J.S.). 56 refs., 20 figs., 1 tab., 1 appendix
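
    Of the architectures the lecture surveys, the Perceptron has the simplest learning rule: weights move in proportion to the classification error on each example. A minimal sketch on a toy AND task; the learning rate and epoch count are arbitrary choices, not from the lecture:

```python
# Classic Perceptron error-correction rule on the linearly separable AND task.
def perceptron_train(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Update only when the prediction is wrong (t - y is 0 otherwise).
            w[0] += lr * (t - y) * x1
            w[1] += lr * (t - y) * x2
            b += lr * (t - y)
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(and_data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in and_data]
```

    Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches a separating weight vector; the multilayer backpropagation networks discussed later in the lecture remove that separability restriction.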

  8. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.

  9. Neural systems for control

    National Research Council Canada - National Science Library

    Omidvar, Omid; Elliott, David L

    1997-01-01

    ... is reprinted with permission from A. Barto, "Reinforcement Learning," Handbook of Brain Theory and Neural Networks, M.A. Arbib, ed.. The MIT Press, Cambridge, MA, pp. 804-809, 1995. Chapter 4, Figures 4-5 and 7-9 and Tables 2-5, are reprinted with permission, from S. Cho, "Map Formation in Proprioceptive Cortex," International Jour...

  10. Neural underpinnings of music

    DEFF Research Database (Denmark)

    Vuust, Peter; Gebauer, Line K; Witek, Maria A G

    2014-01-01

    ...According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Fourth, empirical studies of neural and behavioral effects of syncopation, polyrhythm and groove will be reported, and we...

  11. Function approximation of tasks by neural networks

    International Nuclear Information System (INIS)

    Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.

    2008-01-01

    For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have been recently seen as attractive tools for developing efficient solutions for many real world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real world processes. In a previous contribution, we have used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust, multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
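
    A wavelet network of the kind described is linear in its output weights once the translations are fixed, so even plain gradient descent fits it. The sketch below uses the Mexican hat ψ(u) = (1 − u²)exp(−u²/2) as transfer function on an invented toy target; the centers, learning rate, and target function are assumptions, not the paper's setup:

```python
# One-hidden-layer wavelet network with a Mexican hat transfer function,
# fitted by stochastic gradient descent to a toy target (all settings assumed).
import math

def mexican_hat(u):
    """Mexican hat wavelet: psi(u) = (1 - u^2) * exp(-u^2 / 2)."""
    return (1.0 - u * u) * math.exp(-0.5 * u * u)

centers = [-2.0, -1.0, 0.0, 1.0, 2.0]        # fixed translations (assumed)
xs = [i / 10.0 - 3.0 for i in range(61)]     # sample grid on [-3, 3]
target = [math.exp(-x * x) for x in xs]      # invented task to approximate

w = [0.0] * len(centers)                     # output weights, the only free parameters
lr = 0.05
for _ in range(2000):
    for x, t in zip(xs, target):
        phi = [mexican_hat(x - c) for c in centers]
        err = sum(wi * p for wi, p in zip(w, phi)) - t
        w = [wi - lr * err * p for wi, p in zip(w, phi)]

mse = sum((sum(wi * mexican_hat(x - c) for wi, c in zip(w, centers)) - t) ** 2
          for x, t in zip(xs, target)) / len(xs)
```

    Since the model is linear in w, the squared-error objective is convex and the descent converges to the least-squares fit over the wavelet basis.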

  12. Artificial neural network detects human uncertainty

    Science.gov (United States)

    Hramov, Alexander E.; Frolov, Nikita S.; Maksimenko, Vladimir A.; Makarov, Vladimir V.; Koronovskii, Alexey A.; Garcia-Prieto, Juan; Antón-Toro, Luis Fernando; Maestú, Fernando; Pisarchik, Alexander N.

    2018-03-01

    Artificial neural networks (ANNs) are known to be a powerful tool for data analysis. They are used in social science, robotics, and neurophysiology for solving tasks of classification, forecasting, pattern recognition, etc. In neuroscience, ANNs allow the recognition of specific forms of brain activity from multichannel EEG or MEG data. This makes the ANN an efficient computational core for brain-machine systems. However, despite significant achievements of artificial intelligence in recognition and classification of well-reproducible patterns of neural activity, the use of ANNs for recognition and classification of patterns in neural networks still requires additional attention, especially in ambiguous situations. Accordingly, in this research, we demonstrate the efficiency of application of the ANN for classification of human MEG trials corresponding to the perception of bistable visual stimuli with different degrees of ambiguity. We show that along with classification of brain states associated with multistable image interpretations, in the case of significant ambiguity, the ANN can detect an uncertain state when the observer is in doubt about the image interpretation. With the obtained results, we describe the possible application of ANNs for detection of bistable brain activity associated with difficulties in the decision-making process.

  13. Environmental policy without costs? A review of the Porter hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Braennlund, Runar; Lundgren, Tommy. e-mail: runar.brannlund@econ.umu.se

    2009-03-15

    This paper reviews the theoretical and empirical literature connected to the so-called Porter Hypothesis; that is, the literature connected to the discussion about the relation between environmental policy and competitiveness. According to the conventional wisdom, environmental policy aiming to improve the environment through, for example, emission reductions does imply costs, since scarce resources must be diverted from somewhere else. However, this conventional wisdom has been challenged and questioned recently through what has been denoted the 'Porter hypothesis'. Those in the forefront of the Porter hypothesis challenge the conventional wisdom basically on the ground that resources are used inefficiently in the absence of the right kind of environmental regulations, and that the conventional neo-classical view is too static to take inefficiencies into account. The conclusions that can be made from this review are (1) that the theoretical literature can identify the circumstances and mechanisms that must exist for a Porter effect to occur, (2) that these circumstances are rather non-general, hence rejecting the Porter hypothesis in general, and (3) that the empirical literature gives no general support for the Porter hypothesis. Furthermore, a closer look at the 'Swedish case' reveals no support for the Porter hypothesis, in spite of the fact that Swedish environmental policy over the last 15-20 years seems to be in line with the prerequisites stated by the Porter hypothesis concerning environmental policy.

  14. The linear hypothesis - an idea whose time has passed

    International Nuclear Information System (INIS)

    Tschaeche, A.N.

    1995-01-01

    The linear no-threshold hypothesis is the basis for radiation protection standards in the United States. In the words of the National Council on Radiation Protection and Measurements (NCRP), the hypothesis is: "In the interest of estimating effects in humans conservatively, it is not unreasonable to follow the assumption of a linear relationship between dose and effect in the low dose regions for which direct observational data are not available." The International Commission on Radiological Protection (ICRP) stated the hypothesis in a slightly different manner: "One such basic assumption ... is that ... there is ... a linear relationship without threshold between dose and the probability of an effect." The hypothesis was necessary 50 yr ago when it was first enunciated because the dose-effect curve for ionizing radiation for effects in humans was not known. The ICRP and NCRP needed a model to extrapolate high-dose effects to low-dose effects. So the linear no-threshold hypothesis was born. Certain details of the history of the development and use of the linear hypothesis are presented. In particular, use of the hypothesis by the U.S. regulatory agencies is examined. Over time, the sense of the hypothesis has been corrupted. The corruption of the hypothesis into the current paradigm of "a little radiation, no matter how small, can and will harm you" is presented. The reasons the corruption occurred are proposed. The effects of the corruption are enumerated, specifically, the use of the corruption by the antinuclear forces in the United States and some of the huge costs to U.S. taxpayers due to the corruption. An alternative basis for radiation protection standards to assure public safety, based on the weight of scientific evidence on radiation health effects, is proposed.

  15. Preserving information in neural transmission.

    Science.gov (United States)

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.
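
    The comparison described here rests on discrete mutual information between stimulus and response, in bits. A minimal calculator; the 2×2 joint distributions in the usage lines are toy cases, not the recorded retina/LGN data:

```python
# Discrete mutual information I(S;R) in bits from a joint distribution
# given as a nested list p_joint[s][r].
import math

def mutual_information(p_joint):
    """I(S;R) = sum_{s,r} p(s,r) * log2( p(s,r) / (p(s) * p(r)) )."""
    ps = [sum(row) for row in p_joint]          # marginal over stimuli
    pr = [sum(col) for col in zip(*p_joint)]    # marginal over responses
    mi = 0.0
    for i, row in enumerate(p_joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (ps[i] * pr[j]))
    return mi

# Toy cases: a perfectly informative relay carries 1 bit per symbol,
# a response independent of the stimulus carries 0 bits.
perfect = mutual_information([[0.5, 0.0], [0.0, 0.5]])
independent = mutual_information([[0.25, 0.25], [0.25, 0.25]])
```

    "Preserved information transmission" in the record's sense corresponds to the output spike train achieving the same I(S;R) as the input despite fewer spikes, i.e., more bits per spike.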

  16. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the field of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks.  Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. The concepts of meta-cognition equipped with a self-regulated learning have been known to be the best human learning strategy. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  17. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  18. Biostatistics series module 2: Overview of hypothesis testing

    Directory of Open Access Journals (Sweden)

    Avijit Hazra

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference, which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P < 0.05, the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two, or more than two at a time). The same research question may be explored by more than one type of hypothesis test.
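
    The P value definition given above, the probability of a result at least as extreme as that observed when the null hypothesis is true, can be computed directly with an exact permutation test; the data values below are invented:

```python
# Exact two-sided permutation test on a difference in group means.
import itertools

control = [4.1, 3.9, 4.3, 4.0]   # invented baseline measurements
treated = [4.8, 5.1, 4.9, 5.2]   # invented treatment measurements
observed = abs(sum(treated) / len(treated) - sum(control) / len(control))

pooled = control + treated
n = len(control)
count = total = 0
# Enumerate every relabeling of the pooled data into two groups of size n.
for idx in itertools.combinations(range(len(pooled)), n):
    g1 = [pooled[i] for i in idx]
    g2 = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = abs(sum(g2) / len(g2) - sum(g1) / len(g1))
    total += 1
    if diff >= observed - 1e-12:   # "at least as extreme" as observed
        count += 1
p_value = count / total   # exact P value under the null of no group difference
```

    Here only the observed labeling and its mirror reach the observed difference, so the P value is 2/70 ≈ 0.029 and the null would be rejected at the conventional 0.05 level.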

  19. On the Keyhole Hypothesis: High Mutual Information between Ear and Scalp EEG

    Directory of Open Access Journals (Sweden)

    Kaare B. Mikkelsen

    2017-06-01

    We propose and test the keyhole hypothesis—that measurements from low-dimensional EEG, such as ear-EEG, reflect a broadly distributed set of neural processes. We formulate the keyhole hypothesis in information-theoretical terms. The experimental investigation is based on legacy data consisting of 10 subjects exposed to a battery of stimuli, including alpha-attenuation, auditory onset, and mismatch-negativity responses, and on a new medium-long EEG experiment involving 13 h of data acquisition. Linear models were estimated to lower-bound the scalp-to-ear capacity, i.e., predicting ear-EEG data from simultaneously recorded scalp EEG. A cross-validation procedure was employed to ensure unbiased estimates. We present several pieces of evidence in support of the keyhole hypothesis: there is high mutual information between data acquired at scalp electrodes and through the ear-EEG “keyhole,” and we show that the view—represented as a linear mapping—is stable across both time and mental states. Specifically, we find that ear-EEG data can be predicted reliably from scalp EEG. We also address the reverse view, and demonstrate that large portions of the scalp EEG can be predicted from ear-EEG, with the highest predictability achieved in the temporal regions and when using ear-EEG electrodes with a common reference electrode.
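    A minimal sketch of the core analysis, using purely synthetic signals and assumed dimensions (the study used real 10-subject EEG): a ridge-regularized linear map is fit from "scalp" channels to "ear" channels on one half of the data and evaluated on the held-out half.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup (sizes are assumptions): 5 latent neural sources
# project to 16 "scalp" channels and 2 "ear" channels, plus sensor noise.
n_samples, n_sources, n_scalp, n_ear = 2000, 5, 16, 2
sources = rng.standard_normal((n_samples, n_sources))
scalp = (sources @ rng.standard_normal((n_sources, n_scalp))
         + 0.3 * rng.standard_normal((n_samples, n_scalp)))
ear = (sources @ rng.standard_normal((n_sources, n_ear))
       + 0.3 * rng.standard_normal((n_samples, n_ear)))

# Two-fold split: fit on the first half, evaluate on the held-out half.
half = n_samples // 2
X_tr, X_te = scalp[:half], scalp[half:]
Y_tr, Y_te = ear[:half], ear[half:]

# Ridge-regularized least squares: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0  # penalty strength (assumed value)
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_scalp), X_tr.T @ Y_tr)
Y_hat = X_te @ W

# Correlation between predicted and recorded "ear" signals per channel.
for ch in range(n_ear):
    r = np.corrcoef(Y_hat[:, ch], Y_te[:, ch])[0, 1]
    print(f"ear channel {ch}: r = {r:.3f}")
```

    Because both electrode sets see the same latent sources through the synthetic "keyhole," the held-out correlations are high, mirroring the stable linear view reported in the study.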

  20. Neural Markers of Performance States in an Olympic Athlete: An EEG Case Study in Air-Pistol Shooting

    Directory of Open Access Journals (Sweden)

    Selenia di Fronso, Claudio Robazza, Edson Filho, Laura Bortoli, Silvia Comani, Maurizio Bertollo

    2016-06-01

    This study focused on identifying the neural markers underlying optimal and suboptimal performance experiences of an elite air-pistol shooter, based on the tenets of the multi-action plan (MAP) model. According to the MAP model’s assumptions, skilled athletes’ cortical patterns are expected to differ among optimal/automatic (Type 1), optimal/controlled (Type 2), suboptimal/controlled (Type 3), and suboptimal/automatic (Type 4) performance experiences. We collected performance (target pistol shots), cognitive-affective (perceived control, accuracy, and hedonic tone), and cortical activity (32-channel EEG) data from an elite shooter. Idiosyncratic descriptive analyses revealed differences in perceived accuracy between optimal and suboptimal performance states. Event-related desynchronization/synchronization analysis supported the notion that optimal/automatic performance experiences (Type 1) were characterized by a global synchronization of cortical arousal associated with the shooting task, whereas suboptimal/controlled states (Type 3) were underpinned by high cortical activity levels in the attentional brain network. Results are addressed in light of the neural efficiency hypothesis and reinvestment theory. Perceptual training recommendations aimed at restoring optimal performance levels are discussed.

  1. Neural Global Pattern Similarity Underlies True and False Memories.

    Science.gov (United States)

    Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui

    2016-06-22

    The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role for global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: whereas ER-nGPS in the parietal regions reflected semantic similarity and was scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation to those of other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originating from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain. Copyright © 2016 the authors 0270-6474/16/366792-11$15.00/0.
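    The ER-nGPS measure can be sketched with synthetic activation patterns (voxel counts, item counts, and noise levels below are assumptions, not the study's values): an item's retrieval pattern is compared with the encoding patterns of all studied items, and the mean similarity serves as a global-matching memory-strength signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "activation patterns": 30 studied items over 200 voxels,
# sharing a common (e.g., semantic) component plus item-unique structure.
n_voxels, n_studied = 200, 30
common = rng.standard_normal(n_voxels)
encoding = 0.6 * common + rng.standard_normal((n_studied, n_voxels))

def er_ngps(retrieval_pattern, encoding_patterns):
    """Encoding-retrieval neural global pattern similarity: the mean
    correlation between one item's retrieval pattern and the encoding
    patterns of ALL studied items (a global-matching signal)."""
    sims = [np.corrcoef(retrieval_pattern, e)[0, 1] for e in encoding_patterns]
    return float(np.mean(sims))

# An "old" probe: a noisy reinstatement of a studied item's pattern.
old_probe = encoding[0] + 0.5 * rng.standard_normal(n_voxels)
# A "new" probe: unrelated to everything that was studied.
new_probe = rng.standard_normal(n_voxels)

print("old probe ER-nGPS:", round(er_ngps(old_probe, encoding), 3))
print("new probe ER-nGPS:", round(er_ngps(new_probe, encoding), 3))
```

    Studied probes match the whole study list better than unstudied ones, which is the sense in which a global similarity signal can index memory strength.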

  2. Daily rainfall-runoff modelling by neural networks in semi-arid zone ...

    African Journals Online (AJOL)

    This research work assesses the efficiency of formal neural networks for modelling flows of the wadi Ouahrane basin from the non-linear rainfall-runoff relation. Two neural network models were optimized through supervised learning and compared in order to achieve this goal: the first model with input rain, and ...
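    A minimal stand-in for such a rainfall-runoff network, using synthetic data and an assumed threshold-type non-linearity rather than the wadi Ouahrane records: a one-hidden-layer network trained by full-batch gradient descent, scored with the Nash-Sutcliffe efficiency commonly used in hydrology.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily data (an assumption, for illustration): runoff responds
# non-linearly to rainfall, with almost no flow below a threshold.
rain = rng.uniform(0.0, 50.0, size=(500, 1))                 # mm/day
runoff = (np.maximum(rain - 10.0, 0.0) ** 1.3 / 10.0
          + 0.1 * rng.standard_normal((500, 1)))

# Normalize input and target, as is standard before training.
x = rain / 50.0
t = runoff / runoff.max()

# One-hidden-layer tanh network trained by gradient descent.
W1 = 0.5 * rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.2

for _ in range(20000):
    h = np.tanh(x @ W1 + b1)
    y = h @ W2 + b2
    err = y - t
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(x @ W1 + b1) @ W2 + b2
nse = 1.0 - ((pred - t) ** 2).sum() / ((t - t.mean()) ** 2).sum()
print(f"Nash-Sutcliffe efficiency: {nse:.3f}")
```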

  3. Bioprinting for Neural Tissue Engineering.

    Science.gov (United States)

    Knowlton, Stephanie; Anand, Shivesh; Shah, Twisha; Tasoglu, Savas

    2018-01-01

    Bioprinting is a method by which a cell-encapsulating bioink is patterned to create complex tissue architectures. Given the potential impact of this technology on neural research, we review the current state-of-the-art approaches for bioprinting neural tissues. While 2D neural cultures are ubiquitous for studying neural cells, 3D cultures can more accurately replicate the microenvironment of neural tissues. By bioprinting neuronal constructs, one can precisely control the microenvironment by specifically formulating the bioink for neural tissues, and by spatially patterning cell types and scaffold properties in three dimensions. We review a range of bioprinted neural tissue models and discuss how they can be used to observe how neurons behave, understand disease processes, develop new therapies and, ultimately, design replacement tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A hypothesis on the biological origins and social evolution of music and dance

    Directory of Open Access Journals (Sweden)

    Tianyan eWang

    2015-02-01

    The origins of music and musical emotions are still an enigma. Here I propose a comprehensive hypothesis on the origins and evolution of music, dance and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form—rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to the internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism’s survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) are evoked by music and dance, and have biological and social functions, which, in turn, promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is the combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance and speech. The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis

  5. A hypothesis on improving foreign accents by optimizing variability in vocal learning brain circuits.

    Science.gov (United States)

    Simmonds, Anna J

    2015-01-01

    Rapid vocal motor learning is observed when acquiring a language in early childhood, or learning to speak another language later in life. Accurate pronunciation is one of the hardest things for late learners to master and they are almost always left with a non-native accent. Here, I propose a novel hypothesis that this accent could be improved by optimizing variability in vocal learning brain circuits during learning. Much of the neurobiology of human vocal motor learning has been inferred from studies on songbirds. Jarvis (2004) proposed the hypothesis that, as in songbirds, there are two pathways in humans: one for learning speech (the striatal vocal learning pathway), and one for production of previously learnt speech (the motor pathway). Learning the new motor sequences necessary for accurate non-native pronunciation is challenging, and I argue that in late learners of a foreign language the vocal learning pathway becomes inactive prematurely. The motor pathway is engaged once again and learners maintain their original native motor patterns for producing speech, resulting in speaking with a foreign accent. Further, I argue that variability in neural activity within vocal motor circuitry generates vocal variability that supports accurate non-native pronunciation. Recent theoretical and experimental work on motor learning suggests that variability in motor movement is necessary for the development of expertise. I propose that there is little trial-by-trial variability when using the motor pathway. When using the vocal learning pathway, variability gradually increases, reflecting an exploratory phase in which learners try out different ways of pronouncing words, before decreasing and stabilizing once the "best" performance has been identified. The hypothesis proposed here could be tested using behavioral interventions that optimize variability and engage the vocal learning pathway for longer, with the prediction that this would allow learners to develop new motor

  6. A hypothesis on the biological origins and social evolution of music and dance.

    Science.gov (United States)

    Wang, Tianyan

    2015-01-01

    The origins of music and musical emotions are still an enigma. Here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form: rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to the internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) are evoked by music and dance, and have biological and social functions, which, in turn, promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is the combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech. The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically

  7. Energy-efficient neuromorphic classifiers

    OpenAIRE

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2015-01-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumption, comparable to that of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering el...

  8. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to developing realistic Artificial Neural Networks (ANNs). Unlike traditional firing-rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have uncovered relevant new computational principles in different living neural systems. Could ANNs benefit from some of these recent findings and gain novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs that include novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms has yet been used in the context of ANNs of spiking neurons. This paper provides a proof of concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence
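    The timing-based coding that spiking models rely on can be illustrated with a minimal leaky integrate-and-fire neuron (the parameter values are arbitrary; this is not the signature-network model itself): the membrane potential integrates input, leaks toward rest, and emits a spike whose precise time carries information.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler stepping.
dt, tau = 0.1, 10.0            # time step and membrane time constant, ms
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

def simulate_lif(current, n_steps=1000):
    """Return the spike times (ms) produced by a constant input current."""
    v, spikes = v_rest, []
    for step in range(n_steps):
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spikes.append(step * dt)   # precise spike time in ms
            v = v_reset                # reset after each spike
    return spikes

# A stronger input drives the neuron to spike earlier and more often.
weak, strong = simulate_lif(1.2), simulate_lif(2.0)
print(f"weak input:   {len(weak)} spikes, first at {weak[0]:.1f} ms")
print(f"strong input: {len(strong)} spikes, first at {strong[0]:.1f} ms")
```

    In a spiking ANN of the kind described above, such spike timings (rather than mean firing rates) are the substrate on which signatures and multicoding strategies operate.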

  9. Null but not void: considerations for hypothesis testing.

    Science.gov (United States)

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  10. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect

    KAUST Repository

    Demetrius, Lloyd A.; Magistretti, Pierre J.; Pellerin, Luc

    2015-01-01

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) An exponential increase with age; (b) Selective neuronal vulnerability; (c) Inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events—mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  11. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian

    2008-01-01

    Glahn, C. (2008). Cross-system log file analysis for hypothesis testing. Presented at Empowering Learners for Lifelong Competence Development: pedagogical, organisational and technological issues. 4th TENCompetence Open Workshop. April, 10, 2008, Madrid, Spain.

  12. Hypothesis Testing Using the Films of the Three Stooges

    Science.gov (United States)

    Gardner, Robert; Davidson, Robert

    2010-01-01

    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.

  13. Incidence of allergy and atopic disorders and hygiene hypothesis.

    Czech Academy of Sciences Publication Activity Database

    Bencko, V.; Šíma, Petr

    2017-01-01

    Vol. 2, 6 March (2017), Article No. 1244. ISSN 2474-1663 Institutional support: RVO:61388971 Keywords: allergy disorders * atopic disorders * hygiene hypothesis Subject RIV: EE - Microbiology, Virology OECD field: Microbiology

  14. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect

    KAUST Repository

    Demetrius, Lloyd A.

    2015-01-14

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) An exponential increase with age; (b) Selective neuronal vulnerability; (c) Inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events—mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  15. The Double-Deficit Hypothesis in Spanish Developmental Dyslexia

    Science.gov (United States)

    Jimenez, Juan E.; Hernandez-Valle, Isabel; Rodriguez, Cristina; Guzman, Remedios; Diaz, Alicia; Ortiz, Rosario

    2008-01-01

    The double-deficit hypothesis (DDH) of developmental dyslexia was investigated in seven- to twelve-year-old Spanish children. It was observed that the double-deficit (DD) group had the greatest difficulty with reading.

  16. Disrupting morphosyntactic and lexical semantic processing has opposite effects on the sample entropy of neural signals

    NARCIS (Netherlands)

    Fonseca, Andre; Boboeva, Vezha; Brederoo, Sanne; Baggio, Giosue

    2015-01-01

    Converging evidence in neuroscience suggests that syntax and semantics are dissociable in brain space and time. However, it is possible that partly disjoint cortical networks, operating in successive time frames, still perform similar types of neural computations. To test the alternative hypothesis,

  17. The Random-Walk Hypothesis on the Indian Stock Market

    OpenAIRE

    Ankita Mishra; Vinod Mishra; Russell Smyth

    2014-01-01

    This study tests the random walk hypothesis for the Indian stock market. Using 19 years of monthly data on six indices from the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE), this study applies three different unit root tests with two structural breaks to analyse the random walk hypothesis. We find that unit root tests that allow for two structural breaks alone are not able to reject the unit root null; however, a recently developed unit root test that simultaneously accou...
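    The intuition behind random-walk tests can be sketched with a variance-ratio statistic on synthetic price series (this illustrates the general idea, not the structural-break unit root tests applied in the study): under a random walk, the variance of q-period returns is q times the variance of 1-period returns, so the ratio should be close to 1.

```python
import numpy as np

rng = np.random.default_rng(4)

def variance_ratio(prices, q):
    """Lo-MacKinlay-style variance ratio on log prices: VR(q) ~ 1
    under a random walk, below 1 under mean reversion."""
    r1 = np.diff(np.log(prices))
    rq = np.log(prices[q:]) - np.log(prices[:-q])
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

# Hypothetical index levels: one pure random walk, one mean-reverting.
n = 5000
walk = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(n)))
log_p = np.empty(n); log_p[0] = 0.0
for t in range(1, n):                      # AR(1) log price, phi = 0.5
    log_p[t] = 0.5 * log_p[t - 1] + 0.01 * rng.standard_normal()
mean_rev = 100.0 * np.exp(log_p)

print(f"VR(10) random walk:    {variance_ratio(walk, 10):.3f}")
print(f"VR(10) mean-reverting: {variance_ratio(mean_rev, 10):.3f}")
```

    Formal unit root tests, including those allowing for structural breaks, refine this idea with proper null distributions and break dates.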

  18. Dopamine and Reward: The Anhedonia Hypothesis 30 years on

    OpenAIRE

    Wise, Roy A.

    2008-01-01

    The anhedonia hypothesis – that brain dopamine plays a critical role in the subjective pleasure associated with positive rewards – was intended to draw the attention of psychiatrists to the growing evidence that dopamine plays a critical role in the objective reinforcement and incentive motivation associated with food and water, brain stimulation reward, and psychomotor stimulant and opiate reward. The hypothesis called to attention the apparent paradox that neuroleptics, drugs used to treat ...

  19. Energy efficiency

    International Nuclear Information System (INIS)

    2010-01-01

    After a speech by the CEA's (Commissariat a l'Energie Atomique) general administrator presenting energy efficiency as a first-rank challenge for the planet and for France, this publication proposes several contributions: a discussion of the efficiency of nuclear energy, an economic analysis of the value of R&D on fourth-generation fast reactors, discussions of biofuels and of the relationship between energy efficiency and economic competitiveness, and a discussion of solar photovoltaic efficiency.

  20. Testing the null hypothesis: the forgotten legacy of Karl Popper?

    Science.gov (United States)

    Wilkinson, Mick

    2013-01-01

    Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate the new facts on the basis of testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification such that it is always the null hypothesis that is tested. The write up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.

  1. Nonlinear Effects in Piezoelectric Transformers Explained by Thermal-Electric Model Based on a Hypothesis of Self-Heating

    DEFF Research Database (Denmark)

    Andersen, Thomas; Andersen, Michael A. E.; Thomsen, Ole Cornelius

    2012-01-01

    As the trend within power electronics still goes in the direction of higher power density and higher efficiency, it is necessary to develop new topologies and push the limits of existing technology. Piezoelectric transformers are a fast-developing technology to improve efficiency and increase ... is developed to explain nonlinearities such as voltage jumps and voltage saturation, and thereby avoid the complex theory of electroelasticity. The model is based on the hypothesis of self-heating and tested against measurements with good correlation.

  2. Analysis of neural data

    CERN Document Server

    Kass, Robert E; Brown, Emery N

    2014-01-01

    Continual improvements in data collection and processing have had a huge impact on brain research, producing data sets that are often large and complicated. By emphasizing a few fundamental principles, and a handful of ubiquitous techniques, Analysis of Neural Data provides a unified treatment of analytical methods that have become essential for contemporary researchers. Throughout the book ideas are illustrated with more than 100 examples drawn from the literature, ranging from electrophysiology, to neuroimaging, to behavior. By demonstrating the commonality among various statistical approaches the authors provide the crucial tools for gaining knowledge from diverse types of data. Aimed at experimentalists with only high-school level mathematics, as well as computationally-oriented neuroscientists who have limited familiarity with statistics, Analysis of Neural Data serves as both a self-contained introduction and a reference work.

  3. Deep Neural Yodelling

    OpenAIRE

    Pfäffli, Daniel (Autor/in)

    2018-01-01

    Yodel music differs from most other genres by exercising the transition from chest voice to falsetto with an audible glottal stop that is recognised even by laymen. A yodel often consists of a yodeller accompanied by a choir. In Switzerland, a distinction is made between the natural yodel and yodel songs. Today's approaches to music generation with machine learning algorithms are based on neural networks, which are best described as stacked layers of neurons which are connected with neurons...

  4. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely: 1) Networks, Structure Optimisation, Robustness and Stochasticity; 2) Advances in Modelling Biological and Environmental Systems; and 3) Advances in Modelling Social and Economic Systems. The book aims to serve undergraduates, postgraduates and researchers in ANN computational modelling.

  5. Rotation Invariance Neural Network

    OpenAIRE

    Li, Shiyuan

    2017-01-01

    Rotation invariance and translation invariance have great values in image recognition tasks. In this paper, we bring a new architecture in convolutional neural network (CNN) named cyclic convolutional layer to achieve rotation invariance in 2-D symbol recognition. We can also get the position and orientation of the 2-D symbol by the network to achieve detection purpose for multiple non-overlap target. Last but not least, this architecture can achieve one-shot learning in some cases using thos...

  6. Neural Mechanisms of Foraging

    OpenAIRE

    Kolling, Nils; Behrens, Timothy EJ; Mars, Rogier B; Rushworth, Matthew FS

    2012-01-01

    Behavioural economic studies, involving limited numbers of choices, have provided key insights into neural decision-making mechanisms. By contrast, animals’ foraging choices arise in the context of sequences of encounters with prey/food. On each encounter the animal chooses whether to engage, or whether the environment is sufficiently rich that searching elsewhere is merited. The cost of foraging is also critical. We demonstrate that humans can alternate between two modes of choice, comparative decision-ma...

  7. Chaotic annealing with hypothesis test for function optimization in noisy environments

    International Nuclear Information System (INIS)

    Pan Hui; Wang Ling; Liu Bo

    2008-01-01

    As a special mechanism to avoid being trapped in local minima, the ergodicity property of chaos has been used as a novel searching technique for optimization problems, but there has been no research on chaos for optimization in noisy environments. In this paper, the performance of chaotic annealing (CA) for uncertain function optimization is investigated, and a new hybrid approach (namely CAHT) that combines CA and hypothesis testing (HT) is proposed. In CAHT, the merits of CA are applied for thorough exploration and exploitation of the search space, and solution quality is identified reliably by hypothesis testing, which reduces repeated searches to some extent and provides reasonable performance estimates for solutions. Simulation results and comparisons show that chaos is helpful in improving the performance of SA for uncertain function optimization, and that CAHT can further improve searching efficiency, quality and robustness.
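    The two ingredients of such a hybrid can be sketched in a few lines (a toy objective and assumed parameters, not the paper's benchmark setup): the logistic map supplies an ergodic chaotic sequence of candidates, and repeated noisy evaluations are averaged so that candidate comparisons rest on statistical evidence rather than a single lucky sample.

```python
import numpy as np

rng = np.random.default_rng(5)

def noisy_objective(x):
    """Uncertain function: the true value is observed through noise."""
    return (x - 0.7) ** 2 + 0.05 * rng.standard_normal()

# Ergodic chaotic driver: the logistic map x' = 4x(1-x) densely
# covers (0, 1) and supplies the candidate search points.
x, best_x, best_mean = 0.123, None, np.inf
for _ in range(300):
    x = 4.0 * x * (1.0 - x)
    # Hypothesis-test flavour: average repeated noisy evaluations so a
    # candidate is accepted only on reliable evidence.
    mean_val = np.mean([noisy_objective(x) for _ in range(20)])
    if mean_val < best_mean:
        best_mean, best_x = mean_val, x

print(f"best x ~ {best_x:.3f} (true optimum at 0.700)")
```

    A full CAHT scheme would additionally anneal the search and apply a formal statistical test when comparing candidates; the sketch keeps only the chaos-plus-replication core.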

  8. Event-driven simulation of neural population synchronization facilitated by electrical coupling.

    Science.gov (United States)

    Carrillo, Richard R; Ros, Eduardo; Barbour, Boris; Boucheny, Christian; Coenen, Olivier

    2007-02-01

    Most neural communication and processing tasks are driven by spikes. This has enabled the application of event-driven simulation schemes. However, the simulation of spiking neural networks based on complex models that cannot be simplified to analytical expressions (and therefore require numerical calculation) is very time consuming. Here we briefly describe an event-driven simulation scheme that uses pre-calculated table-based neuron characterizations to avoid numerical calculations during a network simulation, allowing the simulation of large-scale neural systems. More concretely, we explain how electrical coupling can be simulated efficiently within this computation scheme, reproducing the synchronization processes observed in detailed simulations of neural populations.
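    The table-based event-driven idea can be sketched as follows (synthetic event times and an assumed membrane time constant): between spike events the membrane state is advanced in a single step using a pre-calculated decay table, instead of numerically integrating the neuron model at every time step.

```python
import numpy as np

# Pre-calculated characterization of the neuron model: a lookup table
# for the membrane decay factor exp(-t/tau), sampled every 0.5 ms.
tau = 20.0                                   # membrane time constant, ms
table_dt = 0.5
decay_table = np.exp(-np.arange(0.0, 200.0, table_dt) / tau)

def decay(elapsed_ms):
    """Table lookup replacing numerical integration of exp(-t/tau)."""
    return decay_table[int(round(elapsed_ms / table_dt))]

v, last_t = 0.0, 0.0
# Incoming spike events: (time in ms, synaptic weight). The simulation
# jumps from event to event; nothing is computed in between.
events = [(3.0, 0.4), (12.0, 0.5), (40.0, 0.6), (41.5, 0.3)]
for t, w in events:
    v = v * decay(t - last_t) + w   # decay since last event, then jump
    last_t = t
    print(f"t = {t:5.1f} ms  v = {v:.3f}")
```

    Real event-driven simulators tabulate far richer characterizations (including the coupling currents the abstract discusses), but the jump-between-events structure is the same.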

  9. Neural basis for generalized quantifier comprehension.

    Science.gov (United States)

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  10. Neural mechanisms mediating degrees of strategic uncertainty.

    Science.gov (United States)

    Nagel, Rosemarie; Brovelli, Andrea; Heinemann, Frank; Coricelli, Giorgio

    2018-01-01

    In social interactions, strategic uncertainty arises when the outcome of one's choice depends on the choices of others. An important question is whether strategic uncertainty can be resolved by assigning subjective probabilities to the counterparts' behavior, as if playing against nature, thus transforming the strategic interaction into a risky (individual) situation. By means of functional magnetic resonance imaging with human participants we tested the hypothesis that choices under strategic uncertainty are supported by the neural circuits mediating choices under individual risk and deliberation in social settings (i.e. strategic thinking). Participants were confronted with risky lotteries and two types of coordination games requiring different degrees of strategic thinking of the kind 'I think that you think that I think, etc.' We found that the brain network mediating risk during lotteries (anterior insula, dorsomedial prefrontal cortex and parietal cortex) is also engaged in the processing of strategic uncertainty in games. In social settings, activity in this network is modulated by the level of strategic thinking, which is reflected in the activity of the dorsomedial and dorsolateral prefrontal cortex. These results suggest that strategic uncertainty is resolved by the interplay between the neural circuits mediating risk and higher-order beliefs (i.e. beliefs about others' beliefs). © The Author(s) (2017). Published by Oxford University Press.

  11. Efficient Load Scheduling Method For Power Management

    Directory of Open Access Journals (Sweden)

    Vijo M Joy

    2015-08-01

    Full Text Available An efficient load-scheduling method to meet varying power-supply needs is presented in this paper. At peak load times the power-generation system can fail because of instability. The traditional remedy is load shedding, in which unnecessary and extra loads are disconnected. The proposed method overcomes this problem by scheduling the load based on the requirement. Because the cost of generating power differs from source to source, artificial neural networks are used for this optimal, economic load-scheduling process: the total load required forms the input of the network, while the power generated by each source and the power losses during transmission form its outputs. Training and programming of the artificial neural network are done using MATLAB.
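    The mapping such a scheduling network is trained to reproduce can be sketched in a few lines. The fragment below is illustrative only (the paper itself uses MATLAB and real generation data); the source names, capacities, costs, and loss fraction are all assumptions:

    ```python
    # Hypothetical target mapping for the scheduling network: given a total
    # demand, dispatch sources cheapest-first up to their capacities, covering
    # demand plus an assumed transmission-loss fraction.
    SOURCES = [
        # (name, capacity_MW, cost_per_MWh) -- assumed figures
        ("hydro",   50.0, 10.0),
        ("thermal", 80.0, 35.0),
        ("diesel",  30.0, 90.0),
    ]

    def economic_dispatch(total_load_mw, loss_fraction=0.05):
        """Allocate generation (demand plus transmission losses) in merit order."""
        required = total_load_mw * (1.0 + loss_fraction)
        schedule = {}
        for name, cap, _cost in sorted(SOURCES, key=lambda s: s[2]):
            take = min(cap, required)     # use the cheaper source first
            schedule[name] = take
            required -= take
        if required > 1e-9:
            raise ValueError("demand exceeds total capacity: shed load")
        return schedule
    ```

    For a 100 MW demand with 5% transmission losses, 105 MW must be generated: the cheap hydro capacity is exhausted first and thermal covers the remainder, leaving the expensive diesel unit idle.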

  12. Neural basis of postural focus effect on concurrent postural and motor tasks: phase-locked electroencephalogram responses.

    Science.gov (United States)

    Huang, Cheng-Ya; Zhao, Chen-Guang; Hwang, Ing-Shiou

    2014-11-01

    Dual-task performance is strongly affected by the direction of attentional focus. This study investigated the neural control of a postural-suprapostural procedure when the postural focus strategy varied. Twelve adults concurrently conducted force-matching and maintained stabilometer stance with visual feedback on ankle movement (visual internal focus, VIF) and on stabilometer movement (visual external focus, VEF). Force-matching error, the dynamics of ankle and stabilometer movements, and event-related potentials (ERPs) were registered. Postural control with VEF led to superior force-matching performance, more complex ankle movement, and stronger kinematic coupling between the ankle and stabilometer movements than postural control with VIF. The postural focus strategy also altered ERP temporal-spatial patterns. Postural control with VEF resulted in a later N1 with less negativity around the bilateral fronto-central and contralateral sensorimotor areas, an earlier P2 deflection with more positivity around the bilateral fronto-central and ipsilateral temporal areas, and a late movement-related potential commencing in the left frontal-central area, as compared with postural control with VIF. The time-frequency distribution of the ERP principal component revealed phase-locked neural oscillations in the delta (1-4 Hz), theta (4-7 Hz), and beta (13-35 Hz) rhythms. The delta and theta rhythms were more pronounced prior to the timing of the P2 positive deflection, and beta rebound was greater after the completion of force-matching, in the VEF condition than in the VIF condition. This study is the first to reveal the neural correlates of the postural focus effect on a postural-suprapostural task. Postural control with VEF takes advantage of efficient task-switching to facilitate autonomous postural response, in agreement with the "constrained-action" hypothesis. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Neural Markers of Performance States in an Olympic Athlete: An EEG Case Study in Air-Pistol Shooting.

    Science.gov (United States)

    di Fronso, Selenia; Robazza, Claudio; Filho, Edson; Bortoli, Laura; Comani, Silvia; Bertollo, Maurizio

    2016-06-01

    This study focused on identifying the neural markers underlying optimal and suboptimal performance experiences of an elite air-pistol shooter, based on the tenets of the multi-action plan (MAP) model. According to the MAP model's assumptions, skilled athletes' cortical patterns are expected to differ among optimal/automatic (Type 1), optimal/controlled (Type 2), suboptimal/controlled (Type 3), and suboptimal/automatic (Type 4) performance experiences. We collected performance (target pistol shots), cognitive-affective (perceived control, accuracy, and hedonic tone), and cortical activity data (32-channel EEG) of an elite shooter. Idiosyncratic descriptive analyses revealed differences in perceived accuracy in regard to optimal and suboptimal performance states. Event-Related Desynchronization/Synchronization analysis supported the notion that optimal-automatic performance experiences (Type 1) were characterized by a global synchronization of cortical arousal associated with the shooting task, whereas suboptimal controlled states (Type 3) were underpinned by high cortical activity levels in the attentional brain network. Results are addressed in light of the neural efficiency hypothesis and reinvestment theory. Perceptual training recommendations aimed at restoring optimal performance levels are discussed. Key points: (1) We investigated the neural markers underlying optimal and suboptimal performance experiences of an elite air-pistol shooter. (2) Optimal/automatic performance is characterized by a global synchronization of cortical activity associated with the shooting task. (3) Suboptimal/controlled performance is characterized by high cortical arousal levels in the attentional brain networks. (4) Focused Event-Related Desynchronization activity during Type 1 performance in frontal midline theta was found, with a clear distribution of Event-Related Synchronization in the frontal and central areas just prior to shot release. (5) Event-Related Desynchronization patterns in low alpha band

  14. Combining BMI stimulation and mathematical modeling for acute stroke recovery and neural repair

    Directory of Open Access Journals (Sweden)

    Sara L Gonzalez Andino

    2011-07-01

    Full Text Available Rehabilitation is a neural-plasticity-exploiting approach that forces undamaged neural circuits to take over the functionality of circuits damaged by stroke. It aims at partial restoration of neural function through circuit remodeling rather than through regeneration of the damaged circuits. The core hypothesis of the present paper is that, in stroke, brain-machine interfaces (BMIs) can be designed to target neural repair instead of rehabilitation. To support this hypothesis we first review existing evidence on the role of endogenous or externally applied electric fields in all processes involved in CNS repair. We then describe our own results to illustrate the neuroprotective and neuroregenerative effects of BMI electrical stimulation on sensory-deprivation-related degenerative processes of the CNS. Finally, we discuss three of the crucial issues involved in the design of neural-repair-oriented BMIs: when to stimulate, where to stimulate, and the particularly important but unsolved issue of how to stimulate. We argue that optimal parameters for electrical stimulation can be determined by studying and modeling the dynamics of the electric fields that naturally emerge in the central and peripheral nervous systems during spontaneous healing, in both experimental animals and human patients. We conclude that a closed-loop BMI that defines optimal stimulation parameters from a priori developed experimental models of the dynamics of spontaneous repair, combined with on-line monitoring of neural activity, might place BMIs as an alternative or complement to the stem-cell transplantation or pharmacological approaches intensively pursued nowadays.

  15. The Neural Basis of Vocal Pitch Imitation in Humans.

    Science.gov (United States)

    Belyk, Michel; Pfordresher, Peter Q; Liotti, Mario; Brown, Steven

    2016-04-01

    Vocal imitation is a phenotype that is unique to humans among all primate species, and so an understanding of its neural basis is critical in explaining the emergence of both speech and song in human evolution. Two principal neural models of vocal imitation have emerged from a consideration of nonhuman animals. One hypothesis suggests that putative mirror neurons in the inferior frontal gyrus pars opercularis of Broca's area may be important for imitation. An alternative hypothesis derived from the study of songbirds suggests that the corticostriate motor pathway performs sensorimotor processes that are specific to vocal imitation. Using fMRI with a sparse event-related sampling design, we investigated the neural basis of vocal imitation in humans by comparing imitative vocal production of pitch sequences with both nonimitative vocal production and pitch discrimination. The strongest difference between these tasks was found in the putamen bilaterally, providing a striking parallel to the role of the analogous region in songbirds. Other areas preferentially activated during imitation included the orofacial motor cortex, Rolandic operculum, and SMA, which together outline the corticostriate motor loop. No differences were seen in the inferior frontal gyrus. The corticostriate system thus appears to be the central pathway for vocal imitation in humans, as predicted from an analogy with songbirds.

  16. Neural Mechanisms and Information Processing in Recognition Systems

    Directory of Open Access Journals (Sweden)

    Mamiko Ozaki

    2014-10-01

    Full Text Available Nestmate recognition is a hallmark of social insects. It is based on the match/mismatch of an identity signal carried by members of the society with that of the perceiving individual. While the behavioral response, amicable or aggressive, is very clear, the neural systems underlying recognition are not fully understood. Here we contrast two alternative hypotheses for the neural mechanisms that are responsible for the perception and information processing in recognition. We focus on recognition via chemical signals, as the common modality in social insects. The first, classical, hypothesis states that upon perception of recognition cues by the sensory system, the information is passed as-is to the antennal lobes and to higher brain centers, where the information is deciphered and compared to a neural template. Match or mismatch information is then transferred to behavior-generating centers, where the appropriate response is elicited. An alternative hypothesis, that of a “pre-filter mechanism”, posits that the decision as to whether to pass the information on to the central nervous system takes place in the peripheral sensory system. We suggest that, through sensory adaptation, only alien signals are passed on to the brain, specifically to an “aggressive-behavior-switching center”, where the response is generated if the signal is above a certain threshold.
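    The "pre-filter" hypothesis lends itself to a compact computational sketch. The toy model below is our illustration, not the authors' model; the adaptation rate and threshold are assumed values:

    ```python
    # Illustrative sketch of the "pre-filter" idea: a peripheral sensor adapts
    # to the colony's own (nestmate) cue profile, so only signals deviating
    # from the adapted baseline by more than a threshold reach the brain.
    class AdaptingSensor:
        def __init__(self, adapt_rate=0.2, threshold=0.3):
            self.baseline = None          # adapted estimate of the nestmate signal
            self.adapt_rate = adapt_rate
            self.threshold = threshold

        def perceive(self, signal):
            """Return True if the signal passes the pre-filter (reads as alien)."""
            if self.baseline is None:     # first exposure sets the baseline
                self.baseline = signal
                return False
            deviation = abs(signal - self.baseline)
            # sensory adaptation: drift toward whatever is being sensed
            self.baseline += self.adapt_rate * (signal - self.baseline)
            return deviation > self.threshold
    ```

    A sensor habituated to the nestmate cue (repeated inputs of 1.0) forwards nothing to the central nervous system, while a deviating cue of 2.0 exceeds the threshold and passes the filter.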

  17. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  18. Escherichia coli growth modeling using neural network | Shamsudin ...

    African Journals Online (AJOL)

    technique that has the ability to predict efficiently and with good performance. Using NARX, a highly accurate model was developed to predict the growth of Escherichia coli (E. coli) based on the pH water parameter. The multiparameter portable sensor and spectrophotometer data were used to build and train the neural network.
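    The defining feature of a NARX model is its regressor: each prediction is made from lagged values of the output itself and of the exogenous input (here, pH). A minimal sketch of that construction follows; the lag orders and data are illustrative, not the authors' code:

    ```python
    # Hypothetical NARX regressor construction: predict growth y(t) from
    # [y(t-1)..y(t-ny), u(t-1)..u(t-nu)], where u is the exogenous pH series.
    def narx_regressors(y, u, ny=2, nu=2):
        """Build (features, target) pairs for training a NARX-style network."""
        lag = max(ny, nu)
        rows, targets = [], []
        for t in range(lag, len(y)):
            rows.append([y[t - i] for i in range(1, ny + 1)] +
                        [u[t - i] for i in range(1, nu + 1)])
            targets.append(y[t])
        return rows, targets
    ```

    With a toy growth series `[0.1, 0.2, 0.4, 0.7]` and pH series `[7.0, 6.9, 6.8, 6.7]`, the first training row is `[0.2, 0.1, 6.9, 7.0]` with target `0.4`; any regression network can then be fitted on these pairs.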

  19. MODELLING OF CONCENTRATION LIMITS BASED ON NEURAL NETWORKS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available We study models for forecasting concentration limits using neural network technology, and describe software implementing these models. The efficiency of the system is demonstrated on experimental material.

  20. Neural networks for predictive control of the mechanism of ...

    African Journals Online (AJOL)

    In this paper, we are interested in studying the control of the orientation of a wind turbine as a means of optimizing its output/input ratio (efficiency). The suggested approach is based on neural predictive control, which is justified by the randomness of the wind on the one hand, and on the other hand by the capacity of ...
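    The predictive-control idea can be sketched as a receding-horizon loop: predict the wind direction over a short horizon, then pick the actuator move that minimizes the predicted misalignment. In the sketch below a naive trend extrapolator stands in for the trained neural predictor; all names, step sizes, and parameters are illustrative assumptions:

    ```python
    # Illustrative receding-horizon (predictive) yaw controller. A real system
    # would use a trained neural network as the wind-direction predictor; here
    # a simple persistence-plus-trend extrapolator stands in for it.
    def predict_wind(history, horizon=3):
        """Naive stand-in predictor: extrapolate the last observed trend (degrees)."""
        trend = history[-1] - history[-2]
        return [history[-1] + trend * (k + 1) for k in range(horizon)]

    def yaw_command(current_yaw, wind_history, max_step=5.0):
        """Pick the bounded yaw step minimizing predicted misalignment cost."""
        forecast = predict_wind(wind_history)
        best_step, best_cost = 0.0, float("inf")
        for step in (-max_step, -max_step / 2, 0.0, max_step / 2, max_step):
            yaw = current_yaw + step
            cost = sum((w - yaw) ** 2 for w in forecast)   # squared misalignment
            if cost < best_cost:
                best_step, best_cost = step, cost
        return current_yaw + best_step
    ```

    If the wind has drifted from 180° to 182°, the predictor forecasts 184°, 186°, 188°, and the controller takes the full 5° step toward the forecast rather than chasing the current reading, which is the essential difference between predictive and purely reactive yaw control.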