WorldWideScience

Sample records for grn inference schemes

  1. GRN2SBML: automated encoding and annotation of inferred gene regulatory networks complying with SBML.

    Science.gov (United States)

    Vlaic, Sebastian; Hoffmann, Bianca; Kupfer, Peter; Weber, Michael; Dräger, Andreas

    2013-09-01

    GRN2SBML automatically encodes gene regulatory networks derived from several inference tools in the Systems Biology Markup Language (SBML). Providing a graphical user interface, the networks can be annotated via the Simple Object Access Protocol (SOAP)-based application programming interface of BioMart Central Portal and the Minimum Information Required In the Annotation of Models (MIRIAM) registry. Additionally, we provide an R package, which processes the output of supported inference algorithms and automatically passes all required parameters to GRN2SBML. GRN2SBML therefore closes a gap in the processing pipeline between the inference of gene regulatory networks and their subsequent analysis, visualization and storage. GRN2SBML is freely available under the GNU Public License version 3 and can be downloaded from http://www.hki-jena.de/index.php/0/2/490. General information on GRN2SBML, examples and tutorials are available at the tool's web page.
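
    As a rough illustration of what such an SBML encoding involves (a minimal sketch, not GRN2SBML's own code), the snippet below builds a two-gene model with the python-libsbml package and represents the link "geneA activates geneB" as a reaction with geneA attached as a modifier; the gene names and this encoding convention are illustrative assumptions.

```python
# Minimal sketch: encode the link "geneA activates geneB" in SBML using
# python-libsbml (an assumption; this is not GRN2SBML's own code).
import libsbml

doc = libsbml.SBMLDocument(3, 1)               # SBML Level 3 Version 1
model = doc.createModel()
model.setId("toy_grn")

comp = model.createCompartment()
comp.setId("cell")
comp.setConstant(True)

for gene in ("geneA", "geneB"):                # one species per gene product
    s = model.createSpecies()
    s.setId(gene)
    s.setCompartment("cell")
    s.setConstant(False)
    s.setBoundaryCondition(False)
    s.setHasOnlySubstanceUnits(False)

# Encode the regulatory edge as a reaction producing geneB, with geneA
# attached as a modifier (one common convention for GRN-in-SBML encodings).
rxn = model.createReaction()
rxn.setId("geneA_activates_geneB")
rxn.setReversible(False)
rxn.setFast(False)
prod = rxn.createProduct()
prod.setSpecies("geneB")
prod.setConstant(False)
mod = rxn.createModifier()
mod.setSpecies("geneA")

print(libsbml.writeSBMLToString(doc))          # serialized SBML document
```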

  2. Compositional rule of inference as an analogical scheme

    Czech Academy of Sciences Publication Activity Database

    Bouchon-Meunier, B.; Mesiar, Radko; Marsala, Ch.; Rifqi, M.

    2003-01-01

    Vol. 26, No. 138 (2003), pp. 53-65 ISSN 0165-0114 Institutional research plan: CEZ:AV0Z1075907 Keywords: analogical scheme * compositional rule of inference * conjunctions Subject RIV: BA - General Mathematics Impact factor: 0.577, year: 2003
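
    The compositional rule of inference combines a fuzzy fact A' with a fuzzy relation R through a sup-T composition, B'(y) = sup_x T(A'(x), R(x, y)). The sketch below evaluates this on discrete universes with the minimum t-norm; the membership values are invented for illustration, and the paper's analysis of alternative conjunctions is not reproduced.

```python
# Sketch of Zadeh's compositional rule of inference on discrete universes:
# B'(y) = sup_x T(A'(x), R(x, y)), here with the minimum t-norm.
# Membership values below are illustrative, not from the paper.
import numpy as np

A_prime = np.array([0.2, 0.9, 0.4])          # fuzzy fact A' over universe X
R = np.array([[1.0, 0.3, 0.0],               # fuzzy relation R over X x Y
              [0.3, 1.0, 0.6],
              [0.0, 0.6, 1.0]])

# sup-min composition: for each y, take max over x of min(A'(x), R(x, y))
B_prime = np.max(np.minimum(A_prime[:, None], R), axis=0)
print(B_prime)                               # inferred fuzzy conclusion B'
```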

  3. A novel power swing blocking scheme using adaptive neuro-fuzzy inference system

    Energy Technology Data Exchange (ETDEWEB)

    Zadeh, Hassan Khorashadi; Li, Zuyi [Illinois Institute of Technology, Department of Electrical and Computer Engineering, 3301 S. Dearborn Street, Chicago, IL 60616 (United States)

    2008-07-15

    A power swing may be caused by any sudden change in the configuration or the loading of an electrical network. During a power swing, the impedance locus moves along an impedance circle, possibly encroaching into the distance relay zone, which may cause unnecessary tripping. To prevent the distance relay from tripping under such conditions, a novel power swing blocking (PSB) scheme is proposed in this paper. The proposed scheme uses an adaptive neuro-fuzzy inference system (ANFIS) to prevent the distance relay from tripping during power swings. The input signals to the ANFIS include the change of positive-sequence impedance, the positive- and negative-sequence currents, and the power swing center voltage. Extensive tests show that the proposed PSB has two distinct advantages over existing schemes. First, it detects various kinds of power swings and thus blocks distance relays during power swings, even if the swings are fast or occur during single-pole open conditions. Second, it clears the blocking if faults occur within the relay trip zone during power swings, even if the faults are high-resistance faults, occur at the power swing center, or occur when the power angle is close to 180°. (author)
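
    As a toy illustration of the fuzzy-inference ingredient (not the trained ANFIS of the paper), the sketch below implements a zero-order Sugeno-type system over two of the abstract's inputs, the rate of change of positive-sequence impedance and the swing-center voltage; all membership functions, scales and rule consequents are invented.

```python
# Toy zero-order Sugeno fuzzy inference for a block/trip decision, in the
# spirit of ANFIS but with hand-picked Gaussian memberships and rule
# consequents (a trained ANFIS would learn these from data).
import math

def gauss(x, c, s):
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def psb_score(dz_dt, v_swing_center):
    # Degree to which impedance change is "slow" (swing-like) vs "abrupt" (fault)
    slow = gauss(dz_dt, 0.5, 0.3)
    abrupt = gauss(dz_dt, 5.0, 2.0)
    # Degree to which swing-center voltage oscillates smoothly vs collapses
    smooth = gauss(v_swing_center, 0.6, 0.25)
    collapsed = gauss(v_swing_center, 0.05, 0.1)

    # Rules: (firing strength, consequent); 1.0 = block relay, 0.0 = allow trip
    rules = [
        (min(slow, smooth), 1.0),      # slow swing, healthy voltage -> block
        (min(abrupt, collapsed), 0.0), # abrupt change, collapsed voltage -> trip
        (min(slow, collapsed), 0.0),   # fault at swing centre -> clear blocking
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules) or 1e-12
    return num / den                   # weighted-average defuzzification

print(psb_score(dz_dt=0.4, v_swing_center=0.65))   # ~1 -> block during swing
print(psb_score(dz_dt=6.0, v_swing_center=0.02))   # ~0 -> permit tripping
```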

  4. Cerebrospinal Fluid Progranulin, but Not Serum Progranulin, Is Reduced in GRN-Negative Frontotemporal Dementia.

    Science.gov (United States)

    Wilke, Carlo; Gillardon, Frank; Deuschle, Christian; Hobert, Markus A; Jansen, Iris E; Metzger, Florian G; Heutink, Peter; Gasser, Thomas; Maetzler, Walter; Blauwendraat, Cornelis; Synofzik, Matthis

    2017-01-01

    Reduced progranulin levels are a hallmark of frontotemporal dementia (FTD) caused by loss-of-function (LoF) mutations in the progranulin gene (GRN). However, alterations of central nervous progranulin expression also occur in neurodegenerative disorders unrelated to GRN mutations, such as Alzheimer's disease. We hypothesised that central nervous progranulin levels are also reduced in GRN-negative FTD. Progranulin levels were determined in both cerebrospinal fluid (CSF) and serum in 75 subjects (37 FTD patients and 38 controls). All FTD patients were assessed by whole-exome sequencing for GRN mutations, yielding a target cohort of 34 patients without pathogenic mutations in GRN (GRN-negative cohort) and 3 GRN mutation carriers (2 LoF variants and 1 novel missense variant). Not only the GRN mutation carriers but also the GRN-negative patients showed decreased CSF levels of progranulin (serum levels in GRN-negative patients were normal). The decreased CSF progranulin levels were unrelated to patients' increased CSF levels of total tau, possibly indicating different destructive neuronal processes within FTD neurodegeneration. The patient with the novel GRN missense variant (c.1117C>T, p.P373S) showed substantially decreased CSF levels of progranulin, comparable to the 2 patients with GRN LoF mutations, suggesting a pathogenic effect of this missense variant. Our results indicate that central nervous progranulin reduction is not restricted to the relatively rare cases of FTD caused by GRN LoF mutations, but also contributes to the more common GRN-negative forms of FTD. Central nervous progranulin reduction might reflect a partially distinct pathogenic mechanism underlying FTD neurodegeneration and is not directly linked to tau alterations. © 2016 S. Karger AG, Basel.

  5. Inference

    DEFF Research Database (Denmark)

    Møller, Jesper

    2010-01-01

    Chapter 9: This contribution concerns statistical inference for parametric models used in stochastic geometry, based on quick and simple simulation-free procedures as well as more comprehensive methods based on a maximum likelihood or Bayesian approach combined with Markov chain Monte Carlo (MCMC) techniques. Due to space limitations the focus is on spatial point processes.

  6. Inference

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text, written by Jesper Møller, Aalborg University, is submitted for the collection ‘Stochastic Geometry: Highlights, Interactions and New Perspectives', edited by Wilfrid S. Kendall and Ilya Molchanov, to be published by Clarendon Press, Oxford, and planned to appear as Section 4.1 with the title ‘Inference'.) This contribution concerns statistical inference for parametric models used in stochastic geometry, based on quick and simple simulation-free procedures as well as more comprehensive methods using Markov chain Monte Carlo (MCMC) simulations. Due to space limitations the focus is on spatial point processes.

  7. New PDE-based methods for image enhancement using SOM and Bayesian inference in various discretization schemes

    International Nuclear Information System (INIS)

    Karras, D A; Mertzios, G B

    2009-01-01

    A novel approach is presented in this paper for improving anisotropic diffusion PDE models based on the Perona–Malik equation. A solution is proposed from an engineering perspective to adaptively estimate the parameters of the regularizing function in this equation. The goal of such an adaptive diffusion scheme is to better preserve edges when the anisotropic diffusion PDE models are applied to image enhancement tasks. The proposed adaptive parameter estimation in the anisotropic diffusion PDE model involves self-organizing maps and Bayesian inference to define edge probabilities accurately. The proposed modifications attempt to capture not only simple edges but also difficult textural edges and incorporate their probability in the anisotropic diffusion model. In the context of the application of PDE models to image processing, such adaptive schemes are closely related to the discrete image representation problem and the investigation of more suitable discretization algorithms using constraints derived from image processing theory. The proposed adaptive anisotropic diffusion model illustrates these concepts when it is numerically approximated by various discretization schemes in a database of magnetic resonance images (MRI), where it is shown to be efficient in image filtering and restoration applications.
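
    For reference, a minimal sketch of the underlying Perona–Malik diffusion with the classic fixed conductance g(d) = 1/(1 + (d/K)^2) follows; the paper's contribution, the adaptive SOM/Bayesian estimation of the edge parameter, is not reproduced, and K, dt and the periodic border handling are simplifying assumptions.

```python
# Sketch of explicit Perona-Malik anisotropic diffusion with the classic
# fixed conductance g(|grad I|) = 1 / (1 + (|grad I|/K)^2). The adaptive,
# SOM/Bayesian parameter estimation of the paper is not reproduced here.
import numpy as np

def perona_malik(img, n_iter=20, K=0.1, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic borders via np.roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance: small across strong edges, ~1 in flat regions
        g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.clip(np.eye(64) + 0.1 * np.random.randn(64, 64), 0, 1)
print(perona_malik(noisy).shape)   # (64, 64): noise smoothed, edges preserved
```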

  8. Response of multiferroic composites inferred from a fast-Fourier-transform-based numerical scheme

    International Nuclear Information System (INIS)

    Brenner, Renald; Bravo-Castillero, Julián

    2010-01-01

    The effective response and the local fields within periodic magneto-electric multiferroic composites are investigated by means of a numerical scheme based on fast Fourier transforms. This computational framework relies on the iterative resolution of coupled series expansions for the magnetic, electric and strain fields. By using an augmented Lagrangian formulation, a simple and robust procedure which makes use of the uncoupled Green operators for the elastic, electrostatic and magnetostatic problems is proposed. Its accuracy is assessed in the cases of laminated and fibrous two-phase composites for which analytical solutions exist.
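
    The coupled magneto-electro-elastic scheme of the paper is involved; as a hedged sketch of the underlying fixed-point FFT idea, the code below solves the analogous periodic scalar-conductivity problem with the basic (non-augmented) scheme, applying the Green operator of a reference medium in Fourier space at each iteration. The reference-medium choice and the problem setup are assumptions.

```python
# Basic fixed-point FFT scheme (Moulinec-Suquet-type) for a periodic
# scalar-conductivity cell problem, a simplified analogue of the coupled
# magneto-electro-elastic scheme described in the abstract.
import numpy as np

def fft_homogenize(k, E=(1.0, 0.0), n_iter=200, tol=1e-8):
    N = k.shape[0]
    k0 = 0.5 * (k.min() + k.max())              # reference medium
    f = np.fft.fftfreq(N)
    xi1, xi2 = np.meshgrid(f, f, indexing="ij")
    xi_sq = xi1 ** 2 + xi2 ** 2
    xi_sq[0, 0] = 1.0                           # avoid 0/0 at the mean mode
    e = np.stack([np.full((N, N), E[0]), np.full((N, N), E[1])])
    for _ in range(n_iter):
        tau = (k - k0) * e                      # polarization field
        tau_h = np.fft.fft2(tau, axes=(1, 2))
        # Green operator of the reference medium: Gamma0 = xi xi^T / (k0 |xi|^2)
        proj = (xi1 * tau_h[0] + xi2 * tau_h[1]) / (k0 * xi_sq)
        e_h = np.stack([-xi1 * proj, -xi2 * proj])
        e_h[0, 0, 0], e_h[1, 0, 0] = E[0] * N * N, E[1] * N * N  # impose mean
        e_new = np.real(np.fft.ifft2(e_h, axes=(1, 2)))
        if np.max(np.abs(e_new - e)) < tol:
            e = e_new
            break
        e = e_new
    return e, (k * e).mean(axis=(1, 2))         # local gradient, effective flux

N = 32
k = np.where(np.arange(N)[None, :] < N // 2, 10.0, 1.0) * np.ones((N, N))
e, q_eff = fft_homogenize(k)                    # two-phase laminate
print(q_eff)        # effective flux under a unit macroscopic gradient in x
```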

  9. DMAC-AN INTEGRATED ENCRYPTION SCHEME WITH RSA FOR AC TO OBSTRUCT INFERENCE ATTACKS

    Directory of Open Access Journals (Sweden)

    R. Jeeva

    2012-12-01

    The indistinguishable-encryption proposal in Randomized Arithmetic Coding (RAC) does not make the system efficient because it does not encrypt messages once and for all: it recomputes the cipher form of every message it sends, which increases both the computational cost and the response time. Floating-point representation in the cipher also increases the difficulty on the decryption side because of loss of precision, and RAC does not handle inference attacks such as man-in-the-middle or third-party attacks. In our system, Dynamic Matrix Arithmetic Coding (DMAC) uses a dynamic session matrix to encrypt the messages. The size of the matrix is deduced from the session key, which contains the IDs of the end users and thereby proves server authentication. Nonce values, represented as the public keys of the opponents encrypted by the session key, are exchanged between the end users to provide mutual authentication. If an adversary tries to compromise either the server or an end user, the other party will not respond and the intrusion is easily detected. By integrating AC with RSA, we have increased its hacking complexity by up to 99%.
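
    The DMAC construction itself is not specified in enough detail in the abstract to implement; as a sketch of the public-key ingredient it integrates, here is textbook RSA with toy parameters (far too small to be secure, and not the DMAC protocol).

```python
# Textbook RSA with toy parameters, illustrating the public-key ingredient
# the abstract integrates with arithmetic coding; NOT the DMAC scheme itself
# and far too small to be secure.
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse): 2753

msg = 42                       # e.g. one byte of a session key
cipher = pow(msg, e, n)        # encrypt with the public key
assert pow(cipher, d, n) == msg
print(cipher, pow(cipher, d, n))
```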

  10. White matter hyperintensities are seen only in GRN mutation carriers in the GENFI cohort

    Directory of Open Access Journals (Sweden)

    Carole H. Sudre

    2017-01-01

    Genetic frontotemporal dementia is most commonly caused by mutations in the progranulin (GRN), microtubule-associated protein tau (MAPT) and chromosome 9 open reading frame 72 (C9orf72) genes. Previous small studies have reported the presence of cerebral white matter hyperintensities (WMH) in genetic FTD but this has not been systematically studied across the different mutations. In this study WMH were assessed in 180 participants from the Genetic FTD Initiative (GENFI) with 3D T1- and T2-weighted magnetic resonance images: 43 symptomatic mutation carriers (7 GRN, 13 MAPT and 23 C9orf72), 61 presymptomatic mutation carriers (25 GRN, 8 MAPT and 28 C9orf72) and 76 mutation-negative non-carrier family members. An automatic detection and quantification algorithm was developed for determining the load, location and appearance of WMH. Significant differences were seen only in the symptomatic GRN group compared with the other groups, with no differences in the MAPT or C9orf72 groups: an increased global load of WMH was seen, with WMH located in the frontal and occipital lobes more so than the parietal lobes, and nearer to the ventricles rather than juxtacortical. Although no differences were seen in the presymptomatic group as a whole, in the GRN cohort only there was an association of increased WMH volume with expected years from symptom onset. The appearance of the WMH was also different in the GRN group compared with the other groups, with the lesions in the GRN group being more similar to each other. The presence of WMH in those with progranulin deficiency may be related to the known role of progranulin in neuroinflammation, although other roles are also proposed, including an effect on blood-brain barrier permeability and the cerebral vasculature. Future studies will be useful to investigate the longitudinal evolution of WMH and their potential use as a biomarker, as well as post-mortem studies investigating the histopathological nature of the lesions.

  11. White matter hyperintensities are seen only in GRN mutation carriers in the GENFI cohort

    NARCIS (Netherlands)

    Sudre, C.H. (Carole H.); M. Bocchetta (Martina); D.M. Cash (David M); D.L. Thomas (David L); Woollacott, I. (Ione); Dick, K.M. (Katrina M.); J.C. van Swieten (John); B. Borroni (Barbara); D. Galimberti (Daniela); M. Masellis (Mario); M.C. Tartaglia (Maria Carmela); J.B. Rowe (James); M.J. Graff (Maud J.L.); F. Tagliavini (Fabrizio); G.B. Frisoni (Giovanni B.); R. Laforce (Robert); E. Finger (Elizabeth); A. De Mendonça (Alexandre); S. Sorbi (Sandro); S. Ourselin (Sebastien); M.J. Cardoso (Manuel Jorge); J.D. Rohrer (Jonathan D)

    2017-01-01

    Genetic frontotemporal dementia is most commonly caused by mutations in the progranulin (GRN), microtubule-associated protein tau (MAPT) and chromosome 9 open reading frame 72 (C9orf72) genes. Previous small studies have reported the presence of cerebral white matter hyperintensities

  12. Using polarimetric radar observations and probabilistic inference to develop the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), a novel microphysical parameterization framework

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.

    2016-12-01

    Microphysical parameterization schemes have reached an impressive level of sophistication: numerous prognostic hydrometeor categories, and either size-resolved (bin) particle size distributions or multiple prognostic moments of the size distribution. Yet, uncertainty in the model representation of microphysical processes, and in the effects of microphysics on numerical simulations of weather, has not improved commensurately with the sophistication of these schemes. We posit that this may be caused by unconstrained assumptions of these schemes, such as ad hoc parameter value choices and structural uncertainties (e.g. the choice of a particular form for the size distribution). We present work on the development and observational constraint of a novel microphysical parameterization approach, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), which seeks to address these sources of uncertainty. Our framework avoids unnecessary a priori assumptions and instead relies on observations to provide probabilistic constraint of the scheme structure and sensitivities to environmental and microphysical conditions. We harness the rich microphysical information content of polarimetric radar observations to develop and constrain BOSS within a Bayesian inference framework using a Markov chain Monte Carlo sampler (see Kumjian et al., this meeting, for details on the development of an associated polarimetric forward operator). Our work shows how knowledge of microphysical processes is provided by polarimetric radar observations of diverse weather conditions, and which processes remain highly uncertain, even after considering observations.
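
    As a generic sketch of the kind of Bayesian constraint described here (not the BOSS code or its polarimetric forward operator), the snippet below uses a random-walk Metropolis sampler to constrain one hypothetical process-rate parameter against synthetic observations.

```python
# Generic Metropolis-Hastings sketch: constrain one hypothetical microphysical
# rate parameter theta against synthetic "radar" observations y = f(theta)+noise.
# The forward model and prior stand in for BOSS's polarimetric operator.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
theta_true = 1.3
y_obs = theta_true * x ** 2 + 0.05 * rng.normal(size=50)

def log_post(theta):
    if theta <= 0:                          # positivity prior on the rate
        return -np.inf
    resid = y_obs - theta * x ** 2
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2   # Gaussian likelihood

samples, theta = [], 0.5
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])             # drop burn-in
print(post.mean(), post.std())              # posterior constraint on theta
```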

  13. CSF protein changes associated with hippocampal sclerosis risk gene variants highlight impact of GRN/PGRN.

    Science.gov (United States)

    Fardo, David W; Katsumata, Yuriko; Kauwe, John S K; Deming, Yuetiva; Harari, Oscar; Cruchaga, Carlos; Nelson, Peter T

    2017-04-01

    Hippocampal sclerosis of aging (HS-Aging) is a common cause of dementia in older adults. We tested the variability in cerebrospinal fluid (CSF) proteins associated with previously identified HS-Aging risk single nucleotide polymorphisms (SNPs). Alzheimer's Disease Neuroimaging Initiative cohort (ADNI; n = 237) data, combining both multiplexed proteomics CSF and genotype data, were used to assess the association between CSF analytes and risk SNPs in four genes: GRN (rs5848), TMEM106B (rs1990622), ABCC9 (rs704180), and KCNMB2 (rs9637454). For controls, non-HS-Aging SNPs in APOE (rs429358/rs7412) and MAPT (rs8070723) were also analyzed against Aβ1-42 and total tau CSF analytes. The GRN risk SNP (rs5848) status correlated with variation in CSF proteins, with the risk allele (T) associated with increased levels of AXL Receptor Tyrosine Kinase (AXL), TNF-Related Apoptosis-Inducing Ligand Receptor 3 (TRAIL-R3), Vascular Cell Adhesion Molecule-1 (VCAM-1) and clusterin (CLU) (all p < 0.05 after Bonferroni correction). The TRAIL-R3 correlation was significant in meta-analysis with an additional dataset (p = 5.05 × 10^-5). Further, the rs5848 SNP status was associated with increased CSF tau protein, a marker of neurodegeneration (p = 0.015). These data are remarkable since this GRN SNP has been found to be a risk factor for multiple types of dementia-related brain pathologies. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Missense mutation in GRN gene affecting RNA splicing and plasma progranulin level in a family affected by frontotemporal lobar degeneration.

    Science.gov (United States)

    Luzzi, Simona; Colleoni, Lara; Corbetta, Paola; Baldinelli, Sara; Fiori, Chiara; Girelli, Francesca; Silvestrini, Mauro; Caroppo, Paola; Giaccone, Giorgio; Tagliavini, Fabrizio; Rossi, Giacomina

    2017-06-01

    The gene coding for progranulin, GRN, is a major gene linked to frontotemporal lobar degeneration. While most pathogenic GRN mutations are null mutations leading to haploinsufficiency, GRN missense mutations do not have an obvious pathogenicity, and only a few have been shown to act through different pathogenetic mechanisms, such as cytoplasmic missorting, protein degradation, and abnormal cleavage by elastase. The aim of this study was to disclose the pathogenetic mechanisms of the GRN A199V missense mutation, which was previously reported not to alter physiological progranulin features but was associated with a reduced plasma progranulin level. After investigating the family pedigree, we performed genetic and biochemical analyses on its members and carried out RNA expression studies. We found that the mutation segregates with the disease and discovered that its pathogenic feature is the alteration of GRN mRNA splicing, actually leading to haploinsufficiency. Thus, when faced with a missense GRN mutation, its pathogenetic effects should be investigated, especially if associated with low plasma progranulin levels, to determine its nature as either a benign polymorphism or a pathogenic mutation. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Entropic Inference

    Science.gov (United States)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops, the Maximum Entropy and Bayesian methods, into a single general inference scheme.
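
    A small worked sketch of the ME update on a discrete space: among all distributions satisfying a moment constraint E_q[f] = F, the one closest to the prior in relative entropy is the exponential tilt q(x) ∝ p(x) exp(λ f(x)), with λ fixed by the constraint. The prior and constraint values below are invented.

```python
# Sketch of Maximum relative Entropy on a discrete space: among all q with
# E_q[f] = F, the one closest to the prior p in relative entropy is the
# exponential tilt q(x) ∝ p(x) exp(lam*f(x)); solve for lam by bisection.
import numpy as np

p = np.array([0.25, 0.25, 0.25, 0.25])   # prior (illustrative)
f = np.array([1.0, 2.0, 3.0, 4.0])       # constrained observable
F = 3.2                                   # required expectation E_q[f]

def moment(lam):
    w = p * np.exp(lam * f)
    q = w / w.sum()
    return q @ f

lo, hi = -50.0, 50.0                      # bracket the multiplier
for _ in range(100):                      # bisection on the monotone moment
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if moment(mid) < F else (lo, mid)

lam = 0.5 * (lo + hi)
q = p * np.exp(lam * f); q /= q.sum()
print(q, q @ f)                           # ME posterior; expectation matches F
```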

  16. Trehalose upregulates progranulin expression in human and mouse models of GRN haploinsufficiency: a novel therapeutic lead to treat frontotemporal dementia.

    Science.gov (United States)

    Holler, Christopher J; Taylor, Georgia; McEachin, Zachary T; Deng, Qiudong; Watkins, William J; Hudson, Kathryn; Easley, Charles A; Hu, William T; Hales, Chadwick M; Rossoll, Wilfried; Bassell, Gary J; Kukar, Thomas

    2016-06-24

    Progranulin (PGRN) is a secreted growth factor important for neuronal survival and may do so, in part, by regulating lysosome homeostasis. Mutations in the PGRN gene (GRN) are a common cause of frontotemporal lobar degeneration (FTLD) and lead to disease through PGRN haploinsufficiency. Additionally, complete loss of PGRN in humans leads to neuronal ceroid lipofuscinosis (NCL), a lysosomal storage disease. Importantly, Grn-/- mouse models recapitulate pathogenic lysosomal features of NCL. Further, GRN variants that decrease PGRN expression increase the risk of developing Alzheimer's disease (AD) and Parkinson's disease (PD). Together these findings demonstrate that insufficient PGRN predisposes neurons to degeneration. Therefore, compounds that increase PGRN levels are potential therapeutics for multiple neurodegenerative diseases. Here, we performed a cell-based screen of a library of known autophagy-lysosome modulators and identified multiple novel activators of a human GRN promoter reporter including several common mTOR inhibitors and an mTOR-independent activator of autophagy, trehalose. Secondary cellular screens identified trehalose, a natural disaccharide, as the most promising lead compound because it increased endogenous PGRN in all cell lines tested and has multiple reported neuroprotective properties. Trehalose dose-dependently increased GRN mRNA as well as intracellular and secreted PGRN in both mouse and human cell lines and this effect was independent of the transcription factor EB (TFEB). Moreover, trehalose rescued PGRN deficiency in human fibroblasts and neurons derived from induced pluripotent stem cells (iPSCs) generated from GRN mutation carriers. Finally, oral administration of trehalose to Grn haploinsufficient mice significantly increased PGRN expression in the brain. This work reports several novel autophagy-lysosome modulators that enhance PGRN expression and identifies trehalose as a promising therapeutic for raising PGRN levels to treat

  17. The unexpected co-occurrence of GRN and MAPT p.A152T in Basque families: Clinical and pathological characteristics.

    Directory of Open Access Journals (Sweden)

    Fermin Moreno

    The co-occurrence of the c.709-1G>A GRN mutation and the p.A152T MAPT variant has been identified in 18 Basque families affected by frontotemporal dementia (FTD). We aimed to investigate the influence of the p.A152T MAPT variant on the clinical and neuropathological features of these Basque GRN families. We compared clinical characteristics of 14 patients who carried the c.709-1G>A GRN mutation (GRN+/A152T-) with 21 patients who carried both the c.709-1G>A GRN mutation and the p.A152T MAPT variant (GRN+/A152T+). Neuropsychological data (n = 17) and plasma progranulin levels (n = 23) were compared between groups, and 7 subjects underwent neuropathological studies. We genotyped six short tandem repeat markers in the two largest families. By analysing linkage disequilibrium decay in the haplotype block we estimated the time when the first ancestor to carry both genetic variants emerged. GRN+/A152T+ and GRN+/A152T- patients shared similar clinical and neuropsychological features and plasma progranulin levels. All were diagnosed with an FTD disorder, including behavioral variant FTD or nonfluent/agrammatic variant primary progressive aphasia, and shared a similar pattern of neuropsychological deficits, predominantly in executive function, memory, and language. All seven participants with available brain autopsies (6 GRN+/A152T+, 1 GRN+/A152T-) showed frontotemporal lobar degeneration with TDP-43 inclusions (type A classification), which is characteristic of GRN carriers. Additionally, all seven showed a mild to moderate tau inclusion burden: five cases lacked β-amyloid pathology and two cases had Alzheimer's pathology. The co-occurrence of both genes within one individual is recent, with the birth of the first GRN+/A152T+ individual estimated to be within the last 50 generations (95% probability). In our sample, the p.A152T MAPT variant does not appear to show a discernible influence on the clinical phenotype of GRN carriers. Whether p.A152T confers a

  18. A novel frameshift GRN mutation results in frontotemporal lobar degeneration with a distinct clinical phenotype in two siblings: case report and literature review.

    Science.gov (United States)

    Hosaka, Takashi; Ishii, Kazuhiro; Miura, Takeshi; Mezaki, Naomi; Kasuga, Kensaku; Ikeuchi, Takeshi; Tamaoka, Akira

    2017-09-15

    Progranulin gene (GRN) mutations are major causes of frontotemporal lobar degeneration. To date, 68 pathogenic GRN mutations have been identified. However, very few of these mutations have been reported in Asians. Moreover, some GRN mutations manifest with familial phenotypic heterogeneity. Here, we present a novel GRN mutation resulting in frontotemporal lobar degeneration with a distinct clinical phenotype, and we review reports of GRN mutations associated with familial phenotypic heterogeneity. We describe the case of a 74-year-old woman with left frontotemporal lobe atrophy who presented with progressive anarthria and non-fluent aphasia. Her brother had been diagnosed with corticobasal syndrome (CBS) with right-hand limb-kinetic apraxia, aphasia, and a similar pattern of brain atrophy. Laboratory blood examinations did not reveal abnormalities that could have caused cognitive dysfunction. In the cerebrospinal fluid, cell counts and protein concentrations were within normal ranges, and concentrations of tau protein and phosphorylated tau protein were also normal. Since similar familial cases due to mutation of GRN and microtubule-associated protein tau gene (MAPT) were reported, we performed genetic analysis. No pathological mutations of MAPT were identified, but we identified a novel GRN frameshift mutation (c.1118_1119delCCinsG: p.Pro373ArgX37) that resulted in progranulin haploinsufficiency. This is the first report of a GRN mutation associated with familial phenotypic heterogeneity in Japan. Literature review of GRN mutations associated with familial phenotypic heterogeneity revealed no tendency of mutation sites. The role of progranulin has been reported in this and other neurodegenerative diseases, and the analysis of GRN mutations may lead to the discovery of a new therapeutic target.

  19. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment

    Directory of Open Access Journals (Sweden)

    Yao Yao

    2016-12-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.

  20. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment.

    Science.gov (United States)

    Yao, Yao; Storme, Veronique; Marchal, Kathleen; Van de Peer, Yves

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.

  1. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment

    Science.gov (United States)

    Yao, Yao; Storme, Veronique; Marchal, Kathleen

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population. PMID:28028477
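
    As a toy illustration of the idea (invented weights and wiring, not the paper's evolved genomes), the sketch below routes sensor "genes" through a sigmoidal regulatory layer to two motor outputs, so the simulated robot steers toward the stronger food cue.

```python
# Toy GRN-style controller: environmental cues set input "gene" activities,
# regulation propagates through a small weighted network via sigmoids, and
# output gene levels drive the two wheel motors. Weights are invented; a
# real controller would evolve them, as in the paper.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Regulatory weights (regulator, target) -> strength; invented for illustration.
W = {
    ("food_left", "hidden"): 1.5, ("food_right", "hidden"): 1.5,
    ("hidden", "motor_left"): 1.0, ("food_right", "motor_left"): 1.0,
    ("hidden", "motor_right"): 1.0, ("food_left", "motor_right"): 1.0,
}

def step(sensors, hidden):
    regulators = dict(sensors, hidden=hidden)
    def expr(target):
        # transcriptional activation: weighted sum of regulator activities
        drive = sum(W.get((src, target), 0.0) * val
                    for src, val in regulators.items())
        return sigmoid(4.0 * (drive - 1.0))     # threshold-like gene response
    return expr("hidden"), (expr("motor_left"), expr("motor_right"))

hidden = 0.0
for cue in ({"food_left": 1.0, "food_right": 0.0},
            {"food_left": 0.0, "food_right": 1.0}):
    hidden, motors = step(cue, hidden)
    print(cue, "->", motors)   # the contralateral motor spins faster,
                               # steering the robot toward the food cue
```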

  2. Single-cell and coupled GRN models of cell patterning in the Arabidopsis thaliana root stem cell niche

    Directory of Open Access Journals (Sweden)

    Alvarez-Buylla Elena R

    2010-10-01

    Background: Recent experimental work has uncovered some of the genetic components required to maintain the Arabidopsis thaliana root stem cell niche (SCN) and its structure. Two main pathways are involved. One pathway depends on the genes SHORTROOT and SCARECROW and the other depends on the PLETHORA genes, which have been proposed to constitute the auxin readouts. Recent evidence suggests that a regulatory circuit, composed of WOX5 and CLE40, also contributes to SCN maintenance. Yet, we still do not understand how the niche is dynamically maintained and patterned, or whether the uncovered molecular components are sufficient to recover the observed gene expression configurations that characterize the cell types within the root SCN. Mathematical and computational tools have proven useful in understanding the dynamics of cell differentiation. Hence, to further explore root SCN patterning, we integrated available experimental data into dynamic gene regulatory network (GRN) models and addressed whether these are sufficient to attain the observed gene expression configurations in the root SCN in a robust and autonomous manner. Results: We found that an SCN GRN model based only on experimental data did not reproduce the configurations observed within the root SCN. We developed several alternative GRN models that recover these expected stable gene configurations. Such models incorporate a few additional components and interactions beyond those that have been uncovered. The recovered configurations are stable to perturbations, and the models are able to recover the observed gene expression profiles of almost all the mutants described so far. However, the robustness of the postulated GRNs is not as high as that of other previously studied networks. Conclusions: These models are the first published approximations for a dynamic mechanism of the A. thaliana root SCN cellular patterning. Our model is useful to formally show that the data now available are not
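
    As a sketch of the modelling approach (synchronous Boolean GRN dynamics whose fixed points are candidate cell-type configurations), the snippet below enumerates the fixed points of a toy four-gene network; the update rules are hypothetical simplifications, not the paper's validated model.

```python
# Sketch of the modelling approach: a synchronous Boolean GRN whose fixed
# points are candidate cell-type expression configurations. The update rules
# below are hypothetical toy simplifications, not the paper's validated model.
from itertools import product

genes = ("SHR", "SCR", "WOX5", "CLE40")

def update(s):
    shr, scr, wox5, cle40 = s
    return (
        shr,                                # SHR treated as a constant input
        shr and scr or shr and not cle40,   # toy SCR rule
        (shr and scr) and not cle40,        # toy WOX5 rule: repressed by CLE40
        not wox5,                           # toy CLE40 rule: repressed by WOX5
    )

fixed_points = [s for s in product((0, 1), repeat=4)
                if tuple(int(bool(v)) for v in update(s)) == s]
for fp in fixed_points:
    print(dict(zip(genes, fp)))    # each fixed point ~ one stable cell state
```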

  3. Hybrid Optical Inference Machines

    Science.gov (United States)

    1991-09-27

    ... with labels. Now, a set of facts can be generated in the dyadic form "u, R 1,2" and can ... Eichmann and Caulfield [19] consider the same type of encoding schemes. These architectures are based primarily on optical inner ... [19] G. Eichmann and H. J. Caulfield, "Optical Learning (Inference)" ...

  4. Defining the association of TMEM106B variants among frontotemporal lobar degeneration patients with GRN mutations and C9orf72 repeat expansions.

    Science.gov (United States)

    Lattante, Serena; Le Ber, Isabelle; Galimberti, Daniela; Serpente, Maria; Rivaud-Péchoux, Sophie; Camuzat, Agnès; Clot, Fabienne; Fenoglio, Chiara; Scarpini, Elio; Brice, Alexis; Kabashi, Edor

    2014-11-01

    TMEM106B was identified as a risk factor for frontotemporal lobar degeneration (FTD) with TAR DNA-binding protein 43 kDa inclusions. It has been reported that variants in this gene are genetic modifiers of the disease and that this association is stronger in patients carrying a GRN mutation or a pathogenic expansion in the chromosome 9 open reading frame 72 (C9orf72) gene. Here, we investigated the contribution of TMEM106B polymorphisms in cohorts of FTD and FTD with amyotrophic lateral sclerosis patients from France and Italy. Patients carrying the C9orf72 expansion (n = 145) and patients with GRN mutations (n = 76) were compared with a group of FTD patients (n = 384) negative for mutations and to a group of healthy controls (n = 552). In our cohorts, the presence of the C9orf72 expansion did not correlate with TMEM106B genotypes, but the association was very strong in individuals with pathogenic GRN mutations (p = 9.54 × 10^-6). Our data suggest that TMEM106B genotypes differ in FTD patient cohorts and strengthen the protective role of TMEM106B in GRN carriers. Further studies are needed to determine whether TMEM106B polymorphisms are associated with other genetic causes of FTD, including C9orf72 repeat expansions. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Profiling of Ubiquitination Pathway Genes in Peripheral Cells from Patients with Frontotemporal Dementia due to C9ORF72 and GRN Mutations

    Directory of Open Access Journals (Sweden)

    Maria Serpente

    2015-01-01

    We analysed the expression levels of 84 key genes involved in the regulated degradation of cellular protein by the ubiquitin-proteasome system in peripheral cells from patients with frontotemporal dementia (FTD) due to C9ORF72 and GRN mutations, as compared with sporadic FTD and age-matched controls. A SABiosciences PCR array was used to investigate the transcription profile in a discovery population consisting of six patients each in the C9ORF72, GRN, sporadic FTD and age-matched control groups. A generalized down-regulation of gene expression compared with controls was observed in C9ORF72 expansion carriers and sporadic FTD patients. In particular, in both groups, four genes, UBE2I, UBE2Q1, UBE2E1 and UBE2N, were down-regulated at a statistically significant level (p < 0.05). All of them encode members of the E2 ubiquitin-conjugating enzyme family. In GRN mutation carriers, no statistically significant deregulation of ubiquitination pathway genes was observed, except for the UBE2Z gene, which displays E2 ubiquitin-conjugating enzyme activity and was found to be significantly up-regulated (p = 0.006). These preliminary results suggest that the proteasomal degradation pathway plays a role in the pathogenesis of FTD associated with TDP-43 pathology, although different proteins are altered in carriers of GRN mutations as compared with carriers of the C9ORF72 expansion.

  6. Progranulin plasma levels predict the presence of GRN mutations in asymptomatic subjects and do not correlate with brain atrophy: Results from the GENFI study

    NARCIS (Netherlands)

    D. Galimberti (Daniela); Fumagalli, G.G. (Giorgio G.); C. Fenoglio (Chiara); Cioffi, S.M.G. (Sara M.G.); A. Arighi (Andrea); M. Serpente (Maria); B. Borroni (Barbara); A. Padovani (Alessandro); F. Tagliavini (Fabrizio); M. Masellis (Mario); M.C. Tartaglia (Maria Carmela); J.C. van Swieten (John); L.H.H. Meeter (Lieke H.H.); C. Graff (Caroline); A. De Mendonça (Alexandre); M. Bocchetta (Martina); J.D. Rohrer (Jonathan Daniel); Scarpini, E. (Elio)

    2017-01-01

    We investigated whether progranulin plasma levels are predictors of the presence of progranulin gene (GRN) null mutations or of the development of symptoms in asymptomatic at risk members participating in the Genetic Frontotemporal Dementia Initiative, including 19 patients, 64

  7. Active inference and learning.

    Science.gov (United States)

    Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O Doherty, John; Pezzulo, Giovanni

    2016-09-01

    This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Assessment of network inference methods: how to cope with an underdetermined problem.

    Directory of Open Access Journals (Sweden)

    Caroline Siegenthaler

    The inference of biological networks is an active research area in the field of systems biology. The number of network inference algorithms has grown tremendously in the last decade, underlining the importance of a fair assessment and comparison among these methods. Current assessments of the performance of an inference method typically involve the application of the algorithm to benchmark datasets and the comparison of the network predictions against the gold standard or reference networks. While the network inference problem is often deemed underdetermined, implying that the inference problem does not have a (unique) solution, the consequences of such an attribute have not been rigorously taken into consideration. Here, we propose a new procedure for assessing the performance of gene regulatory network (GRN) inference methods. The procedure takes into account the underdetermined nature of the inference problem, in which gene regulatory interactions that are inferable or non-inferable are determined based on causal inference. The assessment relies on a new definition of the confusion matrix, which excludes errors associated with non-inferable gene regulations. For demonstration purposes, the proposed assessment procedure is applied to the DREAM 4 In Silico Network Challenge. The results show a marked change in the ranking of participating methods when taking network inferability into account.
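
    The core bookkeeping change is small: gene pairs flagged as non-inferable are excluded before the confusion matrix is filled, so an inference method is not charged for errors it could not avoid. A minimal sketch with invented edge sets:

```python
# Sketch of the paper's core bookkeeping: drop gene pairs that causal analysis
# flags as non-inferable before filling the confusion matrix. Edge sets here
# are invented for illustration.
gold = {("g1", "g2"), ("g2", "g3"), ("g3", "g4")}          # reference network
predicted = {("g1", "g2"), ("g3", "g4"), ("g4", "g1")}     # inferred network
non_inferable = {("g2", "g3")}          # e.g. indistinguishable from the data
all_pairs = {(a, b) for a in "g1 g2 g3 g4".split()
             for b in "g1 g2 g3 g4".split() if a != b}

evaluable = all_pairs - non_inferable   # errors here should not be charged
tp = len(predicted & gold & evaluable)
fp = len((predicted - gold) & evaluable)
fn = len((gold - predicted) & evaluable)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)                # 2/3 and 1.0 for these toy sets
```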

  9. CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regularity networks.

    Science.gov (United States)

    Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming

    2014-11-30

    Prediction of gene regulatory networks (GRN) from expression data is a challenging task. Many methods have been developed to address this challenge, ranging from supervised to unsupervised approaches, and the most promising methods are based on support vector machines (SVM). There is a need for a comprehensive analysis of the prediction accuracy of supervised SVM methods using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for the inference of GRNs. Using CompareSVM, we investigated and evaluated different SVM kernel methods on simulated microarray datasets of different sizes in detail. The results obtained from CompareSVM showed that the accuracy of an inference method depends upon the nature of the experimental condition and the size of the network. For networks with a smaller number of nodes, the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
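
    A sketch of supervised SVM-based GRN inference in this local-model style: for a given transcription factor, train a classifier on expression profiles of known targets and non-targets, then score candidate genes. Data are synthetic and scikit-learn is assumed; this is not CompareSVM's own code.

```python
# Sketch of supervised SVM-based GRN inference: for one transcription factor,
# learn from expression profiles of known targets vs. known non-targets, then
# score unlabelled genes. Data are synthetic; scikit-learn is assumed.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_conditions = 30
tf = rng.normal(size=n_conditions)                  # TF expression profile

# known targets co-vary with the TF; non-targets do not
targets = tf * rng.uniform(0.8, 1.2, (20, 1)) + 0.1 * rng.normal(size=(20, n_conditions))
non_targets = rng.normal(size=(20, n_conditions))
X = np.vstack([targets, non_targets])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="rbf", probability=True).fit(X, y)  # Gaussian kernel

candidate = tf * 1.05 + 0.1 * rng.normal(size=n_conditions)
print(clf.predict_proba(candidate.reshape(1, -1))[0, 1])  # P(edge TF -> gene)
```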

  10. Potential genetic modifiers of disease risk and age at onset in patients with frontotemporal lobar degeneration and GRN mutations: a genome-wide association study.

    Science.gov (United States)

    Pottier, Cyril; Zhou, Xiaolai; Perkerson, Ralph B; Baker, Matt; Jenkins, Gregory D; Serie, Daniel J; Ghidoni, Roberta; Benussi, Luisa; Binetti, Giuliano; López de Munain, Adolfo; Zulaica, Miren; Moreno, Fermin; Le Ber, Isabelle; Pasquier, Florence; Hannequin, Didier; Sánchez-Valle, Raquel; Antonell, Anna; Lladó, Albert; Parsons, Tammee M; Finch, NiCole A; Finger, Elizabeth C; Lippa, Carol F; Huey, Edward D; Neumann, Manuela; Heutink, Peter; Synofzik, Matthis; Wilke, Carlo; Rissman, Robert A; Slawek, Jaroslaw; Sitek, Emilia; Johannsen, Peter; Nielsen, Jørgen E; Ren, Yingxue; van Blitterswijk, Marka; DeJesus-Hernandez, Mariely; Christopher, Elizabeth; Murray, Melissa E; Bieniek, Kevin F; Evers, Bret M; Ferrari, Camilla; Rollinson, Sara; Richardson, Anna; Scarpini, Elio; Fumagalli, Giorgio G; Padovani, Alessandro; Hardy, John; Momeni, Parastoo; Ferrari, Raffaele; Frangipane, Francesca; Maletta, Raffaele; Anfossi, Maria; Gallo, Maura; Petrucelli, Leonard; Suh, EunRan; Lopez, Oscar L; Wong, Tsz H; van Rooij, Jeroen G J; Seelaar, Harro; Mead, Simon; Caselli, Richard J; Reiman, Eric M; Noel Sabbagh, Marwan; Kjolby, Mads; Nykjaer, Anders; Karydas, Anna M; Boxer, Adam L; Grinberg, Lea T; Grafman, Jordan; Spina, Salvatore; Oblak, Adrian; Mesulam, M-Marsel; Weintraub, Sandra; Geula, Changiz; Hodges, John R; Piguet, Olivier; Brooks, William S; Irwin, David J; Trojanowski, John Q; Lee, Edward B; Josephs, Keith A; Parisi, Joseph E; Ertekin-Taner, Nilüfer; Knopman, David S; Nacmias, Benedetta; Piaceri, Irene; Bagnoli, Silvia; Sorbi, Sandro; Gearing, Marla; Glass, Jonathan; Beach, Thomas G; Black, Sandra E; Masellis, Mario; Rogaeva, Ekaterina; Vonsattel, Jean-Paul; Honig, Lawrence S; Kofler, Julia; Bruni, Amalia C; Snowden, Julie; Mann, David; Pickering-Brown, Stuart; Diehl-Schmid, Janine; Winkelmann, Juliane; Galimberti, Daniela; Graff, Caroline; Öijerstedt, Linn; Troakes, Claire; Al-Sarraj, Safa; Cruchaga, Carlos; Cairns, Nigel J; Rohrer, Jonathan D; Halliday, Glenda M; Kwok, John B; van Swieten, John C; White, Charles L; Ghetti, Bernardino; Murell, Jill R; Mackenzie, Ian R A; Hsiung, Ging-Yuek R; Borroni, Barbara; Rossi, Giacomina; Tagliavini, Fabrizio; Wszolek, Zbigniew K; Petersen, Ronald C; Bigio, Eileen H; Grossman, Murray; Van Deerlin, Vivianna M; Seeley, William W; Miller, Bruce L; Graff-Radford, Neill R; Boeve, Bradley F; Dickson, Dennis W; Biernacka, Joanna M; Rademakers, Rosa

    2018-06-01

    Loss-of-function mutations in GRN cause frontotemporal lobar degeneration (FTLD). Patients with GRN mutations present with a uniform subtype of TAR DNA-binding protein 43 (TDP-43) pathology at autopsy (FTLD-TDP type A); however, age at onset and clinical presentation are variable, even within families. We aimed to identify potential genetic modifiers of disease onset and disease risk in GRN mutation carriers. The study was done in three stages: a discovery stage, a replication stage, and a meta-analysis of the discovery and replication data. In the discovery stage, genome-wide logistic and linear regression analyses were done to test the association of genetic variants with disease risk (case or control status) and age at onset in patients with a GRN mutation and controls free of neurodegenerative disorders. Suggestive loci (p < 1 × 10^-5) were genotyped in a replication cohort of patients and controls, followed by a meta-analysis. The effect of genome-wide significant variants at the GFRA2 locus on expression of GFRA2 was assessed using mRNA expression studies in cerebellar tissue samples from the Mayo Clinic brain bank. The effect of the GFRA2 locus on progranulin concentrations was studied using previously generated ELISA-based expression data. Co-immunoprecipitation experiments in HEK293T cells were done to test for a direct interaction between GFRA2 and progranulin. Individuals were enrolled in the current study between Sept 16, 2014, and Oct 5, 2017. After quality control measures, statistical analyses in the discovery stage included 382 unrelated symptomatic GRN mutation carriers and 1146 controls free of neurodegenerative disorders collected from 34 research centres located in the USA, Canada, Australia, and Europe. In the replication stage, 210 patients (67 symptomatic GRN mutation carriers and 143 patients with FTLD without GRN mutations pathologically confirmed as FTLD-TDP type A) and 1798 controls free of neurodegenerative diseases were recruited

  11. Distributional Inference

    NARCIS (Netherlands)

    Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.

    1995-01-01

    The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is

  12. Entropic Inference

    OpenAIRE

    Caticha, Ariel

    2010-01-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEn...

  13. Perceptual inference.

    Science.gov (United States)

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Progranulin plasma levels predict the presence of GRN mutations in asymptomatic subjects and do not correlate with brain atrophy: results from the GENFI study.

    Science.gov (United States)

    Galimberti, Daniela; Fumagalli, Giorgio G; Fenoglio, Chiara; Cioffi, Sara M G; Arighi, Andrea; Serpente, Maria; Borroni, Barbara; Padovani, Alessandro; Tagliavini, Fabrizio; Masellis, Mario; Tartaglia, Maria Carmela; van Swieten, John; Meeter, Lieke; Graff, Caroline; de Mendonça, Alexandre; Bocchetta, Martina; Rohrer, Jonathan D; Scarpini, Elio

    2018-02-01

    We investigated whether progranulin plasma levels are predictors of the presence of progranulin gene (GRN) null mutations or of the development of symptoms in asymptomatic at-risk members participating in the Genetic Frontotemporal Dementia Initiative, including 19 patients, 64 asymptomatic carriers, and 77 noncarriers. In addition, we evaluated a possible role of TMEM106B rs1990622 as a genetic modifier and correlated progranulin plasma levels and gray-matter atrophy. Mean ± SD plasma progranulin levels in patients and asymptomatic carriers were significantly decreased compared with noncarriers (30.5 ± 13.0 and 27.7 ± 7.5 versus 99.6 ± 24.8 ng/mL, p …). With a cutoff of 61.55 ng/mL, the test had a sensitivity of 98.8% and a specificity of 97.5% in predicting the presence of a mutation, independent of symptoms. No correlations were found between progranulin plasma levels and age, years from average age at onset in each family, or TMEM106B rs1990622 genotype (p > 0.05). Plasma progranulin levels did not correlate with brain atrophy. Plasma progranulin levels predict the presence of GRN null mutations independent of proximity to symptoms and brain atrophy. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  15. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  16. A Bayesian Framework That Integrates Heterogeneous Data for Inferring Gene Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Santra, Tapesh, E-mail: tapesh.santra@ucd.ie [Systems Biology Ireland, University College Dublin, Dublin (Ireland)

    2014-05-20

    Reconstruction of gene regulatory networks (GRNs) from experimental data is a fundamental challenge in systems biology. A number of computational approaches have been developed to infer GRNs from mRNA expression profiles. However, expression profiles alone are proving to be insufficient for inferring GRN topologies with reasonable accuracy. Recently, it has been shown that integration of external data sources (such as gene and protein sequence information, gene ontology data, protein–protein interactions) with mRNA expression profiles may increase the reliability of the inference process. Here, I propose a new approach that incorporates transcription factor binding sites (TFBS) and physical protein interactions (PPI) among transcription factors (TFs) in a Bayesian variable selection (BVS) algorithm which can infer GRNs from mRNA expression profiles subjected to genetic perturbations. Using real experimental data, I show that the integration of TFBS and PPI data with mRNA expression profiles leads to significantly more accurate networks than those inferred from expression profiles alone. Additionally, the performance of the proposed algorithm is compared with a series of least absolute shrinkage and selection operator (LASSO) regression-based network inference methods that can also incorporate prior knowledge in the inference framework. The results of this comparison suggest that BVS can outperform LASSO regression-based methods in some circumstances.
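
    A sketch of the Bayesian variable selection idea for a single target gene: enumerate small regulator subsets, score each with a BIC-based marginal-likelihood proxy, and weight subsets by edge priors that are raised where TFBS/PPI evidence supports a regulator. The data, priors and scoring shortcut are assumptions, not the paper's implementation.

```python
# Sketch of Bayesian variable selection for one target gene: enumerate small
# regulator subsets, score each by a BIC-based marginal-likelihood proxy, and
# weight subsets with priors boosted for TFBS/PPI-supported regulators.
# Everything here is synthetic; it is not the paper's implementation.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, n_tf = 40, 5
X = rng.normal(size=(n, n_tf))                 # TF expression across samples
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.3 * rng.normal(size=n)  # target gene

prior_edge = np.array([0.5, 0.1, 0.1, 0.5, 0.1])  # raised where TFBS/PPI agree

def log_score(subset):
    if subset:
        Xs = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
    else:
        rss = np.sum(y ** 2)
    bic = n * np.log(rss / n) + len(subset) * np.log(n)
    log_prior = sum(np.log(prior_edge[j]) if j in subset else
                    np.log(1 - prior_edge[j]) for j in range(n_tf))
    return -0.5 * bic + log_prior

subsets = [list(s) for k in range(3)
           for s in itertools.combinations(range(n_tf), k)]
scores = np.array([log_score(s) for s in subsets])
w = np.exp(scores - scores.max()); w /= w.sum()
print("posterior-best regulators:", subsets[int(np.argmax(w))])  # expect [0, 3]
```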

  17. A Bayesian Framework That Integrates Heterogeneous Data for Inferring Gene Regulatory Networks

    International Nuclear Information System (INIS)

    Santra, Tapesh

    2014-01-01

    Reconstruction of gene regulatory networks (GRNs) from experimental data is a fundamental challenge in systems biology. A number of computational approaches have been developed to infer GRNs from mRNA expression profiles. However, expression profiles alone are proving to be insufficient for inferring GRN topologies with reasonable accuracy. Recently, it has been shown that integration of external data sources (such as gene and protein sequence information, gene ontology data, protein–protein interactions) with mRNA expression profiles may increase the reliability of the inference process. Here, I propose a new approach that incorporates transcription factor binding sites (TFBS) and physical protein interactions (PPI) among transcription factors (TFs) in a Bayesian variable selection (BVS) algorithm which can infer GRNs from mRNA expression profiles subjected to genetic perturbations. Using real experimental data, I show that the integration of TFBS and PPI data with mRNA expression profiles leads to significantly more accurate networks than those inferred from expression profiles alone. Additionally, the performance of the proposed algorithm is compared with a series of least absolute shrinkage and selection operator (LASSO) regression-based network inference methods that can also incorporate prior knowledge in the inference framework. The results of this comparison suggest that BVS can outperform LASSO regression-based methods in some circumstances.

  18. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  19. Tradable schemes

    NARCIS (Netherlands)

    J.K. Hoogland (Jiri); C.D.D. Neumann

    2000-01-01

    In this article we present a new approach to the numerical valuation of derivative securities. The method is based on our previous work where we formulated the theory of pricing in terms of tradables. The basic idea is to fit a finite difference scheme to exact solutions of the pricing

  20. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from the Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), the experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  1. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
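
    The splitting idea can be sketched in a few lines; this simplified Python example assumes a linear ODE model dx/dt = Ax, partitions the genes into modules, and fits each subnetwork in a parallel worker (gathered synchronously here, unlike the paper's asynchronous communication). All sizes and the network itself are illustrative.

        # Sketch: split-and-parallel GRN fitting (simplified, synchronous variant).
        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        rng = np.random.default_rng(1)
        n_genes, n_times, dt = 12, 50, 0.1
        A_true = np.zeros((n_genes, n_genes))
        A_true[np.arange(1, n_genes), np.arange(n_genes - 1)] = 0.8   # a sparse chain network

        x = np.zeros((n_times, n_genes)); x[0] = rng.normal(size=n_genes)
        for t in range(n_times - 1):                                   # Euler simulation of dx/dt = A x
            x[t + 1] = x[t] + dt * (x[t] @ A_true.T) + 0.01 * rng.normal(size=n_genes)

        dxdt = (x[1:] - x[:-1]) / dt                                   # finite-difference derivatives

        def fit_module(genes):
            """Regress each module gene's derivative on all expression profiles."""
            coef, *_ = np.linalg.lstsq(x[:-1], dxdt[:, genes], rcond=None)
            return genes, coef.T                                       # rows of the estimated A

        modules = np.array_split(np.arange(n_genes), 4)                # 4 subnetworks
        A_est = np.zeros_like(A_true)
        with ThreadPoolExecutor() as pool:
            for genes, rows in pool.map(fit_module, modules):
                A_est[genes] = rows

        print("max error in recovered coefficients:", np.abs(A_est - A_true).max())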

  2. Inferring regulatory networks from expression data using tree-based methods.

    Directory of Open Access Journals (Sweden)

    Vân Anh Huynh-Thu

    2010-09-01

    One of the pressing open problems of computational systems biology is the elucidation of the topology of genetic regulatory networks (GRNs) using high-throughput genomic data, in particular microarray gene expression data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) challenge aims to evaluate the success of GRN inference algorithms on benchmarks of simulated data. In this article, we present GENIE3, a new algorithm for the inference of GRNs that was the best performer in the DREAM4 In Silico Multifactorial challenge. GENIE3 decomposes the prediction of a regulatory network between p genes into p different regression problems. In each of the regression problems, the expression pattern of one of the genes (the target gene) is predicted from the expression patterns of all the other genes (the input genes), using the tree-based ensemble methods Random Forests or Extra-Trees. The importance of an input gene in the prediction of the target gene expression pattern is taken as an indication of a putative regulatory link. Putative regulatory links are then aggregated over all genes to provide a ranking of interactions from which the whole network is reconstructed. In addition to performing well on the DREAM4 In Silico Multifactorial challenge simulated data, we show that GENIE3 compares favorably with existing algorithms to decipher the genetic regulatory network of Escherichia coli. It makes no assumptions about the nature of gene regulation, can deal with combinatorial and non-linear interactions, produces directed GRNs, and is fast and scalable. In conclusion, we propose a new algorithm for GRN inference that performs well on both synthetic and real gene expression data. The algorithm, based on feature selection with tree-based ensemble methods, is simple and generic, making it adaptable to other types of genomic data and interactions.
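
    A compact sketch of the GENIE3 recipe, assuming scikit-learn is available: one random-forest regression per target gene, with feature importances aggregated into a ranked edge list. The data below are synthetic.

        # Sketch of the GENIE3 idea: per-target random forests, importances as link weights.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)
        n_samples, n_genes = 100, 5
        expr = rng.normal(size=(n_samples, n_genes))
        expr[:, 3] = 0.8 * expr[:, 0] - 0.6 * expr[:, 1] + 0.1 * rng.normal(size=n_samples)

        weights = np.zeros((n_genes, n_genes))          # weights[i, j]: evidence for gene i -> gene j
        for target in range(n_genes):
            inputs = [g for g in range(n_genes) if g != target]
            forest = RandomForestRegressor(n_estimators=200, random_state=0)
            forest.fit(expr[:, inputs], expr[:, target])
            weights[inputs, target] = forest.feature_importances_

        order = np.argsort(weights, axis=None)[::-1]    # rank all candidate edges
        edges = list(zip(*np.unravel_index(order, weights.shape)))
        print("top putative edges (regulator, target):", edges[:3])   # likely includes (0, 3) and (1, 3)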

  3. Isolation and Characterization of Two Lytic Bacteriophages, φSt2 and φGrn1; Phage Therapy Application for Biological Control of Vibrio alginolyticus in Aquaculture Live Feeds.

    Directory of Open Access Journals (Sweden)

    Panos G Kalatzis

    Bacterial infections are a serious problem in aquaculture since they can result in massive mortalities in farmed fish and invertebrates. Vibriosis is one of the most common diseases in marine aquaculture hatcheries and its causative agents are bacteria of the genus Vibrio, mostly entering larval rearing water through live feeds such as Artemia and rotifers. The pathogenic Vibrio alginolyticus strain V1, isolated during a vibriosis outbreak in cultured seabream, Sparus aurata, was used as host to isolate and characterize the two novel bacteriophages φSt2 and φGrn1 for phage therapy application. In vitro cell lysis experiments were performed against the bacterial host V. alginolyticus strain V1 but also against 12 presumptive Vibrio strains originating from live prey Artemia salina cultures, indicating the strong lytic efficacy of the two phages. In vivo administration of the phage cocktail, φSt2 and φGrn1, at MOI = 100 directly on live prey A. salina cultures led to a 93% decrease of the presumptive Vibrio population after 4 h of treatment. The current study suggests that administration of φSt2 and φGrn1 to live prey could selectively reduce the Vibrio load in fish hatcheries. Innovative and environmentally friendly solutions against bacterial diseases are more than necessary, and phage therapy is one of them.

  4. Evaluation of artificial time series microarray data for dynamic gene regulatory network inference.

    Science.gov (United States)

    Xenitidis, P; Seimenis, I; Kakolyris, S; Adamopoulos, A

    2017-08-07

    High-throughput technologies such as microarrays are widely used in the inference of gene regulatory networks (GRNs). We focused on time series data since we are interested in the dynamics of GRNs and the identification of dynamic networks. We evaluated the amount of information that exists in artificial time series microarray data and the ability of an inference process to produce accurate models based on them. We used dynamic artificial gene regulatory networks in order to create artificial microarray data. Key features that characterize microarray data, such as the time separation of directly triggered genes, the percentage of directly triggered genes and the triggering function type, were altered in order to reveal the limits that are imposed by the nature of microarray data on the inference process. We examined the effect of various factors on the inference performance, such as the network size, the presence of noise in microarray data, and the network sparseness. We used a system theory approach and examined the relationship between the pole placement of the inferred system and the inference performance. We examined the relationship between the inference performance in the time domain and the true system parameter identification. Simulation results indicated that time separation and the percentage of directly triggered genes are crucial factors. Also, network sparseness, the triggering function type and noise in input data affect the inference performance. When two factors were simultaneously varied, it was found that variation of one parameter significantly affects the dynamic response of the other. Crucial factors were also examined using a real GRN, and the acquired results confirmed the simulation findings with artificial data. Different initial conditions were also used as an alternative triggering approach. The relevant results confirmed that the number of datasets constitutes the most significant parameter with regard to the inference performance.
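
    In the same system-theoretic spirit, this hedged Python sketch generates artificial time series from a linear network model, re-identifies the system by least squares, and compares the poles (eigenvalues) of the true and inferred systems; the model form and sizes are illustrative choices, not the paper's setup.

        # Sketch: artificial linear GRN time series, re-identification, pole comparison.
        import numpy as np

        rng = np.random.default_rng(3)
        n_genes, n_times = 6, 200
        A_true = 0.9 * np.eye(n_genes) + 0.05 * rng.normal(size=(n_genes, n_genes))

        x = np.zeros((n_times, n_genes)); x[0] = rng.normal(size=n_genes)
        for t in range(n_times - 1):                       # x[t+1] = A x[t] + noise
            x[t + 1] = x[t] @ A_true.T + 0.05 * rng.normal(size=n_genes)

        A_est, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)   # least-squares identification
        A_est = A_est.T

        poles_true = np.sort_complex(np.linalg.eigvals(A_true))
        poles_est = np.sort_complex(np.linalg.eigvals(A_est))
        print("max pole deviation:", np.abs(poles_true - poles_est).max())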

  5. Gauging Variational Inference

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahn, Sungsoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)

    2017-05-25

    Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used to resolve the issue in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables), confirm that the newly proposed algorithms outperform and generalize MF and BP.
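
    For orientation, the sketch below computes the naive mean-field lower bound on a small Ising partition function, i.e. the baseline bound that the gauged schemes above are designed to tighten; the couplings and sizes are arbitrary toy values.

        # Sketch: naive mean-field lower bound vs. exact log Z for a small Ising model.
        import itertools
        import numpy as np

        rng = np.random.default_rng(4)
        n = 8
        J = np.triu(0.2 * rng.normal(size=(n, n)), 1)     # pairwise couplings, i < j

        # Exact log partition function by brute-force enumeration of the 2**n states.
        states = [np.array(c) for c in itertools.product([-1.0, 1.0], repeat=n)]
        logZ = np.logaddexp.reduce([s @ J @ s for s in states])

        m = 0.01 * rng.normal(size=n)                     # mean-field magnetisations
        for _ in range(500):                              # fixed-point iteration
            m = np.tanh((J + J.T) @ m)
        p_up = (1 + m) / 2
        entropy = -np.sum(p_up * np.log(p_up + 1e-12) + (1 - p_up) * np.log(1 - p_up + 1e-12))
        mf_bound = m @ J @ m + entropy                    # E_q[energy] + H(q) <= log Z
        print(f"exact log Z = {logZ:.4f}, naive mean-field bound = {mf_bound:.4f}")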

  6. Additive operator-difference schemes: splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to solve approximately unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
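
    A minimal illustration of one splitting scheme covered by such books, assuming SciPy is available: Lie (sequential) splitting for du/dt = (A1 + A2)u over a single step, compared with the unsplit exact solution; the matrices are arbitrary.

        # Sketch: Lie (sequential) splitting for du/dt = (A1 + A2) u over one step,
        # compared against the unsplit exact solution exp((A1 + A2) * dt) u.
        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(5)
        n, dt = 5, 0.01
        A1 = rng.normal(size=(n, n)); A2 = rng.normal(size=(n, n))
        u0 = rng.normal(size=n)

        u_exact = expm((A1 + A2) * dt) @ u0
        u_split = expm(A2 * dt) @ (expm(A1 * dt) @ u0)     # solve with A1, then with A2

        print("splitting error (first order in dt):", np.linalg.norm(u_split - u_exact))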

  7. Improved Inference of Heteroscedastic Fixed Effects Models

    Directory of Open Access Journals (Sweden)

    Afshan Saeed

    2016-12-01

    Heteroscedasticity is a serious problem that distorts estimation and testing of the panel data model (PDM). Arellano (1987) proposed the White (1980) estimator for the PDM with heteroscedastic errors, but it provides erroneous inference for data sets including high leverage points. In this paper, our attempt is to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel datasets with high leverage points. To draw robust inference for the PDM, our focus is to improve the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used for assessment of the results.
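
    The kind of estimator being improved can be sketched directly; this generic example (not the authors' kernel bootstrap) computes White's HC0 and the leverage-adjusted HC3 sandwich standard errors for OLS on synthetic heteroscedastic data.

        # Sketch: heteroscedasticity-consistent covariance for OLS (HC0 and HC3;
        # HC3's leverage adjustment behaves better with high-leverage points).
        import numpy as np

        rng = np.random.default_rng(6)
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = 1.0 + 2.0 * X[:, 1] + np.abs(X[:, 1]) * rng.normal(size=n)   # heteroscedastic errors

        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        e = y - X @ beta
        h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)                      # leverage values

        def sandwich(weights):
            """XtX_inv @ (X' diag(weights) X) @ XtX_inv."""
            meat = (X * weights[:, None]).T @ X
            return XtX_inv @ meat @ XtX_inv

        se_hc0 = np.sqrt(np.diag(sandwich(e ** 2)))
        se_hc3 = np.sqrt(np.diag(sandwich((e / (1 - h)) ** 2)))
        print("HC0 standard errors:", se_hc0)
        print("HC3 standard errors:", se_hc3)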

  8. SEMANTIC PATCH INFERENCE

    DEFF Research Database (Denmark)

    Andersen, Jesper

    2009-01-01

    Collateral evolution is the problem of updating several library-using programs in response to API changes in the used library. In this dissertation we address the issue of understanding collateral evolutions by automatically inferring a high-level specification of the changes evident in a given set ... specifications inferred by spdiff in Linux are shown. We find that the inferred specifications concisely capture the actual collateral evolution performed in the examples.

  9. Inference in `poor` languages

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, S.

    1996-10-01

    Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite, complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.

  10. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian statistical inference is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.

  11. Geometric statistical inference

    International Nuclear Information System (INIS)

    Periwal, Vipul

    1999-01-01

    A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined

  12. Practical Bayesian Inference

    Science.gov (United States)

    Bailer-Jones, Coryn A. L.

    2017-04-01

    Preface; 1. Probability basics; 2. Estimation and uncertainty; 3. Statistical models and inference; 4. Linear models, least squares, and maximum likelihood; 5. Parameter estimation: single parameter; 6. Parameter estimation: multiple parameters; 7. Approximating distributions; 8. Monte Carlo methods for inference; 9. Parameter estimation: Markov chain Monte Carlo; 10. Frequentist hypothesis testing; 11. Model comparison; 12. Dealing with more complicated problems; References; Index.

  13. A canonical correlation analysis-based dynamic bayesian network prior to infer gene regulatory networks from multiple types of biological data.

    Science.gov (United States)

    Baur, Brittany; Bozdag, Serdar

    2015-04-01

    One of the challenging and important computational problems in systems biology is to infer gene regulatory networks (GRNs) of biological systems. Several methods that exploit gene expression data have been developed to tackle this problem. In this study, we propose the use of copy number and DNA methylation data to infer GRNs. We developed an algorithm that scores regulatory interactions between genes based on canonical correlation analysis. In this algorithm, copy number or DNA methylation variables are treated as potential regulator variables, and expression variables are treated as potential target variables. We first validated that the canonical correlation analysis method is able to infer true interactions with high accuracy. We showed that the use of DNA methylation or copy number datasets leads to improved inference over steady-state expression. Our results also showed that epigenetic and structural information could be used to infer the directionality of regulatory interactions. Additional improvements in GRN inference can be gleaned from incorporating the result in an informative prior in a dynamic Bayesian algorithm. This is the first study that incorporates copy number and DNA methylation into an informative prior in a dynamic Bayesian framework. By closely examining top-scoring interactions with different sources of epigenetic or structural information, we also identified potential novel regulatory interactions.
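
    A toy version of the scoring step, assuming scikit-learn: the first canonical correlation between a regulator's copy-number/methylation variables and the expression of candidate targets is used as the interaction score. All data below are synthetic stand-ins.

        # Sketch: scoring a regulator -> targets interaction by canonical correlation.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(7)
        n = 80
        copy_number = rng.normal(size=n)
        methylation = rng.normal(size=n)
        X = np.column_stack([copy_number, methylation])        # potential regulator variables
        signal = 0.9 * copy_number - 0.8 * methylation
        Y = np.column_stack([signal + 0.2 * rng.normal(size=n),  # expression of two targets
                             0.5 * signal + 0.2 * rng.normal(size=n)])

        cca = CCA(n_components=1)
        Xc, Yc = cca.fit_transform(X, Y)
        score = abs(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])     # first canonical correlation
        print(f"canonical-correlation score for the interaction: {score:.3f}")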

  14. Knowledge and inference

    CERN Document Server

    Nagao, Makoto

    1990-01-01

    Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in the artificial intellig

  15. Logical inference and evaluation

    International Nuclear Information System (INIS)

    Perey, F.G.

    1981-01-01

    Most methodologies of evaluation currently used are based upon the theory of statistical inference. It is generally perceived that this theory is not capable of dealing satisfactorily with what are called systematic errors. Theories of logical inference should be capable of treating all of the information available, including that not involving frequency data. A theory of logical inference is presented as an extension of deductive logic via the concept of plausibility and the application of group theory. Some conclusions, based upon the application of this theory to evaluation of data, are also given

  16. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  17. On quantum statistical inference

    NARCIS (Netherlands)

    Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.

    2003-01-01

    Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have

  18. INFERENCE BUILDING BLOCKS

    Science.gov (United States)

    2018-02-15

    expressed a variety of inference techniques on discrete and continuous distributions: exact inference, importance sampling, Metropolis-Hastings (MH) ... without redoing any math or rewriting any code. And although our main goal is composable reuse, our performance is also good because we can use ... control paths. The Hakaru language can express mixtures of discrete and continuous distributions, but the current disintegration transformation

  19. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques. Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist

  20. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  1. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-01

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
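
    The workflow can be caricatured in a few lines of Python: fit a cheap polynomial surrogate to an expensive model response, then run a Metropolis sampler against the surrogate likelihood. The "model", noise level and prior range are placeholders, not anything from the talk.

        # Sketch: polynomial surrogate of an expensive model plus a Metropolis update
        # of one uncertain parameter (a stand-in for, e.g., a drag coefficient).
        import numpy as np

        rng = np.random.default_rng(8)

        def expensive_model(c):                 # placeholder for an ocean-model response
            return np.sin(3 * c) + c ** 2

        train_c = np.linspace(0, 1, 12)         # a small design over the input range
        coeffs = np.polyfit(train_c, expensive_model(train_c), deg=5)

        def surrogate(c):                       # cheap replacement for the model
            return np.polyval(coeffs, c)

        c_true = 0.6
        obs = expensive_model(c_true) + 0.01 * rng.normal(size=20)   # noisy observations

        def log_post(c):                        # flat prior on [0, 1], Gaussian likelihood
            if not 0.0 <= c <= 1.0:
                return -np.inf
            return -0.5 * np.sum((obs - surrogate(c)) ** 2) / 0.01 ** 2

        samples, c = [], 0.5
        for _ in range(5000):                   # Metropolis on the cheap surrogate
            prop = c + 0.05 * rng.normal()
            if np.log(rng.uniform()) < log_post(prop) - log_post(c):
                c = prop
            samples.append(c)

        print("posterior mean of the parameter:", np.mean(samples[1000:]))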

  2. Exact nonparametric inference for detection of nonlinear determinism

    OpenAIRE

    Luo, Xiaodong; Zhang, Jie; Small, Michael; Moroz, Irene

    2005-01-01

    We propose an exact nonparametric inference scheme for the detection of nonlinear determinism. The essential fact utilized in our scheme is that, for a linear stochastic process with jointly symmetric innovations, its ordinary least square (OLS) linear prediction error is symmetric about zero. Based on this viewpoint, a class of linear signed rank statistics, e.g. the Wilcoxon signed rank statistic, can be derived with the known null distributions from the prediction error. Thus one of the ad...
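
    The scheme translates almost directly into code, assuming SciPy: fit an OLS linear predictor to lagged values and apply the Wilcoxon signed-rank test to the prediction errors, whose symmetry about zero is the linear null hypothesis described above. The test process below is an illustrative nonlinear example.

        # Sketch: Wilcoxon signed-rank test on OLS linear prediction errors.
        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(9)
        n, order = 500, 2
        x = np.zeros(n)
        for t in range(2, n):                   # a noisy nonlinear (threshold-like) process
            x[t] = 0.5 * x[t - 1] - 0.3 * abs(x[t - 2]) + rng.normal()

        lags = np.column_stack([x[order - 1:-1], x[order - 2:-2]])  # lagged regressors
        beta, *_ = np.linalg.lstsq(lags, x[order:], rcond=None)
        errors = x[order:] - lags @ beta

        stat, p = wilcoxon(errors)              # tests symmetry of errors about zero
        print(f"Wilcoxon p-value: {p:.4g} (small p suggests nonlinear determinism)")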

  3. Type Inference with Inequalities

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff

    1991-01-01

    Type inference can be phrased as constraint-solving over types. We consider an implicitly typed language equipped with recursive types, multiple inheritance, 1st order parametric polymorphism, and assignments. Type correctness is expressed as satisfiability of a possibly infinite collection of (monotonic) inequalities on the types of variables and expressions. A general result about systems of inequalities over semilattices yields a solvable form. We distinguish between deciding typability (the existence of solutions) and type inference (the computation of a minimal solution). In our case, both ...

  4. Inverse Ising inference with correlated samples

    International Nuclear Information System (INIS)

    Obermayer, Benedikt; Levine, Erel

    2014-01-01

    Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)
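
    A minimal sketch of the ingredients, with synthetic data: naive mean-field inverse Ising couplings from a weighted inverse covariance, where the weights implement the kind of simple similarity-based reweighting discussed above (the 80% identity cutoff is an arbitrary choice).

        # Sketch: naive mean-field inverse Ising with sample reweighting.
        import numpy as np

        rng = np.random.default_rng(10)
        n_samples, n_spins = 2000, 10
        spins = np.sign(rng.normal(size=(n_samples, n_spins)))
        spins[:, 1] = np.sign(spins[:, 0] + 0.5 * rng.normal(size=n_samples))  # spins 0 and 1 interact

        # Reweighting: down-weight samples with many near-duplicates (> 80% identity).
        sim = (spins @ spins.T) / n_spins
        weights = 1.0 / (sim > 0.8).sum(axis=1)
        weights /= weights.sum()

        mean = weights @ spins
        cov = (spins * weights[:, None]).T @ spins - np.outer(mean, mean)
        J = -np.linalg.inv(cov + 1e-6 * np.eye(n_spins))    # naive mean-field couplings
        np.fill_diagonal(J, 0.0)
        print("strongest inferred coupling:", np.unravel_index(np.abs(J).argmax(), J.shape))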

  5. Dopamine, reward learning, and active inference

    Directory of Open Access Journals (Sweden)

    Thomas eFitzgerald

    2015-11-01

    Temporal difference learning models propose phasic dopamine signalling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on a hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behaviour. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.

  6. Dopamine, reward learning, and active inference.

    Science.gov (United States)

    FitzGerald, Thomas H B; Dolan, Raymond J; Friston, Karl

    2015-01-01

    Temporal difference learning models propose phasic dopamine signaling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on a hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behavior. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.

  7. Finite Boltzmann schemes

    NARCIS (Netherlands)

    Sman, van der R.G.M.

    2006-01-01

    In the special case of relaxation parameter = 1 lattice Boltzmann schemes for (convection) diffusion and fluid flow are equivalent to finite difference/volume (FD) schemes, and are thus coined finite Boltzmann (FB) schemes. We show that the equivalence is inherent to the homology of the

  8. The anatomy of choice: active inference and agency

    Directory of Open Access Journals (Sweden)

    Karl eFriston

    2013-09-01

    This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behaviour. In particular, we consider prior beliefs that action minimises the Kullback-Leibler divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimises a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimising free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action – constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualises optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimisation, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution – that minimises free energy. This sensitivity corresponds to the precision of beliefs about behaviour, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behaviour entails a representation of confidence about outcomes that are under an agent's control.

  9. The anatomy of choice: active inference and agency.

    Science.gov (United States)

    Friston, Karl; Schwartenbeck, Philipp; Fitzgerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J

    2013-01-01

    This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action-constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution-that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.

  10. Bayesian inference for hybrid discrete-continuous stochastic kinetic models

    International Nuclear Information System (INIS)

    Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S

    2014-01-01

    We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’ with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through a MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)
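
    For contrast with the hybrid scheme, here is the plain (exact but potentially slow) Gillespie simulation of the underlying Markov jump process for a toy birth-death system; the reactions and rates are illustrative, not taken from the paper.

        # Sketch: Gillespie simulation of a one-species production/degradation MJP.
        import numpy as np

        rng = np.random.default_rng(11)
        x, t, t_end = 0, 0.0, 50.0
        k_prod, k_deg = 2.0, 0.1
        times, states = [t], [x]

        while t < t_end:
            rates = np.array([k_prod, k_deg * x])     # hazard of each reaction
            total = rates.sum()
            t += rng.exponential(1.0 / total)         # waiting time to the next reaction
            x += 1 if rng.uniform() < rates[0] / total else -1
            times.append(t); states.append(x)

        print(f"state at t={t_end}: {states[-1]} (stationary mean is k_prod/k_deg = 20)")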

  11. Active Inference, Epistemic Value, and Vicarious Trial and Error

    Science.gov (United States)

    Pezzulo, Giovanni; Cartoni, Emilio; Rigoli, Francesco; Pio-Lopez, Léo; Friston, Karl

    2016-01-01

    Balancing habitual and deliberate forms of choice entails a comparison of their respective merits--the former being faster but inflexible, and the latter slower but more versatile. Here, we show that arbitration between these two forms of control can be derived from first principles within an Active Inference scheme. We illustrate our arguments…

  12. Inference as Prediction

    Science.gov (United States)

    Watson, Jane

    2007-01-01

    Inference, or decision making, is seen in curriculum documents as the final step in a statistical investigation. For a formal statistical enquiry this may be associated with sophisticated tests involving probability distributions. For young students without the mathematical background to perform such tests, it is still possible to draw informal…

  13. Mathematical inference and control of molecular networks from perturbation experiments

    Science.gov (United States)

    Mohammed-Rasheed, Mohammed

    One of the main challenges facing biologists and mathematicians in the post-genomic era is to understand the behavior of molecular networks and harness this understanding into an educated intervention of the cell. The cell maintains its function via an elaborate network of interconnecting positive and negative feedback loops of genes, RNA and proteins that send different signals to a large number of pathways and molecules. These structures are referred to as genetic regulatory networks (GRNs) or molecular networks. GRNs can be viewed as dynamical systems with inherent properties and mechanisms, such as steady-state equilibria and stability, that determine the behavior of the cell. The biological relevance of the mathematical concepts is important as they may predict the differentiation of a stem cell, the maintenance of a normal cell, the development of cancer and its aberrant behavior, and the design of drugs and response to therapy. Uncovering the underlying GRN structure from gene/protein expression data, e.g., microarrays or perturbation experiments, is called inference or reverse engineering of the molecular network. Because of the high cost and time-consuming nature of biological experiments, the number of available measurements or experiments is very small compared to the number of molecules (genes, RNA and proteins). In addition, the observations are noisy, where the noise is due to measurement imperfections as well as the inherent stochasticity of genetic expression levels. Intra-cellular activities and extra-cellular environmental attributes are also another source of variability. Thus, the inference of GRNs is, in general, an under-determined problem with a highly noisy set of observations. The ultimate goal of GRN inference and analysis is to be able to intervene within the network, in order to force it away from undesirable cellular states and into desirable ones. However, it remains a major challenge to design optimal intervention strategies

  14. Active inference and epistemic value.

    Science.gov (United States)

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
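
    The decomposition can be made concrete with made-up numbers: the sketch below computes expected free energy as negative extrinsic value (expected log-preference) minus epistemic value (expected information gain) for two candidate policies in a 2-state, 2-outcome problem. All distributions are illustrative.

        # Sketch: expected free energy G = -(extrinsic value) - (epistemic value).
        import numpy as np

        C = np.array([0.8, 0.2])                       # prior preference over outcomes
        A = np.array([[0.9, 0.1],                      # P(outcome | state): informative mapping
                      [0.1, 0.9]])

        def expected_free_energy(q_s):
            q_o = A @ q_s                              # predicted outcomes under the policy
            extrinsic = q_o @ np.log(C)                # expected log-preference
            posteriors = (A * q_s) / q_o[:, None]      # Q(s | o) for each outcome o
            epistemic = sum(q_o[o] * (posteriors[o] @ np.log(posteriors[o] / q_s))
                            for o in range(2))         # expected information gain
            return -extrinsic - epistemic, extrinsic, epistemic

        for name, q_s in [("confident policy", np.array([0.95, 0.05])),
                          ("exploratory policy", np.array([0.5, 0.5]))]:
            G, ext, epi = expected_free_energy(q_s)
            print(f"{name}: G = {G:.3f}, extrinsic = {ext:.3f}, epistemic = {epi:.3f}")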

  15. Inference rule and problem solving

    Energy Technology Data Exchange (ETDEWEB)

    Goto, S

    1982-04-01

    Intelligent information processing refers to having human intellectual activity carried out on a computer, in which inference, in place of ordinary calculation, is used as the basic operational mechanism for such information processing. Many inference rules are derived from syllogisms in formal logic. The problem of programming this inference function is referred to as problem solving. Although inference and problem solving are logically in close relation, the calculation ability of current computers is at a low level for inferring. For clarifying the relation between inference and computers, nonmonotonic logic has been considered. The paper deals with the above topics. 16 references.

  16. Stochastic processes inference theory

    CERN Document Server

    Rao, Malempati M

    2014-01-01

    This is the revised and enlarged 2nd edition of the authors’ original text, which was intended to be a modest complement to Grenander's fundamental memoir on stochastic processes and related inference theory. The present volume gives a substantial account of regression analysis, both for stochastic processes and measures, and includes recent material on Ridge regression with some unexpected applications, for example in econometrics. The first three chapters can be used for a quarter or semester graduate course on inference on stochastic processes. The remaining chapters provide more advanced material on stochastic analysis suitable for graduate seminars and discussions, leading to dissertation or research work. In general, the book will be of interest to researchers in probability theory, mathematical statistics and electrical and information theory.

  17. Making Type Inference Practical

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff; Oxhøj, Nicholas; Palsberg, Jens

    1992-01-01

    We present the implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding. The algorithm significantly improves our previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way. Also, the complexity has been dramatically improved, from exponential time to low polynomial time. The implementation uses the techniques of incremental graph construction and constraint template instantiation to avoid representing intermediate results, doing superfluous work, and recomputing type information. Experiments indicate that the implementation type checks as much as 100 lines per second. This results in a mature product, on which a number of tools can be based, for example a safety tool, an image compression tool, a code optimization tool, and an annotation tool. This may make type inference for object

  18. Russell and Humean Inferences

    Directory of Open Access Journals (Sweden)

    João Paulo Monteiro

    2001-12-01

    Russell's The Problems of Philosophy tries to establish a new theory of induction, at the same time that Hume is there accused of an irrational "scepticism about induction". But a careful analysis of the theory of knowledge explicitly acknowledged by Hume reveals that, contrary to the standard interpretation in the XXth century, possibly influenced by Russell, Hume deals exclusively with causal inference (which he never classifies as "causal induction", although now we are entitled to do so), never with inductive inference in general, mainly generalizations about sensible qualities of objects (whether, e.g., "all crows are black" or not is not among Hume's concerns). Russell's theories are thus only false alternatives to Hume's, in (1912) or in his (1948).

  19. Causal inference in econometrics

    CERN Document Server

    Kreinovich, Vladik; Sriboonchitta, Songsak

    2016-01-01

    This book is devoted to the analysis of causal inference which is one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other one, or whether these two phenomena have a common cause. This analysis is the main focus of this volume. To get a good understanding of the causal inference, it is important to have models of economic phenomena which are as accurate as possible. Because of this need, this volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained by using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.

  20. Learning Convex Inference of Marginals

    OpenAIRE

    Domke, Justin

    2012-01-01

    Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach suffers difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined to be the minimization of a convex function, inspired by free energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main ...

  1. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

    ... are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which, in a systematic way, makes an XML language available as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource.

  2. Probabilistic inductive inference: a survey

    OpenAIRE

    Ambainis, Andris

    2001-01-01

    Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.

  3. Multimodel inference and adaptive management

    Science.gov (United States)

    Rehme, S.E.; Powell, L.A.; Allen, Craig R.

    2011-01-01

    Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.

  4. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  5. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method.

    Science.gov (United States)

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-10-06

    Gene regulatory network (GRN) research reveals complex life phenomena from the perspective of gene interaction, and is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on the dynamic Bayesian network (DBN) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI (conditional mutual inclusive information-based network inference) algorithm for network structure profile learning, namely the construction of the search space. The redundant regulations are then removed by using the recursive optimization (RO) algorithm, thereby reducing the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on the benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the algorithm.
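
    Not the DBNCS algorithm itself, but a hedged sketch of the kind of time-delayed edge scoring that can precede a DBN refinement step: candidate regulator-target pairs are scored by lagged correlation over a small range of delays (synthetic data, arbitrary threshold).

        # Sketch: scoring candidate time-delayed edges by lagged correlation.
        import numpy as np

        rng = np.random.default_rng(12)
        n_times, n_genes, max_delay = 300, 4, 3
        x = rng.normal(size=(n_times, n_genes))
        x[2:, 1] += 0.8 * x[:-2, 0]                    # gene 0 regulates gene 1 with delay 2

        for reg in range(n_genes):
            for tgt in range(n_genes):
                if reg == tgt:
                    continue
                scores = [abs(np.corrcoef(x[:-d, reg], x[d:, tgt])[0, 1])
                          for d in range(1, max_delay + 1)]
                best = int(np.argmax(scores)) + 1      # best-supported delay
                if max(scores) > 0.5:                  # crude evidence threshold
                    print(f"edge {reg} -> {tgt} with delay {best}, score {max(scores):.2f}")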

  6. Inference and interrogation of a coregulatory network in the context of lipid accumulation in Yarrowia lipolytica.

    Science.gov (United States)

    Trébulle, Pauline; Nicaud, Jean-Marc; Leplat, Christophe; Elati, Mohamed

    2017-01-01

    Complex phenotypes, such as lipid accumulation, result from cooperativity between regulators and the integration of multiscale information. However, the elucidation of such regulatory programs by experimental approaches may be challenging, particularly in context-specific conditions. In particular, we know very little about the regulators of lipid accumulation in the oleaginous yeast of industrial interest Yarrowia lipolytica. This lack of knowledge limits the development of this yeast as an industrial platform, due to the time-consuming and costly laboratory efforts required to design strains with the desired phenotypes. In this study, we aimed to identify context-specific regulators and mechanisms, to guide explorations of the regulation of lipid accumulation in Y. lipolytica. Using gene regulatory network inference, and considering the expression of 6539 genes over 26 time points from GSE35447 for biolipid production and a list of 151 transcription factors, we reconstructed a gene regulatory network comprising 111 transcription factors, 4451 target genes and 17048 regulatory interactions (YL-GRN-1), supported by evidence of protein-protein interactions. This study, based on network interrogation and wet laboratory validation, (a) highlights the relevance of our proposed measure, transcription factor influence, for identifying phases corresponding to changes in physiological state without prior knowledge, (b) suggests new potential regulators and drivers of lipid accumulation and (c) experimentally validates the impact of six of the nine regulators identified on lipid accumulation, with variations in lipid content from +43.2% to -31.2% on glucose or glycerol.

  7. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  8. Emotional inferences by pragmatics

    OpenAIRE

    Iza-Miqueleiz, Mauricio

    2017-01-01

    It has for long been taken for granted that, along the course of reading a text, world knowledge is often required in order to establish coherent links between sentences (McKoon & Ratcliff 1992, Iza & Ezquerro 2000). The content grasped from a text turns out to be strongly dependent upon the reader’s additional knowledge that allows a coherent interpretation of the text as a whole. The world knowledge directing the inference may be of distinctive nature. Gygax et al. (2007) showed that m...

  9. Generic patch inference

    DEFF Research Database (Denmark)

    Andersen, Jesper; Lawall, Julia

    2010-01-01

    A key issue in maintaining Linux device drivers is the need to keep them up to date with respect to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spdiff, that identifies common changes ... developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect

  10. Dynamic droop scheme considering effect of intermittent renewable energy source

    DEFF Research Database (Denmark)

    Wang, Yanbo; Chen, Zhe; Deng, Fujin

    2016-01-01

    This paper presents a dynamic droop control scheme for islanded microgrids dominated by intermittent renewable energy sources, which is able to perform desirable power sharing in the presence of renewable energy source fluctuation. First, allowable maximum power points of wind generator and PV ... the controller of each DG unit is activated through a local logic variable inferred from wind speed and solar insolation information. Simulation results are given for validating the droop control scheme. The proposed dynamic droop scheme preserves the advantage of the conventional droop control method, and provides flexibility and effectiveness in the presence of renewable energy source fluctuation.

  11. Adaptive protection scheme

    Directory of Open Access Journals (Sweden)

    R. Sitharthan

    2016-09-01

    This paper aims at modelling an electronically coupled distributed energy resource with an adaptive protection scheme. The electronically coupled distributed energy resource is a microgrid framework formed by coupling the renewable energy source electronically. Further, the proposed adaptive protection scheme provides suitable protection to the microgrid for various fault conditions irrespective of the operating mode of the microgrid, namely grid-connected mode and islanded mode. The outstanding aspect of the developed adaptive protection scheme is that it monitors the microgrid and instantly updates the relay fault current according to the variations that occur in the system. The proposed adaptive protection scheme also employs auto reclosers, through which it recovers faster from faults and thereby increases the consistency of the microgrid. The effectiveness of the proposed adaptive protection is studied through time domain simulations carried out in the PSCAD/EMTDC software environment.

  12. Active Inference and Learning in the Cerebellum.

    Science.gov (United States)

    Friston, Karl; Herreros, Ivan

    2016-09-01

    This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry-and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.

  13. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    This work is devoted to an investigation of threshold signature schemes. A systematization of threshold signature schemes was carried out, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generation and verification of threshold signatures were explored, and the practical usability of threshold schemes in mobile agents, Internet banking and e-currency was shown. Topics for further investigation are given, which could reduce the level of counterfeit electronic documents signed by a group of users.
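
    The Lagrange-interpolation machinery underlying such schemes can be shown with Shamir secret sharing over a prime field; the parameters below are toy values, and a real threshold signature layers signing on top of this kind of share arithmetic.

        # Sketch: the Lagrange-interpolation core of a (t, n) threshold scheme,
        # i.e. Shamir secret sharing over a prime field (Python 3.8+ for pow(x, -1, p)).
        import random

        P = 2 ** 127 - 1                                  # a Mersenne prime field
        t, n, secret = 3, 5, 123456789

        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        shares = [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
                  for i in range(1, n + 1)]               # evaluate the degree t-1 polynomial

        def reconstruct(subset):
            """Lagrange interpolation at x = 0 from any t shares."""
            total = 0
            for i, (xi, yi) in enumerate(subset):
                num = den = 1
                for j, (xj, _) in enumerate(subset):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, -1, P)) % P
            return total

        print("recovered secret:", reconstruct(shares[:t]))   # any 3 of the 5 shares suffice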

  14. CSR schemes in agribusiness

    DEFF Research Database (Denmark)

    Pötz, Katharina Anna; Haas, Rainer; Balzarova, Michaela

    2013-01-01

    Purpose – The rise of CSR followed a demand for CSR standards and guidelines. In a sector already characterized by a large number of standards, the authors seek to ask what CSR schemes apply to agribusiness, and how they can be systematically compared and analysed. Design/methodology/approach – Following a deductive-inductive approach the authors develop a model to compare and analyse CSR schemes based on existing studies and on coding qualitative data on 216 CSR schemes. Findings – The authors confirm that CSR standards and guidelines have entered agribusiness and identify a complex landscape of schemes that can be categorized on focus areas, scales, mechanisms, origins, types and commitment levels. Research limitations/implications – The findings contribute to conceptual and empirical research on existing models to compare and analyse CSR standards. Sampling technique and depth of analysis limit ...

  15. Tabled Execution in Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Willcock, J J; Lumsdaine, A; Quinlan, D J

    2008-08-19

    Tabled execution is a generalization of memoization developed by the logic programming community. It not only saves results from tabled predicates, but also stores the set of currently active calls to them; tabled execution can thus provide meaningful semantics for programs that seemingly contain infinite recursions with the same arguments. In logic programming, tabled execution is used for many purposes, both for improving the efficiency of programs, and for making tasks simpler and more direct to express than with normal logic programs. However, tabled execution is only infrequently applied in mainstream functional languages such as Scheme. We demonstrate an elegant implementation of tabled execution in Scheme, using a mix of continuation-passing style and mutable data. We also show the use of tabled execution in Scheme for a problem in formal language and automata theory, demonstrating that tabled execution can be a valuable tool for Scheme users.
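
    The record's implementation is in Scheme; as a language-neutral illustration of what tabling buys, this Python sketch tables derived facts and iterates to a fixpoint, giving a terminating meaning to a path relation that plain recursion would loop on in the presence of cycles.

        # Sketch: tabled evaluation of path(X, Y) over a cyclic edge relation.
        edges = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}   # note the a-b-c cycle

        def tabled_path(edges):
            """Iterate path(X,Y) :- edge(X,Y) and path(X,Z) :- edge(X,Y), path(Y,Z)
            to a fixpoint, tabling derived facts instead of recursing."""
            table = set(edges)
            while True:
                new = {(x, z) for (x, y) in edges for (y2, z) in table if y == y2}
                if new <= table:                       # no new facts: fixpoint reached
                    return table
                table |= new

        paths = tabled_path(edges)
        print(("a", "d") in paths, ("d", "a") in paths)   # True False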

  16. Evaluating statistical cloud schemes

    OpenAIRE

    Grützun, Verena; Quaas, Johannes; Morcrette , Cyril J.; Ament, Felix

    2015-01-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...

  17. Gamma spectrometry; level schemes

    International Nuclear Information System (INIS)

    Blachot, J.; Bocquet, J.P.; Monnand, E.; Schussler, F.

    1977-01-01

    The research presented dealt with: a new beta emitter, isomer of 131 Sn; the 136 I levels fed through the radioactive decay of 136 Te (20.9s); the A=145 chain (β decay of Ba, La and Ce, and level schemes for 145 La, 145 Ce, 145 Pr); the A=147 chain (β decay of La and Ce, and the level schemes of 147 Ce and 147 Pr) [fr

  18. Scheme of energy utilities

    International Nuclear Information System (INIS)

    2002-04-01

    This scheme defines the objectives relative to renewable energies and the rational use of energy in the framework of the national energy policy. It evaluates the needs and potentialities of the regions and recommends actions to be coordinated between the government and the territorial organizations. The document is presented in four parts: the situation; the stakes and forecasts; the possible actions for new measures; the scheme management and the regional contributions analysis. (A.L.B.)

  19. Implementing and analyzing the multi-threaded LP-inference

    Science.gov (United States)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    The logical production equations provide new possibilities for optimizing backward inference in intelligent production-type systems. The strategy of relevant backward inference aims to minimize the number of queries to an external information source (either a database or an interactive user). The idea of the method is based on computing the set of initial preimages and searching for the true preimage. Each stage can be organized independently and in parallel, and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms of relevant inference based on an advanced “pipeline” scheme of parallel computations, which allows increasing the degree of parallelism. The authors also provide some details of the LP-structures implementation.
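
    The stagewise parallelism can be pictured with a small Python sketch (hypothetical helpers: candidate_preimages and is_true stand in for the paper's LP-structure operations and external queries).

        from concurrent.futures import ThreadPoolExecutor

        def find_true_preimage(goal, candidate_preimages, is_true, workers=4):
            """Compute the initial set of preimages for a goal, then test the
            candidates concurrently and return the first true preimage."""
            candidates = list(candidate_preimages(goal))
            with ThreadPoolExecutor(max_workers=workers) as pool:
                for cand, ok in zip(candidates, pool.map(is_true, candidates)):
                    if ok:
                        return cand
            return None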

  20. Feature Inference Learning and Eyetracking

    Science.gov (United States)

    Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.

    2009-01-01

    Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…

  1. An Inference Language for Imaging

    DEFF Research Database (Denmark)

    Pedemonte, Stefano; Catana, Ciprian; Van Leemput, Koen

    2014-01-01

    We introduce iLang, a language and software framework for probabilistic inference. The iLang framework enables the definition of directed and undirected probabilistic graphical models and the automated synthesis of high performance inference algorithms for imaging applications. The iLang framewor...

  2. Social Inference Through Technology

    Science.gov (United States)

    Oulasvirta, Antti

    Awareness cues are computer-mediated, real-time indicators of people’s undertakings, whereabouts, and intentions. Already in the mid-1970s, UNIX users could use commands such as “finger” and “talk” to find out who was online and to chat. The small icons in instant messaging (IM) applications that indicate coconversants’ presence in the discussion space are the successors of “finger” output. Similar indicators can be found in online communities, media-sharing services, Internet relay chat (IRC), and location-based messaging applications. But presence and availability indicators are only the tip of the iceberg. Technological progress has enabled richer, more accurate, and more intimate indicators. For example, there are mobile services that allow friends to query and follow each other’s locations. Remote monitoring systems developed for health care allow relatives and doctors to assess the wellbeing of homebound patients (see, e.g., Tang and Venables 2000). But users also utilize cues that have not been deliberately designed for this purpose. For example, online gamers pay attention to other characters’ behavior to infer what the other players are like “in real life.” There is a common denominator underlying these examples: shared activities rely on the technology’s representation of the remote person. The other human being is not physically present but present only through a narrow technological channel.

  3. Quantum Enhanced Inference in Markov Logic Networks.

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
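
    For readers unfamiliar with the classical baseline, Gibbs sampling on a toy two-spin Markov network looks as follows (coupling strength illustrative; this sketches the classical heuristic, not the quantum protocol).

        import math, random

        J = 0.8  # coupling: p(x1, x2) proportional to exp(J * x1 * x2)

        def gibbs(steps=20000):
            """Resample each spin from its conditional; track agreement."""
            x, agree = [1, 1], 0
            for _ in range(steps):
                for i in (0, 1):
                    p_plus = 1.0 / (1.0 + math.exp(-2.0 * J * x[1 - i]))
                    x[i] = 1 if random.random() < p_plus else -1
                agree += x[0] == x[1]
            return agree / steps

        print(gibbs())  # near exp(J) / (exp(J) + exp(-J)), about 0.83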

  5. Towards Symbolic Encryption Schemes

    DEFF Research Database (Denmark)

    Ahmed, Naveed; Jensen, Christian D.; Zenner, Erik

    2012-01-01

    Symbolic encryption, in the style of Dolev-Yao models, is ubiquitous in formal security models. In its common use, encryption on a whole message is specified as a single monolithic block. From a cryptographic perspective, however, this may require a resource-intensive cryptographic algorithm, namely an authenticated encryption scheme that is secure under chosen ciphertext attack. Therefore, many reasonable encryption schemes, such as AES in the CBC or CFB mode, are not among the implementation options. In this paper, we report new attacks on CBC and CFB based implementations of the well-known Needham-Schroeder and Denning-Sacco protocols. To avoid such problems, we advocate the use of refined notions of symbolic encryption that have natural correspondence to standard cryptographic encryption schemes.

  6. Compact Spreader Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, M.; Jung, J. -Y.; Ratti, A.; Sun, C.

    2014-07-25

    This paper describes beam distribution schemes adopting a novel implementation based on low amplitude vertical deflections combined with horizontal ones generated by Lambertson-type septum magnets. This scheme offers substantial compactness in the longitudinal layouts of the beam lines and increased flexibility for beam delivery of multiple beam lines on a shot-to-shot basis. Fast kickers (FK) or transverse electric field RF Deflectors (RFD) provide the low amplitude deflections. Initially proposed at the Stanford Linear Accelerator Center (SLAC) as tools for beam diagnostics and more recently adopted for multiline beam pattern schemes, RFDs offer repetition capabilities and a likely better amplitude reproducibility when compared to FKs, which, in turn, offer more modest financial involvements both in construction and operation. Both solutions represent an ideal approach for the design of compact beam distribution systems resulting in space and cost savings while preserving flexibility and beam quality.

  7. New analytic unitarization schemes

    International Nuclear Information System (INIS)

    Cudell, J.-R.; Predazzi, E.; Selyugin, O. V.

    2009-01-01

    We consider two well-known classes of unitarization of Born amplitudes of hadron elastic scattering. The standard class, which saturates at the black-disk limit, includes the standard eikonal representation, while the other class, which goes beyond the black-disk limit to reach the full unitarity circle, includes the U matrix. It is shown that the basic properties of these schemes are independent of the functional form used for the unitarization, and that the U-matrix and eikonal schemes can be extended to have similar properties. A common form of unitarization is proposed, interpolating between both classes. The correspondence with different nonlinear equations is also briefly examined.

  8. 4. Payment Schemes

    Indian Academy of Sciences (India)

    Electronic Commerce - Payment Schemes. V Rajaraman. Series Article. Resonance – Journal of Science Education, Volume 6, Issue 2, February 2001, pp. 6-13. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/02/0006-0013

  9. Contract saving schemes

    NARCIS (Netherlands)

    Ronald, R.; Smith, S.J.; Elsinga, M.; Eng, O.S.; Fox O'Mahony, L.; Wachter, S.

    2012-01-01

    Contractual saving schemes for housing are institutionalised savings programmes normally linked to rights to loans for home purchase. They are of diverse types, having developed differently in each national context, but normally fall into categories of open, closed, compulsory, and ‘free

  10. Alternative reprocessing schemes evaluation

    International Nuclear Information System (INIS)

    1979-02-01

    This paper reviews the parameters which determine the inaccessibility of the plutonium in reprocessing plants. Among the various parameters, the physical and chemical characteristics of the materials, the various processing schemes and the confinement are considered. The emphasis is placed on the latter parameter, and the advantages of an increased confinement in the so-called PIPEX reprocessing plant type are presented

  11. Introduction to association schemes

    NARCIS (Netherlands)

    Seidel, J.J.

    1991-01-01

    The present paper gives an introduction to the theory of association schemes, following Bose-Mesner (1959), Biggs (1974), Delsarte (1973), Bannai-Ito (1984) and Brouwer-Cohen-Neumaier (1989). Apart from definitions and many examples, also several proofs and some problems are included. The paragraphs

  12. Reaction schemes of immunoanalysis

    International Nuclear Information System (INIS)

    Delaage, M.; Barbet, J.

    1991-01-01

    The authors apply a general theory for multiple equilibria to the reaction schemes of immunoanalysis, competition and sandwich. This approach allows the manufacturer to optimize the system and provide the user with interpolation functions for the standard curve and its first derivative as well, thus giving access to variance [fr

  13. Alternative health insurance schemes

    DEFF Research Database (Denmark)

    Keiding, Hans; Hansen, Bodil O.

    2002-01-01

    In this paper, we present a simple model of health insurance with asymmetric information, where we compare two alternative ways of organizing the insurance market: either as a competitive insurance market, where some risks remain uninsured, or as a compulsory scheme, where, however, the level...... competitive insurance; this situation turns out to be at least as good as either of the alternatives...

  14. Optimization methods for logical inference

    CERN Document Server

    Chandru, Vijay

    2011-01-01

    Merging logic and mathematics in deductive inference-an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks... it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in
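
    The flavour of the approach: a clause such as (x1 OR NOT x2) becomes the 0-1 inequality x1 + (1 - x2) >= 1, and satisfiability becomes feasibility of the inequality system. A brute-force Python sketch with an illustrative clause set:

        from itertools import product

        clauses = [(1, -2), (2, 3), (-1, -3)]  # ints are literals; sign = polarity

        def satisfied(assign, clause):
            """True iff the clause's 0-1 inequality holds at this point."""
            return any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)

        models = [p for p in product([0, 1], repeat=3)
                  if all(satisfied(p, c) for c in clauses)]
        print(models)  # [(0, 0, 1), (1, 1, 0)]: the feasible 0-1 points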

  15. On principles of inductive inference

    OpenAIRE

    Kostecki, Ryszard Paweł

    2011-01-01

    We propose an intersubjective epistemic approach to foundations of probability theory and statistical inference, based on relative entropy and category theory, and aimed to bypass the mathematical and conceptual problems of existing foundational approaches.

  16. Statistical inference via fiducial methods

    OpenAIRE

    Salomé, Diemer

    1998-01-01

    In this thesis the attention is restricted to inductive reasoning using a mathematical probability model. A statistical procedure prescribes, for every theoretically possible set of data, the inference about the unknown of interest. ... See: Summary

  17. Statistical inference for stochastic processes

    National Research Council Canada - National Science Library

    Basawa, Ishwar V; Prakasa Rao, B. L. S

    1980-01-01

    The aim of this monograph is to attempt to reduce the gap between theory and applications in the area of stochastic modelling, by directing the interest of future researchers to the inference aspects...

  18. Active inference, communication and hermeneutics.

    Science.gov (United States)

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa.

  19. On Converting Secret Sharing Scheme to Visual Secret Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Wang Daoshun

    2010-01-01

    Traditional Secret Sharing (SS) schemes reconstruct the secret exactly the same as the original one but involve complex computation. Visual Secret Sharing (VSS) schemes decode the secret without computation, but each share is m times as big as the original and the quality of the reconstructed secret image is reduced. Probabilistic visual secret sharing (Prob. VSS) schemes for a binary image use only one subpixel to share the secret image; however the probability of white pixels in a white area is higher than that in a black area in the reconstructed secret image. SS schemes, VSS schemes, and Prob. VSS schemes have various construction methods and advantages. This paper first presents an approach to convert (transform) an SS scheme to a VSS scheme for greyscale images. The generation of the shadow images (shares) is based on the Boolean XOR operation. The secret image can be reconstructed directly by performing the Boolean OR operation, as in most conventional VSS schemes. Its pixel expansion is significantly smaller than that of VSS schemes. The quality of the reconstructed images, measured by average contrast, is the same as for VSS schemes. Then a novel matrix-concatenation approach is used to extend the greyscale SS scheme to the more general case of a greyscale VSS scheme.
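
    The Boolean share-generation step is easiest to see in a minimal (n, n) XOR sketch in Python (illustrative of the principle only, not the paper's general construction).

        import os

        def xor_share(secret: bytes, n: int):
            """n - 1 random pads plus one share that XORs to the secret."""
            shares = [os.urandom(len(secret)) for _ in range(n - 1)]
            last = bytearray(secret)
            for s in shares:
                for i, b in enumerate(s):
                    last[i] ^= b
            return shares + [bytes(last)]

        def xor_reconstruct(shares):
            """XOR of all n shares restores the secret exactly."""
            out = bytearray(len(shares[0]))
            for s in shares:
                for i, b in enumerate(s):
                    out[i] ^= b
            return bytes(out)

        assert xor_reconstruct(xor_share(b"grey", 4)) == b"grey"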

  20. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    Gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in cellular context. Having immense importance in disease detection and drug finding, GRN has been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been the focus of attraction in bioinformatics. Predominant use of meta-heuristic algorithms in training neural models has proved its excellence. Considering these facts, this paper is organized to survey neural modelling schemes of GRN and the efficacy of meta-heuristic algorithms towards parameter learning (i.e. weighting connections) within the model. This survey paper renders two different structure-related approaches to infer GRN which are global structure approach and substructure approach. It also describes two neural modelling schemes, such as artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRN have been reviewed here.

  1. Selectively strippable paint schemes

    Science.gov (United States)

    Stein, R.; Thumm, D.; Blackford, Roger W.

    1993-03-01

    In order to meet the requirements of more environmentally acceptable paint stripping processes many different removal methods are under evaluation. These new processes can be divided into mechanical and chemical methods. ICI has developed a paint scheme with intermediate coat and fluid resistant polyurethane topcoat which can be stripped chemically in a short period of time with methylene chloride free and phenol free paint strippers.

  2. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    Science.gov (United States)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
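
    The MCMC machinery underlying such a framework can be sketched generically (a random-walk Metropolis sampler for a single illustrative parameter; this is not the BOSS scheme itself).

        import math, random

        def metropolis(y, sigma=1.0, steps=5000, step_size=0.5):
            """Random-walk Metropolis over the log-posterior of a mean
            parameter r given observations y ~ Normal(r, sigma), flat prior."""
            def log_post(r):
                return -sum((yi - r) ** 2 for yi in y) / (2 * sigma ** 2)
            r, samples = 0.0, []
            for _ in range(steps):
                prop = r + random.gauss(0.0, step_size)
                if math.log(random.random()) < log_post(prop) - log_post(r):
                    r = prop  # accept the proposal
                samples.append(r)
            return samples

        draws = metropolis([2.1, 1.9, 2.4, 2.0])
        print(sum(draws[1000:]) / len(draws[1000:]))  # posterior mean near 2.1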

  3. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
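
    For reference, the serial Thomas algorithm against which the text compares can be sketched as follows (a standard textbook form; the paper's parallel substructured solver is not shown).

        def thomas(a, b, c, d):
            """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal,
            d: right-hand side) by forward elimination and back substitution."""
            n = len(b)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        print(thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [3, 4, 3]))  # [1.0, 1.0, 1.0]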

  4. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    Science.gov (United States)

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  5. ESCAP mobile training scheme.

    Science.gov (United States)

    Yasas, F M

    1977-01-01

    In response to a United Nations resolution, the Mobile Training Scheme (MTS) was set up to provide training to the trainers of national cadres engaged in frontline and supervisory tasks in social welfare and rural development. The training is innovative in its being based on an analysis of field realities. The MTS team consisted of a leader, an expert on teaching methods and materials, and an expert on action research and evaluation. The country's trainers from different departments were sent to villages to work for a short period and to report their problems in fulfilling their roles. From these grass roots experiences, they made an analysis of the job, determining what knowledge, attitude and skills it required. Analysis of daily incidents and problems were used to produce indigenous teaching materials drawn from actual field practice. How to consider the problems encountered through government structures for policy making and decisions was also learned. Tasks of the students were to identify the skills needed for role performance by job analysis, daily diaries and project histories; to analyze the particular community by village profiles; to produce indigenous teaching materials; and to practice the role skills by actual role performance. The MTS scheme was tried in Nepal in 1974-75; 3 training programs trained 25 trainers and 51 frontline workers; indigenous teaching materials were created; technical papers written; and consultations were provided. In Afghanistan the scheme was used in 1975-76; 45 participants completed the training; seminars were held; and an ongoing Council was created. It is hoped that the training program will be expanded to other countries.

  6. Bonus schemes and trading activity

    NARCIS (Netherlands)

    Pikulina, E.S.; Renneboog, L.D.R.; ter Horst, J.R.; Tobler, P.N.

    2014-01-01

    Little is known about how different bonus schemes affect traders' propensity to trade and which bonus schemes improve traders' performance. We study the effects of linear versus threshold bonus schemes on traders' behavior. Traders buy and sell shares in an experimental stock market on the basis of

  7. Succesful labelling schemes

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Stacey, Julia

    2001-01-01

    In the spring of 2001 MAPP carried out an extensive consumer study with special emphasis on the Nordic environmentally friendly label 'the swan'. The purpose was to find out how much consumers actually know and use various labelling schemes. 869 households were contacted and asked to fill in a questionnaire. The respondent was asked to pick the most suitable answer, which described her use of each label (e.g. '... it into consideration when I go shopping'). 29% - also called 'the labelling blind' - responded that they basically only knew the recycling label and the Government controlled organic label 'Ø-mærket'. Another segment of 6

  8. Scheme of stepmotor control

    International Nuclear Information System (INIS)

    Grashilin, V.A.; Karyshev, Yu.Ya.

    1982-01-01

    A 6-cycle scheme of step motor is described. The block-diagram and the basic circuit of the step motor control are presented. The step motor control comprises a pulse shaper, electronic commutator and power amplifiers. The step motor supply from 6-cycle electronic commutator provides for higher reliability and accuracy than from 3-cycle commutator. The control of step motor work is realised by the program given by the external source of control signals. Time-dependent diagrams for step motor control are presented. The specifications of the step-motor is given

  9. Interactive Instruction in Bayesian Inference

    DEFF Research Database (Denmark)

    Khan, Azam; Breslav, Simon; Hornbæk, Kasper

    2018-01-01

    An instructional approach is presented to improve human performance in solving Bayesian inference problems. Starting from the original text of the classic Mammography Problem, the textual expression is modified and visualizations are added according to Mayer’s principles of instruction. These principles concern coherence, personalization, signaling, segmenting, multimedia, spatial contiguity, and pretraining. Principles of self-explanation and interactivity are also applied. Four experiments on the Mammography Problem showed that these principles help participants answer the questions, and that an instructional approach to improving human performance in Bayesian inference is a promising direction.
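
    The Mammography Problem itself is a single application of Bayes' rule; a sketch with the commonly cited textbook numbers (treat the values below as illustrative):

        # P(disease | positive) via Bayes' rule with classic textbook values.
        prior = 0.01          # P(disease)
        sensitivity = 0.80    # P(positive | disease)
        false_pos = 0.096     # P(positive | no disease)

        evidence = sensitivity * prior + false_pos * (1 - prior)
        posterior = sensitivity * prior / evidence
        print(round(posterior, 3))  # ~0.078: most positive tests are false alarms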

  10. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  11. Eight challenges in phylodynamic inference

    Directory of Open Access Journals (Sweden)

    Simon D.W. Frost

    2015-03-01

    The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever increasing corpus of sequence data.

  12. Problem solving and inference mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Furukawa, K; Nakajima, R; Yonezawa, A; Goto, S; Aoyama, A

    1982-01-01

    The heart of the fifth generation computer will be powerful mechanisms for problem solving and inference. A deduction-oriented language is to be designed, which will form the core of the whole computing system. The language is based on predicate logic with the extended features of structuring facilities, meta structures and relational data base interfaces. Parallel computation mechanisms and specialized hardware architectures are being investigated to make possible efficient realization of the language features. The project includes research into an intelligent programming system, a knowledge representation language and system, and a meta inference system to be built on the core. 30 references.

  13. Bayesian inference for Markov jump processes with informative observations.

    Science.gov (United States)

    Golightly, Andrew; Wilkinson, Darren J

    2015-04-01

    In this paper we consider the problem of parameter inference for Markov jump process (MJP) representations of stochastic kinetic models. Since transition probabilities are intractable for most processes of interest yet forward simulation is straightforward, Bayesian inference typically proceeds through computationally intensive methods such as (particle) MCMC. Such methods ostensibly require the ability to simulate trajectories from the conditioned jump process. When observations are highly informative, use of the forward simulator is likely to be inefficient and may even preclude an exact (simulation based) analysis. We therefore propose three methods for improving the efficiency of simulating conditioned jump processes. A conditioned hazard is derived based on an approximation to the jump process, and used to generate end-point conditioned trajectories for use inside an importance sampling algorithm. We also adapt a recently proposed sequential Monte Carlo scheme to our problem. Essentially, trajectories are reweighted at a set of intermediate time points, with more weight assigned to trajectories that are consistent with the next observation. We consider two implementations of this approach, based on two continuous approximations of the MJP. We compare these constructs for a simple tractable jump process before using them to perform inference for a Lotka-Volterra system. The best performing construct is used to infer the parameters governing a simple model of motility regulation in Bacillus subtilis.
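
    Forward simulation of such an MJP is indeed straightforward; a minimal Gillespie-style simulator for a stochastic Lotka-Volterra system is sketched below (rate constants illustrative).

        import random

        def gillespie(prey=100, pred=50, c=(0.5, 0.0025, 0.3), t_end=20.0):
            """Exact stochastic simulation: prey birth, predation, predator death."""
            t, path = 0.0, [(0.0, prey, pred)]
            while t < t_end:
                rates = [c[0] * prey, c[1] * prey * pred, c[2] * pred]
                total = sum(rates)
                if total == 0.0:
                    break                          # no reactions can fire
                t += random.expovariate(total)     # waiting time to next event
                u, acc = random.random() * total, 0.0
                for i, rate in enumerate(rates):
                    acc += rate
                    if u < acc:
                        break
                if i == 0:
                    prey += 1                      # prey reproduces
                elif i == 1:
                    prey, pred = prey - 1, pred + 1  # predation event
                else:
                    pred -= 1                      # predator dies
                path.append((t, prey, pred))
            return path

        print(gillespie()[-1])  # final (time, prey, predators) state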

  14. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well-defined, simple error correction scheme that combines erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols alone. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that will correct errors even if they occur at the same bit location of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, will offer higher throughput. (author)
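
    The failure mode is easy to see at the bit level (an illustrative Python sketch, not the author's exact procedure): XOR-ing two received copies flags every position where exactly one copy is corrupted, while a bit corrupted identically in both copies cancels out and escapes detection.

        def suspect_positions(copy1: int, copy2: int, width: int):
            """Bit positions where the two received copies disagree."""
            diff = copy1 ^ copy2
            return [i for i in range(width) if (diff >> i) & 1]

        packet = 0b10110100
        c1 = packet ^ 0b00000100  # bit 2 corrupted in the first copy
        c2 = packet ^ 0b00100000  # bit 5 corrupted in the second copy
        print(suspect_positions(c1, c2, 8))  # [2, 5]: candidate bits to flip

        c3 = packet ^ 0b00001000  # the same bit corrupted in both copies
        print(suspect_positions(c3, c3, 8))  # []: the error goes undetected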

  15. A full quantum network scheme

    International Nuclear Information System (INIS)

    Ma Hai-Qiang; Wei Ke-Jin; Yang Jian-Hui; Li Rui-Xue; Zhu Wu

    2014-01-01

    We present a full quantum network scheme using a modified BB84 protocol. Unlike other quantum network schemes, it allows quantum keys to be distributed between two arbitrary users with the help of an intermediary detecting user. Moreover, it has good expansibility and prevents all potential attacks using loopholes in a detector, so it is more practical to apply. Because the fiber birefringence effects are automatically compensated, the scheme is distinctly stable in principle and in experiment. The simple components for every user make our scheme easier for many applications. The experimental results demonstrate the stability and feasibility of this scheme. (general)

  16. Inferring modules from human protein interactome classes

    Directory of Open Access Journals (Sweden)

    Chaurasia Gautam

    2010-07-01

    Background The integration of protein-protein interaction networks derived from high-throughput screening approaches and complementary sources is a key topic in systems biology. Although integration of protein interaction data is conventionally performed, the effects of this procedure on the result of network analyses have not been examined yet. In particular, in order to optimize the fusion of heterogeneous interaction datasets, it is crucial to consider not only their degree of coverage and accuracy, but also their mutual dependencies and additional salient features. Results We examined this issue based on the analysis of modules detected by network clustering methods applied to both integrated and individual (disaggregated) data sources, which we call interactome classes. Due to class diversity, we deal with variable dependencies of data features arising from structural specificities and biases, but also from possible overlaps. Since highly connected regions of the human interactome may point to potential protein complexes, we have focused on the concept of modularity, and elucidated the detection power of module extraction algorithms by independent validations based on GO, MIPS and KEGG. From the combination of protein interactions with gene expressions, a confidence scoring scheme has been proposed before proceeding via GO with further classification in permanent and transient modules. Conclusions Disaggregated interactomes are shown to be informative for inferring modularity, thus contributing to perform an effective integrative analysis. Validation of the extracted modules by multiple annotation allows for the assessment of confidence measures assigned to the modules in a protein pathway context. Notably, the proposed multilayer confidence scheme can be used for network calibration by enabling a transition from unweighted to weighted interactomes based on biological evidence.

  17. Bayesian inference for spatio-temporal spike-and-slab priors

    DEFF Research Database (Denmark)

    Andersen, Michael Riis; Vehtari, Aki; Winther, Ole

    2017-01-01

    a transformed Gaussian process on the spike-and-slab probabilities. An expectation propagation (EP) algorithm for posterior inference under the proposed model is derived. For large scale problems, the standard EP algorithm can be prohibitively slow. We therefore introduce three different approximation schemes...

  18. A method of inferring k-infinity from reaction rate measurements in thermal reactor systems

    International Nuclear Information System (INIS)

    Newmarch, D.A.

    1967-05-01

    A scheme is described for inferring a value of k-infinity from reaction rate measurements. The method is devised with the METHUSELAH group structure in mind and was developed for the analysis of S.G.H.W. reactor experiments; the underlying principles, however, are general. (author)

  19. Object-Oriented Type Inference

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff; Palsberg, Jens

    1991-01-01

    We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an op...

  20. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability-techniques (like fault trees...... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability....

  1. Mixed normal inference on multicointegration

    NARCIS (Netherlands)

    Boswijk, H.P.

    2009-01-01

    Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ² null distribution. The asymptotic distribution of the

  2. Statistical inference and Aristotle's Rhetoric.

    Science.gov (United States)

    Macdonald, Ranald R

    2004-11-01

    Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.

  3. Inferring Molecular Processes Heterogeneity from Transcriptional Data.

    Science.gov (United States)

    Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna

    2017-01-01

    RNA microarrays and RNA-seq are nowadays standard technologies to study the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information referring to genes up- and downregulation is evaluated analyzing the behaviour of relatively large population of cells by averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviour in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability in specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate most important networks from transcriptomic data and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs.

  4. Modified Aggressive Packet Combining Scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2010-06-01

    In this letter, a few schemes are presented to improve the performance of the aggressive packet combining scheme (APC). To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested by recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining (PRPC, proposed by Bhunia) scheme. These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed the idea of APC for error control in wireless networks, with the basic objective of error control in uplink wireless data networks. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)

  5. Transmission usage cost allocation schemes

    International Nuclear Information System (INIS)

    Abou El Ela, A.A.; El-Sehiemy, R.A.

    2009-01-01

    This paper presents different suggested transmission usage cost allocation (TCA) schemes for allocating costs to the system individuals. Different independent system operator (ISO) visions are presented using the pro rata and flow-based TCA methods. Two flow-based TCA schemes (FTCA) are proposed. The first FTCA scheme generalizes the equivalent bilateral exchanges (EBE) concept to lossy networks through a two-stage procedure. The second FTCA scheme is based on modified sensitivity factors (MSF). These factors are developed from actual measurements of power flows in transmission lines and power injections at different buses. The proposed schemes exhibit desirable apportioning properties and are easy to implement and understand. Case studies for different loading conditions are carried out to show the capability of the proposed schemes for solving the TCA problem. (author)

  6. Statistical learning and selective inference.

    Science.gov (United States)

    Taylor, Jonathan; Tibshirani, Robert J

    2015-06-23

    We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.

  7. Bayesian inference with ecological applications

    CERN Document Server

    Link, William A

    2009-01-01

    This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject; examples drawn from ecology and wildlife research; an essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference; companion website with analyt...

  8. Statistical inference an integrated approach

    CERN Document Server

    Migon, Helio S; Louzada, Francisco

    2014-01-01

    Introduction: Information; The concept of probability; Assessing subjective probabilities; An example; Linear algebra and probability; Notation; Outline of the book. Elements of Inference: Common statistical models; Likelihood-based functions; Bayes theorem; Exchangeability; Sufficiency and exponential family; Parameter elimination. Prior Distribution: Entirely subjective specification; Specification through functional forms; Conjugacy with the exponential family; Non-informative priors; Hierarchical priors. Estimation: Introduction to decision theory; Bayesian point estimation; Classical point estimation; Empirical Bayes estimation; Comparison of estimators; Interval estimation; Estimation in the Normal model. Approximating Methods: The general problem of inference; Optimization techniques; Asymptotic theory; Other analytical approximations; Numerical integration methods; Simulation methods. Hypothesis Testing: Introduction; Classical hypothesis testing; Bayesian hypothesis testing; Hypothesis testing and confidence intervals; Asymptotic tests. Prediction...

  9. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the largest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology was developed to estimate the probability that a given party will gain representation in the Chamber of Deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
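
    The seat-distribution step can be made concrete with a divisor-method sketch of the kind used for such chambers (a D'Hondt-style allocation; party names and vote counts below are illustrative).

        def dhondt(votes, seats):
            """Give each seat to the party with the largest quotient
            votes / (seats already won + 1)."""
            won = {party: 0 for party in votes}
            for _ in range(seats):
                best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
                won[best] += 1
            return won

        print(dhondt({"A": 48000, "B": 29000, "C": 23000}, seats=10))
        # {'A': 5, 'B': 3, 'C': 2}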

  10. Causal inference based on counterfactuals

    Directory of Open Access Journals (Sweden)

    Höfler M

    2005-09-01

    Background The counterfactual or potential outcome model has become increasingly standard for causal inference in epidemiological and medical studies. Discussion This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues when estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in health sciences and relates to many statistical procedures. Summary Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept.

  11. System Support for Forensic Inference

    Science.gov (United States)

    Gehani, Ashish; Kirchner, Florent; Shankar, Natarajan

    Digital evidence is playing an increasingly important role in prosecuting crimes. The reasons are manifold: financially lucrative targets are now connected online, systems are so complex that vulnerabilities abound and strong digital identities are being adopted, making audit trails more useful. If the discoveries of forensic analysts are to hold up to scrutiny in court, they must meet the standard for scientific evidence. Software systems are currently developed without consideration of this fact. This paper argues for the development of a formal framework for constructing “digital artifacts” that can serve as proxies for physical evidence; a system so imbued would facilitate sound digital forensic inference. A case study involving a filesystem augmentation that provides transparent support for forensic inference is described.

  12. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre C. R. Martins

    2006-11-01

    In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as it has been observed.

  13. Statistical inference on residual life

    CERN Document Server

    Jeong, Jong-Hyeon

    2014-01-01

    This is a monograph on the concept of residual life, which is an alternative summary measure of time-to-event data, or survival data. The mean residual life has been used for many years under the name of life expectancy, so it is a natural concept for summarizing survival or reliability data. It is also more interpretable than the popular hazard function, especially for communications between patients and physicians regarding the efficacy of a new drug in the medical field. This book reviews existing statistical methods to infer the residual life distribution. The review and comparison includes existing inference methods for mean and median, or quantile, residual life analysis through medical data examples. The concept of the residual life is also extended to competing risks analysis. The targeted audience includes biostatisticians, graduate students, and PhD (bio)statisticians. Knowledge in survival analysis at an introductory graduate level is advisable prior to reading this book.

  14. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  15. Statistical inference a short course

    CERN Document Server

    Panik, Michael J

    2012-01-01

    A concise, easily accessible introduction to descriptive and inferential techniques Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumption of randomness and normality, provides nonparametric methods when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal

  16. On Quantum Statistical Inference, II

    OpenAIRE

    Barndorff-Nielsen, O. E.; Gill, R. D.; Jupp, P. E.

    2003-01-01

    Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, theoretical developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and inte...

  17. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

    We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on the introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere.
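
To give a flavour of NPI, here is a minimal sketch of lower and upper survival probabilities for a future observation under Hill's A(n) assumption. It assumes n fully observed (uncensored) lifetimes, so it omits the paper's treatment of right-censoring and competing risks; the data are hypothetical.

```python
def npi_survival_bounds(data, t):
    """Lower and upper probabilities that a future lifetime exceeds t,
    under Hill's A(n): each of the n+1 intervals between (and beyond)
    the n ordered observations carries probability 1/(n+1)."""
    n = len(data)
    exceed = sum(x > t for x in data)   # observations strictly beyond t
    lower = exceed / (n + 1)            # mass guaranteed to lie past t
    upper = (exceed + 1) / (n + 1)      # one interval's mass is unassigned
    return lower, upper

lifetimes = [12.0, 35.0, 48.0, 70.0, 103.0]    # hypothetical failure times
print(npi_survival_bounds(lifetimes, 50.0))    # -> (0.333..., 0.5)
```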

  18. The confounding effect of population structure on bayesian skyline plot inferences of demographic history

    DEFF Research Database (Denmark)

    Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans

    2013-01-01

    Many coalescent-based methods aiming to infer the demographic history of populations assume a single, isolated and panmictic population (i.e. a Wright-Fisher model). While this assumption may be reasonable under many conditions, several recent studies have shown that the results can be misleading...... when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences...... the best scheme for inferring demographic change over a typical time scale. Analyses of data from a structured African buffalo population demonstrate how BSP results can be strengthened by simulations. We recommend that sample selection should be carefully considered in relation to population structure...

  19. Variational inference & deep learning : A new synthesis

    NARCIS (Netherlands)

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  20. Variational inference & deep learning: A new synthesis

    OpenAIRE

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  1. Continuous Integrated Invariant Inference, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...

  2. Variations on Bayesian Prediction and Inference

    Science.gov (United States)

    2016-05-09

    There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.

  3. Adaptive Inference on General Graphical Models

    OpenAIRE

    Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur

    2012-01-01

    Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...

  4. Coordinated renewable energy support schemes

    DEFF Research Database (Denmark)

    Morthorst, P.E.; Jensen, S.G.

    2006-01-01

    The first example covers countries with regional power markets that also regionalise their support schemes; the second covers countries with separate national power markets that regionalise their support schemes. The main findings indicate that an almost ideal situation exists if the region prior to regionalising...

  5. CANONICAL BACKWARD DIFFERENTIATION SCHEMES FOR ...

    African Journals Online (AJOL)

    This paper describes new nonlinear backward differentiation schemes for the numerical solution of nonlinear initial value problems of first order ordinary differential equations. The schemes are based on rational interpolation obtained from canonical polynomials. They are A-stable. The test problems show that they give ...
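
As background, the sketch below shows the simplest classical backward differentiation step (implicit Euler) with a Newton solve, which illustrates the A-stable behaviour on stiff problems that the paper's rational-interpolation schemes also target; it is not the paper's scheme, and the test problem is illustrative.

```python
def implicit_euler(f, dfdy, y0, t0, t1, n_steps):
    """First-order backward differentiation: solve the implicit relation
    y_{k+1} = y_k + h*f(t_{k+1}, y_{k+1}) at each step by Newton iteration."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        t += h
        z = y                                    # Newton initial guess
        for _ in range(20):
            residual = z - y - h * f(t, z)
            z -= residual / (1.0 - h * dfdy(t, z))
        y = z
    return y

# Stiff test problem y' = -50*y with y(0) = 1; the implicit scheme stays
# stable even at step sizes where explicit Euler would blow up.
print(implicit_euler(lambda t, y: -50.0 * y, lambda t, y: -50.0, 1.0, 0.0, 1.0, 20))
```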

  6. Children's schemes for anticipating the validity of nets for solids

    Science.gov (United States)

    Wright, Vince; Smith, Ken

    2017-09-01

    There is growing acknowledgement of the importance of spatial abilities to student achievement across a broad range of domains and disciplines. Nets are one way to connect three-dimensional shapes and their two-dimensional representations and are a common focus of geometry curricula. Thirty-four students at year 6 (upper primary school) were interviewed on two occasions about their anticipation of whether or not given nets for the cube and the square-based pyramid would fold to form the target solid. Vergnaud's (Journal of Mathematical Behavior, 17(2), 167-181, 1998; Human Development, 52, 83-94, 2009) four characteristics of schemes were used as a theoretical lens to analyse the data. Successful schemes depended on the interaction of operational invariants, such as strategic choice of the base, rules for action, particularly rotation of shapes, and anticipations of composites of polygons in the net forming arrangements of faces in the solid. Inferences were rare. These data suggest that students need teacher support to make inferences, in order to create transferable schemes.

  7. A Self-adaptive Scope Allocation Scheme for Labeling Dynamic XML Documents

    NARCIS (Netherlands)

    Shen, Y.; Feng, L.; Shen, T.; Wang, B.

    This paper proposes a self-adaptive scope allocation scheme for labeling dynamic XML documents. It is general, light-weight and can be built upon existing data retrieval mechanisms. Bayesian inference is used to compute the actual scope allocated for labeling a certain node based on both the prior

  8. More than one kind of inference: re-examining what's learned in feature inference and classification.

    Science.gov (United States)

    Sweller, Naomi; Hayes, Brett K

    2010-08-01

    Three studies examined how task demands that impact on attention to typical or atypical category features shape the category representations formed through classification learning and inference learning. During training categories were learned via exemplar classification or by inferring missing exemplar features. In the latter condition inferences were made about missing typical features alone (typical feature inference) or about both missing typical and atypical features (mixed feature inference). Classification and mixed feature inference led to the incorporation of typical and atypical features into category representations, with both kinds of features influencing inferences about familiar (Experiments 1 and 2) and novel (Experiment 3) test items. Those in the typical inference condition focused primarily on typical features. Together with formal modelling, these results challenge previous accounts that have characterized inference learning as producing a focus on typical category features. The results show that two different kinds of inference learning are possible and that these are subserved by different kinds of category representations.

  9. Generative inference for cultural evolution.

    Science.gov (United States)

    Kandler, Anne; Powell, Adam

    2018-04-05

    One of the major challenges in cultural evolution is to understand why and how various forms of social learning are used in human populations, both now and in the past. To date, much of the theoretical work on social learning has been done in isolation of data, and consequently many insights focus on revealing the learning processes or the distributions of cultural variants that are expected to have evolved in human populations. In population genetics, recent methodological advances have allowed a greater understanding of the explicit demographic and/or selection mechanisms that underlie observed allele frequency distributions across the globe, and their change through time. In particular, generative frameworks-often using coalescent-based simulation coupled with approximate Bayesian computation (ABC)-have provided robust inferences on the human past, with no reliance on a priori assumptions of equilibrium. Here, we demonstrate the applicability and utility of generative inference approaches to the field of cultural evolution. The framework advocated here uses observed population-level frequency data directly to establish the likely presence or absence of particular hypothesized learning strategies. In this context, we discuss the problem of equifinality and argue that, in the light of sparse cultural data and the multiplicity of possible social learning processes, the exclusion of those processes inconsistent with the observed data might be the most instructive outcome. Finally, we summarize the findings of generative inference approaches applied to a number of case studies.This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
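
A minimal sketch of the generative-inference loop described above, using plain ABC rejection on a toy copying model (the model, summary statistic, and tolerance are illustrative assumptions, not the authors' simulations):

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, eps, n_draws):
    """Generic ABC rejection: keep parameter draws whose simulated summary
    statistic lands within eps of the observed one."""
    return [theta for theta in (prior_sample() for _ in range(n_draws))
            if abs(simulate(theta) - observed_stat) < eps]

def simulate(theta, n_events=100):
    """Toy transmission model: theta is the probability of copying the
    majority variant; the summary statistic is its final frequency."""
    freq = 0.5
    for _ in range(n_events):
        freq += 0.005 if random.random() < theta else -0.005
    return min(max(freq, 0.0), 1.0)

posterior = abc_rejection(0.8, simulate, random.random, eps=0.05, n_draws=20_000)
print(f"{len(posterior)} draws accepted; posterior mean ~ "
      f"{sum(posterior) / len(posterior):.2f}")
```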

  10. sick: The Spectroscopic Inference Crank

    Science.gov (United States)

    Casey, Andrew R.

    2016-03-01

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
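
The kind of generative model and noise-inflated Gaussian likelihood described here can be sketched in a few lines. This is a toy illustration, not sick's actual API: grid_interp stands in for interpolation over a model grid, and the parameter names are assumptions.

```python
import numpy as np

def forward_model(wavelength, theta, grid_interp):
    """Toy spectrum model: interpolated intensities, Doppler-shifted and
    multiplied by a linear continuum (both treated as free parameters)."""
    shifted = wavelength * (1.0 + theta["v"] / 299_792.458)   # velocity in km/s
    continuum = theta["c0"] + theta["c1"] * (wavelength - wavelength.mean())
    return grid_interp(shifted) * continuum

def log_likelihood(theta, wavelength, flux, flux_err, grid_interp):
    """Gaussian likelihood with an extra term for systematically
    underestimated variance, as in the noise model described above."""
    model = forward_model(wavelength, theta, grid_interp)
    var = flux_err**2 + np.exp(2.0 * theta["ln_f"]) * model**2
    return -0.5 * np.sum((flux - model) ** 2 / var + np.log(2 * np.pi * var))
```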

  11. Inferring network structure from cascades

    Science.gov (United States)

    Ghonge, Sushrut; Vural, Dervis Can

    2017-07-01

    Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.
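
As a crude illustration of the task (not the paper's estimators), one can propose an edge i -> j whenever j tends to activate shortly after i across cascades; the window and threshold below are arbitrary assumptions.

```python
from itertools import permutations

def infer_edges(cascades, window=1.0, threshold=0.75):
    """Each cascade maps node -> activation time. Propose an edge i -> j
    if, in most cascades where both fire, j follows i within the window."""
    nodes = {n for c in cascades for n in c}
    edges = set()
    for i, j in permutations(nodes, 2):
        both = [c for c in cascades if i in c and j in c]
        hits = sum(1 for c in both if 0.0 < c[j] - c[i] <= window)
        if both and hits / len(both) >= threshold:
            edges.add((i, j))
    return edges

cascades = [{"a": 0.0, "b": 0.4, "c": 0.9},
            {"a": 0.0, "b": 0.5, "c": 1.1},
            {"b": 0.0, "c": 0.6}]
print(infer_edges(cascades))   # -> {('a', 'b'), ('b', 'c')}
```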

  12. SICK: THE SPECTROSCOPIC INFERENCE CRANK

    Energy Technology Data Exchange (ETDEWEB)

    Casey, Andrew R., E-mail: arc@ast.cam.ac.uk [Institute of Astronomy, University of Cambridge, Madingley Road, Cambdridge, CB3 0HA (United Kingdom)

    2016-03-15

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal

  13. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
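
The first approach rests on the conditional intensity function; for the common exponential kernel it yields the closed-form log-likelihood sketched below (a standard textbook formula with illustrative parameter values, not code from the paper):

```python
import math

def hawkes_loglik(times, T, mu, alpha, beta):
    """Log-likelihood of event times on [0, T] for a Hawkes process with
    conditional intensity lambda(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i)).
    The recursion keeps the evaluation O(n)."""
    loglik, decay, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            decay = (decay + 1.0) * math.exp(-beta * (t - prev))
        loglik += math.log(mu + alpha * decay)
        prev = t
    compensator = mu * T + (alpha / beta) * sum(
        1.0 - math.exp(-beta * (T - t)) for t in times)
    return loglik - compensator

print(hawkes_loglik([0.5, 1.2, 1.3, 4.0], T=5.0, mu=0.5, alpha=0.8, beta=2.0))
```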

  14. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....

  15. Inference in hybrid Bayesian networks

    International Nuclear Information System (INIS)

    Langseth, Helge; Nielsen, Thomas D.; Rumi, Rafael; Salmeron, Antonio

    2009-01-01

    Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (the so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  16. SICK: THE SPECTROSCOPIC INFERENCE CRANK

    International Nuclear Information System (INIS)

    Casey, Andrew R.

    2016-01-01

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal

  17. Hybrid modulation scheme for cascaded H-bridge inverter cells

    African Journals Online (AJOL)

    Odeh, C.I.

    Verification of the control technique is done through simulations and experiments ... and OR operations. Referring to ... MATLAB/SIMULINK environment.

  18. Subjective randomness as statistical inference.

    Science.gov (United States)

    Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B

    2018-06-01

    Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
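
Restricted to the bias-only case (it ignores alternation patterns), the account can be sketched as a log-odds of a fair random process against a coin of unknown bias; this is an illustrative simplification of the models above, not the authors' full model.

```python
from math import comb, log2

def randomness_score(seq):
    """Evidence (in bits) that a binary sequence came from a fair random
    process rather than a 'regular' biased coin with the bias integrated
    over a uniform prior: log2 P(x|random) - log2 P(x|regular)."""
    n, h = len(seq), sum(seq)
    log_p_random = -n                              # log2 of (1/2)**n
    log_p_regular = -log2((n + 1) * comb(n, h))    # Beta-binomial marginal
    return log_p_random - log_p_regular

print(randomness_score([1, 1, 1, 1, 1, 1, 1, 1]))  # eight heads: strongly negative
print(randomness_score([1, 0, 0, 1, 0, 1, 1, 0]))  # mixed sequence: positive
```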

  19. Good governance for pension schemes

    CERN Document Server

    Thornton, Paul

    2011-01-01

    Regulatory and market developments have transformed the way in which UK private sector pension schemes operate. This has increased demands on trustees and advisors and the trusteeship governance model must evolve in order to remain fit for purpose. This volume brings together leading practitioners to provide an overview of what today constitutes good governance for pension schemes, from both a legal and a practical perspective. It provides the reader with an appreciation of the distinctive characteristics of UK occupational pension schemes, how they sit within the capital markets and their social and fiduciary responsibilities. Providing a holistic analysis of pension risk, both from the trustee and the corporate perspective, the essays cover the crucial role of the employer covenant, financing and investment risk, developments in longevity risk hedging and insurance de-risking, and best practice scheme administration.

  20. Optimum RA reactor fuelling scheme

    International Nuclear Information System (INIS)

    Strugar, P.; Nikolic, V.

    1965-10-01

    An ideal reactor refueling scheme can be achieved only by continuous movement of fuel elements in the core, which is not possible, and thus approximations are applied. One of the possible approximations is discontinuous movement of fuel element groups in the radial direction. This enables higher burnup, especially if axial exchange is possible. Analysis of refueling schemes in the RA reactor core and of schemes with mixing of fresh and used fuel elements shows that 30% higher burnup can be achieved by applying mixing, and even 40% if the reactivity gain due to the decrease in experimental space is taken into account. Up to now, a mean burnup of 4400 MWd/t has been achieved, and the proposed fueling scheme with reduction of experimental space could achieve a mean burnup of 6300 MWd/t, which means about 25 MWd/t per fuel channel.

  1. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noises, including the pupil, sclera, eyelashes, and eyebrows of a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and to extract a more precise iris area from an eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
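
The circle-fitting step has a simple linear least-squares form (Kåsa-style), sketched below with synthetic points; this is a generic version of the fit, not the paper's full pipeline.

```python
import numpy as np

def kasa_circle_fit(points):
    """Kasa/Delogne-Kasa style fit: solve the linear least-squares system
    for D, E, F in x^2 + y^2 + D*x + E*y + F = 0, then read off the
    centre (-D/2, -E/2) and radius sqrt(D^2/4 + E^2/4 - F)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    (D, E, F), *_ = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([3 + 5 * np.cos(t), -1 + 5 * np.sin(t)])
pts += rng.normal(0.0, 0.05, pts.shape)
print(kasa_circle_fit(pts))   # centre near (3, -1), radius near 5
```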

  2. Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes

    OpenAIRE

    Corradi, Valentina; Swanson, Norman R.

    2005-01-01

    Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibl...

  3. Numerical schemes for explosion hazards

    International Nuclear Information System (INIS)

    Therme, Nicolas

    2015-01-01

    In nuclear facilities, internal or external explosions can cause confinement breaches and the release of radioactive materials into the environment. Hence, modeling such phenomena is crucial for safety matters. Blast waves resulting from explosions are modeled by the system of Euler equations for compressible flows, whereas Navier-Stokes equations with reactive source terms and level-set techniques are used to simulate the propagation of the flame front during the deflagration phase. The purpose of this thesis is to contribute to the creation of efficient numerical schemes to solve these complex models. The work presented here focuses on two major aspects: first, the development of consistent schemes for the Euler equations, then the buildup of reliable schemes for the front propagation. In both cases, explicit-in-time schemes are used, but we also introduce a pressure correction scheme for the Euler equations. Staggered discretization is used in space. It is based on the internal energy formulation of the Euler system, which ensures its positivity and avoids tedious discretization of the total energy over staggered grids. A discrete kinetic energy balance is derived from the scheme and a source term is added in the discrete internal energy balance equation to preserve the exact total energy balance at the limit. High-order methods of MUSCL type are used in the discrete convective operators, based solely on material velocity. They lead to positivity of density and internal energy under CFL conditions. This ensures that the total energy cannot grow, and we can furthermore derive a discrete entropy inequality. Under stability assumptions on the discrete L∞ and BV norms of the scheme's solutions, one can prove that a sequence of converging discrete solutions necessarily converges towards the weak solution of the Euler system. Besides, it satisfies a weak entropy inequality at the limit. Concerning the front propagation, we transform the flame front evolution equation (the so called
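
To illustrate one ingredient named above, the sketch below shows a MUSCL-type reconstruction with the minmod limiter, which is what keeps the reconstructed states monotone (and hence helps positivity); it is a generic building block, not the thesis's staggered scheme.

```python
def minmod(a, b):
    """Pick the smaller-magnitude slope when both candidates agree in sign,
    zero otherwise; this keeps the reconstruction free of new extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_interface_states(u, dx):
    """Second-order MUSCL reconstruction of (left, right) states at each
    interior interface from cell averages u."""
    n = len(u)
    slopes = [0.0] + [minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
                      for i in range(1, n - 1)] + [0.0]
    return [(u[i] + 0.5 * dx * slopes[i], u[i + 1] - 0.5 * dx * slopes[i + 1])
            for i in range(n - 1)]

print(muscl_interface_states([1.0, 1.0, 2.0, 4.0, 4.0], dx=1.0))
```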

  4. Breeding schemes in reindeer husbandry

    Directory of Open Access Journals (Sweden)

    Lars Rönnegård

    2003-04-01

    Full Text Available The objective of the paper was to investigate annual genetic gain from selection (G), and the influence of selection on the inbreeding effective population size (Ne), for different possible breeding schemes within a reindeer herding district. The breeding schemes were analysed for different proportions of the population within a herding district included in the selection programme. Two different breeding schemes were analysed: an open nucleus scheme where males mix and mate between owner flocks, and a closed nucleus scheme where the males in non-selected owner flocks are culled to maximise G in the whole population. The theory of expected long-term genetic contributions was used and maternal effects were included in the analyses. Realistic parameter values were used for the population, modelled with 5000 reindeer in the population and a sex ratio of 14 adult females per male. The standard deviation of calf weights was 4.1 kg. Four different situations were explored and the results showed: 1. When the population was randomly culled, Ne equalled 2400. 2. When the whole population was selected on calf weights, Ne equalled 1700 and the total annual genetic gain (direct + maternal) in calf weight was 0.42 kg. 3. For the open nucleus scheme, G increased monotonically from 0 to 0.42 kg as the proportion of the population included in the selection programme increased from 0 to 1.0, and Ne decreased correspondingly from 2400 to 1700. 4. In the closed nucleus scheme the lowest value of Ne was 1300. For a given proportion of the population included in the selection programme, the difference in G between a closed nucleus scheme and an open one was up to 0.13 kg. We conclude that for mass selection based on calf weights in herding districts with 2000 animals or more, there are no risks of inbreeding effects caused by selection.

  5. Lower complexity bounds for lifted inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2015-01-01

    instances of the model. Numerous approaches for such “lifted inference” techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show...... the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established earlier in Jaeger (2000; Jaeger, M. 2000. On the complexity of inference about...... that under the assumption that NETIME≠ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier-, and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference and for knowledge bases not containing...

  6. Statistical inference for financial engineering

    CERN Document Server

    Taniguchi, Masanobu; Ogata, Hiroaki; Taniai, Hiroyuki

    2014-01-01

    This monograph provides the fundamentals of statistical inference for financial engineering and covers some selected methods suitable for analyzing financial time series data. In order to describe the actual financial data, various stochastic processes, e.g. non-Gaussian linear processes, non-linear processes, long-memory processes, locally stationary processes etc. are introduced and their optimal estimation is considered as well. This book also includes several statistical approaches, e.g., discriminant analysis, the empirical likelihood method, control variate method, quantile regression, realized volatility etc., which have been recently developed and are considered to be powerful tools for analyzing the financial data, establishing a new bridge between time series and financial engineering. This book is well suited as a professional reference book on finance, statistics and statistical financial engineering. Readers are expected to have an undergraduate-level knowledge of statistics.

  7. Type inference for correspondence types

    DEFF Research Database (Denmark)

    Hüttel, Hans; Gordon, Andy; Hansen, Rene Rydhof

    2009-01-01

    We present a correspondence type/effect system for authenticity in a π-calculus with polarized channels, dependent pair types and effect terms and show how one may, given a process P and an a priori type environment E, generate constraints that are formulae in the Alternating Least Fixed......-Point (ALFP) logic. We then show how a reasonable model of the generated constraints yields a type/effect assignment such that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only...... if it has a finite model. As a consequence, we obtain the result that type/effect inference in our system is polynomial-time decidable....

  8. Causal inference in public health.

    Science.gov (United States)

    Glass, Thomas A; Goodman, Steven N; Hernán, Miguel A; Samet, Jonathan M

    2013-01-01

    Causal inference has a central role in public health; the determination that an association is causal indicates the possibility for intervention. We review and comment on the long-used guidelines for interpreting evidence as supporting a causal association and contrast them with the potential outcomes framework that encourages thinking in terms of causes that are interventions. We argue that in public health this framework is more suitable, providing an estimate of an action's consequences rather than the less precise notion of a risk factor's causal effect. A variety of modern statistical methods adopt this approach. When an intervention cannot be specified, causal relations can still exist, but how to intervene to change the outcome will be unclear. In application, the often-complex structure of causal processes needs to be acknowledged and appropriate data collected to study them. These newer approaches need to be brought to bear on the increasingly complex public health challenges of our globalized world.

  9. Inference Attacks and Control on Database Structures

    Directory of Open Access Journals (Sweden)

    Muhamed Turkanovic

    2015-02-01

    Full Text Available Today’s databases store information with sensitivity levels that range from public to highly sensitive; hence, ensuring confidentiality can be highly important, but it also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy in relation to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since these models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.

  10. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein M.; Alouini, Mohamed-Slim

    2012-01-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.
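
The core of the threshold-based, ordered mechanism can be sketched in a few lines (the threshold value and the fallback rule are illustrative assumptions; choosing the thresholds optimally is the subject of the paper):

```python
def switched_diversity_schedule(snrs, threshold):
    """Probe users in a fixed order and schedule the first whose channel
    quality passes the threshold; only probed users send feedback, which
    is where the scheme saves channel-state-information messages."""
    best = 0
    for idx, snr in enumerate(snrs):
        if snr >= threshold:
            return idx, idx + 1          # (scheduled user, feedback messages)
        if snr > snrs[best]:
            best = idx
    return best, len(snrs)               # nobody passed: best user, full feedback

print(switched_diversity_schedule([3.1, 7.4, 9.8, 2.2], threshold=7.0))  # (1, 2)
```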

  11. Nonlinear secret image sharing scheme.

    Science.gov (United States)

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed by using Shamir's technique, which is based on linear combination polynomial arithmetic. Although Shamir's technique based secret image sharing schemes are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to solve this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average values of PSNR and embedding capacity are 44.78 (dB) and 1.74t⌈log2 m⌉ bits-per-pixel (bpp), respectively.
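
For reference, the linear Shamir baseline that the paper starts from can be sketched as follows (the nonlinear variant and the LSB/XOR steganographic embedding are not reproduced here; the field size is an illustrative choice for byte-valued secrets):

```python
import random

PRIME = 257  # small prime field, enough for one byte per share

def make_shares(secret, t, n):
    """Classical (t, n)-threshold Shamir sharing: hide the secret in the
    constant term of a random degree-(t-1) polynomial over GF(PRIME)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123
```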

  12. Multiuser switched diversity scheduling schemes

    KAUST Repository

    Shaqfeh, Mohammad

    2012-09-01

    Multiuser switched-diversity scheduling schemes were recently proposed in order to overcome the heavy feedback requirements of conventional opportunistic scheduling schemes by applying a threshold-based, distributed, and ordered scheduling mechanism. The main idea behind these schemes is that slight reduction in the prospected multiuser diversity gains is an acceptable trade-off for great savings in terms of required channel-state-information feedback messages. In this work, we characterize the achievable rate region of multiuser switched diversity systems and compare it with the rate region of full feedback multiuser diversity systems. We propose also a novel proportional fair multiuser switched-based scheduling scheme and we demonstrate that it can be optimized using a practical and distributed method to obtain the feedback thresholds. We finally demonstrate by numerical examples that switched-diversity scheduling schemes operate within 0.3 bits/sec/Hz from the ultimate network capacity of full feedback systems in Rayleigh fading conditions. © 2012 IEEE.

  13. Simple simulation of diffusion bridges with application to likelihood inference for diffusions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Sørensen, Michael

    2014-01-01

    the accuracy and efficiency of the approximate method and compare it to exact simulation methods. In the study, our method provides a very good approximation to the distribution of a diffusion bridge for bridges that are likely to occur in applications to statistical inference. To illustrate the usefulness......With a view to statistical inference for discretely observed diffusion models, we propose simple methods of simulating diffusion bridges, approximately and exactly. Diffusion bridge simulation plays a fundamental role in likelihood and Bayesian inference for diffusion processes. First a simple......-dimensional diffusions and is applicable to all one-dimensional diffusion processes with finite speed-measure. One advantage of the new approach is that simple simulation methods like the Milstein scheme can be applied to bridge simulation. Another advantage over previous bridge simulation methods is that the proposed...

  14. Phylogenetic relationships of Hemiptera inferred from mitochondrial and nuclear genes.

    Science.gov (United States)

    Song, Nan; Li, Hu; Cai, Wanzhi; Yan, Fengming; Wang, Jianyun; Song, Fan

    2016-11-01

    Here, we reconstructed the Hemiptera phylogeny based on the expanded mitochondrial protein-coding genes and the nuclear 18S rRNA gene, separately. Differential rates of change across lineages may be associated with long-branch attraction (LBA) effects and result in conflicting estimates of phylogeny from different types of data. To reduce the potential effects of systematic biases on inferences of topology, various data coding schemes, a site removal method, and different algorithms were utilized in phylogenetic reconstruction. We show that the outgroups Phthiraptera and Thysanoptera and the ingroup Sternorrhyncha share similar base composition, and exhibit "long branches" relative to other hemipterans. Thus, long-branch attraction between these groups is suspected to cause the failure to recover Hemiptera under the homogeneous model. In contrast, a monophyletic Hemiptera is supported when a heterogeneous model is utilized in the analysis. Although higher-level phylogenetic relationships within Hemiptera remain to be answered, consensus between analyses is beginning to converge on a stable phylogeny.

  15. The Multivariate Generalised von Mises Distribution: Inference and Applications

    DEFF Research Database (Denmark)

    Navarro, Alexandre Khae Wu; Frellsen, Jes; Turner, Richard

    2017-01-01

    Circular variables arise in a multitude of data-modelling contexts ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the circular domain...... These models can leverage standard modelling tools (e.g. kernel functions and automatic relevance determination). Third, we show that the posterior distribution in these models is a mGvM distribution, which enables development of an efficient variational free-energy scheme for performing approximate inference and approximate maximum-likelihood learning.

  16. Adaptive neuro-fuzzy inference system based automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, S.H.; Etemadi, A.H. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2008-07-15

    Fixed gain controllers for automatic generation control are designed at nominal operating conditions and fail to provide the best control performance over a wide range of operating conditions. So, to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters to compute control gains. A control scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is trained by the results of off-line studies obtained using particle swarm optimization, is proposed in this paper to optimize and update control gains in real-time according to load variations. Also, frequency relaxation is implemented using ANFIS. The efficiency of the proposed method is demonstrated via simulations. Compliance of the proposed method with the NERC control performance standard is verified. (author)

  17. LAIT: a local ancestry inference toolkit.

    Science.gov (United States)

    Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei

    2017-09-06

    Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing software packages require specific formatted input files and generate output files in various types, yielding practical inconvenience. We developed a tool set, Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats as well as standardize and summarize inference results for four popular local ancestry inference software: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that LAIT provides convenience to run multiple local ancestry inference software. In addition, we evaluated the performance of local ancestry software among different supported software packages, mainly focusing on inference accuracy and computational resources used. We provided a toolkit to facilitate the use of local ancestry inference software, especially for users with limited bioinformatics background.

  18. Forward and backward inference in spatial cognition.

    Directory of Open Access Journals (Sweden)

    Will D Penny

    Full Text Available This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
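
The 'forward inference' used for localization is essentially a Bayesian filter: propagate the belief through the motion model (path integration), then weight by the sensory likelihood. A minimal discrete sketch follows; the corridor, transition matrix, and likelihoods are invented for illustration.

```python
def forward_step(belief, transition, likelihood):
    """One step of forward inference over discrete locations: predict via
    the motion model, weight by the sensory likelihood, renormalise."""
    n = len(belief)
    predicted = [sum(belief[j] * transition[j][i] for j in range(n))
                 for i in range(n)]
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

transition = [[0.1, 0.8, 0.1],   # agent mostly moves right along a
              [0.1, 0.1, 0.8],   # three-location corridor that wraps
              [0.8, 0.1, 0.1]]
belief = [1.0, 0.0, 0.0]         # path integration says: at location 0
sensory = [0.05, 0.90, 0.05]     # the senses say: probably location 1
print(forward_step(belief, transition, sensory))
```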

  19. Generative Inferences Based on Learned Relations

    Science.gov (United States)

    Chen, Dawn; Lu, Hongjing; Holyoak, Keith J.

    2017-01-01

    A key property of relational representations is their "generativity": From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from…

  20. Inference in models with adaptive learning

    NARCIS (Netherlands)

    Chevillon, G.; Massmann, M.; Mavroeidis, S.

    2010-01-01

    Identification of structural parameters in models with adaptive learning can be weak, causing standard inference procedures to become unreliable. Learning also induces persistent dynamics, and this makes the distribution of estimators and test statistics non-standard. Valid inference can be

  1. Fiducial inference - A Neyman-Pearson interpretation

    NARCIS (Netherlands)

    Salome, D; VonderLinden, W; Dose,; Fischer, R; Preuss, R

    1999-01-01

    Fisher's fiducial argument is a tool for deriving inferences in the form of a probability distribution on the parameter space, not based on Bayes's Theorem. Lindley established that in exceptional situations fiducial inferences coincide with posterior distributions; in the other situations fiducial

  2. Uncertainty in prediction and in inference

    NARCIS (Netherlands)

    Hilgevoord, J.; Uffink, J.

    1991-01-01

    The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in

  3. Causal inference in economics and marketing.

    Science.gov (United States)

    Varian, Hal R

    2016-07-05

    This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual-a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
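
A minimal sketch of that critical step, with synthetic data and a plain linear outcome model (any machine learning regressor could stand in for it): fit on untreated units only, then predict the counterfactual outcomes of the treated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_effect = 500, 2.0
x = rng.normal(size=(n, 2))                     # pre-treatment covariates
treated = rng.random(n) < 0.4                   # randomly assigned treatment
y = 1.5 + x @ np.array([0.8, -0.5]) + true_effect * treated \
    + rng.normal(0.0, 0.3, n)

# Outcome model fitted on untreated units only.
X0 = np.column_stack([np.ones((~treated).sum()), x[~treated]])
beta, *_ = np.linalg.lstsq(X0, y[~treated], rcond=None)

# Counterfactual: what treated units would have looked like untreated.
X1 = np.column_stack([np.ones(treated.sum()), x[treated]])
effect = (y[treated] - X1 @ beta).mean()
print(f"estimated effect on the treated: {effect:.2f} (truth: {true_effect})")
```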

  4. Nonparametric predictive inference in statistical process control

    NARCIS (Netherlands)

    Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.

    2000-01-01

    New methods for statistical process control are presented, where the inferences have a nonparametric predictive nature. We consider several problems in process control in terms of uncertainties about future observable random quantities, and we develop inferences for these random quantities based on

  5. The Impact of Disablers on Predictive Inference

    Science.gov (United States)

    Cummins, Denise Dellarosa

    2014-01-01

    People consider alternative causes when deciding whether a cause is responsible for an effect (diagnostic inference) but appear to neglect them when deciding whether an effect will occur (predictive inference). Five experiments were conducted to test a 2-part explanation of this phenomenon: namely, (a) that people interpret standard predictive…

  6. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark

    2006-01-01

    We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...

  7. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...

  8. Electrical Injection Schemes for Nanolasers

    DEFF Research Database (Denmark)

    Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten

    2014-01-01

    Three electrical injection schemes based on recently demonstrated electrically pumped photonic crystal nanolasers have been numerically investigated: 1) a vertical p-i-n junction through a post structure; 2) a lateral p-i-n junction with a homostructure; and 3) a lateral p-i-n junction....... For this analysis, the properties of different schemes, i.e., electrical resistance, threshold voltage, threshold current, and internal efficiency as energy requirements for optical interconnects are compared and the physics behind the differences is discussed....

  9. Signal multiplexing scheme for LINAC

    International Nuclear Information System (INIS)

    Sujo, C.I.; Mohan, Shyam; Joshi, Gopal; Singh, S.K.; Karande, Jitendra

    2004-01-01

    For the proper operation of the LINAC, some signals, RF (radio frequency) as well as LF (low frequency), have to be available at the Master Control Station (MCS). These signals are needed to control, calibrate and characterize the RF fields in the resonators. This can be achieved by proper multiplexing of various signals locally and then routing the selected signals to the MCS. A multiplexing scheme has been designed and implemented which allows the signals from the selected cavity to reach the MCS. High isolation between channels and low insertion loss for a given signal are important issues when selecting the multiplexing scheme. (author)

  10. Capacity-achieving CPM schemes

    OpenAIRE

    Perotti, Alberto; Tarable, Alberto; Benedetto, Sergio; Montorsi, Guido

    2008-01-01

    The pragmatic approach to coded continuous-phase modulation (CPM) is proposed as a capacity-achieving low-complexity alternative to the serially-concatenated CPM (SC-CPM) coding scheme. In this paper, we first perform a selection of the best spectrally-efficient CPM modulations to be embedded into SC-CPM schemes. Then, we consider the pragmatic capacity (a.k.a. BICM capacity) of CPM modulations and optimize it through a careful design of the mapping between input bits and CPM waveforms. The s...

  11. BayesCLUMPY: BAYESIAN INFERENCE WITH CLUMPY DUSTY TORUS MODELS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Ramos Almeida, C.

    2009-01-01

    Our aim is to present a fast and general Bayesian inference framework based on the synergy between machine learning techniques and standard sampling methods, and to apply it to infer the physical properties of clumpy dusty tori using infrared photometric high-spatial-resolution observations of active galactic nuclei. We make use of the Metropolis-Hastings Markov Chain Monte Carlo algorithm for sampling the posterior distribution function. Such a distribution results from combining all a priori knowledge about the parameters of the model with the information introduced by the observations. The main difficulty resides in the fact that the model used to explain the observations is computationally demanding and the sampling is very time consuming. For this reason, we apply a set of artificial neural networks that are used to approximate and interpolate a database of models. As a consequence, models not present in the original database can be computed, ensuring continuity. We focus on the application of this solution scheme to the recently developed public database of clumpy dusty torus models. The machine learning scheme used in this paper allows us to generate any model from the database using only a factor of 10^-4 of the original size of the database and a factor of 10^-3 in computing time. The posterior distribution obtained for each model parameter allows us to investigate how the observations constrain the parameters and which ones remain partially or completely undetermined, providing statistically relevant confidence intervals. As an example, the application to the nuclear region of Centaurus A shows that the optical depth of the clouds, the total number of clouds, and the radial extent of the cloud distribution zone are well constrained using only six filters. The code is freely available from the authors.
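
The sampling engine is standard random-walk Metropolis-Hastings; the sketch below shows the loop on a cheap toy posterior (in the scheme described above, the expensive torus model inside log_post would be replaced by the neural-network surrogate):

```python
import math, random

def metropolis_hastings(log_post, theta0, n_samples, step=0.5):
    """Random-walk Metropolis-Hastings over a list of parameters."""
    chain, theta, lp = [], theta0, log_post(theta0)
    for _ in range(n_samples):
        proposal = [t + random.gauss(0.0, step) for t in theta]
        lp_new = log_post(proposal)
        if math.log(random.random()) < lp_new - lp:   # accept/reject
            theta, lp = proposal, lp_new
        chain.append(theta)
    return chain

log_post = lambda th: -0.5 * (th[0] ** 2 + (th[1] - 3.0) ** 2)  # toy target
chain = metropolis_hastings(log_post, [0.0, 0.0], 20_000)
print(sum(t[1] for t in chain[5_000:]) / 15_000)   # ~ 3.0 after burn-in
```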

  12. Extended likelihood inference in reliability

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.

    1978-10-01

    Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist
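
For the exponential model with a gamma prior on the failure rate, the predictive distribution of the next failure time has a simple closed form. The sketch below shows the standard Bayesian predictive as a point of reference for the prediction problems listed above (it is not the extended-likelihood interval itself, and the data values are hypothetical).

```python
def next_failure_survival(t, failure_times, alpha=1.0, beta=1.0):
    """P(next failure time > t) for exponential lifetimes with a
    Gamma(alpha, beta) prior on the failure rate: updating gives
    Gamma(alpha + n, beta + sum of times), and integrating the rate
    out yields the Lomax form (b / (b + t))**a."""
    a = alpha + len(failure_times)
    b = beta + sum(failure_times)
    return (b / (b + t)) ** a

observed = [50.0, 120.0, 210.0]   # hypothetical failure times (hours)
for t in (50.0, 100.0, 200.0):
    print(f"P(next failure > {t:.0f} h) = {next_failure_survival(t, observed):.3f}")
```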

  13. Reinforcement learning or active inference?

    Science.gov (United States)

    Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J

    2009-07-29

    This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.

  14. Reinforcement learning or active inference?

    Directory of Open Access Journals (Sweden)

    Karl J Friston

    2009-07-01

    Full Text Available This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.

  15. Ancient Biomolecules and Evolutionary Inference.

    Science.gov (United States)

    Cappellini, Enrico; Prohaska, Ana; Racimo, Fernando; Welker, Frido; Pedersen, Mikkel Winther; Allentoft, Morten E; de Barros Damgaard, Peter; Gutenbrunner, Petra; Dunne, Julie; Hammann, Simon; Roffet-Salque, Mélanie; Ilardo, Melissa; Moreno-Mayar, J Víctor; Wang, Yucheng; Sikora, Martin; Vinner, Lasse; Cox, Jürgen; Evershed, Richard P; Willerslev, Eske

    2018-04-25

    Over the last decade, studies of ancient biomolecules, particularly ancient DNA, proteins, and lipids, have revolutionized our understanding of evolutionary history. Though initially fraught with many challenges, the field now stands on firm foundations. Researchers now successfully retrieve nucleotide and amino acid sequences, as well as lipid signatures, from progressively older samples, originating from geographic areas and depositional environments that, until recently, were regarded as hostile to long-term preservation of biomolecules. Sampling frequencies and the spatial and temporal scope of studies have also increased markedly, and with them the size and quality of the data sets generated. This progress has been made possible by continuous technical innovations in analytical methods, enhanced criteria for the selection of ancient samples, integrated experimental methods, and advanced computational approaches. Here, we discuss the history and current state of ancient biomolecule research, its applications to evolutionary inference, and future directions for this young and exciting field. Expected final online publication date for the Annual Review of Biochemistry Volume 87 is June 20, 2018. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

  16. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
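
    The abstract is truncated, but the sparsity-inducing hierarchy that SBL builds on has a standard shape, stated below in generic textbook form (not necessarily the exact priors of the thesis): each weight receives its own variance hyperparameter, and integrating the hyperparameter out yields a heavy-tailed, sparsity-promoting marginal prior,

        w_i \mid \gamma_i \sim \mathcal{CN}(0, \gamma_i), \qquad
        \gamma_i \sim \text{Inv-Gamma}(a, b), \qquad
        p(w_i) = \int p(w_i \mid \gamma_i)\, p(\gamma_i)\, \mathrm{d}\gamma_i \quad \text{(Student-t-like)}

    Whether the resulting estimates are actually sparse then hinges on the inference method used and on the real-versus-complex distinction highlighted in the abstract.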

  17. EI: A Program for Ecological Inference

    Directory of Open Access Journals (Sweden)

    Gary King

    2004-09-01

    Full Text Available The program EI provides a method of inferring individual behavior from aggregate data. It implements the statistical procedures, diagnostics, and graphics from the book A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data (King 1997). Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Ecological inferences are required in political science research when individual-level surveys are unavailable (e.g., local or comparative electoral politics), unreliable (racial politics), insufficient (political geography), or infeasible (political history). They are also required in numerous areas of major significance in public policy (e.g., for applying the Voting Rights Act) and academic disciplines ranging from epidemiology and marketing to sociology and quantitative history.
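
    The statistical core of the method can be stated compactly (standard notation from the ecological-inference literature, not code from EI itself): for each aggregate unit i with observed totals, say turnout T_i and racial composition X_i, the unobserved individual-level rates satisfy the accounting identity

        T_i = \beta_i^{b} X_i + \beta_i^{w} (1 - X_i)

    so each unit confines its pair (β_i^b, β_i^w) to a line segment; EI then layers a truncated bivariate normal over the unit-level rates and combines these deterministic bounds with the distributional assumption to infer them.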

  18. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random-weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
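
    The paper's closing simulation invites a compact replication of the setup (our own illustrative Python with an assumed data-generating model, not the authors' code): select between a one- and a two-regressor linear model by AIC, then compare the risk of the post-model-selection slope estimator with an AIC-weighted model average.

        import numpy as np

        rng = np.random.default_rng(1)

        def fit(X, y):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            return beta, rss

        def aic(rss, n, k):
            return n * np.log(rss / n) + 2 * k        # Gaussian AIC up to a constant

        n, b1, b2, reps = 50, 1.0, 0.15, 2000
        err_pmse, err_avg = [], []
        for _ in range(reps):
            x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
            y = b1 * x1 + b2 * x2 + rng.standard_normal(n)
            beta_s, rss_s = fit(x1[:, None], y)
            beta_b, rss_b = fit(np.column_stack([x1, x2]), y)
            a_s, a_b = aic(rss_s, n, 1), aic(rss_b, n, 2)
            b1_pmse = beta_s[0] if a_s < a_b else beta_b[0]   # 0-1 "random weights"
            w = np.exp(-0.5 * (np.array([a_s, a_b]) - min(a_s, a_b)))
            w /= w.sum()                                      # smooth AIC weights
            b1_avg = w[0] * beta_s[0] + w[1] * beta_b[0]
            err_pmse.append((b1_pmse - b1) ** 2)
            err_avg.append((b1_avg - b1) ** 2)

        print("risk PMSE:", np.mean(err_pmse), "risk averaging:", np.mean(err_avg))

    Which estimator wins depends on b2 and n, echoing the paper's point that no single scheme dominates in risk.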

  19. Reassessment of MLST schemes for Leptospira spp. typing worldwide.

    Science.gov (United States)

    Varni, Vanina; Ruybal, Paula; Lauthier, Juan José; Tomasini, Nicolás; Brihuega, Bibiana; Koval, Ariel; Caimi, Karina

    2014-03-01

    Leptospirosis is a neglected zoonosis of global importance. Several multilocus sequence typing (MLST) methods have been developed for Leptospira spp., the causative agent of leptospirosis. In this study we reassessed the most commonly used MLST schemes in a set of worldwide isolates, in order to select the loci that achieve the maximum power of discrimination for typing Leptospira spp. The Global eBURST algorithm was used to detect clonal complexes among STs, and phylogenetic relationships among concatenated and individual sequences were inferred through maximum likelihood (ML) analysis. The evaluation of 12 loci combined to type a subset of strains rendered 57 different STs. Seven of these loci were selected into a final scheme upon studying the number of alleles and polymorphisms, the typing efficiency, the discriminatory power and the dN/dS ratio per nucleotide site for each locus. This new 7-locus scheme was applied to a wider collection of worldwide strains. The ML tree constructed from concatenated sequences of the 7 loci identified 6 major clusters corresponding to 6 Leptospira species. Global eBURST established 8 CCs, which showed that genotypes were clearly related by geographic origin and host. ST52 and ST47, represented mostly by Argentinian isolates, grouped the largest numbers of isolates. These isolates were serotyped as serogroups Pomona and Icterohaemorrhagiae, showing a unidirectional correlation in which isolates with the same ST belong to the same serogroup. In summary, this scheme combines the best loci from the most widely used MLST schemes for Leptospira spp. and supports worldwide strain classification. The Argentinian isolates exhibited congruence between allelic profile and serogroup, providing an alternative to serological methods. Published by Elsevier B.V.

  20. On 165Ho level scheme

    International Nuclear Information System (INIS)

    Ardisson, Claire; Ardisson, Gerard.

    1976-01-01

    A ¹⁶⁵Ho level scheme was constructed which led to the interpretation of sixty γ rays belonging to the decay of ¹⁶⁵Dy. A new 702.9 keV level was identified as the 5/2⁻ member of the 1/2⁻[541] Nilsson orbit.

  1. Homogenization scheme for acoustic metamaterials

    KAUST Repository

    Yang, Min; Ma, Guancong; Wu, Ying; Yang, Zhiyu; Sheng, Ping

    2014-01-01

    the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost

  2. Homogenization scheme for acoustic metamaterials

    KAUST Repository

    Yang, Min

    2014-02-26

    We present a homogenization scheme for acoustic metamaterials that is based on reproducing the lowest orders of scattering amplitudes from a finite volume of metamaterials. This approach is noted to differ significantly from that of coherent potential approximation, which is based on adjusting the effective-medium parameters to minimize scatterings in the long-wavelength limit. With the aid of metamaterials' eigenstates, the effective parameters, such as mass density and elastic modulus can be obtained by matching the surface responses of a metamaterial's structural unit cell with a piece of homogenized material. From the Green's theorem applied to the exterior domain problem, matching the surface responses is noted to be the same as reproducing the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost exactly with numerical simulations and experiments and the scheme's validity is constrained by the number of dominant surface multipoles instead of the usual long-wavelength assumption. In particular, the validity extends to the full band in one dimension and to regimes near the boundaries of the Brillouin zone in two dimensions.

  3. New practicable Siberian Snake schemes

    International Nuclear Information System (INIS)

    Steffen, K.

    1983-07-01

    Siberian Snake schemes can be inserted in ring accelerators for making the spin tune almost independent of energy. Two such schemes are suggested here which lend themselves particularly well to practical application over a wide energy range. Being composed of horizontal and vertical bending magnets, the proposed snakes are designed to have a small maximum beam excursion in one plane. By applying in this plane a bending correction that varies with energy, they can be operated at fixed geometry in the other plane where most of the bending occurs, thus avoiding complicated magnet motion or excessively large magnet apertures that would otherwise be needed for large energy variations. The first of the proposed schemes employs a pair of standard-type Siberian Snakes, i.e. of the usual 1st and 2nd kind which rotate the spin about the longitudinal and the transverse horizontal axis, respectively. The second scheme employs a pair of novel-type snakes which rotate the spin about either one of the horizontal axes that are at 45° to the beam direction. In obvious reference to these axes, they are called left-pointed and right-pointed snakes. (orig.)

  4. Nonlinear Secret Image Sharing Scheme

    Directory of Open Access Journals (Sweden)

    Sang-Ho Shin

    2014-01-01

    To evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74·t·log₂m bits per pixel (bpp), respectively.

  5. Fuzzy inference game approach to uncertainty in business decisions and market competitions.

    Science.gov (United States)

    Oderanti, Festus Oluseyi

    2013-01-01

    The increasing challenges and complexity of business environments make it more difficult for entrepreneurs to predict the outcomes of their decisions and operations. We therefore developed a decision support scheme that can be used and adapted for various business decision processes. These involve decisions made under uncertain situations, such as business competition in the market or wage negotiation within a firm. The scheme uses game strategies and fuzzy inference concepts to effectively grasp the variables in these uncertain situations. The games are played between human and fuzzy players. The accuracy of the fuzzy rule base and the game strategies help to mitigate the adverse effects that a business may suffer from these uncertain factors. We also introduced learning, which enables the fuzzy player to adapt over time. We tested this scheme in different scenarios and found that it could be an invaluable tool in the hands of entrepreneurs operating in uncertain and competitive business environments.
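
    The fuzzy-inference ingredient can be illustrated with a minimal Mamdani-style evaluation (made-up membership functions and rules on 0-1 scales; the scheme's actual rule base and game payoffs are richer): two inputs, market demand and competitor price, drive a fuzzy recommendation for our own price.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership with peak at b; shoulders via points outside [0, 1]."""
            return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        def infer_price(demand, comp_price):
            d_low, d_high = tri(demand, -1, 0, 0.6), tri(demand, 0.4, 1, 2)
            c_low, c_high = tri(comp_price, -1, 0, 0.6), tri(comp_price, 0.4, 1, 2)
            w_high = min(d_high, c_high)      # rule: high demand AND pricey competitor -> price high
            w_low = max(d_low, c_low)         # rule: low demand OR cheap competitor -> price low
            y = np.linspace(0, 1, 101)        # output universe for "our price"
            mu = np.maximum(np.minimum(w_low, [tri(v, -1, 0, 0.5) for v in y]),
                            np.minimum(w_high, [tri(v, 0.5, 1, 2) for v in y]))
            return float(np.sum(mu * y) / (np.sum(mu) + 1e-12))   # centroid defuzzification

        print(infer_price(0.8, 0.7))          # strong demand, pricey competitor -> high price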

  6. A first generation numerical geomagnetic storm prediction scheme

    International Nuclear Information System (INIS)

    Akasofu, S.-I.; Fry, C.F.

    1986-01-01

    Because geomagnetic and auroral disturbances cause significant interference on many electrical systems, it is essential to develop a reliable geomagnetic and auroral storm prediction scheme. A first generation numerical prediction scheme has been developed. The scheme consists of two major computer codes, which in turn consist of a large number of subroutine codes and empirical relationships. First of all, when a solar flare occurs, six flare parameters are determined as the input data set for the first code, which simulates the propagation of solar wind disturbances in the heliosphere to a distance of 2 a.u. Thus, one can determine the location of the propagating disturbances relative to the Earth's position. The solar wind speed and the three interplanetary magnetic field (IMF) components are then computed as a function of time at the Earth's location or any other desired (space probe) locations. These quantities in turn become the input parameters for the second major code, which first computes the power of the solar wind-magnetosphere dynamo as a function of time. The power thus obtained and the three IMF components can be used to compute or infer: the predicted geometry of the auroral oval; the cross-polar cap potential; the two geomagnetic indices AE and Dst; the total energy injection rate into the polar ionosphere; and the atmospheric temperature, etc. (author)
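
    The "power of the solar wind-magnetosphere dynamo" in schemes of this kind is conventionally estimated with Akasofu's epsilon coupling function; a minimal sketch follows (SI units; conventions for the constant prefactor vary across the literature, so treat the absolute scale as indicative only).

        import numpy as np

        def dynamo_power(v, bx, by, bz, l0=7 * 6.371e6):
            """Akasofu epsilon coupling function: eps ~ v * B^2 * sin^4(theta/2) * l0^2."""
            B = np.sqrt(bx**2 + by**2 + bz**2)          # IMF magnitude
            theta = np.arctan2(np.abs(by), bz)          # IMF clock angle in the GSM y-z plane
            mu0 = 4e-7 * np.pi
            return v * B**2 * np.sin(theta / 2)**4 * l0**2 / mu0   # watts, up to convention

        # typical southward-IMF conditions: v = 400 km/s, Bz = -5 nT
        print(dynamo_power(v=400e3, bx=0.0, by=0.0, bz=-5e-9))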

  7. Statistical inference: an integrated Bayesian/likelihood approach

    CERN Document Server

    Aitkin, Murray

    2010-01-01

    Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing.After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre

  8. Structural Inference in the Art of Violin Making.

    Science.gov (United States)

    Morse-Fortier, Leonard Joseph

    The "secrets" of success of early Italian violins have long been sought. Among their many efforts to reproduce the results of Stradiveri, Guarneri, and Amati, luthiers have attempted to order and match natural resonant frequencies in the free violin plates. This tap-tone plate tuning technique is simply an eigenvalue extraction scheme. In the final stages of carving, the violin maker complements considerable intuitive knowledge of violin plate structure and of modal attributes with tap-tone frequency estimates to better understand plate structure and to inform decisions about plate carving and completeness. Examining the modal attributes of violin plates, this work develops and incorporates an impulse-response scheme for modal inference, measures resonant frequencies and modeshapes for a pair of violin plates, and presents modeshapes through a unique computer visualization scheme developed specifically for this purpose. The work explores, through simple examples questions of how plate modal attributes reflect underlying structure, and questions about the so -called evolution of modeshapes and frequencies through assembly of the violin. Separately, the work develops computer code for a carved, anisotropic, plate/shell finite element. Solutions are found to the static displacement and free-vibration eigenvalue problems for an orthotropic plate, and used to verify element accuracy. Finally, a violin back plate is modelled with full consideration of plate thickness and arching. Model estimates for modal attributes compare very well against experimentally acquired values. Finally, the modal synthesis technique is applied to predicting the modal attributes of the violin top plate with ribs attached from those of the top plate alone, and with an estimate of rib mass and stiffness. This last analysis serves to verify the modal synthesis method, and to quantify its limits of applicability in attempting to solve problems with severe structural modification. Conclusions

  9. Support Schemes and Ownership Structures

    DEFF Research Database (Denmark)

    Ropenus, Stephanie; Schröder, Sascha Thorsten; Costa, Ana

    In recent years, fuel cell based micro-combined heat and power has received increasing attention due to its potential contribution to energy savings, efficiency gains, customer proximity and flexibility in operation and capacity size. The FC4Home project assesses technical and economic aspects ... of support scheme simultaneously affects risk and technological development, which is the focus of Section 4. Subsequent to this conceptual overview, Section 5 takes a glance at the national application of support schemes for mCHP in practice, notably in the three country cases of the FC4Home project, Denmark, France and Portugal. Another crucial aspect for the diffusion of the mCHP technology is possible ownership structures. These may range from full consumer ownership to ownership by utilities and energy service companies, which is discussed in Section 6. Finally, a conclusion (Section 7) wraps up ...

  10. [PICS: pharmaceutical inspection cooperation scheme].

    Science.gov (United States)

    Morénas, J

    2009-01-01

    The pharmaceutical inspection cooperation scheme (PICS) is a structure comprising 34 participating authorities located worldwide (October 2008). It was created in 1995 on the basis of the pharmaceutical inspection convention (PIC), established by the European free trade association (EFTA) in 1970. The scheme has several goals: to be an internationally recognised body in the field of good manufacturing practices (GMP); to train inspectors (by way of an annual seminar and expert circles related notably to active pharmaceutical ingredients [API], quality risk management and computerised systems, useful for the writing of inspection aide-memoires); to maintain high standards for GMP inspectorates (through regular crossed audits); and to provide a forum for exchanges on technical matters between inspectors, and between inspectors and the pharmaceutical industry.

  11. Project financing renewable energy schemes

    International Nuclear Information System (INIS)

    Brandler, A.

    1993-01-01

    The viability of many Renewable Energy projects is critically dependent upon the ability of these projects to secure the necessary financing on acceptable terms. The principal objective of the study was to provide project developers with an overview of project financing techniques and of the conditions under which project finance for Renewable Energy schemes can be raised, focussing on the potential sources of finance, the typical project financing structures that could be utilised for Renewable Energy schemes and the risk/return and security requirements of lenders, investors and other potential sources of financing. A second objective was to describe the appropriate strategy and tactics for developers to adopt in approaching the financing markets for such projects. (author)

  12. Network Regulation and Support Schemes

    DEFF Research Database (Denmark)

    Ropenus, Stephanie; Schröder, Sascha Thorsten; Jacobsen, Henrik

    2009-01-01

    At present, there exists no explicit European policy framework on distributed generation. Various Directives encompass distributed generation; inherently, their implementation is to the discretion of the Member States. The latter have adopted different kinds of support schemes, ranging from feed-in tariffs to market-based quota systems, and network regulation approaches, comprising rate-of-return and incentive regulation. National regulation and the vertical structure of the electricity sector shape the incentives of market agents, notably of distributed generators and network operators. This article seeks to investigate the interactions between the policy dimensions of support schemes and network regulation and how they affect the deployment of distributed generation. Firstly, a conceptual analysis examines how the incentives of the different market agents are affected. In particular ...

  13. Distance labeling schemes for trees

    DEFF Research Database (Denmark)

    Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben

    2016-01-01

    We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille...... variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that (1/4)·log²(n) bits are needed and that (1/2)·log²(n) bits are sufficient. We also give (1 + ε)-stretch labeling schemes using Theta......
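
    A concrete way to see where the log²(n) bits come from is a centroid-decomposition labeling: each node stores, for each of the O(log n) centroids above it in the decomposition, that centroid's identity and its distance to it, and the tree distance is recovered from the two labels alone. A Python sketch of this classic construction (it illustrates the Θ(log² n) regime, not the improved constants of the paper):

        from collections import deque

        def build_labels(adj, n):
            """label[u] maps each centroid above u to dist(u, centroid)."""
            removed = [False] * n
            size = [0] * n
            label = [dict() for _ in range(n)]

            def calc_size(u, p):
                size[u] = 1
                for v in adj[u]:
                    if v != p and not removed[v]:
                        calc_size(v, u)
                        size[u] += size[v]

            def find_centroid(u, p, tree_size):
                for v in adj[u]:
                    if v != p and not removed[v] and size[v] > tree_size // 2:
                        return find_centroid(v, u, tree_size)
                return u

            def record_distances(c):
                dist = {c: 0}
                dq = deque([c])
                while dq:                     # BFS within the current component
                    u = dq.popleft()
                    label[u][c] = dist[u]
                    for v in adj[u]:
                        if not removed[v] and v not in dist:
                            dist[v] = dist[u] + 1
                            dq.append(v)

            def decompose(u):
                calc_size(u, -1)
                c = find_centroid(u, -1, size[u])
                record_distances(c)           # c enters every label in its component
                removed[c] = True
                for v in adj[c]:
                    if not removed[v]:
                        decompose(v)

            decompose(0)
            return label

        def dist_from_labels(lu, lv):
            """The separating centroid is a shared entry, so the minimum is exact."""
            return min(lu[c] + lv[c] for c in lu.keys() & lv.keys())

        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a path 0-1-2-3-4
        labels = build_labels(adj, 5)
        print(dist_from_labels(labels[0], labels[4]))             # -> 4

    Each label holds O(log n) pairs of O(log n)-bit numbers, i.e. O(log² n) bits; the paper's contribution is pinning the constant between 1/4 and 1/2.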

  14. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers......

  15. Inferring Domain Plans in Question-Answering

    National Research Council Canada - National Science Library

    Pollack, Martha E

    1986-01-01

    The importance of plan inference in models of conversation has been widely noted in the computational-linguistics literature, and its incorporation in question-answering systems has enabled a range...

  16. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference
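
    For readers new to the model, a minimal sketch of the generative side and of the label-conditional estimate that inference schemes iterate on (a standard two-block SBM in our own illustrative code, unrelated to the paper's scalable algorithm):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_sbm(z, B):
            """Adjacency matrix of an undirected SBM: edge prob depends on blocks only."""
            n = len(z)
            P = B[np.ix_(z, z)]                    # pairwise edge probabilities
            U = rng.random((n, n))
            A = np.triu((U < P).astype(int), 1)    # sample each unordered pair once
            return A + A.T

        def estimate_block_matrix(A, z, k):
            """Given labels, the MLE of B is observed edges / possible pairs, per block pair."""
            counts, pairs = np.zeros((k, k)), np.zeros((k, k))
            n = len(z)
            for i in range(n):
                for j in range(i + 1, n):
                    a, b = z[i], z[j]
                    counts[a, b] += A[i, j]
                    pairs[a, b] += 1
                    if a != b:
                        counts[b, a] += A[i, j]
                        pairs[b, a] += 1
            return counts / np.maximum(pairs, 1)

        z = np.array([0] * 50 + [1] * 50)
        B = np.array([[0.30, 0.05], [0.05, 0.25]])
        A = sample_sbm(z, B)
        print(estimate_block_matrix(A, z, 2).round(2))   # close to B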

  17. Cambridge community Optometry Glaucoma Scheme.

    Science.gov (United States)

    Keenan, Jonathan; Shahid, Humma; Bourne, Rupert R; White, Andrew J; Martin, Keith R

    2015-04-01

    With a higher life expectancy, there is an increased demand for hospital glaucoma services in the United Kingdom. The Cambridge community Optometry Glaucoma Scheme (COGS) was initiated in 2010, where new referrals for suspected glaucoma are evaluated by community optometrists with a special interest in glaucoma, with virtual electronic review and validation by a consultant ophthalmologist with special interest in glaucoma. 1733 patients were evaluated by this scheme between 2010 and 2013. Clinical assessment is performed by the optometrist at a remote site. Goldmann applanation tonometry, pachymetry, monoscopic colour optic disc photographs and automated Humphrey visual field testing are performed. A clinical decision is made as to whether a patient has glaucoma or is a suspect, and referred on or discharged as a false positive referral. The clinical findings, optic disc photographs and visual field test results are transmitted electronically for virtual review by a consultant ophthalmologist. The number of false positive referrals from initial referral into the scheme. Of the patients, 46.6% were discharged at assessment and a further 5.7% were discharged following virtual review. Of the patients initially discharged, 2.8% were recalled following virtual review. Following assessment at the hospital, a further 10.5% were discharged after a single visit. The COGS community-based glaucoma screening programme is a safe and effective way of evaluating glaucoma referrals in the community and reducing false-positive referrals for glaucoma into the hospital system. © 2014 Royal Australian and New Zealand College of Ophthalmologists.

  18. New schemes for particle accelerators

    International Nuclear Information System (INIS)

    Nishida, Y.

    1985-01-01

    In the present paper, the authors propose new schemes for realizing the v/sub p/xB accelerator, by using no plasma system for producing the strong longitudinal waves. The first method is to use a grating for obtaining extended interaction of an electron beam moving along the grating surface with light beam incident also along the surface. Here, the light beam propagates obliquely to the grating grooves for producing strong electric field, and the electron beam propagates in parallel to the light beam. The static magnetic field is applied perpendicularly to the grating surface. In the present system, the beam interacts synchronously with the p-polarized wave which has the electric field be parallel to the grating surface. Another conventional scheme is to use a delay circuit. Here, the light beam propagates obliquely between a pair of array of conductor fins or slots. The phase velocity of the spatial harmonics in the y-direction (right angle to the array of slots) is slower than the speed of light. With the aid of powerful laser light or microwave source, it should be possible to miniaturise linacs by using the v/sub p/xB effect and schemes proposed here

  19. A Memory Efficient Network Encryption Scheme

    Science.gov (United States)

    El-Fotouh, Mohamed Abo; Diepold, Klaus

    In this paper, we studied the two widely used encryption schemes in network applications. Shortcomings have been found in both schemes, as these schemes consume either more memory to gain high throughput or low memory with low throughput. The need has aroused for a scheme that has low memory requirements and in the same time possesses high speed, as the number of the internet users increases each day. We used the SSM model [1], to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.

  20. An Arbitrated Quantum Signature Scheme without Entanglement*

    International Nuclear Information System (INIS)

    Li Hui-Ran; Luo Ming-Xing; Peng Dai-Yuan; Wang Xiao-Jun

    2017-01-01

    Several quantum signature schemes are recently proposed to realize secure signatures of quantum or classical messages. Arbitrated quantum signature as one nontrivial scheme has attracted great interests because of its usefulness and efficiency. Unfortunately, previous schemes cannot against Trojan horse attack and DoS attack and lack of the unforgeability and the non-repudiation. In this paper, we propose an improved arbitrated quantum signature to address these secure issues with the honesty arbitrator. Our scheme takes use of qubit states not entanglements. More importantly, the qubit scheme can achieve the unforgeability and the non-repudiation. Our scheme is also secure for other known quantum attacks . (paper)

  1. Efficient algorithms for conditional independence inference

    Czech Academy of Sciences Publication Activity Database

    Bouckaert, R.; Hemmecke, R.; Lindner, S.; Studený, Milan

    2010-01-01

    Roč. 11, č. 1 (2010), s. 3453-3479 ISSN 1532-4435 R&D Projects: GA ČR GA201/08/0539; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : conditional independence inference * linear programming approach Subject RIV: BA - General Mathematics Impact factor: 2.949, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-efficient algorithms for conditional independence inference.pdf

  2. On the criticality of inferred models

    Science.gov (United States)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-10-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
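
    The identity behind this claim is worth spelling out. For a model in exponential-family (maximum-entropy) form, which covers the pairwise models typically inferred from such data, the Fisher information metric coincides with the susceptibility matrix (a standard result, in our notation):

        p(s \mid J) = \frac{1}{Z(J)} \exp\Big(\textstyle\sum_a J_a \phi_a(s)\Big), \qquad
        g_{ab}(J) = \frac{\partial^2 \ln Z}{\partial J_a \, \partial J_b}
                  = \langle \phi_a \phi_b \rangle - \langle \phi_a \rangle \langle \phi_b \rangle = \chi_{ab}

    Large susceptibility thus means large distinguishability per unit change in parameters, which is why distinguishable inferred models accumulate near critical points.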

  3. On the criticality of inferred models

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-01-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality

  4. Polynomial Chaos Surrogates for Bayesian Inference

    KAUST Repository

    Le Maitre, Olivier

    2016-01-06

    Bayesian inference is a popular probabilistic method to solve inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on Bayes' rule to update the prior density of the sought field from observations and to derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov-Chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g. by means of Karhunen-Loève decomposition) to recast the inference problem into the inference of a finite number of coordinates. Although proved effective in many situations, the Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task, as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced models or surrogates for the (approximate) determination of the parametrized PDE solution and of hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of the posterior sampling by means of Polynomial Chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making such an approach adaptive to further improve its efficiency.
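
    In its simplest setting (a single standard-normal germ ξ and the probabilists' Hermite basis), a Polynomial Chaos surrogate is built by projecting the forward model onto the basis with Gauss-Hermite quadrature; the sketch below uses a cheap analytic stand-in for the PDE observable, not an actual solver:

        import numpy as np
        from numpy.polynomial import hermite_e as He
        from math import factorial, sqrt, pi

        def pce_coeffs(f, order, nquad=40):
            """Coefficients c_k of f(xi) ~ sum_k c_k He_k(xi), with xi ~ N(0, 1)."""
            x, w = He.hermegauss(nquad)            # nodes/weights for weight exp(-x^2/2)
            fx = f(x)
            c = []
            for k in range(order + 1):
                Hk = He.hermeval(x, [0.0] * k + [1.0])            # He_k at the nodes
                c.append(np.sum(w * fx * Hk) / (sqrt(2 * pi) * factorial(k)))
            return np.array(c)                     # E[He_k^2] = k! gives the normalization

        f = lambda xi: np.exp(0.3 * xi)            # stand-in for an expensive PDE observable
        c = pce_coeffs(f, order=6)
        print(He.hermeval(1.0, c), f(1.0))         # surrogate vs. exact at xi = 1

    Inside an MCMC loop, evaluating the polynomial in place of the PDE solve is what buys the acceleration discussed above.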

  5. A Bayesian Network Schema for Lessening Database Inference

    National Research Council Canada - National Science Library

    Chang, LiWu; Moskowitz, Ira S

    2001-01-01

    .... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...

  6. Student Teachers’ Proof Schemes on Proof Tasks Involving Inequality: Deductive or Inductive?

    Science.gov (United States)

    Rosyidi, A. H.; Kohar, A. W.

    2018-01-01

    Exploring student teachers' proof ability is crucial, as it is important for improving the quality of their learning process and helps their future students learn how to construct a proof. Hence, this study aims at exploring the proof schemes of student teachers at the beginning of their studies. Data were collected from 130 proofs produced by 65 Indonesian student teachers on two proof tasks involving algebraic inequality. To analyse them, the proofs were classified into the refined proof scheme levels proposed by Lee (2016), ranging from inductive, which provides only irrelevant inferences, to deductive proofs, which address formal representation. Findings present several examples of each of Lee's levels in the student teachers' proofs, spanning irrelevant inferences, novice use of examples or logical reasoning, strategic use of examples for reasoning, deductive inferences with major and minor logical coherence, and deductive proof with informal and formal representation. It was also found that more than half of the students' proofs were coded as inductive schemes, which do not meet the requirements of proof for the tasks examined in this study. This study suggests that teacher educators in teacher colleges reform the curriculum on proof learning to accommodate the improvement of student teachers' proving ability from inductive to deductive proof, as well as from informal to formal proof.

  7. Decoupling schemes for the SSC Collider

    International Nuclear Information System (INIS)

    Cai, Y.; Bourianoff, G.; Cole, B.; Meinke, R.; Peterson, J.; Pilat, F.; Stampke, S.; Syphers, M.; Talman, R.

    1993-05-01

    A decoupling system is designed for the SSC Collider. This system can accommodate three decoupling schemes by using 44 skew quadrupoles in different configurations. Several decoupling schemes are studied and compared in this paper.

  8. Renormalization scheme-invariant perturbation theory

    International Nuclear Information System (INIS)

    Dhar, A.

    1983-01-01

    A complete solution to the problem of the renormalization scheme dependence of perturbative approximants to physical quantities is presented. An equation is derived which determines any physical quantity implicitly as a function of only scheme independent variables. (orig.)

  9. Wireless Broadband Access and Accounting Schemes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, we propose two wireless broadband access and accounting schemes. In both schemes, the accounting system adopts the RADIUS protocol, while the access systems adopt the SSH and SSL protocols, respectively.

  10. Tightly Secure Signatures From Lossy Identification Schemes

    OpenAIRE

    Abdalla , Michel; Fouque , Pierre-Alain; Lyubashevsky , Vadim; Tibouchi , Mehdi

    2015-01-01

    In this paper, we present three digital signature schemes with tight security reductions in the random oracle model. Our first signature scheme is a particularly efficient version of the short exponent discrete log-based scheme of Girault et al. (J Cryptol 19(4):463–487, 2006). Our scheme has a tight reduction to the decisional short discrete logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the or...

  11. A formal model of interpersonal inference

    Directory of Open Access Journals (Sweden)

    Michael eMoutoussis

    2014-03-01

    Full Text Available Introduction: We propose that active Bayesian inference – a general framework for decision-making – can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regards to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: 1. Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to 'mentalising' in the psychological literature, is based upon the outcomes of interpersonal exchanges. 2. We show how some well-known social-psychological phenomena (e.g. self-serving biases) can be explained in terms of active interpersonal inference. 3. Mentalising naturally entails Bayesian updating of how people value social outcomes. Crucially, this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modelling intersubject variability in mentalising during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalising is distorted.

  12. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    Science.gov (United States)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: the 4th order Runge-Kutta time stepping, the 4th order pentadiagonal compact spatial discretization with the maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolutions for category 1 and category 2, respectively.
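
    To make "TVD" concrete, here is a minimal flux-limited scheme for linear advection u_t + u_x = 0 (a generic textbook minmod scheme on a periodic grid, simpler than the benchmark solvers compared above):

        import numpy as np

        def tvd_step(u, nu):
            """One forward-Euler step of a minmod flux-limited scheme, a = 1, periodic."""
            du = np.roll(u, -1) - u                           # u_{i+1} - u_i
            du_up = u - np.roll(u, 1)                         # u_i - u_{i-1}
            r = du_up / np.where(np.abs(du) > 1e-12, du, 1e-12)
            phi = np.maximum(0.0, np.minimum(1.0, r))         # minmod limiter
            F = u + 0.5 * (1.0 - nu) * phi * du               # limited flux at face i+1/2
            return u - nu * (F - np.roll(F, 1))

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)         # square wave
        for _ in range(125):                                  # advect by 0.25 at CFL 0.4
            u = tvd_step(u, nu=0.4)

    The limiter keeps the advected profile free of the spurious oscillations an unlimited second-order scheme would produce, which is the property the schemes above guarantee at higher order.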

  13. Optimal Sales Schemes for Network Goods

    DEFF Research Database (Denmark)

    Parakhonyak, Alexei; Vikander, Nick

    consumers simultaneously, serve them all sequentially, or employ any intermediate scheme. We show that the optimal sales scheme is purely sequential, where each consumer observes all previous sales before choosing whether to buy himself. A sequential scheme maximizes the amount of information available...

  14. THROUGHPUT ANALYSIS OF EXTENDED ARQ SCHEMES

    African Journals Online (AJOL)

    Various Automatic Repeat Request (ARQ) schemes have been used to combat errors that befall information transmitted in digital communication systems. Such schemes include simple ARQ, mixed mode ARQ and Hybrid ARQ (HARQ). In this study we introduce extended ARQ schemes and derive ...
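
    The record is clipped before the derivations, but the baseline throughput efficiencies that any extended scheme is measured against are standard (idealized textbook forms, with P the block error probability and N the number of block durations per round trip):

        \eta_{\mathrm{SW}} = \frac{1 - P}{N}, \qquad
        \eta_{\mathrm{GBN}} = \frac{1 - P}{1 - P + N P}, \qquad
        \eta_{\mathrm{SR}} = 1 - P

    for stop-and-wait, go-back-N and selective-repeat ARQ, respectively; hybrid schemes add forward error correction to lower the effective P at the cost of code rate.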

  15. Arbitrated quantum signature scheme with message recovery

    International Nuclear Information System (INIS)

    Lee, Hwayean; Hong, Changho; Kim, Hyunsang; Lim, Jongin; Yang, Hyung Jin

    2004-01-01

    Two quantum signature schemes with message recovery relying on the availability of an arbitrator are proposed. One scheme uses a public board and the other does not. However both schemes provide confidentiality of the message and a higher efficiency in transmission

  16. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10¹³-10¹⁴ neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  17. Deep Learning for Population Genetic Inference.

    Science.gov (United States)

    Sheehan, Sara; Song, Yun S

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
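
    The regression at the heart of the method, from a vector of summary statistics to parameters, can be prototyped in a few lines; the sketch below is a generic scikit-learn stand-in with a made-up toy simulator, not the authors' architecture or their coalescent simulations:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def simulate(theta):
            """Toy stand-in for a population genetic simulator: noisy summaries."""
            stats = np.concatenate([theta, theta ** 2])
            return stats + 0.05 * rng.standard_normal(stats.size)

        # training set: parameters drawn from the prior, summaries from the simulator
        thetas = rng.uniform(0.0, 1.0, size=(5000, 2))   # e.g. selection strength, size change
        X = np.array([simulate(t) for t in thetas])
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, thetas)

        obs = simulate(np.array([0.3, 0.7]))             # pretend observed data
        print(net.predict(obs.reshape(1, -1)))           # inferred parameter estimates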

  18. Deep Learning for Population Genetic Inference.

    Directory of Open Access Journals (Sweden)

    Sara Sheehan

    2016-03-01

    Full Text Available Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.

  19. Deep Learning for Population Genetic Inference

    Science.gov (United States)

    Sheehan, Sara; Song, Yun S.

    2016-01-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908

  20. Inferring Phylogenetic Networks Using PhyloNet.

    Science.gov (United States)

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  1. REMINDER: Saved Leave Scheme (SLS)

    CERN Multimedia

    2003-01-01

    Transfer of leave to saved leave accounts Under the provisions of the voluntary saved leave scheme (SLS), a maximum total of 10 days'* annual and compensatory leave (excluding saved leave accumulated in accordance with the provisions of Administrative Circular No 22B) can be transferred to the saved leave account at the end of the leave year (30 September). We remind you that unused leave of all those taking part in the saved leave scheme at the closure of the leave year accounts is transferred automatically to the saved leave account on that date. Therefore, staff members have no administrative steps to take. In addition, the transfer, which eliminates the risk of omitting to request leave transfers and rules out calculation errors in transfer requests, will be clearly shown in the list of leave transactions that can be consulted in EDH from October 2003 onwards. Furthermore, this automatic leave transfer optimizes staff members' chances of benefiting from a saved leave bonus provided that they ar...

  2. Goal inferences about robot behavior : goal inferences and human response behaviors

    NARCIS (Netherlands)

    Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.

    2014-01-01

    This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors cause different response behavior from people.

  3. Quantum Secure Communication Scheme with W State

    International Nuclear Information System (INIS)

    Wang Jian; Zhang Quan; Tang Chaojing

    2007-01-01

    We present a quantum secure communication scheme using three-qubit W state. It is unnecessary for the present scheme to use alternative measurement or Bell basis measurement. Compared with the quantum secure direct communication scheme proposed by Cao et al. [H.J. Cao and H.S. Song, Chin. Phys. Lett. 23 (2006) 290], in our scheme, the detection probability for an eavesdropper's attack increases from 8.3% to 25%. We also show that our scheme is secure for a noise quantum channel.

  4. Labeling schemes for bounded degree graphs

    DEFF Research Database (Denmark)

    Adjiashvili, David; Rotbart, Noy Galil

    2014-01-01

    We investigate adjacency labeling schemes for graphs of bounded degree Δ = O(1). In particular, we present an optimal (up to an additive constant) log n + O(1) adjacency labeling scheme for bounded degree trees. The latter scheme is derived from a labeling scheme for bounded degree outerplanar...... graphs. Our results complement a similar bound recently obtained for bounded depth trees [Fraigniaud and Korman, SODA 2010], and may provide new insights for closing the long standing gap for adjacency in trees [Alstrup and Rauhe, FOCS 2002]. We also provide improved labeling schemes for bounded degree...

  5. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    Science.gov (United States)

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…

  6. Explanatory Preferences Shape Learning and Inference.

    Science.gov (United States)

    Lombrozo, Tania

    2016-10-01

    Explanations play an important role in learning and inference. People often learn by seeking explanations, and they assess the viability of hypotheses by considering how well they explain the data. An emerging body of work reveals that both children and adults have strong and systematic intuitions about what constitutes a good explanation, and that these explanatory preferences have a systematic impact on explanation-based processes. In particular, people favor explanations that are simple and broad, with the consequence that engaging in explanation can shape learning and inference by leading people to seek patterns and favor hypotheses that support broad and simple explanations. Given the prevalence of explanation in everyday cognition, understanding explanation is therefore crucial to understanding learning and inference. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Fuzzy logic controller using different inference methods

    International Nuclear Information System (INIS)

    Liu, Z.; De Keyser, R.

    1994-01-01

    In this paper the design of fuzzy controllers using different inference methods is introduced. The configuration of the fuzzy controllers includes a general rule base, which is a collection of fuzzy PI or PD rules, a triangular fuzzy data model and a centre-of-gravity defuzzification algorithm. The generalized modus ponens (GMP) is used with the minimum operator of the triangular norm. Under the sup-min inference rule, six fuzzy implication operators are employed to calculate the fuzzy look-up tables for each rule base. The performance is tested in simulated systems with MATLAB/SIMULINK. Results show the effects of using fuzzy controllers with different inference methods when applied to different test processes.
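
    A minimal sketch of one such inference step, assuming triangular membership functions, sup-min (Mamdani) inference with min implication and max aggregation, and centre-of-gravity defuzzification; the three-rule base and the universes are placeholders rather than the paper's rule base.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzy_step(e, de):
        """One sup-min inference step with centroid defuzzification."""
        u = np.linspace(-1.0, 1.0, 201)  # output universe
        rules = [  # (firing strength, consequent fuzzy set on u)
            (min(tri(e, -1, -0.5, 0), tri(de, -1, -0.5, 0)), tri(u, -1.0, -0.7, -0.3)),
            (min(tri(e, -0.5, 0, 0.5), tri(de, -0.5, 0, 0.5)), tri(u, -0.3, 0.0, 0.3)),
            (min(tri(e, 0, 0.5, 1), tri(de, 0, 0.5, 1)), tri(u, 0.3, 0.7, 1.0)),
        ]
        # min implication per rule, max aggregation across rules
        agg = np.max([np.minimum(w, mf) for w, mf in rules], axis=0)
        return float((u * agg).sum() / max(agg.sum(), 1e-12))  # centre of gravity

    print(fuzzy_step(e=0.4, de=0.1))
    ```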

  8. Uncertainty in prediction and in inference

    International Nuclear Information System (INIS)

    Hilgevoord, J.; Uffink, J.

    1991-01-01

    The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in inference can be obtained by means of the so-called statistical distance between probability distributions. When applied to quantum mechanics, this distance leads to a measure of the distinguishability of quantum states, which essentially is the absolute value of the matrix element between the states. The importance of this result to the quantum mechanical uncertainty principle is noted. The second part of the paper provides a derivation of the statistical distance on the basis of the so-called method of support
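
    As a concrete illustration of the quantities involved, the sketch below computes one standard form of the statistical distance between discrete probability distributions (the angle arccos of the Bhattacharyya coefficient) and the overlap |⟨ψ|φ⟩| between two quantum states; the paper's exact definitions may differ, so treat this as an assumption-laden gloss.

    ```python
    import numpy as np

    def statistical_distance(p, q):
        """One common statistical distance between discrete distributions:
        the angle arccos(sum_i sqrt(p_i * q_i))."""
        return np.arccos(np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0))

    def overlap(psi, phi):
        """|<psi|phi>| for two normalized state vectors, the
        distinguishability measure mentioned in the abstract."""
        return abs(np.vdot(psi, phi))

    p = np.array([0.5, 0.5]); q = np.array([0.9, 0.1])
    print(statistical_distance(p, q))

    psi = np.array([1, 0], dtype=complex)
    phi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    print(overlap(psi, phi))  # 1/sqrt(2) for these two states
    ```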

  9. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performance of the algorithm proposed in this paper, since it has a very high probability of parsing valid sentences.

  10. Examples in parametric inference with R

    CERN Document Server

    Dixit, Ulhas Jayram

    2016-01-01

    This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimator are discussed. Chapter 6 discusses Bayes, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory cou...

  11. Grammatical inference algorithms, routines and applications

    CERN Document Server

    Wieczorek, Wojciech

    2017-01-01

    This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.

  12. Statistical inference based on divergence measures

    CERN Document Server

    Pardo, Leandro

    2005-01-01

    The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach.Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data with emphasis is on problems in contingency tables and loglinear models using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...

  13. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified......-based maximum likelihood inference and the effect of specifying different reference Poisson models....

  14. IMAGINE: Interstellar MAGnetic field INference Engine

    Science.gov (United States)

    Steininger, Theo

    2018-03-01

    IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.

  15. Inferring causality from noisy time series data

    DEFF Research Database (Denmark)

    Mønster, Dan; Fusaroli, Riccardo; Tylén, Kristian

    2016-01-01

    Convergent Cross-Mapping (CCM) has shown high potential to perform causal inference in the absence of models. We assess the strengths and weaknesses of the method by varying coupling strength and noise levels in coupled logistic maps. We find that CCM fails to infer accurate coupling strength...... and even causality direction in synchronized time-series and in the presence of intermediate coupling. We find that the presence of noise deterministically reduces the level of cross-mapping fidelity, while the convergence rate exhibits higher levels of robustness. Finally, we propose that controlled noise...
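
    A minimal simulation of the usual test bed for this kind of study, assuming the standard coupled logistic-map form from the CCM literature; parameter values and the noise model are illustrative choices.

    ```python
    import numpy as np

    def coupled_logistic(n, beta_xy, beta_yx, noise=0.0, rx=3.8, ry=3.5, seed=0):
        """Two coupled logistic maps; beta_xy is the strength with which
        y drives x, and beta_yx the strength with which x drives y."""
        rng = np.random.default_rng(seed)
        x, y = np.empty(n), np.empty(n)
        x[0], y[0] = 0.4, 0.2
        for t in range(n - 1):
            x[t + 1] = x[t] * (rx - rx * x[t] - beta_xy * y[t])
            y[t + 1] = y[t] * (ry - ry * y[t] - beta_yx * x[t])
        # additive observation noise, as varied in the study
        return (x + noise * rng.standard_normal(n),
                y + noise * rng.standard_normal(n))

    # x drives y but not conversely; CCM should detect the x -> y link.
    x, y = coupled_logistic(1000, beta_xy=0.0, beta_yx=0.1, noise=0.01)
    ```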

  16. Inferring Pre-shock Acoustic Field From Post-shock Pitot Pressure Measurement

    Science.gov (United States)

    Wang, Jian-Xun; Zhang, Chao; Duan, Lian; Xiao, Heng; Virginia Tech Team; Missouri Univ of Sci; Tech Team

    2017-11-01

    Linear interaction analysis (LIA) and an iterative ensemble Kalman method are used to convert post-shock Pitot pressure fluctuations to static pressure fluctuations in front of the shock. The LIA is used as the forward model for the transfer function associated with a homogeneous field of acoustic waves passing through a nominally normal shock wave. The iterative ensemble Kalman method is then employed to infer the spectrum of upstream acoustic waves based on the post-shock Pitot pressure measured at a single point. Several test cases with synthetic and real measurement data are used to demonstrate the merits of the proposed inference scheme. The study provides the basis for measuring tunnel freestream noise with intrusive probes in noisy supersonic wind tunnels.
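
    A generic sketch of one iteration of an ensemble Kalman update for parameter inference, with an arbitrary forward model g standing in for the LIA-based transfer function; the ensemble sizes, covariances and toy usage are assumptions, not the paper's setup.

    ```python
    import numpy as np

    def enkf_update(theta, g, d, R, rng):
        """One ensemble Kalman analysis step.
        theta: (Ne, Np) parameter ensemble; g: forward model mapping a
        parameter vector to an (Nd,) prediction; d: (Nd,) observed data;
        R: (Nd, Nd) observation-noise covariance."""
        G = np.array([g(t) for t in theta])   # predicted observations
        dth = theta - theta.mean(axis=0)
        dG = G - G.mean(axis=0)
        Ne = theta.shape[0]
        C_tg = dth.T @ dG / (Ne - 1)          # parameter-observation covariance
        C_gg = dG.T @ dG / (Ne - 1)           # observation covariance
        K = C_tg @ np.linalg.inv(C_gg + R)    # Kalman gain
        d_pert = d + rng.multivariate_normal(np.zeros(len(d)), R, size=Ne)
        return theta + (d_pert - G) @ K.T

    # Toy usage: recover theta from d = [theta, theta**2] + noise.
    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 2.0, size=(100, 1))
    forward = lambda t: np.array([t[0], t[0] ** 2])
    for _ in range(5):                        # the "iterative" part
        theta = enkf_update(theta, forward, np.array([1.0, 1.1]),
                            0.01 * np.eye(2), rng)
    print(theta.mean())
    ```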

  17. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.

  18. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  19. Fragment separator momentum compression schemes

    Energy Technology Data Exchange (ETDEWEB)

    Bandura, Laura, E-mail: bandura@anl.gov [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)

    2011-07-21

    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.

  20. Fragment separator momentum compression schemes

    International Nuclear Information System (INIS)

    Bandura, Laura; Erdelyi, Bela; Hausmann, Marc; Kubo, Toshiyuki; Nolen, Jerry; Portillo, Mauricio; Sherrill, Bradley M.

    2011-01-01

    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.

  1. Electrical injection schemes for nanolasers

    DEFF Research Database (Denmark)

    Lupi, Alexandra; Chung, Il-Sug; Yvind, Kresten

    2013-01-01

    The performance of injection schemes among recently demonstrated electrically pumped photonic crystal nanolasers has been investigated numerically. The computation has been carried out at room temperature using a commercial semiconductor simulation software. For the simulations two electrical...... of 3 InGaAsP QWs on an InP substrate has been chosen for the modeling. In the simulations the main focus is on the electrical and optical properties of the nanolasers i.e. electrical resistance, threshold voltage, threshold current and wallplug efficiency. In the current flow evaluation the lowest...... threshold current has been achieved with the lateral electrical injection through the BH; while the lowest resistance has been obtained from the current post structure even though this model shows a higher current threshold because of the lack of carrier confinement. Final scope of the simulations...

  2. Scheme of thinking quantum systems

    International Nuclear Information System (INIS)

    Yukalov, V I; Sornette, D

    2009-01-01

    A general approach describing quantum decision procedures is developed. The approach can be applied to quantum information processing, quantum computing, creation of artificial quantum intelligence, as well as to analyzing decision processes of human decision makers. Our basic point is to consider an active quantum system possessing its own strategic state. Processing information by such a system is analogous to the cognitive processes associated to decision making by humans. The algebra of probability operators, associated with the possible options available to the decision maker, plays the role of the algebra of observables in quantum theory of measurements. A scheme is advanced for a practical realization of decision procedures by thinking quantum systems. Such thinking quantum systems can be realized by using spin lattices, systems of magnetic molecules, cold atoms trapped in optical lattices, ensembles of quantum dots, or multilevel atomic systems interacting with electromagnetic field

  3. Yellow light for green scheme

    International Nuclear Information System (INIS)

    Morch, Stein

    2004-01-01

    The article asserts that there could be an investment boom for wind, hydro and bio power in a common Norwegian-Swedish market scheme for green certificates. The Swedish authorities are ready, and the Norwegian government is preparing a report to the Norwegian Parliament. What are the ambitions of Norway, and will hydro power be included? A green certificate market common to more countries has never before been established and requires the solution of many challenging problems. In Sweden, certificate support is expected to promote primarily bioenergy, wind power and small-scale hydro power. In Norway there is an evident potential for wind power, and more hydro power can be developed if desired

  4. Pomeranchuk conjecture and symmetry schemes

    Energy Technology Data Exchange (ETDEWEB)

    Galindo, A.; Morales, A.; Ruegg, H. [Junta de Energia Nuclear, Madrid (Spain); European Organization for Nuclear Research, Geneva (Switzerland); University of Geneva, Geneva (Switzerland)

    1963-01-15

    Pomeranchuk has conjectured that the cross-sections for charge-exchange processes vanish asymptotically as the energy tends to infinity. (By "charge" it is meant any internal quantum number, like electric charge, hypercharge, ...). It has been stated by several people that this conjecture implies equalities among the total cross-sections whenever any symmetry scheme is invoked for the strong interactions. But to our knowledge no explicit general proof of this statement has been given so far. We want to give this proof for any compact Lie group. We also prove, under certain assumptions, that the equality of the total cross-sections implies that s⁻¹ times the charge-exchange forward scattering absorptive amplitudes tend to zero as s -> ∞.

  5. Further attacks on Yeung-Mintzer fragile watermarking scheme

    Science.gov (United States)

    Fridrich, Jessica; Goljan, Miroslav; Memon, Nasir D.

    2000-05-01

    In this paper, we describe new and improved attacks on the authentication scheme previously proposed by Yeung and Mintzer. Previous attacks assumed that the binary watermark logo inserted in an image for the purposes of authentication was known. Here we remove that assumption and show how the scheme is still vulnerable, even if the binary logo is not known but the attacker has access to multiple images that have been watermarked with the same secret key and contain the same (but unknown) logo. We present two attacks. The first attack infers the secret watermark insertion function and the binary logo, given multiple images authenticated with the same key and containing the same logo. We show that a very good approximation to the logo and watermark insertion function can be constructed using as few as two images. With color images, one needs many more images, nevertheless the attack is still feasible. The second attack we present, which we call the 'collage-attack' is a variation of the Holliman-Memon counterfeiting attack. The proposed variation does not require knowledge of the watermark logo and produces counterfeits of superior quality by means of a suitable dithering process that we develop.

  6. Matroids and quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Sarvepalli, Pradeep; Raussendorf, Robert

    2010-01-01

    A secret-sharing scheme is a cryptographic protocol to distribute a secret state in an encoded form among a group of players such that only authorized subsets of the players can reconstruct the secret. Classically, efficient secret-sharing schemes have been shown to be induced by matroids. Furthermore, access structures of such schemes can be characterized by an excluded minor relation. No such relations are known for quantum secret-sharing schemes. In this paper we take the first steps toward a matroidal characterization of quantum-secret-sharing schemes. In addition to providing a new perspective on quantum-secret-sharing schemes, this characterization has important benefits. While previous work has shown how to construct quantum-secret-sharing schemes for general access structures, these schemes are not claimed to be efficient. In this context the present results prove to be useful; they enable us to construct efficient quantum-secret-sharing schemes for many general access structures. More precisely, we show that an identically self-dual matroid that is representable over a finite field induces a pure-state quantum-secret-sharing scheme with information rate 1.

  7. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
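
    A minimal sketch of the averaging rule itself, assuming each model supplies a log evidence and a point prediction: posterior model probabilities are prior-weighted, normalized evidences, and the averaged prediction is their weighted sum. The numbers are invented for illustration.

    ```python
    import numpy as np

    def model_average(log_evidence, predictions, prior=None):
        """Weight each model's prediction by its (prior times) evidence,
        normalized over the set of candidate models."""
        log_evidence = np.asarray(log_evidence, dtype=float)
        if prior is None:
            prior = np.full(len(log_evidence), 1.0 / len(log_evidence))
        w = np.exp(log_evidence - log_evidence.max()) * prior
        w /= w.sum()  # posterior model probabilities
        return w, w @ np.asarray(predictions, dtype=float)

    # Two models predicting the same quantity; the second explains the
    # data much better and therefore dominates the average.
    w, pred = model_average(log_evidence=[-12.0, -7.0], predictions=[0.2, 0.8])
    print(w, pred)
    ```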

  8. Efficient Bayesian inference for ARFIMA processes

    Science.gov (United States)

    Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.

    2015-03-01

    Many geophysical quantities, like atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long-range dependence (LRD). LRD means that these quantities experience non-trivial temporal memory, which potentially enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LRD. In this paper we present a modern and systematic approach to the inference of LRD. Rather than Mandelbrot's fractional Gaussian noise, we use the more flexible Autoregressive Fractional Integrated Moving Average (ARFIMA) model which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LRD, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g. short memory effects) can be integrated over in order to focus on long memory parameters, and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data, with favorable comparison to the standard estimators.
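
    One standard ingredient of ARFIMA modelling that is easy to make concrete is the fractional-differencing operator (1 - B)^d, whose expansion coefficients obey w_0 = 1 and w_k = w_{k-1}(k - 1 - d)/k. The sketch below applies the truncated expansion to a series; this is textbook ARFIMA machinery, not the paper's approximate likelihood.

    ```python
    import numpy as np

    def frac_diff_weights(d, n):
        """Coefficients of the fractional-differencing operator (1 - B)^d."""
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def frac_diff(x, d):
        """Fractionally difference a series with a truncated expansion."""
        w = frac_diff_weights(d, len(x))
        return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

    x = np.cumsum(np.random.default_rng(1).standard_normal(500))
    y = frac_diff(x, d=0.4)  # d in (0, 0.5) corresponds to stationary long memory
    ```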

  9. Campbell's and Rubin's Perspectives on Causal Inference

    Science.gov (United States)

    West, Stephen G.; Thoemmes, Felix

    2010-01-01

    Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on…

  10. Bayesian structural inference for hidden processes

    Science.gov (United States)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  11. HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Dawson, William A.; Hogg, David W.; Marshall, Philip J.; Bard, Deborah J.; Meyers, Joshua; Lang, Dustin

    2015-01-01

    Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics

  12. Interest, Inferences, and Learning from Texts

    Science.gov (United States)

    Clinton, Virginia; van den Broek, Paul

    2012-01-01

    Topic interest and learning from texts have been found to be positively associated with each other. However, the reason for this positive association is not well understood. The purpose of this study is to examine a cognitive process, inference generation, that could explain the positive association between interest and learning from texts. In…

  13. Ignorability in Statistical and Probabilistic Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2005-01-01

    When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed...

  14. Evolutionary inference via the Poisson Indel Process.

    Science.gov (United States)

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  15. Culture and Pragmatic Inference in Interpersonal Communication

    African Journals Online (AJOL)

    cognitive process, and that the human capacity for inference is crucially important ... been noted that research in interpersonal communication is currently pushing the ... communicative actions, the social-cultural world of everyday life is not only ... personal experiences of the authors', as documented over time and recreated ...

  16. Inference and the Introductory Statistics Course

    Science.gov (United States)

    Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross

    2011-01-01

    This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its…

  17. Statistical Inference on the Canadian Middle Class

    Directory of Open Access Journals (Sweden)

    Russell Davidson

    2018-03-01

    Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class, and also their share in total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of the sample estimates and the covariance between them. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
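
    A minimal sketch of bootstrap inference on a middle-class population share, assuming an illustrative definition (incomes within [0.75, 1.25] times the median); the paper derives asymptotic variances analytically and works with census earnings, so everything below is a toy stand-in.

    ```python
    import numpy as np

    def middle_share(income, lo=0.75, hi=1.25):
        """Share of the population within [lo, hi] x median income.
        (The interval is an illustrative choice, not the paper's.)"""
        m = np.median(income)
        return np.mean((income >= lo * m) & (income <= hi * m))

    def bootstrap_ci(income, stat, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for stat(income)."""
        rng = np.random.default_rng(seed)
        reps = [stat(rng.choice(income, size=len(income), replace=True))
                for _ in range(n_boot)]
        return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

    income = np.random.default_rng(2).lognormal(mean=10.0, sigma=0.6, size=5000)
    print(middle_share(income), bootstrap_ci(income, middle_share))
    ```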

  18. Spurious correlations and inference in landscape genetics

    Science.gov (United States)

    Samuel A. Cushman; Erin L. Landguth

    2010-01-01

    Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...

  19. Cortical information flow during inferences of agency

    NARCIS (Netherlands)

    Dogge, Myrthel; Hofman, Dennis; Boersma, Maria; Dijkerman, H Chris; Aarts, Henk

    2014-01-01

    Building on the recent finding that agency experiences do not merely rely on sensorimotor information but also on cognitive cues, this exploratory study uses electroencephalographic recordings to examine functional connectivity during agency inference processing in a setting where action and outcome

  20. Quasi-Experimental Designs for Causal Inference

    Science.gov (United States)

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  1. The importance of learning when making inferences

    Directory of Open Access Journals (Sweden)

    Jorg Rieskamp

    2008-03-01

    The assumption that people possess a repertoire of strategies to solve the inference problems they face has been made repeatedly. The experimental findings of two previous studies on strategy selection are reexamined from a learning perspective, which argues that people learn to select strategies for making probabilistic inferences. This learning process is modeled with the strategy selection learning (SSL) theory, which assumes that people develop subjective expectancies for the strategies they have. They select strategies proportional to their expectancies, which are updated on the basis of experience. For the study by Newell, Weston, and Shanks (2003) it can be shown that people did not anticipate the success of a strategy from the beginning of the experiment. Instead, the behavior observed at the end of the experiment was the result of a learning process that can be described by the SSL theory. For the second study, by Bröder and Schiffer (2006), the SSL theory is able to provide an explanation for why participants only slowly adapted to new environments in a dynamic inference situation. The reanalysis of the previous studies illustrates the importance of learning for probabilistic inferences.
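
    A toy reading of the SSL mechanism as described here, assuming strategies are chosen with probability proportional to their expectancies and the chosen strategy's expectancy is reinforced by its payoff; initial expectancies and payoffs are invented for illustration.

    ```python
    import numpy as np

    def ssl_choice_probs(q):
        """Strategies are selected proportional to their expectancies."""
        q = np.asarray(q, dtype=float)
        return q / q.sum()

    def ssl_update(q, chosen, payoff):
        """Reinforce the chosen strategy's expectancy with its payoff."""
        q = q.copy()
        q[chosen] += payoff
        return q

    rng = np.random.default_rng(0)
    q = np.array([10.0, 10.0])    # initial expectancies (illustrative)
    payoffs = [1.0, 0.2]          # strategy 0 works better in this environment
    for _ in range(200):
        c = rng.choice(2, p=ssl_choice_probs(q))
        q = ssl_update(q, c, payoffs[c])
    print(ssl_choice_probs(q))    # selection drifts toward strategy 0
    ```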

  2. Colligation, Or the Logical Inference of Interconnection

    DEFF Research Database (Denmark)

    Falster, Peter

    1998-01-01

    laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...

  3. Colligation or, The Logical Inference of Interconnection

    DEFF Research Database (Denmark)

    Franksen, Ole Immanuel; Falster, Peter

    2000-01-01

    laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...

  4. Inferring motion and location using WLAN RSSI

    NARCIS (Netherlands)

    Kavitha Muthukrishnan, K.; van der Zwaag, B.J.; Havinga, Paul J.M.; Fuller, R.; Koutsoukos, X.

    2009-01-01

    We present novel algorithms to infer movement by making use of inherent fluctuations in the received signal strengths from existing WLAN infrastructure. We evaluate the performance of the presented algorithms based on classification metrics such as recall and precision using annotated traces

  5. Study of the shock ignition scheme in inertial confinement fusion

    International Nuclear Information System (INIS)

    Lafon, M.

    2011-01-01

    The Shock Ignition (SI) scheme is an alternative to classical ignition schemes in Inertial Confinement Fusion. Its distinguishing feature is the relaxation of constraints during the compression phase, with the ignition conditions met by launching a short and intense laser pulse (∼500 ps, ∼300 TW) on the pre-assembled fuel at the end of the implosion. In this thesis, it has been established that the SI process leads to a non-isobaric fuel configuration at the ignition time, thus modifying the ignition criteria of Deuterium-Tritium (DT) relative to the conventional schemes. A gain model has been developed and gain curves have been inferred and numerically validated. This hydrodynamic modeling has demonstrated that the SI process allows higher gain and a lower ignition energy threshold than conventional ignition, owing to the high hot-spot pressure at ignition time resulting from the ignitor shock propagation. The radiative hydrodynamic CHIC code developed at the CELIA laboratory has been used to determine parametric dependences describing the optimal conditions for target designs leading to ignition. These numerical studies have highlighted the potential of SI with regard to saving laser energy and obtaining high gains, as well as its safety margins and ignition robustness. Finally, the results of the first SI experiments performed in spherical geometry on the OMEGA laser facility (NY, USA) are presented. An interpretation of the experimental data is proposed from one- and two-dimensional hydrodynamic simulations. Different avenues are then explored to account for the differences observed between experimental and numerical data, and alternative solutions to improve performance are suggested. (author)

  6. Bayesian Inference on the Memory Parameter for Gamma-Modulated Regression Models

    Directory of Open Access Journals (Sweden)

    Plinio Andrade

    2015-09-01

    In this work, we propose a Bayesian methodology to make inferences for the memory parameter and other characteristics under non-standard assumptions for a class of stochastic processes. This class generalizes the Gamma-modulated process, with trajectories that exhibit long memory behavior, as well as decreasing variability as time increases. Different values of the memory parameter influence the speed of this decrease, making this heteroscedastic model very flexible. Its properties are used to implement an approximate Bayesian computation and MCMC scheme to obtain posterior estimates. We test and validate our method through simulations and real data from the large 2010 earthquake in Chile.
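
    A sketch of the simplest member of the approximate Bayesian computation family that the authors build on (plain rejection ABC), using a toy AR(1) process and its lag-1 autocorrelation as stand-ins for the Gamma-modulated process and its summaries; tolerance and sample sizes are illustrative.

    ```python
    import numpy as np

    def abc_rejection(obs, simulate, prior_draw, eps, n=5000, seed=0):
        """Plain ABC rejection: keep prior draws whose simulated summary
        lands within eps of the observed summary. (The paper combines ABC
        with MCMC; this is the simplest member of that family.)"""
        rng = np.random.default_rng(seed)
        kept = [th for th in (prior_draw(rng) for _ in range(n))
                if abs(simulate(th, rng) - obs) < eps]
        return np.array(kept)

    def simulate(theta, rng, n=300):
        """Toy AR(1) stand-in; the summary is the lag-1 autocorrelation."""
        x = np.zeros(n)
        for t in range(n - 1):
            x[t + 1] = theta * x[t] + rng.standard_normal()
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    post = abc_rejection(0.7, simulate, lambda rng: rng.uniform(0.0, 1.0), eps=0.05)
    print(post.mean(), post.size)  # accepted draws approximate the posterior
    ```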

  7. A Multiobjective Fuzzy Inference System based Deployment Strategy for a Distributed Mobile Sensor Network

    Directory of Open Access Journals (Sweden)

    Amol P. Bhondekar

    2010-03-01

    The sensor deployment scheme largely governs the effectiveness of a distributed wireless sensor network. Issues such as energy conservation and clustering make the deployment problem much more complex. A multiobjective Fuzzy Inference System based strategy for mobile sensor deployment is presented in this paper. This strategy gives a synergistic combination of energy capacity, clustering and peer-to-peer deployment. Performance of our strategy is evaluated in terms of coverage, uniformity, speed and clustering. Our algorithm is compared against a modified distributed self-spreading algorithm and exhibits better performance.

  8. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.

    2015-01-01

    abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale...... through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...

  9. Active inference, sensory attenuation and illusions.

    Science.gov (United States)

    Brown, Harriet; Adams, Rick A; Parees, Isabel; Edwards, Mark; Friston, Karl

    2013-11-01

    Active inference provides a simple and neurobiologically plausible account of how action and perception are coupled in producing (Bayes) optimal behaviour. This can be seen most easily as minimising prediction error: we can either change our predictions to explain sensory input through perception, or we can actively change sensory input to fulfil our predictions. In active inference, this action is mediated by classical reflex arcs that minimise proprioceptive prediction error created by descending proprioceptive predictions. However, this creates a conflict between action and perception, in that self-generated movements require predictions to override the sensory evidence that one is not actually moving. Yet ignoring sensory evidence means that externally generated sensations will not be perceived. Conversely, attending to (proprioceptive and somatosensory) sensations enables the detection of externally generated events but precludes generation of actions. This conflict can be resolved by attenuating the precision of sensory evidence during movement or, equivalently, attending away from the consequences of self-made acts. We propose that this Bayes-optimal withdrawal of precise sensory evidence during movement is the cause of psychophysical sensory attenuation. Furthermore, it explains the force-matching illusion and reproduces empirical results almost exactly. Finally, if attenuation is removed, the force-matching illusion disappears and false (delusional) inferences about agency emerge. This is important, given the negative correlation between sensory attenuation and delusional beliefs in normal subjects--and the reduction in the magnitude of the illusion in schizophrenia. Active inference therefore links the neuromodulatory optimisation of precision to sensory attenuation and illusory phenomena during the attribution of agency in normal subjects. It also provides a functional account of deficits in syndromes characterised by false inference.

  10. How can conceptual schemes change teaching?

    Science.gov (United States)

    Wickman, Per-Olof

    2012-03-01

    Lundqvist, Almqvist and Östman describe a teacher's manner of teaching and the possible consequences it may have for students' meaning making. In doing this the article examines a teacher's classroom practice by systematizing the teacher's transactions with the students in terms of certain conceptual schemes, namely the epistemological moves, educational philosophies and the selective traditions of this practice. In connection to their study one may ask how conceptual schemes could change teaching. This article examines how the relationship of the conceptual schemes produced by educational researchers to educational praxis has developed from the middle of the last century to today. The relationship is described as having been transformed in three steps: (1) teacher deficit and social engineering, where conceptual schemes are little acknowledged, (2) reflecting practitioners, where conceptual schemes are mangled through teacher practice to aid the choices of already knowledgeable teachers, and (3) the mangling of the conceptual schemes by researchers through practice with the purpose of revising theory.

  11. Resonance ionization scheme development for europium

    Energy Technology Data Exchange (ETDEWEB)

    Chrysalidis, K., E-mail: katerina.chrysalidis@cern.ch; Goodacre, T. Day; Fedosseev, V. N.; Marsh, B. A. [CERN (Switzerland); Naubereit, P. [Johannes Gutenberg-Universität, Institut für Physik (Germany); Rothe, S.; Seiffert, C. [CERN (Switzerland); Kron, T.; Wendt, K. [Johannes Gutenberg-Universität, Institut für Physik (Germany)

    2017-11-15

    Odd-parity autoionizing states of europium have been investigated by resonance ionization spectroscopy via two-step, two-resonance excitations. The aim of this work was to establish ionization schemes specifically suited for europium ion beam production using the ISOLDE Resonance Ionization Laser Ion Source (RILIS). 13 new RILIS-compatible ionization schemes are proposed. The scheme development was the first application of the Photo Ionization Spectroscopy Apparatus (PISA) which has recently been integrated into the RILIS setup.

  12. Secure RAID Schemes for Distributed Storage

    OpenAIRE

    Huang, Wentao; Bruck, Jehoshua

    2016-01-01

    We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal or almost optimal encoding and ...
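
    A toy example in the same spirit, though not one of the paper's constructions: an XOR-based 2-out-of-3 scheme in which any two nodes recover the data while any single node on its own holds only uniformly random bytes.

    ```python
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(p ^ q for p, q in zip(a, b))

    def share_2_of_3(x: bytes):
        """Split x across three nodes: any two shares recover x; any
        single share alone is uniformly random and reveals nothing."""
        r1, r2 = os.urandom(len(x)), os.urandom(len(x))
        return (r1, r2), (r2, xor(x, r1)), (xor(x, r2), r1)

    s1, s2, s3 = share_2_of_3(b"payload")
    assert xor(s1[0], s2[1]) == b"payload"   # nodes 1+2: r1 ^ (x^r1)
    assert xor(s1[1], s3[0]) == b"payload"   # nodes 1+3: r2 ^ (x^r2)
    assert xor(s3[1], s2[1]) == b"payload"   # nodes 2+3: r1 ^ (x^r1)
    ```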

  13. A new access scheme in OFDMA systems

    Institute of Scientific and Technical Information of China (English)

    GU Xue-lin; YAN Wei; TIAN Hui; ZHANG Ping

    2006-01-01

    This article presents a dynamic random access scheme for orthogonal frequency division multiple access (OFDMA) systems. The key features of the proposed scheme are: it is a combination of both the distributed and the centralized schemes, it can accommodate several delay sensitivity classes, and it can adjust the number of random access channels in a media access control (MAC) frame and the access probability according to the outcome of Mobile Terminals' access attempts in previous MAC frames. For floating populated packet-based networks, the proposed scheme possibly leads to high average user satisfaction.

  14. A Spatial Domain Quantum Watermarking Scheme

    International Nuclear Information System (INIS)

    Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang

    2016-01-01

    This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is the key to achieving it. This paper gives a feasible quantum circuit for the presented scheme. In order to give the quantum circuit, a new quantum multi-control rotation gate, which can be achieved with quantum basic gates, is designed. With this quantum circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides running the given quantum circuit in reverse, the paper gives another watermark extracting algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all the given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme, not based on any transform algorithm on quantum images. Meanwhile, it can ensure that the watermark remains secure even if it has been found. With the given quantum circuit, this paper implements simulation experiments for the presented scheme. The experimental results show that the scheme does well in visual quality and embedding capacity. (paper)

  15. Quantum signature scheme for known quantum messages

    International Nuclear Information System (INIS)

    Kim, Taewan; Lee, Hyang-Sook

    2015-01-01

    When we want to sign a quantum message that we create, we can use arbitrated quantum signature schemes, which can sign not only known quantum messages but also unknown quantum messages. However, since arbitrated quantum signature schemes need the help of a trusted arbitrator in each verification of the signature, they are known to be inconvenient in practical use. If we consider only known quantum messages, as in the above situation, a quantum signature scheme with a more efficient structure can exist. In this paper, we present a new quantum signature scheme for known quantum messages without the help of an arbitrator. Differing from arbitrated quantum signature schemes, which are based on the quantum one-time pad with a symmetric key, our scheme is based on quantum public-key cryptosystems, so the validity of the signature can be verified by a receiver without the help of an arbitrator. Moreover, we show that our scheme provides the functions of quantum message integrity, user authentication and non-repudiation of the origin, as in digital signature schemes. (paper)

  16. A perturbative study of two four-quark operators in finite volume renormalization schemes

    CERN Document Server

    Palombi, Filippo; Sint, S

    2006-01-01

    Starting from the QCD Schroedinger functional (SF), we define a family of renormalization schemes for two four-quark operators, which are, in the chiral limit, protected against mixing with other operators. With the appropriate flavour assignments these operators can be interpreted as part of either the $\Delta F=1$ or $\Delta F=2$ effective weak Hamiltonians. In view of lattice QCD with Wilson-type quarks, we focus on the parity odd components of the operators, since these are multiplicatively renormalized both on the lattice and in continuum schemes. We consider 9 different SF schemes and relate them to commonly used continuum schemes at one-loop order of perturbation theory. In this way the two-loop anomalous dimensions in the SF schemes can be inferred. As a by-product of our calculation we also obtain the one-loop cutoff effects in the step-scaling functions of the respective renormalization constants, for both O(a) improved and unimproved Wilson quarks. Our results will be needed in a separate study of ...

  17. Genetics Home Reference: GRN-related frontotemporal dementia

    Science.gov (United States)

    ... making a protein called granulin (also known as progranulin). Granulin is active in many different tissues in ... Boeve B, Feldman H, Hutton M. Mutations in progranulin cause tau-negative frontotemporal dementia linked to chromosome ...

  18. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....

  19. An Intuitive Dashboard for Bayesian Network Inference

    International Nuclear Information System (INIS)

    Reddy, Vikas; Farr, Anna Charisse; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K D V

    2014-01-01

    Current Bayesian network software packages provide good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using Qt and SMILE libraries in C++

  20. The NIFTY way of Bayesian signal inference

    International Nuclear Information System (INIS)

    Selig, Marco

    2014-01-01

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy

  1. The NIFTy way of Bayesian signal inference

    Science.gov (United States)

    Selig, Marco

    2014-12-01

    We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  2. Bayesianism and inference to the best explanation

    Directory of Open Access Journals (Sweden)

    Valeriano IRANZO

    2008-01-01

    Bayesianism and Inference to the best explanation (IBE) are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly I explore several alternatives to include explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” (IBE-Bay) and “frequentist-Bayesianism” (Freq-Bay). After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay: (i) endorses a role for explanatory value in the assessment of scientific hypotheses; (ii) avoids a purely subjectivist reading of prior probabilities; and (iii) fits better than IBE-Bayesianism with two basic facts about science, i.e., the prominent role played by empirical testing and the existence of many scientific theories in the past that failed to fulfil their promises and were subsequently abandoned.

  3. Inferring genetic interactions from comparative fitness data.

    Science.gov (United States)

    Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko

    2017-12-20

    Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM-family of β-lactamase associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.
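
    For the two-locus case, the record's rank-order idea reduces to the sign of the standard epistasis measure; a minimal sketch (with invented fitness values) is:

        # Two-locus epistasis e = w11 - w10 - w01 + w00; additivity iff e = 0.
        # Certain fitness rank orders already fix the sign of e, which is the
        # observation the paper generalizes to higher order interactions.
        def epistasis(w00, w01, w10, w11):
            return w11 - w10 - w01 + w00

        # Example: the double mutant underperforms the additive expectation.
        print(epistasis(w00=1.00, w01=1.10, w10=1.20, w11=1.15))  # -0.15 < 0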

  4. An emergent approach to analogical inference

    Science.gov (United States)

    Thibodeau, Paul H.; Flusberg, Stephen J.; Glick, Jeremy J.; Sternberg, Daniel A.

    2013-03-01

    In recent years, a growing number of researchers have proposed that analogy is a core component of human cognition. According to the dominant theoretical viewpoint, analogical reasoning requires a specific suite of cognitive machinery, including explicitly coded symbolic representations and a mapping or binding mechanism that operates over these representations. Here we offer an alternative approach: we find that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery. The results also parallel findings from the developmental literature on analogy, demonstrating a shift from an initial reliance on surface feature similarity to the use of relational similarity later in training. Variants of the model allow us to consider and rule out alternative accounts of its performance. We conclude by discussing how these findings can potentially refine our understanding of the processes that are required to perform analogical inference.

  5. Pointwise probability reinforcements for robust statistical inference.

    Science.gov (United States)

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Statistical inference from imperfect photon detection

    International Nuclear Information System (INIS)

    Audenaert, Koenraad M R; Scheel, Stefan

    2009-01-01

    We consider the statistical properties of photon detection with imperfect detectors that exhibit dark counts and less than unit efficiency, in the context of tomographic reconstruction. In this context, the detectors are used to implement certain positive operator-valued measures (POVMs) that would allow us to reconstruct the quantum state or quantum process under consideration. Here we look at the intermediate step of inferring outcome probabilities from measured outcome frequencies, and show how this inference can be performed in a statistically sound way in the presence of detector imperfections. Merging outcome probabilities for different sets of POVMs into a consistent quantum state picture has been treated elsewhere (Audenaert and Scheel 2009 New J. Phys. 11 023028). Single-photon pulsed measurements as well as continuous wave measurements are covered.
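
    The basic inversion step can be sketched for a binary (click/no-click) detector. Assuming independence of dark counts and photon detection, P(no click) = (1 - d)(1 - eta*p); the point estimate below simply inverts that relation (the record's contribution is doing this inference in a statistically sound way, which the naive inversion glosses over):

        import numpy as np

        # Naive point estimate of the true outcome probability p from observed
        # click frequencies, for efficiency eta and dark-count probability d.
        def infer_photon_prob(clicks, trials, eta=0.6, d=0.01):
            q_hat = clicks / trials                          # observed frequency
            p_hat = (1.0 - (1.0 - q_hat) / (1.0 - d)) / eta  # invert the model
            return float(np.clip(p_hat, 0.0, 1.0))           # keep it physical

        print(infer_photon_prob(clicks=350, trials=1000))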

  7. An Intuitive Dashboard for Bayesian Network Inference

    Science.gov (United States)

    Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.

    2014-03-01

    Current Bayesian network software packages provide good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++.

  8. Working with sample data exploration and inference

    CERN Document Server

    Chaffe-Stengel, Priscilla

    2014-01-01

    Managers and analysts routinely collect and examine key performance measures to better understand their operations and make good decisions. Being able to render the complexity of operations data into a coherent account of significant events requires an understanding of how to work well with raw data and to make appropriate inferences. Although some statistical techniques for analyzing data and making inferences are sophisticated and require specialized expertise, there are methods that are understandable and applicable by anyone with basic algebra skills and the support of a spreadsheet package. By applying these fundamental methods themselves rather than turning over both the data and the responsibility for analysis and interpretation to an expert, managers will develop a richer understanding and potentially gain better control over their environment. This text is intended to describe these fundamental statistical techniques to managers, data analysts, and students. Statistical analysis of sample data is enh...

  9. Parametric inference for biological sequence analysis.

    Science.gov (United States)

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
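
    The sum-product algorithm on a chain is just the HMM forward pass; the polytope propagation algorithm of the record replaces its sums and products with polytope operations. A minimal numeric version (toy parameters) is:

        import numpy as np

        def forward(pi, A, B, obs):
            """Likelihood of an observation sequence under an HMM.
            pi: initial probs (S,), A: transitions (S,S), B: emissions (S,O)."""
            alpha = pi * B[:, obs[0]]
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]   # sum-product recursion
            return alpha.sum()

        pi = np.array([0.5, 0.5])
        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.8, 0.2], [0.3, 0.7]])
        print(forward(pi, A, B, obs=[0, 1, 1, 0]))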

  10. Inferences on Children’s Reading Groups

    Directory of Open Access Journals (Sweden)

    Javier González García

    2009-05-01

    This article focuses on the non-literal information of a text, which can be inferred from key elements or clues offered by the text itself. This kind of text is called implicit text or inference, due to the thinking process that it stimulates. The explicit resources that lead to information retrieval are related to others of implicit information, which have increased their relevance. In this study, during two courses, how two teachers interpret three stories and how they establish a debate dividing the class into three student groups was analyzed. The sample was formed by two classes from two urban public schools in the city of Burgos (Spain), and two from public schools in Tampico (Mexico). This allowed us to observe an increasing percentage of the group focused on text comprehension, and a smaller percentage of the group perceiving comprehension as a secondary objective.

  11. Inferring Genetic Ancestry: Opportunities, Challenges, and Implications

    OpenAIRE

    Royal, Charmaine D.; Novembre, John; Fullerton, Stephanie M.; Goldstein, David B.; Long, Jeffrey C.; Bamshad, Michael J.; Clark, Andrew G.

    2010-01-01

    Increasing public interest in direct-to-consumer (DTC) genetic ancestry testing has been accompanied by growing concern about issues ranging from the personal and societal implications of the testing to the scientific validity of ancestry inference. The very concept of “ancestry” is subject to misunderstanding in both the general and scientific communities. What do we mean by ancestry? How exactly is ancestry measured? How far back can such ancestry be defined and by which genetic tools? How ...

  12. Spatial Inference Based on Geometric Proportional Analogies

    OpenAIRE

    Mullally, Emma-Claire; O'Donoghue, Diarmuid P.

    2006-01-01

    We describe an instance-based reasoning solution to a variety of spatial reasoning problems. The solution centers on identifying an isomorphic mapping between labelled graphs that represent some problem data and a known solution instance. We describe a number of spatial reasoning problems that are solved by generating non-deductive inferences, integrating topology with area (and other) features. We report the accuracy of our algorithm on different categories of spatial reasoning tasks from th...

  13. Inferring ontology graph structures using OWL reasoning

    KAUST Repository

    Rodriguez-Garcia, Miguel Angel

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.

  14. Role of Speaker Cues in Attention Inference

    OpenAIRE

    Jin Joo Lee; Cynthia Breazeal; David DeSteno

    2017-01-01

    Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in at...

  15. Inferring ontology graph structures using OWL reasoning.

    Science.gov (United States)

    Rodríguez-García, Miguel Ángel; Hoehndorf, Robert

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.

  16. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance relies on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  17. Using metacognitive cues to infer others' thinking

    OpenAIRE

    André Mata; Tiago Almeida

    2014-01-01

    Three studies tested whether people use cues about the way other people think (for example, whether others respond fast vs. slow) to infer what responses other people might give to reasoning problems. People who solve reasoning problems using deliberative thinking have better insight than intuitive problem-solvers into the responses that other people might give to the same problems. Presumably because deliberative responders think of intuitive responses before they think o...

  18. Thermodynamics of statistical inference by cells.

    Science.gov (United States)

    Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj

    2014-10-03

    The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.

  19. Anonymous Credential Schemes with Encrypted Attributes

    NARCIS (Netherlands)

    Guajardo Merchan, J.; Mennink, B.; Schoenmakers, B.

    2011-01-01

    In anonymous credential schemes, users obtain credentials on certain attributes from an issuer, and later show these credentials to a relying party anonymously and without fully disclosing the attributes. In this paper, we introduce the notion of (anonymous) credential schemes with encrypted attributes.

  20. Community healthcare financing scheme: findings among residents ...

    African Journals Online (AJOL)

    ... none were active participants as 2 (0.6%) were indifferent. There was a statistically significant relationship (Fisher's exact test, p < 0.0001) between sex and knowledge of the scheme. Conclusion: Knowledge of the scheme was poor among the majority of the respondents and none were active participants. Bribery and corruption was the ...

  1. Improved Load Shedding Scheme considering Distributed Generation

    DEFF Research Database (Denmark)

    Das, Kaushik; Nitsas, Antonios; Altin, Müfit

    2017-01-01

    With high penetration of distributed generation (DG), conventional under-frequency load shedding (UFLS) schemes face many challenges and may not perform as expected. This article proposes new UFLS schemes, which are designed to overcome the shortcomings of the traditional load shedding scheme...

  2. A generalized scheme for designing multistable continuous ...

    Indian Academy of Sciences (India)

    In this paper, a generalized scheme is proposed for designing multistable continuous dynamical systems. The scheme is based on the concept of partial synchronization of states and the concept of constants of motion. The most important observation is that by coupling two m-dimensional dynamical systems, multistable ...

  3. Consolidation of the health insurance scheme

    CERN Document Server

    Association du personnel

    2009-01-01

    In the last issue of Echo, we highlighted CERN’s obligation to guarantee a social security scheme for all employees, pensioners and their families. In that issue we talked about the first component: pensions. This time we shall discuss the other component: the CERN Health Insurance Scheme (CHIS).

  4. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from the skin in the second stage.

  5. Privacy Preserving Mapping Schemes Supporting Comparison

    NARCIS (Netherlands)

    Tang, Qiang

    2010-01-01

    To cater to the privacy requirements in cloud computing, we introduce a new primitive, namely Privacy Preserving Mapping (PPM) schemes supporting comparison. An PPM scheme enables a user to map data items into images in such a way that, with a set of images, any entity can determine the <, =, >

  6. Mixed ultrasoft/norm-conserved pseudopotential scheme

    DEFF Research Database (Denmark)

    Stokbro, Kurt

    1996-01-01

    A variant of the Vanderbilt ultrasoft pseudopotential scheme, where the norm conservation is released for only one or a few angular channels, is presented. Within this scheme some difficulties of the truly ultrasoft pseudopotentials are overcome without sacrificing the pseudopotential softness. (...

  7. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    Directory of Open Access Journals (Sweden)

    Ali Safa Sadiq

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, also relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, which are collected from available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches.
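
    The adaptive idea can be sketched as follows: instead of fixing the centres and spreads of the membership functions, track the running mean and standard deviation of each metric. The metric names, Gaussian membership form, and weight values below are illustrative assumptions, not the paper's exact design:

        import numpy as np

        def gaussian_mf(x, mean, std):
            return np.exp(-0.5 * ((x - mean) / std) ** 2)

        def handover_score(rss, load, direction, history):
            # Membership coefficients adapt to the observed statistics.
            scores = []
            for name, value in (("rss", rss), ("load", load), ("direction", direction)):
                h = np.asarray(history[name])
                mu, sigma = h.mean(), h.std() + 1e-9
                scores.append(gaussian_mf(value, mu, sigma))
            weights = np.array([0.5, 0.3, 0.2])   # adjustable weight vector
            return float(weights @ np.array(scores))

        history = {"rss": [-70, -68, -72], "load": [0.40, 0.50, 0.45], "direction": [10, 12, 8]}
        print(handover_score(rss=-69, load=0.42, direction=11, history=history))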

  8. Bootstrap inference when using multiple imputation.

    Science.gov (United States)

    Schomaker, Michael; Heumann, Christian

    2018-04-16

    Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
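
    One of the orderings the record examines ("bootstrap first, then impute") can be sketched as below. For brevity the sketch uses a single mean imputation inside each bootstrap resample and a percentile interval for the sample mean; a faithful implementation would draw several proper imputations per resample and pool them:

        import numpy as np

        def boot_mi_ci(x, n_boot=2000, alpha=0.05, seed=1):
            rng = np.random.default_rng(seed)
            x = np.asarray(x, dtype=float)
            stats = []
            for _ in range(n_boot):
                xb = rng.choice(x, size=x.size, replace=True)    # resample rows
                xb = np.where(np.isnan(xb), np.nanmean(xb), xb)  # impute in-resample
                stats.append(xb.mean())
            return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

        data = [1.2, 2.3, np.nan, 3.1, 2.8, np.nan, 1.9]
        print(boot_mi_ci(data))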

  9. Inferring epidemic network topology from surveillance data.

    Directory of Open Access Journals (Sweden)

    Xiang Wan

    The transmission of infectious diseases can be affected by many or even hidden factors, making it difficult to accurately predict when and where outbreaks may emerge. One approach at the moment is to develop and deploy surveillance systems in an effort to detect outbreaks as early as possible. This enables policy makers to modify and implement strategies for the control of transmission. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model on the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays a significant effect on the propagation of infectious diseases.

  10. Role of Speaker Cues in Attention Inference

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2017-10-01

    Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old) storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.

  11. Cortical information flow during inferences of agency

    Directory of Open Access Journals (Sweden)

    Myrthel eDogge

    2014-08-01

    Building on the recent finding that agency experiences do not merely rely on sensorimotor information but also on cognitive cues, this exploratory study uses electroencephalographic recordings to examine functional connectivity during agency inference processing in a setting where action and outcome are independent. Participants completed a computerized task in which they pressed a button followed by one of two color words (red or blue) and rated their experienced agency over producing the color. Before executing the action, a matching or mismatching color word was pre-activated by explicitly instructing participants to produce the color (goal condition) or by briefly presenting the color word (prime condition). In both conditions, experienced agency was higher in matching versus mismatching trials. Furthermore, increased electroencephalography (EEG)-based connectivity strength was observed between parietal and frontal nodes and within the (pre)frontal cortex when color-outcomes matched with goals and participants reported high agency. This pattern of increased connectivity was not identified in trials where outcomes were pre-activated through primes. These results suggest that different connections are involved in the experience and in the loss of agency, as well as in inferences of agency resulting from different types of pre-activation. Moreover, the findings provide novel support for the involvement of a fronto-parietal network in agency inferences.

  12. Phylogenetic Inference of HIV Transmission Clusters

    Directory of Open Access Journals (Sweden)

    Vlad Novitsky

    2017-10-01

    Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, it could both inform public-health strategies and be clinically relevant (e.g., drug-resistance testing).

  13. Causal inference of asynchronous audiovisual speech

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2013-11-01

    During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
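
    The core computation, deciding whether voice and face share a common cause from their asynchrony, can be sketched as a two-hypothesis Bayesian comparison. The Gaussian likelihoods and all parameter values below are illustrative placeholders, not the paper's fitted model:

        import numpy as np

        def p_common(asynchrony_ms, prior_common=0.5, sigma_common=50.0, sigma_sep=250.0):
            def gauss(x, s):
                return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
            like_c1 = gauss(asynchrony_ms, sigma_common)  # one talker: tight sync
            like_c2 = gauss(asynchrony_ms, sigma_sep)     # two causes: loose sync
            num = prior_common * like_c1
            return num / (num + (1.0 - prior_common) * like_c2)

        for dt in (0, 100, 300):
            print(dt, "ms ->", round(float(p_common(dt)), 3))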

  14. Functional neuroanatomy of intuitive physical inference.

    Science.gov (United States)

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world (to understand the scene in front of us, plan actions, and predict what will happen next) we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events: a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.

  15. Elements of Causal Inference: Foundations and Learning Algorithms

    DEFF Research Database (Denmark)

    Peters, Jonas Martin; Janzing, Dominik; Schölkopf, Bernhard

    A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning.

  16. Integrating distributed Bayesian inference and reinforcement learning for sensor management

    NARCIS (Netherlands)

    Grappiolo, C.; Whiteson, S.; Pavlin, G.; Bakker, B.

    2009-01-01

    This paper introduces a sensor management approach that integrates distributed Bayesian inference (DBI) and reinforcement learning (RL). DBI is implemented using distributed perception networks (DPNs), a multiagent approach to performing efficient inference, while RL is used to automatically

  17. Labelling schemes: From a consumer perspective

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Stacey, Julia

    2000-01-01

    Labelling of food products attracts a lot of political attention these days. As a result of a number of food scandals, most European countries have acknowledged the need for more information and better protection of consumers. Labelling schemes are one way of informing and guiding consumers. However, initiatives in relation to labelling schemes seldom take their point of departure in consumers' needs and expectations; and in many cases, the schemes are defined by the institutions guaranteeing the label. It is therefore interesting to study how consumers actually value labelling schemes. A recent MAPP study has investigated the value consumers attach to the Government-controlled labels 'Ø-mærket' and 'Den Blå Lup' and the private supermarket label 'Mesterhakket' when they purchase minced meat. The results reveal four consumer segments that use labelling schemes for food products very…

  18. Birkhoffian Symplectic Scheme for a Quantum System

    International Nuclear Information System (INIS)

    Su Hongling

    2010-01-01

    In this paper, a classical system of ordinary differential equations is built to describe a kind of n-dimensional quantum system. The absorption spectrum and the density of states for the system are defined from the quantum and classical points of view. From the Birkhoffian form of the equations, a Birkhoffian symplectic scheme is derived for solving the n-dimensional equations by using the generating function method. Besides preserving the Birkhoffian structure, the new scheme is proven to preserve the discrete local energy conservation law of the system with zero vector f. Some numerical experiments for a 3-dimensional example show that the new scheme can simulate the general Birkhoffian system better than the implicit midpoint scheme, which is well known to be a symplectic scheme for Hamiltonian systems.

  19. Autonomous droop scheme with reduced generation cost

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    Droop scheme has been widely applied to the control of Distributed Generators (DGs) in microgrids for proportional power sharing based on their ratings. For a standalone microgrid, where a centralized management system is not viable, the proportional power sharing based droop might not suit well since DGs are usually of different types, unlike synchronous generators. This paper presents an autonomous droop scheme that takes into consideration the operating cost, efficiency and emission penalty of each DG, since all these factors directly or indirectly contribute to the Total Generation Cost (TGC) of the overall microgrid. Comparing it with the traditional scheme, the proposed scheme has retained its simplicity, which certainly is a feature preferred by the industry. The overall performance of the proposed scheme has been verified through simulation and experiment.

  20. Exact analysis of Packet Reversed Packet Combining Scheme and Modified Packet Combining Scheme; and a combined scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-07-01

    Packet combining (PC) scheme is a well defined, simple error correction scheme for the detection and correction of errors at the receiver. Although it permits a higher throughput when compared to other basic ARQ protocols, the PC scheme fails to correct errors when errors occur in the same bit locations of copies. In a previous work, a scheme known as the Packet Reversed Packet Combining (PRPC) scheme, which corrects errors that occur at the same bit location of erroneous copies, was studied; however, PRPC does not handle a situation where a packet has more than one error bit. The Modified Packet Combining (MPC) scheme, which can correct double or higher bit errors, was studied elsewhere. Both the PRPC and MPC schemes were believed to offer higher throughput in previous studies; however, neither adequate investigation nor exact analysis was done to substantiate this claim of higher throughput. In this work, an exact analysis of both PRPC and MPC is carried out and the results reported. A combined protocol (PRPC and MPC) is proposed, and the analysis shows that it is capable of offering even higher throughput and better error correction capability at high bit error rate (BER) and larger packet size. (author)
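
    A toy illustration of why reversing the second copy helps: when both plain copies are hit at the same transmitted position, their XOR is empty and plain packet combining is blind, whereas a reversed copy maps the same channel position to a different bit of the packet. Packet contents and error positions are invented:

        def diff_positions(a, b):
            # Candidate error locations: bit positions where the copies disagree.
            return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

        def flip(bits, pos):
            # Channel error at transmitted position pos.
            return bits[:pos] + ("1" if bits[pos] == "0" else "0") + bits[pos + 1:]

        packet = "10110010"
        rx_copy1 = flip(packet, 3)                  # plain copy, error at position 3
        rx_copy2 = flip(packet, 3)                  # second plain copy, same position
        print(diff_positions(rx_copy1, rx_copy2))   # [] -> plain PC cannot see it

        rx_prpc = flip(packet[::-1], 3)[::-1]       # reversed copy, same channel hit
        print(diff_positions(rx_copy1, rx_prpc))    # [3, 4] -> collision resolved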

  1. Analysis of central and upwind compact schemes

    International Nuclear Information System (INIS)

    Sengupta, T.K.; Ganeriwal, G.; De, S.

    2003-01-01

    Central and upwind compact schemes for spatial discretization have been analyzed with respect to accuracy in spectral space, numerical stability and dispersion relation preservation. A von Neumann matrix spectral analysis is developed here to analyze spatial discretization schemes for any explicit and implicit schemes, investigating the full domain simultaneously. This allows one to evaluate various boundary closures and their effects on the domain interior. The same method can be used for stability analysis performed for semi-discrete initial boundary value problems (IBVP). This analysis tells one about the stability for every resolved length scale. Some well-known compact schemes that were found to be G-K-S and time stable are shown here to be unstable for selected length scales by this analysis. This is attributed to boundary closure, and we suggest special boundary treatment to remove this shortcoming. To demonstrate the asymptotic stability of the resultant schemes, numerical solution of the wave equation is compared with the analytical solution. Furthermore, some of these schemes are used to solve the two-dimensional Navier-Stokes equation and a computational acoustics problem to check their ability to solve problems for long times. It is found that those schemes that were found unstable for the wave equation are unsuitable for solving the incompressible Navier-Stokes equation. In contrast, the proposed compact schemes with improved boundary closure and an explicit higher-order upwind scheme produced correct results. The numerical solution for the acoustic problem is compared with the exact solution, and the quality of the match shows that the used compact scheme has the requisite DRP property.
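
    The spectral-accuracy part of such an analysis reduces, for the interior scheme, to comparing the modified wavenumber with the exact one. For the classical fourth-order tridiagonal compact scheme (1/4)f'_{i-1} + f'_i + (1/4)f'_{i+1} = (3/2)(f_{i+1} - f_{i-1})/(2h), the modified wavenumber is k_eff*h = a*sin(kh)/(1 + 2*alpha*cos(kh)); a short check of its resolving power:

        import numpy as np

        kh = np.linspace(0.01, np.pi, 500)
        alpha, a = 0.25, 1.5                    # classical 4th-order compact scheme
        kh_eff = a * np.sin(kh) / (1.0 + 2.0 * alpha * np.cos(kh))

        # Resolved scales are those where kh_eff tracks kh closely.
        rel_err = np.abs(kh_eff - kh) / kh
        print("resolved to 1% error up to kh =", round(float(kh[rel_err < 0.01].max()), 2))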

  2. Bootstrapping phylogenies inferred from rearrangement data

    Directory of Open Access Journals (Sweden)

    Lin Yu

    2012-08-01

    Background: Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. Results: We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Conclusions: Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its…

  3. Bootstrapping phylogenies inferred from rearrangement data.

    Science.gov (United States)

    Lin, Yu; Rajan, Vaibhav; Moret, Bernard Me

    2012-08-29

    Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale and its receiver

  4. Surface radiant flux densities inferred from LAC and GAC AVHRR data

    Science.gov (United States)

    Berger, F.; Klaes, D.

    To infer surface radiant flux densities from current (NOAA-AVHRR, ERS-1/2 ATSR) and future meteorological (Envisat AATSR, MSG, METOP) satellite data, the complex, modular analysis scheme SESAT (Strahlungs- und Energieflüsse aus Satellitendaten) was developed (Berger, 2001). This scheme allows the determination of cloud types, optical and microphysical cloud properties as well as surface and TOA radiant flux densities. After testing of SESAT in Central Europe and the Baltic Sea catchment (more than 400 scenes, including a detailed validation with various surface measurements), it was applied to a large number of NOAA-16 AVHRR overpasses covering the globe. For the analysis, two different spatial resolutions, local area coverage (LAC) and global area coverage (GAC), were considered. Therefore, all inferred results, like cloud cover, cloud properties and radiant properties, could be intercompared. Specific emphasis was placed on the surface radiant flux densities (all radiative balance components), where results for different regions, like Southern America, Southern Africa, Northern America, Europe, and Indonesia, will be presented. Applying SESAT, energy flux densities, like latent and sensible heat flux densities, could also be determined. A statistical analysis of all results, including a detailed discussion for the two spatial resolutions, will close this study.

  5. Type Inference for Session Types in the Pi-Calculus

    DEFF Research Database (Denmark)

    Graversen, Eva Fajstrup; Harbo, Jacob Buchreitz; Huttel, Hans

    2014-01-01

    In this paper we present a direct algorithm for session type inference for the π-calculus. Type inference for session types has previously been achieved by either imposing limitations and restriction on the π-calculus, or by reducing the type inference problem to that for linear types. Our approach...

  6. Reasoning about Informal Statistical Inference: One Statistician's View

    Science.gov (United States)

    Rossman, Allan J.

    2008-01-01

    This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…

  7. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  8. Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality.

    Science.gov (United States)

    Malle, Bertram F; Holbrook, Jess

    2012-04-01

    People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences, from intentionality and desire to belief to personality, that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.

  9. The inference from a single case: moral versus scientific inferences in implementing new biotechnologies.

    Science.gov (United States)

    Hofmann, B

    2008-06-01

    Are there similarities between scientific and moral inference? This is the key question in this article. It takes as its point of departure an instance of one person's story in the media changing both Norwegian public opinion and a brand-new Norwegian law prohibiting the use of saviour siblings. The case appears to falsify existing norms and to establish new ones. The analysis of this case reveals similarities in the modes of inference in science and morals, inasmuch as (a) a single case functions as a counter-example to an existing rule; (b) there is a common presupposition of stability, similarity and order, which makes it possible to reason from a few cases to a general rule; and (c) this makes it possible to hold things together and retain order. In science, these modes of inference are referred to as falsification, induction and consistency. In morals, they have a variety of other names. Hence, even without abandoning the fact-value divide, there appear to be similarities between inference in science and inference in morals, which may encourage communication across the boundaries between "the two cultures" and which are relevant to medical humanities.

  10. Symmetric weak ternary quantum homomorphic encryption schemes

    Science.gov (United States)

    Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao

    2016-03-01

    Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that the attacker can correctly guess the encryption key with a maximum probability p_k = 1/3^(3n), so the schemes can better protect the privacy of users’ data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users’ private quantum information can be well protected in a distributed computing environment.

  11. Ponzi scheme diffusion in complex networks

    Science.gov (United States)

    Zhu, Anding; Fu, Peihua; Zhang, Qinghe; Chen, Zhenyue

    2017-08-01

    Ponzi schemes taking the form of Internet-based financial schemes have been negatively affecting China's economy for the last two years. Because there is currently a lack of modeling research on Ponzi scheme diffusion within social networks, we develop a potential-investor-divestor (PID) model to investigate the diffusion dynamics of Ponzi schemes in both homogeneous and inhomogeneous networks. Our simulation study of artificial and real Facebook social networks shows that the structure of investor networks does indeed affect the characteristics of the dynamics. Both the average degree of distribution and the power-law degree of distribution will reduce the spreading critical threshold and will speed up the rate of diffusion. A high speed of diffusion is the key to alleviating the interest burden and improving the financial outcomes for the Ponzi scheme operator. The zero-crossing point of the fund flux function we introduce proves to be a feasible index for reflecting the fast-worsening situation of fiscal instability and predicting the forthcoming collapse. The faster the scheme diffuses, the higher a peak it will reach and the sooner it will collapse. We should keep a vigilant eye on the harm of Ponzi scheme diffusion through modern social networks.
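
    Reading the potential-investor-divestor compartments as SIR-like kinetics gives the following sketch; the contact and divestment rates, and the mean-field form itself, are assumptions for illustration rather than the paper's exact equations:

        import numpy as np

        def simulate_pid(beta=0.4, gamma=0.1, days=120, dt=0.1):
            # Shares of potential investors (p), investors (i), divestors (d).
            p, i, d = 0.99, 0.01, 0.0
            traj = []
            for _ in range(int(days / dt)):
                new_inv = beta * p * i * dt     # potentials recruited by investors
                new_div = gamma * i * dt        # investors cashing out
                p, i, d = p - new_inv, i + new_inv - new_div, d + new_div
                traj.append((p, i, d))
            return np.array(traj)

        traj = simulate_pid()
        print("peak investor share:", traj[:, 1].max().round(3))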

  12. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
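
    The score-level branch of such a system can be sketched in a few lines: normalise each matcher's score to [0, 1], then take a weighted sum. The ranges and the fixed weight below are placeholders; in the record the weights are instead optimized with BSA:

        import numpy as np

        def fuse_scores(face_score, iris_score,
                        face_range=(0.0, 100.0), iris_range=(0.0, 1.0), w_face=0.6):
            f = (face_score - face_range[0]) / (face_range[1] - face_range[0])
            i = (iris_score - iris_range[0]) / (iris_range[1] - iris_range[0])
            return w_face * f + (1.0 - w_face) * i   # weighted-sum fusion rule

        print(fuse_scores(face_score=72.0, iris_score=0.81))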

  13. Multidimensional flux-limited advection schemes

    International Nuclear Information System (INIS)

    Thuburn, J.

    1996-01-01

    A general method for building multidimensional shape preserving advection schemes using flux limiters is presented. The method works for advected passive scalars in either compressible or incompressible flow and on arbitrary grids. With a minor modification it can be applied to the equation for fluid density. Schemes using the simplest form of the flux limiter can cause distortion of the advected profile, particularly sideways spreading, depending on the orientation of the flow relative to the grid. This is partly because the simple limiter is too restrictive. However, some straightforward refinements lead to a shape-preserving scheme that gives satisfactory results, with negligible grid-flow angle-dependent distortion.
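
    In one dimension the building block of such schemes is a limited, upwind-biased update; a minimal sketch for linear advection with a minmod limiter and periodic boundaries (parameters invented) is:

        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def advect_step(q, c):
            """One update of 1D advection, u > 0, with Courant number 0 < c <= 1."""
            dq = minmod(np.roll(q, -1) - q, q - np.roll(q, 1))  # limited slopes
            flux = c * (q + 0.5 * (1.0 - c) * dq)               # upwind face flux
            return q - (flux - np.roll(flux, 1))

        q = np.where(np.abs(np.linspace(0.0, 1.0, 100) - 0.3) < 0.1, 1.0, 0.0)
        for _ in range(50):
            q = advect_step(q, c=0.5)
        print("shape preserved:", q.min() >= -1e-12 and q.max() <= 1.0 + 1e-12)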

  14. Finite-volume scheme for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: bramiozo@gmail.com [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]; Koren, Barry [Eindhoven University of Technology (Netherlands)]; Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research (Netherlands)]

    2016-02-01

    In this paper, we apply a special finite-volume scheme, limited to smooth temperature distributions and Cartesian grids, to test the importance of connectivity of the finite volumes. The area of application is nuclear fusion plasma with field line aligned temperature gradients and extreme anisotropy. We apply the scheme to the anisotropic heat-conduction equation, and compare its results with those of existing finite-volume schemes for anisotropic diffusion. Also, we introduce a general model adaptation of the steady diffusion equation for extremely anisotropic diffusion problems with closed field lines.

  15. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  16. The new WAGR data acquisition scheme

    International Nuclear Information System (INIS)

    Ellis, W.E.; Leng, J.H.; Smith, I.C.; Smith, M.R.

    1976-06-01

    The existing WAGR data acquisition equipment was inadequate to meet the requirements introduced by the installation of two additional experimental loops and was in any case due for replacement. A completely new scheme was planned and implemented based on mini-computers, which while preserving all the useful features of the old scheme provided additional flexibility and improved data display. Both the initial objectives of the design and the final implementation are discussed without introducing detailed descriptions of hardware or the programming techniques employed. Although the scheme solves a specific problem the general principles are more widely applicable and could readily be adapted to other data checking and display problems. (author)

  17. Kinematic reversal schemes for the geomagnetic dipole.

    Science.gov (United States)

    Levy, E. H.

    1972-01-01

    Fluctuations in the distribution of cyclonic convective cells, in the earth's core, can reverse the sign of the geomagnetic field. Two kinematic reversal schemes are discussed. In the first scheme, a field maintained by cyclones concentrated at low latitude is reversed by a burst of cyclones at high latitude. Conversely, in the second scheme, a field maintained predominantly by cyclones in high latitudes is reversed by a fluctuation consisting of a burst of cyclonic convection at low latitude. The precise fluid motions which produce the geomagnetic field are not known. However, it appears that, whatever the details are, a fluctuation in the distribution of cyclonic cells over latitude can cause a geomagnetic reversal.

  18. Autonomous Droop Scheme With Reduced Generation Cost

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Wang, Peng

    2014-01-01

    Droop control is conventionally applied to distributed generators (DGs) in a microgrid so that they share load in proportion to their ratings. This objective might, however, not suit microgrids well since DGs are usually of different types, unlike synchronous generators. Other factors like cost, efficiency, and emission penalty of each DG at different loading must be considered since they contribute directly to the total generation cost (TGC) of the microgrid. To reduce this TGC without relying on fast communication links, an autonomous droop scheme is proposed here, whose resulting power sharing is decided by the individual DG generation costs. Compared with the traditional scheme, the proposed scheme retains its simplicity and is hence straightforward to implement.

  19. Cognitive radio networks dynamic resource allocation schemes

    CERN Document Server

    Wang, Shaowei

    2014-01-01

    This SpringerBrief presents a survey of dynamic resource allocation schemes in Cognitive Radio (CR) systems, focusing on spectral efficiency and energy efficiency in wireless networks. It also introduces a variety of dynamic resource allocation schemes for CR networks and provides a concise introduction to the landscape of CR technology. The author covers in detail the motivations and challenges of the dynamic resource allocation problem in CR systems. Spectral- and energy-efficient resource allocation schemes are comprehensively investigated, including new insights into the trade-off

  20. Algebraic K-theory of generalized schemes

    DEFF Research Database (Denmark)

    Anevski, Stella Victoria Desiree

    Nikolai Durov has developed a generalization of conventional scheme theory in which commutative algebraic monads replace commutative unital rings as the basic algebraic objects. The resulting geometry is expressive enough to encompass conventional scheme theory, tropical algebraic geometry, and geometry over the field with one element. It also permits the construction of important Arakelov theoretical objects, such as the completion \Spec Z of Spec Z. In this thesis, we prove a projective bundle theorem for the field with one element and compute the Chow rings of the generalized schemes \Spec Z_N appearing in the construction of \Spec Z.

  1. Nonparametric inference of network structure and dynamics

    Science.gov (United States)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high-dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion, that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among
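
    The nonparametric Bayesian machinery described in this talk is implemented in the speaker's graph-tool library; assuming graph-tool is installed, a minimal stochastic block model fit on a bundled example network looks roughly like this:

    import graph_tool.all as gt

    g = gt.collection.data["karate"]          # small bundled example network
    state = gt.minimize_blockmodel_dl(g)      # MDL-based nonparametric SBM fit
    print(list(state.get_blocks().a))         # inferred group label per node
    print(state.entropy())                    # description length of the fit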

  2. Impact of noise on molecular network inference.

    Directory of Open Access Journals (Sweden)

    Radhakrishnan Nagarajan

    Molecular entities work in concert as a system and mediate phenotypic outcomes and disease states. There has been recent interest in modelling the associations between molecular entities from their observed expression profiles as networks using a battery of algorithms. These networks have proven to be useful abstractions of the underlying pathways and signalling mechanisms. Noise is ubiquitous in molecular data and can have a pronounced effect on the inferred network. Noise can be an outcome of several factors including: inherent stochastic mechanisms at the molecular level, variation in the abundance of molecules, heterogeneity, sensitivity of the biological assay or measurement artefacts prevalent especially in high-throughput settings. The present study investigates the impact of discrepancies in noise variance on pair-wise dependencies, conditional dependencies and constraint-based Bayesian network structure learning algorithms that incorporate conditional independence tests as a part of the learning process. Popular network motifs and fundamental connections, namely: (a) common-effect, (b) three-chain, and (c) coherent type-I feed-forward loop (FFL), are investigated. The choice of these elementary networks can be attributed to their prevalence across more complex networks. Analytical expressions elucidating the impact of discrepancies in noise variance on pairwise dependencies and conditional dependencies for special cases of these motifs are presented. Subsequently, the impact of noise on two popular constraint-based Bayesian network structure learning algorithms such as Grow-Shrink (GS) and Incremental Association Markov Blanket (IAMB) that implicitly incorporate tests for conditional independence is investigated. Finally, the impact of noise on networks inferred from publicly available single cell molecular expression profiles is investigated. While discrepancies in noise variance are overlooked in routine molecular network inference, the
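
    A toy numpy illustration of the central point (not the paper's analytical derivation): in a common-effect motif X -> Z <- Y, increasing the observation noise on Z weakens both the pairwise dependence between X and Z and the conditional dependence between X and Y given Z that constraint-based learners test for:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50_000
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    z = x + y + rng.normal(size=n)           # common-effect motif X -> Z <- Y

    for sigma in (0.1, 1.0, 3.0):            # increasing measurement noise on Z
        z_obs = z + rng.normal(scale=sigma, size=n)
        r_xz = np.corrcoef(x, z_obs)[0, 1]   # pairwise dependence X-Z weakens with noise
        r_xy = np.corrcoef(x, y)[0, 1]
        r_yz = np.corrcoef(y, z_obs)[0, 1]
        # partial correlation of X and Y given observed Z: nonzero (collider effect),
        # but shrinking toward zero as noise masks the common effect
        p_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
        print(f"sigma={sigma}: corr(X,Z)={r_xz:.2f}, pcorr(X,Y|Z)={p_xy_z:.2f}")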

  3. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.

  4. Bayesian Estimation and Inference Using Stochastic Electronics.

    Science.gov (United States)

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
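
    The stochastic-computing encoding behind this robustness claim is easy to sketch: when probabilities are represented as Bernoulli bit streams, multiplication reduces to a bitwise AND, and flipping a small fraction of bits barely moves the decoded value. A toy numpy version, with an arbitrary stream length:

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000                                  # stream length (assumed)

    def encode(p):
        """Encode probability p as a Bernoulli bit stream with mean p."""
        return rng.random(N) < p

    a, b = encode(0.8), encode(0.4)
    prod = a & b                                 # an AND gate multiplies probabilities
    print(prod.mean())                           # ~0.32

    flips = rng.random(N) < 0.01                 # corrupt 1% of bits at random
    print((prod ^ flips).mean())                 # decoded value barely moves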

  5. Children's and adults' judgments of the certainty of deductive inferences, inductive inferences, and guesses.

    Science.gov (United States)

    Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.

  6. Robust Inference with Multi-way Clustering

    OpenAIRE

    A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller

    2009-01-01

    In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our...
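
    For OLS the construction can be sketched compactly: compute the one-way cluster-robust (sandwich) covariance for each grouping and for their intersection, then combine them as V = V_G + V_H - V_{G intersect H}. The data and groupings below are made up:

    import numpy as np

    def cluster_sandwich(X, u, groups):
        """One-way cluster-robust covariance of the OLS estimator."""
        XtX_inv = np.linalg.inv(X.T @ X)
        meat = np.zeros((X.shape[1], X.shape[1]))
        for g in np.unique(groups):
            s = X[groups == g].T @ u[groups == g]   # within-cluster score sum
            meat += np.outer(s, s)
        return XtX_inv @ meat @ XtX_inv

    rng = np.random.default_rng(0)
    n = 2000
    firm, year = rng.integers(0, 50, n), rng.integers(0, 10, n)
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta                                # OLS residuals

    both = firm * 1000 + year                       # intersection clusters (firm, year)
    V = (cluster_sandwich(X, u, firm)
         + cluster_sandwich(X, u, year)
         - cluster_sandwich(X, u, both))
    print(np.sqrt(np.diag(V)))                      # two-way cluster-robust std. errors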

  7. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  8. Abductive Inference using Array-Based Logic

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert L.

    The notion of abduction has found its usage within a wide variety of AI fields. Computing abductive solutions has, however, shown to be highly intractable in logic programming. To avoid this intractability we present a new approach to logic-based abduction; through the geometrical view of data employed in array-based logic we embrace abduction in a simple structural operation. We argue that a theory of abduction on this form allows for an implementation which, at runtime, can perform abductive inference quite efficiently on arbitrary rules of logic representing knowledge of finite domains.

  9. Generic Patch Inference

    DEFF Research Database (Denmark)

    Andersen, Jesper; Lawall, Julia Laetitia

    2008-01-01

    A key issue in maintaining Linux device drivers is the need to update drivers in response to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spfind, that identifies common changes made...... developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect...

  10. Inverse Ising Inference Using All the Data

    Science.gov (United States)

    Aurell, Erik; Ekeberg, Magnus

    2012-03-01

    We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while still computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy and that the use of l1 regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes.
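
    A small sketch of the recipe on a toy Ising model (couplings, sample size, and the l1 strength are arbitrary choices): each spin is regressed on all the others with l1-penalised logistic regression; since P(s_i | rest) is logistic with logit 2 * sum_j J_ij s_j, halving the fitted coefficients gives coupling estimates:

    import numpy as np
    from itertools import product
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 6
    J = np.triu(rng.normal(scale=0.5, size=(n, n)), 1)
    J = J + J.T                                        # symmetric couplings, zero diagonal

    # exact sampling from the Ising distribution by enumerating all 2^n states
    states = np.array(list(product([-1, 1], repeat=n)))
    logw = np.einsum("si,ij,sj->s", states, J, states) / 2
    p = np.exp(logw - logw.max()); p /= p.sum()
    samples = states[rng.choice(len(states), size=20_000, p=p)]

    J_hat = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        clf = LogisticRegression(penalty="l1", C=10.0, solver="liblinear")
        clf.fit(samples[:, others], samples[:, i])
        J_hat[i, others] = clf.coef_[0] / 2            # logit of P(s_i|rest) is 2*J.s
    print(np.abs(J_hat - J).max())                     # small reconstruction error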

  11. A survey of Strong Convergent Schemes for the Simulation of ...

    African Journals Online (AJOL)

    We considered strongly convergent stochastic schemes for the simulation of stochastic differential equations. The stochastic Taylor expansion, which is the main tool used for the derivation of strongly convergent schemes, and the Euler-Maruyama, Milstein, stochastic multistep, and implicit and explicit schemes were ...
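
    Of the schemes listed, Euler-Maruyama is the simplest; a minimal sketch for the scalar test SDE dX = a X dt + b X dW follows (drift, diffusion, and step count are illustrative). The Milstein scheme would add the correction term 0.5 * b**2 * x * (dW**2 - dt) to each step:

    import numpy as np

    def euler_maruyama(x0, a, b, T, n, rng):
        """Strong order-0.5 scheme for dX = a*X dt + b*X dW."""
        dt = T / n
        x = x0
        for _ in range(n):
            dW = rng.normal(scale=np.sqrt(dt))   # Brownian increment
            x = x + a * x * dt + b * x * dW      # drift + diffusion update
        return x

    rng = np.random.default_rng(1)
    print(euler_maruyama(1.0, a=0.5, b=0.2, T=1.0, n=1000, rng=rng))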

  12. Setting aside transactions from pyramid schemes as impeachable ...

    African Journals Online (AJOL)

    These schemes, which are often referred to as pyramid or Ponzi schemes, are unsustainable operations and give rise to problems in the law of insolvency. Investors in these schemes are often left empty-handed upon the scheme's eventual collapse and insolvency. Investors who received pay-outs from the scheme find ...

  13. Betatron tune correction schemes in nuclotron

    International Nuclear Information System (INIS)

    Shchepunov, V.A.

    1992-01-01

    Algorithms of the betatron tune corrections in Nuclotron with sextupolar and octupolar magnets are considered. Second order effects caused by chromaticity correctors are taken into account and sextupolar compensation schemes are proposed to suppress them. 6 refs.; 1 tab

  14. A Directed Signature Scheme and its Applications

    OpenAIRE

    Lal, Sunder; Kumar, Manoj

    2004-01-01

    This paper presents a directed signature scheme with the property that the signature can be verified only with the help of signer or signature receiver. We also propose its applications to share verification of signatures and to threshold cryptosystems.

  15. ONU Power Saving Scheme for EPON System

    Science.gov (United States)

    Mukai, Hiroaki; Tano, Fumihiko; Tanaka, Masaki; Kozaki, Seiji; Yamanaka, Hideaki

    PON (Passive Optical Network) achieves FTTH (Fiber To The Home) economically, by sharing an optical fiber among plural subscribers. Recently, global climate change has been recognized as a serious near term problem. Power saving techniques for electronic devices are important. In PON system, the ONU (Optical Network Unit) power saving scheme has been studied and defined in XG-PON. In this paper, we propose an ONU power saving scheme for EPON. Then, we present an analysis of the power reduction effect and the data transmission delay caused by the ONU power saving scheme. According to the analysis, we propose an efficient provisioning method for the ONU power saving scheme which is applicable to both of XG-PON and EPON.

  16. Nigeria's first national social protection scheme | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2017-06-14

    The cash transfer was provided through the Nigerian Ekiti State Social Security Scheme, ... national policy conference to discuss the findings with media and policy stakeholders.

  17. Verifiable Secret Redistribution for Threshold Sharing Schemes

    National Research Council Canada - National Science Library

    Wong, Theodore M; Wang, Chenxi; Wing, Jeannette M

    2002-01-01

    Our protocol guards against dynamic adversaries. We observe that existing protocols either cannot be readily extended to allow redistribution between different threshold schemes, or have vulnerabilities that allow faulty old shareholders...

  18. Boson expansion theory in the seniority scheme

    International Nuclear Information System (INIS)

    Tamura, T.; Li, C.; Pedrocchi, V.G.

    1985-01-01

    A boson expansion formalism in the seniority scheme is presented and its relation with number-conserving quasiparticle calculations is elucidated. Accuracy and convergence are demonstrated numerically. A comparative discussion with other related approaches is given

  19. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  20. AUTOMATIC ROAD GAP DETECTION USING FUZZY INFERENCE SYSTEM

    Directory of Open Access Journals (Sweden)

    S. Hashemi

    2012-09-01

    Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research is focused on detection algorithms, none of them can extract road networks perfectly. On the other hand, post-processing algorithms, aimed at refining road detection results, are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: in this step, a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: in this step, the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed, and the rules are fired on gap candidates from the road detection results. 3) Long gap coverage: in this stage, detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussions forms the material of the full paper.

  1. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    International Nuclear Information System (INIS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.

    2015-01-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
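
    The heart of such a likelihood is evaluating residuals under a non-diagonal pixel covariance. A schematic numpy version with a single global squared-exponential kernel is shown below; the paper's framework adds local outlier kernels and more structure, and the kernel form and parameters here are purely illustrative:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def ln_likelihood(wave, resid, sigma, amp, ell):
        """Gaussian log-likelihood of residuals with correlated pixel noise."""
        d = wave[:, None] - wave[None, :]
        C = np.diag(sigma**2) + amp**2 * np.exp(-0.5 * (d / ell) ** 2)
        cf = cho_factor(C)                           # Cholesky factor of the covariance
        logdet = 2 * np.sum(np.log(np.diag(cf[0])))
        return -0.5 * (resid @ cho_solve(cf, resid) + logdet
                       + len(wave) * np.log(2 * np.pi))

    wave = np.linspace(5000, 5100, 200)              # wavelength grid (angstroms, assumed)
    resid = np.random.default_rng(0).normal(0, 0.01, wave.size)
    print(ln_likelihood(wave, resid, sigma=np.full(wave.size, 0.01), amp=0.01, ell=2.0))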

  2. Automatic Road Gap Detection Using Fuzzy Inference System

    Science.gov (United States)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.

    2011-09-01

    Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research is focused on detection algorithms, none of them can extract road networks perfectly. On the other hand, post-processing algorithms, aimed at refining road detection results, are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: in this step, a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: in this step, the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed, and the rules are fired on gap candidates from the road detection results. 3) Long gap coverage: in this stage, detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussions forms the material of the full paper.
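
    A toy sketch of the rule-firing idea in step 2; the features, membership functions, and the single rule below are invented for illustration and are not the paper's knowledge base:

    import numpy as np

    def ramp_down(x, b, c):
        """Membership 1 below b, falling linearly to 0 at c."""
        return float(np.clip((c - x) / (c - b), 0.0, 1.0))

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return float(np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b))))

    def gap_score(length_px, direction_diff_deg):
        short = ramp_down(length_px, 10, 60)            # "gap is short" (assumed shape)
        aligned = tri(direction_diff_deg, -20, 0, 20)   # "road segments are aligned"
        # Rule: IF gap is short AND segments are aligned THEN bridge the gap
        return min(short, aligned)                      # Mamdani-style AND = min

    print(gap_score(20, 5))     # strong candidate for bridging
    print(gap_score(120, 40))   # weak candidate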

  3. Inferring high-confidence human protein-protein interactions

    Directory of Open Access Journals (Sweden)

    Yu Xueping

    2012-05-01

    Background: As numerous experimental factors drive the acquisition, identification, and interpretation of protein-protein interactions (PPIs), aggregated assemblies of human PPI data invariably contain experiment-dependent noise. Ascertaining the reliability of PPIs collected from these diverse studies and scoring them to infer high-confidence networks is a non-trivial task. Moreover, a large number of PPIs share the same number of reported occurrences, making it impossible to distinguish the reliability of these PPIs and rank-order them. For example, for the data analyzed here, we found that the majority (>83%) of currently available human PPIs have been reported only once. Results: In this work, we proposed an unsupervised statistical approach to score a set of diverse, experimentally identified PPIs from nine primary databases to create subsets of high-confidence human PPI networks. We evaluated this ranking method by comparing it with other methods and assessing their ability to retrieve protein associations from a number of diverse and independent reference sets. These reference sets contain known biological data that are either directly or indirectly linked to interactions between proteins. We quantified the average effect of using ranked protein interaction data to retrieve this information and showed that, when compared to randomly ranked interaction data sets, the proposed method created a larger enrichment (~134%) than either ranking based on the hypergeometric test (~109%) or occurrence ranking (~46%). Conclusions: From our evaluations, it was clear that ranked interactions were always of value because higher-ranked PPIs had a higher likelihood of retrieving high-confidence experimental data. Reducing the noise inherent in aggregated experimental PPIs via our ranking scheme further increased the accuracy and enrichment of PPIs derived from a number of biologically relevant data sets. These results suggest that using our high

  4. Inferring general relations between network characteristics from specific network ensembles.

    Science.gov (United States)

    Cardanobile, Stefano; Pernice, Volker; Deger, Moritz; Rotter, Stefan

    2012-01-01

    Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to which degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with the present technology. Our approach provides a method to estimate global properties of under-sampled networks in good approximation. Finally, we demonstrate on three different data sets (C. elegans neuronal network, R. prowazekii metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.

  5. Secret Sharing Schemes and Advanced Encryption Standard

    Science.gov (United States)

    2015-09-01

    This thesis reviews secret sharing schemes and possible improvements to them, and builds upon them to discuss side-channel effects on the Advanced Encryption Standard (AES). Among the questions asked: what improvements can be made to the current secret sharing scheme, and can those improvements prove to be beneficial in strengthening or weakening AES encryption?

  6. Cost Comparison Among Provable Data Possession Schemes

    Science.gov (United States)

    2016-03-01

    Let the number of possible challenges be fixed, let H be a cryptographic hash function, AE an authenticated encryption scheme, and f a keyed pseudo-random function. A random key k_enc is chosen from K_enc for the symmetric encryption scheme Enc, together with a random HMAC key k_mac from K_mac. The secret key is sk = ⟨k_enc, k_mac⟩ and the public key is pk.

  7. A Classification Scheme for Production System Processes

    DEFF Research Database (Denmark)

    Sørensen, Daniel Grud Hellerup; Brunø, Thomas Ditlev; Nielsen, Kjeld

    2018-01-01

    Manufacturing companies often have difficulties developing production platforms, partly due to the complexity of many production systems and difficulty determining which processes constitute a platform. Understanding production processes is an important step to identifying candidate processes for a production platform based on existing production systems. Reviewing a number of existing classifications and taxonomies, a consolidated classification scheme for processes in production of discrete products has been outlined. The classification scheme helps ensure consistency during mapping of existing...

  8. A scheme for the hadron spectrum

    International Nuclear Information System (INIS)

    Hoyer, P.

    1978-03-01

    A theoretically self-consistent dual scheme is proposed for the hadron spectrum, which follows naturally from basic requirements and phenomenology. All resonance properties and couplings are calculable in terms of a limited number of input parameters. A first application to ππ→ππ explains the linear trajectory and small daughter couplings. The Zweig rule and the decoupling of baryonium from mesons are expected to be consequences of the scheme. (Auth.)

  9. A Practical Voter-Verifiable Election Scheme.

    OpenAIRE

    Chaum, D; Ryan, PYA; Schneider, SA

    2005-01-01

    We present an election scheme designed to allow voters to verify that their vote is accurately included in the count. The scheme provides a high degree of transparency whilst ensuring the secrecy of votes. Assurance is derived from close auditing of all the steps of the vote recording and counting process with minimal dependence on the system components. Thus, assurance arises from verification of the election rather than having to place trust in the correct behaviour of components of the vot...

  10. Sellafield site (including Drigg) emergency scheme manual

    International Nuclear Information System (INIS)

    1987-02-01

    This Emergency Scheme defines the organisation and procedures available should there be an accident at the Sellafield Site which results in, or may result in, the release of radioactive material, or the generation of a high radiation field, which might present a hazard to employees and/or the general public. This manual covers the general principles of the total emergency scheme and those detailed procedures which are not specific to any single department. (U.K.)

  11. Signature scheme based on bilinear pairs

    Science.gov (United States)

    Tong, Rui Y.; Geng, Yong J.

    2013-03-01

    An identity-based signature scheme is proposed using bilinear pairing technology. The scheme uses a user's identity information, such as an email address, IP address, or telephone number, as the public key, which eliminates the cost of building and managing a public key infrastructure, and it avoids the problem of the private key generating center forging signatures by using the CL-PKC framework to generate the user's private key.

  12. An Optimization Scheme for ProdMod

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1999-01-01

    A general purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for the Savannah River Site (SRS) High Level Waste (HLW) complex operations, and able to handle different types of optimizations such as linear, nonlinear, etc. The optimization is performed in a stand-alone FORTRAN-based optimization driver, while the driver is interfaced with the ProdMod simulator for the flow of information between the two.

  13. Employee-referral schemes and discrimination law

    OpenAIRE

    Connolly, M.

    2015-01-01

    Employee-referral schemes (‘introduce a friend’) are in common usage in recruitment. They carry a potential to discriminate by perpetuating an already unbalanced workforce (say, by gender and ethnicity). With this, of course, comes the risk of litigation and bad publicity as well as any inherent inefficiencies associated with discrimination. This article is threefold. First, it examines the present state of the law. Second, it is based on a survey of employers who use these schemes. Third, it...

  14. Basis scheme of personnel training system

    International Nuclear Information System (INIS)

    Rerucha, F.; Odehnal, J.

    1998-01-01

    The basic scheme of the CEZ-EDU training system for NPP personnel is described in detail. This includes specific training, both basic and periodic, and professional training, meaning specialized and continuous training. The following schemes are shown: licence acquisition and authorisation for PWR-440 Control Room Personnel; upgrade training for job positions of Control Room personnel; maintaining and refresh training; and module training for certificate acquisition of servicing shift and operating personnel.

  15. Navigators’ Behavior in Traffic Separation Schemes

    Directory of Open Access Journals (Sweden)

    Zbigniew Pietrzykowski

    2015-03-01

    One of the areas of decision support in the navigational ship conduct process is a Traffic Separation Scheme. TSSs are established in areas with high traffic density, often near the shore and in port approaches. The main purpose of these schemes is to improve maritime safety by channeling vessel traffic into streams. Traffic regulations as well as ships' behavior in real conditions in chosen TSSs have been analyzed in order to develop decision support algorithms.

  16. A Classification Scheme for Literary Characters

    Directory of Open Access Journals (Sweden)

    Matthew Berry

    2017-10-01

    There is no established classification scheme for literary characters in narrative theory short of generic categories like protagonist vs. antagonist or round vs. flat. This is so despite the ubiquity of stock characters that recur across media, cultures, and historical time periods. We present here a proposal of a systematic psychological scheme for classifying characters from the literary and dramatic fields based on a modification of the Thomas-Kilmann (TK) Conflict Mode Instrument used in applied studies of personality. The TK scheme classifies personality along the two orthogonal dimensions of assertiveness and cooperativeness. To examine the validity of a modified version of this scheme, we had 142 participants provide personality ratings for 40 characters using two of the Big Five personality traits as well as assertiveness and cooperativeness from the TK scheme. The results showed that assertiveness and cooperativeness were orthogonal dimensions, thereby supporting the validity of using a modified version of TK’s two-dimensional scheme for classifying characters.

  17. Canonical, stable, general mapping using context schemes.

    Science.gov (United States)

    Novak, Adam M; Rosen, Yohei; Haussler, David; Paten, Benedict

    2015-11-15

    Sequence mapping is the cornerstone of modern genomics. However, most existing sequence mapping algorithms are insufficiently general. We introduce context schemes: a method that allows the unambiguous recognition of a reference base in a query sequence by testing the query for substrings from an algorithmically defined set. Context schemes only map when there is a unique best mapping, and define this criterion uniformly for all reference bases. Mappings under context schemes can also be made stable, so that extension of the query string (e.g. by increasing read length) will not alter the mapping of previously mapped positions. Context schemes are general in several senses. They natively support the detection of arbitrary complex, novel rearrangements relative to the reference. They can scale over orders of magnitude in query sequence length. Finally, they are trivially extensible to more complex reference structures, such as graphs, that incorporate additional variation. We demonstrate empirically the existence of high-performance context schemes, and present efficient context scheme mapping algorithms. The software test framework created for this study is available from https://registry.hub.docker.com/u/adamnovak/sequence-graphs/. Contact: anovak@soe.ucsc.edu. Supplementary data are available at Bioinformatics online.

  18. Cancelable remote quantum fingerprint templates protection scheme

    International Nuclear Information System (INIS)

    Liao Qin; Guo Ying; Huang Duan

    2017-01-01

    With the increasing popularity of fingerprint identification technology, its security and privacy have received much attention. Only if the security and privacy of biometric information are ensured can the technology be better accepted and used by the public. In this paper, we propose a novel quantum bit (qubit)-based scheme to solve the security and privacy problems of traditional fingerprint identification systems. By exploiting the properties of quantum mechanics, our proposed scheme, a cancelable remote quantum fingerprint templates protection scheme, can achieve the unconditional security guaranteed in an information-theoretical sense. Moreover, this novel quantum scheme can invalidate most of the attacks aimed at fingerprint identification systems. In addition, the proposed scheme is applicable to remote communication, with no need to worry about security and privacy during transmission. This is an absolute advantage compared with other traditional methods. Security analysis shows that the proposed scheme can effectively ensure communication security and the privacy of users’ information for fingerprint identification. (paper)

  19. Efficient multiparty quantum-secret-sharing schemes

    International Nuclear Information System (INIS)

    Xiao Li; Deng Fuguo; Long Guilu; Pan Jianwei

    2004-01-01

    In this work, we generalize the quantum-secret-sharing scheme of Hillery, Buzek, and Berthiaume [Phys. Rev. A 59, 1829 (1999)] into arbitrary multiparties. Explicit expressions for the shared secret bit are given. It is shown that in the Hillery-Buzek-Berthiaume quantum-secret-sharing scheme the secret information is shared in the parity of binary strings formed by the measured outcomes of the participants. In addition, we have increased the efficiency of the quantum-secret-sharing scheme by generalizing two techniques from quantum key distribution. The favored-measuring-basis quantum-secret-sharing scheme is developed from the Lo-Chau-Ardehali technique [H. K. Lo, H. F. Chau, and M. Ardehali, e-print quant-ph/0011056], where all the participants choose their measuring basis asymmetrically, and the measuring-basis-encrypted quantum-secret-sharing scheme is developed from the Hwang-Koh-Han technique [W. Y. Hwang, I. G. Koh, and Y. D. Han, Phys. Lett. A 244, 489 (1998)], where all participants choose their measuring basis according to a control key. Both schemes are asymptotically 100% in efficiency, hence nearly all the Greenberger-Horne-Zeilinger states in a quantum-secret-sharing process are used to generate shared secret information.

  20. Inferring network topology from complex dynamics

    International Nuclear Information System (INIS)

    Shandilya, Srinivas Gorur; Timme, Marc

    2011-01-01

    Inferring the network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method for inferring the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required and the type of dynamics is hardly restricted in any way. In particular, the observed dynamics may be arbitrarily complex; stationary, invariant or transient; synchronous or asynchronous and chaotic or periodic. Presupposing a knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology from observing a time series of state variables only. Robust reconstruction is achieved in any sufficiently long generic observation of the system. We extend our method to simultaneously reconstructing both the entire network topology and all parameters appearing linear in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of external noise that distorts the original dynamics substantially. The method provides a conceptually new step towards reconstructing a variety of real-world networks, including gene and protein interaction networks and neuronal circuits.
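
    For linearly coupled units with known local dynamics and coupling function, the inverse problem reduces to linear least squares on a single observed trajectory. In the toy numpy sketch below the dynamics, sparsity, and parameters are assumptions, and recovery is exact because the finite-difference derivatives are consistent with the Euler-integrated data:

    import numpy as np

    rng = np.random.default_rng(3)
    n, T, dt = 10, 500, 0.01
    A = (rng.random((n, n)) < 0.2) * rng.normal(size=(n, n))   # sparse random couplings
    np.fill_diagonal(A, 0)

    x = np.zeros((T, n)); x[0] = rng.normal(size=n)
    for t in range(T - 1):                       # Euler-integrate dx/dt = -x + A tanh(x)
        x[t + 1] = x[t] + dt * (-x[t] + np.tanh(x[t]) @ A.T)

    dxdt = (x[1:] - x[:-1]) / dt                 # finite-difference derivative estimates
    G = np.tanh(x[:-1])                          # known coupling nonlinearity on the data
    # dxdt + x = G @ A^T, so each row of A follows from linear least squares
    A_hat = np.linalg.lstsq(G, dxdt + x[:-1], rcond=None)[0].T
    print(np.abs(A_hat - A).max())               # ~0: topology and weights recovered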

  1. Inferring climate sensitivity from volcanic events

    Energy Technology Data Exchange (ETDEWEB)

    Boer, G.J. [Environment Canada, University of Victoria, Canadian Centre for Climate Modelling and Analysis, Victoria, BC (Canada); Stowasser, M.; Hamilton, K. [University of Hawaii, International Pacific Research Centre, Honolulu, HI (United States)

    2007-04-15

    The possibility of estimating the equilibrium climate sensitivity of the earth-system from observations following explosive volcanic eruptions is assessed in the context of a perfect model study. Two modern climate models (the CCCma CGCM3 and the NCAR CCSM2) with different equilibrium climate sensitivities are employed in the investigation. The models are perturbed with the same transient volcano-like forcing and the responses analysed to infer climate sensitivities. For volcano-like forcing the global mean surface temperature responses of the two models are very similar, despite their differing equilibrium climate sensitivities, indicating that climate sensitivity cannot be inferred from the temperature record alone even if the forcing is known. Equilibrium climate sensitivities can be reasonably determined only if both the forcing and the change in heat storage in the system are known very accurately. The geographic patterns of clear-sky atmosphere/surface and cloud feedbacks are similar for both the transient volcano-like and near-equilibrium constant forcing simulations showing that, to a considerable extent, the same feedback processes are invoked, and determine the climate sensitivity, in both cases. (orig.)

  2. Facility Activity Inference Using Radiation Networks

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL]; Ramirez Aviles, Camila A. [ORNL]

    2017-11-01

    We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility’s ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts or measurements are inherently random with parameters determined by the intensity at the sensor’s location. We utilize the measurements to estimate the intensity at the stack, and use it in a one-sided Sequential Probability Ratio Test (SPRT) to infer on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by utilizing the Poisson distribution of the counts. We quantify the performance improvements of the network detection over individual sensors using the packing number of the intensity space.
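
    A minimal one-sided SPRT sketch for a stream of Poisson counts, in the spirit of the method described; the background and source rates, error level, and interval counts are illustrative:

    import numpy as np

    def sprt(counts, lam0, lam1, alpha=0.01):
        """Return ('on', t) when H1 is accepted at sample t, else ('undecided', len)."""
        A = np.log(1.0 / alpha)                     # one-sided acceptance threshold
        llr = 0.0
        for t, k in enumerate(counts, 1):
            # log-likelihood ratio increment for a Poisson observation k
            llr += k * np.log(lam1 / lam0) - (lam1 - lam0)
            if llr >= A:
                return "on", t
        return "undecided", len(counts)

    rng = np.random.default_rng(7)
    background = rng.poisson(5.0, 60)               # facility off: 5 counts/interval
    emitting = rng.poisson(6.5, 60)                 # facility on: slightly elevated rate
    print(sprt(background, 5.0, 6.5))               # typically stays undecided
    print(sprt(emitting, 5.0, 6.5))                 # crosses the threshold after some samples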

  3. Models for inference in dynamic metacommunity systems

    Science.gov (United States)

    Dorazio, Robert M.; Kery, Marc; Royle, J. Andrew; Plattner, Matthias

    2010-01-01

    A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity.

  4. Causal inference, probability theory, and graphical insights.

    Science.gov (United States)

    Baker, Stuart G

    2013-11-10

    Causal inference from observational studies is a fundamental topic in biostatistics. The causal graph literature typically views probability theory as insufficient to express causal concepts in observational studies. In contrast, the view here is that probability theory is a desirable and sufficient basis for many topics in causal inference for the following two reasons. First, probability theory is generally more flexible than causal graphs: Besides explaining such causal graph topics as M-bias (adjusting for a collider) and bias amplification and attenuation (when adjusting for instrumental variable), probability theory is also the foundation of the paired availability design for historical controls, which does not fit into a causal graph framework. Second, probability theory is the basis for insightful graphical displays including the BK-Plot for understanding Simpson's paradox with a binary confounder, the BK2-Plot for understanding bias amplification and attenuation in the presence of an unobserved binary confounder, and the PAD-Plot for understanding the principal stratification component of the paired availability design.

  5. Inferring relevance in a changing world

    Directory of Open Access Journals (Sweden)

    Robert C Wilson

    2012-01-01

    Reinforcement learning models of human and animal learning usually concentrate on how we learn the relationship between different stimuli or actions and rewards. However, in real world situations stimuli are ill-defined. On the one hand, our immediate environment is extremely multi-dimensional. On the other hand, in every decision-making scenario only a few aspects of the environment are relevant for obtaining reward, while most are irrelevant. Thus a key question is how do we learn these relevant dimensions, that is, how do we learn what to learn about? We investigated this process of representation learning experimentally, using a task in which one stimulus dimension was relevant for determining reward at each point in time. As in real life situations, in our task the relevant dimension can change without warning, adding ever-present uncertainty engendered by a constantly changing environment. We show that human performance on this task is better described by a suboptimal strategy based on selective attention and serial hypothesis testing rather than a normative strategy based on probabilistic inference. From this, we conjecture that the problem of inferring relevance in general scenarios is too computationally demanding for the brain to solve optimally. As a result the brain utilizes approximations, employing these even in simplified scenarios in which optimal representation learning is tractable, such as the one in our experiment.

  6. Automated adaptive inference of phenomenological dynamical models

    Science.gov (United States)

    Daniels, Bryan

    Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.

  7. Graphical models for inferring single molecule dynamics

    Directory of Open Access Journals (Sweden)

    Gonzalez Ruben L

    2010-10-01

    Background: The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM). The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results: The VBEM algorithm returns the model’s evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model’s parameters learned from the data. ME/VBEM provides several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions: The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.

  8. Causal Inference in the Perception of Verticality.

    Science.gov (United States)

    de Winkel, Ksander N; Katliar, Mikhail; Diers, Daniel; Bülthoff, Heinrich H

    2018-04-03

    The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum; by combining estimates on the upright provided by the visual system and the body's inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of a vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.
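
    The forced-fusion versus causal-inference comparison can be summarized in a short Körding-style model-averaging sketch; the noise levels, prior probability of a common cause, and tilt range below are illustrative assumptions, not the study's fitted values.

    ```python
    import numpy as np

    def upright_estimate(x_vis, x_in, s_vis=5.0, s_in=8.0, p_common=0.7, tilt_range=180.0):
        """Causal-inference estimate of the upright from visual and inertial cues."""
        var_v, var_i = s_vis**2, s_in**2
        # Likelihood of the cue pair under a common cause (a shared 1/tilt_range
        # factor cancels against the independent-causes term below)...
        like_c1 = np.exp(-(x_vis - x_in)**2 / (2 * (var_v + var_i))) \
                  / np.sqrt(2 * np.pi * (var_v + var_i))
        # ...and under independent causes, with a flat prior over the tilt range.
        like_c2 = 1.0 / tilt_range
        post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
        fused = (x_vis / var_v + x_in / var_i) / (1 / var_v + 1 / var_i)  # forced fusion
        # Model averaging; fall back on the inertial cue alone when causes differ.
        return post_c1 * fused + (1 - post_c1) * x_in

    print(upright_estimate(10.0, 0.0))   # small conflict: close to the fused estimate
    print(upright_estimate(60.0, 0.0))   # large conflict: cues largely segregated
    ```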

  9. Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Xiaodong Cai

    Full Text Available Integrating genetic perturbations with gene expression data not only improves accuracy of regulatory network topology inference, but also enables learning of causal regulatory relations between genes. Although a number of methods have been developed to integrate both types of data, efficient and powerful algorithms are still needed. In this paper, sparse structural equation models (SEMs) are employed to integrate both gene expression data and cis-expression quantitative trait loci (cis-eQTL), for modeling gene regulatory networks in accordance with biological evidence about genes regulating or being regulated by a small number of genes. A systematic inference method named sparsity-aware maximum likelihood (SML) is developed for SEM estimation. Using simulated directed acyclic or cyclic networks, the SML performance is compared with that of two state-of-the-art algorithms: the adaptive Lasso (AL)-based scheme and the QTL-directed dependency graph (QDG) method. Computer simulations demonstrate that the novel SML algorithm offers significantly better performance than the AL-based and QDG algorithms across all sample sizes from 100 to 1,000, in terms of detection power and false discovery rate, in all the cases tested, which include acyclic or cyclic networks of 10, 30 and 300 genes. The SML method is further applied to infer a network of 39 human genes that are related to the immune function and are chosen to have a reliable eQTL per gene. The resulting network consists of 9 genes and 13 edges. Most of the edges represent interactions reasonably expected from experimental evidence, while the remainder may indicate the emergence of new interactions. The sparse SEM and efficient SML algorithm provide an effective means of exploiting both gene expression and perturbation data to infer gene regulatory networks. An open-source computer program implementing the SML algorithm is freely available upon request.
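
    The SML algorithm itself is not reproduced here; the sketch below conveys only the sparsity idea with a plain Lasso neighborhood regression on invented synthetic data. Without the eQTL perturbation information the paper exploits, the recovered edges are not causally directed.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_samples, n_genes = 200, 10
    X = rng.normal(size=(n_samples, n_genes))
    X[:, 3] += 0.8 * X[:, 0] - 0.6 * X[:, 7]   # gene 3 driven by genes 0 and 7

    # Regress each gene on all others; nonzero coefficients become candidate edges.
    edges = []
    for g in range(n_genes):
        others = [j for j in range(n_genes) if j != g]
        coef = Lasso(alpha=0.1).fit(X[:, others], X[:, g]).coef_
        edges += [(others[k], g) for k in np.flatnonzero(np.abs(coef) > 1e-6)]
    print(edges)   # expect (0, 3) and (7, 3), plus their symmetric counterparts
    ```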

  10. Bayesian electron density inference from JET lithium beam emission spectra using Gaussian processes

    Science.gov (United States)

    Kwak, Sehyun; Svensson, J.; Brix, M.; Ghim, Y.-C.; Contributors, JET

    2017-03-01

    A Bayesian model to infer edge electron density profiles is developed for the JET lithium beam emission spectroscopy (Li-BES) system, measuring Li I (2p-2s) line radiation using 26 channels with ∼1 cm spatial resolution and 10–20 ms temporal resolution. The density profile is modelled using a Gaussian process prior, and the uncertainty of the density profile is calculated by a Markov Chain Monte Carlo (MCMC) scheme. From the spectra measured by the transmission grating spectrometer, the Li I line intensities are extracted and modelled as a function of the plasma density by a multi-state model which describes the relevant processes between neutral lithium beam atoms and plasma particles. The spectral model fully takes into account interference filter and instrument effects, which are separately estimated, again using Gaussian processes. The line intensities are inferred based on a spectral model consistent with the measured spectra within their uncertainties, which includes photon statistics and electronic noise. Our newly developed method to infer JET edge electron density profiles has the following advantages in comparison to the conventional method: (i) it provides full posterior distributions of edge density profiles, including their associated uncertainties, (ii) the available radial range for density profiles is increased to the full observation range (∼26 cm), (iii) an assumption of a monotonic electron density profile is not necessary, (iv) the absolute calibration factor of the diagnostic system is automatically estimated, overcoming the limitation of the conventional technique and allowing us to infer the electron density profiles for all pulses without preprocessing the data or an additional boundary condition, and (v) since the full spectrum is modelled, the procedure of modulating the beam to measure the background signal is only necessary for the case of overlapping of the Li I line with impurity lines.
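
    The core pattern, a Gaussian-process prior over a profile sampled with MCMC, can be sketched in a few lines. In the sketch below the multi-state beam emission forward model is replaced by a direct Gaussian likelihood, and every length scale, noise level, and profile shape is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    r = np.linspace(0.0, 1.0, 26)                    # 26 channel positions (normalized)
    K = np.exp(-0.5 * (r[:, None] - r[None, :])**2 / 0.15**2) + 1e-8 * np.eye(26)
    L = np.linalg.cholesky(K)                        # squared-exponential GP prior

    true = 1.0 / (1.0 + np.exp((r - 0.6) / 0.08))    # stand-in edge density shape
    y = true + rng.normal(0.0, 0.05, size=26)        # noisy "measurements"

    def log_post(f):   # GP prior + Gaussian likelihood (forward model omitted)
        alpha = np.linalg.solve(L, f)                # so alpha @ alpha = f' K^-1 f
        return -0.5 * alpha @ alpha - 0.5 * np.sum((y - f)**2) / 0.05**2

    f, lp, samples = y.copy(), log_post(y), []
    for it in range(20000):                          # random-walk Metropolis
        prop = f + 0.01 * (L @ rng.normal(size=26))  # prior-shaped proposal step
        lpp = log_post(prop)
        if np.log(rng.random()) < lpp - lp:
            f, lp = prop, lpp
        if it > 5000 and it % 50 == 0:
            samples.append(f.copy())
    band = np.percentile(samples, [16, 84], axis=0)  # posterior uncertainty band
    print(band.shape)
    ```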

  11. HYBRID SYSTEM BASED FUZZY-PID CONTROL SCHEMES FOR UNPREDICTABLE PROCESS

    Directory of Open Access Journals (Sweden)

    M.K. Tan

    2011-07-01

    Full Text Available In general, the primary aim of the polymerization industry is to enhance process operation in order to obtain a product of high quality and purity. However, a sudden, large amount of heat is released rapidly when the two reactants, phenol and formalin, are mixed, owing to the exothermic nature of the reaction. This unpredictable heat causes the process temperature to deviate and hence affects the quality of the product. It is therefore vital to control the process temperature during polymerization. In modern industry, fuzzy logic is commonly used to auto-tune PID controllers for temperature control. However, this method needs an experienced operator to fine-tune the fuzzy membership functions and universe of discourse by trial and error, so the settings of the fuzzy inference system may be inaccurate owing to human error. Moreover, control of the process can be challenging because rapid changes in the plant parameters increase the process complexity. This paper proposes an optimization scheme using a hybrid of Q-learning (QL) and a genetic algorithm (GA) to optimize the fuzzy membership functions so that the conventional fuzzy-PID controller can control the process temperature more effectively. The performance of the proposed optimization scheme is compared with that of the existing fuzzy-PID scheme. The results show that the proposed scheme controls the process temperature more effectively even when a disturbance is introduced.
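
    A toy version of the idea, evolving fuzzy membership parameters against a simulated closed-loop cost, is sketched below; the process model, fuzzy sets, gain levels, and GA settings are all invented, and the Q-learning half of the proposed hybrid is omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate(widths, setpoint=1.0, dt=0.05, steps=400):
        """Toy first-order thermal process with a mid-run heat burst; two fuzzy
        sets over |error| (their widths evolved by the GA) schedule the P gain."""
        w_small, w_large = widths
        T = integ = iae = 0.0
        for k in range(steps):
            e = setpoint - T
            small = max(1.0 - abs(e) / w_small, 0.0)   # membership "error is small"
            large = min(abs(e) / w_large, 1.0)         # membership "error is large"
            kp = (2.0 * small + 8.0 * large) / (small + large + 1e-9)
            u = kp * e + 0.5 * integ                   # fuzzy-scheduled P + fixed I
            integ += e * dt
            d = 0.6 if 200 <= k < 220 else 0.0         # exothermic disturbance
            T += dt * (-T + u + d)
            iae += abs(e) * dt
        return iae                                     # integral absolute error

    # Plain (mu + lambda)-style GA over the two membership widths.
    pop = rng.uniform(0.05, 1.0, size=(20, 2))
    for gen in range(30):
        scores = np.array([simulate(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:5]]          # truncation selection
        children = np.clip(parents[rng.integers(0, 5, size=15)]
                           + rng.normal(0.0, 0.05, size=(15, 2)), 0.01, 2.0)
        pop = np.vstack([parents, children])
    print("best IAE:", min(simulate(ind) for ind in pop))
    ```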

  12. Constraint Satisfaction Inference : Non-probabilistic Global Inference for Sequence Labelling

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Daelemans, W.; Basili, R.; Moschitti, A.

    2006-01-01

    We present a new method for performing sequence labelling based on the idea of using a machine-learning classifier to generate several possible output sequences, and then applying an inference procedure to select the best sequence among those. Most sequence labelling methods following a similar

  13. Human brain lesion-deficit inference remapped.

    Science.gov (United States)

    Mah, Yee-Haur; Husain, Masud; Rees, Geraint; Nachev, Parashkev

    2014-09-01

    Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury (the commonest aetiology in lesion-deficit studies), where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant.

  14. Problem Solving as Probabilistic Inference with Subgoaling: Explaining Human Successes and Pitfalls in the Tower of Hanoi.

    Science.gov (United States)

    Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni

    2016-04-01

    How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a "specialized" domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained, such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the "community structure" of the ToH and their difficulties in executing so-called "counterintuitive" movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand: a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit effective solutions to be found; when subgoals are unavailable, problem solving deficits follow. Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving.

  15. Meta-learning framework applied in bioinformatics inference system design.

    Science.gov (United States)

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

    This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow in which the user provides feedback on final classification decisions, which are stored together with the analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework contrasted several optimisation methods under a range of parameter settings. The resulting inference systems were also compared with other standard classification methods and showed accurate prediction capabilities.

  16. Active Inference, homeostatic regulation and adaptive behavioural control.

    Science.gov (United States)

    Pezzulo, Giovanni; Rigoli, Francesco; Friston, Karl

    2015-11-01

    We review a theory of homeostatic regulation and adaptive behavioural control within the Active Inference framework. Our aim is to connect two research streams that are usually considered independently; namely, Active Inference and associative learning theories of animal behaviour. The former uses a probabilistic (Bayesian) formulation of perception and action, while the latter calls on multiple (Pavlovian, habitual, goal-directed) processes for homeostatic and behavioural control. We offer a synthesis of these classical processes and cast them as successive hierarchical contextualisations of sensorimotor constructs, using the generative models that underpin Active Inference. This dissolves any apparent mechanistic distinction between the optimization processes that mediate classical control or learning. Furthermore, we generalize the scope of Active Inference by emphasizing interoceptive inference and homeostatic regulation. The ensuing homeostatic (or allostatic) perspective provides an intuitive explanation for how priors act as drives or goals to enslave action, and emphasises the embodied nature of inference. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Bayesian inference data evaluation and decisions

    CERN Document Server

    Harney, Hanns Ludwig

    2016-01-01

    This new edition offers a comprehensive introduction to the analysis of data using Bayes rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background or the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives and fitting with multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with man...

  18. Bayesian inference and updating of reliability data

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Cullingford, M.C.; David, H.T.; Husseiny, A.A.

    1980-01-01

    A Bayes methodology for inference of reliability values using available but scarce current data is discussed. The method can be used to update failure rates as more information becomes available from field experience, assuming that the performance of a given component (or system) exhibits a nonhomogeneous Poisson process. Bayes' theorem is used to summarize the historical evidence and current component data in the form of a posterior distribution suitable for prediction and for smoothing or interpolation. An example is given. It may be appropriate to apply the methodology developed here to human error data, in which case the exponential model might be used to describe the learning behavior of the operator or maintenance crew personnel
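
    For the Poisson failure model described here, the Bayes update has a simple conjugate form: with a Gamma(alpha, beta) prior on the rate, observing k failures in t hours gives a Gamma(alpha + k, beta + t) posterior. The sketch below uses invented prior values.

    ```python
    # Conjugate Gamma-Poisson updating of a failure rate (prior values invented:
    # roughly 2 failures' worth of historical evidence over 1000 hours).
    alpha, beta = 2.0, 1000.0

    def update(alpha, beta, failures, hours):
        """Posterior after observing `failures` events in `hours` of field experience."""
        return alpha + failures, beta + hours

    alpha, beta = update(alpha, beta, failures=3, hours=500.0)
    print("posterior mean rate:", alpha / beta, "per hour")
    # Further field data can be folded in incrementally with repeated calls,
    # which is the "updating as more information becomes available" step above.
    ```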

  19. Automatic inference of indexing rules for MEDLINE

    Directory of Open Access Journals (Sweden)

    Shooshan Sonya E

    2008-11-01

    Full Text Available Abstract Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI, a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  20. Progression inference for somatic mutations in cancer

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    2017-04-01

    Full Text Available Computational methods were employed to determine progression inference of genomic alterations in commonly occurring cancers. Using cross-sectional TCGA data, we computed evolutionary trajectories involving selectivity relationships among pairs of gene-specific genomic alterations such as somatic mutations, deletions, amplifications, downregulation, and upregulation among the top 20 driver genes associated with each cancer. Results indicate that the majority of hierarchies involved TP53, PIK3CA, ERBB2, APC, KRAS, EGFR, IDH1, VHL, etc. Research into the order and accumulation of genomic alterations among cancer driver genes will continue to grow as the costs of next-generation sequencing decline and personalized/precision medicine incorporates whole-genome scans into the diagnosis and treatment of cancer. Keywords: Oncology, Cancer research, Genetics, Computational biology

  1. Inferring Phylogenetic Networks from Gene Order Data

    Directory of Open Access Journals (Sweden)

    Alexey Anatolievich Morozov

    2013-01-01

    Full Text Available Existing algorithms allow us to infer phylogenetic networks from sequences (DNA, protein or binary), sets of trees, and distance matrices, but there are no methods to build them using the gene order data as an input. Here we describe several methods to build split networks from the gene order data, perform simulation studies, and use our methods for analyzing and interpreting different real gene order datasets. All proposed methods are based on intermediate data, which can be generated from genome structures under study and used as an input for network construction algorithms. Three intermediates are used: a set of jackknife trees, a distance matrix, and binary encoding. According to simulations and case studies, the best intermediates are jackknife trees and the distance matrix (when used with the Neighbor-Net algorithm). Binary encoding can also be useful, but only when the methods mentioned above cannot be used.

  2. Supplier Selection Using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    hamidreza kadhodazadeh

    2014-01-01

    Full Text Available Suppliers are one of the most vital parts of the supply chain, and their performance has a significant indirect effect on customer satisfaction. Since customers' expectations of organizations differ, organizations must weigh different criteria accordingly. Much research in recent years has used different criteria and methods for this problem. The purpose of this study is to propose an approach for choosing a supplier in a food manufacturing company considering cost, quality, service, type of relationship, and the organizational structure of the supplier. To evaluate suppliers against these criteria, a fuzzy inference system is used. The input data of this system comprise each supplier's score on every criterion, obtained by the AHP approach, and the output is the final score of each supplier. Finally, a supplier is selected that, although not the best in price and quality, achieved a good score on all of the criteria.
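
    A minimal sketch of such a scoring system is given below, with invented triangular membership functions, an invented rule base, and Sugeno-style weighted-average defuzzification; a full system would take the AHP-derived scores on all five criteria as inputs.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with corners a <= b <= c."""
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def supplier_score(cost, quality):
        """Two-input fuzzy scoring on 0-10 scales (rules and shapes invented)."""
        cheap, pricey = tri(cost, 0, 0, 5), tri(cost, 3, 10, 10)
        good, poor = tri(quality, 5, 10, 10), tri(quality, 0, 0, 5)
        # Rule firing strengths paired with consequent score levels.
        rules = [(min(cheap, good), 9.0), (min(cheap, poor), 5.0),
                 (min(pricey, good), 6.0), (min(pricey, poor), 2.0)]
        w = sum(s for s, _ in rules)
        return sum(s * v for s, v in rules) / (w + 1e-9)  # weighted average

    print(supplier_score(cost=4.0, quality=8.0))   # fairly cheap and good: high score
    ```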

  3. Financial incentive schemes in primary care

    Directory of Open Access Journals (Sweden)

    Gillam S

    2015-09-01

    Full Text Available Stephen Gillam Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, Cambridge, UK Abstract: Pay-for-performance (P4P) schemes have become increasingly common in primary care, and this article reviews their impact. It is based primarily on existing systematic reviews. The evidence suggests that P4P schemes can change health professionals' behavior and improve recorded disease management of those clinical processes that are incentivized. P4P may narrow inequalities in performance between deprived and nondeprived areas. However, such schemes have unintended consequences. Whether P4P improves the patient experience, the outcomes of care or population health is less clear. These practical uncertainties mirror the ethical concerns of many clinicians that a reductionist approach to managing markers of chronic disease runs counter to the humanitarian values of family practice. The variation in P4P schemes between countries reflects different historical and organizational contexts. With so much uncertainty regarding the effects of P4P, policy makers are well advised to proceed carefully with the implementation of such schemes until and unless clearer evidence for their cost–benefit emerges. Keywords: financial incentives, pay for performance, quality improvement, primary care

  4. Short-Term Saved Leave Scheme

    CERN Multimedia

    2007-01-01

    As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme a...

  5. Short-Term Saved Leave Scheme

    CERN Multimedia

    HR Department

    2007-01-01

    As announced at the meeting of the Standing Concertation Committee (SCC) on 26 June 2007 and in Bulletin No. 28/2007, the existing Saved Leave Scheme will be discontinued as of 31 December 2007. Staff participating in the Scheme will shortly receive a contract amendment stipulating the end of financial contributions compensated by saved leave. Leave already accumulated on saved leave accounts can continue to be taken in accordance with the rules applicable to the current scheme. A new system of saved leave will enter into force on 1 January 2008 and will be the subject of a new implementation procedure entitled "Short-term saved leave scheme" dated 1 January 2008. At its meeting on 4 December 2007, the SCC agreed to recommend the Director-General to approve this procedure, which can be consulted on the HR Department’s website at the following address: https://cern.ch/hr-services/services-Ben/sls_shortterm.asp All staff wishing to participate in the new scheme ...

  6. Gene expression inference with deep learning.

    Science.gov (United States)

    Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui

    2016-06-15

    Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationship between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
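
    The landmark-to-target regression idea is easy to prototype. The sketch below compares a small scikit-learn multilayer perceptron against linear regression on invented synthetic expression data; D-GEX itself is a larger multi-task network with dropout trained on the LINCS compendium, so the numbers here only illustrate the nonlinearity argument.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n, n_landmark, n_target = 2000, 50, 5      # far smaller than the LINCS scale
    L_expr = rng.normal(size=(n, n_landmark))
    W = rng.normal(size=(n_landmark, n_target))
    T = np.tanh(L_expr @ W) + 0.1 * rng.normal(size=(n, n_target))  # nonlinear targets
    tr, te = slice(0, 1600), slice(1600, None)

    mlp = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500,
                       random_state=0).fit(L_expr[tr], T[tr])
    lin = LinearRegression().fit(L_expr[tr], T[tr])
    mae = lambda pred: np.abs(pred - T[te]).mean()
    print("MLP MAE:", mae(mlp.predict(L_expr[te])))
    print("LR  MAE:", mae(lin.predict(L_expr[te])))
    ```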

  7. Systematic parameter inference in stochastic mesoscopic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physics systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
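
    The compressive-sensing step can be illustrated compactly: fit a sparse set of gPC coefficients by L1-regularized regression on a handful of sampled parameter sets. The response function, basis degree, and regularization strength below are invented for the example.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite basis
    from sklearn.linear_model import Lasso
    from itertools import product

    rng = np.random.default_rng(5)
    xi = rng.normal(size=(40, 2))                     # 40 sampled parameter sets
    # Stand-in "target property" response: constant + linear + He_2 term.
    target = 1.0 + 0.5 * xi[:, 0] + 0.2 * (xi[:, 1]**2 - 1.0)

    # Tensor-product Hermite basis up to total degree 4.
    degs = [d for d in product(range(5), repeat=2) if sum(d) <= 4]
    def basis_row(x):
        return [hermeval(x[0], [0]*i + [1]) * hermeval(x[1], [0]*j + [1])
                for i, j in degs]
    Phi = np.array([basis_row(x) for x in xi])

    coef = Lasso(alpha=1e-3, max_iter=50000).fit(Phi, target).coef_
    for d, c in zip(degs, coef):
        if abs(c) > 1e-3:
            print(d, round(c, 3))    # sparse recovery of the active gPC terms
    ```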

  8. State-Space Inference and Learning with Gaussian Processes

    OpenAIRE

    Turner, R; Deisenroth, MP; Rasmussen, CE

    2010-01-01

    State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. C...

  9. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  10. A fast resonance interference treatment scheme with subgroup method

    International Nuclear Information System (INIS)

    Cao, L.; He, Q.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    A fast Resonance Interference Factor (RIF) scheme is proposed to treat the resonance interference effects between different resonance nuclides. This scheme utilizes the conventional subgroup method to evaluate the self-shielded cross sections of the dominant resonance nuclide in the heterogeneous system and the hyper-fine energy group method to represent the resonance interference effects in a simplified homogeneous model. In this paper, the newly implemented scheme is compared to the background iteration scheme, the Resonance Nuclide Group (RNG) scheme and the conventional RIF scheme. The numerical results show that the errors of the effective self-shielded cross sections are significantly reduced by the fast RIF scheme compared with the background iteration scheme and the RNG scheme. Besides, the fast RIF scheme consumes less computation time than the conventional RIF schemes. The speed-up ratio is ~4.5 for MOX pin cell problems. (author)

  11. Probabilistic logic networks a comprehensive framework for uncertain inference

    CERN Document Server

    Goertzel, Ben; Goertzel, Izabela Freire; Heljakka, Ari

    2008-01-01

    This comprehensive book describes Probabilistic Logic Networks (PLN), a novel conceptual, mathematical and computational approach to uncertain inference. A broad scope of reasoning types is considered.

  12. Parametric statistical inference basic theory and modern approaches

    CERN Document Server

    Zacks, Shelemyahu; Tsokos, C P

    1981-01-01

    Parametric Statistical Inference: Basic Theory and Modern Approaches presents the developments and modern trends in statistical inference to students who do not have advanced mathematical and statistical preparation. The topics discussed in the book are basic and common to many fields of statistical inference and thus serve as a jumping board for in-depth study. The book is organized into eight chapters. Chapter 1 provides an overview of how the theory of statistical inference is presented in subsequent chapters. Chapter 2 briefly discusses statistical distributions and their properties. Chapt

  13. How update schemes influence crowd simulations

    International Nuclear Information System (INIS)

    Seitz, Michael J; Köster, Gerta

    2014-01-01

    Time discretization is a key modeling aspect of dynamic computer simulations. In current pedestrian motion models based on discrete events, e.g. cellular automata and the Optimal Steps Model, fixed-order sequential updates and shuffle updates are prevalent. We propose to use event-driven updates that process events in the order they occur, and thus better match natural movement. In addition, we present a parallel update with collision detection and resolution for situations where computational speed is crucial. Two simulation studies serve to demonstrate the practical impact of the choice of update scheme. Not only do density-speed relations differ, but there is a statistically significant effect on evacuation times. Fixed-order sequential and random shuffle updates with a short update period come close to event-driven updates. The parallel update scheme overestimates evacuation times. All schemes can be employed for arbitrary simulation models with discrete events, such as car traffic or animal behavior. (paper)
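
    To make the practical impact concrete, the toy one-lane evacuation model below (not one of the article's pedestrian models) compares a fixed back-to-front sweep with a random shuffle update; the geometry and agent count are arbitrary.

    ```python
    import random

    def step(cells, order):
        """Advance each agent one cell to the right where free, in the given order."""
        n = len(cells)
        for i in order:
            if cells[i] == 1 and i + 1 < n and cells[i + 1] == 0:
                cells[i], cells[i + 1] = 0, 1

    def evacuate(shuffle, n=50, agents=25, seed=0):
        random.seed(seed)
        cells = [1] * agents + [0] * (n - agents)   # crowd queued at the left
        t = 0
        while sum(cells) > 0 and t < 10000:
            if cells[-1] == 1:
                cells[-1] = 0                       # agent at the exit leaves
            order = list(range(n))
            if shuffle:
                random.shuffle(order)               # random shuffle update
            else:
                order.reverse()                     # fixed back-to-front sweep
            step(cells, order)
            t += 1
        return t

    print("fixed-order:", evacuate(False), "steps; shuffle:", evacuate(True), "steps")
    ```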

  14. Asynchronous discrete event schemes for PDEs

    Science.gov (United States)

    Stone, D.; Geiger, S.; Lord, G. J.

    2017-08-01

    A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.

  15. An adaptive Cartesian control scheme for manipulators

    Science.gov (United States)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.

  16. A Traffic Restriction Scheme for Enhancing Carpooling

    Directory of Open Access Journals (Sweden)

    Dong Ding

    2017-01-01

    Full Text Available For the purpose of alleviating traffic congestion, this paper proposes a scheme to encourage travelers to carpool by traffic restriction. Using a variational inequality we describe travelers’ mode (solo driving or carpooling) and route choice under the user equilibrium principle in the context of fixed demand, and examine the performance of a simple network with various restriction links, restriction proportions, and carpooling costs. Then the optimal traffic restriction scheme aiming at minimal total travel cost is designed through a bilevel program and applied to a Sioux Falls network example with a genetic algorithm. According to various requirements, optimal restriction regions and proportions for restricted automobiles are captured. From the results it is found that a traffic restriction scheme can enhance carpooling and alleviate congestion. However, higher carpooling demand is not always helpful to the whole network. The topology of the network, OD demand, and carpooling cost are among the factors influencing the performance of the traffic system.

  17. Quantum Watermarking Scheme Based on INEQR

    Science.gov (United States)

    Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-04-01

    Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the size requirement of the carrier image. Secondly, swap and XOR operations are applied to the processed pixels. Since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Thirdly, both the watermark embedding and extraction operations are described, using the key image, the swap operation and the LSB algorithm. When the embedding is performed, the binary key image is changed, which indicates that the watermark has been embedded. Conversely, if the watermark image is to be extracted, the key's state must be detected: when the key's state is |1>, the extraction operation is carried out. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
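
    A classical analogue of the key-XOR-plus-LSB embedding is sketched below (the scheme itself operates on INEQR quantum image states, which this does not model); array sizes and values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    carrier = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy grayscale carrier
    watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)   # binary watermark
    key = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)         # binary key image

    stego = (carrier & 0xFE) | (watermark ^ key)   # LSB embedding of key-XORed bits
    recovered = (stego & 1) ^ key                  # extraction reverses the XOR
    assert np.array_equal(recovered, watermark)
    print("watermark recovered exactly")
    ```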

  18. Improvement of One Quantum Encryption Scheme

    Science.gov (United States)

    Cao, Zhengjun; Liu, Lihua

    2012-01-01

    Zhou et al. proposed a quantum encryption scheme based on quantum computation in 2006 [N. Zhou et al., Physica A362 (2006) 305]. Each qubit of the ciphertext is constrained to two pairs of conjugate states. So, its implementation is feasible with the existing technology. But it is inefficient since it entails six key bits to encrypt one message bit, and the resulting ciphertext for one message bit consists of three qubits. In addition, its security cannot be directly reduced to the well-known BB84 protocol. In this paper, we improve it using the technique developed in BB84 protocol. The new scheme entails only two key bits to encrypt one message bit. The resulting ciphertext is just composed of two qubits. It saves about half the cost without loss of security. Moreover, the new scheme is probabilistic instead of deterministic.

  19. A universal encoding scheme for MIMO transmission using a single active element for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama; Papadias, C.B.; Kalis, A.

    2009-01-01

    A universal scheme for encoding multiple symbol streams using a single driven element (and consequently a single radio frequency (RF) frontend) surrounded by parasitic elements (PE) loaded with variable reactive loads, is proposed in this paper. The proposed scheme is based on creating a MIMO sys...

  20. A simple angular transmit diversity scheme using a single RF frontend for PSK modulation schemes

    DEFF Research Database (Denmark)

    Alrabadi, Osama Nafeth Saleem; Papadias, Constantinos B.; Kalis, Antonis

    2009-01-01

    array (SPA) with a single transceiver, and an array area of 0.0625 square wavelengths. The scheme which requires no channel state information (CSI) at the transmitter, provides mainly a diversity gain to combat against multipath fading. The performance/capacity of the proposed diversity scheme...

  1. Carbon trading: Current schemes and future developments

    International Nuclear Information System (INIS)

    Perdan, Slobodan; Azapagic, Adisa

    2011-01-01

    This paper looks at the greenhouse gas (GHG) emissions trading schemes and examines the prospects of carbon trading. The first part of the paper gives an overview of several mandatory GHG trading schemes around the world. The second part focuses on the future trends in carbon trading. It argues that the emergence of new schemes, a gradual enlargement of the current ones, and willingness to link existing and planned schemes seem to point towards geographical, temporal and sectoral expansion of emissions trading. However, such expansion would need to overcome some considerable technical and non-technical obstacles. Linking of the current and emerging trading schemes requires not only considerable technical fixes and harmonisation of different trading systems, but also necessitates clear regulatory and policy signals, continuing political support and a more stable economic environment. Currently, the latter factors are missing. The global economic turmoil and its repercussions for the carbon market, a lack of the international deal on climate change defining the Post-Kyoto commitments, and unfavourable policy shifts in some countries, cast serious doubts on the expansion of emissions trading and indicate that carbon trading enters an uncertain period. - Highlights: → The paper provides an extensive overview of mandatory emissions trading schemes around the world. → Geographical, temporal and sectoral expansion of emissions trading are identified as future trends. → The expansion requires considerable technical fixes and harmonisation of different trading systems. → Clear policy signals, political support and a stable economic environment are needed for the expansion. → A lack of the post-Kyoto commitments and unfavourable policy shifts indicate an uncertain future for carbon trading.

  2. Pressure correction schemes for compressible flows

    International Nuclear Information System (INIS)

    Kheriji, W.

    2011-01-01

    This thesis is concerned with the development of semi-implicit fractional step schemes for the compressible Navier-Stokes equations; these schemes are part of the class of the pressure correction methods. The chosen spatial discretization is staggered: non conforming mixed finite elements (Crouzeix-Raviart or Rannacher-Turek) or the classic MAC scheme. An upwind finite volume discretization of the mass balance guarantees the positivity of the density. The positivity of the internal energy is obtained by discretizing the internal energy balance by an upwind finite volume scheme and by coupling the discrete internal energy balance with the pressure correction step. A special finite volume discretization on dual cells is performed for the convection term in the momentum balance equation, and a renormalisation step for the pressure is added to the algorithm; this ensures the control in time of the integral of the total energy over the domain. All these a priori estimates imply the existence of a discrete solution by a topological degree argument. The application of this scheme to Euler equations raises an additional difficulty. Indeed, obtaining correct shocks requires the scheme to be consistent with the total energy balance, a property which we obtain as follows. First of all, a local discrete kinetic energy balance is established; it contains source terms which we somehow compensate in the internal energy balance. The kinetic and internal energy equations are associated with the dual and primal meshes respectively, and thus cannot be added to obtain a total energy balance; its continuous counterpart is however recovered at the limit: if we suppose that a sequence of discrete solutions converges when the space and time steps tend to 0, we indeed show, in 1D at least, that the limit satisfies a weak form of the equation. These theoretical results are supported by numerical tests. Similar results are obtained for the barotropic Navier-Stokes equations. (author)

  3. EPU correction scheme study at the CLS

    Energy Technology Data Exchange (ETDEWEB)

    Bertwistle, Drew, E-mail: drew.bertwistle@lightsource.ca; Baribeau, C.; Dallin, L.; Chen, S.; Vogt, J.; Wurtz, W. [Canadian Light Source Inc. 44 Innovation Boulevard, Saskatoon, SK S7N 2V3 (Canada)

    2016-07-27

    The Canadian Light Source (CLS) Quantum Materials Spectroscopy Center (QMSC) beamline will employ a novel double period (55 mm, 180 mm) elliptically polarizing undulator (EPU) to produce photons of arbitrary polarization in the soft X-ray regime. The long period and high field of the 180 mm period EPU will have a strong dynamic focusing effect on the storage ring electron beam. We have considered two partial correction schemes, a 4 m long planar array of BESSY-II style current strips, and soft iron L-shims. In this paper we briefly consider the implementation of these correction schemes.

  4. Verification of an objective analysis scheme

    International Nuclear Information System (INIS)

    Cats, G.J.; Haan, B.J. de; Hafkenscheid, L.M.

    1987-01-01

    An intermittent data assimilation scheme has been used to produce wind and precipitation fields during the 10 days after the explosion at the Chernobyl nuclear power plant on 25 April 1986. The wind fields are analyses, the precipitation fields have been generated by the forecast model part of the scheme. The precipitation fields are of fair quality. The quality of the wind fields has been monitored by the ensuing trajectories. These were found to describe the arrival times of radioactive air in good agreement with most observational data, taken all over Europe. The wind analyses are therefore considered to be reliable. 25 refs.; 13 figs

  5. Optimal powering schemes for legged robotics

    Science.gov (United States)

    Muench, Paul; Bednarz, David; Czerniak, Gregory P.; Cheok, Ka C.

    2010-04-01

    Legged Robots have tremendous mobility, but they can also be very inefficient. These inefficiencies can be due to suboptimal control schemes, among other things. If your goal is to get from point A to point B in the least amount of time, your control scheme will be different from if your goal is to get there using the least amount of energy. In this paper, we seek a balance between these extremes by looking at both efficiency and speed. We model a walking robot as a rimless wheel, and, using Pontryagin's Maximum Principle (PMP), we find an "on-off" control for the model, and describe the switching curve between these control extremes.

  6. System Protection Schemes in Eastern Denmark

    DEFF Research Database (Denmark)

    Rasmussen, Joana

    outages in the southern part of the 132-kV system introduce further stress in the power system, eventually leading to a voltage collapse. The local System Protection Scheme against voltage collapse is designed as a response-based scheme, which is dependent on local indication of reactive and active power… effective measures, because they are associated with large reactive power losses in the transmission system. Ordered reduction of wind generation is considered an effective measure to maintain voltage stability in the system. Reactive power in the system is released due to tripping of a significant amount… system. In that way, the power system capability could be extended beyond normal limits…

  7. Group Buying Schemes : A Sustainable Business Model?

    OpenAIRE

    Köpp, Sebastian; Mukhachou, Aliaksei; Schwaninger, Markus

    2013-01-01

    The authors examine whether group buying schemes, such as those offered by the companies Groupon and Dein Deal, are a sustainable business model. Based on the Groupon case study and a System Dynamics model, they conclude that the business model must be changed if the company is to remain viable in the long run.

  8. New Imaging Operation Scheme at VLTI

    Science.gov (United States)

    Haubois, Xavier

    2018-04-01

    After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.

  9. Hilbert schemes of points and Heisenberg algebras

    International Nuclear Information System (INIS)

    Ellingsrud, G.; Goettsche, L.

    2000-01-01

    Let X^[n] be the Hilbert scheme of n points on a smooth projective surface X over the complex numbers. In these lectures we describe the action of the Heisenberg algebra on the direct sum of the cohomologies of all the X^[n], which has been constructed by Nakajima. In the second half of the lectures we study the relation of the Heisenberg algebra action and the ring structures of the cohomologies of the X^[n], following recent work of Lehn. In particular we study the Chern and Segre classes of tautological vector bundles on the Hilbert schemes X^[n]. (author)

  10. Security problem on arbitrated quantum signature schemes

    International Nuclear Information System (INIS)

    Choi, Jeong Woon; Chang, Ku-Young; Hong, Dowon

    2011-01-01

    Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.

  11. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available Presentation outline: introduction to hyperspectral remote sensing; objective of Study 1; study area; data used; methodology; results; background and research question for Study 2; study area and data; methodology; results; conclusions. Debba (CSIR), Optimal Sampling Schemes applied in Geology, UP 2010.

  12. Quadratically convergent MCSCF scheme using Fock operators

    International Nuclear Information System (INIS)

    Das, G.

    1981-01-01

    A quadratically convergent formulation of the MCSCF method using Fock operators is presented. Unlike earlier formulations based on Fock operators, the present one is quadratically convergent. In contrast to other quadratically convergent schemes, as well as the one based on the generalized Brillouin theorem, this method leads easily to a hybrid scheme where the weakly coupled orbitals (such as the core) are handled purely by Fock equations, while the rest of the orbitals are treated by a quadratically convergent approach with a truncated virtual space obtained by the use of the corresponding Fock equations.

  13. Security problem on arbitrated quantum signature schemes

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jeong Woon [Emerging Technology R and D Center, SK Telecom, Kyunggi 463-784 (Korea, Republic of); Chang, Ku-Young; Hong, Dowon [Cryptography Research Team, Electronics and Telecommunications Research Institute, Daejeon 305-700 (Korea, Republic of)

    2011-12-15

    Many arbitrated quantum signature schemes implemented with the help of a trusted third party have been developed up to now. In order to guarantee unconditional security, most of them take advantage of the optimal quantum one-time encryption based on Pauli operators. However, in this paper we point out that the previous schemes provide security only against a total break attack and show in fact that there exists an existential forgery attack that can validly modify the transmitted pair of message and signature. In addition, we also provide a simple method to recover security against the proposed attack.

  14. Clocking Scheme for Switched-Capacitor Circuits

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jesper

    1998-01-01

    A novel clocking scheme for switched-capacitor (SC) circuits is presented. It can enhance the understanding of SC circuits and the errors caused by MOSFET (MOS) switches. Charge errors, and techniques to make SC circuits less sensitive to them, are discussed.

  15. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    Science.gov (United States)

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime and the space to quantities that grow only near-linearly (up to logarithmic factors) in the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
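
    The core trick, convolving count distributions pairwise in a balanced tree, is easy to sketch; the paper additionally uses FFT-based convolution and message passing to obtain posteriors, which the plain numpy.convolve version below omits.

    ```python
    import numpy as np

    def convolution_tree(dists):
        """Distribution of a sum of independent count variables, combined pairwise;
        a balanced tree keeps the intermediate supports small."""
        layer = [np.asarray(d, dtype=float) for d in dists]
        while len(layer) > 1:
            nxt = [np.convolve(layer[i], layer[i + 1])
                   for i in range(0, len(layer) - 1, 2)]
            if len(layer) % 2:
                nxt.append(layer[-1])
            layer = nxt
        return layer[0]

    # Four noisy indicator variables (e.g., peptide presence), P(X_i = 1) varying.
    dists = [[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]]
    print(convolution_tree(dists))   # P(sum = 0), ..., P(sum = 4)
    ```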

  16. Making inference from wildlife collision data: inferring predator absence from prey strikes

    Directory of Open Access Journals (Sweden)

    Peter Caley

    2017-02-01

    Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to make valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that, conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.
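
    A heavily simplified Monte Carlo version of the conditioning argument is sketched below; the proportional numerical response and the lognormal prior on the strike-rate ratio are hypothetical choices made purely for illustration, not the relationship fitted in the paper:

      import numpy as np

      # Crude caricature: fox strikes ~ Poisson with rate equal to a ratio
      # r times the lagomorph strike count; r's lognormal prior is invented
      # for this sketch and is not the paper's fitted numerical response.
      rng = np.random.default_rng(0)
      n_lagomorph = 15                                  # observed prey strikes
      r = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=100_000)
      lam_fox = r * n_lagomorph                         # implied fox strike rate
      p_zero = np.exp(-lam_fox).mean()                  # predictive P(no strikes)
      print(f"P(0 fox strikes | {n_lagomorph} lagomorph strikes) ~ {p_zero:.3f}")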

  17. Making inference from wildlife collision data: inferring predator absence from prey strikes.

    Science.gov (United States)

    Caley, Peter; Hosack, Geoffrey R; Barry, Simon C

    2017-01-01

    Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to make valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that, conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.

  18. Inferred referendum: a rule for committee decisions

    Energy Technology Data Exchange (ETDEWEB)

    Wolsky, A; Sanathanan, L

    1979-01-01

    A new method of social choice is presented. The result of the method coincides with that of majority voting when majority voting does not produce an intransitivity among the alternatives under consideration. When majority voting would produce an intransitivity, the method orders the alternatives in the same way as would the transitive constituency that the committee members are most likely to represent. Analysis of the application of the method to three alternatives shows that the resulting order depends only on the committee members' votes between pairs of alternatives; the resulting order is less sensitive to irrelevant alternatives than the orders provided by other schemes; and, when majority voting produces an intransitivity, the hypothesis that the committee's constituency is as assumed is almost as likely as the hypothesis that the constituency precisely mirrors the committee.
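
    The intransitivity the rule must repair is the classic Condorcet cycle; the sketch below constructs one from three hypothetical member rankings and tabulates the pairwise majority outcomes:

      from itertools import combinations

      # Three hypothetical member rankings that generate a majority cycle.
      rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

      def majority_prefers(x, y):
          votes = sum(1 if r.index(x) < r.index(y) else -1 for r in rankings)
          return x if votes > 0 else y

      for x, y in combinations("ABC", 2):
          print(f"{x} vs {y}: majority prefers {majority_prefers(x, y)}")
      # A vs B -> A, A vs C -> C, B vs C -> B: a cycle, so no transitive
      # order exists and the inferred-referendum step must instead pick the
      # most likely transitive constituency.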

  19. Analysis of Program Obfuscation Schemes with Variable Encoding Technique

    Science.gov (United States)

    Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi

    Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
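
    Although the XOR-encoding scheme operates on program variables at the code-transformation level, its essence can be caricatured in a few lines. The sketch below (key value and helper names are hypothetical) keeps a value only in XOR-encoded form and decodes it transiently at use sites:

      # Hypothetical key and helpers; the real scheme rewrites expressions
      # over encoded variables at compile time rather than calling helpers.
      KEY = 0xA5

      def enc(v):
          return v ^ KEY          # store v only in encoded form

      def dec(v_enc):
          return v_enc ^ KEY      # decode transiently at a use site

      balance_enc = enc(1200)                      # plain 1200 never stored
      balance_enc = enc(dec(balance_enc) + 50)     # arithmetic on the secret
      print(dec(balance_enc))                      # 1250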

  20. Investigation of schemes for incorporating generator Q limits in the ...

    Indian Academy of Sciences (India)

    Handling generator Q limits is an important feature needed in any practical load flow method. This paper presents a comprehensive investigation of two classes of schemes intended to handle this aspect, i.e. the bus-type switching scheme and the sensitivity scheme. We propose two new sensitivity-based schemes ...

  1. Sources of funding for community schemes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-11-01

    There is an increasing level of interest amongst community groups in the UK in becoming involved in the development of renewable energy schemes. Often, however, these community groups have only limited funds of their own, so any additional funds that can be identified to help fund their renewable energy scheme can be very useful. There is a range of funding sources available that provide grants or loans for which community groups are eligible to apply. Few of these funding sources are targeted towards renewable energy specifically; nevertheless, the funds may be applicable to renewable energy schemes under appropriate circumstances. To date, however, few of these funds have been accessed by community groups for renewable energy initiatives. One of the reasons for this low take-up of the funds on offer could be that the funding sources may be difficult and time-consuming to identify, especially where the energy component of the fund is not readily apparent. This directory draws together details of many of the principal funding sources available in the UK that may consider providing funds to community groups wanting to develop a renewable energy scheme. (author)

  2. The QKD network: model and routing scheme

    Science.gov (United States)

    Yang, Chao; Zhang, Hongqi; Su, Jinhai

    2017-11-01

    Quantum key distribution (QKD) technology can establish unconditionally secure keys between two communicating parties. Although this technology has some inherent constraints, such as distance and point-to-point mode limits, building a QKD network with multiple point-to-point QKD devices can overcome these constraints. Considering the development level of current technology, the trust-relaying QKD network is the first choice for building a practical QKD network. However, previous research has not addressed a routing method for the trust-relaying QKD network in detail. This paper focuses on the routing issues, builds a model of the trust-relaying QKD network to ease analysis and understanding of this network, and proposes a dynamical routing scheme for it. From the viewpoint of designing a dynamical routing scheme in a classical network, the proposed scheme consists of three components: a Hello protocol that helps share network topology information, a routing algorithm that selects a set of suitable paths and establishes the routing table, and a link-state update mechanism that keeps the routing table up to date. Experiments and evaluation demonstrate the validity and effectiveness of the proposed routing scheme.
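
    A minimal sketch of the path-selection core, assuming a hypothetical topology and an edge cost that penalizes links with low available key rate; the paper's scheme wraps the Hello protocol and link-state updates around such a shortest-path computation:

      import heapq

      # Shortest path over a QKD relay graph; the cost of a link is taken
      # as the reciprocal of its available secret-key rate, so key-starved
      # links are avoided. Topology and rates are made up for illustration.
      def dijkstra(graph, src, dst):
          dist, prev = {src: 0.0}, {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, float("inf")):
                  continue                      # stale heap entry
              for v, key_rate in graph[u].items():
                  nd = d + 1.0 / key_rate
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          path, node = [dst], dst
          while node != src:
              node = prev[node]
              path.append(node)
          return path[::-1]

      # Edge values: available secret-key rate (e.g. kbit/s) between nodes.
      net = {"A": {"B": 10, "C": 2}, "B": {"A": 10, "D": 8},
             "C": {"A": 2, "D": 9}, "D": {"B": 8, "C": 9}}
      print(dijkstra(net, "A", "D"))            # ['A', 'B', 'D']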

  3. SYNTHESIS OF VISCOELASTIC MATERIAL MODELS (SCHEMES

    Directory of Open Access Journals (Sweden)

    V. Bogomolov

    2014-10-01

    The principles of constructing structural schemes for materials with linear viscoelastic properties, in accordance with given experimental creep-test data, are analyzed. It is shown that there can be only four types of materials with linear viscoelastic properties.
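
    For orientation, the two elementary linear viscoelastic elements from which such structural schemes are assembled have the standard creep compliances below (E is the spring modulus, eta the dashpot viscosity); this is textbook material rather than the paper's four-type classification itself:

      % creep compliance J(t): strain per unit applied stress
      J_{\mathrm{Maxwell}}(t) = \frac{1}{E} + \frac{t}{\eta}, \qquad
      J_{\mathrm{Kelvin-Voigt}}(t) = \frac{1}{E}\bigl(1 - e^{-Et/\eta}\bigr)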

  4. BPHZL-subtraction scheme and axial gauges

    Energy Technology Data Exchange (ETDEWEB)

    Kreuzer, M.; Rebhan, A.; Schweda, M.; Piguet, O.

    1986-03-27

    The application of the BPHZL subtraction scheme to Yang-Mills theories in axial gauges is presented. In the auxiliary mass formulation we show the validity of the convergence theorems for subtracted momentum-space integrals, and we give the integral formulae necessary for one-loop calculations. (orig.).

  5. The data cyclotron query processing scheme

    NARCIS (Netherlands)

    R.A. Goncalves (Romulo); M.L. Kersten (Martin)

    2010-01-01

    Distributed database systems exploit static workload characteristics to steer data fragmentation and data allocation schemes. However, the grand challenge of distributed query processing is to come up with a self-organizing architecture, which exploits all resources to manage the hot

  6. THE DEVELOPMENT OF FREE PRIMARY EDUCATION SCHEME ...

    African Journals Online (AJOL)

    Education scheme in the Western Region ... marked a radical departure from the hitherto ... academic symposia, lectures, debates, reputable journals and standard ... [table: Enrolment in Primary Schools in the Western Region by Sex, 1953-1960] ... "Possibly no single decision of the decade prior to independence had ...

  7. High Order Semi-Lagrangian Advection Scheme

    Science.gov (United States)

    Malaga, Carlos; Mandujano, Francisco; Becerra, Julian

    2014-11-01

    In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must correctly compute the advection terms appearing in the equations governing fluid flow. Here we present a high order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation and nonlinear phenomena are presented to show the performance and convergence of the scheme. The high computational cost is considerably reduced when implemented on the massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).
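
    The trace-back-and-interpolate idea shared by semi-Lagrangian methods fits in a few lines. The sketch below is a plain backward, linearly interpolating step for 1D periodic advection, not the forward, high-order, moment-preserving scheme of the abstract; all grid parameters are arbitrary:

      import numpy as np

      # Backward semi-Lagrangian step for u_t + a*u_x = 0 on a periodic
      # grid: trace each grid point back along its characteristic, then
      # interpolate the old field at the departure point (linearly here).
      def semi_lagrangian_step(u, a, dx, dt):
          n = len(u)
          x = np.arange(n) * dx
          x_dep = (x - a * dt) % (n * dx)         # departure points
          j = np.floor(x_dep / dx).astype(int)    # left grid neighbor
          w = x_dep / dx - j                      # interpolation weight
          return (1 - w) * u[j] + w * u[(j + 1) % n]

      x = np.linspace(0.0, 1.0, 100, endpoint=False)
      u = np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)  # Gaussian pulse at x = 0.3
      for _ in range(50):
          u = semi_lagrangian_step(u, a=1.0, dx=0.01, dt=0.004)
      print(f"peak now near x = {np.argmax(u) * 0.01:.2f}")   # about 0.50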

  8. An HFB scheme in natural orbitals

    International Nuclear Information System (INIS)

    Reinhard, P.G.; Rutz, K.; Maruhn, J.A.

    1997-01-01

    We present a formulation of the Hartree-Fock-Bogoliubov (HFB) equations which solves the problem directly in the basis of natural orbitals. This provides a very efficient scheme which is particularly suited for large scale calculations on coordinate-space grids. (orig.)

  9. A classification scheme for LWR fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.S.; Williamson, D.A.; Notz, K.J.

    1988-11-01

    With over 100 light water nuclear reactors operating nationwide, representing designs by four primary vendors, and with reload fuel manufactured by these vendors and additional suppliers, a wide variety of fuel assembly types are in existence. At Oak Ridge National Laboratory, both the Systems Integration Program and the Characteristics Data Base project required a classification scheme for these fuels. This scheme can be applied to other areas and is expected to be of value to many Office of Civilian Radioactive Waste Management programs. To develop the classification scheme, extensive information on the fuel assemblies that have been and are being manufactured by the various nuclear fuel vendors was compiled, reviewed, and evaluated. It was determined that it is possible to characterize assemblies in a systematic manner, using a combination of physical factors. A two-stage scheme was developed consisting of 79 assembly types, which are grouped into 22 assembly classes. The assembly classes are determined by the general design of the reactor cores in which the assemblies are, or were, used. The general BWR and PWR classes are divided differently but both are based on reactor core configuration. 2 refs., 15 tabs.

  10. The EU Greenhouse Gas Emissions Trading Scheme

    NARCIS (Netherlands)

    Woerdman, Edwin; Woerdman, Edwin; Roggenkamp, Martha; Holwerda, Marijn

    2015-01-01

    This chapter explains how greenhouse gas emissions trading works, provides the essentials of the Directive on the European Union Emissions Trading Scheme (EU ETS) and summarizes the main implementation problems of the EU ETS. In addition, a law and economics approach is used to discuss the dilemmas

  11. Study of 228Ac decay scheme

    International Nuclear Information System (INIS)

    Pinto, H.V.

    1976-02-01

    Calibration in energy and efficiency of the system used. Acquisition of singles gamma-ray spectra at low and high energy. Computer reduction of the data obtained with the spectrometer: location and determination of peak areas, together with analysis of peak shapes to identify doublets. Verification of the decay scheme. [pt]

  12. Parallel knock-out schemes in networks

    NARCIS (Netherlands)

    Broersma, H.J.; Fomin, F.V.; Woeginger, G.J.

    2004-01-01

    We consider parallel knock-out schemes, a procedure on graphs introduced by Lampert and Slater in 1997, in which each vertex eliminates exactly one of its neighbors in each round. We consider cases in which the graph is eliminated after a finite number of rounds; the minimum such number is called the parallel knock-out number.
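
    A round of such a scheme is easy to simulate. In the sketch below every surviving vertex fires at its lowest-numbered neighbor (an arbitrary selection rule, chosen only for illustration), and the 4-cycle is emptied in two rounds:

      # One round: every surviving vertex fires at its lowest-numbered
      # neighbor; all targeted vertices are removed simultaneously, and
      # the graph is restricted to the survivors.
      def knockout_round(adj):
          targets = {min(nbrs) for nbrs in adj.values() if nbrs}
          survivors = set(adj) - targets
          return {v: {u for u in adj[v] if u in survivors} for v in survivors}

      adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # the 4-cycle
      rounds = 0
      while adj:
          adj = knockout_round(adj)
          rounds += 1
      print(rounds)   # 2 rounds under this greedy rule; a cleverer firing
                      # pattern (0->1, 1->0, 2->3, 3->2) empties C4 in one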

  13. Nonclassical light states in optical communication schemes

    International Nuclear Information System (INIS)

    Mattle, K. U.

    1997-11-01

    The present thesis presents results of theoretical and experimental work on quantum information and quantum communication. The first part describes a new, highly intense source of polarization-entangled photon pairs. The high quality of the source is clearly demonstrated by violating a Bell inequality by 100 standard deviations in less than 5 minutes. This new source is an ideal tool for new experiments in both fundamental and applied physics. The next chapter presents an experimental implementation of an optical dense quantum coding scheme. The combination of Bell-state generation and analysis of these entangled states leads to a new nonclassical communication scheme with enhanced channel capacity: a single two-state photon can be used to code and decode 1.58 bits instead of the 1 bit of classical two-state systems. The following chapter discusses two-photon interference effects for two independent light sources; in an experiment, two independent fluorescence pulses show this kind of interference effect. The fifth chapter describes 3-photon interference effects. This nonclassical interference effect is the elementary process underlying the quantum teleportation scheme, in which an unknown particle state is transmitted from A to B without sending the particle itself. (author)
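
    The dense coding step can be checked with elementary linear algebra: applying one of the four Pauli operations to one half of a shared Bell pair yields four mutually orthogonal Bell states, so a joint measurement can in principle extract two bits from the single transmitted photon. The sketch below verifies the orthogonality numerically:

      import numpy as np

      # Dense coding in vectors: Alice applies I, X, Z or XZ to her half
      # of a Bell pair; the four resulting Bell states are orthonormal,
      # so Bob's joint (Bell) measurement can recover two classical bits.
      I = np.eye(2)
      X = np.array([[0.0, 1.0], [1.0, 0.0]])
      Z = np.diag([1.0, -1.0])
      bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00>+|11>)/sqrt2

      encodings = {"00": I, "01": X, "10": Z, "11": X @ Z}
      states = {b: np.kron(U, I) @ bell for b, U in encodings.items()}

      for b1, s1 in states.items():
          overlaps = [f"{abs(s1 @ s2):.0f}" for s2 in states.values()]
          print(b1, overlaps)      # identity pattern: orthonormal states

    In the reported experiment, linear-optics Bell analysis distinguishes only three of the four encoded states, which is where the channel capacity of log2(3), approximately 1.58 bits per photon, comes from.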

  14. The complete flux scheme in cylindrical coordinates

    NARCIS (Netherlands)

    Anthonissen, M.J.H.; Thije Boonkkamp, ten J.H.M.

    2014-01-01

    We consider the complete flux (CF) scheme, a finite volume method (FVM) presented in [1]. CF is based on an integral representation for the fluxes, found by solving a local boundary value problem that includes the source term. It performs well (second order accuracy) for both diffusion and advection
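
    For the 1D model flux F = u*phi - eps*phi', the homogeneous (source-free) part of the complete flux approximation reduces to an exponentially fitted two-point formula on each cell face. The sketch below implements only that part; the inhomogeneous source-term contribution, which is the CF scheme's distinctive ingredient, is omitted for brevity:

      import numpy as np

      # Homogeneous flux for F = u*phi - eps*phi': the exponentially fitted
      # two-point formula from the local boundary value problem without a
      # source term, using the Bernoulli function B(z) = z / (exp(z) - 1).
      def bernoulli(z):
          z = np.asarray(z, dtype=float)
          return np.where(np.abs(z) < 1e-8, 1.0, z / np.expm1(z))

      def homogeneous_flux(phi_L, phi_R, u, eps, dx):
          P = u * dx / eps                        # grid Peclet number
          return (eps / dx) * (bernoulli(-P) * phi_L - bernoulli(P) * phi_R)

      # In the pure-diffusion limit (u -> 0) this reduces to the central
      # difference flux eps*(phi_L - phi_R)/dx; for large P it upwinds.
      print(homogeneous_flux(1.0, 0.0, u=1e-12, eps=1.0, dx=0.1))   # ~10.0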

  15. Traffic calming schemes : opportunities and implementation strategies.

    NARCIS (Netherlands)

    Schagen, I.N.L.G. van (ed.)

    2003-01-01

    Commissioned by the Swedish National Road Authority, this report aims to provide a concise overview of knowledge of and experiences with traffic calming schemes in urban areas, both on a technical level and on a policy level. Traffic calming refers to a combination of network planning and

  16. Harmonic generation with multiple wiggler schemes

    Energy Technology Data Exchange (ETDEWEB)

    Bonifacio, R.; De Salvo, L.; Pierini, P. [Universita degli Studi, Milano (Italy)

    1995-02-01

    In this paper the authors give a simple theoretical description of the basic physics of the single pass high gain free electron laser (FEL), describing in some detail the FEL bunching properties and the harmonic generation technique with a multiple-wiggler scheme or a high gain optical klystron configuration.

  17. Zero leakage quantization scheme for biometric verification

    NARCIS (Netherlands)

    Groot, de J.A.; Linnartz, J.P.M.G.

    2011-01-01

    Biometrics are gaining increasing interest as a solution to many security issues, but privacy risks arise if the stored templates are not well protected. This paper presents a new verification scheme, which protects the secrets of the enrolled users. We will show that zero leakage is achieved if

  18. A classification scheme for LWR fuel assemblies

    International Nuclear Information System (INIS)

    Moore, R.S.; Williamson, D.A.; Notz, K.J.

    1988-11-01

    With over 100 light water nuclear reactors operating nationwide, representing designs by four primary vendors, and with reload fuel manufactured by these vendors and additional suppliers, a wide variety of fuel assembly types are in existence. At Oak Ridge National Laboratory, both the Systems Integration Program and the Characteristics Data Base project required a classification scheme for these fuels. This scheme can be applied to other areas and is expected to be of value to many Office of Civilian Radioactive Waste Management programs. To develop the classification scheme, extensive information on the fuel assemblies that have been and are being manufactured by the various nuclear fuel vendors was compiled, reviewed, and evaluated. It was determined that it is possible to characterize assemblies in a systematic manner, using a combination of physical factors. A two-stage scheme was developed consisting of 79 assembly types, which are grouped into 22 assembly classes. The assembly classes are determined by the general design of the reactor cores in which the assemblies are, or were, used. The general BWR and PWR classes are divided differently but both are based on reactor core configuration. 2 refs., 15 tabs

  19. Asynchronous schemes for CFD at extreme scales

    Science.gov (United States)

    Konduri, Aditya; Donzis, Diego

    2013-11-01

    Recent advances in computing hardware and software have made simulations an indispensable research tool for understanding fluid flow phenomena in complex conditions at great detail. Due to the nonlinear nature of the governing NS equations, simulations of high Re turbulent flows are computationally very expensive and demand extreme levels of parallelism. Current large simulations are being done on hundreds of thousands of processing elements (PEs). Benchmarks from these simulations show that communication between PEs takes a substantial amount of time, overwhelming the compute time and wasting compute cycles as PEs remain idle. We investigate a novel approach based on widely used finite-difference schemes in which computations are carried out asynchronously, i.e. synchronization of data among PEs is not enforced and computations proceed regardless of the status of messages. This drastically reduces PE idle time and results in much larger computation rates. We show that while these schemes remain stable, their accuracy is significantly affected. We present new schemes that maintain accuracy under asynchronous conditions and provide a viable path towards exascale computing. Performance of these schemes will be shown for simple models like Burgers' equation.
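
    The flavour of the trade-off can be reproduced with a toy experiment: the sketch below advances the 1D heat equation with an explicit stencil but lets the wrap-around halo values lag a random number of steps, mimicking unsynchronized messages between PEs; all parameter values are arbitrary illustration choices:

      import numpy as np

      # Toy async experiment: explicit heat-equation update on a periodic
      # grid, but the wrap-around "halo" values (the data a neighboring PE
      # would send) are read from a randomly lagged past step.
      rng = np.random.default_rng(1)
      n, r, steps = 64, 0.25, 200     # r = alpha*dt/dx^2 (stable: r <= 1/2)
      u = np.sin(2 * np.pi * np.arange(n) / n)
      history = [u.copy()]            # past states kept for stale reads

      for _ in range(steps):
          cur = history[-1]
          lag = rng.integers(0, min(3, len(history)))    # 0-2 steps stale
          stale = history[-1 - lag]
          left = np.roll(cur, 1);   left[0] = stale[-1]  # stale halo only
          right = np.roll(cur, -1); right[-1] = stale[0]
          history.append(cur + r * (left - 2 * cur + right))

      print(f"max |u| after {steps} async steps: {np.abs(history[-1]).max():.4f}")
      print(f"continuous-equation decay factor:  "
            f"{np.exp(-4 * np.pi**2 * r * steps / n**2):.4f}")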

  20. Generalization of binary tensor product schemes depends upon four parameters

    International Nuclear Information System (INIS)

    Bashir, R.; Bari, M.; Mustafa, G.

    2018-01-01

    This article deals with general formulae for parametric and non-parametric bivariate subdivision schemes with four parameters. By assigning specific values to these parameters we obtain special cases of existing tensor product schemes, as well as a newly proposed scheme. The schemes produced by the general formulae can be interpolating, approximating, or relaxed; approximating bivariate subdivision schemes produce different surfaces than interpolating ones. Polynomial reproduction and polynomial generation are desirable properties of subdivision schemes, strongly connected with smoothness, sum rules, convergence and approximation order. We also calculate the polynomial generation and polynomial reproduction of a 9-point bivariate approximating subdivision scheme. A comparison of the polynomial reproduction, polynomial generation and continuity of existing and proposed schemes is also established. Some numerical examples are presented to show the behavior of the bivariate schemes. (author)
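
    As a univariate taste of approximating subdivision, the sketch below implements Chaikin's corner-cutting scheme, the classic quadratic B-spline generator; the paper's schemes are bivariate tensor products with four free parameters, so this fixed-mask example illustrates only the refine-and-smooth behavior:

      import numpy as np

      # Chaikin's corner cutting: each refinement replaces every edge by
      # its quarter and three-quarter points, converging to a smooth
      # quadratic B-spline curve (an approximating scheme: the limit curve
      # does not pass through the original control points).
      def chaikin(points, rounds=3):
          pts = np.asarray(points, dtype=float)
          for _ in range(rounds):
              q = 0.75 * pts[:-1] + 0.25 * pts[1:]
              r = 0.25 * pts[:-1] + 0.75 * pts[1:]
              pts = np.empty((2 * len(q), pts.shape[1]))
              pts[0::2], pts[1::2] = q, r
          return pts

      control = [(0, 0), (1, 2), (3, 3), (4, 0)]
      print(len(chaikin(control)))   # 4 -> 6 -> 10 -> 18 points, ever smoother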