WorldWideScience

Sample records for energy range inferences

  1. Inference with minimal Gibbs free energy in information field theory

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Weig, Cornelius

    2010-01-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.
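The Gaussian-approximation idea in this record can be sketched in one dimension (a toy illustration under assumed forms, not the paper's field-theoretic machinery): at unit temperature, minimizing the Gibbs free energy of a Gaussian N(m, s²) against a posterior P ∝ exp(-H) is equivalent to minimizing KL(Q || P).

```python
import numpy as np

# Toy 1-D sketch of the minimal Gibbs free energy principle. The
# "information Hamiltonian" H below is an invented non-Gaussian example.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
H = 0.5 * x**2 + 0.3 * x * np.exp(-0.5 * (x - 1.0) ** 2)  # toy non-Gaussian energy
P = np.exp(-H)
P /= P.sum() * dx                                          # normalize the posterior

def kl_gauss(m, s):
    """KL(Q || P) for Q = N(m, s^2); equals the Gibbs free energy up to a constant."""
    Q = np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return np.sum(Q * (np.log(Q + 1e-300) - np.log(P))) * dx

# crude grid search over the Gaussian's mean and width
grid = ((kl_gauss(m, s), m, s)
        for m in np.linspace(-2.0, 2.0, 81)
        for s in np.linspace(0.3, 2.0, 69))
kl_min, m_opt, s_opt = min(grid)
```

The minimizing (m_opt, s_opt) is the Gaussian knowledge state; the paper's estimators generalize this optimization to fields.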

  2. Atmospheric Energy Deposition Modeling and Inference for Varied Meteoroid Structures

    Science.gov (United States)

    Wheeler, Lorien; Mathias, Donovan; Stokan, Edward; Brown, Peter

    2018-01-01

Asteroid populations are highly diverse, ranging from coherent monoliths to loosely-bound rubble piles with a broad range of material and compositional properties. These different structures and properties could significantly affect how an asteroid breaks up and deposits energy in the atmosphere, and how much ground damage may occur from resulting blast waves. We have previously developed a fragment-cloud model (FCM) for assessing the atmospheric breakup and energy deposition of asteroids striking Earth. The approach represents ranges of breakup characteristics by combining progressive fragmentation with releases of variable fractions of debris and larger discrete fragments. In this work, we have extended the FCM to also represent asteroids with varied initial structures, such as rubble piles or fractured bodies. We have used the extended FCM to model the Chelyabinsk, Benesov, Kosice, and Tagish Lake meteors, and have obtained excellent matches to energy deposition profiles derived from their light curves. These matches provide validation for the FCM approach, help guide further model refinements, and enable inferences about pre-entry structure and breakup behavior. Results highlight differences in the amount of small debris vs. discrete fragments in matching the various flare characteristics of each meteor. The Chelyabinsk flares were best represented using relatively high debris fractions, while Kosice and Benesov cases were more notably driven by their discrete fragmentation characteristics, perhaps indicating more cohesive initial structures. Tagish Lake exhibited a combination of these characteristics, with lower-debris fragmentation at high altitudes followed by sudden disintegration into small debris in the lower flares. Results from all cases also suggest that lower ablation coefficients and debris spread rates may be more appropriate for the way in which debris clouds are represented in FCM, offering an avenue for future model refinement.
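The quantity being matched to light curves, energy deposited per unit altitude, can be sketched with a single-body drag-and-ablation integration (this is NOT the fragment-cloud model; all coefficients below are illustrative assumptions):

```python
import numpy as np

# Toy single-body energy-deposition profile in an exponential atmosphere.
RHO0, H_SCALE = 1.225, 7160.0          # sea-level air density (kg/m^3), scale height (m)
CD, SIGMA = 1.0, 1e-8                  # drag coefficient, ablation coefficient (kg/J)

def energy_deposition(m, v, r, h0=80e3, dh=10.0):
    """Vertical entry; returns altitudes (m) and dE/dh (J/m), integrated downward."""
    hs, dEdh = [], []
    h, E = h0, 0.5 * m * v**2
    A = np.pi * r**2                   # frontal area held fixed (no deformation/breakup)
    while h > 0 and m > 1.0 and v > 100.0:
        rho_a = RHO0 * np.exp(-h / H_SCALE)
        v = v - (CD * rho_a * A * v / (2.0 * m)) * dh        # drag per metre of descent
        m = m - (SIGMA * CD * rho_a * A * v**2 / 2.0) * dh   # ablation per metre
        E_new = 0.5 * m * v**2
        hs.append(h)
        dEdh.append((E - E_new) / dh)
        E, h = E_new, h - dh
    return np.array(hs), np.array(dEdh)

alt, dep = energy_deposition(m=1.2e7, v=19000.0, r=9.5)  # Chelyabinsk-scale input
```

Without the fragmentation and debris-cloud spreading that FCM adds, this intact body deposits its peak energy far too low in the atmosphere, which is exactly why breakup modeling matters.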

  3. Inferring Parametric Energy Consumption Functions at Different Software Levels

    DEFF Research Database (Denmark)

    Liqat, Umer; Georgiou, Kyriakos; Kerrison, Steve

    2016-01-01

    The static estimation of the energy consumed by program executions is an important challenge, which has applications in program optimization and verification, and is instrumental in energy-aware software development. Our objective is to estimate such energy consumption in the form of functions...... on the input data sizes of programs. We have developed a tool for experimentation with static analysis which infers such energy functions at two levels, the instruction set architecture (ISA) and the intermediate code (LLVM IR) levels, and reflects it upwards to the higher source code level. This required...... the development of a translation from LLVM IR to an intermediate representation and its integration with existing components, a translation from ISA to the same representation, a resource analyzer, an ISA-level energy model, and a mapping from this model to LLVM IR. The approach has been applied to programs...

  4. Spatiotemporal patterns of precipitation inferred from streamflow observations across the Sierra Nevada mountain range

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Newman, Andrew J.; Hughes, Mimi; McGurk, Bruce; Lundquist, Jessica D.

    2018-01-01

Given uncertainty in precipitation gauge-based gridded datasets over complex terrain, we use multiple streamflow observations as an additional source of information about precipitation, in order to identify spatial and temporal differences between a gridded precipitation dataset and precipitation inferred from streamflow. We test whether gridded datasets capture across-crest and regional spatial patterns of variability, as well as year-to-year variability and trends in precipitation, in comparison to precipitation inferred from streamflow. We use a Bayesian model calibration routine with multiple lumped hydrologic model structures to infer the most likely basin-mean, water-year total precipitation for 56 basins with long-term (>30 year) streamflow records in the Sierra Nevada mountain range of California. We compare basin-mean precipitation derived from this approach with basin-mean precipitation from a precipitation gauge-based, 1/16° gridded dataset that has been used to simulate and evaluate trends in Western United States streamflow and snowpack over the 20th century. We find that the long-term average spatial patterns differ: in particular, there is less precipitation in the gridded dataset in higher-elevation basins whose aspect faces prevailing cool-season winds, as compared to precipitation inferred from streamflow. In a few years and basins, there is less gridded precipitation than there is observed streamflow. Lower-elevation, southern, and east-of-crest basins show better agreement between gridded and inferred precipitation. Implied actual evapotranspiration (calculated as precipitation minus streamflow) then also varies between the streamflow-based estimates and the gridded dataset. Absolute uncertainty in precipitation inferred from streamflow is substantial, but the signal of basin-to-basin and year-to-year differences is likely more robust. The findings suggest that considering streamflow when spatially distributing precipitation in complex terrain …
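The core inversion can be caricatured with a one-parameter water balance (the paper instead calibrates lumped hydrologic models in a full Bayesian framework; all numbers below are illustrative assumptions, in mm per water year):

```python
import numpy as np

# Toy sketch: infer basin-mean precipitation P from observed runoff Q
# via P = Q + ET, with an uncertain evapotranspiration climatology.
Q_obs = 600.0                                   # observed water-year runoff
P_grid = np.linspace(400.0, 2000.0, 1601)
dP = P_grid[1] - P_grid[0]

prior = np.exp(-0.5 * ((P_grid - 1000.0) / 400.0) ** 2)        # vague prior on P
ET_mu, ET_sd = 450.0, 100.0                                    # assumed ET climatology
like = np.exp(-0.5 * ((P_grid - Q_obs - ET_mu) / ET_sd) ** 2)  # Q = P - ET + noise

post = prior * like
post /= post.sum() * dP                         # normalized posterior over P
P_mean = np.sum(P_grid * post) * dP             # streamflow-inferred precipitation
```

The posterior mean lands near Q_obs + ET_mu, pulled slightly toward the prior, mirroring how streamflow anchors the precipitation estimate.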

  5. Universal Darwinism As a Process of Bayesian Inference.

    Science.gov (United States)

    Campbell, John O

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
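The formal equivalence the abstract invokes can be shown in two lines: a discrete replicator update p_i' = p_i·w_i / w̄ has the same shape as a Bayesian update p(h|d) = p(h)·L(d|h) / p(d), with fitness playing the role of likelihood (values below are illustrative):

```python
import numpy as np

# Replicator dynamics step vs. Bayes' theorem with fitness as likelihood.
p = np.array([0.5, 0.3, 0.2])      # type frequencies / prior over hypotheses
w = np.array([1.2, 0.9, 1.0])      # fitnesses / likelihoods

replicator = p * w / np.dot(p, w)  # selection over one generation
bayes = p * w / np.sum(p * w)      # Bayesian update with L = w
```

The two arrays are identical: types with above-average fitness grow exactly as hypotheses with above-average likelihood gain posterior mass.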

  6. Energy-range relation and mean energy variation in therapeutic particle beams

    International Nuclear Information System (INIS)

    Kempe, Johanna; Brahme, Anders

    2008-01-01

Analytical expressions for the mean energy and range of therapeutic light ion beams and low- and high-energy electrons have been derived, based on the energy dependence of their respective stopping powers. The new mean energy and range relations are power-law expressions relevant for light ion radiation therapy, and are based on measured practical ranges or known tabulated stopping powers and ranges for the relevant incident particle energies. A practical extrapolated range, R_p, for light ions was defined, similar to that of electrons, which is very closely related to the extrapolated range of the primary ions. A universal energy-range relation for light ions and electrons that is valid for all material mixtures and compounds has been developed. The new relation can be expressed in terms of the range for protons and alpha particles, and is found to agree closely with experimental data in low atomic number media and when the difference in the mean ionization energy is low. The variation of the mean energy with depth and the new energy-range relation are useful for accurate stopping power and mass scattering power calculations, as well as for general particle transport and dosimetry applications.
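A power-law energy-range relation of the kind this record derives looks as follows. The coefficient and exponent below are commonly quoted Bragg-Kleeman-style approximations for protons in water, used purely for illustration; the paper's fitted constants may differ:

```python
# Illustrative power-law range relation R = alpha * E**p and its inverse.
def practical_range_cm(E_MeV, alpha=0.0022, p=1.77):
    """Approximate range in water of a proton with energy E_MeV."""
    return alpha * E_MeV ** p

def energy_from_range(R_cm, alpha=0.0022, p=1.77):
    """Inverse relation: incident energy from a measured practical range."""
    return (R_cm / alpha) ** (1.0 / p)

R150 = practical_range_cm(150.0)   # roughly 15-16 cm in water
```

The closed-form inverse is what makes such relations convenient for beam-energy determination from measured ranges.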

  7. Universal Darwinism as a process of Bayesian inference

    Directory of Open Access Journals (Sweden)

    John Oberon Campbell

    2016-06-01

Full Text Available Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an 'experiment' in the external world environment, and the results of that 'experiment' or the 'surprise' entailed by predicted and actual outcomes of the 'experiment'. Minimization of free energy implies that the implicit measure of 'surprise' experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.

  8. Joint inference of dominant scatterer locations and motion parameters of an extended target in high range-resolution radar

    CSIR Research Space (South Africa)

    De Freitas, A

    2015-06-01

    Full Text Available of scatterers using the PF method are compared with those obtained using standard range-Doppler inverse synthetic aperture radar (ISAR) imaging when using the same radar returns for both cases. The PF infers the location of scatterers more accurately than ISAR...

  9. Learning Convex Inference of Marginals

    OpenAIRE

    Domke, Justin

    2012-01-01

    Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach suffers difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined to be the minimization of a convex function, inspired by free energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main ...

  10. Inference of domain structure at elevated temperature in fine ...

    African Journals Online (AJOL)

The thermal variation of the number of domains (nd) for Fe7S8 particles (within the size range 1-30 μm and between 20 and 300°C), has been inferred from the room temperature analytic expression between nd and particle size (L), the temperature dependences of the anisotropy energy constant (K) and the spontaneous ...

  11. Long-range prospects of world energy demands and future energy sources

    International Nuclear Information System (INIS)

    Kozaki, Yasuji

    1998-01-01

The long-range prospects for world energy demands are reviewed, and the major factors which are influential in relation to energy demands are discussed. The potential for various kinds of conventional and new energy sources such as fossil fuels, solar energies, nuclear fission, and fusion energies to meet future energy demands is also discussed. (author)

  12. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from the simulations performed by the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly. (The value in each bin was generated randomly, and finally a normalization of each generated energy spectrum was performed.) The randomly generated neutron energy spectra were considered as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) should be solved. The solution of the inverse problem using conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms in the solution of such a problem, or utilization of intelligent algorithms (in which there is no need to solve the inverse problem), is usually preferred for unfolding of the energy spectrum. Therefore, the main reason for development of intelligent algorithms like ANFIS for unfolding of neutron energy spectra is to avoid solving the inverse problem. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were …

  13. Range energy for heavy ions in CR-39

    International Nuclear Information System (INIS)

    Gil, L.R.; Marques, A.

    1987-01-01

Range-energy relations in CR-39, for ions from He to Ar, are obtained using their effective nuclear charge. Comparison with earlier calculations and numerical results in the energy range 0.1 to 200 MeV/nucleon is also given. (M.W.O.)

  14. Energy dependence of polymer gels in the orthovoltage energy range

    Directory of Open Access Journals (Sweden)

    Yvonne Roed

    2014-03-01

Full Text Available Purpose: Orthovoltage energies are often used for treatment of patients' superficial lesions, and also for small-animal irradiations. Polymer-gel dosimeters such as MAGAT (Methacrylic acid Gel and THPC) are finding increasing use for 3-dimensional verification of radiation doses in a given treatment geometry. For mega-voltage beams, MAGAT has been quoted as nearly energy-independent. In the kilo-voltage range, there is hardly any literature to shed light on its energy dependence. Methods: MAGAT was used to measure depth-dose for a 250 kVp beam. Comparison with ion-chamber data showed a discrepancy increasing significantly with depth. An over-response of as much as 25% was observed at a depth of 6 cm. Results and Conclusion: Investigation concluded that 6 cm of water in the beam resulted in a half-value-layer (HVL) change from 1.05 to 1.32 mm Cu. This amounts to an effective-energy change from 81.3 to 89.5 keV. Response measurements of MAGAT at these two energies explained the observed discrepancy in depth-dose measurements. Dose-calibration curves of MAGAT for (i) a 250 kVp beam, and (ii) a 250 kVp beam through 6 cm of water column are presented, showing significant energy dependence. ------------------- Cite this article as: Roed Y, Tailor R, Pinksy L, Ibbott G. Energy dependence of polymer gels in the orthovoltage energy range. Int J Cancer Ther Oncol 2014; 2(2):020232. DOI: 10.14319/ijcto.0202.32

  15. Inferring energy dissipation from violation of the fluctuation-dissipation theorem

    Science.gov (United States)

    Wang, Shou-Wen

    2018-05-01

    The Harada-Sasa equality elegantly connects the energy dissipation rate of a moving object with its measurable violation of the Fluctuation-Dissipation Theorem (FDT). Although proven for Langevin processes, its validity remains unclear for discrete Markov systems whose forward and backward transition rates respond asymmetrically to external perturbation. A typical example is a motor protein called kinesin. Here we show generally that the FDT violation persists surprisingly in the high-frequency limit due to the asymmetry, resulting in a divergent FDT violation integral and thus a complete breakdown of the Harada-Sasa equality. A renormalized FDT violation integral still well predicts the dissipation rate when each discrete transition produces a small entropy in the environment. Our study also suggests a way to infer this perturbation asymmetry based on the measurable high-frequency-limit FDT violation.
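For reference, a commonly quoted Langevin-dynamics form of the Harada-Sasa equality (standard notation assumed here, not taken from the paper) expresses the mean dissipation rate $J$ through the FDT violation:

```latex
J = \gamma \left[ \langle v \rangle^{2}
  + \int_{-\infty}^{\infty} \frac{\mathrm{d}\omega}{2\pi}
    \left( \tilde{C}(\omega) - 2T\,\tilde{R}'(\omega) \right) \right]
```

where $\gamma$ is the friction coefficient, $\tilde{C}(\omega)$ the velocity fluctuation spectrum, $\tilde{R}'(\omega)$ the real part of the velocity response function, and $T$ the bath temperature; in equilibrium the FDT gives $\tilde{C} = 2T\tilde{R}'$ and the integral vanishes. The abstract's point is that for discrete systems with asymmetric rate perturbations the integrand fails to decay at high frequency, so this integral diverges and a renormalized version is needed.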

  16. Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality.

    Science.gov (United States)

    Malle, Bertram F; Holbrook, Jess

    2012-04-01

People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences – from intentionality and desire, to belief, to personality – that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.

  17. Energy-range relations for hadrons in nuclear matter

    Science.gov (United States)

    Strugalski, Z.

    1985-01-01

Range-energy relations for hadrons in nuclear matter exist, analogous to the range-energy relations for charged particles in materials. When hadrons of GeV kinetic energies collide with sufficiently massive atomic nuclei, events occur in which the incident hadron is stopped completely inside the target nucleus without causing particle production - without pion production in particular. The stoppings are always accompanied by intensive emission of nucleons with kinetic energies from about 20 up to about 400 MeV. It was shown experimentally that the mean number of emitted nucleons is a measure of the mean path in nuclear matter, expressed in nucleons, over which the incident hadrons are stopped.

  18. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    Science.gov (United States)

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

Deep learning algorithms are widely used for various pattern recognition applications such as text recognition, object recognition and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. Long learning time caused by their complex structure, however, has so far limited their usage to high-cost servers or many-core GPU platforms. On the other hand, the demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Different from conventional works, which have adopted massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple levels of parallelism to cover the complex functions of the convolutional deep belief network, one of the popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power, at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.
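The quoted figures are internally consistent: peak performance divided by peak power reproduces the stated energy efficiency, since 1 GOPS per mW equals 1 TOPS per W.

```python
# Consistency check of the reported chip figures.
peak_gops = 411.3                      # reported peak performance (GOPS)
peak_mw = 213.1                        # reported peak power (mW)
tops_per_watt = peak_gops / peak_mw    # GOPS/mW is numerically TOPS/W
```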

  19. The anatomy of choice: active inference and agency

    Directory of Open Access Journals (Sweden)

    Karl eFriston

    2013-09-01

Full Text Available This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behaviour. In particular, we consider prior beliefs that action minimises the Kullback-Leibler divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimises a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimising free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action – constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualises optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimisation, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution – that minimises free energy. This sensitivity corresponds to the precision of beliefs about behaviour, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behaviour entails a representation of confidence about outcomes that are under an agent's control.

  20. The anatomy of choice: active inference and agency.

    Science.gov (United States)

    Friston, Karl; Schwartenbeck, Philipp; Fitzgerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J

    2013-01-01

This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action – constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution – that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.
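The softmax choice rule with a precision (inverse temperature) parameter discussed above behaves as follows; the utilities are illustrative:

```python
import numpy as np

# Softmax choice rule: higher precision gamma concentrates choice
# probability on the highest-utility (lowest expected free energy) option.
def softmax_choice(utilities, gamma):
    z = gamma * np.asarray(utilities, dtype=float)
    z -= z.max()                        # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

U = [1.0, 0.5, 0.0]
p_low = softmax_choice(U, gamma=0.5)    # imprecise beliefs: near-uniform choice
p_high = softmax_choice(U, gamma=8.0)   # precise beliefs: near-deterministic choice
```

In the paper's framing, gamma itself acquires a Bayes-optimal value rather than being a free parameter.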

  1. An accurate energy-range relationship for high-energy electron beams in arbitrary materials

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Brahme, A.

    1994-01-01

A general analytical energy-range relationship has been derived to relate the practical range, R_p, to the most probable energy, E_p, of incident electron beams in the range 1 to 50 MeV and above, for absorbers of any atomic number. In the present study only Monte Carlo data determined with the new ITS.3 code have been employed. The standard deviations of the mean deviation from the Monte Carlo data at any energy are about 0.10, 0.12, 0.04, 0.11, 0.04, 0.03, 0.02 mm for Be, C, H2O, Al, Cu, Ag and U, respectively, and the relative standard deviation of the mean is about 0.5% for all materials. The fitting program gives some priority to water-equivalent materials, which explains the low standard deviation for water. A small error in the fall-off slope can give a different value for R_p. We describe a new method which reduces the uncertainty in the R_p determination, by fitting an odd function to the descending portion of the depth-dose curve in order to accurately determine the tangent at the inflection point, and thereby the practical range. An approximate inverse relation is given expressing the most probable energy of an electron beam as a function of the practical range. The resultant relative standard error of the energy is less than 0.7%, and the maximum energy error ΔE_p is less than 0.3 MeV. (author)
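The practical-range construction the abstract refines can be sketched numerically: R_p is where the tangent at the inflection point of the depth-dose fall-off crosses zero dose (bremsstrahlung background neglected here). The sigmoidal curve below is a synthetic stand-in, not ITS Monte Carlo data.

```python
import numpy as np

# Tangent-at-inflection construction of the practical range R_p.
z = np.linspace(0.0, 10.0, 2001)                 # depth (cm)
dose = 1.0 / (1.0 + np.exp((z - 5.0) / 0.4))     # toy fall-off, inflection at 5 cm

dDdz = np.gradient(dose, z)
i = np.argmin(dDdz)                              # steepest descent = inflection point
R_p = z[i] - dose[i] / dDdz[i]                   # extrapolate tangent to zero dose
```

For this logistic fall-off the analytic answer is R_p = 5 + 0.8 cm, which the discrete construction reproduces; the paper's odd-function fit serves to stabilize exactly this tangent against noise.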

  2. Global history of the ancient monocot family Araceae inferred with models accounting for past continental positions and previous ranges based on fossils.

    Science.gov (United States)

    Nauheimer, Lars; Metzler, Dirk; Renner, Susanne S

    2012-09-01

    The family Araceae (3790 species, 117 genera) has one of the oldest fossil records among angiosperms. Ecologically, members of this family range from free-floating aquatics (Pistia and Lemna) to tropical epiphytes. Here, we infer some of the macroevolutionary processes that have led to the worldwide range of this family and test how the inclusion of fossil (formerly occupied) geographical ranges affects biogeographical reconstructions. Using a complete genus-level phylogeny from plastid sequences and outgroups representing the 13 other Alismatales families, we estimate divergence times by applying different clock models and reconstruct range shifts under different models of past continental connectivity, with or without the incorporation of fossil locations. Araceae began to diversify in the Early Cretaceous (when the breakup of Pangea was in its final stages), and all eight subfamilies existed before the K/T boundary. Early lineages persist in Laurasia, with several relatively recent entries into Africa, South America, South-East Asia and Australia. Water-associated habitats appear to be ancestral in the family, and DNA substitution rates are especially high in free-floating Araceae. Past distributions inferred when fossils are included differ in nontrivial ways from those without fossils. Our complete genus-level time-scale for the Araceae may prove to be useful for ecological and physiological studies. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.

  3. Modeling Well Sampled Composite Spectral Energy Distributions of Distant Galaxies via an MCMC-driven Inference Framework

    Science.gov (United States)

    Pasha, Imad; Kriek, Mariska; Johnson, Benjamin; Conroy, Charlie

    2018-01-01

Using a novel, MCMC-driven inference framework, we have modeled the stellar and dust emission of 32 composite spectral energy distributions (SEDs), which span from the near-ultraviolet (NUV) to the far-infrared (FIR). The composite SEDs were originally constructed in a previous work from the photometric catalogs of the NEWFIRM Medium-Band Survey, in which SEDs of individual galaxies at 0.5 […]. MIPS 24 μm photometry was added for each SED type, and in this work, PACS 100 μm, PACS 160 μm, SPIRE 250 μm, and SPIRE 350 μm photometry have been added to extend the range of the composite SEDs into the FIR. We fit the composite SEDs with the Prospector code, which utilizes MCMC sampling to explore the parameter space for models created by the Flexible Stellar Population Synthesis (FSPS) code, in order to investigate how specific star formation rate (sSFR), dust temperature, and other galaxy properties vary with SED type. This work is also being used to better constrain the SPS models within FSPS.
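The kind of posterior sampling such a framework performs can be shown with a minimal Metropolis-Hastings loop (a drastic toy with one parameter, a Gaussian likelihood, and a flat prior; not the Prospector or FSPS machinery itself):

```python
import numpy as np

# Minimal random-walk Metropolis sampler for a one-parameter model.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, size=50)        # stand-in "photometric" measurements

def log_post(mu):
    return -0.5 * np.sum((data - mu) ** 2) / 0.5**2   # Gaussian likelihood, flat prior

chain, mu = [], 0.0
lp = log_post(mu)
for _ in range(5000):
    prop = mu + 0.2 * rng.standard_normal()           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:           # Metropolis accept/reject
        mu, lp = prop, lp_prop
    chain.append(mu)
post_mean = np.mean(chain[1000:])                     # discard burn-in
```

The post-burn-in chain mean recovers the sample mean of the data; real SED fitting does the same walk in a many-dimensional stellar-population parameter space.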

  4. EI: A Program for Ecological Inference

    Directory of Open Access Journals (Sweden)

    Gary King

    2004-09-01

    Full Text Available The program EI provides a method of inferring individual behavior from aggregate data. It implements the statistical procedures, diagnostics, and graphics from the book A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data (King 1997). Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Ecological inferences are required in political science research when individual-level surveys are unavailable (e.g., local or comparative electoral politics), unreliable (racial politics), insufficient (political geography), or infeasible (political history). They are also required in numerous areas of major significance in public policy (e.g., applying the Voting Rights Act) and in academic disciplines ranging from epidemiology and marketing to sociology and quantitative history.
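The deterministic core of the ecological inference problem can be sketched with the classic accounting (Duncan-Davis) bounds: from aggregate margins alone, a group-level rate is bounded without any statistical assumptions. The helper below is an illustrative sketch, not part of the EI program, which combines such bounds with a statistical model across precincts.

```python
# Deterministic accounting bounds for ecological inference: given only the
# aggregate margins of one precinct (x, the population share of group A, and
# t, the overall share exhibiting the behaviour), the within-group rate b_A
# is bounded via the identity t = b_A*x + b_B*(1-x) with 0 <= b_B <= 1.
# Hypothetical helper for illustration; EI itself goes much further.

def accounting_bounds(x: float, t: float) -> tuple[float, float]:
    """Bounds on b_A, the rate of the behaviour within group A."""
    if x == 0.0:
        return (0.0, 1.0)  # group absent: the margins carry no information
    lower = max(0.0, (t - (1.0 - x)) / x)
    upper = min(1.0, t / x)
    return (lower, upper)

# Example: group A is 50% of a precinct where 40% overall show the behaviour.
lo, hi = accounting_bounds(x=0.5, t=0.4)
print(lo, hi)  # b_A must lie in [0.0, 0.8]
```

The statistical step in EI then narrows these intervals by borrowing strength across precincts.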

  5. Inference Attacks and Control on Database Structures

    Directory of Open Access Journals (Sweden)

    Muhamed Turkanovic

    2015-02-01

    Full Text Available Today’s databases store information with sensitivity levels that range from public to highly sensitive; ensuring confidentiality can therefore be highly important, but it also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy related to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since such models are unable to protect against indirect data access. Furthermore, it covers new inference problems that arise from the dimensions of new technologies like XML, semantics, etc.
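The point that access control alone cannot stop indirect data access can be made concrete with a classic tracker-style attack: two individually permitted aggregate queries are combined to reveal one protected value exactly. The data and query interface below are hypothetical.

```python
# Minimal illustration (hypothetical data) of an inference attack that pure
# access control misses: each aggregate query is "safe" on its own, but
# together they disclose an individual's protected salary.

salaries = {"alice": 52000, "bob": 61000, "carol": 58000}  # protected values

def avg_salary(names):
    """Aggregate query that the access-control policy allows."""
    return sum(salaries[n] for n in names) / len(names)

q1 = avg_salary(["alice", "bob", "carol"])   # department average
q2 = avg_salary(["alice", "bob"])            # average excluding carol

# Indirect access: carol's salary follows from the two aggregates alone.
carol_salary = 3 * q1 - 2 * q2
print(carol_salary)  # 58000.0
```

Inference control counters this by, for example, restricting overlapping query sets or perturbing aggregate answers.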

  6. Range-separated density-functional theory for molecular excitation energies

    International Nuclear Information System (INIS)

    Rebolini, E.

    2014-01-01

    Linear-response time-dependent density-functional theory (TDDFT) is nowadays a method of choice to compute molecular excitation energies. However, within the usual adiabatic semi-local approximations, it cannot properly describe Rydberg, charge-transfer or multiple excitations. Range separation of the electronic interaction allows one to rigorously mix density-functional methods at short range with wave function or Green's function methods at long range. When applied to the exchange functional, it already corrects most of these deficiencies, but multiple excitations remain absent as they require a frequency-dependent kernel. In this thesis, the effects of range separation are first assessed on the excitation energies of a partially-interacting system in an analytic and numerical study, in order to provide guidelines for future developments of range-separated methods for excitation-energy calculations. It is then applied to the exchange and correlation TDDFT kernels in a single-determinant approximation in which the long-range part of the correlation kernel vanishes. A long-range frequency-dependent second-order correlation kernel is then derived from the Bethe-Salpeter equation and added perturbatively to the range-separated TDDFT kernel in order to take into account the effects of double excitations. (author)
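The range separation mentioned above is conventionally done with the error function: the Coulomb interaction 1/r is split into a short-range part erfc(μr)/r and a long-range part erf(μr)/r, which sum back to 1/r for every r. A quick numerical check (the value of μ is illustrative):

```python
# Error-function range separation of the Coulomb interaction, as used in
# range-separated DFT: 1/r = erfc(mu*r)/r + erf(mu*r)/r for all r > 0.

import math

def short_range(r, mu):
    return math.erfc(mu * r) / r   # handled by a density functional

def long_range(r, mu):
    return math.erf(mu * r) / r    # handled by wave-function methods

mu = 0.5  # separation parameter, illustrative value (units of 1/bohr)
for r in (0.1, 1.0, 5.0, 20.0):
    total = short_range(r, mu) + long_range(r, mu)
    assert abs(total - 1.0 / r) < 1e-12
# At small r the short-range part dominates; at large r the long-range part does.
```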

  7. Properties of short-range and long-range correlation energy density functionals from electron-electron coalescence

    International Nuclear Information System (INIS)

    Gori-Giorgi, Paola; Savin, Andreas

    2006-01-01

    The combination of density-functional theory with other approaches to the many-electron problem through the separation of the electron-electron interaction into a short-range and a long-range contribution is a promising method, which is raising more and more interest in recent years. In this work some properties of the corresponding correlation energy functionals are derived by studying the electron-electron coalescence condition for a modified (long-range-only) interaction. A general relation for the on-top (zero electron-electron distance) pair density is derived, and its usefulness is discussed with some examples. For the special case of the uniform electron gas, a simple parametrization of the on-top pair density for a long-range-only interaction is presented and supported by calculations within the "extended Overhauser model". The results of this work can be used to build self-interaction-corrected short-range correlation energy functionals.

  8. Quantification of uncertainty in photon source spot size inference during laser-driven radiography experiments at TRIDENT

    Energy Technology Data Exchange (ETDEWEB)

    Tobias, Benjamin John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Palaniyappan, Sasikumar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gautier, Donald Cort [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mendez, Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burris-Mog, Trevor John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Huang, Chengkun K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favalli, Andrea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunter, James F. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Espy, Michelle E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Schmidt, Derek William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nelson, Ronald Owen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sefkow, Adam [Univ. of Rochester, NY (United States); Shimada, Tsutomu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Randall Philip [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fernandez, Juan Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-24

    Images of the R2DTO resolution target were obtained during laser-driven-radiography experiments performed at the TRIDENT laser facility, and analysis of these images using the Bayesian Inference Engine (BIE) determines a most probable full-width half maximum (FWHM) spot size of 78 μm. However, significant uncertainty prevails due to variation in the measured detector blur. Propagating this uncertainty in detector blur through the forward model results in an interval of probabilistic ambiguity spanning approximately 35-195 μm when the laser energy impinges on a thick (1 mm) tantalum target. In other phases of the experiment, laser energy is deposited on a thin (~100 nm) aluminum target placed 250 μm ahead of the tantalum converter. When the energetic electron beam is generated in this manner, upstream from the bremsstrahlung converter, the inferred spot size shifts to a range of much larger values, approximately 270-600 μm FWHM. This report discusses methods applied to obtain these intervals as well as concepts necessary for interpreting the result within a context of probabilistic quantitative inference.
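The way detector-blur uncertainty widens the inferred spot-size interval can be sketched with a Gaussian-quadrature approximation: if source and detector blurs convolve, their FWHMs add in quadrature, so an interval of detector blurs maps to an interval of source spot sizes. All numbers below are illustrative; the actual analysis used the Bayesian Inference Engine with a full forward model, not this closed form.

```python
# Toy propagation of detector-blur uncertainty into the inferred source spot
# size, assuming Gaussian blurs so that measured^2 = source^2 + detector^2.
# Hypothetical numbers; not the TRIDENT data or the BIE forward model.

import math

def inferred_source_fwhm(measured_fwhm, detector_fwhm):
    """Invert measured^2 = source^2 + detector^2 (Gaussian approximation)."""
    if detector_fwhm >= measured_fwhm:
        return 0.0  # the blur is fully explained by the detector
    return math.sqrt(measured_fwhm**2 - detector_fwhm**2)

measured = 220.0                  # um, hypothetical measured image blur
detector_range = (100.0, 215.0)   # um, interval of plausible detector blurs
spot_interval = sorted(inferred_source_fwhm(measured, d) for d in detector_range)
print(spot_interval)  # wide interval: detector-blur uncertainty dominates
```

When the detector blur is comparable to the measured blur, small changes in the former swing the inferred source size dramatically, which is the qualitative behaviour reported above.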

  9. Long-range outlook of energy demands and supplies

    International Nuclear Information System (INIS)

    1984-01-01

    An interim report on the long-range outlook of energy demands and supplies in Japan, prepared by an ad hoc committee, the Advisory Committee for Energy, was given for the period up to the year 2000. The energy demands, in terms of crude oil, are projected as follows: 460 million kl for 1990, 530 million kl for 1995, and 600 million kl for 2000. Japan, lacking domestic energy resources, imports over 80% of its primary energy; its reliance for petroleum on the Middle East, where the political situation is unstable, is very large. The following topics are described: background and policy; energy demands in industry, transport, and people's livelihood; energy supplies from coal, nuclear energy, petroleum, etc.; and the energy demand/supply outlook for 2000. (Mori, K.)

  10. Inferring the colonization of a mountain range--refugia vs. nunatak survival in high alpine ground beetles.

    Science.gov (United States)

    Lohse, Konrad; Nicholls, James A; Stone, Graham N

    2011-01-01

    It has long been debated whether high alpine specialists survived ice ages in situ on small ice-free islands of habitat, so-called nunataks, or whether glacial survival was restricted to larger massifs de refuge at the periphery. We evaluate these alternative hypotheses in a local radiation of high alpine carabid beetles (genus Trechus) in the Orobian Alps, Northern Italy. While summits along the northern ridge of this mountain range were surrounded by the ice sheet as nunataks during the last glacial maximum, southern areas remained unglaciated. We analyse a total of 1366 bp of mitochondrial (Cox1 and Cox2) data sampled from 150 individuals from twelve populations and 530 bp of nuclear (PEPCK) sequence sampled for a subset of 30 individuals. Using Bayesian inference, we estimate ancestral location states in the gene trees, which in turn are used to infer the most likely order of recolonization under a model of sequential founder events from a massif de refuge from the mitochondrial data. We test for the paraphyly expected under this model and for the reciprocal monophyly predicted by a contrasting model of prolonged persistence of nunatak populations. We find that (i) only three populations are incompatible with the paraphyly of the massif de refuge model, (ii) both mitochondrial and nuclear data support separate refugial origins for populations on the western and eastern ends of the northern ridge, and (iii) mitochondrial node ages suggest persistence on the northern ridge for part of the last ice age. © 2010 Blackwell Publishing Ltd.

  11. Long range energy transfer in graphene hybrid structures

    International Nuclear Information System (INIS)

    Gonçalves, Hugo; Bernardo, César; Moura, Cacilda; Belsley, Michael; Schellenberg, Peter; Ferreira, R A S; André, P S; Stauber, Tobias

    2016-01-01

    In this work we quantify the distance dependence of the extraction of energy from excited chromophores by a single-layer graphene flake over a large separation range. To this end, hybrid structures were prepared, consisting of a thin (2 nm) layer of a polymer matrix doped with a carefully chosen, strongly fluorescent organic molecule, followed by an undoped spacer layer of well-defined thickness made of the same polymer material, and an underlying single layer of pristine, undoped graphene. The coupling strength is assessed through the variation of the fluorescence decay kinetics as a function of distance between the graphene and the excited chromophore molecules. Non-radiative energy transfer to the graphene was observed at distances of up to 60 nm, a range much greater than typical energy transfer distances observed in molecular systems. (paper)

  12. Numerical simulation on range of high-energy electron moving in accelerator target

    International Nuclear Information System (INIS)

    Shao Wencheng; Sun Punan; Dai Wenjiang

    2008-01-01

    In order to determine the range of high-energy electrons moving in an accelerator target, the range of electrons with energies from 1 to 100 MeV moving in common accelerator target materials was calculated by the Monte Carlo method. Comparisons between the calculated results and published data were performed. The results of the Monte Carlo calculation are in good agreement with the published data. Empirical formulas were obtained by curve fitting for the range of high-energy electrons with energies from 1 to 100 MeV in common target materials, offering a series of reference data for the design of targets in electron accelerators. (authors)
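The curve-fitting step described above, obtaining an empirical range-energy formula of power-law form R = a·E^b, can be sketched as a least-squares fit in log-log space. The data points below are synthetic and illustrative, not the Monte Carlo results of the paper.

```python
# Fitting an empirical power-law range-energy formula R = a * E**b by linear
# least squares in log-log space. The "data" are synthetic illustrative
# points generated from a known (a, b), so the fit should recover them.

import numpy as np

E = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])  # MeV
R = 0.5 * E**1.2                                         # toy ranges, g/cm^2

slope, intercept = np.polyfit(np.log(E), np.log(R), 1)   # ln R = b ln E + ln a
b, a = slope, np.exp(intercept)
print(round(a, 3), round(b, 3))  # recovers a = 0.5, b = 1.2
```

With real Monte Carlo output the residuals of this fit would quantify how well a single power law represents the range over the whole 1-100 MeV interval.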

  13. The Multivariate Generalised von Mises Distribution: Inference and Applications

    DEFF Research Database (Denmark)

    Navarro, Alexandre Khae Wu; Frellsen, Jes; Turner, Richard

    2017-01-01

    Circular variables arise in a multitude of data-modelling contexts ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the circular domain. … These models can leverage standard modelling tools (e.g. kernel functions and automatic relevance determination). Third, we show that the posterior distribution in these models is a mGvM distribution, which enables the development of an efficient variational free-energy scheme for performing approximate inference and approximate maximum-likelihood learning.

  14. Transcript of the proceedings of the first Albuquerque informal range/energy workshop

    International Nuclear Information System (INIS)

    Brice, D.K.

    1981-04-01

    An informal workshop was held to discuss aspects of the calculation of range and energy deposition distributions which are of interest in ion implantation experiments. Topics covered include: problems encountered in using published range and energy deposition tabulations; some limitations in the solutions of range/energy transport equations; the effect of the scattering cross section on straggle; Monte Carlo calculations of ranges and straggling; damage studies in aluminum; simulation of heavy-ion irradiation of gold using MARLOWE; and MARLOWE calculations of range distribution parameters - dependence on input data and calculational model

  15. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and projected ranges of helium ions in Nb for the energy range 10 to 1500 keV

    International Nuclear Information System (INIS)

    St-Jacques, R.G.; Martel, J.G.; Terreault, B.; Veilleux, G.; Das, S.K.; Kaminsky, M.; Fenske, G.

    1976-01-01

    The skin thicknesses of blisters formed on polycrystalline niobium by ⁴He⁺ irradiation at room temperature have been measured for energies from 15 to 80 keV. Similar measurements were conducted for 10 keV ⁴He⁺ irradiation at 500 °C to increase blister exfoliation, and thereby allow examination of a larger number of blister skins. For energies smaller than 100 keV the skin thicknesses are compared with the projected-range and damage-energy distributions constructed from moments interpolated from Winterbon's tabulated values. For energies of 10 and 15 keV the projected ranges and damage-energy distributions have also been computed with a Monte Carlo program. For energies larger than 100 keV the projected ranges of ⁴He⁺ in Nb were calculated using either Brice's formalism or the one given by Schiott. The thicknesses for 60 and 80 keV, and those reported earlier for 100 to 1500 keV, correlate well with the calculated projected ranges. For energies lower than 60 keV the measured thicknesses are larger than the calculated ranges

  16. Inferring Domain Plans in Question-Answering

    National Research Council Canada - National Science Library

    Pollack, Martha E

    1986-01-01

    The importance of plan inference in models of conversation has been widely noted in the computational-linguistics literature, and its incorporation in question-answering systems has enabled a range...

  17. Past and future range shifts and loss of diversity in dwarf willow (Salix herbacea L.) inferred from genetics, fossils and modelling

    DEFF Research Database (Denmark)

    Alsos, Inger Greve; Alm, Torbjørn; Normand, Signe

    2009-01-01

    … Macrofossil records were compiled to infer past distribution, and species distribution models were used to predict the Last Glacial Maximum (LGM) and future distribution of climatically suitable areas. Results: We found strong genetic differentiation between the populations from Europe/East Greenland … during the last glaciation was inferred based on the fossil records and distribution modelling. A 46-57% reduction in suitable areas was predicted for 2080 compared to the present. However, mainly southern alpine populations may go extinct, causing a loss of about 5% of the genetic diversity in the species. … Main conclusions: From a continuous range in Central Europe during the last glaciation, northward colonization probably occurred as a broad front, maintaining diversity as the climate warmed. This explains why the potential extinction of southern populations by 2080 will cause a comparatively low loss …

  18. Toward Bayesian inference of the spatial distribution of proteins from three-cube Förster resonance energy transfer data

    DEFF Research Database (Denmark)

    Hooghoudt, Jan Otto; Barroso, Margarida; Waagepetersen, Rasmus Plenge

    2017-01-01

    Förster resonance energy transfer (FRET) is a quantum-physical phenomenon where energy may be transferred from one molecule to a neighbouring molecule if the molecules are close enough. Using fluorophore molecule marking of proteins in a cell it is possible to measure in microscopic images to what … In this paper we propose a new likelihood-based approach to statistical inference for FRET microscopic data. The likelihood function is obtained from a detailed modeling of the FRET data-generating mechanism conditional on a protein configuration. We next follow a Bayesian approach and introduce a spatial point …

  19. Mid-range adiabatic wireless energy transfer via a mediator coil

    International Nuclear Information System (INIS)

    Rangelov, A.A.; Vitanov, N.V.

    2012-01-01

    A technique for efficient mid-range wireless energy transfer between two coils via a mediator coil is proposed. By varying the coil frequencies, three resonances are created: emitter–mediator (EM), mediator–receiver (MR) and emitter–receiver (ER). If the frequency sweeps are adiabatic and such that the EM resonance precedes the MR resonance, the energy flows sequentially along the chain emitter–mediator–receiver. If the MR resonance precedes the EM resonance, then the energy flows directly from the emitter to the receiver via the ER resonance; then the losses from the mediator are suppressed. This technique is robust against noise, resonant constraints and external interferences. - Highlights: ► Efficient and robust mid-range wireless energy transfer via a mediator coil. ► The adiabatic energy transfer is analogous to adiabatic passage in quantum optics. ► Wireless energy transfer is insensitive to any resonant constraints. ► Wireless energy transfer is insensitive to noise in the neighborhood of the coils.
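The sequential, adiabatic transfer described above is closely analogous to STIRAP in quantum optics (a connection the highlights themselves draw). A small numerical sketch with three coupled modes shows the key effect: with the mediator-receiver coupling applied before the emitter-mediator coupling, energy reaches the receiver while the lossy mediator stays almost unpopulated. All parameters are illustrative, not taken from the paper.

```python
# Three coupled modes (emitter, mediator, receiver) with Gaussian
# time-dependent couplings in the "counterintuitive" order: the
# mediator-receiver pulse precedes the emitter-mediator pulse. Illustrative
# parameters; integrates the amplitude equations i*dc/dt = H(t) c.

import numpy as np
from scipy.integrate import solve_ivp

O0, DELAY, WIDTH = 20.0, 0.6, 1.0  # peak coupling, pulse delay, pulse width

def coupling_em(t):  # emitter-mediator coupling, applied later
    return O0 * np.exp(-((t - DELAY) / WIDTH) ** 2)

def coupling_mr(t):  # mediator-receiver coupling, applied first
    return O0 * np.exp(-((t + DELAY) / WIDTH) ** 2)

def rhs(t, y):
    c = y[:3] + 1j * y[3:]  # complex mode amplitudes, split into re/im
    h = np.array([[0.0, coupling_em(t), 0.0],
                  [coupling_em(t), 0.0, coupling_mr(t)],
                  [0.0, coupling_mr(t), 0.0]])
    dc = -1j * h @ c
    return np.concatenate([dc.real, dc.imag])

y0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # all energy in the emitter
sol = solve_ivp(rhs, (-6.0, 6.0), y0, rtol=1e-9, atol=1e-9)
pops = sol.y[:3] ** 2 + sol.y[3:] ** 2          # mode populations over time
print(round(pops[2, -1], 3))    # final receiver population, close to 1
print(round(pops[1].max(), 3))  # peak mediator population, small
```

Reversing the pulse order would instead populate the mediator transiently, which is the lossy "sequential" pathway the abstract contrasts with.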

  20. Battery electric vehicle energy consumption modelling for range estimation

    NARCIS (Netherlands)

    Wang, J.; Besselink, I.J.M.; Nijmeijer, H.

    2017-01-01

    Range anxiety is considered as one of the major barriers to the mass adoption of battery electric vehicles (BEVs). One method to solve this problem is to provide accurate range estimation to the driver. This paper describes a vehicle energy consumption model considering the influence of weather
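A minimal physics-based sketch of the kind of energy consumption model the abstract describes: rolling resistance plus aerodynamic drag at constant speed, divided into the battery energy to estimate range. All vehicle parameters below are illustrative, not taken from the paper.

```python
# Constant-speed BEV range estimate from a simple longitudinal force model.
# Hypothetical parameters; a full model would add grade, acceleration,
# auxiliary loads, and the weather effects the paper considers.

RHO_AIR = 1.2   # kg/m^3
G = 9.81        # m/s^2

def range_km(batt_kwh, mass_kg, c_rr, c_d, area_m2, speed_kmh, drivetrain_eff):
    v = speed_kmh / 3.6                                  # m/s
    f_roll = mass_kg * G * c_rr                          # rolling resistance, N
    f_aero = 0.5 * RHO_AIR * c_d * area_m2 * v ** 2      # aerodynamic drag, N
    energy_per_m = (f_roll + f_aero) / drivetrain_eff    # J/m drawn from battery
    return batt_kwh * 3.6e6 / energy_per_m / 1000.0      # km

est = range_km(batt_kwh=60, mass_kg=1600, c_rr=0.01, c_d=0.28,
               area_m2=2.3, speed_kmh=100, drivetrain_eff=0.85)
print(round(est))  # a few hundred km at a constant 100 km/h
```

Because the drag term grows with v², the same battery yields a noticeably shorter range at highway speeds than in urban driving, which is one reason accurate consumption models matter for range estimation.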

  1. Mechanism of long-range penetration of low-energy ions in botanic samples

    International Nuclear Information System (INIS)

    Liu Feng; Wang Yugang; Xue Jianming; Wang Sixue; Du Guanghua; Yan Sha; Zhao Weijiang

    2002-01-01

    The authors present experimental evidence to reveal the mechanism of long-range penetration of low-energy ions in botanic samples. In the 100 keV Ar + ion transmission measurement, the result confirmed that low-energy ions could penetrate at least 60 μm thick kidney bean slices with the probability of about 1.0 x 10 -5 . The energy spectrum of 1 MeV He + ions penetrating botanic samples has shown that there is a peak of the count of ions with little energy loss. The probability of the low-energy ions penetrating the botanic sample is almost the same as that of the high-energy ions penetrating the same samples with little energy loss. The results indicate that there are some micro-regions with mass thickness less than the projectile range of low-energy ions in the botanic samples and they result in the long-range penetration of low-energy ions in botanic samples

  2. Range and energy functions of interest in neutron dosimetry

    International Nuclear Information System (INIS)

    Bhatia, D.P.; Nagarajan, P.S.

    1978-01-01

    This report documents the energy and range functions generated and used in fast neutron interface dosimetry studies. The basic data of stopping power employed are the most recent. The present report covers a number of media mainly air, oxygen, nitrogen, polythene, graphite, bone and tissue, and a number of charged particles, namely protons, alphas, 9 Be, 11 B, 12 C, 13 C, 14 N and 16 O. These functions would be useful for generation of energy and range values for any of the above particles in any of the above media within +- 1% in any dosimetric calculations. (author)

  3. Adaptive Neuro-Fuzzy Inference Systems as a Strategy for Predicting and Controlling the Energy Produced from Renewable Sources

    Directory of Open Access Journals (Sweden)

    Otilia Elena Dragomir

    2015-11-01

    Full Text Available The challenge addressed in this paper is controlling the performance of the future state of a microgrid supplied with energy produced from renewable energy sources. The added value of this proposal lies in identifying the most commonly used criteria, related to each modeling step, that can lead to an optimal neural-network forecasting tool. In order to underline the effects of users’ decision-making on forecasting performance, in the second part of the article two Adaptive Neuro-Fuzzy Inference System (ANFIS) models are tested and evaluated. Several scenarios are built by changing the prediction time horizon (Scenario 1) and the shape of the membership functions (Scenario 2).

  4. Electron beam absorption in solid and in water phantoms: depth scaling and energy-range relations

    International Nuclear Information System (INIS)

    Grosswendt, B.; Roos, M.

    1989-01-01

    In electron dosimetry, energy parameters are used whose values are evaluated from ranges in water. The electron ranges in water may be deduced from ranges measured in solid phantoms. Several procedures recommended by national and international organisations differ both in the scaling of the ranges and in the energy-range relations for water. Using the Monte Carlo method, the application of different procedures for electron energies below 10 MeV is studied for different phantom materials. It is shown that deviations in the range scaling and in the energy-range relations for water may accumulate to give energy errors of several per cent. In consequence, energy-range relations are deduced for several solid phantom materials which enable a single-step energy determination. (author)

  5. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between four and 92 (Be, C, H 2 O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly due to the fact that theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as - 2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)
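The traditional ICRU Report 35 energy-range relation for electron beams in water, against which the abstract reports deviations of up to about -2.2% at 50 MeV, is often quoted in the quadratic form below. This is a sketch of the reference relation only; the paper's improved relationship is not reproduced here.

```python
# ICRU Report 35 form of the electron energy-range relation in water:
# E0 = 0.22 + 1.98*Rp + 0.0025*Rp**2, with E0 in MeV and the practical
# range Rp in cm. Shown for illustration of the relation's shape.

def icru35_energy_mev(rp_cm: float) -> float:
    """Mean incident beam energy from the practical range in water."""
    return 0.22 + 1.98 * rp_cm + 0.0025 * rp_cm ** 2

print(round(icru35_energy_mev(10.0), 2))  # a ~20 MeV beam has a ~10 cm range
```

The quadratic term matters mainly at high energies, which is exactly where the abstract finds the old relation to deviate most.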

  6. Range-energy relations and stopping power of water, water vapour and tissue equivalent liquid for α particles over the energy range 0.5 to 8 MeV

    International Nuclear Information System (INIS)

    Palmer, R.B.J.; Akhavan-Rezayat, Ahmad

    1978-01-01

    Experimental range-energy relations are presented for alpha particles in water, water vapour and tissue equivalent liquid at energies up to 8 MeV. From these relations differential stopping powers are derived at 0.25 MeV energy intervals. Consideration is given to sources of error in the range-energy measurements and to the uncertainties that these will introduce into the stopping power values. The ratio of the differential stopping power of muscle equivalent liquid to that of water over the energy range 0.5 to 7.5 MeV is discussed in relation to the specific gravity and chemical composition of the muscle equivalent liquid. Theoretical molecular stopping power calculations based upon the Bethe formula are also presented for water. The effect of phase upon the stopping power of water is discussed. The molecular stopping power of water vapour is shown to be significantly higher than that of water for energies below 1.25 MeV and above 2.5 MeV, the ratio of the two stopping powers rising to 1.39 at 0.5 MeV and to 1.13 at 7.0 MeV. Stopping power measurements for other liquids and vapours are compared with the results for water and water vapour and some are observed to have stopping power ratios in the vapour and liquid phases which vary with energy in a similar way to water. It is suggested that there may be several factors contributing to the increased stopping power of liquids. The need for further experimental results on a wider range of liquids is stressed
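The theoretical stopping powers discussed above are based on the Bethe formula; a nonrelativistic sketch for alpha particles in water is given below. Constants and the mean excitation energy are textbook values; this toy omits shell, density-effect and phase corrections, so it is only indicative above roughly 1 MeV.

```python
# Nonrelativistic Bethe-formula sketch of the collision stopping power of
# liquid water for alpha particles. Illustrative only: no shell or phase
# corrections, so low-energy values are unreliable.

import math

K = 0.307075          # MeV cm^2/g (4*pi*N_A*r_e^2*m_e*c^2 per unit Z/A)
ME_C2_EV = 0.510999e6 # electron rest energy, eV
M_ALPHA = 3727.38     # alpha rest energy, MeV
Z_OVER_A = 0.5551     # water
I_WATER = 75.0        # eV, mean excitation energy of liquid water
Z_PROJ = 2            # alpha particle charge number

def stopping_power(t_mev: float) -> float:
    """-dE/dx in MeV/cm for an alpha of kinetic energy t_mev in water."""
    beta2 = 2.0 * t_mev / M_ALPHA                       # nonrelativistic
    log_term = math.log(2.0 * ME_C2_EV * beta2 / I_WATER)
    return K * Z_PROJ**2 * Z_OVER_A / beta2 * log_term  # rho = 1 g/cm^3

for t in (2.0, 5.0, 8.0):
    print(t, round(stopping_power(t)))  # decreases with energy, roughly as 1/beta^2
```

Integrating the reciprocal of this stopping power over energy gives the continuous-slowing-down range, the quantity behind the range-energy relations in the abstract.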

  7. Auroral electron energies

    International Nuclear Information System (INIS)

    McEwan, D.J.; Duncan, C.N.; Montalbetti, R.

    1981-01-01

    Auroral electron characteristic energies determined from ground-based photometer measurements of the ratio of the 5577 Å OI and 4278 Å N₂⁺ emissions are compared with electron energies measured during two rocket flights into pulsating aurora. Electron spectra with Maxwellian energy distributions were observed in both flights, with an increase in characteristic energy during each pulsation. During the first flight, on February 15, 1980, values of E₀ ranging from 1.4 keV at pulsation minima to 1.8 keV at pulsation maxima were inferred from the 5577/4278 ratios, in good agreement with rocket measurements. During the second flight, on February 23, direct electron energy measurements yielded E₀ values of 1.8 keV rising to 2.1 keV at pulsation maxima. The photometric ratio measurements in this case gave inferred E₀ values about 0.5 keV lower. This apparent discrepancy is considered due to cloud cover which impaired the absolute emission intensity measurements. It is concluded that the 5577/4278 ratio does yield a meaningful measure of the characteristic energy of incoming electrons. This ratio technique, when added to the more sensitive 6300/4278 ratio technique usable in stable auroras, can now provide more complete monitoring of electron influx characteristics. (auth)

  8. Contingency inferences driven by base rates: Valid by sampling

    Directory of Open Access Journals (Sweden)

    Florian Kutzner

    2011-04-01

    Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
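The sampling argument can be compressed into a small simulation: when X and Y are correlated in the population, their sample base rates co-fluctuate, so the PC rule ("the two more frequent levels go together") predicts the sign of the true contingency above chance even when both population base rates are exactly 50%. This is an illustrative simulation in the spirit of the abstract, not the authors' code.

```python
# PC rule above chance by sampling error alone: population base rates are
# p = q = 0.5, population contingency phi > 0. Sample base rates inherit the
# correlation, so "frequent goes with frequent" beats chance. Illustrative.

import numpy as np

rng = np.random.default_rng(7)
phi, n, trials = 0.6, 11, 4000              # contingency, sample size, runs
p11 = 0.25 + phi * 0.25                     # joint cells for p = q = 0.5
probs = [p11, 0.25 - phi * 0.25, 0.25 - phi * 0.25, p11]
outcomes = np.array([(1, 1), (1, 0), (0, 1), (0, 0)])

hits = 0
for _ in range(trials):
    draws = outcomes[rng.choice(4, size=n, p=probs)]
    x_rate, y_rate = draws.mean(axis=0)     # sample base rates (n odd: no ties)
    # PC prediction: positive contingency iff base rates lean the same way
    predicted_positive = (x_rate > 0.5) == (y_rate > 0.5)
    hits += predicted_positive              # the true contingency is positive
print(hits / trials)  # clearly above the chance level of 0.5
```

The effect vanishes as phi goes to 0 and strengthens with larger contingencies, matching the "moderate to strong" qualifier in the abstract.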

  9. Assessment of tidal range energy resources based on flux conservation in Jiantiao Bay, China

    Science.gov (United States)

    Du, Min; Wu, He; Yu, Huaming; Lv, Ting; Li, Jiangyu; Yu, Yujun

    2017-12-01

    The La Rance Tidal Range Power Station in France and the Jiangxia Tidal Range Power Station in China have both operated successfully on a commercial basis for more than 40 years, serving as role models for the public at large. The Sihwa Lake Tidal Range Power Station in South Korea has been the largest marine renewable power station since 2010, with an installed capacity of 254 MW. These practical applications show that tidal range energy, as one kind of marine renewable energy exploitation and utilization technology, is becoming increasingly mature and widely used. However, the assessment of tidal range energy resources is not yet well developed. This paper summarizes the main problems in tidal range power resource assessment, gives a brief introduction to tidal potential energy theory, and presents an analysis and estimation method based on numerical tide modeling. The technical characteristics and applicability of these two approaches are compared with each other. Furthermore, based on the theory of tidal range energy generation combined with flux conservation, this paper proposes a new assessment method that includes a series of evaluation parameters and can easily be applied to calculate the tidal range energy of a sea area. Finally, this method is applied to the assessment of the tidal range energy of Jiantiao Harbor in Zhejiang Province, China, for demonstration and examination.
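The basic tidal potential-energy estimate that range-based assessments start from is E = ½ρgAR² per tide for a basin of surface area A drained through turbines over a tidal range R. The basin values in the sketch below are hypothetical, not the Jiantiao figures.

```python
# Ideal potential energy of a tidal basin per tide, E = 0.5*rho*g*A*R**2,
# and a rough annual total assuming semidiurnal tides. Hypothetical basin;
# real assessments apply efficiency factors and the flux-based corrections
# the paper discusses.

RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2
TIDES_PER_YEAR = 705    # approx. number of semidiurnal cycles per year

def energy_per_tide_j(area_m2: float, tidal_range_m: float) -> float:
    return 0.5 * RHO_SEAWATER * G * area_m2 * tidal_range_m ** 2

area, tidal_range = 1.0e6, 4.0                 # hypothetical 1 km^2 basin, 4 m range
e_tide = energy_per_tide_j(area, tidal_range)
annual_gwh = e_tide * TIDES_PER_YEAR / 3.6e12  # J -> GWh
print(round(e_tide / 3.6e9, 1), "MWh per tide;", round(annual_gwh, 1), "GWh/yr")
```

Because the energy scales with R², a site survey's estimate of the tidal range dominates the resource assessment, which is why range statistics receive so much attention in these methods.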

  10. NEUTRON-PROTON EFFECTIVE RANGE PARAMETERS AND ZERO-ENERGY SHAPE DEPENDENCE.

    Energy Technology Data Exchange (ETDEWEB)

    HACKENBURG, R.W.

    2005-06-01

    A completely model-independent effective range theory fit to the available unpolarized np scattering data below 3 MeV determines the zero-energy free-proton cross section σ₀ = 20.4287 ± 0.0078 b, the singlet apparent effective range r_s = 2.754 ± 0.018(stat) ± 0.056(syst) fm, and slightly improves the error on the parahydrogen coherent scattering length, a_c = -3.7406 ± 0.0010 fm. The triplet and singlet scattering lengths and the triplet mixed effective range are calculated to be a_t = 5.4114 ± 0.0015 fm, a_s = -23.7153 ± 0.0043 fm, and ρ_t(0, -ε_t) = 1.7468 ± 0.0019 fm. The model-independent analysis also determines the zero-energy effective ranges by treating them as separate fit parameters without the constraint from the deuteron binding energy ε_t. These are determined to be ρ_t(0,0) = 1.705 ± 0.023 fm and ρ_s(0,0) = 2.665 ± 0.056 fm. This determination of ρ_t(0,0) and ρ_s(0,0) is most sensitive to the sparse data between about 20 and 600 keV, where the correlation between the determined values of ρ_t(0,0) and ρ_s(0,0) is at a minimum. This correlation is responsible for the large systematic error in r_s. More precise data in this range are needed. The present data do not even determine (with confidence) that ρ_t(0,0) ≠ ρ_t(0, -ε_t), referred to here as "zero-energy shape dependence". The widely used measurement of σ₀ = 20.491 ± 0.014 b from W. Dilg, Phys. Rev. C 11, 103 (1975), is argued to be in error.
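The quoted zero-energy observables are mutually consistent via standard spin-averaging relations: the zero-energy free-proton cross section is σ₀ = π(3a_t² + a_s²), and the parahydrogen coherent scattering length is a_c = (3a_t + a_s)/2, where the factor 1/2 (rather than the spin weight 1/4) absorbs the (A+1)/A = 2 free-to-bound conversion for hydrogen. A quick numerical check:

```python
# Consistency check of the zero-energy np observables quoted above from the
# fitted scattering lengths: sigma_0 = pi*(3*a_t**2 + a_s**2) and
# a_c = (3*a_t + a_s)/2 (bound-proton convention for parahydrogen).

import math

A_T = 5.4114    # fm, triplet scattering length
A_S = -23.7153  # fm, singlet scattering length

sigma0_fm2 = math.pi * (3.0 * A_T**2 + A_S**2)
sigma0_barn = sigma0_fm2 / 100.0          # 1 barn = 100 fm^2
a_c = (3.0 * A_T + A_S) / 2.0

print(round(sigma0_barn, 4), round(a_c, 4))  # ~20.4287 b, ~-3.7406 fm
```

Both derived values reproduce the abstract's σ₀ and a_c to the quoted precision, which is exactly the internal consistency an effective range theory fit enforces.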

  11. Efficient Bayesian inference for ARFIMA processes

    Science.gov (United States)

    Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.

    2015-03-01

    Many geophysical quantities, like atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long-range dependence (LRD). LRD means that these quantities experience non-trivial temporal memory, which potentially enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LRD. In this paper we present a modern and systematic approach to the inference of LRD. Rather than Mandelbrot's fractional Gaussian noise, we use the more flexible Autoregressive Fractional Integrated Moving Average (ARFIMA) model, which is widely used in time series analysis and of increasing interest in climate science. Unlike most previous work on the inference of LRD, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g. short memory effects) can be integrated over in order to focus more directly on the long memory parameters and on hypothesis testing. We illustrate our new methodology on the Nile water level data, with favorable comparison to the standard estimators.
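The long-memory ingredient of ARFIMA is the fractional differencing operator (1 − B)^d. A minimal sketch of its truncated binomial expansion follows; this illustrates the standard recursive weights only, not the authors' approximate likelihood.

```python
import numpy as np

# Sketch: fractional differencing, the core of ARFIMA(p, d, q).
# (1 - B)^d expands into weights w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k,
# which decay hyperbolically -- the signature of long-range dependence
# when 0 < d < 0.5.

def frac_diff_weights(d: float, n: int) -> np.ndarray:
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x: np.ndarray, d: float) -> np.ndarray:
    """Apply (1 - B)^d to a series via the truncated expansion."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:k + 1][::-1] @ x[:k + 1] for k in range(len(x))])

w = frac_diff_weights(0.3, 5)
print(w[:3])  # [1.0, -0.3, -0.105]
```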

  12. HESS J1427−608: AN UNUSUAL HARD, UNBROKEN γ-RAY SPECTRUM IN A VERY WIDE ENERGY RANGE

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Xiao-Lei; Gao, Wei-Hong [Department of Physics and Institute of Theoretical Physics, Nanjing Normal University, Nanjing 210046 (China); Xin, Yu-Liang; Liao, Neng-Hui; Yuan, Qiang; He, Hao-Ning; Fan, Yi-Zhong; Liu, Si-Ming, E-mail: yuanq@pmo.ac.cn, E-mail: gaoweihong@njnu.edu.cn, E-mail: liusm@pmo.ac.cn [Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China)

    2017-01-20

    We report the detection of a GeV γ-ray source that spatially overlaps and is thus very likely associated with the unidentified very high energy (VHE) γ-ray source HESS J1427−608, using the Pass 8 data recorded by the Fermi Large Area Telescope. The photon spectrum of this source is best described by a power law with an index of 1.85 ± 0.17 in the energy range of 3–500 GeV, and the measured flux connects smoothly with that of HESS J1427−608 at a few hundred gigaelectronvolts. This source shows no significant extension or time variation. The broadband GeV to TeV emission over four decades of energies can be well fitted by a single power-law function with an index of 2.0, without obvious indication of a spectral cutoff toward high energies. Such a result implies that HESS J1427−608 may be a PeV particle accelerator. We discuss the possible nature of HESS J1427−608 according to the multiwavelength spectral fittings. Given the relatively large errors, either a leptonic or a hadronic model can explain the multiwavelength data from radio to VHE γ-rays. The inferred magnetic field strength is a few microgauss, which is smaller than the typical values of supernova remnants (SNRs) and is consistent with some pulsar wind nebulae (PWNe). On the other hand, the flat γ-ray spectrum is slightly different from typical PWNe but is similar to that of some known SNRs.
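A power-law photon index like the 2.0 quoted above is typically recovered by fitting binned fluxes in log-log space. The sketch below uses synthetic data with a known index, not the actual Fermi-LAT/HESS points; energies, noise level and bin placement are assumptions.

```python
import numpy as np

# Sketch: recovering a power-law photon index from binned fluxes by a
# log-log linear fit, as one would for a dN/dE ~ E^(-Gamma) spectrum.
# The energies and fluxes below are synthetic, not the measured data.

rng = np.random.default_rng(0)
energy = np.logspace(0.5, 4.5, 20)   # ~3 GeV to ~30 TeV (four decades), arbitrary bins
gamma_true = 2.0
flux = energy ** (-gamma_true) * rng.lognormal(0.0, 0.05, size=energy.size)

# Linear fit in log space: log(flux) = -Gamma * log(E) + c
slope, intercept = np.polyfit(np.log(energy), np.log(flux), 1)
print(round(-slope, 2))  # close to the injected index of 2.0
```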

  13. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    Science.gov (United States)

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study is targeted at estimating the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is extracted in the form of relative wave energy features of the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find out the degree of importance of the different electroencephalogram frequency bands associated with the different anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between awake, light anaesthesia, moderate anaesthesia and deep anaesthesia. A novel depth of anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system with a fuzzy c-means clustering algorithm, which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
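The relative wave energy (RWE) feature described above can be sketched with a plain orthonormal Haar decomposition (the paper uses a 4-level DWT; the wavelet family here is an assumption chosen so the example needs only NumPy). RWE is each sub-band's energy divided by the total, so the features sum to 1.

```python
import numpy as np

# Sketch of the relative wave energy (RWE) feature, assuming a Haar wavelet
# for simplicity.  RWE_i = E_i / E_total over the 4 detail bands plus the
# final approximation band, so the five features sum to 1.

def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def relative_wave_energy(signal, levels=4):
    a, energies = np.asarray(signal, float), []
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))        # final approximation band
    energies = np.array(energies)
    return energies / energies.sum()

rng = np.random.default_rng(1)
eeg = rng.standard_normal(256)             # stand-in for one EEG epoch
rwe = relative_wave_energy(eeg)
print(len(rwe), round(float(rwe.sum()), 6))  # 5 bands, summing to 1
```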

  14. Gamma-ray astronomy in the medium energy (10-50 MeV) range

    International Nuclear Information System (INIS)

    Kniffen, D.A.; Bertsch, D.L.; Palmeira, R.A.R.; Rao, K.R.

    1977-01-01

    Gamma-ray astronomy in the medium energy (10-50 MeV) range can provide unique information with which to study many astrophysical problems. Observations in the 10-50 MeV range provide the cleanest window with which to view the isotropic diffuse component of the radiation and to study the possible cosmological implications of the spectrum. For the study of compact sources, this is the important region between the X-ray sky and the vastly different γ-ray sky seen by SAS-2 and COS-B. To understand the implications of medium energy γ-ray astronomy for the study of the galactic diffuse γ-radiation, the model developed to explain the high energy γ-ray observations of SAS-2 is extended to the medium energy range. This work illustrates the importance of medium energy γ-ray astronomy for studying the electromagnetic component of the galactic cosmic rays. To observe the medium energy component of the intense galactic center γ-ray emission, two balloon flights of a medium energy γ-ray spark chamber telescope were made in Brazil in 1975. These results indicate the emission is higher than previously thought and above the predictions of the theoretical model.

  15. Photoproduction in the Energy Range 70-200 GeV

    CERN Multimedia

    2002-01-01

    This experiment continues the photoproduction studies of WA4 and WA57 up to the higher energies made available by the upgrading of the West Hall. An electron beam of energy 200 GeV is used to produce tagged photons in the range 65-180 GeV; the photon beam is incident on a 60 cm liquid hydrogen target in the Omega Spectrometer. A Ring Image Cherenkov detector provides pion/kaon separation up to 150 GeV/c. The Transition Radiation Detector extends the charged pion identification to the momentum range from about 80 GeV/c upwards. The large lead/liquid scintillator calorimeter built by the WA70 collaboration and the new lead/scintillating fibre detector (Plug) are used for the detection of the $\gamma$ rays produced by the interactions of the primary photons in the hydrogen target. The aim is to make a survey of photoproduction reactions up to photon energies of 200 GeV. The large aperture of the Omega Spectrometer will particularly enable study of fragmentation of the photon to states of high mass, up to @C 9 G...

  16. Exploring the range of energy savings likely from energy efficiency retrofit measures in Ireland's residential sector

    International Nuclear Information System (INIS)

    Dineen, D.; Ó Gallachóir, B.P.

    2017-01-01

    This paper estimates the potential energy savings in the Irish residential sector by 2020 due to the introduction of an ambitious retrofit programme. We estimate the technical energy savings potential of retrofit measures targeting the energy efficiency of the space and water heating end uses of the 2011 stock of residential dwellings between 2012 and 2020. We build eight separate scenarios, varying the number of dwellings retrofitted and the depth of retrofit carried out, in order to investigate the range of energy savings possible. In 2020 the estimated technical savings potential lies in the range from 1713 GWh to 10,817 GWh, but is more likely to fall within the lower end of this range, i.e. between 1700 and 4360 GWh. When rebound effects are taken into account, this reduces further to between 1100 GWh and 2800 GWh per annum. The purpose of this paper was to test the robustness of the NEEAP target savings for residential retrofit, i.e. 3000 GWh by 2020. We conclude that this target is technically feasible but very challenging and unlikely to be achieved based on progress to date. It will require a significant shift towards deeper retrofit measures compared to what has been achieved by previous schemes. - Highlights: • Paper estimates range of energy savings likely from Irish residential retrofit. • Achieving NEEAP target savings of 3000 GWh by 2020 is feasible but very challenging. • Likely savings of 1100–2800 GWh per annum in 2020, including rebound. • NEEAP target unlikely to be achieved based on current trends.

  17. Alpha Beam Energy Determination Using a Range Measuring Device for Radioisotope Production

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Yong; Kim, Byeon Gil; Hong, Seung Pyo; Kim, Ran Young; Chun, Kwon Soo [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2016-05-15

    The threshold energy of the {sup 209}Bi(α,3n){sup 210}At reaction is about 30 MeV. Our laboratory previously suggested an energy measurement method to confirm a proton beam's energy by using a range measurement device. Here, the same approach was used to measure the energy of an alpha beam. An alpha beam of energy 29 MeV was extracted from the cyclotron for the production of {sup 211}At. The device is composed of four parts: an absorber, a drive shaft, a servo motor and a Faraday cup. The drive shaft is mounted on the absorber, connects with the axis of the servo motor, and is rotated by it. The Faraday cup measures the beam flux. As the drive shaft rotates, the thickness of absorber traversed varies with the rotation angle of the absorber. The energy of the alpha particles accelerated and extracted from the MC-50 cyclotron was calculated from the measured particle range in Al foil using the ASTAR, SRIM and MCNPX software. There was a small discrepancy, within a 0.5 MeV error range, between the expected energy and the calculated energy. We plan further experiments with various alpha particle energies and with another methodology, for example, measurement of the cross section of the nuclear reaction.
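Turning a measured range back into a beam energy amounts to inverting a monotone range-energy table such as one exported from ASTAR or SRIM. The table values below are placeholders for illustration only, not real ASTAR output.

```python
import numpy as np

# Sketch of inferring beam energy from a measured particle range, assuming a
# monotone range-energy table (e.g. exported from ASTAR or SRIM).  The grid
# values below are ILLUSTRATIVE placeholders, not real stopping-power data.

energy_mev = np.array([20.0, 24.0, 28.0, 32.0, 36.0])      # illustrative energies
range_um = np.array([150.0, 200.0, 260.0, 330.0, 410.0])   # illustrative ranges in Al

def energy_from_range(measured_range_um: float) -> float:
    """Invert the table: interpolate energy at the measured range."""
    return float(np.interp(measured_range_um, range_um, energy_mev))

print(energy_from_range(295.0))  # 30.0 (midway between the 28 and 32 MeV rows)
```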

  18. Human Inferences about Sequences: A Minimal Transition Probability Model.

    Directory of Open Access Journals (Sweden)

    Florent Meyniel

    2016-12-01

    Full Text Available The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
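A learner of this kind can be sketched as Dirichlet counts over observed transitions with a forgetting factor playing the role of the single free parameter. This is an illustrative reconstruction in the spirit of the model, not the authors' exact implementation; the forgetting factor and prior values are assumptions.

```python
import numpy as np

# Minimal sketch of a transition-probability learner: Dirichlet counts of each
# observed transition, leaked by a forgetting factor omega (standing in for
# the model's single free parameter) so the estimate can track a
# time-varying transition matrix.

def transition_estimates(seq, omega=0.95, n_states=2, prior=1.0):
    """Return the estimated P(next | current) matrix after each observation."""
    counts = np.full((n_states, n_states), prior)
    estimates = []
    for prev, nxt in zip(seq, seq[1:]):
        counts *= omega                      # forget old evidence
        counts[prev, nxt] += 1.0             # count the new transition
        estimates.append(counts / counts.sum(axis=1, keepdims=True))
    return estimates

seq = [0, 1, 0, 1, 0, 1, 0, 1]               # pure alternation
p = transition_estimates(seq)[-1]
print(p[0, 1] > 0.5)  # True: alternations come to be expected, so a
                      # repetition would now be surprising
```

The repetition/alternation asymmetry discussed in the abstract falls out of exactly this kind of estimate: once P(alternate) is learned to be high, a repetition generates large surprise.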

  19. Long-range Coulomb interactions in low energy (e,2e) data

    International Nuclear Information System (INIS)

    Waterhouse, D.

    2000-01-01

    Full text: Proper treatment of long-range Coulomb interactions has confounded atomic collision theory since Schrodinger first presented a quantum-mechanical model for atomic interactions. The long-range Coulomb interactions are difficult to include in models in a way that treats the interaction sufficiently well but at the same time ensures the calculation remains tractable. An innovative application of an existing multi-parameter (e,2e) data acquisition system will be described. To clarify the effects of long-range Coulomb interactions, we will report the correlations and interactions that occur at low energy, observed by studying the energy sharing between outgoing electrons in the electron-impact ionisation of krypton

  20. Technical Note: How to use Winbugs to infer animal models

    DEFF Research Database (Denmark)

    Damgaard, Lars Holm

    2007-01-01

    This paper deals with Bayesian inferences of animal models using Gibbs sampling. First, we suggest a general and efficient method for updating additive genetic effects, in which the computational cost is independent of the pedigree depth and increases linearly only with the size of the pedigree....... Second, we show how this approach can be used to draw inferences from a wide range of animal models using the computer package Winbugs. Finally, we illustrate the approach in a simulation study, in which the data are generated and analyzed using Winbugs according to a linear model with i.i.d errors...... having Student's t distributions. In conclusion, Winbugs can be used to make inferences in small-sized, quantitative, genetic data sets applying a wide range of animal models that are not yet standard in the animal breeding literature...
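The Gibbs machinery that WinBUGS automates can be illustrated on a deliberately tiny problem. The sketch below samples (μ, σ²) for i.i.d. normal data with conjugate updates; it is a stand-in for the idea only — the animal-model updates for additive genetic effects described in the paper are far more structured, and the priors here are assumptions.

```python
import numpy as np

# Toy Gibbs sampler for (mu, sigma^2) of i.i.d. normal data, illustrating the
# alternating conditional draws that WinBUGS performs automatically.
# Assumed priors: mu ~ N(0, 100^2), sigma^2 ~ inverse-gamma(0.001, 0.001).

rng = np.random.default_rng(42)
y = rng.normal(5.0, 2.0, size=200)
n, ybar = y.size, y.mean()

mu, sig2 = 0.0, 1.0
draws = []
for it in range(2000):
    # mu | sig2, y : conjugate normal update
    post_var = 1.0 / (n / sig2 + 1.0 / 100.0**2)
    mu = rng.normal(post_var * (n * ybar / sig2), np.sqrt(post_var))
    # sig2 | mu, y : inverse-gamma update (draw gamma, invert)
    a = 0.001 + n / 2.0
    b = 0.001 + 0.5 * np.sum((y - mu) ** 2)
    sig2 = 1.0 / rng.gamma(a, 1.0 / b)
    if it >= 500:                            # discard burn-in
        draws.append(mu)

print(round(float(np.mean(draws)), 1))       # posterior mean near the true 5.0
```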

  1. Comment on "Inference with minimal Gibbs free energy in information field theory".

    Science.gov (United States)

    Iatsenko, D; Stefanovska, A; McClintock, P V E

    2012-03-01

    Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.

  2. Forecasting building energy consumption with hybrid genetic algorithm-hierarchical adaptive network-based fuzzy inference system

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kangji [Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027 (China); School of Electricity Information Engineering, Jiangsu University, Zhenjiang 212013 (China); Su, Hongye [Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027 (China)

    2010-11-15

    There are several ways to forecast building energy consumption, varying from simple regression to models based on physical principles. In this paper, a new method, namely the hybrid genetic algorithm-hierarchical adaptive network-based fuzzy inference system (GA-HANFIS) model, is developed. In this model, a hierarchical structure decreases the rule base dimension. Both the clustering and the rule base parameters are optimized by GAs and neural networks (NNs). The model is applied to predict a hotel's daily air conditioning consumption over a period of 3 months. The results obtained by the proposed model are presented and compared with those of a regular NN method, indicating that the GA-HANFIS model performs better than NNs in terms of forecasting accuracy. (author)

  3. Learning about the internal structure of categories through classification and feature inference.

    Science.gov (United States)

    Jee, Benjamin D; Wiley, Jennifer

    2014-01-01

    Previous research on category learning has found that classification tasks produce representations that are skewed toward diagnostic feature dimensions, whereas feature inference tasks lead to richer representations of within-category structure. Yet, prior studies often measure category knowledge through tasks that involve identifying only the typical features of a category. This neglects an important aspect of a category's internal structure: how typical and atypical features are distributed within a category. The present experiments tested the hypothesis that inference learning results in richer knowledge of internal category structure than classification learning. We introduced several new measures to probe learners' representations of within-category structure. Experiment 1 found that participants in the inference condition learned and used a wider range of feature dimensions than classification learners. Classification learners, however, were more sensitive to the presence of atypical features within categories. Experiment 2 provided converging evidence that classification learners were more likely to incorporate atypical features into their representations. Inference learners were less likely to encode atypical category features, even in a "partial inference" condition that focused learners' attention on the feature dimensions relevant to classification. Overall, these results are contrary to the hypothesis that inference learning produces superior knowledge of within-category structure. Although inference learning promoted representations that included a broad range of category-typical features, classification learning promoted greater sensitivity to the distribution of typical and atypical features within categories.

  4. Mean range and energy of 28Si ions in some Makrofol track detectors

    International Nuclear Information System (INIS)

    Shyam, S.; Mishra, R.; Tripathy, S.P.; Mawar, A.K.; Dwivedi, K.K.; Khathing, D.T.; Srivastava, A.; Avasthi, D.K.

    2000-01-01

    The rate of energy loss of the impinging ion as it passes through succeeding layers of the target material gives information regarding the nature of the material and helps to calculate the range of the ions in a thick target in which the ions are stopped. Here the range and energy loss of 118 MeV 28 Si ions were measured in Makrofol-N, Makrofol-G and Makrofol-KG using the nuclear track technique. The experimental range data are compared with the theoretical values obtained from different computer codes. (author)

  5. Analysis for mass distribution of proton-induced reactions in intermediate energy range

    CERN Document Server

    Xiao Yu Heng

    2002-01-01

    The mass and charge distributions of residual products produced in spallation reactions need to be studied, because they can provide useful information for the disposal of nuclear waste and of the residual radioactivity generated by a spallation neutron target system. In the present work, the Many State Dynamical Model (MSDM), which is based on the Cascade-Exciton Model (CEM), is used to investigate the mass distribution of proton-induced reactions on Nb, Au and Pb in the energy range from 100 MeV to 3 GeV. The agreement between the MSDM simulations and the measured data is good in this energy range; deviations mainly show up in the mass range 90-150 for high energy protons incident upon Au and Pb.

  6. Kilovoltage energy imaging with a radiotherapy linac with a continuously variable energy range.

    Science.gov (United States)

    Roberts, D A; Hansen, V N; Thompson, M G; Poludniowski, G; Niven, A; Seco, J; Evans, P M

    2012-03-01

    In this paper, the effect on image quality of significantly reducing the primary electron energy of a radiotherapy accelerator is investigated using a novel waveguide test piece. The waveguide contains a novel variable coupling device (rotovane), allowing a wide, continuously variable energy range between 1.4 and 9 MeV, suitable for both imaging and therapy. Imaging at linac accelerating potentials close to 1 MV was investigated experimentally and via Monte Carlo simulations. An imaging beam line was designed, and planar and cone beam computed tomography images were obtained to enable qualitative and quantitative comparisons with kilovoltage and megavoltage imaging systems. The imaging beam had an electron energy of 1.4 MeV, which was incident on a water cooled electron window consisting of stainless steel, a 5 mm carbon electron absorber and 2.5 mm aluminium filtration. Images were acquired with an amorphous silicon detector sensitive to diagnostic x-ray energies. The x-ray beam had an average energy of 220 keV and a half value layer of 5.9 mm of copper. Cone beam CT images with the same contrast to noise ratio as a gantry mounted kilovoltage imaging system were obtained with doses as low as 2 cGy. This dose is equivalent to a single 6 MV portal image. While 12 times higher than that of a 100 kVp CBCT system (Elekta XVI), this dose is 140 times lower than a 6 MV cone beam imaging system and 6 times lower than previously published LowZ imaging beams operating at higher (4-5 MeV) energies. The novel coupling device provides for a wide range of electron energies that are suitable for kilovoltage quality imaging and therapy. The imaging system provides high contrast images from the therapy portal at low dose, approaching that of gantry mounted kilovoltage x-ray systems. Additionally, the system provides low dose imaging directly from the therapy portal, potentially allowing for target tracking during radiotherapy treatment. There is the scope with such a tuneable system

  7. Staged Inference using Conditional Deep Learning for energy efficient real-time smart diagnosis.

    Science.gov (United States)

    Parsa, Maryam; Panda, Priyadarshini; Sen, Shreyas; Roy, Kaushik

    2017-07-01

    Recent progress in biosensor technology and wearable devices has created a formidable opportunity for remote healthcare monitoring systems as well as real-time diagnosis and disease prevention. The use of data mining techniques is indispensable for analysis of the large pool of data generated by the wearable devices. Deep learning is among the promising methods for analyzing such data for healthcare applications and disease diagnosis. However, the conventional deep neural networks are computationally intensive and it is impractical to use them in real-time diagnosis with low-powered on-body devices. We propose Staged Inference using Conditional Deep Learning (SICDL), as an energy efficient approach for creating healthcare monitoring systems. For smart diagnostics, we observe that all diagnoses are not equally challenging. The proposed approach thus decomposes the diagnoses into preliminary analysis (such as healthy vs unhealthy) and detailed analysis (such as identifying the specific type of cardio disease). The preliminary diagnosis is conducted real-time with a low complexity neural network realized on the resource-constrained on-body device. The detailed diagnosis requires a larger network that is implemented remotely in cloud and is conditionally activated only for detailed diagnosis (unhealthy individuals). We evaluated the proposed approach using available physiological sensor data from Physionet databases, and achieved 38% energy reduction in comparison to the conventional deep learning approach.
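The staged-inference idea decomposes into a cheap on-body check followed by a conditionally activated detailed model. The sketch below is a toy reconstruction of that control flow; the decision statistics, threshold and labels are all placeholders, not the paper's networks.

```python
# Sketch of staged/conditional inference: a cheap on-body model answers the
# easy "healthy vs unhealthy" question, and only low-confidence or unhealthy
# cases are escalated to a larger (here: simulated "cloud") model.  The
# models and threshold are illustrative stand-ins.

def small_model(sample):
    """Cheap preliminary classifier -> (label, confidence)."""
    score = sum(sample) / len(sample)        # toy decision statistic
    label = "unhealthy" if score > 0.5 else "healthy"
    confidence = abs(score - 0.5) * 2        # 0 near the boundary, 1 far away
    return label, confidence

def large_model(sample):
    """Expensive detailed classifier (stand-in for the cloud-side network)."""
    return "arrhythmia" if max(sample) > 0.9 else "healthy"

def staged_inference(sample, threshold=0.6):
    label, conf = small_model(sample)
    if label == "healthy" and conf >= threshold:
        return label, "on-body"              # cheap path: no escalation
    return large_model(sample), "cloud"      # conditional detailed analysis

print(staged_inference([0.1, 0.2, 0.1]))     # confidently healthy: stays on-body
print(staged_inference([0.6, 0.95, 0.7]))    # escalated for detailed diagnosis
```

The energy saving comes from the first branch: most samples never wake the large network, which mirrors the reported reduction relative to always running the full model.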

  8. Alternative separation of exchange and correlation energies in range-separated density-functional perturbation theory

    DEFF Research Database (Denmark)

    Cornaton, Y.; Stoyanova, A.; Jensen, Hans Jørgen Aagaard

    2013-01-01

    of the noninteracting Kohn-Sham one. When second-order corrections to the density are neglected, the energy expression reduces to a range-separated double-hybrid (RSDH) type of functional, RSDHf, where "f" stands for "full-range integrals" as the regular full-range interaction appears explicitly in the energy...

  9. Application of Bayesian inference to stochastic analytic continuation

    International Nuclear Information System (INIS)

    Fuchs, S; Pruschke, T; Jarrell, M

    2010-01-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data. The algorithm is strictly based on principles of Bayesian statistical inference. It utilizes Monte Carlo simulations to calculate a weighted average of possible energy spectra. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum entropy calculation.

  10. CMOS Active Pixel Sensors as energy-range detectors for proton Computed Tomography

    International Nuclear Information System (INIS)

    Esposito, M.; Waltham, C.; Allinson, N.M.; Anaxagoras, T.; Evans, P.M.; Poludniowski, G.; Green, S.; Parker, D.J.; Price, T.; Manolopoulos, S.; Nieto-Camero, J.

    2015-01-01

    Since the first proof of concept in the early 70s, a number of technologies have been proposed to perform proton CT (pCT), as a means of mapping tissue stopping power for accurate treatment planning in proton therapy. Previous prototypes of energy-range detectors for pCT have mainly been based on scintillator-based calorimeters, which measure the proton residual energy after passing through the patient. However, such an approach is limited by the need for only a single proton to pass through the energy-range detector in a read-out cycle. A novel approach to this problem could be the use of pixelated detectors, where the independent read-out of each pixel allows the residual energies of a number of protons to be measured simultaneously in the same read-out cycle, facilitating a faster and more efficient pCT scan. This paper investigates the suitability of CMOS Active Pixel Sensors (APSs) to track individual protons as they go through a number of CMOS layers, forming an energy-range telescope. Measurements performed at the iThemba Laboratories will be presented and analysed in terms of correlation, to confirm the proton-tracking capability of CMOS APSs

  11. CMOS Active Pixel Sensors as energy-range detectors for proton Computed Tomography.

    Science.gov (United States)

    Esposito, M; Anaxagoras, T; Evans, P M; Green, S; Manolopoulos, S; Nieto-Camero, J; Parker, D J; Poludniowski, G; Price, T; Waltham, C; Allinson, N M

    2015-06-03

    Since the first proof of concept in the early 70s, a number of technologies have been proposed to perform proton CT (pCT), as a means of mapping tissue stopping power for accurate treatment planning in proton therapy. Previous prototypes of energy-range detectors for pCT have mainly been based on scintillator-based calorimeters, which measure the proton residual energy after passing through the patient. However, such an approach is limited by the need for only a single proton to pass through the energy-range detector in a read-out cycle. A novel approach to this problem could be the use of pixelated detectors, where the independent read-out of each pixel allows the residual energies of a number of protons to be measured simultaneously in the same read-out cycle, facilitating a faster and more efficient pCT scan. This paper investigates the suitability of CMOS Active Pixel Sensors (APSs) to track individual protons as they go through a number of CMOS layers, forming an energy-range telescope. Measurements performed at the iThemba Laboratories will be presented and analysed in terms of correlation, to confirm the proton-tracking capability of CMOS APSs.

  12. COREL, Ion Implantation in Solids, Range, Straggling Using Thomas-Fermi Cross-Sections. RASE4, Ion Implantation in Solids, Range, Straggling, Energy Deposition, Recoils. DAMG2, Ion Implantation in Solids, Energy Deposition Distribution with Recoils

    International Nuclear Information System (INIS)

    Brice, D. K.

    1979-01-01

    1 - Description of problem or function: COREL calculates the final average projected range, standard deviation in projected range, standard deviation in locations transverse to projected range, and average range along path for energetic atomic projectiles incident on amorphous targets or crystalline targets oriented such that the projectiles are not incident along low index crystallographic axes or planes. RASE4 calculates the instantaneous average projected range, standard deviation in projected range, standard deviation in locations transverse to projected range, and average range along path for energetic atomic projectiles incident on amorphous targets or crystalline targets oriented such that the projectiles are not incident along low index crystallographic axes or planes. RASE4 also calculates the instantaneous rate at which the projectile is depositing energy into atomic processes (damage) and into electronic processes (electronic excitation), the average range of target atom recoils projected onto the direction of motion of the projectiles, and the standard deviation in the recoil projected range. DAMG2 calculates the distribution in depth of the energy deposited into atomic processes (damage), electronic processes (electronic excitation), or other energy-dependent quantity produced by energetic atomic projectiles incident on amorphous targets or crystalline targets oriented such that the projectiles are not incident along low index crystallographic axes or planes. 2 - Method of solution: COREL: The truncated differential equation which governs the several variables being sought is solved through second-order by trapezoidal integration. The energy-dependent coefficients in the equation are obtained by rectangular integration over the Thomas-Fermi elastic scattering cross section. RASE4: The truncated differential equation which governs the range and straggling variables is solved through second-order by trapezoidal integration. The energy

  13. Prediction of the metabolizable energy requirements of free-range laying hens.

    Science.gov (United States)

    Brainer, M M A; Rabello, C B V; Santos, M J B; Lopes, C C; Ludke, J V; Silva, J H V; Lima, R A

    2016-01-01

    This experiment was conducted with the aim of estimating the ME requirements of free-range laying hens for maintenance, weight gain, and egg production. The experiments were performed to develop an energy requirement prediction equation by using the comparative slaughter technique and the total excreta collection method. Regression equations were used to relate the energy intake, the energy retained in the body and eggs, and the heat production of the hens. These relationships were used to determine the daily ME requirement for maintenance, the efficiency of energy utilization above the requirement for maintenance, and the NE requirement for maintenance. The requirement for weight gain was estimated from the energy content of the carcass, and the diet's efficiency of energy utilization was determined from the weight gain, which was measured by weekly slaughter. The requirement for egg production was estimated by considering the energy content of the eggs and the efficiency of energy deposition in the eggs. The ME requirement and efficiency of energy utilization for maintenance were 121.8 kcal/(kg∙d) and 0.68, respectively. Similarly, the NE requirement for maintenance was 82.4 kcal/(kg∙d), and the efficiency of energy utilization above maintenance was 0.61. Because the carcass weight and energy did not increase during the trial, the requirement for weight gain could not be estimated. The energy requirement and efficiency of energy utilization for egg production were 2.48 kcal/g and 0.61, respectively. The following energy prediction equation for free-range laying hens (without weight gain) was developed: ME/(hen∙d) = 121.8 × W + 2.48 × EM, in which W = body weight (kg) and EM = egg mass (g/[hen∙d]).
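The prediction equation quoted in the abstract is directly computable. The snippet below implements it verbatim; the example hen weight and egg mass are assumed values chosen only to show the arithmetic.

```python
# The abstract's prediction equation for free-range laying hens (no weight gain):
#   ME/(hen*d) = 121.8 * W + 2.48 * EM
# with W = body weight (kg) and EM = egg mass (g/(hen*d)).

def me_requirement(weight_kg: float, egg_mass_g: float) -> float:
    """Daily ME requirement (kcal per hen per day)."""
    return 121.8 * weight_kg + 2.48 * egg_mass_g

# Example (assumed values): a 1.8 kg hen producing 50 g of egg per day
print(round(me_requirement(1.8, 50.0), 2))  # 343.24 kcal/(hen*d)
```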

  14. A Combined Methodology of Adaptive Neuro-Fuzzy Inference System and Genetic Algorithm for Short-term Energy Forecasting

    Directory of Open Access Journals (Sweden)

    KAMPOUROPOULOS, K.

    2014-02-01

    Full Text Available This document presents an energy forecast methodology using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Genetic Algorithms (GA). The GA is used to select the training inputs of the ANFIS in order to minimize the training error. The presented algorithm has been installed and is operating in an automotive manufacturing plant. It periodically communicates with the plant to obtain new information and updates its database in order to improve its training results. Finally, the algorithm's results are used to provide short-term load forecasts for the different modeled consumption processes.
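    The wrapper idea — a GA searching over subsets of candidate inputs, scored by the trained model's error — can be sketched as follows. This is a toy illustration, not the plant system: the ANFIS is replaced by a linear least-squares stand-in, and all data and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: 8 candidate inputs, of which only 0, 3 and 5 drive the target.
X = rng.normal(size=(200, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 5] + 0.01 * rng.normal(size=200)

def fitness(mask):
    """Error of a stand-in model (linear least squares here, where the paper
    trains an ANFIS) on the selected inputs, plus a small per-input penalty
    so that uninformative inputs are not kept for free."""
    if not mask.any():
        return 1e9
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    mse = float(np.mean((y - X[:, mask] @ coef) ** 2))
    return mse + 0.01 * int(mask.sum())

def ga_select(n_inputs=8, pop_size=20, generations=40, p_mut=0.15):
    pop = rng.random((pop_size, n_inputs)) < 0.5          # random bit masks
    for _ in range(generations):
        errs = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(errs)[: pop_size // 2]]  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n_inputs))          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_inputs) < p_mut         # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents] + children)
    errs = np.array([fitness(m) for m in pop])
    return pop[int(np.argmin(errs))]

best_mask = ga_select()
print([i for i in range(8) if best_mask[i]])
```

    The per-input penalty plays the role of a parsimony pressure; without it, training error alone always favors keeping every input.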

  15. Statistical causal inferences and their applications in public health research

    CERN Document Server

    Wu, Pan; Chen, Ding-Geng

    2016-01-01

    This book compiles and presents new developments in statistical causal inference. The accompanying data and computer programs are publicly available so readers may replicate the model development and data analysis presented in each chapter. In this way, methodology is taught so that readers may implement it directly. The book brings together experts engaged in causal inference research to present and discuss recent issues in causal inference methodological development. This is also a timely look at causal inference applied to scenarios that range from clinical trials to mediation and public health research more broadly. In an academic setting, this book will serve as a reference and guide to a course in causal inference at the graduate level (Master's or Doctorate). It is particularly relevant for students pursuing degrees in Statistics, Biostatistics and Computational Biology. Researchers and data analysts in public health and biomedical research will also find this book to be an important reference.

  16. Ion implantation range and energy deposition codes COREL, RASE4, and DAMG2

    International Nuclear Information System (INIS)

    Brice, D.K.

    1977-07-01

    The FORTRAN codes COREL, RASE4 and DAMG2 can be used to calculate quantities associated with ion implantation range and energy deposition distributions within an amorphous target, or for ions incident far from low index directions and planes in crystalline targets. RASE4 calculates the projected range, R_p, the root mean square spread in the projected range, ΔR_p, and the root mean square spread of the distribution perpendicular to the projected range, ΔR_⊥. These parameters are calculated as a function of incident ion energy, E, and the instantaneous energy of the ion, E'. They are sufficient to determine the three-dimensional spatial distribution of the ions in the target in the Gaussian approximation when the depth distribution is independent of the lateral distribution. RASE4 can perform these calculations for targets having up to four different component atomic species. The code COREL is a short, economical version of RASE4 which calculates the range and straggling variables for E' = 0. Its primary use in the present package is to provide the average range and straggling variables for recoiling target atoms which are created by the incident ion. This information is used by RASE4 in calculating the redistribution of deposited energy by the target atom recoils. The code DAMG2 uses the output from RASE4 to calculate the depth distribution of energy deposition into either atomic processes or electronic processes. With other input, DAMG2 can be used to calculate the depth distribution of any energy-dependent interaction between the incident ions and target atoms. This report documents the basic theory behind COREL, RASE4 and DAMG2, including a description of the codes, listings, complete instructions for using the codes, and their limitations.

  17. Inference of a Nonlinear Stochastic Model of the Cardiorespiratory Interaction

    Science.gov (United States)

    Smelyanskiy, V. N.; Luchinsky, D. G.; Stefanovska, A.; McClintock, P. V.

    2005-03-01

    We reconstruct a nonlinear stochastic model of the cardiorespiratory interaction in terms of a set of polynomial basis functions representing the nonlinear force governing system oscillations. The strength and direction of coupling and noise intensity are simultaneously inferred from a univariate blood pressure signal. Our new inference technique does not require extensive global optimization, and it is applicable to a wide range of complex dynamical systems subject to noise.
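    The flavor of this kind of inference can be illustrated with a much-simplified, hypothetical example: simulate an overdamped stochastic system with a polynomial force, then recover the polynomial coefficients by least-squares regression of the finite-difference drift on a polynomial basis. The paper's actual technique is Bayesian and works from a univariate blood pressure signal; everything below is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic record: overdamped motion in a double-well force f(x) = x - x**3,
# a hypothetical stand-in for the cardiorespiratory dynamics in the paper.
dt, n, sigma = 0.01, 50_000, 0.7
dW = rng.normal(size=n - 1) * np.sqrt(dt)
x = np.empty(n)
x[0] = 0.1
for t in range(n - 1):
    x[t + 1] = x[t] + (x[t] - x[t] ** 3) * dt + sigma * dW[t]

# Regress the finite-difference drift on a polynomial basis: the least-squares
# coefficients recover the polynomial force despite the dynamical noise.
drift = np.diff(x) / dt
basis = np.vander(x[:-1], 4, increasing=True)   # columns 1, x, x^2, x^3
coef, *_ = np.linalg.lstsq(basis, drift, rcond=None)
print(np.round(coef, 2))   # close to [0, 1, 0, -1]
```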

  18. Daily energy expenditure in free-ranging Gopher Tortoises (Gopherus polyphemus)

    Science.gov (United States)

    Jodice, P.G.R.; Epperson, D.M.; Visser, G. Henk

    2006-01-01

    Studies of ecological energetics in chelonians are rare. Here, we report the first measurements of daily energy expenditure (DEE) and water influx rates (WIRs) in free-ranging adult Gopher Tortoises (Gopherus polyphemus). We used the doubly labeled water (DLW) method to measure DEE in six adult tortoises during the non-breeding season in south-central Mississippi, USA. Tortoise DEE ranged from 76.7-187.5 kJ/day and WIR ranged from 30.6-93.1 ml H2O/day. Daily energy expenditure did not differ between the sexes, but DEE was positively related to body mass. Water influx rates varied with the interaction of sex and body mass. We used a log/log regression model to assess the allometric relationship between DEE and body mass for Gopher Tortoises, Desert Tortoises (Gopherus agassizii), and Box Turtles (Terrapene carolina), the only chelonians for which DEE has been measured. The slope of this allometric model (0.626) was less than that previously calculated for herbivorous reptiles (0.813), suggesting that chelonians may expend energy at a slower rate per unit of body mass compared to other herbivorous reptiles. We used retrospective power analyses and data from the DLW isotope analyses to develop guidelines for sample sizes and duration of measurement intervals, respectively, for larger-scale energetic studies in this species. © 2006 by the American Society of Ichthyologists and Herpetologists.
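    The log/log (allometric) fit itself is straightforward to reproduce. A sketch with hypothetical mass/DEE pairs; the study's raw data are not given in the abstract:

```python
import math

# Hypothetical (mass kg, DEE kJ/day) pairs standing in for the chelonian data.
data = [(0.5, 40.0), (1.0, 62.0), (2.0, 96.0), (4.0, 148.0)]

# Fit log10(DEE) = log10(a) + b*log10(mass) by ordinary least squares.
xs = [math.log10(m) for m, _ in data]
ys = [math.log10(d) for _, d in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = 10 ** (ybar - b * xbar)   # b comes out near 0.63 for these made-up points
```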

  19. Range-energy relation, range straggling and response function of CsI(Tl), BGO and GSO(Ce) scintillators for light ions

    CERN Document Server

    Avdeichikov, V; Jakobsson, B; Rodin, A M; Ter-Akopian, G M

    2000-01-01

    Range-energy relations and range straggling of ¹,²,³H and ⁴,⁶He isotopes with energies of approximately 50A MeV are measured for the CsI(Tl), BGO and GSO(Ce) scintillators with accuracies better than 0.2% and 5%, respectively. The Si-Sci/PD telescope was exposed to secondary beams from the mass separator ACCULINNA. The experimental technique is based on registering the 'jump' in the amplitude of the photodiode signal for ions passing through the scintillation crystal. The light response of the scintillators for ions with 1 ≤ Z ≤ 4 is measured in the energy range (5-50)A MeV; the results are in good agreement with calculations based on the Birks model. The energy-loss straggling for particles with ΔE/E = 0.01-0.50 and mass up to A = 10 in a 286 μm ΔE silicon detector is studied and compared with theoretical prescriptions. The results allow precise absolute calibration of the scintillation crystal and optimization of particle identification by the ΔE-E(Sci/PD) method.

  20. Alternative separation of exchange and correlation energies in multi-configuration range-separated density-functional theory.

    Science.gov (United States)

    Stoyanova, Alexandrina; Teale, Andrew M; Toulouse, Julien; Helgaker, Trygve; Fromager, Emmanuel

    2013-10-07

    The alternative separation of exchange and correlation energies proposed by Toulouse et al. [Theor. Chem. Acc. 114, 305 (2005)] is explored in the context of multi-configuration range-separated density-functional theory. The new decomposition of the short-range exchange-correlation energy relies on the auxiliary long-range interacting wavefunction rather than the Kohn-Sham (KS) determinant. The advantage, relative to the traditional KS decomposition, is that the wavefunction part of the energy is now computed with the regular (fully interacting) Hamiltonian. One potential drawback is that, because of double counting, the wavefunction used to compute the energy cannot be obtained by minimizing the energy expression with respect to the wavefunction parameters. The problem is overcome by using short-range optimized effective potentials (OEPs). The resulting combination of OEP techniques with wavefunction theory has been investigated in this work, at the Hartree-Fock (HF) and multi-configuration self-consistent-field (MCSCF) levels. In the HF case, an analytical expression for the energy gradient has been derived and implemented. Calculations have been performed within the short-range local density approximation on H2, N2, Li2, and H2O. Significant improvements in binding energies are obtained with the new decomposition of the short-range energy. The importance of optimizing the short-range OEP at the MCSCF level when static correlation becomes significant has also been demonstrated for H2, using a finite-difference gradient. The implementation of the analytical gradient for MCSCF wavefunctions is currently in progress.

  1. Short-range second order screened exchange correction to RPA correlation energies

    Science.gov (United States)

    Beuerle, Matthias; Ochsenfeld, Christian

    2017-11-01

    Direct random phase approximation (RPA) correlation energies have become increasingly popular as a post-Kohn-Sham correction, due to significant improvements over DFT calculations for properties such as long-range dispersion effects, which are problematic in conventional density functional theory. On the other hand, RPA still has various weaknesses, such as unsatisfactory results for non-isogyric processes. This can in part be attributed to the self-correlation present in RPA correlation energies, leading to significant self-interaction errors. Therefore, a variety of schemes have been devised to include exchange in the calculation of RPA correlation energies in order to correct this shortcoming. One of the most popular RPA plus exchange schemes is the second order screened exchange (SOSEX) correction. RPA + SOSEX delivers more accurate absolute correlation energies and also improves upon RPA for non-isogyric processes. On the other hand, RPA + SOSEX barrier heights are worse than those obtained from plain RPA calculations. To combine the benefits of RPA correlation energies and the SOSEX correction, we introduce a short-range RPA + SOSEX correction. Proof of concept calculations and benchmarks showing the advantages of our method are presented.

  2. Energy-Efficient Algorithm for Sensor Networks with Non-Uniform Maximum Transmission Range

    Directory of Open Access Journals (Sweden)

    Yimin Yu

    2011-06-01

    Full Text Available In wireless sensor networks (WSNs), the energy hole problem is a key factor affecting the network lifetime. In a circular multi-hop sensor network (modeled as concentric coronas), the optimal transmission ranges of all coronas can effectively improve network lifetime. In this paper, we investigate WSNs with non-uniform maximum transmission ranges, where sensor nodes deployed in different regions may differ in their maximum transmission range. We then propose an Energy-efficient algorithm for Non-uniform Maximum Transmission range (ENMT), which searches for approximately optimal transmission ranges for all coronas in order to prolong network lifetime. Furthermore, simulation results indicate that ENMT performs better than other algorithms.
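    The corona optimization described here can be caricatured with a brute-force search. The model below is a hypothetical sketch, not ENMT itself: the path-loss exponent, electronics cost, and per-corona maximum ranges are invented, and each corona is assumed to relay all traffic generated at or beyond its inner boundary.

```python
R = 3.0                   # network radius
ALPHA, C = 2, 0.2         # path-loss exponent and fixed per-bit electronics cost
MAXR = [1.5, 1.0, 0.8]    # non-uniform per-corona maximum transmission ranges

def worst_drain(widths):
    """Energy drain rate of the busiest node when the disc is split into
    concentric coronas of the given widths (innermost first)."""
    r, worst = 0.0, 0.0
    for w in widths:
        load = (R**2 - r**2) / ((r + w)**2 - r**2)   # relayed bits per node
        worst = max(worst, load * (w**ALPHA + C))    # energy per bit ~ w^a + c
        r += w
    return worst

# Brute-force search over a grid of feasible corona widths.
step = 0.05
grid = [i * step for i in range(1, int(R / step))]
best = None
for w1 in grid:
    for w2 in grid:
        w3 = R - w1 - w2
        widths = (w1, w2, w3)
        if w3 <= 0 or any(w > m + 1e-9 for w, m in zip(widths, MAXR)):
            continue
        d = worst_drain(widths)
        if best is None or d < best[0]:
            best = (d, widths)
```

    Minimizing the busiest node's drain rate is the usual proxy for maximizing lifetime in this kind of energy-hole analysis.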

  3. Ion range estimation by using dual energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Huenemohr, Nora; Greilich, Steffen [German Cancer Research Center (DKFZ), Heidelberg (Germany). Medical Physics in Radiation Oncology; Krauss, Bernhard [Siemens AG, Forchheim (Germany). Imaging and Therapy; Dinkel, Julien [German Cancer Research Center (DKFZ), Heidelberg (Germany). Radiology; Massachusetts General Hospital, Boston, MA (United States). Radiology; Gillmann, Clarissa [German Cancer Research Center (DKFZ), Heidelberg (Germany). Medical Physics in Radiation Oncology; University Hospital Heidelberg (Germany). Radiation Oncology; Ackermann, Benjamin [Heidelberg Ion-Beam Therapy Center (HIT), Heidelberg (Germany); Jaekel, Oliver [German Cancer Research Center (DKFZ), Heidelberg (Germany). Medical Physics in Radiation Oncology; Heidelberg Ion-Beam Therapy Center (HIT), Heidelberg (Germany); University Hospital Heidelberg (Germany). Radiation Oncology

    2013-07-01

    Inaccurate conversion of CT data to water-equivalent path length (WEPL) is one of the most important uncertainty sources in ion treatment planning. Dual energy CT (DECT) imaging might help to reduce CT number ambiguities with the additional information. In our study we scanned a series of materials (tissue substitutes, aluminum, PMMA, and other polymers) in the dual source scanner (Siemens Somatom Definition Flash). Based on the 80 kVp/140Sn kVp dual energy images, the electron densities ρ_e and effective atomic numbers Z_eff were calculated. We introduced a new lookup table that translates ρ_e to the WEPL. The WEPL residuals from the calibration were significantly reduced for the investigated tissue surrogates compared to the empirical Hounsfield look-up table (single energy CT imaging), from (-1.0 ± 1.8)% to (0.1 ± 0.7)%, and for non-tissue-equivalent PMMA from -7.8% to -1.0%. To assess the benefit of the new DECT calibration, we conducted a treatment planning study for three different idealized cases based on tissue surrogates and PMMA. The DECT calibration yielded a significantly higher target coverage in tissue surrogates and phantom material (i.e. PMMA cylinder; mean target coverage improved from 62% to 98%). To verify the DECT calibration for real tissue, ion ranges through a frozen pig head were measured and compared to predictions calculated by the standard single energy CT calibration and the novel DECT calibration. By using this method, an improvement in ion range estimation from -2.1% water-equivalent thickness deviation (single energy CT) to 0.3% (DECT) was achieved. If one excludes raypaths located on the edge of the sample, which are accompanied by high uncertainties, no significant difference could be observed. (orig.)
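    The lookup-table step can be sketched as piecewise-linear interpolation. The calibration pairs below are hypothetical placeholders; the paper's fitted values are not reproduced in the abstract:

```python
import bisect

# Hypothetical (relative electron density, WEPL relative to water) pairs.
TABLE = [(0.2, 0.20), (0.5, 0.50), (1.0, 1.00), (1.2, 1.18), (1.7, 1.65)]

def wepl_from_rho_e(rho_e):
    """Piecewise-linear interpolation of a rho_e -> WEPL calibration table,
    clamped at the table ends."""
    xs = [x for x, _ in TABLE]
    ys = [y for _, y in TABLE]
    if rho_e <= xs[0]:
        return ys[0]
    if rho_e >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, rho_e)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (rho_e - x0) / (x1 - x0)
```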

  4. Physics of ep collisions in the TeV energy range

    International Nuclear Information System (INIS)

    Altarelli, G.; Mele, B.; Rueckl, R.

    1984-01-01

    We study the physics of electron-proton collisions in the range of centre-of-mass energies between √s ≈ 0.3 TeV (HERA) and √s ≈ (1-2) TeV. The latter energies would be achieved if the electron or positron beam of LEP [E_e ≈ (50-100) GeV] is made to collide with the proton beam of LHC [E_p ≈ (5-10) TeV]. (orig.)
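    The quoted centre-of-mass energies follow from the standard relation for head-on collisions of two beams whose energies dwarf the particle masses, s ≈ 4·E_e·E_p:

```python
import math

def sqrt_s_gev(e_electron_gev, e_proton_gev):
    """Centre-of-mass energy for head-on ep collisions, neglecting masses:
    sqrt(s) = 2 * sqrt(Ee * Ep)."""
    return 2.0 * math.sqrt(e_electron_gev * e_proton_gev)

sqrt_s_gev(50, 5_000)     # -> 1000.0 GeV = 1 TeV (low end of LEP x LHC)
sqrt_s_gev(100, 10_000)   # -> 2000.0 GeV = 2 TeV (high end)
```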

  5. Inferring Magnetospheric Heavy Ion Density using EMIC Waves

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eun-Hwa; Johnson, Jay R.; Kim, Hyomin; Lee, Dong-Hun

    2014-05-01

    We present a method to infer heavy ion concentration ratios from EMIC wave observations that result from ion-ion hybrid (IIH) resonance. A key feature of the ion-ion hybrid resonance is the concentration of wave energy in a field-aligned resonant mode that exhibits linear polarization. This mode-converted wave is localized at the location where the frequency of a compressional wave driver matches the IIH resonance condition, which depends sensitively on the heavy ion concentration. This dependence makes it possible to estimate the heavy ion concentration ratio. In this letter, we evaluate the absorption coefficients at the IIH resonance at Earth's geosynchronous orbit for variable concentrations of He+ and field-aligned wave numbers using a dipole magnetic field. Although wave absorption occurs for a wide range of heavy ion concentrations, it only occurs for a limited range of field-aligned wave numbers such that the IIH resonance frequency is close to, but not exactly the same as, the crossover frequency. Using the wave absorption and observed EMIC waves from the GOES-12 satellite, we demonstrate how this technique can be used to estimate that the He+ concentration is around 4% near L = 6.6.

  6. Fossil preservation and the stratigraphic ranges of taxa

    Science.gov (United States)

    Foote, M.; Raup, D. M.

    1996-01-01

    The incompleteness of the fossil record hinders the inference of evolutionary rates and patterns. Here, we derive relationships among true taxonomic durations, preservation probability, and observed taxonomic ranges. We use these relationships to estimate original distributions of taxonomic durations, preservation probability, and completeness (proportion of taxa preserved), given only the observed ranges. No data on occurrences within the ranges of taxa are required. When preservation is random and the original distribution of durations is exponential, the inference of durations, preservability, and completeness is exact. However, reasonable approximations are possible given non-exponential duration distributions and temporal and taxonomic variation in preservability. Thus, the approaches we describe have great potential in studies of taphonomy, evolutionary rates and patterns, and genealogy. Analyses of Upper Cambrian-Lower Ordovician trilobite species, Paleozoic crinoid genera, Jurassic bivalve species, and Cenozoic mammal species yield the following results: (1) The preservation probability inferred from stratigraphic ranges alone agrees with that inferred from the analysis of stratigraphic gaps when data on the latter are available. (2) Whereas median durations based on simple tabulations of observed ranges are biased by stratigraphic resolution, our estimates of median duration, extinction rate, and completeness are not biased. (3) The shorter geologic ranges of mammalian species relative to those of bivalves cannot be attributed to a difference in preservation potential. However, we cannot rule out the contribution of taxonomic practice to this difference. (4) In the groups studied, completeness (proportion of species [trilobites, bivalves, mammals] or genera [crinoids] preserved) ranges from 60% to 90%.
The higher estimates of completeness at smaller geographic scales support previous suggestions that the incompleteness of the fossil record reflects loss of
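    One simple estimator in this spirit is the paper's "FreqRat", which infers per-interval preservation probability from the frequencies of taxa whose observed ranges span exactly one, two, and three stratigraphic intervals. A minimal sketch; the counts below are invented for illustration:

```python
def freqrat(f1, f2, f3):
    """FreqRat estimate of per-interval preservation probability:
    R = f2**2 / (f1 * f3), where fN is the number of taxa whose observed
    stratigraphic range spans exactly N intervals."""
    return f2 ** 2 / (f1 * f3)

freqrat(100, 40, 20)   # -> 0.8
```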

  7. Entropic Inference

    Science.gov (United States)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.

  8. Systematic parameter inference in stochastic mesoscopic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
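    The compressive-sensing step — recovering a sparse gPC coefficient vector from fewer samples than unknowns — can be sketched generically with iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem. The dimensions and solver here are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse ground truth: only a few dominant gPC-style coefficients.
n_samples, n_terms = 40, 100
x_true = np.zeros(n_terms)
x_true[[3, 17, 60]] = [1.0, -2.0, 0.5]

# Random measurement matrix standing in for basis evaluations at sample points.
A = rng.normal(size=(n_samples, n_terms)) / np.sqrt(n_samples)
y = A @ x_true

def ista(A, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L    # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage
    return x

x_hat = ista(A, y)   # recovers the 3-sparse vector from 40 of 100 samples
```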

  9. Energy metabolism in mobile, wild-sampled sharks inferred by plasma lipids.

    Science.gov (United States)

    Gallagher, Austin J; Skubel, Rachel A; Pethybridge, Heidi R; Hammerschlag, Neil

    2017-01-01

    Evaluating how predators metabolize energy is increasingly useful for conservation physiology, as it can provide information on their current nutritional condition. However, obtaining metabolic information from mobile marine predators is inherently challenging owing to their relative rarity, cryptic nature and often wide-ranging underwater movements. Here, we investigate aspects of energy metabolism in four free-ranging shark species (n = 281; blacktip, bull, nurse, and tiger) by measuring three metabolic parameters [plasma triglycerides (TAG), free fatty acids (FFA) and cholesterol (CHOL)] via non-lethal biopsy sampling. Plasma TAG, FFA and total CHOL concentrations (in millimoles per litre) varied inter-specifically, and varied with season, year, and shark length within a species. The TAG were highest in the plasma of less active species (nurse and tiger sharks), whereas FFA were highest among species with relatively high energetic demands (blacktip and bull sharks), and CHOL concentrations were highest in bull sharks. Although temporal patterns in all metabolites varied among species, there appeared to be peaks in the spring and summer, with ratios of TAG/CHOL (a proxy for condition) in all species displaying a notable peak in summer. These results provide baseline information on energy metabolism in large sharks and are an important step in understanding how these metabolic parameters can be assessed through non-lethal sampling in the future. In particular, this study emphasizes the importance of accounting for intra-specific and temporal variability in sampling designs seeking to monitor the nutritional condition and metabolic responses of shark populations.

  10. Reinforcement learning or active inference?

    Science.gov (United States)

    Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J

    2009-07-29

    This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.

  11. Reinforcement learning or active inference?

    Directory of Open Access Journals (Sweden)

    Karl J Friston

    2009-07-01

    Full Text Available This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.

  12. Application of long range energy alternative planning (LEAP) model for Thailand energy outlook 2030 : reference case

    International Nuclear Information System (INIS)

    Charusiri, W.; Eua-arporn, B.; Ubonwat, J.

    2008-01-01

    In 2004, the total energy consumption in Thailand increased by 8.8 per cent, from 47,806 to 60,260 ktoe. Long-range Energy Alternatives Planning (LEAP) is an accounting tool that simulates future energy scenarios. According to a Business As Usual (BAU) case, overall energy demand in Thailand is estimated to increase from 61,262 to 254,200 ktoe between 2004 and 2030. Commercial energy consumption, which comprises petroleum products, natural gas, coal and its products, and electricity, increased by 9.0 per cent in Thailand in 2004, and new and renewable energy increased by 7.8 per cent. Nearly 60 per cent of the total commercial energy supply in Thailand was imported, and imports increased for the fifth year in a row. The changes in energy consumption can be attributed to population growth and increases in economic activity and development. 10 refs., 5 tabs., 14 figs
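    The BAU trajectory quoted above implies a compound annual growth rate that is easy to back out:

```python
def implied_growth_rate(e0, e1, years):
    """Compound annual growth rate implied by start/end energy demand."""
    return (e1 / e0) ** (1.0 / years) - 1.0

# 61,262 ktoe (2004) -> 254,200 ktoe (2030) in the BAU case:
rate = implied_growth_rate(61_262, 254_200, 2030 - 2004)   # ~0.056, about 5.6%/yr
```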

  13. The use of remotely-sensed wildland fire radiation to infer the fates of carbon during biomass combustion - the need to understand and quantify a fire's mass and energy budget

    Science.gov (United States)

    Dickinson, M. B.; Dietenberger, M.; Ellicott, E. A.; Hardy, C.; Hudak, A. T.; Kremens, R.; Mathews, W.; Schroeder, W.; Smith, A. M.; Strand, E. K.

    2016-12-01

    Few measurement techniques offer broad-scale insight on the extent and characteristics of biomass combustion during wildland fires. Remotely-sensed radiation is one of these techniques but its measurement suffers from several limitations and, when quantified, its use to derive variables of real interest depends on an understanding of the fire's mass and energy budget. In this talk, we will review certain assumptions of wildland fire radiation measurement and explore the use of those measurements to infer the fates of biomass and the dissipation of combustion energy. Recent measurements show that the perspective of the sensor (nadir vs oblique) matters relative to estimates of fire radiated power. Other considerations for producing accurate estimates of fire radiation from remote sensing include obscuration by an intervening forest canopy and to what extent measurements that are based on the assumption of graybody/blackbody behavior underestimate fire radiation. Fire radiation measurements are generally a means of quantifying other variables and are often not of interest in and of themselves. Use of fire radiation measurements as a means of inference currently relies on correlations with variables of interest such as biomass consumption and sensible and latent heat and emissions fluxes. Radiation is an imperfect basis for these correlations in that it accounts for a minority of combustion energy (~15-30%) and is not a constant as is often assumed. Measurements suggest that fire convective energy accounts for the majority of combustion energy and (after radiation) is followed by latent energy, soil heating, and pyrolysis energy, more or less in that order. Combustion energy in and of itself is not its potential maximum, but is reduced to an effective heat of combustion by combustion inefficiency and by work done to pyrolyze fuel (important in char production) and in moisture vaporization.
The effective heat of combustion is often on the order of 65% of its potential

  14. Renewable Energy Opportunities at White Sands Missile Range, New Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Chvala, William D.; Solana, Amy E.; States, Jennifer C.; Warwick, William M.; Weimar, Mark R.; Dixon, Douglas R.

    2008-09-01

    The document provides an overview of renewable resource potential at White Sands Missile Range (WSMR) based primarily upon analysis of secondary data sources supplemented with limited on-site evaluations. The effort was funded by the U.S. Army Installation Management Command (IMCOM) as follow-on to the 2005 DoD Renewable Energy Assessment. This effort focuses on grid-connected generation of electricity from renewable energy sources and also ground source heat pumps (GSHPs) for heating and cooling buildings, as directed by IMCOM.

  15. Structural Information Inference from Lanthanoid Complexing Systems: Photoluminescence Studies on Isolated Ions

    Science.gov (United States)

    Greisch, Jean Francois; Harding, Michael E.; Chmela, Jiri; Klopper, Willem M.; Schooss, Detlef; Kappes, Manfred M.

    2016-06-01

    The application of lanthanoid complexes ranges from photovoltaics and light-emitting diodes to quantum memories and biological assays. Rationalization of their design requires a thorough understanding of intramolecular processes such as energy transfer, charge transfer, and non-radiative decay involving their subunits. Characterization of the excited states of such complexes considerably benefits from mass spectrometric methods since the associated optical transitions and processes are strongly affected by stoichiometry, symmetry, and overall charge state. We report herein spectroscopic measurements on ensembles of ions trapped in the gas phase and soft-landed in neon matrices. Their interpretation is considerably facilitated by direct comparison with computations. The combination of energy- and time-resolved measurements on isolated species with density functional as well as ligand-field and Franck-Condon computations enables us to infer structural as well as dynamical information about the species studied. The approach is first illustrated for sets of model lanthanoid complexes whose structure and electronic properties are systematically varied via the substitution of one component (lanthanoid, alkali, or alkaline-earth ion): (i) the systematic dependence of ligand-centered phosphorescence on the lanthanoid(III) promotion energy and its impact on sensitization, and (ii) structural changes induced by the substitution of alkali or alkaline-earth ions, in relation to structures inferred using ion mobility spectroscopy. The temperature dependence of sensitization is briefly discussed. The focus is then shifted to measurements involving europium complexes with doxycycline, an antibiotic of the tetracycline family. Besides discussing the complexes' structural and electronic features, we report on their use to monitor enzymatic processes involving hydrogen peroxide or biologically relevant molecules such as adenosine triphosphate (ATP).

  16. More than one kind of inference: re-examining what's learned in feature inference and classification.

    Science.gov (United States)

    Sweller, Naomi; Hayes, Brett K

    2010-08-01

    Three studies examined how task demands that impact attention to typical or atypical category features shape the category representations formed through classification learning and inference learning. During training, categories were learned via exemplar classification or by inferring missing exemplar features. In the latter condition, inferences were made either about missing typical features alone (typical feature inference) or about both missing typical and atypical features (mixed feature inference). Classification and mixed feature inference led to the incorporation of both typical and atypical features into category representations, with both kinds of features influencing inferences about familiar (Experiments 1 and 2) and novel (Experiment 3) test items. Those in the typical inference condition focused primarily on typical features. Together with formal modelling, these results challenge previous accounts that have characterized inference learning as producing a focus on typical category features. The results show that two different kinds of inference learning are possible and that these are subserved by different kinds of category representations.

  17. Autonomous Vehicles Have a Wide Range of Possible Energy Impacts (Poster)

    Energy Technology Data Exchange (ETDEWEB)

    Brown, A.; Repac, B.; Gonder, J.

    2013-07-01

    This poster presents initial estimates of the net energy impacts of automated vehicles (AVs). Automated vehicle technologies are increasingly recognized as having the potential to decrease carbon dioxide emissions and petroleum consumption through mechanisms such as improved efficiency, better routing, lower traffic congestion, and the enabling of advanced technologies. However, some effects of AVs could conceivably increase fuel consumption, such as longer distances traveled, increased use of transportation by underserved groups, and increased travel speeds. The net effect on petroleum use and climate change is still uncertain. To make an aggregate system estimate, we first collect best estimates for the energy impacts of approximately ten effects of AVs. We then use a modified Kaya Identity approach to estimate the range of aggregate effects while avoiding double counting. We find that, depending on numerous factors, there is a wide range of potential energy impacts. Adoption of automated personal or shared vehicles can lead to significant fuel savings but also carries the potential for backfire (a net increase in energy use).
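A Kaya-style aggregation of the kind described above multiplies per-mechanism effects so that overlapping impacts are not double-counted. A minimal sketch of that bookkeeping, with entirely hypothetical low/high multipliers (these are not the poster's estimates):

```python
# Hypothetical per-mechanism multipliers on baseline fuel use (low, high).
# Illustrative values only; the poster's actual factors and numbers differ.
factors = {
    "platooning/efficiency": (0.80, 0.95),
    "smoother traffic flow": (0.85, 1.00),
    "higher travel speeds":  (1.00, 1.30),
    "added travel demand":   (1.00, 1.40),
}

low = high = 1.0
for lo, hi in factors.values():
    low *= lo    # best case: every mechanism cuts fuel use
    high *= hi   # worst case: every mechanism increases it

# The net effect spans fuel savings (low < 1) to backfire (high > 1).
print(f"net multiplier range: {low:.2f} - {high:.2f}")
```

Because the factors enter multiplicatively, a single mechanism with a wide range (here, added travel demand) can dominate the aggregate uncertainty.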

  18. Perceptual inference.

    Science.gov (United States)

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing, either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation: a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli, enabling long-term associations, for example in operant conditioning. Evidence for perceptual inference comes from such phenomena as the cortical co-localisation of object perception with object memory, the invariance of some neurons' responses to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better-known motor reflexes in the sensorimotor system. The adult cerebral cortex can thus be regarded as similar to a servomechanism, using sensory feedback to correct internal models and producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. SEMANTIC PATCH INFERENCE

    DEFF Research Database (Denmark)

    Andersen, Jesper

    2009-01-01

    Collateral evolution is the problem of updating several library-using programs in response to API changes in the used library. In this dissertation we address the issue of understanding collateral evolutions by automatically inferring a high-level specification of the changes evident in a given set ...... specifications inferred by spdiff in Linux are shown. We find that the inferred specifications concisely capture the actual collateral evolution performed in the examples....

  20. Tables of range and rate of energy loss of charged particles of energy 0.5 to 150 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, C; Boujot, J P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-07-01

    The accurate knowledge of ranges and rates of energy loss of charged particles is very important for physicists working with nuclear accelerators. The tabulations of Aron, Hoffmann, and Williams and later of Madey and Rich have proved extremely useful. However, recent experimental range measurements have indicated the need for a new tabulation of the range-energy relation. It was felt that a useful purpose would be served by performing the calculations for a large number of stopping materials distributed throughout the periodic table, including the materials most commonly used as targets, detectors, and entrance foils. (authors)
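Range tables of this kind are computed by integrating the reciprocal of the stopping power, R(E) = ∫₀ᴱ dE′/S(E′). A sketch of that quadrature, using a hypothetical power-law stopping power purely for illustration (the actual tabulations evaluate the Bethe formula for each material):

```python
def particle_range(energy_mev, stopping_power, n_steps=10000):
    """Numerically integrate R = ∫ dE / S(E) with the trapezoidal rule."""
    de = energy_mev / n_steps
    total = 0.0
    for i in range(n_steps):
        e0 = de * i + 1e-9  # offset avoids evaluating S at exactly zero
        e1 = e0 + de
        total += 0.5 * (1.0 / stopping_power(e0) + 1.0 / stopping_power(e1)) * de
    return total

# Hypothetical stopping power S(E) = 100 * E**-0.8 (MeV per cm) -- an assumed
# power law, not a real material; S decreases as the particle speeds up.
r = particle_range(10.0, lambda e: 100.0 * e ** -0.8)
```

For this power law the integral has the closed form E**1.8 / 180, so the numerical result can be checked directly against ~0.35 cm at 10 MeV.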

  1. Energy System Expectations for Nuclear in the 21. Century: A Plausible Range

    International Nuclear Information System (INIS)

    Langlois, Lucille M.; McDonald, Alan; Rogner, Hans-Holger; Vera, Ivan

    2002-01-01

    This paper outlines a range of scenarios describing what the world's energy system might look like in the middle of the century, and what nuclear energy's most profitable role might be. The starting point is the 40 non-greenhouse-gas-mitigation scenarios in the Special Report on Emissions Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC, 2000). Given their international authorship and comprehensive review by governments and scientific experts, the SRES scenarios are the state of the art in long-term energy scenarios. However, they do not present the underlying energy system structures in enough detail for specific energy technology and infrastructure analyses. This paper therefore describes initial steps within INPRO (The International Project on Innovative Nuclear Reactors and Fuel Cycles of the International Atomic Energy Agency) to translate the SRES results into a range of possible nuclear energy technology requirements for mid-century. The paper summarizes the four SRES scenarios that will be used in INPRO and the reasons for their selection. It provides illustrative examples of the sort of additional detail that is being developed about the overall energy system implied by each scenario, and about specific scenario features particularly relevant to nuclear energy. As recommended in SRES, the selected scenarios cover all four SRES 'story-line families'. The energy system translations being developed in INPRO are intended to indicate how energy services may be provided in mid-century and to delineate likely technology and infrastructure implications. They will indicate answers to questions like the following (the list is illustrative, not comprehensive):
    - What kind of nuclear power plants will best fit the mid-century energy system?
    - What energy forms and other products and services provided by nuclear reactors will best fit the mid-century energy system?
    - What would be their market shares?
    - How difficult will it be to site new nuclear

  2. Energy Impacts of Effective Range Hood Use for all U.S. Residential Cooking

    Energy Technology Data Exchange (ETDEWEB)

    Logue, Jennifer M; Singer, Brett

    2014-06-01

    Range hood use during residential cooking is essential to maintaining good indoor air quality. However, widespread use will impact the energy demand of the U.S. housing stock. This paper describes a modeling study to determine the site energy, source energy, and consumer costs of comprehensive range hood use. To estimate the energy impacts for all 113 million homes in the U.S., we extrapolated from the simulation of a representative weighted sample of 50,000 virtual homes developed from the 2009 Residential Energy Consumption Survey database. A physics-based simulation model that considered fan energy, the energy to condition additional incoming air, and the effect on home heating and cooling of exhausting the heat from cooking was applied to each home. Hoods performing at a level common to hoods currently in U.S. homes would require 19-33 TWh [69-120 PJ] of site energy and 31-53 TWh [110-190 PJ] of source energy, and would cost consumers $1.2-2.1 billion (U.S. $2010) annually across the U.S. housing stock. The average household would spend less than $15 annually. Reducing the required airflow, e.g. with designs that promote better pollutant capture, has more energy-saving potential, on average, than improving fan efficiency.
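The fan-energy term of such a housing-stock model can be sketched as power × usage scaled to the stock. The inputs below are assumptions for illustration only; the study's totals are larger because they also include conditioning of make-up air and heating/cooling interactions:

```python
# Hypothetical per-home inputs (assumed, not the study's parameters).
fan_power_w = 90.0        # assumed range hood fan draw while running
use_min_per_day = 40.0    # assumed daily operation during cooking
homes = 113e6             # U.S. housing stock size, per the abstract

# Annual fan site energy per home, then scaled to the full stock.
kwh_per_home = fan_power_w / 1000.0 * (use_min_per_day / 60.0) * 365.0
stock_twh = kwh_per_home * homes / 1e9
print(f"{kwh_per_home:.0f} kWh/home/yr, {stock_twh:.1f} TWh stock-wide")
```

Under these assumptions the fan alone contributes a few TWh per year, a minority of the 19-33 TWh site-energy total, which is consistent with the abstract's point that the conditioning of incoming air dominates.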

  3. Multimodel inference and adaptive management

    Science.gov (United States)

    Rehme, S.E.; Powell, L.A.; Allen, Craig R.

    2011-01-01

    Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
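The multimodel inference the review discusses typically weighs competing hypotheses via Akaike weights; roughly equal weights across models are one signature of the "weak inference" the authors describe. A minimal sketch of the standard calculation (the AIC scores below are hypothetical):

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC scores into Akaike model weights."""
    best = min(aics)
    # Relative likelihood of each model: exp(-delta_AIC / 2).
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Three competing models with hypothetical AIC scores.
weights = akaike_weights([100.0, 102.0, 110.0])
# The lowest-AIC model receives the largest weight; weights sum to 1.
```

When the top weights are close (e.g. 0.4 vs. 0.35), model selection uncertainty is substantial and, per the authors, should be carried into any management recommendation rather than ignored.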

  4. Advanced Range Safety System for High Energy Vehicles

    Science.gov (United States)

    Claxton, Jeffrey S.; Linton, Donald F.

    2002-01-01

    The advanced range safety system project is a collaboration between the National Aeronautics and Space Administration and the United States Air Force to develop systems that would reduce costs and schedule for safety approval for new classes of unmanned high-energy vehicles. The mission-planning feature for this system would yield flight profiles that satisfy the mission requirements for the user while providing an increased quality of risk assessment, enhancing public safety. By improving the speed and accuracy of predicting risks to the public, mission planners would be able to expand flight envelopes significantly. Once in place, this system is expected to offer the flexibility of handling real-time risk management for the high-energy capabilities of hypersonic vehicles including autonomous return-from-orbit vehicles and extended flight profiles over land. Users of this system would include mission planners of Space Launch Initiative vehicles, space planes, and other high-energy vehicles. The real-time features of the system could make extended flight of a malfunctioning vehicle possible, in lieu of an immediate terminate decision. With this improved capability, the user would have more time for anomaly resolution and potential recovery of a malfunctioning vehicle.

  5. Daily energy expenditure in free-ranging Gopher Tortoises (Gopherus polyphemus)

    NARCIS (Netherlands)

    Jodice, PGR; Epperson, DM; Visser, GH

    2006-01-01

    Studies of ecological energetics in chelonians are rare. Here, we report the first measurements of daily energy expenditure (DEE) and water influx rates (WIRs) in free-ranging adult Gopher Tortoises (Gopherus polyphemus). We used the doubly labeled water (DLW) method to measure DEE in six adult

  6. AD-LIBS: inferring ancestry across hybrid genomes using low-coverage sequence data.

    Science.gov (United States)

    Schaefer, Nathan K; Shapiro, Beth; Green, Richard E

    2017-04-04

    Inferring the ancestry of each region of admixed individuals' genomes is useful in studies ranging from disease gene mapping to speciation genetics. Current methods require high-coverage genotype data and phased reference panels, and are therefore inappropriate for many data sets. We present a software application, AD-LIBS, that uses a hidden Markov model to infer ancestry across hybrid genomes without requiring variant calling or phasing. This approach is useful for non-model organisms and in cases of low-coverage data, such as ancient DNA. We demonstrate the utility of AD-LIBS with synthetic data. We then use AD-LIBS to infer ancestry in two published data sets: European human genomes with Neanderthal ancestry and brown bear genomes with polar bear ancestry. AD-LIBS correctly infers 87-91% of ancestry in simulations and produces ancestry maps that agree with published results and global ancestry estimates in humans. In brown bears, we find more polar bear ancestry than has been published previously, using both AD-LIBS and an existing software application for local ancestry inference, HAPMIX. We validate AD-LIBS polar bear ancestry maps by recovering a geographic signal within bears that mirrors what is seen in SNP data. Finally, we demonstrate that AD-LIBS is more effective than HAPMIX at inferring ancestry when preexisting phased reference data are unavailable and genomes are sequenced to low coverage. AD-LIBS is an effective tool for ancestry inference that can be used even when few individuals are available for comparison or when genomes are sequenced to low coverage. AD-LIBS is therefore likely to be useful in studies of non-model or ancient organisms that lack large amounts of genomic DNA. AD-LIBS can therefore expand the range of studies in which admixture mapping is a viable tool.
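The hidden Markov machinery underlying local-ancestry inference of this kind can be sketched as a two-state Viterbi decoder over windowed likelihood scores. The following is an illustrative toy with made-up scores and transition rates, not AD-LIBS's actual model (its emission model and window scoring differ):

```python
import math

LOG_STAY = math.log(0.98)    # assumed probability of keeping the same ancestry
LOG_SWITCH = math.log(0.02)  # assumed probability of an ancestry switch

def viterbi(obs_loglik):
    """Decode the most likely two-state ancestry path over genomic windows.

    obs_loglik: list of (logP(window | state 0), logP(window | state 1)).
    """
    v = list(obs_loglik[0])  # log-score of the best path ending in each state
    backptrs = []
    for loglik in obs_loglik[1:]:
        ptr, nxt = [], []
        for s in (0, 1):
            stay = v[s] + LOG_STAY
            switch = v[1 - s] + LOG_SWITCH
            ptr.append(s if stay >= switch else 1 - s)
            nxt.append(max(stay, switch) + loglik[s])
        backptrs.append(ptr)
        v = nxt
    state = 0 if v[0] >= v[1] else 1
    path = [state]
    for ptr in reversed(backptrs):  # trace back through the stored pointers
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Windows 0-2 resemble ancestry 0, windows 3-4 ancestry 1 (toy scores).
path = viterbi([(-1, -4), (-1, -4), (-1, -4), (-4, -1), (-4, -1)])
# → [0, 0, 0, 1, 1]
```

The low switch probability is what smooths window-level noise into contiguous ancestry tracts, which is why such decoders tolerate low-coverage data.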

  7. Long-range energy transfer in self-assembled quantum dot-DNA cascades

    Science.gov (United States)

    Goodman, Samuel M.; Siu, Albert; Singh, Vivek; Nagpal, Prashant

    2015-11-01

    The size-dependent energy bandgaps of semiconductor nanocrystals or quantum dots (QDs) can be utilized in converting broadband incident radiation efficiently into electric current by cascade energy transfer (ET) between layers of different sized quantum dots, followed by charge dissociation and transport in the bottom layer. Self-assembling such cascade structures with angstrom-scale spatial precision is important for building realistic devices, and DNA-based QD self-assembly can provide an important alternative. Here we show long-range Dexter energy transfer in QD-DNA self-assembled single constructs and ensemble devices. Using photoluminescence, scanning tunneling spectroscopy, current-sensing AFM measurements in single QD-DNA cascade constructs, and temperature-dependent ensemble devices using TiO2 nanotubes, we show that Dexter energy transfer, likely mediated by the exciton-shelves formed in these QD-DNA self-assembled structures, can be used for efficient transport of energy across QD-DNA thin films.

  8. Utilitarian Moral Judgment Exclusively Coheres with Inference from Is to Ought.

    Science.gov (United States)

    Elqayam, Shira; Wilkinson, Meredith R; Thompson, Valerie A; Over, David E; Evans, Jonathan St B T

    2017-01-01

    Faced with moral choice, people either judge according to pre-existing obligations (deontological judgment), or by taking into account the consequences of their actions (utilitarian judgment). We propose that the latter coheres with a more general cognitive mechanism: deontic introduction, the tendency to infer normative ('deontic') conclusions from descriptive premises (is-ought inference). Participants were presented with vignettes that allowed either deontological or utilitarian choice, and asked to draw a range of deontic conclusions, as well as judge the overall moral rightness of each choice separately. We predicted and found a selective defeasibility pattern, in which manipulations that suppressed deontic introduction also suppressed utilitarian moral judgment, but had little effect on deontological moral judgment. Thus, deontic introduction coheres with utilitarian moral judgment almost exclusively. We suggest a family of norm-generating informal inferences, in which normative conclusions are drawn from descriptive (although value-laden) premises. This family includes deontic introduction and utilitarian moral judgment as well as other informal inferences. We conclude with a call for greater integration of research in moral judgment and research into deontic reasoning and informal inference.

  9. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    Science.gov (United States)

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  10. Inference rule and problem solving

    Energy Technology Data Exchange (ETDEWEB)

    Goto, S

    1982-04-01

    Intelligent information processing signifies the opportunity of having man's intellectual activity executed on the computer, in which inference, in place of ordinary calculation, is used as the basic operational mechanism for such information processing. Many inference rules are derived from syllogisms in formal logic. The problem of programming this inference function is referred to as problem solving. Although inference and problem solving are logically in close relation, the calculational ability of current computers remains at a low level for inference. For clarifying the relation between inference and computers, nonmonotonic logic has been considered. The paper deals with the above topics. 16 references.

  11. Thermodynamics of statistical inference by cells.

    Science.gov (United States)

    Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj

    2014-10-03

    The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.

  12. The Impact of Transitive Inference Operations on Mathematics ...

    African Journals Online (AJOL)

    This study examined the extent to which operations of transitive inference tasks have affected the mathematics problem solving abilities of pre-primary school children. Four research hypotheses were tested at 0.05 level of significance using 400 nursery school children whose ages ranged between 4.5 and 5.5 years ...

  13. Low-energy modes and medium-range correlated motions in Pd79Ge21 alloy glass

    International Nuclear Information System (INIS)

    Shibata, Kaoru; Mizuseki, Hiroshi; Suzuki, Kenji

    1993-01-01

    It is well known that in glass materials there are excess modes over the sound wave in the low-energy region below about 10 meV, which do not exist in the corresponding crystalline materials. We examined the low-energy modes in a Pd79Ge21 alloy glass by means of inelastic neutron scattering. Measurements were performed on the crystal-analyzer-type time-of-flight spectrometer LAM-40, installed at KENS, with PG(002) and Ge(311) analyzer mirrors. The dynamic structure factor S(Q,ω) was obtained over the wide momentum range from 0.5 to 5.2 Å⁻¹. The measured S(Q,ω) has almost the same momentum (Q) dependence at each energy (ħω) in the energy range from 2.0 to 8.0 meV. In the energy region below 3 meV, we found a small shoulder peak at Q = 1.7 Å⁻¹ in the momentum dependence of S(Q,ω), corresponding to a prepeak in S(Q). It is therefore concluded that the low-energy modes in Pd79Ge21 alloy glass arise mainly from medium-range correlated motions in clusters consisting of a few chemical short-range structure units of Pd6Ge trigonal prisms. (author)

  14. Knowledge and inference

    CERN Document Server

    Nagao, Makoto

    1990-01-01

    Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in the artificial intellig

  15. Darwin: Dose monitoring system applicable to various radiations with wide energy ranges

    International Nuclear Information System (INIS)

    Sato, T.; Satoh, D.; Endo, A.; Yamaguchi, Y.

    2007-01-01

    A new radiation dose monitor, designated DARWIN (Dose monitoring system Applicable to various Radiations with Wide energy ranges), has been developed for real-time monitoring of doses in workspaces and the surrounding environments of high-energy accelerator facilities. DARWIN is composed of a phoswich-type scintillation detector, which consists of the liquid organic scintillator BC501A coupled with ZnS(Ag) scintillation sheets doped with ⁶Li, and a data acquisition system based on a digital storage oscilloscope. DARWIN has the following features: (1) it is capable of monitoring doses from neutrons, photons and muons with energies from thermal energy to 1 GeV, 150 keV to 100 MeV, and 1 MeV to 100 GeV, respectively; (2) it is highly sensitive and precise; and (3) it is easy to operate, with a simple graphical user interface. The performance of DARWIN was examined experimentally in several radiation fields. The results indicated the accuracy and wide response range of DARWIN for measuring dose rates from neutrons, photons and muons over wide energy ranges. It was also found that DARWIN enables monitoring of small fluctuations of neutron dose rates near the background level because of its high sensitivity. With these properties, DARWIN will be able to play a very important role in improving radiation safety in high-energy accelerator facilities. (authors)

  16. Geometric statistical inference

    International Nuclear Information System (INIS)

    Periwal, Vipul

    1999-01-01

    A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined

  17. Bayesian inference model for fatigue life of laminated composites

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Kiureghian, Armen Der; Berggreen, Christian

    2016-01-01

    A probabilistic model for estimating the fatigue life of laminated composite plates is developed. The model is based on lamina-level input data, making it possible to predict fatigue properties for a wide range of laminate configurations. Model parameters are estimated by Bayesian inference. The ...

  18. Performance tests of a special ionization chamber for X-rays in mammography energy range

    Energy Technology Data Exchange (ETDEWEB)

    Silva, J.O., E-mail: jonas.silva@ufg.br [Universidade Federal de Goiás (UFG), Goiânia (Brazil). Instituto de Física; Caldas, L.V.E. [Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP), São Paulo, SP (Brazil). Centro de Metrologia das Radiações

    2017-07-01

    A special homemade ionization chamber was developed for dosimetry in the mammography energy range. The chamber has a total sensitive volume of 6 cm³, with a PMMA body and a graphite-coated collecting electrode. Performance tests, including saturation, ion collection efficiency, linearity of the chamber response versus air kerma rate, and energy dependence, were performed. The results obtained with this special homemade ionization chamber are within the limits stated in international recommendations. The chamber can be used in quality control programs over the mammography energy range. All measurements were carried out at the Calibration Laboratory of IPEN. (author)

  19. Goal inferences about robot behavior : goal inferences and human response behaviors

    NARCIS (Netherlands)

    Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.

    2014-01-01

    This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors cause different response behavior from people.

  20. Utilitarian Moral Judgment Exclusively Coheres with Inference from Is to Ought

    Directory of Open Access Journals (Sweden)

    Shira Elqayam

    2017-06-01

    Faced with moral choice, people either judge according to pre-existing obligations (deontological judgment), or by taking into account the consequences of their actions (utilitarian judgment). We propose that the latter coheres with a more general cognitive mechanism: deontic introduction, the tendency to infer normative ('deontic') conclusions from descriptive premises (is-ought inference). Participants were presented with vignettes that allowed either deontological or utilitarian choice, and asked to draw a range of deontic conclusions, as well as judge the overall moral rightness of each choice separately. We predicted and found a selective defeasibility pattern, in which manipulations that suppressed deontic introduction also suppressed utilitarian moral judgment, but had little effect on deontological moral judgment. Thus, deontic introduction coheres with utilitarian moral judgment almost exclusively. We suggest a family of norm-generating informal inferences, in which normative conclusions are drawn from descriptive (although value-laden) premises. This family includes deontic introduction and utilitarian moral judgment as well as other informal inferences. We conclude with a call for greater integration of research in moral judgment and research into deontic reasoning and informal inference.

  1. Phylogenetic Inference of HIV Transmission Clusters

    Directory of Open Access Journals (Sweden)

    Vlad Novitsky

    2017-10-01

    Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in the interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, it could both inform public-health strategies and be clinically relevant (e.g., drug-resistance testing).

  2. Entropic Inference

    OpenAIRE

    Caticha, Ariel

    2010-01-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEn...

  3. The NIFTY way of Bayesian signal inference

    International Nuclear Information System (INIS)

    Selig, Marco

    2014-01-01

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  4. The NIFTy way of Bayesian signal inference

    Science.gov (United States)

    Selig, Marco

    2014-12-01

    We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
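    The grid-free signal inference NIFTy supports can be illustrated, in miniature, by the classic Wiener filter of information field theory. The sketch below is a generic numpy illustration with assumed toy covariances, not NIFTy's actual API: a smooth signal is drawn from a Gaussian prior, measured with noise, and reconstructed via the posterior mean.

```python
import numpy as np

# Toy Wiener-filter reconstruction in the spirit of information field theory.
# Generic numpy sketch with assumed covariances; not NIFTy's API.
rng = np.random.default_rng(1)
npix = 64                      # illustrative 1D grid; the algebra is grid-agnostic
x = np.arange(npix)

# Prior signal covariance S: smooth, squared-exponential correlations
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2) + 1e-6 * np.eye(npix)
N = 0.1 * np.eye(npix)         # noise covariance
R = np.eye(npix)               # instrument response (direct noisy measurement)

# Simulate data d = R s + n with s ~ N(0, S), n ~ N(0, N)
s = rng.multivariate_normal(np.zeros(npix), S)
d = R @ s + rng.multivariate_normal(np.zeros(npix), N)

# Wiener-filter posterior mean: m = S R^T (R S R^T + N)^{-1} d
m = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, d)

mse_data = np.mean((d - s) ** 2)   # error of the raw data
mse_rec = np.mean((m - s) ** 2)    # error of the reconstruction
```

    Because the prior favors smooth signals, the posterior mean m lies closer to the true signal than the raw data; the same matrix algebra carries over unchanged to 2D or spherical grids, which is the abstraction the package formalizes.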

  5. CO-ANALYSIS OF SOLAR MICROWAVE AND HARD X-RAY SPECTRAL EVOLUTIONS. I. IN TWO FREQUENCY OR ENERGY RANGES

    International Nuclear Information System (INIS)

    Song Qiwu; Huang Guangli; Nakajima, Hiroshi

    2011-01-01

    Solar microwave and hard X-ray spectral evolutions are co-analyzed for the 2000 June 10 and 2002 April 10 flares, which were observed simultaneously by the Owens Valley Solar Array in the microwave band and by Yohkoh/Hard X-ray Telescope or RHESSI in the hard X-ray band, with multiple subpeaks in their light curves. The microwave and hard X-ray spectra are fitted by a power law in two frequency ranges of the optically thin part and in two photon energy ranges, respectively. Similar to an earlier event in Shao and Huang, the well-known soft-hard-soft pattern of the lower energy range changed to the hard-soft-hard (HSH) pattern of the higher energy range during the spectral evolution of each subpeak in both hard X-ray flares. This energy dependence is actually supported by a positive correlation between the overall light curves and spectral evolution in the lower energy range, while it becomes an anti-correlation in the higher energy range. Regarding microwave data, the HSH pattern appears in the spectral evolution of each subpeak in the lower frequency range, which is somewhat similar to Huang and Nakajima. However, it returns back to the well-known pattern of soft-hard-harder for the overall spectral evolution in the higher frequency range of both events. This frequency dependence is confirmed by an anti-correlation between the overall light curves and spectral evolution in the lower frequency range, but it becomes a positive correlation in the higher frequency range. The possible mechanisms are discussed, respectively, for reasons why hard X-ray and microwave spectral evolutions have different patterns in different energy and frequency intervals.

  6. Active Inference and Learning in the Cerebellum.

    Science.gov (United States)

    Friston, Karl; Herreros, Ivan

    2016-09-01

    This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.

  7. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    Science.gov (United States)

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
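    How a purely error-based rule can approach optimal probabilistic inference can be sketched in a hypothetical linear cue-combination setup (not one of the paper's nine tasks or its networks): a linear readout trained with the delta rule converges toward the precision-weighted combination that Bayesian inference prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cue-combination task: two noisy observations of a latent
# stimulus s, with cue 1 more reliable than cue 2.
tau, sig1, sig2 = 3.0, 1.0, 2.0
n = 20000
s = rng.normal(0.0, tau, n)
x1 = s + rng.normal(0.0, sig1, n)
x2 = s + rng.normal(0.0, sig2, n)

# Error-based (delta-rule) learning of a linear readout
w = np.zeros(2)
lr = 1e-4
for xi1, xi2, si in zip(x1, x2, s):
    err = si - (w[0] * xi1 + w[1] * xi2)
    w += lr * err * np.array([xi1, xi2])

# Optimal linear (Bayesian) weights from the generative model:
# w* = Cov(x, x)^{-1} Cov(x, s)
Sxx = np.array([[tau**2 + sig1**2, tau**2],
                [tau**2, tau**2 + sig2**2]])
Sxs = np.array([tau**2, tau**2])
w_opt = np.linalg.solve(Sxx, Sxs)   # [36/49, 9/49] for these parameters
```

    After training, w lies close to w_opt: the learned readout weights each cue in proportion to its reliability, even though no explicitly probabilistic quantity appears anywhere in the learning rule.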

  8. Inferring time derivatives including cell growth rates using Gaussian processes

    Science.gov (United States)

    Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta

    2016-12-01

    Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
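    The core idea can be sketched as follows, assuming a Gaussian process with an RBF kernel whose posterior mean is differentiated analytically. This is a minimal illustration, not the authors' implementation: hyperparameters are hand-picked rather than optimized, and the error estimation the paper emphasizes is omitted.

```python
import numpy as np

def rbf(t1, t2, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix k(t1_i, t2_j)."""
    d = t1[:, None] - t2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_mean_and_derivative(t, y, t_star, ell=1.0, sf=1.0, noise=0.1):
    """Posterior mean of f(t*) and f'(t*) for a GP fit to (t, y)."""
    K = rbf(t, t, ell, sf) + noise**2 * np.eye(len(t))
    alpha = np.linalg.solve(K, y)
    ks = rbf(t_star, t, ell, sf)
    mean = ks @ alpha
    # The derivative of the RBF cross-covariance w.r.t. t* is analytic,
    # so the derivative's posterior mean needs no finite differencing:
    dks = -((t_star[:, None] - t[None, :]) / ell**2) * ks
    dmean = dks @ alpha
    return mean, dmean

# Usage: exponential growth y(t) = exp(0.5 t), whose true derivative is 0.5 y(t)
t = np.linspace(0.0, 4.0, 80)
y = np.exp(0.5 * t)
mean, dmean = gp_mean_and_derivative(t, y, t, ell=1.0, noise=1e-2)
```

    In the interior of the time series, dmean tracks the true derivative 0.5 y(t), so the inferred specific growth rate dmean/mean recovers the growth-rate constant without ever differentiating the noisy data directly.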

  9. Dietary energy estimate inferred from fruit preferences of Cynopterus sphinx (Mammalia: Chiroptera: Pteropodidae in a flight cage in tropical China

    Directory of Open Access Journals (Sweden)

    A. Mukherjee

    2010-06-01

    Full Text Available From a conservation standpoint, inferences about dietary intake are much more robust when placed within a demographic, temporal and nutritional context. We investigated the dietary cornerstones of fruit preference and the dietary energy gained in the Short-nosed Fruit Bat Cynopterus sphinx. Feeding trials were conducted with 15 wild-caught bats kept in a large flight cage in Xishuangbanna, Yunnan, China, over nine weeks. The goal was to estimate the amount of food required for the sustenance of C. sphinx in captivity and calculate the food amount in terms of energy. Of the fruits (apple, banana, pear, papaya and guava) offered, apple (89%) and banana (93%) were found to be preferred. The relative consumption of fruit species tended to be positively correlated with the energy value per gram of fruit. Banana (93%) was the most preferred and papaya (47%) the least preferred of the offered fruits. The results suggest that the minimum recommended dietary intake is 214-267 kJ per day for an individual of C. sphinx in captivity with conditions allowing flight. From this, we can assume that the same energy requirements may represent the minimum intake for bats in the wild. Both body mass and food consumption decreased significantly when bats were kept in a small cage.

  10. Energy of N two-dimensional bosons with zero-range interactions

    Science.gov (United States)

    Bazak, B.; Petrov, D. S.

    2018-02-01

    We derive an integral equation describing N two-dimensional bosons with zero-range interactions and solve it for the ground-state energy B_N by applying a stochastic diffusion Monte Carlo scheme for up to 26 particles. We confirm and go beyond the scaling B_N ∝ 8.567^N predicted by Hammer and Son (2004 Phys. Rev. Lett. 93 250408) in the large-N limit.

  11. Probabilistic inductive inference: a survey

    OpenAIRE

    Ambainis, Andris

    2001-01-01

    Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.

  12. LAIT: a local ancestry inference toolkit.

    Science.gov (United States)

    Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei

    2017-09-06

    Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing packages require specifically formatted input files and generate output files of various types, which is inconvenient in practice. We developed a tool set, Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats as well as standardize and summarize inference results for four popular local ancestry inference packages: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that LAIT makes it convenient to run multiple local ancestry inference packages. In addition, we evaluated the performance of the supported packages, focusing mainly on inference accuracy and computational resources used. We provide a toolkit to facilitate the use of local ancestry inference software, especially for users with limited bioinformatics background.

  13. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian Statistical Inference is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.

  14. SU-G-TeP1-02: Analytical Stopping Power and Range Parameterization for Therapeutic Energy Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Donahue, W [Louisiana State University, Baton Rouge, LA (United States)]; Newhauser, W [Louisiana State University, Baton Rouge, LA (United States); Mary Bird Perkins Cancer Center, Baton Rouge, LA (United States)]; Ziegler, J F [United States Naval Academy, Annapolis, MD (United States)]

    2016-06-15

    Purpose: To develop a simple, analytic parameterization of stopping power and range, which covers a wide energy interval and is applicable to many species of projectile ions and target materials, with less than 15% disagreement in linear stopping power and 1 mm in range. Methods: The new parameterization was required to be analytically integrable from stopping power to range, and continuous across the range interval of 1 µm to 50 cm. The model parameters were determined from stopping power and range data for hydrogen, carbon, iron, and uranium ions incident on water, carbon, aluminum, lead and copper. Stopping power and range data were taken from SRIM. A stochastic minimization algorithm was used to find model parameters, with 10 data points per energy decade. Additionally, fitting was performed with 2 and 26 data points per energy decade to test the model’s robustness to sparse data. Results: 6 free parameters were sufficient to cover the therapeutic energy range for each projectile ion species (e.g. 1 keV – 300 MeV for protons). The model agrees with stopping power and range data well, with less than 9% relative stopping power difference and 0.5 mm difference in range. As few as 4 bins per decade were required to achieve fitting results comparable to those from the full data set. Conclusion: This study successfully demonstrated that a simple analytic function can be used to fit the entire energy interval of therapeutic ion beams of hydrogen and heavier elements. Advantages of this model were the small number (6) of free parameters, and that the model calculates both stopping power and range. Applications of this model include GPU-based dose calculation algorithms and Monte Carlo simulations, where available memory is limited. This work was supported in part by a research agreement between United States Naval Academy and Louisiana State University: Contract No N00189-13-P-0786. In addition, this work was accepted for presentation at the American Nuclear Society Annual Meeting.

  15. SU-G-TeP1-02: Analytical Stopping Power and Range Parameterization for Therapeutic Energy Intervals

    International Nuclear Information System (INIS)

    Donahue, W; Newhauser, W; Ziegler, J F

    2016-01-01

    Purpose: To develop a simple, analytic parameterization of stopping power and range, which covers a wide energy interval and is applicable to many species of projectile ions and target materials, with less than 15% disagreement in linear stopping power and 1 mm in range. Methods: The new parameterization was required to be analytically integrable from stopping power to range, and continuous across the range interval of 1 µm to 50 cm. The model parameters were determined from stopping power and range data for hydrogen, carbon, iron, and uranium ions incident on water, carbon, aluminum, lead and copper. Stopping power and range data were taken from SRIM. A stochastic minimization algorithm was used to find model parameters, with 10 data points per energy decade. Additionally, fitting was performed with 2 and 26 data points per energy decade to test the model’s robustness to sparse data. Results: 6 free parameters were sufficient to cover the therapeutic energy range for each projectile ion species (e.g. 1 keV – 300 MeV for protons). The model agrees with stopping power and range data well, with less than 9% relative stopping power difference and 0.5 mm difference in range. As few as 4 bins per decade were required to achieve fitting results comparable to those from the full data set. Conclusion: This study successfully demonstrated that a simple analytic function can be used to fit the entire energy interval of therapeutic ion beams of hydrogen and heavier elements. Advantages of this model were the small number (6) of free parameters, and that the model calculates both stopping power and range. Applications of this model include GPU-based dose calculation algorithms and Monte Carlo simulations, where available memory is limited. This work was supported in part by a research agreement between United States Naval Academy and Louisiana State University: Contract No N00189-13-P-0786. In addition, this work was accepted for presentation at the American Nuclear Society Annual Meeting.
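    The abstract does not give the functional form of the six-parameter model, so the sketch below illustrates the general idea with the simpler, well-known Bragg-Kleeman rule R(E) = αE^p, which is likewise analytically integrable between stopping power and range. The constants used are assumed, order-of-magnitude values for protons in water, not the paper's fitted parameters.

```python
import numpy as np

# Hypothetical sketch (not the authors' 6-parameter model): the Bragg-Kleeman
# rule R(E) = alpha * E**p links range and stopping power analytically, since
# S(E) = dE/dx = dE/dR = E**(1 - p) / (alpha * p).

def fit_bragg_kleeman(E, R):
    """Fit alpha and p by linear regression in log-log space."""
    p, log_alpha = np.polyfit(np.log(E), np.log(R), 1)
    return np.exp(log_alpha), p

def stopping_power(E, alpha, p):
    """Linear stopping power implied by the fitted range-energy relation."""
    return E ** (1.0 - p) / (alpha * p)

# Synthetic proton-in-water ranges generated from assumed constants
# (alpha ~ 0.0022 cm/MeV^p, p ~ 1.77; order-of-magnitude only)
E = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0])   # MeV
R = 0.0022 * E ** 1.77                                   # cm
alpha, p = fit_bragg_kleeman(E, R)
```

    Because the same pair (alpha, p) yields both R(E) and S(E) in closed form, only two numbers per ion-target combination need to be stored, which is the memory argument the abstract makes for GPU and Monte Carlo applications.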

  16. Deuteron stripping on beryllium target in the 100-2300 MeV energy range

    International Nuclear Information System (INIS)

    Lecolley, J.F.; Varignon, C.; Durand, D.; Le Brun, C.; Lecolley, F.R.; Lefebvres, F.; Louvel, M.; Thun, J.; Borne, F.; Martinez, E.; Menard, S.; Pras, P.; Boudard, A.; Duchazeaubeneix, J.C.; Durand, J.M.; Frehaut, J.; Hanappe, F.; Ledoux, X.; Legrain, R.; Leray, S.; Milleret, G.; Patin, Y.; Stuttge, L.; Terrien, Y.

    1999-01-01

    Cross sections for stripping and dissociation of deuterons interacting with Be targets in the 100-2300 MeV energy range have been measured. Comparisons with model calculations suggest a dominant contribution of the stripping process. It is also shown that the deuteron break-up cross section exhibits the same energy dependence as the nucleon-nucleon cross section. (orig.)

  17. Hierarchical Markov blankets and adaptive active inference. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    Science.gov (United States)

    Kirchhoff, Michael

    2018-03-01

    Ramstead MJD, Badcock PB, Friston KJ. Answering Schrödinger's question: A free-energy formulation. Phys Life Rev 2018. https://doi.org/10.1016/j.plrev.2017.09.001 [this issue] motivate a multiscale characterisation of living systems in terms of hierarchically structured Markov blankets - a view of living systems as comprised of Markov blankets of Markov blankets [1-4]. It is effectively a treatment of what life is and how it is realised, cast in terms of how Markov blankets of living systems self-organise via active inference - a corollary of the free energy principle [5-7].

  18. Carbon and energy balances for a range of biofuels options

    Energy Technology Data Exchange (ETDEWEB)

    Elsayed, M.A.; Matthews, R.; Mortimer, N.D.

    2003-03-01

    This is the final report of a project to produce a set of baseline energy and carbon balances for a range of electricity, heat and transport fuel production systems based on biomass feedstocks. A list of 18 important biofuel technologies in the UK was selected for study of their energy and carbon balances in a consistent approach. Existing studies on these biofuel options were reviewed and their main features identified in terms of energy input, greenhouse gas emissions (carbon dioxide, methane, nitrous oxide and total), transparency and relevance. Flow charts were produced to represent the key stages of the production of biomass and its conversion to biofuels. Outputs from the study included primary energy input per delivered energy output, carbon dioxide outputs per delivered energy output, methane output per delivered energy output, nitrous oxide output per delivered energy output and total greenhouse gas requirements. The net calorific value of the biofuel is given where relevant. Biofuels studied included: biodiesel from oilseed rape and recycled vegetable oil; combined heat and power (CHP) by combustion of wood chip from forestry residues; CHP by gasification of wood chip from short rotation coppice; electricity from the combustion of miscanthus, straw, wood chip from forestry residues and wood chip from short rotation coppice; electricity from gasification of wood chip from forestry residues and wood chip from short rotation coppice; electricity by pyrolysis of wood chip from forestry residues and wood chip from short rotation coppice; ethanol from lignocellulosics, sugar beet and wheat; heat (small scale) from combustion of wood chip from forestry residues and wood chip from short rotation coppice; and rapeseed oil from oilseed rape.

  19. Experimental investigation of dd reaction in range of ultralow energies using Z-pinch

    International Nuclear Information System (INIS)

    Bystritskij, V.M.; Grebenyuk, V.M.; Parzhitskij, S.S.

    1998-01-01

    Results of the experiments to measure the dd reaction cross section in the range of deuteron collision energies from 0.1 keV to 1.5 keV using the Z-pinch technique are presented. The experiment was performed at the Pulsed Ion Beam Accelerator of the High-Current Electronics Institute in Tomsk. The dd fusion neutrons were registered by scintillation detectors using the time-of-flight method and by BF3 detectors of thermal neutrons. At the 90% confidence level, the upper limits of the neutron-producing dd reaction cross sections are obtained for average deuteron collision energies of 0.11, 0.34, 0.37 and 1.46 keV. The results demonstrate that high-intensity pulsed accelerators with a generator current of 2-3 MA allow the dd reaction cross sections to be measured in the range of deuteron collision energies from 0.8 keV to 3 keV.

  20. Passively-switched energy harvester for increased operational range

    International Nuclear Information System (INIS)

    Liu, Tian; Livermore, Carol; Pierre, Ryan St

    2014-01-01

    This paper presents modeling and experimental validation of a new type of vibrational energy harvester that passively switches between two dynamical modes of operation to expand the range of driving frequencies and accelerations over which the harvester effectively extracts power. In both modes, a driving beam with a low resonant frequency couples into ambient vibrations and transfers their energy to a generating beam that has a higher resonant frequency. The generating beam converts the mechanical power into electrical power. In coupled-motion mode, the driving beam bounces off the generating beam. In plucked mode, the driving beam deflects the generating beam until the driving beam passes from above the generating beam to below it or vice versa. Analytical system models are implemented numerically in the time domain for driving frequencies of 3 Hz to 27 Hz and accelerations from 0.1 g to 2.6 g, and both system dynamics and output power are predicted. A corresponding switched-dynamics harvester is tested experimentally, and its voltage, power, and dynamics are recorded. In both models and experiments, coupled-motion harvesting is observed at lower accelerations, whereas plucked harvesting and/or mixed mode harvesting are observed at higher accelerations. As expected, plucked harvesting outputs greater power than coupled-motion harvesting in both simulations and experiments. The predicted (1.8 mW) and measured (1.56 mW) maximum average power levels are similar under measured conditions at 0.5 g. When the system switches to dynamics that are characteristic of higher frequencies, the difference between predicted and measured power levels is more pronounced due to non-ideal mechanical interaction between the beams’ tips. Despite the beams’ non-ideal interactions, switched-dynamics operation increases the harvester’s operating range. (paper)

  1. Mocapy++ - a toolkit for inference and learning in dynamic Bayesian networks

    DEFF Research Database (Denmark)

    Paluszewski, Martin; Hamelryck, Thomas Wim

    2010-01-01

    Background: Mocapy++ is a toolkit for parameter learning and inference in dynamic Bayesian networks (DBNs). It supports a wide range of DBN architectures and probability distributions, including distributions from directional statistics (the statistics of angles, directions and orientations...

  2. INFERENCE BUILDING BLOCKS

    Science.gov (United States)

    2018-02-15

    ...expressed a variety of inference techniques on discrete and continuous distributions: exact inference, importance sampling, Metropolis-Hastings (MH), ... without redoing any math or rewriting any code. And although our main goal is composable reuse, our performance is also good because we can use ... control paths. • The Hakaru language can express mixtures of discrete and continuous distributions, but the current disintegration transformation ...

  3. Practical Bayesian Inference

    Science.gov (United States)

    Bailer-Jones, Coryn A. L.

    2017-04-01

    Preface; 1. Probability basics; 2. Estimation and uncertainty; 3. Statistical models and inference; 4. Linear models, least squares, and maximum likelihood; 5. Parameter estimation: single parameter; 6. Parameter estimation: multiple parameters; 7. Approximating distributions; 8. Monte Carlo methods for inference; 9. Parameter estimation: Markov chain Monte Carlo; 10. Frequentist hypothesis testing; 11. Model comparison; 12. Dealing with more complicated problems; References; Index.

  4. Causal learning and inference as a rational process: the new synthesis.

    Science.gov (United States)

    Holyoak, Keith J; Cheng, Patricia W

    2011-01-01

    Over the past decade, an active line of research within the field of human causal learning and inference has converged on a general representational framework: causal models integrated with Bayesian probabilistic inference. We describe this new synthesis, which views causal learning and inference as a fundamentally rational process, and review a sample of the empirical findings that support the causal framework over associative alternatives. Causal events, like all events in the distal world as opposed to our proximal perceptual input, are inherently unobservable. A central assumption of the causal approach is that humans (and potentially nonhuman animals) have been designed in such a way as to infer the most invariant causal relations for achieving their goals based on observed events. In contrast, the associative approach assumes that learners only acquire associations among important observed events, omitting the representation of the distal relations. By incorporating Bayesian inference over distributions of causal strength and causal structures, along with noisy-logical (i.e., causal) functions for integrating the influences of multiple causes on a single effect, human judgments about causal strength and structure can be predicted accurately for relatively simple causal structures. Dynamic models of learning based on the causal framework can explain patterns of acquisition observed with serial presentation of contingency data and are consistent with available neuroimaging data. The approach has been extended to a diverse range of inductive tasks, including category-based and analogical inferences.
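    The noisy-logical integration mentioned in the abstract can be illustrated by Cheng's generative "causal power", the strength estimate implied by a noisy-OR model. The contingency counts in the sketch below are hypothetical, chosen only to make the arithmetic visible.

```python
def causal_power(e_with_cause, n_with_cause, e_without_cause, n_without_cause):
    """Cheng's generative causal power, the strength estimate implied by a
    noisy-OR ("noisy-logical") model:
        q = (P(e|c) - P(e|~c)) / (1 - P(e|~c))
    """
    p_c = e_with_cause / n_with_cause        # P(effect | cause present)
    p_nc = e_without_cause / n_without_cause # P(effect | cause absent)
    return (p_c - p_nc) / (1.0 - p_nc)

# Hypothetical contingency data: the effect occurs in 18 of 20 trials with
# the candidate cause present and in 8 of 20 trials with it absent.
q = causal_power(18, 20, 8, 20)   # (0.9 - 0.4) / 0.6 = 0.833...
```

    Unlike the raw contingency ΔP = P(e|c) - P(e|~c), this estimate corrects for how often the effect is already produced by background causes, which is why it better matches human strength judgments in simple one-cause designs.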

  5. The confounding effect of population structure on bayesian skyline plot inferences of demographic history

    DEFF Research Database (Denmark)

    Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans

    2013-01-01

    Many coalescent-based methods aiming to infer the demographic history of populations assume a single, isolated and panmictic population (i.e. a Wright-Fisher model). While this assumption may be reasonable under many conditions, several recent studies have shown that the results can be misleading when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences ... the best scheme for inferring demographic change over a typical time scale. Analyses of data from a structured African buffalo population demonstrate how BSP results can be strengthened by simulations. We recommend that sample selection should be carefully considered in relation to population structure...

  6. Transport coefficients for the plasma thermal energy and empirical scaling "laws"

    International Nuclear Information System (INIS)

    Coppi, B.

    1989-01-01

    A set of transport coefficients has been identified for the electron and nuclei thermal energy of plasmas with temperatures in the multi-keV range, taking into account the available experimental information including the temperature spatial profiles and the inferred scaling "laws" for the measured energy replacement times. The specific form of these coefficients is suggested by the theory of a so-called "ubiquitous" mode that can be excited when a significant fraction of the electron population has magnetically trapped orbits. (author)

  7. Energy- and time-resolved detection of prompt gamma-rays for proton range verification.

    Science.gov (United States)

    Verburg, Joost M; Riley, Kent; Bortfeld, Thomas; Seco, Joao

    2013-10-21

    In this work, we present experimental results of a novel prompt gamma-ray detector for proton beam range verification. The detection system features an actively shielded cerium-doped lanthanum(III) bromide scintillator, coupled to a digital data acquisition system. The acquisition was synchronized to the cyclotron radio frequency to separate the prompt gamma-ray signals from the later-arriving neutron-induced background. We designed the detector to provide a high energy resolution and an effective reduction of background events, enabling discrete proton-induced prompt gamma lines to be resolved. Measuring discrete prompt gamma lines has several benefits for range verification. As the discrete energies correspond to specific nuclear transitions, the magnitudes of the different gamma lines have unique correlations with the proton energy and can be directly related to nuclear reaction cross sections. The quantification of discrete gamma lines also enables elemental analysis of tissue in the beam path, providing a better prediction of prompt gamma-ray yields. We present the results of experiments in which a water phantom was irradiated with proton pencil-beams in a clinical proton therapy gantry. A slit collimator was used to collimate the prompt gamma-rays, and measurements were performed at 27 positions along the path of proton beams with ranges of 9, 16 and 23 g cm⁻² in water. The magnitudes of discrete gamma lines at 4.44, 5.2 and 6.13 MeV were quantified. The prompt gamma lines were found to be clearly resolved in dimensions of energy and time, and had a reproducible correlation with the proton depth-dose curve. We conclude that the measurement of discrete prompt gamma-rays for in vivo range verification of clinical proton beams is feasible, and plan to further study methods and detector designs for clinical use.

  8. Investigation of the neutron-proton-interaction in the energy range from 20 to 50 MEV

    International Nuclear Information System (INIS)

    Wilczynski, J.

    1984-07-01

    In the framework of the investigation of the isospin-singlet part of the nucleon-nucleon interaction in the energy range below 100 MeV, two experiments were conducted, selected by sensitivity calculations. At the Karlsruhe polarized neutron facility POLKA, the analyzing powers A_y and A_yy of elastic n↑-p and n↑-p↑ scattering were measured in the energy range from 20 to 50 MeV. The results of this experiment are compared to older data. In the energy range from 20 to 50 MeV the new data were analyzed in phase shift analyses together with other selected data on the nucleon-nucleon system. The knowledge of the isospin-singlet phase shifts ¹P₁ and ³D₃ was improved by the new data. (orig./HSI) [de]

  9. Energy dependence of the zero-range DWBA normalization of the ⁵⁸Ni(³He,α)⁵⁷Ni reaction. [15 to 205 MeV, finite-range and nonlocality corrections]

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, J R; Zimmerman, W R; Kraushaar, J J [Colorado Univ., Boulder (USA). Dept. of Physics and Astrophysics

    1977-01-04

    Strong transitions in the ⁵⁸Ni(³He,α)⁵⁷Ni reaction were analyzed using both the zero-range and exact finite-range DWBA. Data considered covered a range of bombarding energies from 15 to 205 MeV. The zero-range DWBA described all data well when finite-range and non-locality corrections were included in the local energy approximation. Comparison of zero-range and exact finite-range calculations showed the local energy approximation correction to be very accurate over the entire energy region. Empirically determined D₀ values showed no energy dependence. A theoretical D₀ value, calculated using an α wave function which reproduced the measured α rms charge radius and the elastic electron scattering form factor, agreed well with the empirical values. Comparison was made between these values and D₀ values quoted previously in the literature.

  10. Fuzzy inference system for evaluating and improving nuclear power plant operating performance

    International Nuclear Information System (INIS)

    Guimaraes, Antonio Cesar F.; Lapa, Celso Marcelo Franklin

    2003-01-01

    This paper presents a fuzzy inference system (FIS) as an approach to estimating Nuclear Power Plant (NPP) performance indicators. The performance indicators for this study are the energy availability factor (EAF), the planned unavailability factor (PUF) and the unplanned unavailability factor (UUF). These indicators are obtained from a non-analytical combination of operational parameters, such as environmental impacts, industrial safety, radiological protection, safety indicators, scram rate, thermal efficiency, and fuel reliability. This approach uses the concept of a pure fuzzy logic system, in which the fuzzy rule base consists of a collection of fuzzy IF-THEN rules. The fuzzy inference engine uses these rules to determine a mapping from fuzzy sets in the input universe of discourse to fuzzy sets in the output universe of discourse based on fuzzy logic principles. The results demonstrate the potential of fuzzy inference to generate a knowledge base that correlates operational occurrences with NPP performance. The inference system enables sensitivity studies and predictions of future operating conditions, and may support corrective actions in the operation of the plant.
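A Mamdani-style fuzzy IF-THEN system of the kind described above can be sketched in a few lines. The membership functions, rule base, and parameter choices below are invented for illustration; they are not the ones used in the paper.

```python
# Toy Mamdani-style fuzzy inference: two operational parameters -> EAF estimate.
# All membership functions and rules here are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_eaf(scram_rate, thermal_eff):
    """Map scram rate (events/yr) and thermal efficiency (%) to an EAF in 0..100."""
    # Fuzzify the crisp inputs.
    scram_low  = tri(scram_rate, -1.0, 0.0, 2.0)
    scram_high = tri(scram_rate,  0.0, 2.0, 4.0)
    eff_low    = tri(thermal_eff, 20.0, 30.0, 34.0)
    eff_high   = tri(thermal_eff, 30.0, 34.0, 40.0)

    # Fuzzy IF-THEN rules (min for AND, max for OR).
    eaf_high = min(scram_low, eff_high)   # good operation -> high EAF
    eaf_low  = max(scram_high, eff_low)   # poor operation -> low EAF

    # Aggregate the clipped output sets (max) and defuzzify by centroid.
    num = den = 0.0
    for g in range(0, 101):
        mu = max(min(eaf_high, tri(g, 60, 90, 101)),
                 min(eaf_low,  tri(g, -1, 40, 70)))
        num += g * mu
        den += mu
    return num / den if den > 0 else 50.0

print(infer_eaf(0.5, 35.0))   # few scrams, good efficiency -> high EAF
print(infer_eaf(3.0, 25.0))   # many scrams, poor efficiency -> low EAF
```

A real FIS for this problem would fuzzify all seven parameters listed in the abstract; the two-input version only shows the mechanics of rule firing, aggregation, and centroid defuzzification.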

  11. The importance of advancing technology to America's energy goals

    International Nuclear Information System (INIS)

    Greene, D.L.; Boudreaux, P.R.; Dean, D.J.; Fulkerson, W.; Gaddis, A.L.; Graham, R.L.; Graves, R.L.; Hopson, J.L.; Hughes, P.; Lapsa, M.V.; Mason, T.E.; Standaert, R.F.; Wilbanks, T.J.; Zucker, A.

    2010-01-01

    A wide range of energy technologies appears to be needed for the United States to meet its energy goals. A method is developed that relates the uncertainty of technological progress in eleven technology areas to the achievement of CO₂ mitigation and reduced oil dependence. We conclude that to be confident of meeting both energy goals, each technology area must have a much better than 50/50 probability of success; that carbon capture and sequestration, biomass, battery electric or fuel cell vehicles, advanced fossil liquids, and energy efficiency technologies for buildings appear to be almost essential; and that the success of each one of the eleven technologies is important. These inferences are robust to moderate variations in assumptions.

  12. Logical inference and evaluation

    International Nuclear Information System (INIS)

    Perey, F.G.

    1981-01-01

    Most methodologies of evaluation currently used are based upon the theory of statistical inference. It is generally perceived that this theory is not capable of dealing satisfactorily with what are called systematic errors. Theories of logical inference should be capable of treating all of the information available, including that not involving frequency data. A theory of logical inference is presented as an extension of deductive logic via the concept of plausibility and the application of group theory. Some conclusions, based upon the application of this theory to evaluation of data, are also given

  13. Constructing high-accuracy intermolecular potential energy surface with multi-dimension Morse/Long-Range model

    Science.gov (United States)

    Zhai, Yu; Li, Hui; Le Roy, Robert J.

    2018-04-01

    Spectroscopically accurate Potential Energy Surfaces (PESs) are fundamental for explaining and predicting the infrared and microwave spectra of van der Waals (vdW) complexes, and the model used for the potential energy function is critically important for providing accurate, robust and portable analytical PESs. The Morse/Long-Range (MLR) model has proved to be one of the most general, flexible and accurate one-dimensional (1D) model potentials: it has physically meaningful parameters, is smooth and differentiable everywhere to all orders, and extrapolates sensibly at both long and short range. The Multi-Dimensional Morse/Long-Range (mdMLR) potential energy model described herein is based on that 1D MLR model, and has proved to be effective and accurate in constructing potentials for various types of vdW complexes. In this paper, we review the current status of development of the mdMLR model and its application to vdW complexes. The future of the mdMLR model is also discussed. This review can serve as a tutorial for the construction of an mdMLR PES.
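The 1D MLR form on which the mdMLR model is built can be sketched numerically. This is a minimal version with a single inverse-power long-range term u(r) = C6/r⁶ and a constant exponent "polynomial"; all parameter values are illustrative, not fitted spectroscopic constants.

```python
# Minimal 1D Morse/Long-Range (MLR) potential sketch (illustrative parameters).
import math

def mlr_potential(r, De=100.0, re=4.0, C6=1.0e4, p=5, beta0=0.5):
    """V(r), with V(re) = 0 at the minimum and V -> De - C6/r^6 at long range."""
    y_p = (r**p - re**p) / (r**p + re**p)        # reduced radial variable in [-1, 1]
    u = C6 / r**6                                 # long-range tail u(r)
    u_re = C6 / re**6
    beta_inf = math.log(2.0 * De / u_re)          # enforces the long-range limit
    beta = y_p * beta_inf + (1.0 - y_p) * beta0   # constrained exponent function
    return De * (1.0 - (u / u_re) * math.exp(-beta * y_p))**2

print(mlr_potential(4.0))    # 0.0: the minimum sits exactly at re by construction
```

The constraint beta_inf = ln(2De/u(re)) is what makes the model "extrapolate sensibly": expanding the square at large r gives V(r) ≈ De − u(r), i.e. the correct theoretically known long-range behavior, while small r produces a steep repulsive wall.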

  14. A Bootstrap Approach to Computing Uncertainty in Inferred Oil and Gas Reserve Estimates

    International Nuclear Information System (INIS)

    Attanasi, Emil D.; Coburn, Timothy C.

    2004-01-01

    This study develops confidence intervals for estimates of inferred oil and gas reserves based on bootstrap procedures. Inferred reserves are expected additions to proved reserves in previously discovered conventional oil and gas fields. Estimates of inferred reserves accounted for 65% of the total oil and 34% of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in US onshore and State offshore areas. When the same computational methods used in the 1995 Assessment are applied to more recent data, the 80-year (from 1997 through 2076) inferred reserve estimates for pre-1997 discoveries located in the lower 48 onshore and state offshore areas amounted to a total of 39.7 billion barrels of oil (BBO) and 293 trillion cubic feet (TCF) of gas. The 90% confidence interval about the oil estimate derived from the bootstrap approach is 22.4 BBO to 69.5 BBO. The comparable 90% confidence interval for the inferred gas reserve estimate is 217 TCF to 413 TCF. The 90% confidence interval describes the uncertainty that should be attached to the estimates. It also provides a basis for developing scenarios to explore the implications for energy policy analysis
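The percentile-bootstrap idea behind such confidence intervals is easy to sketch. The per-field data below are synthetic and the statistic is a plain sum; the paper's actual procedure for inferred reserves is considerably more involved.

```python
# Percentile-bootstrap confidence interval sketch (synthetic data, fixed seed).
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.10, seed=42):
    """Return the (alpha/2, 1 - alpha/2) percentile interval for stat(data)."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]   # sample with replacement
        reps.append(stat(resample))
    reps.sort()
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-field reserve additions (BBO); the 90% interval should
# bracket the point estimate computed from the full sample.
fields = [0.1, 0.4, 0.2, 1.3, 0.8, 0.3, 2.1, 0.5, 0.9, 0.2, 1.7, 0.6]
total = sum(fields)
lo, hi = bootstrap_ci(fields, sum)
print(lo, total, hi)
```

As in the study, the width of the resulting interval is a direct, assumption-light statement of the uncertainty that should be attached to the aggregate estimate.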

  15. Average fast neutron flux in three energy ranges in the Quinta assembly irradiated by two types of beams

    Directory of Open Access Journals (Sweden)

    Strugalska-Gola Elzbieta

    2017-01-01

    This work was performed within the international project “Energy plus Transmutation of Radioactive Wastes” (E&T-RAW) for investigations of energy production and transmutation of radioactive waste from the nuclear power industry. ⁸⁹Y (yttrium-89) samples were located in the Quinta assembly in order to measure the average high-energy neutron flux density in three different energy ranges, using deuteron and proton beams from the Dubna accelerators. Our analysis showed that the neutron flux density for the neutron energy range 20.8-32.7 MeV is higher than for the range 11.5-20.8 MeV, both for protons with an energy of 0.66 GeV and for deuterons with an energy of 2 GeV, while for deuteron beams of 4 and 6 GeV we did not observe this.

  16. Evaluation of energy requirements for all-electric range of plug-in hybrid electric two-wheeler

    International Nuclear Information System (INIS)

    Amjad, Shaik; Rudramoorthy, R.; Neelakrishnan, S.; Sri Raja Varman, K.; Arjunan, T.V.

    2011-01-01

    Recently, plug-in hybrid electric vehicles (PHEVs) have emerged as one of the promising alternatives for improving the sustainability of transportation energy and air quality, especially in urban areas. The all-electric range in a PHEV design plays a significant role in the sizing and cost of the battery pack. This paper presents the evaluation of battery energy and power requirements for a plug-in hybrid electric two-wheeler for different all-electric ranges. An analytical vehicle model and a MATLAB simulation analysis are discussed. The MATLAB simulation results estimate the impact of the driving cycle and all-electric range on the energy capacity, additional mass and initial cost of lead-acid, nickel-metal hydride and lithium-ion batteries. This paper also examines the influence of cycle life on the annual cost of the battery pack and recommends a suitable battery pack for implementation in plug-in hybrid electric two-wheelers. -- Research highlights: → Evaluates the battery energy and power requirements for a plug-in hybrid electric two-wheeler. → Simulation results reveal that the IDC demands more energy and battery cost than the ECE R40 cycle. → If cycle life is considered, the annual cost of a Ni-MH battery pack is lower than that of lead-acid and Li-ion packs.
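The link between all-electric range and pack energy can be illustrated with a back-of-envelope road-load model. Every vehicle parameter below (mass, drag, efficiencies, steady speed) is a made-up placeholder, not a value from the paper's MATLAB model, and a real sizing would integrate over the IDC or ECE R40 drive cycle rather than assume constant speed.

```python
# Back-of-envelope battery sizing for a two-wheeler all-electric range (AER).
# All vehicle parameters are illustrative guesses.

def road_load_power(v, m=150.0, crr=0.012, cd=0.9, area=0.6, rho=1.2, g=9.81):
    """Tractive power (W) at steady speed v (m/s) on a level road."""
    f_roll = crr * m * g                    # rolling resistance force (N)
    f_aero = 0.5 * rho * cd * area * v**2   # aerodynamic drag force (N)
    return (f_roll + f_aero) * v

def battery_energy_kwh(aer_km, v_kmh=40.0, eta=0.75, dod=0.8):
    """Pack energy (kWh) needed to cover aer_km at steady v_kmh."""
    v = v_kmh / 3.6
    hours = aer_km / v_kmh
    wheel_kwh = road_load_power(v) * hours / 1000.0
    return wheel_kwh / (eta * dod)          # drivetrain efficiency, usable depth

for aer in (20, 40, 60):
    print(aer, "km ->", round(battery_energy_kwh(aer), 2), "kWh")
```

Even this crude model reproduces the paper's central point: pack energy (and hence mass and cost) scales directly with the chosen all-electric range.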

  17. Inference

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text, written by Jesper Møller, Aalborg University, is submitted for the collection ‘Stochastic Geometry: Highlights, Interactions and New Perspectives', edited by Wilfrid S. Kendall and Ilya Molchanov, to be published by Clarendon Press, Oxford, and planned to appear as Section 4.1 with the title ‘Inference'.) This contribution concerns statistical inference for parametric models used in stochastic geometry, based on quick and simple simulation-free procedures as well as more comprehensive methods using Markov chain Monte Carlo (MCMC) simulations. Due to space limitations the focus…

  18. Inferring Genetic Ancestry: Opportunities, Challenges, and Implications

    OpenAIRE

    Royal, Charmaine D.; Novembre, John; Fullerton, Stephanie M.; Goldstein, David B.; Long, Jeffrey C.; Bamshad, Michael J.; Clark, Andrew G.

    2010-01-01

    Increasing public interest in direct-to-consumer (DTC) genetic ancestry testing has been accompanied by growing concern about issues ranging from the personal and societal implications of the testing to the scientific validity of ancestry inference. The very concept of “ancestry” is subject to misunderstanding in both the general and scientific communities. What do we mean by ancestry? How exactly is ancestry measured? How far back can such ancestry be defined and by which genetic tools? How ...

  19. Lower complexity bounds for lifted inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2015-01-01

    instances of the model. Numerous approaches for such “lifted inference” techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established earlier in Jaeger (2000; Jaeger, M. 2000. On the complexity of inference about …). It is shown that under the assumption that NETIME≠ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier-, and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference and for knowledge bases not containing…

  20. Long-Range Energy Propagation in Nanometer Arrays of Light Harvesting Antenna Complexes

    NARCIS (Netherlands)

    Escalantet, Maryana; Escalante Marun, M.; Lenferink, Aufrid T.M.; Zhao, Yiping; Tas, Niels Roelof; Huskens, Jurriaan; Hunter, C. Neil; Subramaniam, Vinod; Otto, Cornelis

    2010-01-01

    Here we report the first observation of long-range transport of excitation energy within a biomimetic molecular nanoarray constructed from LH2 antenna complexes from Rhodobacter sphaeroides. Fluorescence microscopy of the emission of light after local excitation with a diffraction-limited light beam

  1. Variations on Bayesian Prediction and Inference

    Science.gov (United States)

    2016-05-09

    There are a number of statistical inference problems that are not generally formulated via a full probability model. … For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.

  2. Active inference and epistemic value.

    Science.gov (United States)

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
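The decomposition described above can be written compactly. This is one common rendering; the symbols are generic and not necessarily the authors' exact notation.

```latex
% Quality (negative expected free energy) of a policy \pi at future time \tau:
\begin{equation}
  -G(\pi,\tau)
  = \underbrace{\mathbb{E}_{\tilde{Q}}\!\big[\ln P(o_\tau \mid C)\big]}_{\text{extrinsic value}}
  + \underbrace{\mathbb{E}_{\tilde{Q}}\!\Big[ D_{\mathrm{KL}}\big[\,Q(s_\tau \mid o_\tau,\pi)\;\big\|\;Q(s_\tau \mid \pi)\,\big] \Big]}_{\text{epistemic value}}
\end{equation}
% Here \tilde{Q} = Q(o_\tau, s_\tau \mid \pi) is the predictive distribution and
% C encodes prior preferences (goals) over outcomes.
```

Minimizing G thus maximizes expected utility (the extrinsic term) plus expected information gain about hidden states (the epistemic term), which is the resolution of the exploration-exploitation dilemma described in the abstract.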

  3. Sparse Bayesian Inference and the Temperature Structure of the Solar Corona

    Energy Technology Data Exchange (ETDEWEB)

    Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States); Byers, Jeff M. [Materials Science and Technology Division, Naval Research Laboratory, Washington, DC 20375 (United States); Crump, Nicholas A. [Naval Center for Space Technology, Naval Research Laboratory, Washington, DC 20375 (United States)

    2017-02-20

    Measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures. Unfortunately, the temperature of the upper atmosphere cannot be observed directly, but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures. Such observations are “inverted” to determine the distribution of plasma temperatures along the line of sight. This inversion is ill posed and, in the absence of regularization, tends to produce wildly oscillatory solutions. We introduce the application of sparse Bayesian inference to the problem of inferring the temperature structure of the solar corona. Within a Bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided. We demonstrate the efficacy of the Bayesian approach by considering a test library of 40 assumed temperature distributions.
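The ill-posed inversion and the "minimum number of basis functions" idea can be illustrated with a toy forward model. The Gaussian response kernels, the basis, and the single-component search below are crude stand-ins invented for this sketch, not the paper's sparse Bayesian machinery.

```python
# Toy temperature-inversion sketch: recover a narrow emission measure
# distribution from a few synthetic "line" intensities using a one-basis-
# function search (a caricature of sparsity-promoting inference).
import numpy as np

n_t = 30
logT = np.linspace(5.5, 7.0, n_t)                     # log temperature grid
# Each observed "line" responds to a band of temperatures (assumed kernels).
centers = [5.8, 6.0, 6.2, 6.4, 6.6, 6.8]
K = np.array([np.exp(-0.5 * ((logT - c) / 0.12)**2) for c in centers])

true_dem = np.exp(-0.5 * ((logT - 6.3) / 0.10)**2)    # one narrow component
intens = K @ true_dem                                  # noiseless observations

# Sparse search: try every single Gaussian basis bump and keep the one whose
# best-scaled response matches the observed intensities.
best = None
for c in np.linspace(5.6, 6.9, 27):
    phi = np.exp(-0.5 * ((logT - c) / 0.10)**2)
    resp = K @ phi
    scale = (resp @ intens) / (resp @ resp)            # least-squares amplitude
    err = np.sum((intens - scale * resp)**2)
    if best is None or err < best[0]:
        best = (err, c, scale)

print("recovered center logT =", round(best[1], 2))
```

An unregularized least-squares solution over all 30 bins would be wildly oscillatory here (6 measurements, 30 unknowns); restricting the solution to a minimal number of basis functions, as the sparse Bayesian prior does in a principled way, pins down the underlying distribution.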

  4. CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regulatory networks.

    Science.gov (United States)

    Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming

    2014-11-30

    Prediction of gene regulatory networks (GRN) from expression data is a challenging task. Many methods have been developed to address this challenge, ranging from supervised to unsupervised approaches. The most promising methods are based on support vector machines (SVM). There is a need for a comprehensive analysis of the prediction accuracy of supervised SVM methods using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for inference of GRN. Using CompareSVM, we investigated and evaluated different SVM kernel methods on simulated microarray datasets of different sizes in detail. The results obtained from CompareSVM showed that the accuracy of an inference method depends upon the nature of the experimental condition and the size of the network. For networks with a small number of nodes, the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
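Why kernel choice matters can be shown on a toy non-linear task. For portability this sketch uses a kernel perceptron rather than an SVM, and XOR-style synthetic points rather than gene-expression profiles, so it only illustrates the linear-vs-Gaussian contrast that CompareSVM quantifies properly.

```python
# Linear vs Gaussian kernel on a non-linearly-separable toy dataset,
# using a kernel perceptron as a lightweight SVM stand-in.
import math

def linear_k(a, b):
    return sum(x * y for x, y in zip(a, b))

def gauss_k(a, b, gamma=2.0):
    d2 = sum((x - y)**2 for x, y in zip(a, b))
    return math.exp(-gamma * d2)

def kernel_perceptron_acc(X, y, kernel, epochs=20):
    """Train a kernel perceptron on (X, y) and return training accuracy."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            s = sum(a * yi * kernel(xj, xi)
                    for a, yi, xj in zip(alpha, y, X))
            if y[i] * s <= 0:         # misclassified: add this point
                alpha[i] += 1
    correct = 0
    for i, xi in enumerate(X):
        s = sum(a * yi * kernel(xj, xi)
                for a, yi, xj in zip(alpha, y, X))
        correct += (y[i] * s > 0)
    return correct / len(X)

# XOR-style data: not linearly separable.
X = [(0, 0), (1, 1), (0, 1), (1, 0),
     (0.1, 0.1), (0.9, 0.9), (0.1, 0.9), (0.9, 0.1)]
y = [1, 1, -1, -1, 1, 1, -1, -1]

print("linear  :", kernel_perceptron_acc(X, y, linear_k))
print("gaussian:", kernel_perceptron_acc(X, y, gauss_k))
```

The Gaussian kernel fits this labeling perfectly while the linear kernel cannot, mirroring the abstract's finding that the Gaussian kernel wins on some experimental conditions but that no single kernel dominates everywhere.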

  5. Adaptive Inference on General Graphical Models

    OpenAIRE

    Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur

    2012-01-01

    Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...

  6. High energy ion range and deposited energy calculation using the Boltzmann-Fokker-Planck splitting of the Boltzmann transport equation

    International Nuclear Information System (INIS)

    Mozolevski, I.E.

    2001-01-01

    We consider the splitting of the straight-ahead Boltzmann transport equation into the Boltzmann-Fokker-Planck equation, decomposing the differential cross-section into a singular part, corresponding to small energy-transfer events, and a regular part, which corresponds to large energy transfers. The convergence of the implantation profile and the nuclear and electronic energy depositions calculated from the Boltzmann-Fokker-Planck equation to the respective exact distributions calculated by the Monte Carlo method was examined over a large energy interval, for various values of the splitting parameter and for different ion-target mass relations. It is shown that for the universal potential there exists an optimal value of the splitting parameter for which the range and deposited-energy distributions calculated from the Boltzmann-Fokker-Planck equation accurately approximate the exact distributions and which minimizes the computational expense.
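The splitting idea can be demonstrated numerically: the energy-transfer cross-section is cut at a parameter eps into a "soft" part (small transfers, treated Fokker-Planck style through its moments) and a "hard" part (large transfers, kept in the Boltzmann collision term), and the split must conserve integral quantities such as the stopping power. The power-law cross-section below is a generic illustration, not the universal potential used in the paper.

```python
# Sketch of the Boltzmann-Fokker-Planck splitting of an energy-transfer
# cross-section at a cutoff eps (illustrative power-law cross-section).

def sigma(t):
    """Differential cross-section for energy transfer t (arbitrary units):
    strongly peaked at small t, i.e. many small-transfer events."""
    return 1.0 / t**2

def stopping_power(t_lo, t_hi, n=20000):
    """Mean energy-loss rate: integral of t * sigma(t) dt (midpoint rule)."""
    h = (t_hi - t_lo) / n
    return sum((t_lo + (i + 0.5) * h) * sigma(t_lo + (i + 0.5) * h)
               for i in range(n)) * h

t_min, t_max, eps = 1e-3, 1.0, 1e-2   # eps is the splitting parameter
total = stopping_power(t_min, t_max)
soft = stopping_power(t_min, eps)      # -> Fokker-Planck (continuous slowing)
hard = stopping_power(eps, t_max)      # -> Boltzmann (discrete collisions)
print(total, soft + hard)              # the split conserves the total
```

Moving eps trades accuracy against cost: a larger eps pushes more collisions into the cheap Fokker-Planck description, which is exactly the trade-off behind the optimal splitting parameter reported in the abstract.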

  7. Search for solar Axion Like Particles in the low energy range at CAST

    International Nuclear Information System (INIS)

    Cantatore, G.; Karuza, M.; Lozza, V.; Raiteri, G.

    2010-01-01

    Axion Like Particles (ALPs) could be continuously produced in the Sun via the Primakoff process. The ALP flux could be seen on Earth by observing the photons produced by the ALP decay. The expected energy distribution of reconverted photons is peaked at 3 keV. There could be, however, a low energy tail due to various processes active in the Sun. We report results of the first test measurements in the low energy range performed at CAST along with a description of the experimental setup. Future detector developments are discussed and preliminary results on a liquid nitrogen cooled Avalanche Photodiode are presented.

  8. True coincidence summing corrections for an extended energy range HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Venegas-Argumedo, Y. [Centro de Investigación en Materiales Avanzados (CIMAV), Miguel de Cervantes 120, Chihuahua, Chih 31109 (Mexico); M.S. Student at CIMAV (Mexico); Montero-Cabrera, M. E., E-mail: elena.montero@cimav.edu.mx [Centro de Investigación en Materiales Avanzados (CIMAV), Miguel de Cervantes 120, Chihuahua, Chih 31109 (Mexico)

    2015-07-23

    The true coincidence summing (TCS) effect for the natural radioactive families of U-238 and Th-232 represents a problem when an environmental sample is measured in close source-detector geometry. By using a certified multi-nuclide standard source to calibrate an extended energy range (XtRa) HPGe detector, it is possible to obtain an intensity spectrum only slightly affected by the TCS effect at energies from 46 to 1836 keV. In this work, the equations and other considerations required to calculate the TCS correction factor for isotopes of the natural radioactive chains are described. A validation of the calibration, using the IAEA-CU-2006-03 samples (soil and water), is planned.
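The structure of a TCS correction can be shown for the simplest case, a two-step cascade where gamma-2 summing out removes counts from the gamma-1 full-energy peak. The efficiencies below are made-up numbers, not calibration data for the detector in the paper, and real chains (U-238, Th-232) involve many branches and angular-correlation terms.

```python
# Summing-out correction for the simplest two-gamma cascade
# (gamma1 feeding gamma2); illustrative values only.

def tcs_correction_factor(p2, eff_total_2):
    """Factor by which to multiply the apparent gamma1 peak area.

    p2          -- probability that gamma2 is emitted given gamma1 was emitted
    eff_total_2 -- TOTAL (not full-energy-peak) efficiency at the gamma2 energy
    """
    return 1.0 / (1.0 - p2 * eff_total_2)

# Close geometry means high total efficiency, hence large summing-out losses:
print(tcs_correction_factor(1.0, 0.20))   # prints 1.25 (20% of counts summed out)
print(tcs_correction_factor(1.0, 0.02))   # far geometry: correction near 1
```

The key point, visible even in this sketch, is that the correction depends on the total efficiency (any interaction of gamma-2 removes the event from the gamma-1 peak), which is why close-geometry environmental measurements are the problematic case.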

  9. Transport properties of gaseous ions over a wide energy range, IV

    International Nuclear Information System (INIS)

    Viehland, L.A.; Mason, E.A.

    1995-01-01

    This paper updates three previous papers entitled “Transport Properties of Gaseous Ions over a Wide Energy Range”. These papers, referred to as Parts I, II, and III, were by H.W. Ellis, P.Y. Pai, E.W. McDaniel, E.A. Mason, and L.A. Viehland, S.L. Lin, M.G. Thackston. Part IV contains compilations of experimental data on ionic mobilities and diffusion coefficients (both longitudinal and transverse) for ions in neutral gases in an externally applied electrostatic field, at various gas temperatures; the data are tabulated as a function of the ionic energy parameter E/N, where E is the electric field strength and N is the number density of the neutral gas. Part IV also contains a locator key to ionic mobilities and diffusion coefficients compiled in Parts I-IV. The coverage of the literature extends into 1994. The criteria for selection of the data are: (1) the measurements must cover a reasonably wide range of E/N; (2) the identity of the ions must be well established; and (3) the accuracy of the data must be good. 26 refs., 6 tabs

  10. STRIDE: Species Tree Root Inference from Gene Duplication Events.

    Science.gov (United States)

    Emms, David M; Kelly, Steven

    2017-12-01

    The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Theoretical study of cylindrical energy analyzers for MeV range heavy ion beam probes

    International Nuclear Information System (INIS)

    Fujisawa, A.; Hamada, Y.

    1993-07-01

    A cylindrical energy analyzer with drift spaces is shown to have second-order focusing with respect to the beam incident angle when the deflection angle is properly chosen. The analyzer can be applied to MeV-range heavy ion beam probes, and will also be useful for accurate particle-energy measurements in many other fields. (author)

  12. The inference from a single case: moral versus scientific inferences in implementing new biotechnologies.

    Science.gov (United States)

    Hofmann, B

    2008-06-01

    Are there similarities between scientific and moral inference? This is the key question in this article. It takes as its point of departure an instance of one person's story in the media changing both Norwegian public opinion and a brand-new Norwegian law prohibiting the use of saviour siblings. The case appears to falsify existing norms and to establish new ones. The analysis of this case reveals similarities in the modes of inference in science and morals, inasmuch as (a) a single case functions as a counter-example to an existing rule; (b) there is a common presupposition of stability, similarity and order, which makes it possible to reason from a few cases to a general rule; and (c) this makes it possible to hold things together and retain order. In science, these modes of inference are referred to as falsification, induction and consistency. In morals, they have a variety of other names. Hence, even without abandoning the fact-value divide, there appear to be similarities between inference in science and inference in morals, which may encourage communication across the boundaries between "the two cultures" and which are relevant to medical humanities.

  13. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist

  14. Active inference, communication and hermeneutics.

    Science.gov (United States)

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Assessing children's inference generation: what do tests of reading comprehension measure?

    Science.gov (United States)

    Bowyer-Crane, Claudine; Snowling, Margaret J

    2005-06-01

    Previous research suggests that children with specific comprehension difficulties have problems with the generation of inferences. This raises important questions as to whether poor comprehenders have poor comprehension skills generally, or whether their problems are confined to specific inference types. The main aims of the study were (a) to classify, in two commonly used tests of reading comprehension, the questions requiring the generation of inferences, and (b) to investigate the relative performance of skilled and less-skilled comprehenders on questions tapping different inference types. The performance of 10 poor comprehenders (mean age 110.06 months) was compared with the performance of 10 normal readers (mean age 112.78 months) on two tests of reading comprehension. A qualitative analysis of the NARA II (form 1) and the WORD comprehension subtest was carried out. Participants were then administered the NARA II, the WORD comprehension subtest and a test of non-word reading. The NARA II was heavily reliant on the generation of knowledge-based inferences, while the WORD comprehension subtest was biased towards the retention of literal information. Children identified by the NARA II as having comprehension difficulties performed in the normal range on the WORD comprehension subtests. Further, children with comprehension difficulties performed poorly on questions requiring the generation of knowledge-based and elaborative inferences. However, they were able to answer questions requiring attention to literal information or use of cohesive devices at a level comparable to normal readers. Different reading tests tap different types of inferencing skills. Less-skilled comprehenders have particular difficulty applying real-world knowledge to a text during reading, and this has implications for the formulation of effective intervention strategies.

  16. Interplay of short-range correlations and nuclear symmetry energy in hard-photon production from heavy-ion reactions at Fermi energies

    Science.gov (United States)

    Yong, Gao-Chan; Li, Bao-An

    2017-12-01

Within an isospin- and momentum-dependent transport model for nuclear reactions at intermediate energies, we investigate the interplay of the nucleon-nucleon short-range correlations (SRCs) and the nuclear symmetry energy Esym(ρ) on hard-photon spectra in collisions of several Ca isotopes on 112Sn and 124Sn targets at a beam energy of 45 MeV/nucleon. It is found that over the whole spectra of hard photons studied, effects of the SRCs overwhelm those owing to the Esym(ρ). The energetic photons come mostly from the high-momentum tails (HMTs) of single-nucleon momentum distributions in the target and projectile. Within the neutron-proton dominance model of SRCs, based on the consideration that the tensor force acts mostly in the isosinglet and spin-triplet nucleon-nucleon interaction channel, there are equal numbers of neutrons and protons, and thus a zero isospin asymmetry, in the HMTs. Therefore, experimental measurements of the energetic photons from heavy-ion collisions at Fermi energies have great potential to help us better understand the nature of SRCs without any appreciable influence by the uncertain Esym(ρ). These measurements will be complementary to, but also have some advantages over, the ongoing and planned experiments using hadronic messengers from reactions induced by high-energy electrons or protons. Because the underlying physics of SRCs and Esym(ρ) are closely correlated, a better understanding of the SRCs will, in turn, help constrain the nuclear symmetry energy more precisely in a broad density range.

  17. Excitation energies from Görling-Levy perturbation theory along the range-separated adiabatic connection

    Science.gov (United States)

    Rebolini, Elisa; Teale, Andrew M.; Helgaker, Trygve; Savin, Andreas; Toulouse, Julien

    2018-06-01

A Görling-Levy (GL)-based perturbation theory along the range-separated adiabatic connection is assessed for the calculation of electronic excitation energies. In comparison with the Rayleigh-Schrödinger (RS)-based perturbation theory, this GL-based perturbation theory keeps the ground-state density constant at each order and thus gives the correct ionisation energy at each order. Excitation energies up to first order in the perturbation have been calculated numerically for the helium and beryllium atoms and the hydrogen molecule without introducing any density-functional approximations. In comparison with the RS-based perturbation theory, the present GL-based perturbation theory gives much more accurate excitation energies for Rydberg states but similar excitation energies for valence states.

  18. How does Germany's green energy policy affect electricity market volatility? An application of conditional autoregressive range models

    International Nuclear Information System (INIS)

    Auer, Benjamin R.

    2016-01-01

Based on a dynamic model for the high/low range of electricity prices, this article analyses the effects of Germany's green energy policy on the volatility of the electricity market. Using European Energy Exchange data from 2000 to 2015, we find rather high volatility in the years 2000–2009 but also that the weekly price range has significantly declined in the period following the year 2009. This period is characterised by active regulation under the Energy Industry Law (EnWG), the EU Emissions Trading Directive (ETD) and the Renewable Energy Law (EEG). In contrast to the preceding period, price jumps are smaller and less frequent (especially for day-time hours), implying that current policy measures are effective in promoting renewable energies while simultaneously upholding electricity market stability. This is because the regulations strive towards an increasingly flexible and market-oriented structure which allows better integration of renewable energies and supports an efficient alignment of renewable electricity supply with demand. - Highlights: • We estimate a CARR model for German electricity price data. • We augment the model by dummies capturing important regulations. • We find a significant decline in the price range after the year 2009. • This implies effective price stabilisation by German energy policy.
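The CARR model the highlights refer to can be sketched in a few lines. The recursion below follows Chou's exponential CARR(1,1); the parameter values, series length, and quasi-likelihood initialisation are illustrative assumptions, not estimates from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# CARR(1,1) for a price range series R_t:
#   R_t = lambda_t * eps_t,  eps_t ~ Exp(1),
#   lambda_t = omega + alpha * R_{t-1} + beta * lambda_{t-1}
omega, alpha, beta = 0.2, 0.3, 0.6      # illustrative parameters
T = 500
lam = np.empty(T)
R = np.empty(T)
lam[0] = omega / (1.0 - alpha - beta)   # unconditional mean range
R[0] = lam[0]
for t in range(1, T):
    lam[t] = omega + alpha * R[t - 1] + beta * lam[t - 1]
    R[t] = lam[t] * rng.exponential(1.0)

def neg_log_likelihood(params, R):
    """Quasi-likelihood of the exponential CARR model (up to constants)."""
    omega, alpha, beta = params
    lam = np.empty_like(R)
    lam[0] = R.mean()                    # simple initialisation
    nll = 0.0
    for t in range(1, len(R)):
        lam[t] = omega + alpha * R[t - 1] + beta * lam[t - 1]
        nll += np.log(lam[t]) + R[t] / lam[t]
    return nll
```

Minimising `neg_log_likelihood` over (omega, alpha, beta) recovers the range dynamics; policy-regime dummies of the kind the article describes would enter the lambda recursion as additional terms.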

  19. Analyses of Alpha-Alpha Elastic Scattering Data in the Energy Range 140 - 280 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Shehadeh, Zuhair F. [Taif University, Taif (Saudi Arabia)

    2017-01-15

The differential and reaction cross-sections for 4He-4He elastic scattering have been well reproduced at four energies ranging from 140 MeV to 280 MeV (lab system), namely 140, 160, 198 and 280 MeV, by using a new optical potential with a short-range repulsive core. The treatment has been handled relativistically, as v/c > 0.25 for the two lower energies and v/c > 0.31 for the two higher ones. In addition to explaining the elastic angular distributions, the adopted potentials account for the structure that may exist at angles close to 90°, especially for the 198- and 280-MeV incident energies. No renormalization has been used, and all our potential parameters are new. The necessity of including a short-range repulsive term in the real part of the nuclear potential has been demonstrated. Our results contribute to solving a long-standing problem concerning the nature of the alpha-alpha potential. This is very beneficial in explaining unknown alpha-nucleus and nucleus-nucleus relativistic reactions by using the cluster formalism.

  20. Electron response of some low-Z scintillators in wide energy range

    International Nuclear Information System (INIS)

    Swiderski, L; Marcinkowski, R; Moszynski, M; Czarnacki, W; Szawlowski, M; Szczesniak, T; Pausch, G; Plettner, C; Roemer, K

    2012-01-01

Light yield nonproportionality and the intrinsic resolution of some low atomic number scintillators were studied by means of the Wide Angle Compton Coincidence (WACC) technique. The plastic and liquid scintillator response to Compton electrons was measured in the energy range of 10 keV up to 4 MeV, whereas a CaF2:Eu sample was scanned from 3 keV up to 1 MeV. The nonproportionality of the CaF2:Eu light yield has characteristics typical for inorganic scintillators of the multivalent halides group, whereas tested organic scintillators show steeply increasing nonproportionality without saturation point. This is in contrast to the behavior of all known inorganic scintillators having their nonproportionality curves at saturation above energies between tens and several hundred keV.

  1. Electron response of some low-Z scintillators in wide energy range

    Science.gov (United States)

    Swiderski, L.; Marcinkowski, R.; Moszynski, M.; Czarnacki, W.; Szawlowski, M.; Szczesniak, T.; Pausch, G.; Plettner, C.; Roemer, K.

    2012-06-01

    Light yield nonproportionality and the intrinsic resolution of some low atomic number scintillators were studied by means of the Wide Angle Compton Coincidence (WACC) technique. The plastic and liquid scintillator response to Compton electrons was measured in the energy range of 10 keV up to 4 MeV, whereas a CaF2:Eu sample was scanned from 3 keV up to 1 MeV. The nonproportionality of the CaF2:Eu light yield has characteristics typical for inorganic scintillators of the multivalent halides group, whereas tested organic scintillators show steeply increasing nonproportionality without saturation point. This is in contrast to the behavior of all known inorganic scintillators having their nonproportionality curves at saturation above energies between tens and several hundred keV.

  2. Optimization methods for logical inference

    CERN Document Server

    Chandru, Vijay

    2011-01-01

Merging logic and mathematics in deductive inference: an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks... it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in…

  3. Prospects for bioenergy use in Ghana using Long-range Energy Alternatives Planning model

    DEFF Research Database (Denmark)

    Kemausuor, Francis; Nygaard, Ivan; Mackenzie, Gordon A.

    2015-01-01

    biomass sources, through the production of biogas, liquid biofuels and electricity. Analysis was based on moderate and high use of bioenergy for transportation, electricity generation and residential fuel using the LEAP (Long-range Energy Alternatives Planning) model. Results obtained indicate...

  4. Low energy range dielectronic recombination of Fluorine-like Fe17+ at the CSRm

    Science.gov (United States)

    Khan, Nadir; Huang, Zhong-Kui; Wen, Wei-Qiang; Mahmood, Sultan; Dou, Li-Jun; Wang, Shu-Xing; Xu, Xin; Wang, Han-Bing; Chen, Chong-Yang; Chuai, Xiao-Ya; Zhu, Xiao-Long; Zhao, Dong-Mei; Mao, Li-Jun; Li, Jie; Yin, Da-Yu; Yang, Jian-Cheng; Yuan, You-Jin; Zhu, Lin-Fan; Ma, Xin-Wen

    2018-05-01

The accuracy of dielectronic recombination (DR) data for astrophysics-related ions plays a key role in astrophysical plasma modeling. The absolute DR rate coefficient of Fe17+ ions was measured at the main cooler storage ring at the Institute of Modern Physics, Lanzhou, China. The experimental electron-ion collision energy range covers the first Rydberg series up to n = 24 for the DR resonances associated with the 2P1/2 → 2P3/2 (Δn = 0) core excitations. A theoretical calculation was performed using the FAC code and compared with the measured DR rate coefficient. Overall reasonable agreement was found between the experimental results and calculations. Moreover, the plasma rate coefficient was deduced from the experimental DR rate coefficient and compared with the available results from the literature. At the low energy range, significant discrepancies were found, and the measured resonances challenge state-of-the-art theory at low collision energies. Supported by the National Key R&D Program of China (2017YFA0402300), the National Natural Science Foundation of China (11320101003, U1732133, 11611530684) and the Key Research Program of Frontier Sciences, CAS (QYZDY-SSW-SLH006)

  5. Combined production of free-range pigs and energy crops – animal behaviour and crop damages

    DEFF Research Database (Denmark)

    Horsted, Klaus; Kongsted, Anne Grete; Jørgensen, Uffe

    2012-01-01

Intensive free-range pig production on open grasslands has disadvantages in that it creates nutrient hotspots and little opportunity for pigs to seek shelter from the sun. Combining a perennial energy crop and pig production might benefit the environment and animal welfare because perennial energy crops like willow (Salix sp.) and Miscanthus offer the pigs protection from the sun while reducing nutrient leaching from pig excrements due to their deep rooting system. The objectives of this study were to evaluate how season and stocking density of pigs in a free-range system with zones of willow…

  6. Inference in 'poor' languages

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, S.

    1996-10-01

Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.

  7. Harnessing Big-Data for Estimating the Energy Consumption and Driving Range of Electric Vehicles

    DEFF Research Database (Denmark)

    Fetene, Gebeyehu Manie; Prato, Carlo Giacomo; Kaplan, Sigal

-effects econometrics model used in this paper predicts that the energy-saving speed of driving is between 45 and 56 km/h. In addition to its contribution to the literature on the energy efficiency of electric vehicles, the findings from this study help consumers choose appropriate cars that suit their travel... This study analyses the driving range and investigates the factors affecting the energy consumption rate of fully-battery electric vehicles under real-world driving patterns, accounting for weather conditions, drivers' characteristics, and road characteristics. Four data sources are used: (i) up...

  8. A portable and wide energy range semiconductor-based neutron spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Hoshor, C.B. [Department of Physics, University of Missouri, Kansas City, MO (United States); Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Myers, E.R.; Rogers, B.J.; Currie, J.E.; Young, S.M.; Crow, J.A.; Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, Manhattan, KS (United States); Fronk, R.G.; Shultis, J.K.; McGregor, D.S. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2015-12-11

    Hand-held instruments that can be used to passively detect and identify sources of neutron radiation—either bare or obscured by neutron moderating and/or absorbing material(s)—in real time are of interest in a variety of nuclear non-proliferation and health physics applications. Such an instrument must provide a means to high intrinsic detection efficiency and energy-sensitive measurements of free neutron fields, for neutrons ranging from thermal energies to the top end of the evaporation spectrum. To address and overcome the challenges inherent to the aforementioned applications, four solid-state moderating-type neutron spectrometers of varying cost, weight, and complexity have been designed, fabricated, and tested. The motivation of this work is to introduce these novel human-portable instruments by discussing the fundamental theory of their operation, investigating and analyzing the principal considerations for optimal instrument design, and evaluating the capability of each of the four fabricated spectrometers to meet the application needs.

  9. A portable and wide energy range semiconductor-based neutron spectrometer

    International Nuclear Information System (INIS)

    Hoshor, C.B.; Oakes, T.M.; Myers, E.R.; Rogers, B.J.; Currie, J.E.; Young, S.M.; Crow, J.A.; Scott, P.R.; Miller, W.H.; Bellinger, S.L.; Sobering, T.J.; Fronk, R.G.; Shultis, J.K.; McGregor, D.S.; Caruso, A.N.

    2015-01-01

    Hand-held instruments that can be used to passively detect and identify sources of neutron radiation—either bare or obscured by neutron moderating and/or absorbing material(s)—in real time are of interest in a variety of nuclear non-proliferation and health physics applications. Such an instrument must provide a means to high intrinsic detection efficiency and energy-sensitive measurements of free neutron fields, for neutrons ranging from thermal energies to the top end of the evaporation spectrum. To address and overcome the challenges inherent to the aforementioned applications, four solid-state moderating-type neutron spectrometers of varying cost, weight, and complexity have been designed, fabricated, and tested. The motivation of this work is to introduce these novel human-portable instruments by discussing the fundamental theory of their operation, investigating and analyzing the principal considerations for optimal instrument design, and evaluating the capability of each of the four fabricated spectrometers to meet the application needs.

  10. Cultural effects on the association between election outcomes and face-based trait inferences

    Science.gov (United States)

    Adolphs, Ralph; Alvarez, R. Michael

    2017-01-01

    How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants’ inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. 
These findings provide new insights into cultural effects and the difficult question of

  11. Cultural effects on the association between election outcomes and face-based trait inferences.

    Science.gov (United States)

    Lin, Chujun; Adolphs, Ralph; Alvarez, R Michael

    2017-01-01

    How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants' inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. 
These findings provide new insights into cultural effects and the difficult question of

  12. Measurement of home-made LaCl3:Ce scintillation detector sensitivity at different energy points in the range of fission energy

    International Nuclear Information System (INIS)

    Hu Mengchun; Li Rurong; Si Fenni

    2010-01-01

Gamma rays of different energies were obtained in the range of fission energy by Compton scattering in an intense 60Co gamma source and with the standard isotopic gamma sources, namely the 0.67 MeV 137Cs and 1.25 MeV 60Co point sources. The sensitivity of the LaCl3:Ce scintillator was measured at these gamma-ray energies by a fast-response scintillation detector with the home-made LaCl3:Ce scintillator. Results were normalized by the sensitivity to the 0.67 MeV gamma ray. The sensitivity of LaCl3:Ce to the 1.25 MeV gamma ray is about 1.28. For the ø40 mm × 2 mm LaCl3:Ce scintillator, the biggest sensitivity is 1.18 and the smallest is 0.96 with gamma rays from 0.39 to 0.78 MeV. And for the ø40 mm × 10 mm LaCl3:Ce scintillator, the biggest sensitivity is 1.06 and the smallest is 0.98. The experimental results can provide references for theoretical study of the LaCl3:Ce scintillator and data to obtain the compounded sensitivity of the LaCl3:Ce scintillator in the range of fission energy. (authors)
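The normalisation described above is simple division by the 0.67 MeV reference value. A tiny sketch follows; the pairing of specific energies with the quoted minimum/maximum sensitivities is hypothetical, since the abstract only gives the extremes and the reference.

```python
# Sensitivity relative to the 0.67 MeV (137Cs) reference line.
# Energy (MeV) -> measured sensitivity (arb. units); pairings illustrative.
measured = {0.39: 0.96, 0.67: 1.00, 0.78: 1.18, 1.25: 1.28}
reference = measured[0.67]
relative = {E: s / reference for E, s in measured.items()}
# By construction the reference energy maps to exactly 1.0.
```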

  13. Dual peripheral model up to Serpukhov energies

    CERN Document Server

    Schrempp, Barbara

    1974-01-01

The high-energy behaviour of the s-channel Regge residues is inferred from three plausible requirements. The resulting s-channel helicity amplitudes allow, in a dual sense, the following t-channel interpretation: for -t ≥ 0.25 GeV² the flip amplitude has the form of a t-channel Regge pole, while the non-flip amplitude looks like a Regge cut. Finally, a quantitative comparison of the predictions with the data available for the set of SU(3)-related processes πN CEX, KN, KN CEX and π⁻p → ηn is performed, covering the energy range from 2 up to Serpukhov energies and the range of momentum transfer from 0.25…

  14. A Drawing Task to Assess Emotion Inference in Language-Impaired Children

    Science.gov (United States)

    Vendeville, Nathalie; Blanc, Nathalie; Brechet, Claire

    2015-01-01

    Purpose: Studies investigating the ability of children with language impairment (LI) to infer emotions rely on verbal responses (which can be challenging for these children) and/or the selection of a card representing an emotion (which limits the response range). In contrast, a drawing task might allow a broad spectrum of responses without…

  15. On the criticality of inferred models

    Science.gov (United States)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-10-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
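The identity between Fisher information and susceptibility invoked here can be checked in the smallest possible case, a single ±1 spin in a field h, where p(x|h) = exp(hx)/(2 cosh h), the magnetization is ⟨x⟩ = tanh h, and the susceptibility χ = d⟨x⟩/dh = 1 − tanh²h equals I(h) exactly. A minimal numerical sketch (the model and field value are illustrative, not from the paper):

```python
import numpy as np

# Single +/-1 spin: p(x|h) = exp(h*x) / (2*cosh(h)).
# Fisher information I(h) = Var(x) = susceptibility d<x>/dh.
def magnetization(h):
    return np.tanh(h)

def fisher_information(h, eps=1e-6):
    # Estimate I(h) as the numerical derivative of <x> w.r.t. h.
    return (magnetization(h + eps) - magnetization(h - eps)) / (2 * eps)

h = 0.7
chi_exact = 1.0 - np.tanh(h) ** 2      # closed-form susceptibility
assert abs(fisher_information(h) - chi_exact) < 1e-6
```

In an interacting model the same χ diverges at a critical point, which is exactly the mechanism by which inferred parameters accumulate near criticality in the abstract's argument.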

  16. On the criticality of inferred models

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-01-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality

  17. An Inference Language for Imaging

    DEFF Research Database (Denmark)

    Pedemonte, Stefano; Catana, Ciprian; Van Leemput, Koen

    2014-01-01

We introduce iLang, a language and software framework for probabilistic inference. The iLang framework enables the definition of directed and undirected probabilistic graphical models and the automated synthesis of high-performance inference algorithms for imaging applications. The iLang framework…

  18. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    Science.gov (United States)

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
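The ABC recipe the abstract relies on (simulate from the model, summarise, keep parameter draws whose simulations resemble the data) can be sketched without any phylogenetics. Below, a plain rejection sampler with a mean-based distance stands in for the paper's tree-kernel distance; the exponential toy model, prior range, and tolerance are illustrative assumptions.

```python
import random

random.seed(1)

def simulate(rate, n=200):
    """Toy generative model: n draws from an exponential distribution."""
    return [random.expovariate(rate) for _ in range(n)]

# 'Observed' data and its summary statistic (here just the sample mean;
# the kernel distance between trees plays this role for phylogenies).
observed = simulate(2.0)
obs_summary = sum(observed) / len(observed)

accepted = []
for _ in range(5000):
    rate = random.uniform(0.1, 5.0)            # draw from the prior
    sim_summary = sum(simulate(rate)) / 200    # summarise a simulation
    if abs(sim_summary - obs_summary) < 0.05:  # rejection step
        accepted.append(rate)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is why ABC can fit models whose likelihoods are intractable.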

  19. Inference

    DEFF Research Database (Denmark)

    Møller, Jesper

    2010-01-01

Chapter 9: This contribution concerns statistical inference for parametric models used in stochastic geometry, based on quick and simple simulation-free procedures as well as more comprehensive methods based on a maximum likelihood or Bayesian approach combined with Markov chain Monte Carlo (MCMC) techniques. Due to space limitations the focus is on spatial point processes.

  20. Quantitative DMS mapping for automated RNA secondary structure inference

    OpenAIRE

    Cordero, Pablo; Kladwang, Wipapat; VanLang, Christopher C.; Das, Rhiju

    2012-01-01

For decades, dimethyl sulfate (DMS) mapping has informed manual modeling of RNA structure in vitro and in vivo. Here, we incorporate DMS data into automated secondary structure inference using a pseudo-energy framework developed for 2'-OH acylation (SHAPE) mapping. On six non-coding RNAs with crystallographic models, DMS-guided modeling achieves overall false negative and false discovery rates of 9.5% and 11.6%, comparable to or better than SHAPE-guided modeling; and non-parametric bootstrapping…
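The pseudo-energy framework mentioned above adds a per-nucleotide term to the folding free energy whenever a nucleotide is paired, of the Deigan form m·ln(1 + reactivity) + b. The slope and intercept below are commonly quoted SHAPE values used purely for illustration, not the parameters fitted in this paper.

```python
import math

M, B = 2.6, -0.8  # kcal/mol; illustrative slope and intercept

def pseudo_energy(reactivity):
    """Pairing penalty/bonus derived from a chemical-mapping reactivity."""
    return M * math.log(1.0 + reactivity) + B

# A highly reactive (likely unpaired) nucleotide is penalised for pairing,
# while an unreactive one receives a small bonus that favours pairing.
assert pseudo_energy(2.0) > 0.0 > pseudo_energy(0.0)
```

Summing this term over the paired positions of a candidate structure biases the thermodynamic folding engine toward structures consistent with the mapping data.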

  1. Feature Inference Learning and Eyetracking

    Science.gov (United States)

    Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.

    2009-01-01

    Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…

  2. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
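The point that the posterior "contains useful information far beyond its mode" is easiest to see in the conjugate Gaussian case, where mean and covariance are both closed-form and the covariance is what sequential experimental design exploits. The sketch below substitutes a plain Gaussian prior for the sparsity-favouring priors discussed in the paper, purely so everything stays exact; dimensions and variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian model: y = X w + noise, Gaussian prior on w.
sigma2, tau2 = 0.1, 1.0                       # noise / prior variances
X = rng.normal(size=(20, 5))
w_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=20)

S_inv = X.T @ X / sigma2 + np.eye(5) / tau2   # posterior precision
S = np.linalg.inv(S_inv)                      # posterior covariance
mu = S @ X.T @ y / sigma2                     # posterior mean (= mode here)

# Sequential design: score candidate measurement rows by predictive
# variance and pick the one the current posterior is least sure about.
candidates = rng.normal(size=(100, 5))
pred_var = np.einsum('ij,jk,ik->i', candidates, S, candidates)
next_row = candidates[np.argmax(pred_var)]
```

With a Laplace or other sparsity prior the covariance is no longer available in closed form, which is precisely where the variational approximations reviewed in the paper come in.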

  3. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have recently been given strong convex optimization characterizations, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  4. Photodissociation of HCN and HNC isomers in the 7-10 eV energy range

    Energy Technology Data Exchange (ETDEWEB)

    Chenel, Aurelie; Roncero, Octavio, E-mail: octavio.roncero@csic.es [Instituto de Física Fundamental (IFF-CSIC), C.S.I.C., Serrano 123, 28006 Madrid (Spain); Aguado, Alfredo [Departamento de Química Física Aplicada (UAM), Unidad Asociada a IFF-CSIC, Facultad de Ciencias Módulo 14, Universidad Autónoma de Madrid, 28049 Madrid (Spain); Agúndez, Marcelino; Cernicharo, José [Instituto de Ciencia de Materiales de Madrid, CSIC, C/ Sor Juana Inés de la Cruz 3, 28049 Cantoblanco (Spain)

    2016-04-14

    The ultraviolet photoabsorption spectra of the HCN and HNC isomers have been simulated in the 7-10 eV photon energy range. For this purpose, the three-dimensional adiabatic potential energy surfaces of the 7 lowest electronic states, and the corresponding transition dipole moments, have been calculated at the multireference configuration interaction level. The spectra are calculated with a quantum wave packet method on these adiabatic potential energy surfaces. The spectra for the 3 lower excited states, the dissociative electronic states, correspond essentially to predissociation peaks, most of them through tunneling on the same adiabatic state. The 3 higher electronic states are bound, hereafter electronic bound states, and their spectra consist of delta lines in the adiabatic approximation. The radiative lifetime of these bound states towards the ground electronic state has been calculated and is longer than 10 ns in all cases, much longer than the characteristic predissociation lifetimes. The spectrum of HCN is compared with the available experimental data and previous theoretical simulations, while in the case of HNC there are no previous studies to our knowledge. The spectrum of HNC is considerably more intense than that of HCN in the 7-10 eV photon energy range, which points to a higher photodissociation rate for HNC, compared to HCN, in astrophysical environments illuminated by ultraviolet radiation.

  5. Frontier applications of rf superconductivity for high energy physics in the TeV range

    International Nuclear Information System (INIS)

    Tigner, M.; Padamsee, H.

    1988-01-01

    The authors' present understanding of the fundamental nature of matter is embodied in the standard theory. This theory views all matter as composed of families of quarks and leptons, with their interactions mediated by the family of force-carrying particles. Progress in particle accelerators has been a vital element in bringing about this level of understanding. Although the standard theory is successful in relating a wide range of phenomena, it raises deeper questions about the basic nature of matter and energy. Among these are: why are the masses of the various elementary particles and the strengths of the basic forces what they are? It is expected that over the next decade a new generation of accelerators spanning the 100 GeV mass range will shed light on some of these questions. These accelerators will provide the means to thoroughly explore the energy regime corresponding to the mass scale of the weak interactions, revealing intimate details of the force-carrying particles, the weak bosons Z0 and W±. Superconducting rf technology will feature in a major way in the electron storage rings. Current theoretical ideas predict that to make further progress towards a more fundamental theory of matter, it will be necessary to penetrate the TeV energy regime. At this scale a whole new range of phenomena will manifest the nature of the symmetry-breaking mechanism that must be responsible for the differences they observe between the familiar weak and electromagnetic forces. History has shown that unexpected discoveries made in a new energy regime have proven to be the main engine of progress. The experimental challenge to accelerator designers and builders is clear. 11 references, 3 figures, 1 table

  6. Radial diffusion in the Uranian radiation belts - Inferences from satellite absorption loss models

    Science.gov (United States)

    Hood, L. L.

    1989-01-01

    Low-energy charged particle (LECP) phase space density profiles available from the Voyager/1986 Uranus encounter are analyzed, using solutions of the time-averaged radial diffusion equation for charged particle transport in a dipolar planetary magnetic field. Profiles for lower-energy protons and electrons are first analyzed to infer radial diffusion rate as a function of L, assuming that satellite absorption is the dominant loss process and local sources for these particles are negligible. Satellite macrosignatures present in the experimentally derived profiles are approximately reproduced in several cases, lending credence to the loss model and indicating that magnetospheric distributed losses are not as rapid as satellite absorption near the minimum satellite L shells for the particles. Diffusion rates and L dependences are found to be similar to those previously inferred in the inner Jovian magnetosphere (Thomsen et al., 1977) and for the inner Saturnian magnetosphere (Hood, 1985). Profiles for higher energy electrons and protons are also analyzed using solutions that allow for the existence of significant particle sources as well as sinks. Possible implications for radial diffusion mechanisms in the Uranian radiation belts are discussed.

  7. 3He(α,γ)7Be cross section in a wide energy range

    Directory of Open Access Journals (Sweden)

    Szücs Tamás

    2017-01-01

    The reaction rate of the 3He(α,γ)7Be reaction is important both in Big Bang Nucleosynthesis (BBN) and in solar hydrogen burning. There have been many experimental and theoretical efforts to determine this reaction rate with high precision. Some long-standing issues have been resolved by more precise investigations, such as the different S(0) values predicted by activation and in-beam measurements. However, recent, more detailed astrophysical model predictions require the reaction rate with even higher precision to unravel new issues such as the solar composition. One way to increase the precision is to provide a comprehensive dataset in a wide energy range, extending the experimental cross section database of this reaction. This paper presents a new cross section measurement between Ecm = 2.5 and 4.4 MeV, an energy range which extends above the 7Be proton separation threshold.

  8. Home in the heat: Dramatic seasonal variation in home range of desert golden eagles informs management for renewable energy development

    Science.gov (United States)

    Braham, Melissa A.; Miller, Tricia A.; Duerr, Adam E.; Lanzone, Michael J.; Fesnock, Amy; LaPre, Larry; Driscoll, Daniel; Katzner, Todd E.

    2015-01-01

    Renewable energy is expanding quickly with sometimes dramatic impacts to species and ecosystems. To understand the degree to which sensitive species may be impacted by renewable energy projects, it is informative to know how much space individuals use and how that space may overlap with planned development. We used global positioning system–global system for mobile communications (GPS-GSM) telemetry to measure year-round movements of golden eagles (Aquila chrysaetos) from the Mojave Desert of California, USA. We estimated monthly space use with adaptive local convex hulls to identify the temporal and spatial scales at which eagles may encounter renewable energy projects in the Desert Renewable Energy Conservation Plan area. Mean size of home ranges was lowest and least variable from November through January and greatest in February–March and May–August. These monthly home range patterns coincided with seasonal variation in breeding ecology, habitat associations, and temperature. The expanded home ranges in hot summer months included movements to cooler, prey-dense, mountainous areas characterized by forest, grasslands, and scrublands. Breeding-season home ranges (October–May) included more lowland semi-desert and rock vegetation. Overlap of eagle home ranges and focus areas for renewable energy development was greatest when eagle home ranges were smallest, during the breeding season. Golden eagles in the Mojave Desert used more space and a wider range of habitat types than expected and renewable energy projects could affect a larger section of the regional population than was previously thought.

  9. Development of dose monitoring system applicable to various radiations with wide energy ranges

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira; Yamaguchi, Yasuhiro

    2005-01-01

    A new radiation dose monitor, designated DARWIN (Dose monitoring system Applicable to various Radiations with WIde energy raNges), has been developed for monitoring doses in workspaces and the surrounding environments of high energy accelerator facilities. DARWIN is composed of a phoswich-type scintillation detector, which consists of the liquid organic scintillator BC501A coupled with ZnS(Ag) scintillation sheets doped with 6Li, and a data acquisition system based on a digital storage oscilloscope. Scintillations from the detector induced by thermal and fast neutrons, photons and muons are discriminated by analyzing their waveforms, and their light outputs are directly converted into the corresponding doses by applying the G-function method. The characteristics of DARWIN were studied by both calculation and experiment. The calculated results indicate that DARWIN gives reasonable dose estimates in most radiation fields. The experiment showed that DARWIN can measure doses from all particles that contribute significantly to the doses around accelerator facilities - neutrons, photons and muons over wide energy ranges. The experimental results also suggest that DARWIN makes it possible to monitor small fluctuations of neutron dose rates near the background level, owing to its high sensitivity. (author)

  10. Electron transport in furfural: dependence of the electron ranges on the cross sections and the energy loss distribution functions

    Science.gov (United States)

    Ellis-Gibbings, L.; Krupa, K.; Colmenares, R.; Blanco, F.; Muñoz, A.; Mendes, M.; Ferreira da Silva, F.; Limão-Vieira, P.; Jones, D. B.; Brunger, M. J.; García, G.

    2016-09-01

    Recent theoretical and experimental studies have provided a complete set of differential and integral electron scattering cross section data for furfural over a broad energy range. The energy loss distribution functions have been determined in this study by averaging electron energy loss spectra for different incident energies and scattering angles. All these data have been used as input parameters for an event-by-event Monte Carlo simulation procedure to obtain the electron energy deposition patterns and electron ranges in liquid furfural. The dependence of these results on the input cross sections is then analysed to determine the uncertainty of the simulated values.

  11. Forward and backward inference in spatial cognition.

    Directory of Open Access Journals (Sweden)

    Will D Penny

    This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.

  12. Efficiency calibration of an HPGe detector in the 50-1800 keV energy range

    International Nuclear Information System (INIS)

    Venturini, Luzia

    1996-01-01

    This paper describes the efficiency calibration of an HPGe detector in the 50-1800 keV energy range, for two water-measurement geometries: a Marinelli beaker (850 ml) and a polyethylene flask (100 ml). The experimental data were corrected for the summing effect and fitted to a continuous, differentiable, energy-dependent function given by ln(ε) = b0 + b1·ln(E/E0) + β·[ln(E/E0)]², where β = b2 if E > E0 and β = a2 if E ≤ E0; ε is the full absorption peak efficiency, E is the gamma-ray energy, and {b0, b1, b2, a2, E0} is the parameter set to be fitted. (author)
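
    The fitted form can be written down directly as a small function. A minimal Python sketch follows; the parameter values are invented placeholders, since the abstract does not give the fitted values:

```python
import numpy as np

# Piecewise log-quadratic efficiency model from the abstract. The
# parameter values below are invented placeholders; the paper fits
# {b0, b1, b2, a2, E0} to the measured efficiencies.
b0, b1, b2, a2, E0 = -5.0, -0.9, -0.15, 0.4, 120.0  # E in keV

def ln_efficiency(E):
    # Both branches share b0 and b1, so the curve and its first
    # derivative are continuous at E = E0.
    x = np.log(np.asarray(E, dtype=float) / E0)
    beta = np.where(np.asarray(E) > E0, b2, a2)
    return b0 + b1 * x + beta * x ** 2

def efficiency(E):
    # Full absorption peak efficiency epsilon(E).
    return np.exp(ln_efficiency(E))
```

    Only the quadratic coefficient switches at E0, which is what makes the fitted curve continuous and differentiable there.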

  13. Online Energy Management of Plug-In Hybrid Electric Vehicles for Prolongation of All-Electric Range Based on Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Zeyu Chen

    2015-01-01

    The energy management strategy employed plays an important role in the energy saving performance and exhaust emission reduction of plug-in hybrid electric vehicles (HEVs). In this paper, dynamic programming (DP) is applied to optimize the power allocation for a given driving cycle and a limited driving range. Because the DP algorithm can hardly be used for real-time control, owing to its heavy computational burden and its dependence on an a priori known driving cycle, several practical online control rules are established from the offline DP optimization results. On this basis, an online energy management strategy is proposed. The presented strategy addresses the prolongation of the all-electric driving range as well as the energy saving performance. A simulation study is deployed to evaluate the control performance of the proposed approach. The all-electric range of the plug-in HEV can be prolonged by up to 2.86% for a certain driving condition. The energy saving performance depends on the driving distance: the presented strategy incurs a slightly higher energy cost when the driving distance is short, but for a long driving distance it can reduce the energy consumption by up to 5.77% compared to the traditional charge-depleting/charge-sustaining (CD-CS) strategy.
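
    The offline stage is a standard backward DP over the battery state. A toy sketch, assuming an invented five-step power demand profile, a discretized battery energy state, and a made-up convex fuel-cost model (none of these numbers come from the paper):

```python
# Backward dynamic programming over battery energy for a plug-in HEV
# power split. The demand profile, battery size, engine power levels
# and fuel-cost model are all invented for this sketch.
demand = [12.0, 18.0, 9.0, 15.0, 11.0]      # kW requested per 1-min step
engine_opts = [0.0, 5.0, 10.0, 15.0, 20.0]  # allowed engine power levels (kW)
E_levels = [round(0.1 * k, 1) for k in range(0, 51)]  # battery energy, 0..5 kWh
dt_h = 1.0 / 60.0                            # step length in hours

def fuel_cost(p_engine):
    # Convex fuel model with an engine-on penalty (illustrative).
    return 0.0 if p_engine == 0.0 else 0.5 + 0.02 * p_engine ** 2

cost_to_go = {E: 0.0 for E in E_levels}      # terminal cost is zero
policy = []                                  # per-step lookup: energy -> engine power
for p_dem in reversed(demand):
    new_cost, step_policy = {}, {}
    for E in E_levels:
        best = None
        for p_e in engine_opts:
            p_batt = p_dem - p_e             # battery supplies the remainder
            E_next = round(E - p_batt * dt_h, 1)  # snap to the energy grid
            if E_next < 0.0 or E_next > 5.0:
                continue                     # infeasible battery state
            c = fuel_cost(p_e) + cost_to_go[E_next]
            if best is None or c < best[0]:
                best = (c, p_e)
        new_cost[E] = best[0] if best else float("inf")
        step_policy[E] = best[1] if best else None
    cost_to_go = new_cost
    policy.insert(0, step_policy)
```

    The resulting `policy` tables are the raw material from which online rules, like those described above, can be distilled: with enough stored energy the optimal split is all-electric, and the engine is only scheduled when the battery alone cannot cover the remaining demand.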

  14. Total photoabsorption cross section on nuclei measured in energy range 0.5-2.6 GeV

    International Nuclear Information System (INIS)

    Mirazita, M.

    1998-03-01

    The total photoabsorption cross section on several nuclei has been measured in the energy range 0.5-2.6 GeV. The nuclear data show a significant reduction of the absorption strength with respect to the free nucleon case, suggesting a shadowing effect at low energies.

  15. Long range supergravity coupling strengths

    International Nuclear Information System (INIS)

    Kenyon, I.R.

    1991-01-01

    A limit of 2×10⁻¹³ has recently been deduced for the fractional difference between the gravitational masses of the K⁰ and anti-K⁰ mesons. This limit is applied here to put stringent limits on the strengths of the long-range vector-scalar gravitational couplings envisaged in supergravity theories. A weaker limit is inferred from the general relativistic fit to the precession of the orbit of the pulsar PSR1913+16. (orig.)

  16. Violent heavy ion collisions around the Fermi energy

    International Nuclear Information System (INIS)

    Borderie, B.

    1985-01-01

    Experimental results on central collisions will be presented and it will be shown that a fusion process still occurs; the deexcitation of the hot fused systems formed will be discussed. Then, from the qualitative evolution of central collision products in different reactions studied in the E/A range 20-84 MeV, the vanishing of fusion processes will be inferred; it will be discussed in terms of the critical energy deposit and the maximum excitation energy per nucleon that nuclei can carry. Finally, results concerning the copious production of light fragments (3 ≲ Z ≲ 12) experimentally observed in the Fermi energy domain will be presented and discussed in terms of a multifragmentation of the whole nuclear system, or of part of it for intermediate impact parameter collisions. (109 refs, 49 fig)

  17. Bayesian Estimation and Inference using Stochastic Hardware

    Directory of Open Access Journals (Sweden)

    Chetan Singh Thakur

    2016-03-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits does not affect system performance because information is encoded in a bit stream.

  18. Bayesian Estimation and Inference Using Stochastic Electronics.

    Science.gov (United States)

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
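
    The Bayesian recursive equation that the BEAST-style tracker solves can be sketched in software. A minimal grid-based version follows; the grid size, transition model and sensor probabilities are all invented for illustration and are not taken from the paper:

```python
import random

random.seed(3)

N = 20                     # number of discrete 1-D positions (invented)
p_stay, p_move = 0.6, 0.2  # assumed transition model: stay, or hop left/right
p_correct = 0.8            # sensor reports the true cell with this probability

def predict(belief):
    # Diffuse the belief with the transition model (wrap-around world).
    out = [0.0] * N
    for i, p in enumerate(belief):
        out[i] += p_stay * p
        out[(i - 1) % N] += p_move * p
        out[(i + 1) % N] += p_move * p
    return out

def update(belief, obs):
    # One step of the Bayesian recursive equation: predict, then weight
    # by the observation likelihood and renormalise.
    pred = predict(belief)
    miss = (1.0 - p_correct) / (N - 1)
    post = [(p_correct if i == obs else miss) * p for i, p in enumerate(pred)]
    z = sum(post)
    return [p / z for p in post]

# Track a stationary target at cell 7 through noisy observations.
belief = [1.0 / N] * N
for _ in range(30):
    obs = 7 if random.random() < p_correct else random.randrange(N)
    belief = update(belief, obs)

estimate = belief.index(max(belief))
```

    The hardware implementations described above perform exactly this predict-weight-normalise loop, but with probabilities encoded as stochastic bit streams rather than floating-point numbers.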

  19. A formal model of interpersonal inference

    Directory of Open Access Journals (Sweden)

    Michael eMoutoussis

    2014-03-01

    Introduction: We propose that active Bayesian inference – a general framework for decision-making – can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regard to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: 1. Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to 'mentalising' in the psychological literature, is based upon the outcomes of interpersonal exchanges. 2. We show how some well-known social-psychological phenomena (e.g. self-serving biases) can be explained in terms of active interpersonal inference. 3. Mentalising naturally entails Bayesian updating of how people value social outcomes. Crucially, this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modelling intersubject variability in mentalising during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalising is distorted.

  20. Distributional Inference

    NARCIS (Netherlands)

    Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.

    1995-01-01

    The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is

  1. EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning.

    Science.gov (United States)

    Zhao, Chao; Jiang, Jingchi; Guan, Yi; Guo, Xitong; He, Bin

    2018-05-01

    Electronic medical records (EMRs) contain medical knowledge that can be used for clinical decision support (CDS). Our objective is to develop a general system that can extract and represent knowledge contained in EMRs to support three CDS tasks (test recommendation, initial diagnosis, and treatment plan recommendation) given the condition of a patient. We extracted four kinds of medical entities from records and constructed an EMR-based medical knowledge network (EMKN), in which nodes are entities and edges reflect their co-occurrence in a record. Three bipartite subgraphs (bigraphs) were extracted from the EMKN, one to support each task. One part of each bigraph was the given condition (e.g., symptoms), and the other was the condition to be inferred (e.g., diseases). Each bigraph was regarded as a Markov random field (MRF) to support the inference. We proposed three graph-based energy functions and three likelihood-based energy functions. Two of these functions are based on knowledge representation learning and can provide distributed representations of medical entities. Two EMR datasets and three metrics were utilized to evaluate the performance. As a whole, the evaluation results indicate that the proposed system outperformed the baseline methods, and the distributed representation of medical entities does reflect similarity relationships at the knowledge level. Combining EMKN and MRF is an effective approach for general medical knowledge representation and inference; different tasks, however, require individually designed energy functions.
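
    As a sketch of the bigraph idea, a toy energy function over invented symptom-disease co-occurrence counts (the paper's actual entities and energy functions are not given in the abstract, so everything below is illustrative):

```python
import math

# Toy bipartite graph in the spirit of the EMKN described above.
# Co-occurrence counts and entity names are invented for illustration.
cooccur = {  # (symptom, disease) -> co-occurrence count in records
    ("cough", "flu"): 30, ("fever", "flu"): 25,
    ("cough", "asthma"): 12, ("wheeze", "asthma"): 20,
}
diseases = ["flu", "asthma"]

def energy(symptoms, disease):
    # Lower energy = better fit. Pairwise potentials from log-counts,
    # a simple stand-in for the graph-based energy functions.
    return -sum(math.log1p(cooccur.get((s, disease), 0)) for s in symptoms)

def rank(symptoms):
    # Inference: order candidate diseases by increasing energy.
    return sorted(diseases, key=lambda d: energy(symptoms, d))

best = rank(["cough", "fever"])[0]
```

    The likelihood-based and representation-learning variants in the paper replace these raw-count potentials with learned embeddings, but the ranking-by-energy step is the same.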

  2. Prospects for bioenergy use in Ghana using Long-range Energy Alternatives Planning model

    International Nuclear Information System (INIS)

    Kemausuor, Francis; Nygaard, Ivan; Mackenzie, Gordon

    2015-01-01

    As Ghana's economy grows, the choice of future energy paths and policies in the coming years will have a significant influence on its energy security. A Renewable Energy Act approved in 2011 seeks to encourage the influx of renewable energy sources in Ghana's energy mix. The new legal framework, combined with increasing demand for energy, has created an opportunity for dramatic changes in the way energy is generated in Ghana. However, the impending changes and their implications remain uncertain. This paper examines the extent to which future energy scenarios in Ghana could rely on energy from biomass sources, through the production of biogas, liquid biofuels and electricity. The analysis was based on moderate and high use of bioenergy for transportation, electricity generation and residential fuel, using the LEAP (Long-range Energy Alternatives Planning) model. The results indicate that introducing bioenergy to the energy mix could reduce GHG (greenhouse gas) emissions by about 6 million tonnes CO2e by 2030, equivalent to a 14% reduction relative to a business-as-usual scenario. This paper advocates the use of second-generation ethanol for transport, to the extent that it is economically exploitable; resorting to first-generation ethanol would require the allocation of over 580,000 ha of agricultural land for ethanol production. - Highlights: • This paper examines modern bioenergy contribution to Ghana's future energy mix. • Three scenarios are developed and analysed. • Opportunities exist for modern bioenergy to replace carbon-intensive fuels. • Introducing modern bioenergy to the mix could result in a 14% reduction in GHG.

  3. Gene regulatory network inference by point-based Gaussian approximation filters incorporating the prior information.

    Science.gov (United States)

    Jia, Bin; Wang, Xiaodong

    2013-12-17

    The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5), are employed. Several types of network prior information, including existing network structure information, a sparsity assumption, and range constraints on parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.
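
    The primitive shared by these point-based filters is deterministic sigma-point propagation of a Gaussian through a nonlinearity, which avoids the linearisation step of the EKF. A minimal one-dimensional unscented-transform sketch (standard textbook weights, not code from the paper):

```python
import math

def unscented_moments(mean, var, f, kappa=2.0):
    # One-dimensional unscented transform: propagate a Gaussian N(mean, var)
    # through the nonlinearity f using three deterministic sigma points.
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(x) for x in points]
    m = sum(w * y for w, y in zip(weights, ys))
    v = sum(w * (y - m) ** 2 for w, y in zip(weights, ys))
    return m, v

# For a linear map the transform is exact: N(1, 4) under x -> 3x + 1
# has mean 4 and variance 36.
m, v = unscented_moments(1.0, 4.0, lambda x: 3.0 * x + 1.0)
```

    The cubature filters mentioned above differ only in how the points and weights are chosen; all of them feed such moment estimates into the usual Kalman predict-update cycle.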

  4. Continuous Integrated Invariant Inference, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...

  5. Development of dose monitoring system applicable to various radiations with wide energy ranges

    International Nuclear Information System (INIS)

    Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira

    2006-01-01

    A new radiation dose monitor, designated DARWIN (Dose monitoring system Applicable to various Radiations with WIde energy raNges), has been developed for real-time monitoring of doses in workspaces and the surrounding environments of high energy accelerator facilities. DARWIN is composed of a phoswich-type scintillation detector, which consists of the liquid organic scintillator BC501A coupled with ZnS(Ag) scintillation sheets doped with 6Li, and a data acquisition system based on a digital storage oscilloscope. DARWIN has the following features: (1) it is capable of monitoring doses from neutrons, photons and muons with energies from thermal energy to 1 GeV, 150 keV to 100 MeV, and 1 MeV to 100 GeV, respectively; (2) it is highly sensitive and precise; and (3) it is easy to operate through a simple graphical user interface. The performance of DARWIN was examined experimentally in several radiation fields. The results of the experiments confirmed the accuracy and rapid response of DARWIN for measuring dose rates from neutrons, photons and muons over wide energy ranges. With these properties, we conclude that DARWIN will be able to play a very important role in improving radiation safety in high energy accelerator facilities. (author)

  6. A comparison of ROC inferred from FROC and conventional ROC

    Science.gov (United States)

    McEntee, Mark F.; Littlefair, Stephen; Pietrzyk, Mariusz W.

    2014-03-01

    This study aims to determine whether receiver operating characteristic (ROC) scores inferred from free-response receiver operating characteristic (FROC) data are equivalent to conventional ROC scores for the same readers and cases. Forty-five examining radiologists of the American Board of Radiology independently reviewed 47 PA chest radiographs under at least two conditions. Thirty-seven cases had abnormal findings and 10 cases had normal findings. Half the readers were asked first to locate any visualized lung nodules, mark them and assign a level of confidence [the FROC mark-rating pair], and second to give an overall rating to the entire image on the same scale [the ROC score]. The other half of the readers gave the ROC rating first, followed by the FROC mark-rating pairs. A normal image was represented by the number 1 and malignant lesions by the numbers 2-5. A jackknife free-response receiver operating characteristic (JAFROC) figure of merit and an inferred ROC (infROC) were calculated from the mark-rating pairs using JAFROC V4.1 software. ROC AUCs based on the overall rating of the image were calculated using DBM MRMC software, which was also used to compare infROC and ROC AUCs by treating the methods as modalities. Pearson's correlation coefficient and linear regression were used to examine their relationship using SPSS, version 21.0 (SPSS, Chicago, IL). The results showed no significant difference between the ROC and inferred ROC AUCs (p≤0.25), while Pearson's correlation coefficient was 0.7 (p≤0.01). Inter-reader correlations calculated from Obuchowski-Rockette covariances ranged from 0.43 to 0.86, while intra-reader agreement was greater than previously reported, ranging from 0.68 to 0.82.
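The empirical ROC AUC underlying such comparisons can be computed directly from the 1-5 ratings via the Mann-Whitney statistic. This is a minimal sketch of the standard estimator, not the study's JAFROC or DBM MRMC software:

```python
import numpy as np

def roc_auc(scores_normal, scores_abnormal):
    """Empirical ROC AUC: the probability that an abnormal case outscores a
    normal one, with ties counted as 1/2 (Mann-Whitney U / (n0 * n1))."""
    s0 = np.asarray(scores_normal, float)    # ratings for normal cases
    s1 = np.asarray(scores_abnormal, float)  # ratings for abnormal cases
    greater = (s1[:, None] > s0[None, :]).sum()
    ties = (s1[:, None] == s0[None, :]).sum()
    return (greater + 0.5 * ties) / (len(s0) * len(s1))
```

For example, ratings of [1, 1, 2] on normal cases and [3, 4, 5] on abnormal cases give an AUC of 1.0 (perfect separation), while any overlap between the two rating distributions lowers the value toward 0.5.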

  7. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  8. Mass energy-absorption coefficients and average atomic energy-absorption cross-sections for amino acids in the energy range 0.122-1.330 MeV

    Energy Technology Data Exchange (ETDEWEB)

    More, Chaitali V., E-mail: chaitalimore89@gmail.com; Lokhande, Rajkumar M.; Pawar, Pravina P., E-mail: pravinapawar4@gmail.com [Department of physics, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004 (India)

    2016-05-06

    Mass attenuation coefficients of amino acids such as n-acetyl-l-tryptophan, n-acetyl-l-tyrosine and d-tryptophan were measured in the energy range 0.122-1.330 MeV. A NaI(Tl) scintillation detection system was used to detect gamma rays, with a resolution of 8.2% at 0.662 MeV. The measured attenuation coefficient values were then used to determine the mass energy-absorption coefficients (μ{sub en}/ρ) and average atomic energy-absorption cross-sections (σ{sub a,en}) of the amino acids. Theoretical values were calculated based on XCOM data. Theoretical and experimental values are found to be in good agreement.
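The chain from a measured transmission ratio to the derived quantities follows the Beer-Lambert law. A minimal sketch under assumed inputs (the function names and example numbers are ours, not from the record):

```python
import math

def mass_attenuation_coefficient(I0, I, mass_thickness):
    """mu/rho [cm^2/g] from narrow-beam transmission I/I0 through a sample of
    mass thickness t = rho*x [g/cm^2], using I = I0 * exp(-(mu/rho) * t)."""
    return math.log(I0 / I) / mass_thickness

def average_atomic_cross_section(mu_rho, molar_mass, atoms_per_formula):
    """Average atomic cross-section [cm^2/atom]:
    sigma_a = (mu/rho) * M / (N_A * n_atoms), M in g/mol."""
    N_A = 6.02214076e23  # Avogadro constant [1/mol]
    return mu_rho * molar_mass / (N_A * atoms_per_formula)
```

For instance, a transmission of exp(-0.04) through 0.5 g/cm^2 of material implies mu/rho = 0.08 cm^2/g; dividing by the number of atoms per formula unit then gives the per-atom energy-absorption cross-section once the energy-absorption fraction is known.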

  9. The astrophysical S-factor for dd-reactions at keV-energy range

    International Nuclear Information System (INIS)

    Bystritskii, V.; Bystritsky, V.; Chaikovsky, S.

    2001-01-01

    The experimental results of measurements of the astrophysical S-factor for the dd-reaction at keV-range collision energies using the liner plasma technique are presented. The experiments were carried out at the high-current generator of the Institute of High-Current Electronics in Tomsk, Russia. The measured values of the S-factor for deuteron collision energies of 1.80, 2.06 and 2.27 keV are S{sub dd} = (114±68), (64±30) and (53±16) keV·b, respectively. The corresponding cross sections for dd-reactions, described as a product of the barrier factor and the measured astrophysical S-factor, are σ{sub dd}{sup n}(E{sub col} = 1.80 keV) = (4.3±2.6)×10{sup -33} cm{sup 2}; σ{sub dd}{sup n}(E{sub col} = 2.06 keV) = (9.8±4.6)×10{sup -33} cm{sup 2}; σ{sub dd}{sup n}(E{sub col} = 2.27 keV) = (2.1±0.6)×10{sup -32} cm{sup 2}. (orig.) [de]
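The quoted cross sections follow from the S-factor through the Coulomb barrier factor, σ(E) = S(E)/E · exp(−2πη). A sketch using the common low-energy parameterization 2πη ≈ 31.29·Z1·Z2·√(μ/E[keV]) with μ in amu — the coefficient and the d+d reduced mass below are standard textbook values, not taken from this record:

```python
import math

def dd_cross_section(E_keV, S_keV_barn, mu_amu=1.007):
    """Non-resonant fusion cross-section sigma(E) = S(E)/E * exp(-2*pi*eta),
    with Sommerfeld parameter 2*pi*eta ~= 31.29 * Z1 * Z2 * sqrt(mu / E[keV])
    (Z1 = Z2 = 1 for d+d, mu ~= m_d/2 ~= 1.007 amu). Returns sigma in barn."""
    two_pi_eta = 31.29 * math.sqrt(mu_amu / E_keV)
    return (S_keV_barn / E_keV) * math.exp(-two_pi_eta)
```

Plugging in the measured S-factors reproduces the quoted cross sections to within rounding: for example, E = 2.06 keV with S = 64 keV·b gives roughly 1e-32 cm^2 (1 barn = 1e-24 cm^2).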

  10. Generalized Efficient Inference on Factor Models with Long-Range Dependence

    DEFF Research Database (Denmark)

    Ergemen, Yunus Emre

    A dynamic factor model is considered that contains stochastic time trends allowing for stationary and nonstationary long-range dependence. The model nests standard I(0) and I(1) behaviour smoothly in common factors and residuals, removing the necessity of a priori unit-root and stationarity testing. Short-memory dynamics are allowed in the common factor structure and a possibly heteroskedastic error term. In the estimation, a generalized version of the principal components (PC) approach is proposed to achieve efficiency. Asymptotics for efficient common factor and factor loading as well as long…

  11. Calibration of a large multi-element neutron counter in the energy range 85-430 MeV

    CERN Document Server

    Strong, J A; Esterling, R J; Garvey, J; Green, M G; Harnew, N; Jane, M R; Jobes, M; Mawson, J; McMahon, T; Robertson, A W; Thomas, D H

    1978-01-01

    Describes the calibration of a large, 60-element neutron counter with a threshold of 2.7 MeV equivalent electron energy. The performance of the counter has been measured in the neutron kinetic energy range 85-430 MeV using a neutron beam at the CERN Synchrocyclotron. The results obtained for the efficiency as a function of energy are in reasonable agreement with a Monte Carlo calculation. (7 refs).

  12. Investigation of the 232Th neutron cross-sections in resonance energy range

    International Nuclear Information System (INIS)

    Grigoriev, Yu.V.; Kitaev, V.Ya.; Sinitsa, V.V.; Zhuravlev, B.V.; Borzakov, S.B.; Faikov-Stanchik, H.; Ilchev, G.L.; Panteleev, Ts.Ts.; Kim, G.N.

    2001-01-01

    The uranium-thorium cycle is an alternative path in the development of atomic energy. In connection with this, measurements of the 232 Th neutron capture and total cross-sections and of its resonance self-shielding coefficients in the resonance energy range are necessary because of their currently low accuracy. In this work, the results of investigations of the thorium-232 neutron cross-sections are presented. The measurements were carried out with a multisection liquid gamma-ray detector and with a neutron detector consisting of a battery of boron counters on the 120 m flight path of the pulsed fast reactor IBR-30. Metallic disks of various thicknesses and 45 mm diameter were used as filter samples. Two plates of metallic thorium, 0.2 mm thick and 4.5x4.5 cm 2 in area, were used as radiator samples. Group neutron total and capture cross-sections with an accuracy of 2-7% in the energy range 10 eV-10 keV were obtained from the transmissions and from the summed γ-ray spectra of multiplicities four to seven. The neutron capture group cross-sections of 238 U were used as the standard for obtaining the thorium values. Analogous values were calculated with the GRUCON code using the ENDF/B-6 and JENDL-3 evaluated data libraries. Within the limits of experimental error, agreement between experiment and calculation is observed, but in some groups the experimental values are larger than the calculated ones. (author)

  13. Standardization of radiation protection measurements in mixed fields of an extended energy range

    International Nuclear Information System (INIS)

    Hoefert, M.; Stevenson, G.R.

    1977-01-01

    The improved ICRU concept of dose equivalent index aims at standardizing both area and personnel dose measurements so that the results on the dosimetry of external irradiations in radiation protection become compatible. It seems that for photon and neutron energies up to 3 and 20 MeV respectively the realization of dose-equivalent index is straightforward, but the inclusion of higher energies and/or other types of radiation will lead both to conceptual and practical difficulties. It will be shown that practical measurements in mixed radiation fields of an extended energy range for protection purposes will overestimate the standardized quantity. While area measurements can be performed to represent a good approximation, greater uncertainties have to be accepted in personnel dosimetry for stray radiation fields around GeV proton accelerators

  14. Quantum-Like Representation of Non-Bayesian Inference

    Science.gov (United States)

    Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.

    2013-01-01

    This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are some experimental studies whose statistical data cannot be described by classical probability theory, and the process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented the classical Bayesian inference in a natural way in the framework of quantum mechanics. By using this representation, in this paper we discuss the non-Bayesian (irrational) inference that is biased by effects such as quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of the usual Bayesian inference.

  15. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation … analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed …

  16. Testing a new multigroup inference approach to reconstructing past environmental conditions

    Directory of Open Access Journals (Sweden)

    Maria RIERADEVALL

    2008-08-01

    A new, quantitative, inference model for environmental reconstruction (transfer function), based for the first time on the simultaneous analysis of multigroup species, has been developed. Quantitative reconstructions based on palaeoecological transfer functions provide a powerful tool for addressing questions of environmental change in a wide range of environments, from oceans to mountain lakes, and over a range of timescales, from decades to millions of years. Much progress has been made in the development of inferences based on multiple proxies, but usually these have been considered separately, and the different numeric reconstructions compared and reconciled post-hoc. This paper presents a new method to combine information from multiple biological groups at the reconstruction stage. The aim of the multigroup work was to test the potential of the new approach to make improved inferences of past environmental change by improving upon current reconstruction methodologies. The taxonomic groups analysed include diatoms, chironomids and chrysophyte cysts. We test the new methodology using two cold-environment training sets, namely mountain lakes from the Pyrenees and the Alps. The use of multiple groups, as opposed to single groupings, was only found to increase the reconstruction skill slightly, as measured by the root mean square error of prediction (leave-one-out cross-validation), in the case of alkalinity, dissolved inorganic carbon and altitude (a surrogate for air temperature), but not for pH or dissolved CO2. Reasons why the improvement was less than might have been anticipated are discussed. These include the different life-forms, environmental responses and reaction times of the groups under study.
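Weighted averaging is a standard transfer-function method of the kind being extended here, and it can be sketched compactly: species optima are abundance-weighted means of the observed environment, and the inferred environment is an optimum-weighted mean of the abundances. This is our simplified illustration (no deshrinking step), not the paper's model; a multigroup variant would simply concatenate the diatom, chironomid and chrysophyte taxon matrices column-wise.

```python
import numpy as np

def wa_fit(Y, x):
    """Species optima by weighted averaging: u_k = sum_i y_ik*x_i / sum_i y_ik.
    Y: (n_sites, n_taxa) abundances; x: (n_sites,) observed environment."""
    return (Y.T @ x) / Y.sum(axis=0)

def wa_predict(Y, optima):
    """Inferred environment at each site: x_i = sum_k y_ik*u_k / sum_k y_ik."""
    return (Y @ optima) / Y.sum(axis=1)
```

In practice the predictions are shrunk toward the training-set mean, so a deshrinking regression is applied afterwards and skill is assessed by leave-one-out root mean square error of prediction, as in the record above.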

  17. Cultural effects on the association between election outcomes and face-based trait inferences.

    Directory of Open Access Journals (Sweden)

    Chujun Lin

    How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearance and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, only four cross-cultural studies have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies provided some initial evidence of cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearance on election outcomes. Additionally, these four studies focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants' inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the

  18. Statistical inference: an integrated Bayesian/likelihood approach

    CERN Document Server

    Aitkin, Murray

    2010-01-01

    Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing.After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre

  19. Inference of gene regulatory networks from time series by Tsallis entropy

    Directory of Open Access Journals (Sweden)

    de Oliveira Evaldo A

    2011-05-01

    Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is here proposed. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks and its gene transference function is obtained from random drawing on the set of possible Boolean functions, thus creating its dynamics. On the other hand, the DREAM time series data present variation of network size and their topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement of accuracy was observed in the experimental results, the non-Shannon entropy reducing the number of false connections in the inferred topology. The obtained best free parameter of the Tsallis entropy was on average in the range 2.5
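The Tsallis entropy used as the criterion function has the closed form S_q(p) = (1 − Σ_i p_i^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A minimal sketch (our illustration of the formula, not the paper's feature-selection pipeline):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of a discrete
    distribution p; reduces to the Shannon entropy as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        # q = 1 limit: Shannon entropy in nats
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```

In the GRN setting, the conditional version of this quantity (entropy of a target gene's expression given a candidate predictor set) would be minimized over predictor sets; q > 1 penalizes spread differently than the Shannon case, which is the lever the record reports as reducing false connections.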

  20. Theoretical photoionization spectra in the UV photon energy range for a Mg-like Al+ ion

    International Nuclear Information System (INIS)

    Kim, Dae-Soung; Kim, Young Soon

    2008-01-01

    In the present work, we report the photoionization cross sections of the Al+ ion calculated for the photon energy ranges 20-26 eV and 30-50 eV. We have expanded our previous calculation (2007 J. Phys. Soc. Japan 76 014302) with an optimized admixture of the initial ground state 3s² ¹S and excited states 3s3p ¹,³P, 3s3d ¹,³D and 3s4s ¹,³S, and obtained significantly improved predictions for the main background and autoionizing resonance structures of the reported experimental spectra. Absolute measurements of the photoionization cross sections of the Al+ ion in these energy ranges have been performed by West et al (2001 Phys. Rev. A 63 052719), who reported that the prominent peaks around 21 eV were attributable to the significant influence of the small fraction of fourth-order radiation, with energies around 84 eV, from the synchrotron source. In our previous work, the main shape of these cross sections was calculated assuming an admixture of the initial 3s² ¹S and 3s3p ³P states only, with a rough overall estimate for the experimental spectra in the photon energy range 20-26 eV, and without these peaks around 21 eV. The report of the experimental assignment attributes these peaks to the excitation of a 2p electron from the core. However, our present results with the new admixture reveal similar peaks without considering the possibility of core excitation.

  1. Graphical models for inferring single molecule dynamics

    Directory of Open Access Journals (Sweden)

    Gonzalez Ruben L

    2010-10-01

    Background: The recent explosion of experimental techniques in single-molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization (VBEM) algorithm. The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results: The VBEM algorithm returns the model's evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model's parameters learned from the data. ME/VBEM provides several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions: The results demonstrate the utility of graphical modeling for inference of dynamic processes in single-molecule biophysics.
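At the core of the HMM treatment is the data likelihood that VBEM bounds. A minimal scaled forward-algorithm sketch for an HMM with 1-D Gaussian emissions, as for a two-state (low/high FRET efficiency) trace — this is an illustration of the likelihood computation only; VBEM additionally maintains approximate posteriors over the parameters:

```python
import math
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hmm_loglik(obs, pi0, A, mus, sigmas):
    """Scaled forward algorithm: log p(obs) for an HMM with initial state
    probabilities pi0, transition matrix A (np.ndarray, rows sum to 1),
    and Gaussian emission parameters (mus, sigmas) per state."""
    K = len(pi0)
    alpha = np.array([pi0[k] * gaussian_pdf(obs[0], mus[k], sigmas[k]) for k in range(K)])
    c = alpha.sum(); alpha /= c          # rescale to avoid underflow
    ll = math.log(c)
    for x in obs[1:]:
        alpha = (alpha @ A) * np.array([gaussian_pdf(x, mus[k], sigmas[k]) for k in range(K)])
        c = alpha.sum(); alpha /= c
        ll += math.log(c)
    return ll
```

For a hypothetical smFRET trace one might use A = [[0.95, 0.05], [0.05, 0.95]] with mus = [0.2, 0.8]; comparing such likelihoods (or, in VBEM, evidences) across state counts is what drives model selection.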

  2. Possible manifestation of long range forces in high energy hadron collisions

    International Nuclear Information System (INIS)

    Kuraev, Eh.A.; Ferro, P.; Trentadue, L.

    1997-01-01

    Pion-pion and photon-photon scattering are discussed. Starting from the impact representation introduced by Cheng and Wu, we obtain a new contribution to the high-energy hadron-hadron scattering amplitude for small transferred momentum q², of the form is(q²/m⁴)ln(-q²/m²). This behaviour may be interpreted as a manifestation of long transverse-range forces between hadrons which, for ρ ≫ m⁻¹, fall off as ρ⁻⁴. We consider the examples of pion and photon scattering, with photons converted in the intermediate state into two pairs of quarks interacting by the exchange of a colourless two-gluon state. A phenomenological approach for the proton impact factor is used to analyze proton-proton scattering. An analysis of the lowest-order radiative corrections for the case of photon-photon scattering is performed. We discuss the possibility of observing the effects of these long-range forces.

  3. Sibship reconstruction for inferring mating systems, dispersal and effective population size in headwater brook trout (Salvelinus fontinalis) populations

    Science.gov (United States)

    Kanno, Yoichiro; Vokoun, Jason C.; Letcher, Benjamin H.

    2011-01-01

    Brook trout Salvelinus fontinalis populations have declined in much of the native range in eastern North America and populations are typically relegated to small headwater streams in Connecticut, USA. We used sibship reconstruction to infer mating systems, dispersal and effective population size of resident (non-anadromous) brook trout in two headwater stream channel networks in Connecticut. Brook trout were captured via backpack electrofishing using spatially continuous sampling in the two headwaters (channel network lengths of 4.4 and 7.7 km). Eight microsatellite loci were genotyped in a total of 740 individuals (80–140 mm) subsampled in a stratified random design from all 50 m-reaches in which trout were captured. Sibship reconstruction indicated that males and females were both mostly polygamous although single pair matings were also inferred. Breeder sex ratio was inferred to be nearly 1:1. Few large-sized fullsib families (>3 individuals) were inferred and the majority of individuals were inferred to have no fullsibs among those fish genotyped (family size = 1). The median stream channel distance between pairs of individuals belonging to the same large-sized fullsib families (>3 individuals) was 100 m (range: 0–1,850 m) and 250 m (range: 0–2,350 m) in the two study sites, indicating limited dispersal at least for the size class of individuals analyzed. Using a sibship assignment method, the effective population size for the two streams was estimated at 91 (95%CI: 67–123) and 210 (95%CI: 172–259), corresponding to the ratio of effective-to-census population size of 0.06 and 0.12, respectively. Both-sex polygamy, low variation in reproductive success, and a balanced sex ratio may help maintain genetic diversity of brook trout populations with small breeder sizes persisting in headwater channel networks.

  4. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and projected ranges of He+ ions in metals: Nb

    International Nuclear Information System (INIS)

    Kaminsky, M.; Das, S.K.; Fenske, G.

    1975-01-01

    The skin thicknesses of blisters formed on niobium by helium-ion irradiation at room temperature for energies from 100 to 1500 keV have been measured. The projected ranges of helium ions in Nb for this energy range were calculated using either Brice's formalism or the one given by Schiott. For the damage-energy distribution Brice's formalism was used. The measured skin-thickness values correlate more closely with the maxima in the projected-range probability distributions than with the maxima in the damage-energy distributions. The results are consistent with our model for blister formation and rupture proposed earlier

  5. Type Inference with Inequalities

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff

    1991-01-01

    of (monotonic) inequalities on the types of variables and expressions. A general result about systems of inequalities over semilattices yields a solvable form. We distinguish between deciding typability (the existence of solutions) and type inference (the computation of a minimal solution). In our case, both......Type inference can be phrased as constraint-solving over types. We consider an implicitly typed language equipped with recursive types, multiple inheritance, 1st order parametric polymorphism, and assignments. Type correctness is expressed as satisfiability of a possibly infinite collection...

  6. Information Geometry, Inference Methods and Chaotic Energy Levels Statistics

    OpenAIRE

    Cafaro, Carlo

    2008-01-01

    In this Letter, we propose a novel information-geometric characterization of chaotic (integrable) energy level statistics of a quantum antiferromagnetic Ising spin chain in a tilted (transverse) external magnetic field. Finally, we conjecture our results might find some potential physical applications in quantum energy level statistics.

  7. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and the projected ranges of He+ ions in metals

    International Nuclear Information System (INIS)

    Das, S.K.; Kaminsky, M.; Fenske, G.

    1976-01-01

    The skin thicknesses of blisters formed on aluminium by helium-ion irradiation at room temperature for energies from 100 to 1000 keV have been measured. The projected ranges of helium ions in Al for this energy range were calculated using either Brice's formalism (Brice, D.K., 1972, Phys. Rev., vol. A6, 1791) or the one given by Schioett (Schioett, H.E., 1966, K. Danske Vidensk. Selsk., Mat.-Fys. Meddr., vol. 35, No. 9). For the damage-energy distribution Brice's formalism was used. The measured skin-thickness values are smaller than the calculated values of the maxima in the projected-range distributions over the entire energy range studied. These results on the ductile metal aluminium are contrasted with the results on the relatively brittle refractory metals V and Nb, where the measured skin-thickness values correlate more closely with the maxima in the projected-range probability distributions than with the maxima in the damage-energy distributions. Processes affecting the blister skin fracture and the skin thickness are discussed. (author)

  8. High energy storage density over a broad temperature range in sodium bismuth titanate-based lead-free ceramics.

    Science.gov (United States)

    Yang, Haibo; Yan, Fei; Lin, Ying; Wang, Tong; Wang, Fen

    2017-08-18

    A series of (1-x)Bi0.48La0.02Na0.48Li0.02Ti0.98Zr0.02O3-xNa0.73Bi0.09NbO3 ((1-x)LLBNTZ-xNBN) (x = 0-0.14) ceramics was designed and fabricated using the conventional solid-state sintering method. The phase structure, microstructure, dielectric, ferroelectric and energy storage properties of the ceramics were systematically investigated. The results indicate that the addition of Na0.73Bi0.09NbO3 (NBN) decreases the remnant polarization (Pr) and markedly improves the temperature stability of the dielectric constant. The working temperature range satisfying TCC(150 °C) ≤ ±15% spans over 400 °C for compositions with x ≥ 0.06. The maximum energy storage density is obtained for the sample with x = 0.10 at room temperature: 2.04 J/cm3 at 178 kV/cm. In addition, the (1-x)LLBNTZ-xNBN ceramics exhibit excellent energy storage properties over a wide temperature range from room temperature to 90 °C. The energy storage density and energy storage efficiency are 0.91 J/cm3 and 79.51%, respectively, for the 0.90LLBNTZ-0.10NBN ceramic at 100 kV/cm and 90 °C. It can be concluded that the (1-x)LLBNTZ-xNBN ceramics are promising lead-free candidate materials for energy storage devices over a broad temperature range.
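The reported energy storage density and efficiency come from integrating the polarization-electric field (P-E) loop: W_rec = ∫ E dP over the discharge branch and η = W_rec/(W_rec + W_loss). A unit-consistent sketch (the example loop below is a hypothetical linear-dielectric discharge, not the paper's measured data):

```python
import numpy as np

def recoverable_energy_density(E_kV_cm, P_uC_cm2):
    """W_rec = integral of E dP along the discharge branch of the P-E loop.
    E in kV/cm, P in uC/cm^2; 1 (kV/cm)*(uC/cm^2) = 1e-3 J/cm^3."""
    E = np.asarray(E_kV_cm, float)
    P = np.asarray(P_uC_cm2, float)
    # trapezoidal rule: sum of 0.5*(E_i + E_{i+1})*(P_{i+1} - P_i)
    W = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(P))
    return W * 1e-3

def storage_efficiency(W_rec, W_loss):
    """eta = W_rec / (W_rec + W_loss): fraction of stored energy recovered."""
    return W_rec / (W_rec + W_loss)
```

For an idealized linear discharge from (30 uC/cm^2, 178 kV/cm) down to the origin, W_rec = 0.5 * E_max * P_max = 2.67 J/cm^3, the same order as the 2.04 J/cm^3 reported for the real (hysteretic) loop.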

  9. CGC/saturation approach for soft interactions at high energy: long range rapidity correlations

    International Nuclear Information System (INIS)

    Gotsman, E.; Maor, U.; Levin, E.

    2015-01-01

    In this paper we continue our program to construct a model for high-energy soft interactions based on the CGC/saturation approach. The main result of this paper is that we have discovered a mechanism that leads to large long-range rapidity correlations and results in large values of the correlation function R(y1, y2) ≥ 1, independent of y1 and y2. Such a behavior of the correlation function provides strong support for the idea that at high energies the system of partons that is produced is not only dense but also has strong attractive forces acting between the partons. (orig.)

  10. Inference in models with adaptive learning

    NARCIS (Netherlands)

    Chevillon, G.; Massmann, M.; Mavroeidis, S.

    2010-01-01

    Identification of structural parameters in models with adaptive learning can be weak, causing standard inference procedures to become unreliable. Learning also induces persistent dynamics, and this makes the distribution of estimators and test statistics non-standard. Valid inference can be

  11. Energy loss of muons in the energy range 1-10000 GeV

    International Nuclear Information System (INIS)

    Lohmann, W.; Kopp, R.; Voss, R.

    1985-01-01

    A summary is given of the most recent formulae for the cross-sections contributing to the energy loss of muons in matter, notably due to electromagnetic interactions (ionization, bremsstrahlung and electron-pair production) and nuclear interactions. Computed energy losses dE/dx are tabulated for muons with energies between 1 GeV and 10,000 GeV in a number of materials commonly used in high-energy physics experiments. In comparison with earlier tables, these show deviations that grow with energy and amount to several per cent at 200 GeV muon energy. (orig.)
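    Tabulated muon losses of this kind are commonly summarized by the two-term parametrization -dE/dx ≈ a(E) + b(E)E, with a the ionization term and b the combined radiative term; the growth of the radiative part with energy is why such tables deviate from each other at high energy. A minimal sketch with placeholder coefficients of roughly the right order for standard rock (the coefficient values are illustrative assumptions, not numbers from these tables):

```python
def muon_energy_loss(E_GeV, a=2.0e-3, b=4.0e-6):
    """Approximate average muon energy loss -dE/dx in GeV cm^2/g.

    a: ionization term (GeV cm^2/g); b: combined radiative term
    (bremsstrahlung, pair production, photonuclear; cm^2/g).
    Both coefficients are illustrative placeholders, not values
    taken from the tables described above.
    """
    return a + b * E_GeV

# Radiative losses overtake ionization near the critical energy a/b,
# which is why deviations between tabulations grow with energy.
critical_energy = 2.0e-3 / 4.0e-6  # 500 GeV for these placeholder coefficients
```

For these placeholder values the loss at 10,000 GeV is dominated by the bE term, roughly twenty times the 1 GeV loss.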

  12. Bayesian electron density inference from JET lithium beam emission spectra using Gaussian processes

    Science.gov (United States)

    Kwak, Sehyun; Svensson, J.; Brix, M.; Ghim, Y.-C.; Contributors, JET

    2017-03-01

    A Bayesian model to infer edge electron density profiles is developed for the JET lithium beam emission spectroscopy (Li-BES) system, measuring Li I (2p-2s) line radiation using 26 channels with ∼1 cm spatial resolution and 10-20 ms temporal resolution. The density profile is modelled using a Gaussian process prior, and the uncertainty of the density profile is calculated by a Markov chain Monte Carlo (MCMC) scheme. From the spectra measured by the transmission grating spectrometer, the Li I line intensities are extracted and modelled as a function of the plasma density by a multi-state model which describes the relevant processes between neutral lithium beam atoms and plasma particles. The spectral model fully takes into account interference filter and instrument effects, which are separately estimated, again using Gaussian processes. The line intensities are inferred based on a spectral model consistent with the measured spectra within their uncertainties, which include photon statistics and electronic noise. Our newly developed method to infer JET edge electron density profiles has the following advantages over the conventional method: (i) it provides full posterior distributions of edge density profiles, including their associated uncertainties; (ii) the available radial range for density profiles is increased to the full observation range (∼26 cm); (iii) an assumption of a monotonic electron density profile is not necessary; (iv) the absolute calibration factor of the diagnostic system is automatically estimated, overcoming the limitation of the conventional technique and allowing us to infer the electron density profiles for all pulses without preprocessing the data or an additional boundary condition; and (v) since the full spectrum is modelled, the procedure of modulating the beam to measure the background signal is only necessary in the case of overlap of the Li I line with impurity lines.
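    The core of such a Gaussian-process profile inference can be illustrated with a generic GP regression step: a squared-exponential prior over profiles, conditioned on noisy channel measurements, yields a posterior mean and a pointwise uncertainty. This is a minimal sketch of the general technique, not the JET Li-BES forward model; the kernel choice, length scale and noise level are assumptions:

```python
import numpy as np

def sq_exp_kernel(x1, x2, sigma_f=1.0, ell=2.0):
    """Squared-exponential covariance: the Gaussian process prior assumed here."""
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=0.1):
    """Posterior mean and pointwise variance of the profile at x_new,
    conditioned on noisy observations (y_obs at x_obs)."""
    K = sq_exp_kernel(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    K_s = sq_exp_kernel(x_new, x_obs)
    K_ss = sq_exp_kernel(x_new, x_new)
    alpha = np.linalg.solve(K, y_obs)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)
```

The posterior variance shrinks to roughly the noise level near the measurement channels and relaxes back to the prior variance outside the observed range, which is how such a model reports profile uncertainty.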

  13. Fiducial inference - A Neyman-Pearson interpretation

    NARCIS (Netherlands)

    Salome, D; VonderLinden, W; Dose,; Fischer, R; Preuss, R

    1999-01-01

    Fisher's fiducial argument is a tool for deriving inferences in the form of a probability distribution on the parameter space, not based on Bayes's Theorem. Lindley established that in exceptional situations fiducial inferences coincide with posterior distributions; in the other situations fiducial

  14. Uncertainty in prediction and in inference

    NARCIS (Netherlands)

    Hilgevoord, J.; Uffink, J.

    1991-01-01

    The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in

  15. Polynomial Chaos Surrogates for Bayesian Inference

    KAUST Repository

    Le Maitre, Olivier

    2016-01-06

    Bayesian inference is a popular probabilistic method for solving inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on Bayes' rule to update the prior density of the sought field from observations and derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g. by means of a Karhunen-Loève decomposition) to recast the inference problem into the inference of a finite number of coordinates. Although proved effective in many situations, Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task, as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced models or surrogates for the (approximate) determination of the parametrized PDE solution and the efficient treatment of hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of posterior sampling by means of Polynomial Chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making this approach adaptive to further improve its efficiency.
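    The two-stage idea described above (an offline surrogate fit, then cheap posterior sampling) can be sketched in one dimension: a Legendre expansion stands in for the Polynomial Chaos basis, and a random-walk Metropolis sampler calls only the surrogate instead of the expensive model. The forward model, degree, prior range and tuning values here are illustrative assumptions:

```python
import numpy as np

def expensive_forward_model(theta):
    # stand-in for a costly PDE solve; the surrogate replaces calls to this
    return np.sin(theta) + 0.1 * theta**2

# Offline stage: fit a polynomial surrogate on a handful of training runs,
# with the parameter mapped to [-1, 1] as in a Legendre chaos basis.
train = np.linspace(-2.0, 2.0, 9)
coeffs = np.polynomial.legendre.legfit(train / 2.0, expensive_forward_model(train), deg=6)
surrogate = lambda theta: np.polynomial.legendre.legval(theta / 2.0, coeffs)

# Online stage: random-walk Metropolis that never touches the expensive model.
def metropolis(y_obs, sigma=0.2, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    log_like = lambda t: -0.5 * ((y_obs - surrogate(t)) / sigma) ** 2
    theta, chain = 0.0, []
    for _ in range(n_steps):
        prop = theta + 0.5 * rng.standard_normal()
        # uniform prior on [-2, 2]: reject proposals outside the training range
        if abs(prop) <= 2.0 and np.log(rng.uniform()) < log_like(prop) - log_like(theta):
            theta = prop
        chain.append(theta)
    return np.array(chain)
```

Every likelihood evaluation in the chain costs one polynomial evaluation rather than one model solve, which is the source of the speed-up the abstract refers to.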

  16. Interactive Instruction in Bayesian Inference

    DEFF Research Database (Denmark)

    Khan, Azam; Breslav, Simon; Hornbæk, Kasper

    2018-01-01

    An instructional approach is presented to improve human performance in solving Bayesian inference problems. Starting from the original text of the classic Mammography Problem, the textual expression is modified and visualizations are added according to Mayer’s principles of instruction. These principles concern coherence, personalization, signaling, segmenting, multimedia, spatial contiguity, and pretraining. Principles of self-explanation and interactivity are also applied. Four experiments on the Mammography Problem showed that these principles help participants answer the questions, suggesting that an instructional approach to improving human performance in Bayesian inference is a promising direction.

  17. Inferring Phylogenetic Networks Using PhyloNet.

    Science.gov (United States)

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  18. Active inference and learning.

    Science.gov (United States)

    Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O'Doherty, John; Pezzulo, Giovanni

    2016-09-01

    This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Greater future global warming inferred from Earth's recent energy budget.

    Science.gov (United States)

    Brown, Patrick T; Caldeira, Ken

    2017-12-06

    Climate models provide the principal means of projecting global warming over the remainder of the twenty-first century but modelled estimates of warming vary by a factor of approximately two even under the same radiative forcing scenarios. Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth's top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (-1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change. Our results suggest that achieving any given global temperature stabilization target will require steeper greenhouse gas emissions reductions than previously calculated.

  20. Active Inference, homeostatic regulation and adaptive behavioural control.

    Science.gov (United States)

    Pezzulo, Giovanni; Rigoli, Francesco; Friston, Karl

    2015-11-01

    We review a theory of homeostatic regulation and adaptive behavioural control within the Active Inference framework. Our aim is to connect two research streams that are usually considered independently; namely, Active Inference and associative learning theories of animal behaviour. The former uses a probabilistic (Bayesian) formulation of perception and action, while the latter calls on multiple (Pavlovian, habitual, goal-directed) processes for homeostatic and behavioural control. We offer a synthesis of these classical processes and cast them as successive hierarchical contextualisations of sensorimotor constructs, using the generative models that underpin Active Inference. This dissolves any apparent mechanistic distinction between the optimization processes that mediate classical control or learning. Furthermore, we generalize the scope of Active Inference by emphasizing interoceptive inference and homeostatic regulation. The ensuing homeostatic (or allostatic) perspective provides an intuitive explanation for how priors act as drives or goals to enslave action, and emphasises the embodied nature of inference. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general...
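    The ABC-MCMC idea can be reduced to a toy sketch: in place of the SDE model, a cheap stochastic simulator is run at each proposed parameter, and a move is accepted only when a summary statistic of the simulated data falls within a tolerance eps of the observed one (a Marjoram-style acceptance rule; the simulator, summary statistic, and tuning values below are assumptions for illustration):

```python
import numpy as np

def simulate(theta, rng, n=100):
    # stand-in for the stochastic model: draws from N(theta, 1)
    return rng.normal(theta, 1.0, size=n)

def abc_mcmc(y_obs, eps=0.2, n_iter=3000, step=0.3, seed=1):
    """Likelihood-free MCMC: the intractable likelihood is replaced by a
    hard-threshold comparison of simulated vs. observed summary statistics."""
    rng = np.random.default_rng(seed)
    s_obs = y_obs.mean()          # summary statistic of the observed data
    theta = s_obs                 # pragmatic start at the observed summary
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        s_sim = simulate(prop, rng).mean()
        # flat prior and symmetric proposal: the Metropolis ratio reduces
        # to the ABC acceptance kernel
        if abs(s_sim - s_obs) < eps:
            theta = prop
        chain.append(theta)
    return np.array(chain)
```

No likelihood is ever evaluated; each iteration costs one forward simulation, which is what makes the approach attractive when the exact posterior is computationally out of reach.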

  2. Generative Inferences Based on Learned Relations

    Science.gov (United States)

    Chen, Dawn; Lu, Hongjing; Holyoak, Keith J.

    2017-01-01

    A key property of relational representations is their "generativity": From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from…

  3. CGC/saturation approach for soft interactions at high energy: long range rapidity correlations

    Energy Technology Data Exchange (ETDEWEB)

    Gotsman, E.; Maor, U. [Tel Aviv University, Department of Particle Physics, School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Science, Tel Aviv (Israel); Levin, E. [Tel Aviv University, Department of Particle Physics, School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Science, Tel Aviv (Israel); Universidad Tecnica Federico Santa Maria and Centro Cientifico- Tecnologico de Valparaiso, Departemento de Fisica, Valparaiso (Chile)

    2015-11-15

    In this paper we continue our program to construct a model for high energy soft interactions that is based on the CGC/saturation approach. The main result of this paper is that we have discovered a mechanism that leads to large long range rapidity correlations and results in large values of the correlation function R(y{sub 1}, y{sub 2}) ≥ 1, which is independent of y{sub 1} and y{sub 2}. Such a behavior of the correlation function provides strong support for the idea that at high energies the system of partons that is produced is not only dense but also has strong attractive forces acting between the partons. (orig.)

  4. High-pressure 3He-Xe gas scintillators for simultaneous detection of neutrons and gamma rays over a large energy range

    International Nuclear Information System (INIS)

    Tornow, W.; Esterline, J.H.; Leckey, C.A.; Weisel, G.J.

    2011-01-01

    We report on features of high-pressure 3 He-Xe gas scintillators which have not been sufficiently addressed in the past. Such gas scintillators can be used not only for the efficient detection of low-energy neutrons but at the same time for the detection and identification of γ-rays as well. Furthermore, 3 He-Xe gas scintillators are also very convenient detectors for fast neutrons in the 1-10 MeV energy range and for high-energy γ-rays in the 7-15 MeV energy range. Due to their linear pulse-height response and self-calibration via the 3 He(n,p) 3 H reaction, neutron and γ-ray energies can easily be determined in this high-energy regime.

  5. Neutron-rich rare isotope production from projectile fission of heavy beams in the energy range of 20 MeV/nucleon

    OpenAIRE

    Vonta, N.; Souliotis, G. A.; Loveland, W. D.; Kwon, Y. K.; Tshoo, K.; Jeong, S. C.; Veselsky, M.; Bonasera, A.; Botvina, A.

    2016-01-01

    We investigate the possibilities of producing neutron-rich nuclides in projectile fission of heavy beams in the energy range of 20 MeV/nucleon expected from low-energy facilities. We report our efforts to theoretically describe the reaction mechanism of projectile fission following a multinucleon transfer collision at this energy range. Our calculations are mainly based on a two-step approach: the dynamical stage of the collision is described with either the phenomenological Deep-Inelastic Tr...

  6. Investigation of the 93Nb neutron cross-sections in resonance energy range

    International Nuclear Information System (INIS)

    Grigoriev, Yu.V.; Kitaev, V.Ya.; Zhuravlev, B.V.; Sinitsa, V.V.; Borzakov, S.B.; Faikov-Stanchik, H.; Ilchev, G.; Mezentseva, Zh.V.; Panteleev, Ts.Ts.; Kim, G.N.

    2002-01-01

    The results of gamma-ray multiplicity spectra and transmission measurements for 93 Nb in the energy range 21.5 eV-100 keV are presented. Gamma spectra of multiplicity 1 to 7 were measured on the 501 m and 121 m flight paths of the IBR-30 using a 16-section scintillation detector with NaI(Tl) crystals of a total volume of 36 l and a 16-section liquid scintillation detector of a total volume of 80 l, for metallic samples of 50 and 80 mm in diameter and 1 and 1.5 mm thickness with 100% 93 Nb. In addition, the total and scattering cross-sections of 93 Nb were measured by means of batteries of B-10 and He-3 counters on the 124 m, 504 m and 1006 m flight paths of the IBR-30. Multiplicity distribution spectra were obtained for resolved resonances in the energy region E = 30-6000 eV and for energy groups in the region E = 21.5 eV-100 keV. They were used to determine the average multiplicity, the resonance parameters, and the capture cross-section in energy groups and for low-lying resonances of 93 Nb. Standard capture cross-sections of 238 U and experimental gamma-ray multiplicity spectra were also used to determine the capture cross-section of 93 Nb in energy groups. Similar values were calculated using the ENDF/B-6 and JENDL-3 evaluated data libraries with the help of the GRUKON computer program. Within experimental errors, agreement between experiment and calculation is observed, although in some groups the experimental values differ from the calculated ones. (author)

  7. Nigeria's energy policy: Inferences, analysis and legal ethics toward RE development

    International Nuclear Information System (INIS)

    Ajayi, Oluseyi O.; Ajayi, Oluwatoyin O.

    2013-01-01

    The study critically assessed the various policy issues of sustainable energy development in Nigeria. The basic focus was to discuss and analyze some of the laws of the federation as they relate to the development of renewable energy in Nigeria. It surveyed the nation's energy policy statement and the vision 20:2020 of the federal government. The Renewable Energy Master Plan, developed by the joint efforts of the Energy Commission of Nigeria and the United Nations Development Programme, was also appraised. The level of development and the index of renewable energy production as stated by the policy statement, the vision 20:2020 and the Renewable Energy Master Plan were highlighted. The study found several policy challenges, including weak government motivation, lack of economic incentives, multiple taxation, and the absence of a favorable customs and excise duty act to promote renewable energy technologies. Further, some legal reforms were proposed which may aid the promotion of renewable energy development in Nigeria and strengthen the nation's energy policy. The laws that require amendment to promote renewable energy include the land use act, the environmental impact assessment decree and the investment laws of the federation of Nigeria. - Highlights: • The study exposed the energy policy issues of Nigeria. • The various policy documents and the energy statement of vision 20:2020 were surveyed. • Various challenges impinging on the growth of renewable energy were highlighted. • Some suggestions for policy reform were proposed

  8. Genetic Network Inference: From Co-Expression Clustering to Reverse Engineering

    Science.gov (United States)

    D'haeseleer, Patrik; Liang, Shoudan; Somogyi, Roland

    2000-01-01

    Advances in molecular biological, analytical, and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review data mining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e., who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks to continuous linear and nonlinear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting, and bioengineering.
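    The co-expression clustering step mentioned above can be sketched naively: treat the Pearson correlation between expression profiles as similarity and merge genes whose profiles correlate above a threshold. This is a toy single-linkage grouping, not any specific algorithm from the review, and the threshold value is an assumption:

```python
import numpy as np

def coexpression_clusters(expr, threshold=0.8):
    """Naive single-linkage grouping of co-expressed genes.

    expr: (n_genes, n_conditions) expression matrix. Genes whose profiles
    correlate at or above `threshold` are merged into one cluster label.
    """
    corr = np.corrcoef(expr)            # pairwise Pearson correlations
    n = expr.shape[0]
    labels = list(range(n))             # each gene starts in its own cluster
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels
```

Genes sharing a cluster label are candidates for shared regulatory inputs; causal direction, as the abstract notes, requires the reverse-engineering methods discussed afterwards.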

  9. Inference of segmented color and texture description by tensor voting.

    Science.gov (United States)

    Jia, Jiaya; Tang, Chi-Keung

    2004-06-01

    A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by (N)D tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive (N)D tensor, followed by a voting process that infers noniteratively the optimal color values in the (N)D texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using (N)D tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.

  10. Estimation and inference in the same-different test

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Brockhoff, Per B.

    2009-01-01

    Inference for the Thurstonian delta in the same-different protocol via the well known Wald statistic is shown to be inappropriate in a wide range of situations. We introduce the likelihood root statistic as an alternative to the Wald statistic to produce CIs and p-values for assessing difference as well as similarity. We show that the likelihood root statistic is equivalent to the well known G(2) likelihood ratio statistic for tests of no difference. As an additional practical tool, we introduce the profile likelihood curve to provide a convenient graphical summary of the information in the data.

  11. Parametric statistical inference basic theory and modern approaches

    CERN Document Server

    Zacks, Shelemyahu; Tsokos, C P

    1981-01-01

    Parametric Statistical Inference: Basic Theory and Modern Approaches presents the developments and modern trends in statistical inference to students who do not have advanced mathematical and statistical preparation. The topics discussed in the book are basic and common to many fields of statistical inference and thus serve as a jumping board for in-depth study. The book is organized into eight chapters. Chapter 1 provides an overview of how the theory of statistical inference is presented in subsequent chapters. Chapter 2 briefly discusses statistical distributions and their properties. Chapt

  12. The anisotropy of cosmic ray particles in the energy range 10^11-10^19 eV

    International Nuclear Information System (INIS)

    Xu Chunxian

    1985-01-01

    A study of the anisotropy of primary cosmic rays is presented. The expression for the anisotropy is derived in a model of statistical discrete sources in an infinite galaxy. Using these derived formulas, the amplitudes of the first-harmonic anisotropies caused by eleven supernovae near the Earth are estimated individually, and the trend of the resultant anisotropy is investigated. It is found that the expected results can account for the E^0.5 power law of the anisotropy above the energy 5 × 10^15 eV. The Compton-Getting effect can cause an additional anisotropy which is independent of energy and adds to the resultant anisotropy of these discrete sources. It is apparent that the anisotropies observed in the low energy range 10^11-10^14 eV are caused by the Compton-Getting effect. Taking the differential spectrum index γ = 2.67 measured in the same energy range, we get a streaming velocity of 35 km/s with respect to the cosmic ray background.
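    The last inference can be sketched numerically: for a Compton-Getting dipole, the anisotropy amplitude is A = (γ + 2) v/c, so the streaming velocity follows directly from the measured amplitude and the spectral index. The amplitude value below is back-computed for illustration so that γ = 2.67 reproduces the quoted 35 km/s; it is not a number taken from the paper:

```python
def compton_getting_velocity(amplitude, gamma=2.67, c_km_s=2.998e5):
    """Streaming velocity (km/s) from a Compton-Getting dipole anisotropy.

    Inverts A = (gamma + 2) * v / c, where gamma is the differential
    spectral index of the cosmic ray flux.
    """
    return amplitude * c_km_s / (gamma + 2.0)

# Illustrative amplitude (an assumption): ~5.45e-4 yields roughly 35 km/s
# for gamma = 2.67, consistent with the velocity quoted above.
v = compton_getting_velocity(5.45e-4)
```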

  13. Variational inference & deep learning: A new synthesis

    OpenAIRE

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  14. Variational inference & deep learning : A new synthesis

    NARCIS (Netherlands)

    Kingma, D.P.

    2017-01-01

    In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.

  15. Heat fluxes and energy balance in the FTU machine

    International Nuclear Information System (INIS)

    Ciotti, M.; Ferro, C.; Franzoni, G.; Maddaluno, G.

    1993-01-01

    Thermal loads on the FTU limiter are routinely measured, and energy losses via conduction/convection are inferred. Only a small fraction of the input power (4 to 8%) has been measured from the mushroom temperature increase. Numerical evaluation and comparison with thermocouples located at different radial positions in the S.O.L. suggest a long energy decay length λ_e. The power loads inferred from the estimated λ_e in the actual geometry of the limiter and first wall lead to a global energy balance that is close to being satisfied. (author)

  16. Development of a picture of the van der Waals interaction energy between clusters of nanometer-range particles

    International Nuclear Information System (INIS)

    Arunachalam, V.; Marlow, W.H.; Lu, J.X.

    1998-01-01

    The importance of the long-range Lifshitz-van der Waals interaction energy between condensed bodies is well known. However, its implementation for interacting bodies that are highly irregular and separated by distances varying from contact to micrometers has received little attention. As part of a study of collisions of irregular aerosol particles, an approach based on the Lifshitz theory of van der Waals interaction has been developed to compute the interaction energy between a sphere and an aggregate of spheres at all separations. In the first part of this study, the iterated sum-over-dipole interactions between pairs of approximately spherical molecular clusters are compared with the Lifshitz and Lifshitz-Hamaker interaction energies for continuum spheres of radii equal to those of the clusters' circumscribed spheres and of the same masses as the clusters. The Lifshitz energy is shown to converge to the iterated dipolar energy for quasispherical molecular clusters for sufficiently large separations, while the energy calculated by using the Lifshitz-Hamaker approach does not. Next, the interaction energies between a contacting pair of these molecular clusters and a third cluster in different relative positions are calculated first by coupling all molecules in the three-cluster system and second by ignoring the interactions between the molecules of the adhering clusters. The error calculated by this omission is shown to be very small, and is an indication of the error in computing the long-range interaction energy between a pair of interacting spheres and a third sphere as a simple sum over the Lifshitz energies between individual, condensed-matter spheres. This Lifshitz energy calculation is then combined with the short-separation, nonsingular van der Waals energy calculation of Lu, Marlow, and Arunachalam, to provide an integrated picture of the van der Waals energy from large separations to contact. copyright 1998 The American Physical Society

  17. State special standard for bremsstrahlung energy flux unit in the range of maximum photon energy 0.8-8.0 pJ (5-50 MeV)

    International Nuclear Information System (INIS)

    Yudin, M.F.; Skotnikov, V.V.; Bruj, V.N.; Tsvetkov, I.I.; Fominykh, V.I.

    1976-01-01

    The state special standard is described, which improves the accuracy and ensures the unification and correctness of measurements of bremsstrahlung energy flux. The size of the unit is transferred, by means of working standards and reference measuring instruments, to working devices that measure the energy flux over a wide range

  18. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
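    The Gelman-Rubin convergence criterion used above compares within-chain and between-chain variance across parallel MCMC chains. A minimal sketch of the potential scale reduction factor (the basic variant, without the split-chain and rank-normalization refinements of modern implementations):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for equal-length MCMC chains.

    chains: array of shape (m_chains, n_samples). Values near 1 indicate
    that the chains have mixed; values well above 1 indicate they have not.
    """
    chains = np.asarray(chains)
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * means.var(ddof=1)               # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled posterior variance estimate
    return np.sqrt(var_hat / W)
```

In a calibration run like the one described, sampling would continue until R-hat for every parameter drops below a cutoff such as 1.1.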

  19. Ensemble stacking mitigates biases in inference of synaptic connectivity

    Directory of Open Access Journals (Sweden)

    Brendan Chambers

    2018-03-01

    Full Text Available A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches. Mapping the routing of spikes through local circuitry is crucial for understanding neocortical computation. Under appropriate experimental conditions, these maps can be used to infer likely patterns of synaptic recruitment, linking activity to underlying anatomical connections. Such inferences help to reveal the synaptic implementation of population dynamics and computation. We compare a number of standard functional measures to infer underlying connectivity. We find that regularization impacts measures
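
    The linear-combination ensemble described above can be illustrated with a small stacking sketch; the three "inference methods" below are stand-ins that see a simulated ground truth through different amounts of noise, not the connectivity measures used in the study:

    ```python
    import numpy as np

    # Hypothetical scenario: three inference methods each assign a confidence
    # score to candidate connections; ground truth is known in simulation.
    rng = np.random.default_rng(1)
    truth = rng.integers(0, 2, size=500).astype(float)   # 1 = real connection
    # Each method sees the truth through its own noise level (its own bias).
    scores = np.stack([truth + rng.normal(0, s, 500) for s in (0.5, 0.8, 1.2)],
                      axis=1)

    # Stacking: learn linear weights on a training half, evaluate on the rest.
    train, test = slice(0, 250), slice(250, 500)
    w, *_ = np.linalg.lstsq(scores[train], truth[train], rcond=None)
    ensemble = scores[test] @ w

    def accuracy(pred, y):
        return float(((pred > 0.5) == (y > 0.5)).mean())

    print("best single:", max(accuracy(scores[test, j], truth[test])
                              for j in range(3)))
    print("ensemble:   ", accuracy(ensemble, truth[test]))
    ```

    With noisier individual measures, the learned weighting typically recovers the truth more reliably than any single measure, which is the effect the paper reports for real inference algorithms.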

  20. Constraint Satisfaction Inference : Non-probabilistic Global Inference for Sequence Labelling

    NARCIS (Netherlands)

    Canisius, S.V.M.; van den Bosch, A.; Daelemans, W.; Basili, R.; Moschitti, A.

    2006-01-01

    We present a new method for performing sequence labelling based on the idea of using a machine-learning classifier to generate several possible output sequences, and then applying an inference procedure to select the best sequence among those. Most sequence labelling methods following a similar

  1. On the origin of apparent Z₁-oscillations in low-energy heavy-ion ranges

    Energy Technology Data Exchange (ETDEWEB)

    Wittmaack, Klaus, E-mail: wittmaack@helmholtz-muenchen.de

    2016-12-01

    It has been known for quite some time that projected ranges measured by Rutherford backscattering spectrometry for a variety of low-energy heavy ions (energy-to-mass ratio E/M₁ less than ∼0.4 keV/u) exhibit significant or even pronounced deviations from the theoretically predicted smooth dependence on the projectile's atomic number Z₁. Studied most thoroughly for silicon targets, the effect was attributed to 'Z₁ oscillations' in nuclear stopping, in false analogy to the well established Z₁ oscillations in electronic stopping of low-velocity light ions. In this study an attempt was made to bring order to range data published by four different groups. To achieve this goal, the absolute values of the ranges from each group had to be (re-)adjusted by up to about ±10%. Adequate justification for this approach is provided. With the changes made, similarities and differences between the different sets of data became much more transparent than before. Very important is the finding that the distortions in heavy-ion ranges are not oscillatory in nature but mostly one-sided, reflecting element-specific transport of implanted atoms deeper into the solid. Exceptions are rare gas and alkali elements, known to exhibit bombardment-induced transport towards the surface. Range distortions reported for Xe and Cs could be reproduced on the basis of the recently established rapid relocation model. The extent of transport into the bulk, observed with many other elements, notably noble metals and lanthanides, reflects their high mobility under ion bombardment. The complexity of the element-specific transport phenomena became fully evident by also examining the limited number of data available for the apparent range straggling. Profile broadening was identified in several cases. One element (Eu) was found to exhibit profile narrowing. This observation suggests that implanted atoms may agglomerate at peak concentrations up to 2%, possibly a tool for

  2. Reasoning about Informal Statistical Inference: One Statistician's View

    Science.gov (United States)

    Rossman, Allan J.

    2008-01-01

    This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…

  3. Meta-learning framework applied in bioinformatics inference system design.

    Science.gov (United States)

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

    This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow in which the user provides feedback on final classification decisions, which are stored in conjunction with the analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several different optimisation methods with various different parameters. The resulting inference systems were also compared against other standard classification methods and were observed to have accurate prediction capabilities.

  4. Using absolute x-ray spectral measurements to infer stagnation conditions in ICF implosions

    Science.gov (United States)

    Patel, Pravesh; Benedetti, L. R.; Cerjan, C.; Clark, D. S.; Hurricane, O. A.; Izumi, N.; Jarrott, L. C.; Khan, S.; Kritcher, A. L.; Ma, T.; Macphee, A. G.; Landen, O.; Spears, B. K.; Springer, P. T.

    2016-10-01

    Measurements of the continuum x-ray spectrum emitted from the hot-spot of an ICF implosion can be used to infer a number of thermodynamic properties at stagnation including temperature, pressure, and hot-spot mix. In deuterium-tritium (DT) layered implosion experiments on the National Ignition Facility (NIF) we field a number of x-ray diagnostics that provide spatial, temporal, and spectrally-resolved measurements of the radiated x-ray emission. We report on analysis of these measurements using a 1-D hot-spot model to infer thermodynamic properties at stagnation. We compare these to similar properties that can be derived from DT fusion neutron measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.

    Science.gov (United States)

    Michaud, J-P; Schoenly, Kenneth G; Moreau, G

    2012-01-01

    Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.

  6. Range-energy relations and stopping powers of organic liquids and vapours for alpha particles

    International Nuclear Information System (INIS)

    Akhavan-Rezayat, A.; Palmer, R.B.J.

    1980-01-01

    Experimental range-energy relations are presented for alpha particles in methyl alcohol, propyl alcohol, dichloromethane, chloroform and carbon tetrachloride in both the liquid and vapour phases. Stopping power values for these materials and for oxygen gas over the energy range 1.0-8.0 MeV are also given. From these results stopping powers have been derived for the -CH₂- group and for -Cl occurring in chemical combination in the liquid and vapour phases. The molecular stopping power in the vapour phase is shown to exceed that in the liquid phase by 2-6% below 2 MeV, reducing to negligible differences at about 5 MeV for the materials directly investigated and for the -Cl atom. No significant phase effect is observed for the -CH₂- group, but it is noted that the uncertainties in the values of the derived stopping powers are much greater in this case. Comparison of the experimental molecular stopping powers with values calculated from elemental values using the Bragg additivity rule shows agreement for vapours but not for liquids. (author)
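
    The Bragg additivity rule referred to above approximates the molecular stopping power as the stoichiometry-weighted sum of elemental stopping powers; a minimal sketch with placeholder values (not the measured data) for carbon tetrachloride:

    ```python
    # Bragg additivity: molecular stopping power approximated as the sum of
    # elemental stopping powers weighted by the number of atoms per molecule.
    # The numerical values below are placeholders, not measured data.
    def bragg_additivity(composition, elemental_S):
        """composition: {element: atoms per molecule};
        elemental_S: {element: atomic stopping power, arbitrary units}."""
        return sum(n * elemental_S[el] for el, n in composition.items())

    # Example: carbon tetrachloride, CCl4, from placeholder C and Cl values.
    S = {"C": 80.0, "Cl": 210.0}
    print(bragg_additivity({"C": 1, "Cl": 4}, S))   # 80 + 4*210 = 920.0
    ```

    The abstract's finding is precisely that this additive estimate matches vapour-phase measurements but not liquid-phase ones, where chemical binding effects matter.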

  7. Climate-induced changes in lake ecosystem structure inferred from coupled neo- and paleoecological approaches

    Science.gov (United States)

    Saros, Jasmine E.; Stone, Jeffery R.; Pederson, Gregory T.; Slemmons, Krista; Spanbauer, Trisha; Schliep, Anna; Cahl, Douglas; Williamson, Craig E.; Engstrom, Daniel R.

    2015-01-01

    Over the 20th century, surface water temperatures have increased in many lake ecosystems around the world, but long-term trends in the vertical thermal structure of lakes remain unclear, despite the strong control that thermal stratification exerts on the biological response of lakes to climate change. Here we used both neo- and paleoecological approaches to develop a fossil-based inference model for lake mixing depths and thereby refine understanding of lake thermal structure change. We focused on three common planktonic diatom taxa, the distributions of which previous research suggests might be affected by mixing depth. Comparative lake surveys and growth rate experiments revealed that these species respond to lake thermal structure when nitrogen is sufficient, with species optima ranging from shallower to deeper mixing depths. The diatom-based mixing depth model was applied to sedimentary diatom profiles extending back to 1750 AD in two lakes with moderate nitrate concentrations but differing climate settings. Thermal reconstructions were consistent with expected changes, with shallower mixing depths inferred for an alpine lake where treeline has advanced, and deeper mixing depths inferred for a boreal lake where wind strength has increased. The inference model developed here provides a new tool to expand and refine understanding of climate-induced changes in lake ecosystems.
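
    Fossil-based inference models of this kind are often built by weighted averaging: species optima are estimated from a calibration set of lakes, and past conditions are reconstructed as abundance-weighted means of those optima. A sketch with invented numbers (the study's actual model and data are not reproduced here):

    ```python
    import numpy as np

    # Weighted-averaging transfer function; all numbers are illustrative.
    # Calibration: taxon optima = abundance-weighted mean of observed depths.
    depths = np.array([4.0, 8.0, 15.0])      # mixing depth at 3 calibration lakes
    abund = np.array([[0.7, 0.2, 0.1],       # taxon A: shallow-mixing specialist
                      [0.2, 0.5, 0.3],       # taxon B: intermediate
                      [0.1, 0.3, 0.6]])      # taxon C: deep-mixing specialist
    optima = (abund * depths).sum(axis=1) / abund.sum(axis=1)

    # Reconstruction: inferred depth = abundance-weighted mean of taxon optima.
    fossil = np.array([0.1, 0.3, 0.6])       # assemblage in one sediment sample
    inferred = (fossil * optima).sum() / fossil.sum()
    print(round(float(inferred), 2))
    ```

    A sediment sample dominated by the deep-mixing specialist is thus assigned a deep inferred mixing depth, which is the qualitative behavior the diatom-based model exploits downcore.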

  8. Report of the subpanel on long-range planning for the US High-Energy-Physics Program of the High-Energy-Physics Advisory Panel

    International Nuclear Information System (INIS)

    1982-01-01

    The US High Energy Program remains strong, but it faces vigorous competition from other regions of the world. To maintain its vitality and preeminence over the next decade it requires the following major ingredients: (1) strong exploitation of existing facilities; (2) the expeditious completion of construction projects which will expand these facilities over the next few years; (3) the construction of a substantial new facility to be ready for research by the end of the 1980's; and (4) the vigorous pursuit of a wide range of advanced accelerator R and D programs in preparation for the design and construction of a higher energy accelerator which would probably be initiated near the end of this decade. The Subpanel has considered how best to accomplish these goals under two different budgetary assumptions; namely, average yearly support levels of $440M DOE, $35M NSF, and $395M DOE, $34M NSF (FY 1982 dollars). It has also considered the impact of a yet lower support level of $360M DOE and $32M NSF. A description of facilities in high energy physics is given, and facility recommendations and long range plans are discussed. Recommendations for international collaboration are included

  9. Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.

    Science.gov (United States)

    Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark

    2010-05-01

    We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we are able to obtain an explicit expression for the calculation of a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product the distribution function as a function of the regularization parameter. Our algorithm thus avoids the usual ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.

  10. Accuracy of a novel multi-sensor board for measuring physical activity and energy expenditure.

    Science.gov (United States)

    Duncan, Glen E; Lester, Jonathan; Migotsky, Sean; Goh, Jorming; Higgins, Lisa; Borriello, Gaetano

    2011-09-01

    The ability to relate physical activity to health depends on accurate measurement. Yet, none of the available methods are fully satisfactory due to several factors. This study examined the accuracy of a multi-sensor board (MSB) that infers activity types (sitting, standing, walking, stair climbing, and running) and estimates energy expenditure in 57 adults (32 females) aged 39.2 ± 13.5 years. In the laboratory, subjects walked and ran on a treadmill over a select range of speeds and grades for 3 min each (six stages in random order) while connected to a stationary calorimeter, preceded and followed by brief sitting and standing. On a different day, subjects completed scripted activities in the field connected to a portable calorimeter. The MSB was attached to a strap at the right hip. Subjects repeated one condition (randomly selected) on the third day. Accuracy of inferred activities compared with recorded activities (correctly identified activities/total activities × 100) was 97 and 84% in the laboratory and field, respectively. Absolute accuracy of energy expenditure [100 − absolute value (kilocalories MSB − kilocalories calorimeter/kilocalories calorimeter) × 100] was 89 and 76% in the laboratory and field, the latter differing significantly from the calorimeter values. Test-retest reliability for energy expenditure was significant in both settings. The MSB accurately infers activity type in laboratory and field settings and energy expenditure during treadmill walking and running, although the device underestimates energy expenditure in the field.

  11. Using the probability method for multigroup calculations of reactor cells in a thermal energy range

    International Nuclear Information System (INIS)

    Rubin, I.E.; Pustoshilova, V.S.

    1984-01-01

    The possibility of using the transmission probability method with interpolation for determining the spatial-energy neutron flux distribution in cells of thermal heterogeneous reactors is considered. The results of multigroup calculations of several uranium-water plane and cylindrical cells with different fuel enrichment in a thermal energy range are given. High accuracy is obtained with low computer time consumption. The transmission probability method is particularly well suited to programs run on computers with a significant reserve of internal memory.

  12. High-energy X-ray diffraction studies of short- and intermediate-range structure in oxide glasses

    International Nuclear Information System (INIS)

    Suzuya, Kentaro

    2002-01-01

    The features of the high-energy X-ray diffraction method are explained. Studies of oxide glasses using BL04B2, the high-energy X-ray diffraction beamline of SPring-8, and of random-system materials by high-energy monochromatic X-ray diffraction are introduced. An advantage of third-generation synchrotron radiation is summarized. At SPring-8, high-energy X-ray diffraction experiments on random systems are carried out on the BL04B2 and BL14B1 beamlines. BL04B2 can select Si(111) (E=37.8 keV, λ=0.033 nm) or Si(220) (E=61.7 keV, λ=0.020 nm) as the Si monochromator. The intermediate-range structure of (MgO)ₓ(P₂O₅)₁₋ₓ glass, MgP₂O₆ glass, B₂O₃ glass, SiO₂ and GeO₂ is explained in detail. The future and applications of high-energy X-ray diffraction are discussed. (S.Y.)

  13. Inferring repeat-protein energetics from evolutionary information.

    Directory of Open Access Journals (Sweden)

    Rocío Espada

    2017-06-01

    Full Text Available Natural protein sequences contain a record of their history. A common constraint in a given protein family is the ability to fold to specific structures, and it has been shown possible to infer the main native ensemble by analyzing covariations in extant sequences. Still, many natural proteins that fold into the same structural topology show different stabilization energies, and these are often related to their physiological behavior. We propose a description for the energetic variation given by sequence modifications in repeat proteins, systems for which the overall problem is simplified by their inherent symmetry. We explicitly account for single amino acid and pair-wise interactions and treat higher order correlations with a single term. We show that the resulting evolutionary field can be interpreted with structural detail. We trace the variations in the energetic scores of natural proteins and relate them to their experimental characterization. The resulting energetic evolutionary field allows the prediction of the folding free energy change for several mutants, and can be used to generate synthetic sequences that are statistically indistinguishable from the natural counterparts.

  14. Statistical inference and Aristotle's Rhetoric.

    Science.gov (United States)

    Macdonald, Ranald R

    2004-11-01

    Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.

  15. Children's and adults' judgments of the certainty of deductive inferences, inductive inferences, and guesses.

    Science.gov (United States)

    Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.

  16. Inferred demand and supply elasticities from a comparison of world oil models

    International Nuclear Information System (INIS)

    Huntington, H.G.

    1992-01-01

    This paper summarizes the responses of oil supply and demand to prices and income in 11 world oil models that were compared in a recent Energy Modeling Forum (EMF) study. In May 1989, the EMF commenced a study of international oil supplies and demands (hereafter, EMF-11) to compare alternative perspectives on supply and demand issues and how these developments influence the level and direction of world oil prices. In analysing these issues, the EMF-11 working group relied partly upon results from 11 world oil models, using standardized assumptions about oil prices and gross domestic product (GDP). During the study, inferred price elasticities of supply and demand were derived from a comparison of results across different oil price scenarios with the same GDP growth path. Inferred income elasticities of demand were derived from a comparison of results across different economic growth scenarios with the same oil price-path. Together, these estimates summarize several important relationships for understanding oil markets. The first section provides some background on the EMF study and on general trends in the scenarios of interest that help to understand the results. Following sections explain the derivation and qualifications of the inferred estimates, report the results and summarize the key conclusions. (author)
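
    An inferred elasticity of this kind is simply the ratio of log changes between two scenarios that hold everything else fixed; a sketch with illustrative quantities and prices, not the EMF-11 results:

    ```python
    from math import log

    # Inferred (arc) price elasticity from two model scenarios sharing the
    # same GDP path: elasticity = d(ln q) / d(ln p). Numbers are illustrative.
    def inferred_elasticity(q1, q2, p1, p2):
        return (log(q2) - log(q1)) / (log(p2) - log(p1))

    # Example: demand falls from 66 to 60 Mb/d as price rises from $20 to $30.
    print(round(inferred_elasticity(66.0, 60.0, 20.0, 30.0), 2))
    ```

    Income elasticities are inferred the same way, substituting GDP for price across scenarios that share the same oil price path.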

  17. Deep Learning for Population Genetic Inference.

    Science.gov (United States)

    Sheehan, Sara; Song, Yun S

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
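
    The core idea, learning a mapping from summary statistics to population genetic parameters with a multilayer network, can be sketched on a toy problem; the one-hidden-layer network and simulated data below are illustrative and far smaller than the models used in the paper:

    ```python
    import numpy as np

    # Toy likelihood-free inference: learn a mapping from summary statistics
    # to a parameter using simulated (parameter, statistics) pairs.
    # All data here are invented for illustration.
    rng = np.random.default_rng(2)
    theta = rng.uniform(0, 1, size=(2000, 1))             # parameter of interest
    stats = np.hstack([theta, theta ** 2]) + rng.normal(0, 0.05, (2000, 2))

    # One-hidden-layer regression network, full-batch gradient descent.
    W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr = 0.1
    for _ in range(3000):
        h = np.tanh(stats @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - theta
        gW2 = h.T @ err / len(theta); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)                  # backprop through tanh
        gW1 = stats.T @ dh / len(theta); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    mse = float(((np.tanh(stats @ W1 + b1) @ W2 + b2 - theta) ** 2).mean())
    print("training MSE:", round(mse, 4))   # far below var(theta) ≈ 0.083
    ```

    In the paper the inputs are hundreds of correlated summary statistics of genomic data and the outputs are demographic and selection parameters; the training principle is the same.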

  18. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    Science.gov (United States)

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…
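
    The Bayesian benchmark against which such simple coin-style inference is compared is conjugate Beta-Binomial updating; a minimal sketch (illustrative numbers, not the experimental design):

    ```python
    from math import isclose

    # Conjugate Bayesian updating for a possibly biased coin: with a Beta(a, b)
    # prior on the heads probability, observing h heads and t tails yields a
    # Beta(a + h, b + t) posterior. Numbers are illustrative only.
    def update(a, b, heads, tails):
        return a + heads, b + tails

    def posterior_mean(a, b):
        return a / (a + b)

    a, b = update(1, 1, heads=7, tails=3)          # uniform prior, 10 flips
    print(a, b, round(posterior_mean(a, b), 3))    # 8 4 0.667
    ```

    Comparing individual judgments against this posterior is the kind of quantitative test the study performs at scale.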

  19. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
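
    For the special case of independent spins, the maximum-entropy model matching observed means can be solved in closed form, which makes for a compact illustration (the paper treats interactions of arbitrary order; this sketch does not):

    ```python
    import numpy as np

    # Smallest instance of maximum-entropy inference: for independent ±1 spins,
    # the max-ent distribution matching the observed means m_i is
    # p(s_i) ∝ exp(h_i s_i) with fields h_i = arctanh(m_i). Data are synthetic.
    rng = np.random.default_rng(3)
    true_h = np.array([0.2, -0.5, 1.0])
    p_up = np.exp(true_h) / (np.exp(true_h) + np.exp(-true_h))  # P(s_i = +1)
    samples = np.where(rng.random((200000, 3)) < p_up, 1, -1)

    m = samples.mean(axis=0)      # empirical means (the sufficient statistics)
    h_inferred = np.arctanh(m)    # max-ent fields reproducing those means
    print(np.round(h_inferred, 2))
    ```

    With pairwise and higher-order interactions no closed form exists and the couplings must be fitted numerically, which is where the dimensionality considerations discussed in the abstract become decisive.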

  20. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \\primula\\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...

  1. A Multiobjective Fuzzy Inference System based Deployment Strategy for a Distributed Mobile Sensor Network

    Directory of Open Access Journals (Sweden)

    Amol P. Bhondekar

    2010-03-01

    Full Text Available The sensor deployment scheme strongly governs the effectiveness of a distributed wireless sensor network. Issues such as energy conservation and clustering make the deployment problem much more complex. A multiobjective Fuzzy Inference System based strategy for mobile sensor deployment is presented in this paper. This strategy provides a synergistic combination of energy capacity, clustering and peer-to-peer deployment. The performance of our strategy is evaluated in terms of coverage, uniformity, speed and clustering. Our algorithm is compared against a modified distributed self-spreading algorithm and exhibits better performance.

  2. High-pressure ³He-Xe gas scintillators for simultaneous detection of neutrons and gamma rays over a large energy range

    Energy Technology Data Exchange (ETDEWEB)

    Tornow, W., E-mail: tornow@tunl.duke.edu [Department of Physics, Duke University, Durham, NC 27708 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708 (United States); Esterline, J.H. [Department of Physics, Duke University, Durham, NC 27708 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708 (United States); Leckey, C.A. [Department of Physics, The College of William and Mary, Williamsburg, VA 23187 (United States); Weisel, G.J. [Department of Physics, Penn State Altoona, Altoona, PA 16601 (United States)

    2011-08-11

    We report on features of high-pressure ³He-Xe gas scintillators which have not been sufficiently addressed in the past. Such gas scintillators can be used not only for the efficient detection of low-energy neutrons but at the same time for the detection and identification of γ-rays as well. Furthermore, ³He-Xe gas scintillators are also very convenient detectors for fast neutrons in the 1-10 MeV energy range and for high-energy γ-rays in the 7-15 MeV energy range. Due to their linear pulse-height response and self-calibration via the ³He(n,p)³H reaction, neutron and γ-ray energies can easily be determined in this high-energy regime.

  3. Causal inference in economics and marketing.

    Science.gov (United States)

    Varian, Hal R

    2016-07-05

    This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual: a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
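
    The counterfactual logic described above can be sketched in a few lines: fit an outcome model on untreated units, predict the untreated outcome for treated units, and average the difference. Synthetic data with a known treatment effect of 2, with ordinary least squares standing in for a machine-learning model:

    ```python
    import numpy as np

    # Counterfactual estimation sketch on synthetic data (true effect = 2.0).
    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, 400)
    treated = x > 5                                  # selection depends on x
    y = 1.5 * x + rng.normal(0, 0.5, 400) + 2.0 * treated

    # Fit the outcome model y ~ x on the control group only.
    coef = np.polyfit(x[~treated], y[~treated], 1)
    counterfactual = np.polyval(coef, x[treated])    # predicted untreated outcome
    effect = float((y[treated] - counterfactual).mean())
    print(round(effect, 2))
    ```

    A flexible machine-learning regressor can replace the linear fit when the outcome surface is nonlinear, which is the paper's central point.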

  4. Uncertainty in prediction and in inference

    International Nuclear Information System (INIS)

    Hilgevoord, J.; Uffink, J.

    1991-01-01

    The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in inference can be obtained by means of the so-called statistical distance between probability distributions. When applied to quantum mechanics, this distance leads to a measure of the distinguishability of quantum states, which essentially is the absolute value of the matrix element between the states. The importance of this result to the quantum mechanical uncertainty principle is noted. The second part of the paper provides a derivation of the statistical distance on the basis of the so-called method of support

  5. The influence of internal variability on Earth's energy balance framework and implications for estimating climate sensitivity

    Science.gov (United States)

    Dessler, Andrew E.; Mauritsen, Thorsten; Stevens, Bjorn

    2018-04-01

    Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPI-ESM1.1) simulations of the period 1850-2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central assumption in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500 hPa tropical temperature better describes the planet's energy balance.
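
    The energy-balance method being tested can be written as a one-line estimator; the input values below are round illustrative numbers, not those of the paper or the MPI-ESM ensemble:

    ```python
    # Energy-budget estimate of equilibrium climate sensitivity:
    # ECS = F2x * dT / (dF - dN), where dT is the observed warming, dF the
    # change in radiative forcing, dN the change in top-of-atmosphere energy
    # imbalance, and F2x the forcing from doubled CO2. Illustrative values only.
    def ecs_energy_budget(dT, dF, dN, F2x=3.7):
        return F2x * dT / (dF - dN)

    print(round(ecs_energy_budget(dT=0.9, dF=2.3, dN=0.6), 2))   # 1.96 K
    ```

    The paper's point is that applying this estimator across ensemble members with identical forcing still yields a wide spread (2.1 to 3.9 K), because surface temperature anomalies track the outgoing energy anomaly imperfectly.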

  6. New Bayesian inference method using two steps of Markov chain Monte Carlo and its application to shock tube experiment data of Furan oxidation

    KAUST Repository

    Kim, Daesang

    2016-01-06

    A new Bayesian inference method has been developed and applied to Furan shock tube experimental data for efficient statistical inference of the Arrhenius parameters of two OH radical consumption reactions. The collected experimental data, consisting of time-series signals of OH radical concentrations from 14 shock tube experiments, would require several days of MCMC computation even with the support of a fast surrogate of the combustion simulation model; the new method reduces this to several hours by splitting the process into two MCMC steps: a first inference of the rate constants and a second inference of the Arrhenius parameters. Each step has a low-dimensional parameter space, and the second step does not require executions of the combustion simulation. Furthermore, the new approach offers more flexibility in choosing the ranges of the inference parameters; this higher speed and flexibility enable more accurate inferences and analyses of how errors in the measured temperatures and in the alignment of the experimental time propagate to the inference results.
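The second MCMC step described above, inferring Arrhenius parameters from already-inferred rate constants, needs no combustion simulation and can be sketched with a plain Metropolis sampler over ln k = ln A − Ea/(R·T). All data values, priors, and step sizes below are synthetic placeholders, not those of the study:

```python
import math
import random

R = 8.314  # gas constant, J/(mol K)

def log_post(lnA, Ea, T, lnk, sigma=0.1):
    # Gaussian likelihood around the Arrhenius line ln k = ln A - Ea/(R*T);
    # flat priors are assumed for simplicity.
    return -sum((lk - (lnA - Ea / (R * t))) ** 2
                for t, lk in zip(T, lnk)) / (2 * sigma ** 2)

def metropolis(T, lnk, n=3000, seed=0):
    rng = random.Random(seed)
    lnA, Ea = 20.0, 1.0e5          # arbitrary starting point
    lp = log_post(lnA, Ea, T, lnk)
    samples = []
    for _ in range(n):
        lnA_p = lnA + rng.gauss(0.0, 0.05)
        Ea_p = Ea + rng.gauss(0.0, 500.0)
        lp_p = log_post(lnA_p, Ea_p, T, lnk)
        if math.log(rng.random()) < lp_p - lp:   # Metropolis accept/reject
            lnA, Ea, lp = lnA_p, Ea_p, lp_p
        samples.append((lnA, Ea))
    return samples

# Synthetic "step one" output: rate constants at four shock-tube temperatures.
T = [1200.0, 1300.0, 1400.0, 1500.0]
true_lnA, true_Ea = 22.0, 1.2e5
lnk = [true_lnA - true_Ea / (R * t) for t in T]
post = metropolis(T, lnk)
```

Because this step works directly on the rate constants, each likelihood evaluation is just a four-term sum, which is the source of the speedup the abstract reports.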

  7. Nonparametric predictive inference in statistical process control

    NARCIS (Netherlands)

    Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.

    2000-01-01

    New methods for statistical process control are presented, where the inferences have a nonparametric predictive nature. We consider several problems in process control in terms of uncertainties about future observable random quantities, and we develop inferences for these random quantities based on

  8. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark

    2006-01-01

    We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...

  9. Making inference from wildlife collision data: inferring predator absence from prey strikes

    Directory of Open Access Journals (Sweden)

    Peter Caley

    2017-02-01

    Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using biologically realistic numerical response functions, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in the factors influencing runway strike incidence at individual airports and to make valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that, conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.
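The conditioning idea above can be reduced to a toy calculation: if, per the mainland numerical response, each lagomorph runway strike implies some expected rate of fox strikes, a Poisson model gives the predictive probability of observing zero fox strikes. The rate used below is illustrative, chosen only so the result lands near the paper's 0.001, and is not the authors' fitted value:

```python
import math

def prob_no_predator_strikes(n_prey_strikes, strikes_per_prey_strike):
    # Expected predator strikes implied by the numerical response, then the
    # Poisson probability of observing zero events at that rate.
    lam = strikes_per_prey_strike * n_prey_strikes
    return math.exp(-lam)

p = prob_no_predator_strikes(15, strikes_per_prey_strike=0.46)  # ~0.001
```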

  10. Making inference from wildlife collision data: inferring predator absence from prey strikes.

    Science.gov (United States)

    Caley, Peter; Hosack, Geoffrey R; Barry, Simon C

    2017-01-01

    Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using biologically realistic numerical response functions, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in the factors influencing runway strike incidence at individual airports and to make valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that, conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.

  11. Studying the Range of Incident Alpha Particles on Cu , Ge , Ag , Cd , Te and Au, With Energy (4-15 MeV)

    International Nuclear Information System (INIS)

    Kadhim, R.O.; Jasim, W.N.

    2015-01-01

    In this paper a theoretical calculation of the range of alpha particles with energies in the range (4-15) MeV passing through several metallic media (Cu, Ge, Ag, Cd, Te and Au) is presented. A semi-empirical formula was used in addition to the SRIM-2012 program; the semi-empirical equation was programmed in Matlab to calculate the range. The resulting ranges in these media were compared with the results obtained from SRIM-2012 and with the results of Andnet (2011). There was good agreement among the semi-empirical results, the SRIM-2012 results and the Andnet (2011) results at low energy. The results showed an exponential relation between the range of the alpha particles in these media and the velocity of the particles. Using the SRIM-2012 results in Matlab with the Curve Fitting Tool, we extracted an equation, with its constants, to calculate the range of alpha particles in any of these six elements for energies in the range (4-15) MeV. The maximum deviation between the semi-empirical results and the SRIM-2012 results was calculated with the statistical test (kstest2) in Matlab.
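The abstract does not reproduce its semi-empirical formula, so the sketch below uses the widely known power-law (Bragg-Kleeman-type) range-energy relation R = a·E^b as a stand-in; the constants a and b are hypothetical placeholders for the per-element fit constants the authors extract with the Curve Fitting Tool:

```python
def alpha_range(E_MeV, a, b):
    """Power-law (Bragg-Kleeman-type) range-energy relation R = a * E**b."""
    return a * E_MeV ** b

# Hypothetical constants a, b; a real fit would come from the SRIM-2012
# tabulations the paper describes, with one (a, b) pair per element.
ranges = [alpha_range(E, a=2.0, b=1.5) for E in (4.0, 8.0, 15.0)]
```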

  12. Causal inference in biology networks with integrated belief propagation.

    Science.gov (United States)

    Chang, Rui; Karr, Jonathan R; Schadt, Eric E

    2015-01-01

    Inferring causal relationships among molecular and higher order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks, in which the possible functional interactions are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network with a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot.

  13. Quantum conductance of carbon nanotubes in a wide energy range

    International Nuclear Information System (INIS)

    Zhang, Yong

    2015-01-01

    The differential conductance of armchair and zigzag carbon nanotubes (CNTs) over a wide energy range has been numerically calculated using the tight-binding model and the Green's function method. The effects on the conductance of the contact coupling between the CNTs and the electrodes have been explored. The ballistic conductance is proportional to the number of bands and has a ladder-like feature. As the contact coupling increases, conductance oscillations appear, and they are robust against the coupling. More importantly, on the first step of the conductance ladder, armchair CNTs show two quasi-periodic conductance oscillations, i.e. a rapid conductance oscillation superimposed on a slowly fluctuating background, while zigzag CNTs show only one conductance oscillation. On the second conductance step, however, all CNTs show two quasi-periodic conductance oscillations. The physical origin of the conductance oscillations is revealed.

  14. Correlation between blister skin thickness, the maximum in the damage-energy distribution, and projected ranges of He+ ions in metals: V

    International Nuclear Information System (INIS)

    Kaminsky, M.; Das, S.K.; Fenske, G.

    1976-01-01

    In these experiments a systematic study of the correlation of the skin thickness, measured directly by scanning electron microscopy, with both the calculated projected-range values and the maximum in the damage-energy distribution has been conducted for a broad helium-ion energy range (100 keV-1000 keV) in polycrystalline vanadium. (Auth.)

  15. Energy budget closure and field scale estimation of canopy energy storage with increased and sustained turbulence

    Science.gov (United States)

    Eddy Covariance (EC) is widely used for direct, non-invasive observations of land-atmosphere energy and mass fluxes. However, EC observations of available energy fluxes are usually less than the fluxes inferred from radiometer and soil heat flux observations, thus introducing additional uncertainty in u...

  16. Inferring late-Holocene climate in the Ecuadorian Andes using a chironomid-based temperature inference model

    Science.gov (United States)

    Matthews-Bird, Frazer; Brooks, Stephen J.; Holden, Philip B.; Montoya, Encarni; Gosling, William D.

    2016-06-01

    Presented here is the first chironomid calibration data set for tropical South America. Surface sediments were collected from 59 lakes across Bolivia (15 lakes), Peru (32 lakes), and Ecuador (12 lakes) between 2004 and 2013 over an altitudinal gradient from 150 m above sea level (a.s.l) to 4655 m a.s.l, between 0-17° S and 64-78° W. The study sites cover a mean annual temperature (MAT) gradient of 25 °C. In total, 55 chironomid taxa were identified in the 59 calibration data set lakes. When used as a single explanatory variable, MAT explains 12.9 % of the variance (λ1/λ2 = 1.431). Two inference models were developed using weighted averaging (WA) and Bayesian methods. The best-performing model using conventional statistical methods was a WA (inverse) model (R2jack = 0.890; RMSEPjack = 2.404 °C, RMSEP - root mean squared error of prediction; mean biasjack = -0.017 °C; max biasjack = 4.665 °C). The Bayesian method produced a model with R2jack = 0.909, RMSEPjack = 2.373 °C, mean biasjack = 0.598 °C, and max biasjack = 3.158 °C. Both models were used to infer past temperatures from a ca. 3000-year record from the tropical Andes of Ecuador, Laguna Pindo. Inferred temperatures fluctuated around modern-day conditions but showed significant departures at certain intervals (ca. 1600 cal yr BP; ca. 3000-2500 cal yr BP). Both methods (WA and Bayesian) showed similar patterns of temperature variability; however, the magnitude of fluctuations differed. In general the WA method was more variable and often underestimated Holocene temperatures (by ca. -7 ± 2.5 °C relative to the modern period). The Bayesian method provided temperature anomaly estimates for cool periods that lay within the expected range of the Holocene (ca. -3 ± 3.4 °C). The error associated with both reconstructions is consistent with a constant temperature of 20 °C for the past 3000 years. We would caution, however, against an over-interpretation at this stage. The reconstruction can only
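The weighted-averaging (WA) transfer function behind the chironomid inference model is simple enough to sketch: each taxon's temperature optimum is the abundance-weighted mean of the calibration lakes' mean annual temperatures, and a fossil sample's inferred temperature is the abundance-weighted mean of those optima. The data below are made up, and the published WA (inverse) model additionally applies an inverse deshrinking step omitted here:

```python
def wa_optima(abundances, temps):
    """Abundance-weighted taxon optima.
    abundances: {lake: {taxon: count}}, temps: {lake: MAT in degrees C}."""
    weighted = {}
    totals = {}
    for lake, taxa in abundances.items():
        for taxon, y in taxa.items():
            weighted[taxon] = weighted.get(taxon, 0.0) + y * temps[lake]
            totals[taxon] = totals.get(taxon, 0.0) + y
    return {t: weighted[t] / totals[t] for t in weighted}

def wa_reconstruct(sample, optima):
    """Abundance-weighted mean of taxon optima for one fossil sample."""
    num = sum(y * optima[t] for t, y in sample.items() if t in optima)
    den = sum(y for t, y in sample.items() if t in optima)
    return num / den

calib = {"L1": {"A": 10, "B": 2}, "L2": {"A": 1, "B": 8}}
mats = {"L1": 8.0, "L2": 22.0}
optima = wa_optima(calib, mats)
t_inferred = wa_reconstruct({"A": 5, "B": 5}, optima)
```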

  17. Deep Learning for Population Genetic Inference.

    Directory of Open Access Journals (Sweden)

    Sara Sheehan

    2016-03-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.

  18. Deep Learning for Population Genetic Inference

    Science.gov (United States)

    Sheehan, Sara; Song, Yun S.

    2016-01-01

    Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908

  19. An Energy-Efficient Link with Adaptive Transmit Power Control for Long Range Networks

    DEFF Research Database (Denmark)

    Lynggaard, P.; Blaszczyk, Tomasz

    2016-01-01

    A considerable amount of research is carried out to develop a reliable smart sensor system with high energy efficiency for battery operated wireless IoT devices in the agriculture sector. However, only a limited amount of research has covered automatic transmission power adjustment schemes...... and algorithms which are essential for deployment of wireless IoT nodes. This paper presents an adaptive link algorithm for farm applications with emphasis on power adjustment for long range communication networks....

  20. A Bayesian Network Schema for Lessening Database Inference

    National Research Council Canada - National Science Library

    Chang, LiWu; Moskowitz, Ira S

    2001-01-01

    .... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...

  1. Dependence of wavelength of Xe ion-induced rippled structures on the fluence in the medium ion energy range

    Energy Technology Data Exchange (ETDEWEB)

    Hanisch, Antje; Grenzer, Joerg [Institute of Ion Beam Physics and Materials Research, Dresden (Germany); Biermanns, Andreas; Pietsch, Ullrich [Institute of Physics, University of Siegen (Germany)

    2010-07-01

    Ion-beam eroded, self-organized nanostructures on semiconductors offer new ways to fabricate high-density memory and optoelectronic devices. It is known that the wavelength and amplitude of noble-gas-ion-induced rippled structures scale with the ion energy and the fluence, depending on the energy range, ion type and substrate. The linear theory by Makeev predicts a linear dependence of the wavelength on the ion energy at low temperatures. For Ar{sup +} and O{sub 2}{sup +} it was observed by different groups that the wavelength grows with increasing fluence, after being constant up to an onset fluence and before saturating. In this coarsening regime a power-law or exponential behavior of the wavelength with the fluence was monitored. So far, investigations of Xe ions on silicon surfaces have mainly concentrated on energies below 1 keV. We found a linear dependence of both the wavelength and the amplitude of the rippled structures on the ion energy and the fluence over a wide range of Xe{sup +} ion energies between 5 and 70 keV. Moreover, we found the ratio of wavelength to amplitude to be constant, implying shape stability, once a threshold fluence of 2 x 10{sup 17} cm{sup -2} was exceeded.

  2. Type Inference for Session Types in the Pi-Calculus

    DEFF Research Database (Denmark)

    Graversen, Eva Fajstrup; Harbo, Jacob Buchreitz; Huttel, Hans

    2014-01-01

    In this paper we present a direct algorithm for session type inference for the π-calculus. Type inference for session types has previously been achieved by either imposing limitations and restriction on the π-calculus, or by reducing the type inference problem to that for linear types. Our approach...

  3. Inferring animal social networks and leadership: applications for passive monitoring arrays.

    Science.gov (United States)

    Jacoby, David M P; Papastamatiou, Yannis P; Freeman, Robin

    2016-11-01

    Analyses of animal social networks have frequently benefited from techniques derived from other disciplines. Recently, machine learning algorithms have been adopted to infer social associations from time-series data gathered using remote, telemetry systems situated at provisioning sites. We adapt and modify existing inference methods to reveal the underlying social structure of wide-ranging marine predators moving through spatial arrays of passive acoustic receivers. From six months of tracking data for grey reef sharks (Carcharhinus amblyrhynchos) at Palmyra atoll in the Pacific Ocean, we demonstrate that some individuals emerge as leaders within the population and that this behavioural coordination is predicted by both sex and the duration of co-occurrences between conspecifics. In doing so, we provide the first evidence of long-term, spatially extensive social processes in wild sharks. To achieve these results, we interrogate simulated and real tracking data with the explicit purpose of drawing attention to the key considerations in the use and interpretation of inference methods and their impact on resultant social structure. We provide a modified translation of the GMMEvents method for R, including new analyses quantifying the directionality and duration of social events with the aim of encouraging the careful use of these methods more widely in less tractable social animal systems but where passive telemetry is already widespread. © 2016 The Authors.

  4. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    Science.gov (United States)

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
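The mean residual life function at the heart of this abstract, mrl(t) = E[T − t | T > t], can be computed for any survival function S as (1/S(t)) · ∫ₜ^∞ S(u) du. A minimal numerical sketch (not the paper's Dirichlet process mixture machinery), checked against the exponential case where the memoryless property makes the MRL constant at 1/rate:

```python
import math

def mrl_numeric(S, t, upper, n=20000):
    # mrl(t) = (1 / S(t)) * integral_t^upper S(u) du, via the trapezoidal
    # rule; `upper` should be large enough that S(upper) is negligible.
    h = (upper - t) / n
    integral = 0.5 * (S(t) + S(upper)) * h
    integral += h * sum(S(t + i * h) for i in range(1, n))
    return integral / S(t)

rate = 0.5
S_exp = lambda u: math.exp(-rate * u)      # exponential survival function
m = mrl_numeric(S_exp, t=1.0, upper=60.0)  # memoryless: mrl(t) = 1/rate = 2.0
```

The mixture model in the paper yields far richer MRL shapes than this constant case; the sketch only illustrates the definition being modeled.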

  5. Explanatory Preferences Shape Learning and Inference.

    Science.gov (United States)

    Lombrozo, Tania

    2016-10-01

    Explanations play an important role in learning and inference. People often learn by seeking explanations, and they assess the viability of hypotheses by considering how well they explain the data. An emerging body of work reveals that both children and adults have strong and systematic intuitions about what constitutes a good explanation, and that these explanatory preferences have a systematic impact on explanation-based processes. In particular, people favor explanations that are simple and broad, with the consequence that engaging in explanation can shape learning and inference by leading people to seek patterns and favor hypotheses that support broad and simple explanations. Given the prevalence of explanation in everyday cognition, understanding explanation is therefore crucial to understanding learning and inference. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Grammatical inference algorithms, routines and applications

    CERN Document Server

    Wieczorek, Wojciech

    2017-01-01

    This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.

  7. Bottlenecks and Hubs in Inferred Networks Are Important for Virulence in Salmonella typhimurium

    Energy Technology Data Exchange (ETDEWEB)

    McDermott, Jason E.; Taylor, Ronald C.; Yoon, Hyunjin; Heffron, Fred

    2009-02-01

    Recent advances in experimental methods have provided sufficient data to consider systems as large networks of interconnected components. High-throughput determination of protein-protein interaction networks has led to the observation that topological bottlenecks, that is, proteins defined by high centrality in the network, are enriched in proteins with systems-level phenotypes such as essentiality. Global transcriptional profiling by microarray analysis has been used extensively to characterize systems, for example, cellular responses to environmental conditions and genetic mutations. These transcriptomic datasets have been used to infer regulatory and functional relationship networks based on co-regulation. We use the context likelihood of relatedness (CLR) method to infer networks from two datasets gathered from the pathogen Salmonella typhimurium: one under a range of environmental culture conditions and the other from deletions of 15 regulators found to be essential in virulence. Bottleneck nodes were identified from these inferred networks, and we show that these nodes are significantly more likely to be essential for virulence than their non-bottleneck counterparts. A network generated using Pearson correlation did not display this behavior. Overall, this study demonstrates that the topology of networks inferred from global transcriptional profiles provides information about the systems-level roles of bottleneck genes. Analysis of the differences between the two CLR-derived networks suggests that the bottleneck nodes are either mediators of transitions between system states or sentinels that reflect the dynamics of these transitions.
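The "bottlenecks" flagged in this study are nodes of high betweenness centrality in the inferred network. A minimal sketch of that identification step, using an unnormalised Brandes betweenness computation on a toy undirected graph that stands in for a CLR-inferred network (the CLR inference itself is not reproduced here):

```python
from collections import deque

def betweenness(adj):
    """Unnormalised Brandes betweenness centrality for an undirected graph
    given as an adjacency dict {node: [neighbours]}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}                 # shortest-path predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                    # BFS from source s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # accumulate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Two clusters joined only through "hub": the hub is the topological bottleneck.
g = {"a": ["b", "hub"], "b": ["a", "hub"],
     "hub": ["a", "b", "c", "d"],
     "c": ["hub", "d"], "d": ["hub", "c"]}
scores = betweenness(g)
bottleneck = max(scores, key=scores.get)
```

In the study, nodes like `hub` — the ones all cross-cluster shortest paths must traverse — are the candidates tested for enrichment in virulence essentiality.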

  8. Damage growth in Si during self-ion irradiation: A study of ion effects over an extended energy range

    International Nuclear Information System (INIS)

    Holland, O.W.; El-Ghor, M.K.; White, C.W.

    1989-01-01

    Damage nucleation/growth in single-crystal Si during ion irradiation is discussed. For MeV ions, the rate of growth as well as the damage morphology are shown to vary widely along the track of the ion. This is attributed to a change in the dominant, defect-related reactions as the ion penetrates the crystal. The nature of these reactions was elucidated by studying the interaction of MeV ions with different types of defects. The defects were introduced into the Si crystal prior to high-energy irradiation by self-ion implantation at a medium energy (100 keV). Varied damage morphologies were produced by implanting different ion fluences. Electron microscopy and ion-channeling measurements, in conjunction with annealing studies, were used to characterize the damage. Subtle changes in the pre-damage morphology are shown to result in markedly different responses to the high-energy irradiation, ranging from complete annealing of the damage to rapid growth. These divergent responses occur over a narrow dose range (2-3 x 10^14 cm^-2) of the medium-energy ions; this range also marks a transition in the growth behavior of the damage during the pre-damage implantation. A model is proposed which accounts for these observations and provides insight into ion-induced growth of amorphous layers in Si and the role of the amorphous/crystalline interface in this process. 15 refs, 9 figs

  9. The evaporative fraction as a measure of surface energy partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, W.E. [Pacific Northwest Lab., Richland, WA (United States); Cuenca, R.H. [Oregon State Univ., Corvallis, OR (United States)

    1990-12-31

    The evaporative fraction is a ratio that expresses the proportion of turbulent flux energy over land surfaces devoted to evaporation and transpiration (evapotranspiration). It has been used to characterize the energy partition over land surfaces and has potential for inferring daily energy balance information based on mid-day remote sensing measurements. The HAPEX-MOBILHY program's SAMER system provided surface energy balance data over a range of agricultural crops and soil types. The database from this large-scale field experiment was analyzed to study the behavior and daylight stability of the evaporative fraction in both ideal and general meteorological conditions. Strong linear relations were found between the mid-day evaporative fraction and the daylight mean evaporative fraction; statistical tests, however, rejected the hypothesis that the two quantities are equal. The relations between the evaporative fraction and the surface soil moisture, as well as the soil moisture in the complete vegetation root zone, were also explored.
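The quantity defined above is a one-line computation: EF = LE / (LE + H), with LE the latent heat flux (evapotranspiration) and H the sensible heat flux, both in W m^-2. The flux values below are illustrative, not taken from the SAMER data:

```python
def evaporative_fraction(LE, H):
    """Fraction of turbulent flux energy devoted to evapotranspiration."""
    return LE / (LE + H)

ef = evaporative_fraction(LE=300.0, H=100.0)  # 0.75
```

The study's mid-day-to-daylight-mean comparison amounts to evaluating this ratio at mid-day and over the whole daylight period and regressing one against the other.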

  10. The evaporative fraction as a measure of surface energy partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, W.E. (Pacific Northwest Lab., Richland, WA (United States)); Cuenca, R.H. (Oregon State Univ., Corvallis, OR (United States))

    1990-01-01

    The evaporative fraction is a ratio that expresses the proportion of turbulent flux energy over land surfaces devoted to evaporation and transpiration (evapotranspiration). It has been used to characterize the energy partition over land surfaces and has potential for inferring daily energy balance information based on mid-day remote sensing measurements. The HAPEX-MOBILHY program's SAMER system provided surface energy balance data over a range of agricultural crops and soil types. The database from this large-scale field experiment was analyzed to study the behavior and daylight stability of the evaporative fraction in both ideal and general meteorological conditions. Strong linear relations were found between the mid-day evaporative fraction and the daylight mean evaporative fraction; statistical tests, however, rejected the hypothesis that the two quantities are equal. The relations between the evaporative fraction and the surface soil moisture, as well as the soil moisture in the complete vegetation root zone, were also explored.

  11. On the number of free energy extremums of a solid solution with two long-range order parameters

    International Nuclear Information System (INIS)

    Dateshidze, N.A.; Ratishvili, I.G.

    1977-01-01

    The free energy of an ordering f.c.c. lattice solid solution is investigated. The ordering is regarded as homogeneous throughout the bulk of the crystal (i.e. stable against the formation of antiphase domains). It is described by one of the appropriate distribution functions, which contains two long-range order parameters. The calculations reveal the extrema of the free-energy function, and their shape and behaviour under variations of temperature are analyzed. It is shown that under certain circumstances the system can display more than one minimum of the free energy within the ordered phase.

  12. BagReg: Protein inference through machine learning.

    Science.gov (United States)

    Zhao, Can; Liu, Dao; Teng, Ben; He, Zengyou

    2015-08-01

Protein inference from the identified peptides is of primary importance in shotgun proteomics. The goal of protein inference is to determine whether each candidate protein is truly present in the sample. To date, many computational methods have been proposed to solve this problem. However, there is still no method that can fully utilize the information hidden in the input data. In this article, we propose a learning-based method named BagReg for protein inference. The method first extracts five features from the input data, and then chooses each feature in turn as the class feature to separately build models that predict the presence probabilities of proteins. Finally, the weak results from the five prediction models are aggregated to obtain the final result. We test our method on six publicly available data sets. The experimental results show that our method is superior to the state-of-the-art protein inference algorithms.

  13. Short- and long-range energy strategies for Japan and the world after the Fukushima nuclear accident

    International Nuclear Information System (INIS)

    Muraoka, K.; Wagner, F.; Yamagata, Y.; Donné, A.J.H.

    2016-01-01

The accident at the Fukushima Dai-ichi nuclear power station in 2011 has had profound effects on energy policies in Japan and worldwide. This is particularly because it occurred at a time of growing awareness of global warming, which forces measures towards decarbonised energy production: the use of fossil fuels has to be drastically reduced from the present level of more than 80% by 2050. A dilemma has now emerged because nuclear power, a CO2-free technology with proven large-scale energy production capability, has lost the confidence of many societies, especially in Japan and Germany. As a consequence, there is now a world-wide effort to expand renewable energies (REs), specifically photovoltaic (PV) and wind power. However, the authors conjecture that PV and wind power can provide only up to a 40% share of electricity production as long as sufficient storage is not available. Beyond this level, the technological (high grid power) and economic (large surplus production) problems grow. This is the result of an analysis of the growing use of REs in the electricity systems of Germany and Japan. The key element to overcome this situation is to develop suitable energy storage technologies. This is particularly necessary when electricity becomes the main energy source, because transportation, process heat and heating will also be supplied by it. Facing the difficulty of replacing all fossil fuels in all countries with different technology standards, a rapid development of carbon capture and storage (CCS) might also be necessary. Therefore, for the short-range strategy up to 2050, all meaningful options have to be developed. For the long-range strategy beyond 2050, new energy sources (such as thermonuclear fusion, solar fuels and nuclear power, if inherently safe concepts can regain the trust of societies) and large-scale energy storage systems based on novel concepts (such as large-capacity batteries and hydrogen) are required. It is acknowledged


  15. Evolutionary rates at codon sites may be used to align sequences and infer protein domain function

    Directory of Open Access Journals (Sweden)

    Hazelhurst Scott

    2010-03-01

Full Text Available Abstract Background Sequence alignments form part of many investigations in molecular biology, including the determination of phylogenetic relationships, the prediction of protein structure and function, and the measurement of evolutionary rates. However, to obtain meaningful results, a significant degree of sequence similarity is required to ensure that the alignments are accurate and the inferences correct. Limitations arise when sequence similarity is low, which is particularly problematic when working with fast-evolving genes, evolutionarily distant taxa, genomes with nucleotide biases, and cases of convergent evolution. Results A novel approach was conceptualized to address the "low sequence similarity" alignment problem. We developed an alignment algorithm termed FIRE (Functional Inference using the Rates of Evolution), which aligns sequences using the evolutionary rate at codon sites, as measured by the dN/dS ratio, rather than nucleotide or amino acid residues. FIRE was used to test the hypotheses that evolutionary rates can be used to align sequences and that the alignments may be used to infer protein domain function. Using a range of test data, we found that aligning domains based on evolutionary rates was possible even when sequence similarity was very low (for example, antibody variable regions). Furthermore, the alignment has the potential to infer protein domain function, indicating that domains with similar functions are subject to similar evolutionary constraints. These data suggest that an evolutionary rate-based approach to sequence analysis (particularly when combined with structural data) may be used to study cases of convergent evolution or sequences with very low similarity. However, when aligning homologous gene sets with sequence similarity, FIRE did not perform as well as the best traditional alignment algorithms, indicating that the conventional approach of aligning residues as opposed to evolutionary rates remains the

  16. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    Science.gov (United States)

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
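The linear-combination ensemble described above can be sketched as a least-squares fit of per-method connectivity scores against known ground truth; all data and noise levels below are synthetic, and the paper's actual weighting scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 500
truth = rng.integers(0, 2, n_pairs).astype(float)  # ground-truth connections (0/1)

# Scores from three imperfect inference methods: signal plus method-specific noise
scores = np.column_stack([truth + rng.normal(0.0, s, n_pairs) for s in (0.3, 0.5, 0.8)])

# Stacking: fit linear weights (plus intercept) mapping method scores to ground truth
X = np.column_stack([np.ones(n_pairs), scores])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)
ensemble = X @ weights  # combined connectivity score per candidate connection
```

Because ordinary least squares maximizes in-sample correlation with the target over all linear combinations of the inputs, the ensemble score is at least as correlated with the truth as any single method's score on the training data; held-out validation would be needed to confirm the generalization claimed in the abstract.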

  17. Stochastic processes inference theory

    CERN Document Server

    Rao, Malempati M

    2014-01-01

    This is the revised and enlarged 2nd edition of the authors’ original text, which was intended to be a modest complement to Grenander's fundamental memoir on stochastic processes and related inference theory. The present volume gives a substantial account of regression analysis, both for stochastic processes and measures, and includes recent material on Ridge regression with some unexpected applications, for example in econometrics. The first three chapters can be used for a quarter or semester graduate course on inference on stochastic processes. The remaining chapters provide more advanced material on stochastic analysis suitable for graduate seminars and discussions, leading to dissertation or research work. In general, the book will be of interest to researchers in probability theory, mathematical statistics and electrical and information theory.

  18. Russell and Humean Inferences

    Directory of Open Access Journals (Sweden)

    João Paulo Monteiro

    2001-12-01

Full Text Available Russell's The Problems of Philosophy tries to establish a new theory of induction, at the same time that Hume is there accused of an "irrational scepticism about induction". But a careful analysis of the theory of knowledge explicitly acknowledged by Hume reveals that, contrary to the standard interpretation in the XXth century, possibly influenced by Russell, Hume deals exclusively with causal inference (which he never classifies as "causal induction", although now we are entitled to do so), never with inductive inference in general, mainly generalizations about sensible qualities of objects (whether, e.g., "all crows are black" or not is not among Hume's concerns). Russell's theories are thus only false alternatives to Hume's, in his (1912) or in his (1948).

  19. Efficient algorithms for conditional independence inference

    Czech Academy of Sciences Publication Activity Database

    Bouckaert, R.; Hemmecke, R.; Lindner, S.; Studený, Milan

    2010-01-01

    Roč. 11, č. 1 (2010), s. 3453-3479 ISSN 1532-4435 R&D Projects: GA ČR GA201/08/0539; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : conditional independence inference * linear programming approach Subject RIV: BA - General Mathematics Impact factor: 2.949, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-efficient algorithms for conditional independence inference.pdf

  20. State-Space Inference and Learning with Gaussian Processes

    OpenAIRE

    Turner, R; Deisenroth, MP; Rasmussen, CE

    2010-01-01

State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. C...

  1. Enhancing Transparency and Control When Drawing Data-Driven Inferences About Individuals.

    Science.gov (United States)

    Chen, Daizhuo; Fraiberger, Samuel P; Moakler, Robert; Provost, Foster

    2017-09-01

Recent studies show the remarkable power of fine-grained information disclosed by users on social network sites to infer users' personal characteristics via predictive modeling. Similar fine-grained data are being used successfully in other commercial applications. In response, attention is turning increasingly to the transparency that organizations provide to users as to what inferences are drawn and why, as well as to what sort of control users can be given over inferences that are drawn about them. In this article, we focus on inferences about personal characteristics based on information disclosed by users' online actions. As a use case, we explore personal inferences that are made possible from "Likes" on Facebook. We first present a means for providing transparency into the information responsible for inferences drawn by data-driven models. We then introduce the "cloaking device", a mechanism for users to inhibit the use of particular pieces of information in inference. Using these analytical tools we ask two main questions: (1) How much information must users cloak to significantly affect inferences about their personal traits? We find that usually users must cloak only a small portion of their actions to inhibit inference. We also find that, encouragingly, false-positive inferences are significantly easier to cloak than true-positive inferences. (2) Can firms change their modeling behavior to make cloaking more difficult? The answer is a definitive yes. We demonstrate a simple modeling change that requires users to cloak substantially more information to affect the inferences drawn. The upshot is that organizations can provide transparency and control even into complicated, predictive model-driven inferences, but they also can make control easier or harder for their users.

  2. Recent Advances in System Reliability Signatures, Multi-state Systems and Statistical Inference

    CERN Document Server

    Frenkel, Ilia

    2012-01-01

    Recent Advances in System Reliability discusses developments in modern reliability theory such as signatures, multi-state systems and statistical inference. It describes the latest achievements in these fields, and covers the application of these achievements to reliability engineering practice. The chapters cover a wide range of new theoretical subjects and have been written by leading experts in reliability theory and its applications.  The topics include: concepts and different definitions of signatures (D-spectra),  their  properties and applications  to  reliability of coherent systems and network-type structures; Lz-transform of Markov stochastic process and its application to multi-state system reliability analysis; methods for cost-reliability and cost-availability analysis of multi-state systems; optimal replacement and protection strategy; and statistical inference. Recent Advances in System Reliability presents many examples to illustrate the theoretical results. Real world multi-state systems...

  3. Fused Regression for Multi-source Gene Regulatory Network Inference.

    Directory of Open Access Journals (Sweden)

    Kari Y Lam

    2016-12-01

Full Text Available Understanding gene regulatory networks is critical to understanding cellular differentiation and response to external stimuli. Methods for global network inference have been developed and applied to a variety of species. Most approaches consider the problem of network inference independently in each species, despite evidence that gene regulation can be conserved even in distantly related species. Further, network inference is often confined to single data types (single platforms and single cell types). We introduce a method for multi-source network inference that allows simultaneous estimation of gene regulatory networks in multiple species or biological processes through the introduction of priors based on known gene relationships, such as orthology, incorporated using fused regression. This approach improves network inference performance even when orthology mapping and conservation are incomplete. We refine this method by presenting an algorithm that extracts the true conserved subnetwork from a larger set of potentially conserved interactions and demonstrate the utility of our method in cross-species network inference. Last, we demonstrate our method's utility in learning from data collected on different experimental platforms.
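The fused-regression idea, penalizing differences between coefficients of related species, can be sketched as an augmented least-squares problem; the two-species setup and all names below are illustrative, not the authors' implementation:

```python
import numpy as np

def fused_regression(X1, y1, X2, y2, lam):
    """Jointly fit b1, b2 minimizing
        ||X1 b1 - y1||^2 + ||X2 b2 - y2||^2 + lam * ||b1 - b2||^2
    by solving one stacked least-squares system."""
    p = X1.shape[1]
    s = np.sqrt(lam)
    # Block design: one row-block per species, plus fusion rows coupling b1 and b2
    top = np.hstack([X1, np.zeros((X1.shape[0], p))])
    mid = np.hstack([np.zeros((X2.shape[0], p)), X2])
    fuse = np.hstack([s * np.eye(p), -s * np.eye(p)])
    A = np.vstack([top, mid, fuse])
    b = np.concatenate([y1, y2, np.zeros(p)])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:p], coef[p:]
```

As `lam` grows, the fusion rows pull the two coefficient vectors together, which is how prior knowledge of conservation (e.g. orthology) can be encoded even when it is incomplete.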

  4. Inferring diameters of spheres and cylinders using interstitial water.

    Science.gov (United States)

    Herrera, Sheryl L; Mercredi, Morgan E; Buist, Richard; Martin, Melanie

    2018-06-04

Most early methods to infer axon diameter distributions using magnetic resonance imaging (MRI) used single diffusion encoding sequences such as pulsed gradient spin echo (SE) and are thus sensitive to axons of diameters > 5 μm. We previously simulated oscillating gradient (OG) SE sequences for diffusion spectroscopy to study smaller axons, including the majority constituting cortical connections. That study suggested the model of constant extra-axonal diffusion breaks down at OG-accessible frequencies. In this study we present data from phantoms to test a time-varying interstitial apparent diffusion coefficient. Diffusion spectra were measured in four samples from water packed around beads of diameters 3, 6 and 10 μm, and 151 μm diameter tubes. Surface-to-volume ratios and diameters were inferred. The bead pore radii estimates were 0.60±0.08 μm, 0.54±0.06 μm and 1.0±0.1 μm, corresponding to bead diameters ranging from 2.9±0.4 μm to 5.3±0.7 μm, 2.6±0.3 μm to 4.8±0.6 μm, and 4.9±0.7 μm to 9±1 μm. The tube surface-to-volume ratio estimate was 0.06±0.02 μm⁻¹, corresponding to a tube diameter of 180±70 μm. Interstitial models with OG inferred 3-10 μm bead diameters from 0.54±0.06 μm to 1.0±0.1 μm pore radii and 151 μm tube diameters from 0.06±0.02 μm⁻¹ surface-to-volume ratios.
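Surface-to-volume ratios of the kind inferred above are classically linked to time-dependent diffusion through the Mitra short-time expansion, D(t) ≈ D₀[1 − (4/(9√π))(S/V)√(D₀t)]. The sketch below uses that simplified time-domain relation, not the frequency-domain model the study itself fits:

```python
import math

def apparent_diffusion(D0, s_over_v, t):
    """Mitra short-time expansion: ADC reduced by restrictions with surface-to-volume S/V."""
    return D0 * (1.0 - (4.0 / (9.0 * math.sqrt(math.pi))) * s_over_v * math.sqrt(D0 * t))

def infer_s_over_v(D0, D_t, t):
    """Invert the expansion: estimate S/V (1/um) from one short-time ADC measurement."""
    return (1.0 - D_t / D0) * 9.0 * math.sqrt(math.pi) / (4.0 * math.sqrt(D0 * t))

# Illustrative round trip: free diffusivity 2.0 um^2/ms, S/V = 0.06 um^-1, t = 5 ms
D_t = apparent_diffusion(2.0, 0.06, 5.0)
print(infer_s_over_v(2.0, D_t, 5.0))  # recovers 0.06
```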

  5. HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Dawson, William A.; Hogg, David W.; Marshall, Philip J.; Bard, Deborah J.; Meyers, Joshua; Lang, Dustin

    2015-01-01

Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
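The importance-sampling step above rests on a generic trick: samples drawn under one (local) model can be reweighted by the ratio of target to proposal densities to estimate expectations under another (global) model. A self-contained toy version, unrelated to the survey data themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Proposal q: N(0, 2) samples (stand-in for locally obtained posterior draws);
# target p: N(0, 1) (stand-in for the global model under which we want expectations)
samples = rng.normal(0.0, 2.0, 100_000)

def log_p(x):
    return -0.5 * x**2            # unnormalized log density of the target

def log_q(x):
    return -0.5 * (x / 2.0)**2    # unnormalized log density of the proposal

# Self-normalized importance weights: normalization constants cancel
w = np.exp(log_p(samples) - log_q(samples))
w /= w.sum()

est_second_moment = np.sum(w * samples**2)  # estimates E_p[x^2] = 1
```

Self-normalization is what lets both densities be unnormalized, which mirrors the practical situation where per-patch posteriors are only known up to a constant.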

  6. Inverse Ising inference with correlated samples

    International Nuclear Information System (INIS)

    Obermayer, Benedikt; Levine, Erel

    2014-01-01

    Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)
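The simplest member of the inverse-Ising family mentioned above is the naive mean-field approximation, in which couplings are read off the inverse of the connected correlation matrix, J ≈ −(C⁻¹) off the diagonal. A sketch on synthetic spins (the coupling construction is illustrative):

```python
import numpy as np

def mean_field_couplings(samples):
    """Naive mean-field inverse Ising: J_ij = -(C^-1)_ij for i != j,
    where C is the connected correlation (covariance) matrix of the spins."""
    C = np.cov(samples, rowvar=False)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    return J

# Synthetic +/-1 spins: spin 1 copies spin 0 90% of the time, spin 2 is independent
rng = np.random.default_rng(0)
spins = rng.choice([-1.0, 1.0], size=(20_000, 3))
spins[:, 1] = np.where(rng.random(20_000) < 0.9, spins[:, 0], spins[:, 1])
J = mean_field_couplings(spins)
# J[0, 1] comes out large and positive; J[0, 2] stays near zero
```

The phylogenetic correction discussed in the abstract addresses exactly the failure mode of this estimator: when samples are not independent, C mixes interaction-driven and ancestry-driven correlation.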

  7. Bayesian structural inference for hidden processes

    Science.gov (United States)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  8. The Impact of Disablers on Predictive Inference

    Science.gov (United States)

    Cummins, Denise Dellarosa

    2014-01-01

    People consider alternative causes when deciding whether a cause is responsible for an effect (diagnostic inference) but appear to neglect them when deciding whether an effect will occur (predictive inference). Five experiments were conducted to test a 2-part explanation of this phenomenon: namely, (a) that people interpret standard predictive…

  9. Using Approximate Bayesian Computation to infer sex ratios from acoustic data.

    Science.gov (United States)

    Lehnen, Lisa; Schorcht, Wigbert; Karst, Inken; Biedermann, Martin; Kerth, Gerald; Puechmaille, Sebastien J

    2018-01-01

    Population sex ratios are of high ecological relevance, but are challenging to determine in species lacking conspicuous external cues indicating their sex. Acoustic sexing is an option if vocalizations differ between sexes, but is precluded by overlapping distributions of the values of male and female vocalizations in many species. A method allowing the inference of sex ratios despite such an overlap will therefore greatly increase the information extractable from acoustic data. To meet this demand, we developed a novel approach using Approximate Bayesian Computation (ABC) to infer the sex ratio of populations from acoustic data. Additionally, parameters characterizing the male and female distribution of acoustic values (mean and standard deviation) are inferred. This information is then used to probabilistically assign a sex to a single acoustic signal. We furthermore develop a simpler means of sex ratio estimation based on the exclusion of calls from the overlap zone. Applying our methods to simulated data demonstrates that sex ratio and acoustic parameter characteristics of males and females are reliably inferred by the ABC approach. Applying both the ABC and the exclusion method to empirical datasets (echolocation calls recorded in colonies of lesser horseshoe bats, Rhinolophus hipposideros) provides similar sex ratios as molecular sexing. Our methods aim to facilitate evidence-based conservation, and to benefit scientists investigating ecological or conservation questions related to sex- or group specific behaviour across a wide range of organisms emitting acoustic signals. The developed methodology is non-invasive, low-cost and time-efficient, thus allowing the study of many sites and individuals. We provide an R-script for the easy application of the method and discuss potential future extensions and fields of applications. The script can be easily adapted to account for numerous biological systems by adjusting the type and number of groups to be
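The core of the ABC approach above can be illustrated with plain rejection sampling: propose sex ratios from a prior, simulate overlapping male and female call distributions, and keep proposals whose simulated summary statistic lands close to the observed one. All distribution parameters below are invented for illustration and are not the bats' actual call statistics:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(ratio_female, n=1000):
    """Simulate call frequencies: females ~ N(110, 2), males ~ N(108, 2) kHz (overlapping)."""
    is_female = rng.random(n) < ratio_female
    return np.where(is_female, rng.normal(110.0, 2.0, n), rng.normal(108.0, 2.0, n))

# "Observed" data generated with a true female ratio of 0.7
obs_summary = simulate(0.7).mean()

# ABC rejection: uniform prior on the ratio, accept if summaries match closely
accepted = []
for _ in range(2000):
    proposal = rng.random()
    if abs(simulate(proposal).mean() - obs_summary) < 0.05:
        accepted.append(proposal)

posterior_mean = float(np.mean(accepted))  # clusters near the true ratio 0.7
```

Real applications (and the authors' R script) would use richer summary statistics than a single mean, and would also infer the group-specific means and standard deviations rather than fixing them.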

  10. Automatic physical inference with information maximizing neural networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.

  11. Greater future global warming inferred from Earth’s recent energy budget

    Science.gov (United States)

    Brown, Patrick T.; Caldeira, Ken

    2017-12-01

    Climate models provide the principal means of projecting global warming over the remainder of the twenty-first century but modelled estimates of warming vary by a factor of approximately two even under the same radiative forcing scenarios. Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (-1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change. Our results suggest that achieving any given global temperature stabilization target will require steeper greenhouse gas emissions reductions than previously calculated.
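The "observationally informed" projection above is an emergent-constraint calculation: regress projected warming on a currently observable attribute across models, then evaluate the fit at the observed value of that attribute. A schematic with synthetic numbers (none of these are the paper's values):

```python
import numpy as np

# Hypothetical across-model data: x = observable energy-budget metric per model,
# y = that model's projected end-of-century warming (degrees C)
x = np.array([0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0])
y = np.array([2.9, 3.2, 3.6, 3.9, 4.3, 4.6, 5.0])

slope, intercept = np.polyfit(x, y, 1)  # across-model relationship
x_obs = 1.7                             # hypothetical observed value of the metric
constrained_warming = slope * x_obs + intercept
raw_mean = y.mean()
# If observations sit above the model mean of x, the constrained projection
# exceeds the raw multi-model mean, as in the abstract's ~15% upward revision.
```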

  12. An absolute measurement of 252Cf prompt fission neutron spectrum at low energy range

    International Nuclear Information System (INIS)

    Lajtai, A.; Dyachenko, P.P.; Kutzaeva, L.S.; Kononov, V.N.; Androsenko, P.A.; Androsenko, A.A.

    1983-01-01

The prompt neutron energy spectrum at low energies (from 25 keV) of 252Cf spontaneous fission has been measured with a time-of-flight technique on a 30 cm flight path. An ionization chamber and lithium glass were used as the fission-fragment and neutron detectors, respectively. Lithium glasses of NE-912 (containing 6Li) and NE-913 (containing 7Li), 45 mm in diameter and 9.5 mm in thickness, were employed alternately for the registration of fission neutrons and gammas. For the correct determination of multiple-scattering effects, the main difficulty of low-energy neutron spectrum measurements, a special geometry for the neutron detector was used. Special attention was also paid to the determination of the absolute efficiency of the neutron detector. The real response function of the spectrometer was determined by a Monte Carlo calculation. The scattering-material content of the ionization chamber containing the 252Cf source was minimized. As a result of this measurement, a prompt fission neutron spectrum of Maxwell type with a T=1.42 MeV parameter was obtained in this low energy range. We did not find any neutron excess or irregularities over the Maxwellian. (author)
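A Maxwellian fission-neutron spectrum with temperature parameter T has the form N(E) ∝ √E·exp(−E/T), whose mean energy is (3/2)T. A quick numerical check for the fitted T = 1.42 MeV:

```python
import math

T = 1.42  # MeV, Maxwellian temperature parameter from the fit

def maxwellian(E, T):
    """Unnormalized Maxwellian prompt-fission-neutron spectrum N(E) = sqrt(E) * exp(-E/T)."""
    return math.sqrt(E) * math.exp(-E / T)

# Mean energy by simple numerical integration over 0-20 MeV (tail beyond is negligible)
dE = 1e-3
grid = [i * dE for i in range(1, 20_000)]
norm = sum(maxwellian(E, T) for E in grid) * dE
mean_E = sum(E * maxwellian(E, T) for E in grid) * dE / norm
print(mean_E)  # close to 1.5 * T = 2.13 MeV
```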

  13. Colloquy and workshops: regional implications of the engineering manpower requirements of the National Energy Program

    Energy Technology Data Exchange (ETDEWEB)

    Segool, H. D. [ed.

    1979-05-01

The crucial interrelationships of engineering manpower, technological innovation, productivity and capital re-formation were keynoted. Near-term, a study has indicated a much larger New England energy demand-reduction/economic/market potential, with a probably larger engineering manpower requirement, for energy-conservation measures characterized by technological innovation and cost-effective capital services than for alternative energy-supply measures. Descriptions of federal, regional, and state energy program responsibilities offered a wide-ranging panorama of activities among many possible energy options, conveying much endeavor without identifiable engineering manpower demand coefficients. Similarly, engineering manpower assessment data were described as uneven and unfocused with respect to the energy program at the national level, and disaggregated data as non-existent at the regional/state levels, although some qualitative inferences were drawn. A separate abstract was prepared for each of the 16 individual presentations for the DOE Energy Data Base (EDB); 14 of these were selected for Energy Abstracts for Policy Analysis (EAPA) and 2 for Energy Research Abstracts (ERA).

  14. Inference as Prediction

    Science.gov (United States)

    Watson, Jane

    2007-01-01

    Inference, or decision making, is seen in curriculum documents as the final step in a statistical investigation. For a formal statistical enquiry this may be associated with sophisticated tests involving probability distributions. For young students without the mathematical background to perform such tests, it is still possible to draw informal…

  15. Problem solving and inference mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Furukawa, K; Nakajima, R; Yonezawa, A; Goto, S; Aoyama, A

    1982-01-01

    The heart of the fifth generation computer will be powerful mechanisms for problem solving and inference. A deduction-oriented language is to be designed, which will form the core of the whole computing system. The language is based on predicate logic with the extended features of structuring facilities, meta structures and relational data base interfaces. Parallel computation mechanisms and specialized hardware architectures are being investigated to make possible efficient realization of the language features. The project includes research into an intelligent programming system, a knowledge representation language and system, and a meta inference system to be built on the core. 30 references.

  16. 4.5 Tesla magnetic field reduces range of high-energy positrons -- Potential implications for positron emission tomography

    International Nuclear Information System (INIS)

    Wirrwar, A.; Vosberg, H.; Herzog, H.; Halling, H.; Weber, S.; Mueller-Gaertner, H.W.; Forschungszentrum Juelich GmbH

    1997-01-01

    The authors have theoretically and experimentally investigated the extent to which homogeneous magnetic fields up to 7 Tesla reduce the spatial distance positrons travel before annihilation (positron range). Computer simulations of a noncoincident detector design using a Monte Carlo algorithm calculated the positron range as a function of positron energy and magnetic field strength. The simulation predicted improvements in resolution, defined as the full-width at half-maximum (FWHM) of the line-spread function (LSF), for magnetic field strengths up to 7 Tesla: negligible for F-18, from 3.35 mm to 2.73 mm for Ga-68, and from 3.66 mm to 2.68 mm for Rb-82. A substantial noise suppression, described by the full-width at tenth-maximum (FWTM), was also observed for higher positron energies. The experimental approach confirmed an improvement in resolution for Ga-68 from 3.54 mm FWHM at 0 Tesla to 2.99 mm at 4.5 Tesla and practically no improvement for F-18 (2.97 mm at 0 Tesla and 2.95 mm at 4.5 Tesla). It is concluded that the simulation model is appropriate and that a homogeneous static magnetic field of 4.5 Tesla reduces the range of high-energy positrons to an extent that may improve spatial resolution in positron emission tomography.

  17. Functional inference of complex anatomical tendinous networks at a macroscopic scale via sparse experimentation.

    Science.gov (United States)

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales, such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error […] functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or over assuming model topology and inferring only parameter values. These findings also hold clues to both our evolutionary history and the development of versatile machines.

  18. Elements of Causal Inference: Foundations and Learning Algorithms

    DEFF Research Database (Denmark)

    Peters, Jonas Martin; Janzing, Dominik; Schölkopf, Bernhard

    A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning.

  19. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  20. RANGE AND DENSITY OF ALIEN FISH IN WESTERN STREAMS AND RIVERS, US

    Science.gov (United States)

    Alien fish have become increasingly prevalent in Western U.S. waters. The EPA Environmental Monitoring and Assessment Program's Western Pilot (12 western states), which is based upon a probabilistic design, provides an opportunity to make inferences about the range and density of...

  1. Causal inference in econometrics

    CERN Document Server

    Kreinovich, Vladik; Sriboonchitta, Songsak

    2016-01-01

    This book is devoted to the analysis of causal inference, which is one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other, or whether the two phenomena have a common cause. This analysis is the main focus of this volume. To get a good understanding of causal inference, it is important to have models of economic phenomena which are as accurate as possible. Because of this need, this volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained by using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.

  2. Higher Energy Intake Variability as Predisposition to Obesity: Novel Approach Using Interquartile Range.

    Science.gov (United States)

    Forejt, Martin; Brázdová, Zuzana Derflerová; Novák, Jan; Zlámal, Filip; Forbelská, Marie; Bienert, Petr; Mořkovská, Petra; Zavřelová, Miroslava; Pohořalá, Aneta; Jurášková, Miluše; Salah, Nabil; Bienertová-Vašků, Julie

    2017-12-01

    It is known that total energy intake and its distribution during the day influence human anthropometric characteristics. However, a possible association between variability in total energy intake and obesity has thus far remained unexamined. This study was designed to establish the influence of the energy intake variability of each daily meal on the anthropometric characteristics of obesity. A total of 521 individuals of Czech Caucasian origin aged 16–73 years (390 women and 131 men) were included in the study; 7-day food records were completed by all study subjects and selected anthropometric characteristics were measured. The interquartile range (IQR) of energy intake was assessed individually for each meal of the day (as a marker of energy intake variability) and subsequently correlated with body mass index (BMI), body fat percentage (%BF), waist-hip ratio (WHR), and waist circumference (cW). Four distinct models were created using multiple logistic regression analysis and backward stepwise logistic regression. The most precise results, based on the area under the curve (AUC), were observed in the case of the %BF model (AUC=0.895) and the cW model (AUC=0.839). According to the %BF model, age (p<0.001) and IQR-lunch (p<0.05) seem to play an important prediction role for obesity. Likewise, according to the cW model, age (p<0.001), IQR-breakfast (p<0.05) and IQR-dinner (p<0.05) predispose patients to the development of obesity. The results of our study show that higher variability in the energy intake of key daily meals may increase the likelihood of obesity development. Based on the obtained results, it is necessary to emphasize regularity of meal intake for maintaining proper body composition. Copyright© by the National Institute of Public Health, Prague 2017
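
The variability marker itself is straightforward to compute; a sketch on hypothetical 7-day lunch records for one subject (the study's data and regression models are not reproduced here):

```python
import statistics

def interquartile_range(values):
    """IQR = Q3 - Q1 of a sample, the marker of intake variability above."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return q[2] - q[0]

# Hypothetical 7-day lunch energy intakes (kcal) for one subject.
lunch_kcal = [650, 700, 400, 900, 620, 1100, 580]
iqr_lunch = interquartile_range(lunch_kcal)
print(iqr_lunch)  # → 200.0
```

In the study design, one such IQR per meal per subject would then enter the logistic regression as a predictor alongside age and other covariates.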

  3. Assessment of network inference methods: how to cope with an underdetermined problem.

    Directory of Open Access Journals (Sweden)

    Caroline Siegenthaler

    Full Text Available The inference of biological networks is an active research area in the field of systems biology. The number of network inference algorithms has grown tremendously in the last decade, underlining the importance of a fair assessment and comparison among these methods. Current assessments of the performance of an inference method typically involve the application of the algorithm to benchmark datasets and the comparison of the network predictions against the gold standard or reference networks. While the network inference problem is often deemed underdetermined, implying that the inference problem does not have a (unique) solution, the consequences of such an attribute have not been rigorously taken into consideration. Here, we propose a new procedure for assessing the performance of gene regulatory network (GRN) inference methods. The procedure takes into account the underdetermined nature of the inference problem, in which gene regulatory interactions that are inferable or non-inferable are determined based on causal inference. The assessment relies on a new definition of the confusion matrix, which excludes errors associated with non-inferable gene regulations. For demonstration purposes, the proposed assessment procedure is applied to the DREAM 4 In Silico Network Challenge. The results show a marked change in the ranking of participating methods when taking network inferability into account.
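
The modified confusion matrix can be sketched as follows; the edge sets are toy examples, and the rule for deciding which pairs are non-inferable is left abstract (in the paper it comes from causal-inference arguments):

```python
def inferability_aware_scores(predicted, gold, non_inferable, all_pairs):
    """Precision/recall over a confusion matrix that excludes gene pairs
    deemed non-inferable, in the spirit of the assessment described above."""
    scored = set(all_pairs) - set(non_inferable)
    pred, true = set(predicted) & scored, set(gold) & scored
    tp = len(pred & true)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    return precision, recall

# Toy 3-gene example with hypothetical edge sets.
pairs = [(a, b) for a in "ABC" for b in "ABC" if a != b]
gold = [("A", "B"), ("B", "C"), ("A", "C")]
pred = [("A", "B"), ("C", "A")]
non_inf = [("A", "C"), ("C", "A")]  # assumed non-inferable from the data
scores = inferability_aware_scores(pred, gold, non_inf, pairs)
print(scores)  # → (1.0, 0.5)
```

Errors on the excluded pairs no longer count for or against a method, which is what reshuffles the rankings.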

  4. Determination of personnel exposures in the lower energy ranges of X-ray by photographic dosimeter

    International Nuclear Information System (INIS)

    Ha, C.W.; Kim, J.R.; Suk, K.W.

    1986-01-01

    This paper describes an improved technical method for the proper evaluation of personnel exposures by means of the photographic dosimeter developed by KAERI in the lower gamma or X-ray energy regions, where the response of the dosimeter varies significantly. For calibration of the dosimeter in the energy range from 30 to 300 keV, the beam spectrum was carefully selected and adequately specified. The absorber combinations and absorber thicknesses used to obtain the specified X-ray spectra from a constant-potential X-ray machine were determined theoretically and also experimentally. A correlation between density and exposure was experimentally determined for four separate energies: 49 keVeff, 154 keVeff, 250 keVeff, and 662 keV. As a result, the exposure can be evaluated directly from the measured response of the dosimeter. (Author)

  5. Commercial cyclotrons. Part I: Commercial cyclotrons in the energy range 10–30 MeV for isotope production

    Science.gov (United States)

    Papash, A. I.; Alenitsky, Yu. G.

    2008-07-01

    A survey of commercial cyclotrons for production of medical and industrial isotopes is presented. Compact isochronous cyclotrons, which accelerate negative hydrogen ions in the energy range 10–30 MeV, have been widely used over the last 25 years for production of medical isotopes and other applications. Different cyclotron models for the energy range 10–12 MeV with moderate beam intensity are used for production of 11C, 13N, 15O, and 18F isotopes widely applied in positron emission tomography. Commercial cyclotrons with high beam intensity are available on the market for production of most medical and industrial isotopes. In this work, the physical and technical parameters of different models are compared. Possibilities of improving performance and increasing the intensity of H- beams up to 2–3 mA are discussed.

  6. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  7. Fuzzy logic controller using different inference methods

    International Nuclear Information System (INIS)

    Liu, Z.; De Keyser, R.

    1994-01-01

    In this paper the design of fuzzy controllers using different inference methods is introduced. The configuration of the fuzzy controllers includes a general rule base which is a collection of fuzzy PI or PD rules, the triangular fuzzy data model and a centre-of-gravity defuzzification algorithm. The generalized modus ponens (GMP) is used with the minimum operator of the triangular norm. Under the sup-min inference rule, six fuzzy implication operators are employed to calculate the fuzzy look-up tables for each rule base. The performance is tested in simulated systems with MATLAB/SIMULINK. Results show the effects of using the fuzzy controllers with different inference methods applied to different test processes
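
A minimal Mamdani-style sketch of the scheme just described, assuming an illustrative two-rule base and triangular memberships (the paper's actual rule bases, implication operators, and parameters are not reproduced here):

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pi_step(error, d_error):
    """One step of a two-rule fuzzy PI-like controller: sup-min inference
    with the min (triangular-norm) operator, followed by centre-of-gravity
    defuzzification over a coarse output grid."""
    # Rule firing strengths via the min operator.
    w_neg = min(tri(error, -2, -1, 0), tri(d_error, -2, -1, 0))
    w_pos = min(tri(error, 0, 1, 2), tri(d_error, 0, 1, 2))
    # Clip the output sets "decrease" (around -1) and "increase" (around +1),
    # aggregate with max, then take the centre of gravity.
    grid = [i / 10 for i in range(-20, 21)]
    mu = [max(min(w_neg, tri(u, -2, -1, 0)), min(w_pos, tri(u, 0, 1, 2)))
          for u in grid]
    total = sum(mu)
    return sum(u * m for u, m in zip(grid, mu)) / total if total else 0.0

u = fuzzy_pi_step(1.0, 1.0)
print(round(u, 3))  # → 1.0 (fully fires the "increase" rule)
```

Replacing the min implication with the other five operators studied in the paper would change only the clipping step.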

  8. An algebra-based method for inferring gene regulatory networks.

    Science.gov (United States)

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process, prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. 
Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the

  9. Statistical inference based on divergence measures

    CERN Document Server

    Pardo, Leandro

    2005-01-01

    The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach.Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data with emphasis is on problems in contingency tables and loglinear models using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...

  10. Active inference, sensory attenuation and illusions.

    Science.gov (United States)

    Brown, Harriet; Adams, Rick A; Parees, Isabel; Edwards, Mark; Friston, Karl

    2013-11-01

    Active inference provides a simple and neurobiologically plausible account of how action and perception are coupled in producing (Bayes) optimal behaviour. This can be seen most easily as minimising prediction error: we can either change our predictions to explain sensory input through perception, or actively change sensory input to fulfil our predictions. In active inference, action is mediated by classical reflex arcs that minimise proprioceptive prediction error created by descending proprioceptive predictions. However, this creates a conflict between action and perception, in that self-generated movements require predictions to override the sensory evidence that one is not actually moving. However, ignoring sensory evidence means that externally generated sensations will not be perceived. Conversely, attending to (proprioceptive and somatosensory) sensations enables the detection of externally generated events but precludes the generation of actions. This conflict can be resolved by attenuating the precision of sensory evidence during movement or, equivalently, attending away from the consequences of self-made acts. We propose that this Bayes optimal withdrawal of precise sensory evidence during movement is the cause of psychophysical sensory attenuation. Furthermore, it explains the force-matching illusion and reproduces empirical results almost exactly. Finally, if attenuation is removed, the force-matching illusion disappears and false (delusional) inferences about agency emerge. This is important, given the negative correlation between sensory attenuation and delusional beliefs in normal subjects, and the reduction in the magnitude of the illusion in schizophrenia. Active inference therefore links the neuromodulatory optimisation of precision to sensory attenuation and illusory phenomena during the attribution of agency in normal subjects. It also provides a functional account of deficits in syndromes characterised by false inference.

  11. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
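
The particle-filtering analogy can be sketched as follows; the transition and likelihood tables are hypothetical, and this is ordinary sequential importance resampling rather than the spiking-network implementation described above:

```python
import random

def particle_filter_step(particles, obs, trans, likelihood, n):
    """One predict-weight-resample step of a particle filter for a discrete
    HMM, the sampling scheme the neuronal network above approximates."""
    # Predict: propagate each particle through the transition kernel.
    moved = [random.choices(range(len(trans)), weights=trans[p])[0]
             for p in particles]
    # Weight by the observation likelihood, then resample.
    weights = [likelihood[s][obs] for s in moved]
    return random.choices(moved, weights=weights, k=n)

random.seed(0)
# Two hidden states; state 1 strongly favours observation 1 (hypothetical).
trans = [[0.9, 0.1], [0.1, 0.9]]
lik = [[0.8, 0.2], [0.2, 0.8]]
parts = [0] * 500  # all particles start in state 0
for obs in [1, 1, 1]:  # a run of observations favouring state 1
    parts = particle_filter_step(parts, obs, trans, lik, 500)
posterior_1 = sum(parts) / len(parts)
print(posterior_1 > 0.5)  # most posterior mass shifts to hidden state 1
```

In the network model, each spike of a higher-layer neuron plays the role of one resampled particle, so the population firing pattern stands in for the list of particles here.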

  12. Reinforcement and inference in cross-situational word learning.

    Science.gov (United States)

    Tilles, Paulo F C; Fontanari, José F

    2013-01-01

    Cross-situational word learning is based on the notion that a learner can determine the referent of a word by finding something in common across many observed uses of that word. Here we propose an adaptive learning algorithm that contains a parameter that controls the strength of the reinforcement applied to associations between concurrent words and referents, and a parameter that regulates inference, which includes built-in biases, such as mutual exclusivity, and information of past learning events. By adjusting these parameters so that the model predictions agree with data from representative experiments on cross-situational word learning, we were able to explain the learning strategies adopted by the participants of those experiments in terms of a trade-off between reinforcement and inference. These strategies can vary wildly depending on the conditions of the experiments. For instance, for fast mapping experiments (i.e., the correct referent could, in principle, be inferred in a single observation) inference is prevalent, whereas for segregated contextual diversity experiments (i.e., the referents are separated in groups and are exhibited with members of their groups only) reinforcement is predominant. Other experiments are explained with more balanced doses of reinforcement and inference.
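
A toy sketch of the reinforcement-plus-inference idea, much simpler than the adaptive model in the paper: chi is an illustrative reinforcement strength, and the inference component is reduced to a crude mutual-exclusivity penalty.

```python
def learn(events, chi=0.6):
    """Toy cross-situational learner. Each event pairs the words heard
    with the referents in view; co-occurring pairs are reinforced and
    pairings contradicted by the scene are penalized (illustrative)."""
    assoc = {}
    for words, referents in events:
        for w in words:
            for r in referents:
                assoc[(w, r)] = assoc.get((w, r), 0.0) + chi
            # inference: down-weight pairings absent from this scene
            for (w2, r2), v in list(assoc.items()):
                if w2 == w and r2 not in referents:
                    assoc[(w2, r2)] = v * (1 - chi)
    return assoc

events = [({"dax"}, {"dog"}), ({"dax", "blick"}, {"dog", "cat"})]
a = learn(events)
print(a[("dax", "dog")] > a[("dax", "cat")])  # "dax" settles on dog
```

Varying chi trades off the two mechanisms, which is the knob the paper fits to the experimental conditions.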

  13. Data-driven inference for the spatial scan statistic.

    Science.gov (United States)

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is given, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
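
A sketch of the size-conditional inference just described, with a hypothetical Monte Carlo null distribution (the standard inference would compare against all null replications regardless of cluster size):

```python
def conditional_p_value(observed_stat, observed_k, null_results):
    """Data-driven p-value for the spatial scan statistic: compare the
    observed likelihood ratio only against null replications whose most
    likely cluster has the same size k, as proposed above.
    null_results: list of (statistic, cluster_size) from Monte Carlo runs."""
    same_k = [s for s, k in null_results if k == observed_k]
    if not same_k:
        return None  # no null replication of that size to compare against
    exceed = sum(1 for s in same_k if s >= observed_stat)
    return (exceed + 1) / (len(same_k) + 1)

# Hypothetical null runs: (max log-likelihood ratio, most likely cluster size).
null = [(4.1, 3), (5.0, 3), (3.2, 2), (6.3, 3), (2.9, 4), (4.8, 2)]
p = conditional_p_value(5.5, 3, null)
print(p)  # → 0.5
```

In practice many thousands of null replications would be generated, so every plausible cluster size is represented in the conditional comparison set.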

  14. Total cross sections for electron scattering by CO2 molecules in the energy range 400–5000 eV

    International Nuclear Information System (INIS)

    Garcia, G.; Manero, F.

    1996-01-01

    Total cross sections for electron scattering by CO2 molecules in the energy range 400–5000 eV have been measured with experimental errors of ∼3%. The present results have been compared with available experimental and theoretical data. The dependence of the total cross sections on electron energy shows an asymptotic behavior with increasing energies, in agreement with the Born-Bethe approximation. In addition, an analytical formula is provided to extrapolate total cross sections to higher energies. copyright 1996 The American Physical Society
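
In the Born-Bethe regime the total cross section falls off roughly as sigma(E) ≈ (A + B ln E)/E, which is the kind of analytical extrapolation formula referred to above. A sketch of fitting such a form to hypothetical data points (the coefficients and data below are illustrative, not the paper's values):

```python
import math

def born_bethe_sigma(E, A, B):
    """Born-Bethe-type total cross section (arbitrary units) vs energy E (eV)."""
    return (A + B * math.log(E)) / E

# Hypothetical measurements (E in eV, sigma in m^2).
data = [(400, 4.1e-20), (1000, 2.0e-20), (2000, 1.15e-20), (5000, 0.52e-20)]
# sigma*E = A + B*ln E, so fit A, B by ordinary least squares in x = ln E.
xs = [math.log(E) for E, _ in data]
ys = [s * E for E, s in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
B = sxy / sxx
A = ybar - B * xbar
# Extrapolated values keep decreasing with energy, as expected.
print(born_bethe_sigma(10000, A, B) < born_bethe_sigma(5000, A, B))
```

The linearization in ln E is what makes the extrapolation to higher energies well constrained by a handful of measured points.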

  15. Eight challenges in phylodynamic inference

    Directory of Open Access Journals (Sweden)

    Simon D.W. Frost

    2015-03-01

    Full Text Available The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever increasing corpus of sequence data.

  16. Measurements of the prompt neutron spectra in 233U, 235U, 239Pu thermal neutron fission in the energy range of 0.01-5 MeV and in 252Cf spontaneous fission in the energy range of 0.01-10 MeV

    International Nuclear Information System (INIS)

    Starostov, B.I.; Semenov, A.F.; Nefedov, V.N.

    1978-01-01

    The measurement results on the prompt neutron spectra in 233U, 235U, and 239Pu thermal neutron fission in the energy range of 0.01-5 MeV and in 252Cf spontaneous fission in the energy range of 0.01-10 MeV are presented. The time-of-flight method was used. An excess of the spectra over the Maxwell distributions is observed at low energies, including the 252Cf fission neutron spectrum. The spectral analysis was performed after normalizing the spectra and the corresponding Maxwell distributions to one and the same area. In the range of 0.05-0.22 MeV the yield of 235U + nsub(t) fission neutrons is approximately 8% and approximately 15% greater than the yield of 252Cf and 239Pu + nsub(t) fission neutrons, respectively. In the range of 0.3-1.2 MeV the yield of 235U + nsub(t) fission neutrons is 8% greater than the fission neutron yield in the case of 239Pu + nsub(t) fission. The 235U + nsub(t) and 233U + nsub(t) fission neutron spectra do not differ from one another in the 0.05-0.6 MeV range

  17. Scintillation Response of CaF2 to H and He over a Continuous Energy Range

    International Nuclear Information System (INIS)

    Zhang, Yanwen; Xiang, Xia; Weber, William J.

    2008-01-01

    Recent demands for new radiation detector materials with improved γ-ray detection performance at room temperature have prompted research efforts on both accelerated material discovery and efficient techniques that can be used to identify material properties relevant to detector performance. New material discovery has been limited by the difficulty of growing crystals large enough to completely absorb γ-ray energies, whereas high-quality thin films or small crystals of candidate materials can be readily produced by various modern growth techniques. In this work, an ion-scintillator technique is demonstrated that can be applied to study scintillation properties of thin films and small crystals. The scintillation response of a benchmark scintillator, europium-doped calcium fluoride (CaF2:Eu), to energetic proton and helium ions is studied using the ion-scintillator approach based on a time-of-flight (TOF) telescope. Excellent energy resolution and fast response of the TOF telescope allow quantitative measurement of light yield, nonlinearity and energy resolution over an energy range from a few tens to a few thousands of keV

  18. Actinide data in the thermal energy range - International Evaluation Co-operation Volume 3

    International Nuclear Information System (INIS)

    Tellier, Henri; Weigmann, H.; Sowerby, M.; Mattes, Margarete; Matsunobu, Hiroyuki; Tsuchihashi, Keichiro; Halsall, M.J.; Weston, L.; Deruytter, A.J.

    1994-01-01

    A Working Party on International Evaluation Co-operation was established under the sponsorship of the OECD/NEA Nuclear Science Committee (NSC) to promote the exchange of information on nuclear data evaluations, validation, and related topics. Its aim is also to provide a framework for co-operative activities between members of the major nuclear data evaluation projects. This includes the possible exchange of scientists in order to encourage co-operation. Requirements for experimental data resulting from this activity are compiled. The Working Party determines common criteria for evaluated nuclear data files with a view to assessing and improving the quality and completeness of evaluated data. The Parties to the project are: ENDF (United States), JEFF/EFF (NEA Data Bank Member countries), and JENDL (Japan). Co-operation with evaluation projects of non-OECD countries is organised through the Nuclear Data Section of the International Atomic Energy Agency (IAEA). This report was issued by a Subgroup investigating actinide data in the thermal energy range. Thermal nuclear constants for the primary actinides have been extensively studied, but the most recent evaluations are not in full agreement with thermal reactor calculations. The objective of the Subgroup was to identify the origin of these differences and to reassess the recent evaluations. A considerable effort was devoted to the η of U-235, where analysis of lattice temperature coefficient measurements has suggested an energy-dependent shape below thermal energy

  19. Geothermal energy in Idaho: site data base and development status

    Energy Technology Data Exchange (ETDEWEB)

    McClain, D.W.

    1979-07-01

    Detailed site specific data regarding the commercialization potential of the proven, potential, and inferred geothermal resource areas in Idaho are presented. To assess the potential for geothermal resource development in Idaho, several kinds of data were obtained. These include information regarding institutional procedures for geothermal development, logistical procedures for utilization, energy needs and forecasted demands, and resource data. Area reports, data sheets, and scenarios were prepared that described possible geothermal development at individual sites. In preparing development projections, the objective was to base them on actual market potential, forecasted growth, and known or inferred resource conditions. To the extent possible, power-on-line dates and energy utilization estimates are realistic projections of the first events. Commercialization projections were based on the assumption that an aggressive development program will prove sufficient known and inferred resources to accomplish the projected event. This report is an estimate of probable energy developable under an aggressive exploration program and is considered extremely conservative. (MHR)

  20. Investigating the collision energy dependence of η /s in the beam energy scan at the BNL Relativistic Heavy Ion Collider using Bayesian statistics

    Science.gov (United States)

    Auvinen, Jussi; Bernhard, Jonah E.; Bass, Steffen A.; Karpenko, Iurii

    2018-04-01

    We determine the probability distributions of the shear viscosity over the entropy density ratio η/s in the quark-gluon plasma formed in Au + Au collisions at √(s_NN) = 19.6, 39, and 62.4 GeV, using Bayesian inference and Gaussian process emulators for a model-to-data statistical analysis that probes the full input parameter space of a transport + viscous hydrodynamics hybrid model. We find the most likely value of η/s to be larger at smaller √(s_NN), although the uncertainties still allow for a constant value between 0.10 and 0.15 for the investigated collision energy range.
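
    The emulator-based calibration described above can be sketched in miniature: a Gaussian-process surrogate is fit to a handful of runs of an invented one-parameter model, and a grid posterior over the parameter is computed from one pseudo-observation. The toy model, kernel settings, noise level, and flat prior are all assumptions for illustration; the actual analysis emulates many observables over a multi-dimensional parameter space.

```python
import numpy as np

# Toy stand-in for the expensive transport + hydrodynamics model: a single
# parameter (think eta/s) mapped to one observable. The model, kernel
# settings, and noise level are invented for illustration.
def model(theta):
    return np.tanh(3.0 * theta)

def rbf_kernel(a, b, length_scale=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Fit a Gaussian-process emulator to a handful of design-point runs.
design = np.linspace(0.0, 1.0, 8)
K = rbf_kernel(design, design) + 1e-8 * np.eye(design.size)
weights = np.linalg.solve(K, model(design))

# Pseudo-observation of the observable at an unknown parameter value.
theta_true, sigma = 0.4, 0.05
y_obs = model(theta_true) + 0.006  # fixed small "measurement error"

# Grid posterior: Gaussian likelihood via the emulator mean, flat prior.
grid = np.linspace(0.0, 1.0, 401)
emulated = rbf_kernel(grid, design) @ weights
log_post = -0.5 * ((y_obs - emulated) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()
theta_map = grid[np.argmax(post)]
```

    The emulator replaces the expensive model inside the likelihood, so the posterior can be evaluated densely at negligible cost, which is the essential point of the Gaussian-process approach.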

  1. Keystrokes Inference Attack on Android: A Comparative Evaluation of Sensors and Their Fusion

    Directory of Open Access Journals (Sweden)

    Ahmed Al-Haiqi

    2014-11-01

    Full Text Available Introducing motion sensors into smartphones contributed to a wide range of applications in human-phone interaction, gaming, and many others. However, built-in sensors that detect subtle motion changes (e.g. accelerometers) might also reveal information about taps on touch screens: the main user input mode. A few researchers have already demonstrated the idea of exploiting motion sensors as side-channels for inferring keystrokes. Taken at most as initial explorations, much research is still needed to analyze the practicality of the new threat and examine various aspects of its implementation. One important aspect directly affecting the attack's effectiveness is the selection of the right combination of sensors to supply inference data. Although other aspects also play a crucial role (e.g. the feature set), in this paper we start by focusing on the comparison of the different available sensors in terms of inference accuracy. We consider individual sensors shipped on Android phones, and study a few options for preprocessing their raw datasets as well as for fusing several sensors' readings. Our results indicate an outstanding performance of the gyroscope and the potential of sensor data fusion; however, sensors with a magnetometer component, and the accelerometer alone, appear to be of less benefit in the context of this attack.
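
    As a toy illustration of why fusing sensor streams can help such an attack, the sketch below classifies synthetic "taps" with a nearest-centroid rule, first from gyroscope features alone and then from fused accelerometer + gyroscope features. The data, feature dimensions, and class separations are all invented; the paper's evaluation uses real Android sensor logs and richer classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-tap features for a 4-key layout (all numbers invented):
# 3 accelerometer and 3 gyroscope features whose means depend weakly
# (accel) or strongly (gyro) on which key was tapped.
n_keys, n_taps = 4, 200
keys = rng.integers(0, n_keys, n_taps)
accel = 0.3 * rng.standard_normal((n_taps, 3)) + 0.2 * keys[:, None]
gyro = 0.3 * rng.standard_normal((n_taps, 3)) + 0.8 * keys[:, None]

def nearest_centroid_accuracy(X, y, n_classes):
    """Train on the first half of the taps, test on the second half."""
    half = len(y) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    centroids = np.stack([Xtr[ytr == k].mean(axis=0)
                          for k in range(n_classes)])
    dists = ((Xte[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((dists.argmin(axis=1) == yte).mean())

acc_gyro = nearest_centroid_accuracy(gyro, keys, n_keys)
# Feature-level fusion: simply concatenate both sensors' features.
acc_fused = nearest_centroid_accuracy(np.hstack([accel, gyro]), keys, n_keys)
```

    Feature-level concatenation is only one fusion option; decision-level fusion (combining per-sensor classifier outputs) is another common choice.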

  2. Performance analysis and experimental verification of mid-range wireless energy transfer through non-resonant magnetic coupling

    DEFF Research Database (Denmark)

    Peng, Liang; Wang, Jingyu; Zhejiang University, Hangzhou, China, L.

    2011-01-01

    In this paper, the efficiency analysis of a mid-range wireless energy transfer system is performed through non-resonant magnetic coupling. It is shown that the self-resistance of the coils and the mutual inductance are critical in achieving a high efficiency, which is indicated by our theoretical...

  3. Nano-ranged low-energy ion-beam-induced DNA transfer in biological cells

    Energy Technology Data Exchange (ETDEWEB)

    Yu, L.D., E-mail: yuld@fnrf.science.cmu.ac.th [Thailand Center of Excellence in Physics, Commission on Higher Education, 328 Si Ayutthaya Road, Bangkok 10400 (Thailand); Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Wongkham, W. [Department of Biology, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Prakrajang, K. [Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Sangwijit, K.; Inthanon, K. [Department of Biology, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Thongkumkoon, P. [Thailand Center of Excellence in Physics, Commission on Higher Education, 328 Si Ayutthaya Road, Bangkok 10400 (Thailand); Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Wanichapichart, P. [Thailand Center of Excellence in Physics, Commission on Higher Education, 328 Si Ayutthaya Road, Bangkok 10400 (Thailand); Membrane Science and Technology Research Center, Department of Physics, Faculty of Science, Prince of Songkla University, Hat Yai, Songkla 90112 (Thailand); Anuntalabhochai, S. [Department of Biology, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand)

    2013-06-15

    Low-energy ion beams at a few tens of keV were demonstrated to be able to induce exogenous macromolecules to transfer into plant and bacterial cells. In the process, an ion beam with well-controlled energy and fluence bombards living cells, causing a certain degree of nanoscale damage in the cell envelope that facilitates macromolecules such as DNA passing through the envelope and entering the cell. Consequently, the technique was applied to produce positive improvements in biological species. This physical DNA transfer method was highly efficient and carried less risk of side-effects than chemical and biological methods. For a better understanding of the mechanisms involved in the process, a systematic study of these mechanisms was carried out. Applications of the technique were also expanded from DNA transfer in plant and bacterial cells to DNA transfection in human cancer cells, potentially for stem cell therapy. The low-energy nitrogen and argon ion beams applied in our experiments had ranges of 100 nm or less in the cell envelope membrane, which is mainly composed of polymeric cellulose. The ion beam bombardment caused chain-scission-dominant damage in the polymer and changes in electrical properties, such as an increase in the impedance of the envelope membrane. These nano-modifications of the cell envelope ultimately enhanced the permeability of the envelope membrane to favor DNA transfer. The paper reports details of our research in this direction.

  4. Nano-ranged low-energy ion-beam-induced DNA transfer in biological cells

    International Nuclear Information System (INIS)

    Yu, L.D.; Wongkham, W.; Prakrajang, K.; Sangwijit, K.; Inthanon, K.; Thongkumkoon, P.; Wanichapichart, P.; Anuntalabhochai, S.

    2013-01-01

    Low-energy ion beams at a few tens of keV were demonstrated to be able to induce exogenous macromolecules to transfer into plant and bacterial cells. In the process, an ion beam with well-controlled energy and fluence bombards living cells, causing a certain degree of nanoscale damage in the cell envelope that facilitates macromolecules such as DNA passing through the envelope and entering the cell. Consequently, the technique was applied to produce positive improvements in biological species. This physical DNA transfer method was highly efficient and carried less risk of side-effects than chemical and biological methods. For a better understanding of the mechanisms involved in the process, a systematic study of these mechanisms was carried out. Applications of the technique were also expanded from DNA transfer in plant and bacterial cells to DNA transfection in human cancer cells, potentially for stem cell therapy. The low-energy nitrogen and argon ion beams applied in our experiments had ranges of 100 nm or less in the cell envelope membrane, which is mainly composed of polymeric cellulose. The ion beam bombardment caused chain-scission-dominant damage in the polymer and changes in electrical properties, such as an increase in the impedance of the envelope membrane. These nano-modifications of the cell envelope ultimately enhanced the permeability of the envelope membrane to favor DNA transfer. The paper reports details of our research in this direction.

  5. Making Type Inference Practical

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff; Oxhøj, Nicholas; Palsberg, Jens

    1992-01-01

    We present the implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding. The algorithm significantly improves our previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way. Also, the complexity has been dramatically improved, from exponential time to low polynomial time. The implementation uses the techniques of incremental graph construction and constraint template instantiation to avoid representing intermediate results, doing superfluous work, and recomputing type information. Experiments indicate that the implementation type checks as much as 100 lines per second. This results in a mature product, on which a number of tools can be based, for example a safety tool, an image compression tool, a code optimization tool, and an annotation tool. This may make type inference for object....
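
    The constraint-solving core of such type inference algorithms can be illustrated with a minimal unification procedure. This is a generic textbook sketch, not the OOPSLA algorithm or its graph-based implementation; the representation of types as tuples/strings is invented here, and the occurs check is omitted for brevity.

```python
# Tiny unification sketch: a type is either a type variable (a string
# like "'a") or a constructor tuple like ("List", arg) / ("Int",).
# Solving the constraints {'a = List 'b, 'b = Int} yields 'a = List Int.

def resolve(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend subst so that t1 and t2 become equal (no occurs check)."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):          # unbound type variable
        subst[t1] = t2
        return subst
    if isinstance(t2, str):
        subst[t2] = t1
        return subst
    name1, *args1 = t1
    name2, *args2 = t2
    if name1 != name2 or len(args1) != len(args2):
        raise TypeError(f"cannot unify {t1} with {t2}")
    for a, b in zip(args1, args2):
        subst = unify(a, b, subst)
    return subst

def apply(t, subst):
    """Substitute all bindings into a type term."""
    t = resolve(t, subst)
    if isinstance(t, str):
        return t
    name, *args = t
    return (name, *(apply(a, subst) for a in args))

subst = {}
unify("'a", ("List", "'b"), subst)
unify("'b", ("Int",), subst)
print(apply("'a", subst))  # ('List', ('Int',))
```

    A production inference engine adds the occurs check, union-find for near-constant-time variable resolution, and (as in the paper) incremental construction of the constraint graph.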

  6. Examples in parametric inference with R

    CERN Document Server

    Dixit, Ulhas Jayram

    2016-01-01

    This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimator are discussed. Chapter 6 discusses Bayes, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory cou...

  7. Characterization and optimization of laser-driven electron and photon sources in keV and MeV energy ranges

    International Nuclear Information System (INIS)

    Bonnet, Thomas

    2013-01-01

    This work takes place in the framework of the characterization and optimization of laser-driven electron and photon sources. With the goal of using these sources for nuclear physics experiments, we focused on two energy ranges: one around a few MeV and the other around a few tens of keV. The first part of this work is thus dedicated to the study of detectors routinely used for the characterization of laser-driven particle sources: imaging plates. A model has been developed and fitted to experimental data. Response functions to electrons, photons, protons and alpha particles are established for SR, MS and TR Fuji imaging plates for energies ranging from a few keV to several MeV. The second part of this work presents a study of ultrashort and intense electron and photon sources produced in the interaction of a laser with a solid or liquid target. An experiment was conducted at the ELFIE facility at LULI, where beams of electrons and photons were accelerated up to several MeV. The energy and angular distributions of the electron and photon beams were characterized. The sources were optimized by varying the spatial extension of the plasma at both the front and the back end of the initial target position. In the optimal configuration of the laser-plasma coupling, more than 10^11 electrons were accelerated. In the case of the liquid target, a photon source was produced at a high repetition rate in an energy range of tens of keV by the interaction of the AURORE laser at CELIA (10^16 W.cm^-2) with a melted gallium target. It was shown that both the mean energy and the photon number can be increased by creating gallium jets at the surface of the liquid target with a pre-pulse. A physical interpretation supported by numerical simulations is proposed. (author)

  8. Causal Effect Inference with Deep Latent-Variable Models

    NARCIS (Netherlands)

    Louizos, C; Shalit, U.; Mooij, J.; Sontag, D.; Zemel, R.; Welling, M.

    2017-01-01

    Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of

  9. Causal inference in survival analysis using pseudo-observations

    DEFF Research Database (Denmark)

    Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T

    2017-01-01

    Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs ...

  10. Supervised learning for infection risk inference using pathology data.

    Science.gov (United States)

    Hernandez, Bernard; Herrero, Pau; Rawson, Timothy Miles; Moore, Luke S P; Evans, Benjamin; Toumazou, Christofer; Holmes, Alison H; Georgiou, Pantelis

    2017-12-08

    Antimicrobial resistance is threatening our ability to treat common infectious diseases, and overuse of antimicrobials to treat human infections in hospitals is accelerating this process. Clinical Decision Support Systems (CDSSs) have been proven to enhance quality of care by promoting change in prescription practices through antimicrobial selection advice. However, bypassing an initial assessment to determine whether an underlying disease justifies the need for antimicrobial therapy may lead to indiscriminate and often unnecessary prescriptions. From pathology laboratory tests, six biochemical markers were selected and combined with microbiology outcomes from susceptibility tests to create a unique dataset of over one and a half million daily profiles on which to perform infection risk inference. Outliers were discarded using the inter-quartile range rule, and several sampling techniques were studied to tackle the class imbalance problem. The first phase selects the most effective and robust model during training using ten-fold stratified cross-validation. The second phase evaluates the final model, after isotonic calibration, in scenarios with missing inputs and imbalanced class distributions. More than 50% of infected profiles have daily requested laboratory tests for the six biochemical markers, with very promising infection inference results: area under the receiver operating characteristic curve (0.80-0.83), sensitivity (0.64-0.75) and specificity (0.92-0.97). Standardization consistently outperforms normalization, and sensitivity is enhanced by using the SMOTE sampling technique. Furthermore, models operated without noticeable loss in performance if at least four biomarkers were available. The selected biomarkers comprise enough information to perform infection risk inference with a high degree of confidence even in the presence of incomplete and imbalanced data. Since they are commonly available in hospitals, Clinical Decision Support Systems could
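
    The inter-quartile range rule mentioned above is easy to state concretely. The sketch below applies Tukey's fences (k = 1.5) to an invented vector of marker values; the data are purely illustrative, not from the study's dataset.

```python
import numpy as np

def iqr_filter(x, k=1.5):
    """Keep values inside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    fence = k * (q3 - q1)
    mask = (x >= q1 - fence) & (x <= q3 + fence)
    return x[mask], mask

# Invented marker values with two obvious outliers (9.7 and -4.0).
values = np.array([3.1, 2.9, 3.0, 3.2, 2.8, 9.7, 3.05, -4.0])
kept, mask = iqr_filter(values)
```

    Because the fences are built from quartiles rather than the mean and standard deviation, the rule is itself robust to the outliers it is meant to remove.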

  11. Definition by modelling, optimization and characterization of a neutron spectrometry system based on Bonner spheres extended to the high-energy range

    International Nuclear Information System (INIS)

    Serre, S.

    2010-01-01

    This research thesis first describes the problem of the effects of natural radiation on micro- and nano-electronic components, and the radiative stress due to atmospheric neutrons of cosmic origin: the issue of 'single event upsets' and present knowledge of the atmospheric radiative environment induced by cosmic rays. The author then presents neutron detection and spectrometry using the Bonner sphere technique: the principle of moderating spheres, the definition and mathematical formulation of Bonner sphere neutron spectrometry, active thermal-neutron sensors, the response of a conventional Bonner sphere system, and the extension to the high-energy range. He then reports the development of a Bonner sphere system extended to the high-energy range for the spectrometry of atmospheric neutrons: definition of a conventional system, Monte Carlo calculation of response functions, development of the response matrix, representation and semi-empirical verification of the fluence response, uncertainty analysis, extension to high energies, and measurement tests of the spectrometer. He reports the use of Monte Carlo simulation to characterize the spectrometer response in the high-energy range.
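
    The few-channel unfolding at the heart of Bonner sphere spectrometry can be sketched as a small linear inverse problem: sphere readings are the response matrix applied to the group fluences, and the fluences are recovered by regularized least squares. The response matrix, group structure, noise-free readings, and regularization weight below are all invented; operational unfolding codes use measured or calculated response functions and iterative algorithms seeded with a priori spectra.

```python
import numpy as np

# Hypothetical 5-sphere, 4-group response matrix: readings = R @ fluence.
# Rows: spheres of increasing moderator size; columns: energy groups.
R = np.array([
    [0.90, 0.50, 0.10, 0.02],
    [0.60, 0.80, 0.30, 0.05],
    [0.30, 0.70, 0.80, 0.20],
    [0.10, 0.40, 0.70, 0.60],
    [0.05, 0.20, 0.50, 0.90],
])
true_fluence = np.array([2.0, 1.0, 0.5, 0.25])
readings = R @ true_fluence  # noise-free toy "measurements"

# Unfold with lightly Tikhonov-regularized linear least squares.
lam = 1e-6
A = np.vstack([R, np.sqrt(lam) * np.eye(4)])
b = np.concatenate([readings, np.zeros(4)])
fluence_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    With real, noisy count rates the regularization weight matters greatly, since the smooth, overlapping sphere responses make the problem ill-conditioned.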

  12. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  13. On quantum statistical inference

    NARCIS (Netherlands)

    Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.

    2003-01-01

    Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have

  14. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  15. Extensive range overlap between heliconiine sister species: evidence for sympatric speciation in butterflies?

    Science.gov (United States)

    Rosser, Neil; Kozak, Krzysztof M; Phillimore, Albert B; Mallet, James

    2015-06-30

    Sympatric speciation is today generally viewed as plausible, and some well-supported examples exist, but its relative contribution to biodiversity remains to be established. We here quantify geographic overlap of sister species of heliconiine butterflies, and use age-range correlations and spatial simulations of the geography of speciation to infer the frequency of sympatric speciation. We also test whether shifts in mimetic wing colour pattern, host plant use and climate niche play a role in speciation, and whether such shifts are associated with sympatry. Approximately a third of all heliconiine sister species pairs exhibit near complete range overlap, and analyses of the observed patterns of range overlap suggest that sympatric speciation contributes 32%-95% of speciation events. Müllerian mimicry colour patterns and host plant choice are highly labile traits that seem to be associated with speciation, but we find no association between shifts in these traits and range overlap. In contrast, climatic niches of sister species are more conserved. Unlike birds and mammals, sister species of heliconiines are often sympatric, and our inferences using the most recent comparative methods suggest that sympatric speciation is common. However, if sister species spread rapidly into sympatry (e.g. due to their similar climatic niches), then assumptions underlying our methods would be violated. Furthermore, although we find some evidence for the role of ecology in speciation, ecological shifts did not show the associations with range overlap expected under sympatric speciation. We delimit species of heliconiines in three different ways, based on "strict" and "relaxed" biological species concepts (BSC), as well as on a surrogate for the widely used "diagnostic" version of the phylogenetic species concept (PSC). We show that one reason why more sympatric speciation is inferred in heliconiines than in birds may be a different culture of species delimitation in the two

  16. The Impact of Contextual Clue Selection on Inference

    Directory of Open Access Journals (Sweden)

    Leila Barati

    2010-05-01

    Full Text Available Linguistic information can be conveyed in the form of speech and written text, but it is the content of the message that is ultimately essential for higher-level processes in language comprehension, such as making inferences and associations between text information and knowledge about the world. Linguistically, inference is the shovel that allows receivers to dig meaning out of the text by selecting different embedded contextual clues. Naturally, people with different world experiences interpret similar contextual situations differently. Lack of contextual knowledge of the target language can present an obstacle to comprehension (Anderson & Lynch, 2003). This paper investigates how contextual clue selection from the text can influence the listener's inference. In the present study, 60 male and female teenagers (13-19) and 60 male and female young adults (20-26) were selected randomly based on the Oxford Placement Test (OPT). During the study, two fiction and two non-fiction passages were read to the participants in the experimental and control groups respectively, and they were given scores according to the Lexile score (LS)[1] based on their correct inference and logical thinking ability. In general, the results show that participants' clue selection, based on their personal schematic references and background knowledge, differs between teenagers and young adults and influences inference and listening comprehension. [1] A framework for reading and listening which matches a score to each text based on its degree of difficulty; each text was given a Lexile score from zero to four.

  17. Data-driven Inference and Investigation of Thermosphere Dynamics and Variations

    Science.gov (United States)

    Mehta, P. M.; Linares, R.

    2017-12-01

    This paper presents a methodology for data-driven inference and investigation of thermosphere dynamics and variations. The approach uses data-driven modal analysis to extract the most energetic modes of variation for neutral thermospheric species using proper orthogonal decomposition, where the time-independent modes or basis represent the dynamics and the time-dependent coefficients or amplitudes represent the model parameters. The data-driven modal analysis approach, combined with sparse, discrete observations, is used to infer amplitudes for the dynamic modes and to calibrate the energy content of the system. In this work, two different data types, namely number density measurements from TIMED/GUVI and mass density measurements from CHAMP/GRACE, are simultaneously ingested for an accurate and self-consistent specification of the thermosphere. The assimilation process is achieved with a non-linear least squares solver and allows estimation/tuning of the model parameters or amplitudes rather than the drivers. In this work, we use the Naval Research Lab's MSIS model to derive the most energetic modes for six different species: He, O, N2, O2, H, and N. We examine the dominant drivers of variations for helium in MSIS and observe that seasonal latitudinal variation accounts for about 80% of the dynamic energy, with a strong preference of helium for the winter hemisphere. We also observe enhanced helium presence near the poles at GRACE altitudes during periods of low solar activity (Feb 2007), as previously deduced. We will also examine the storm-time response of helium derived from observations. The results are expected to be useful in the tuning/calibration of physics-based models.
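
    The modal-analysis step can be sketched with synthetic snapshot data: build a space-time field from two known spatial modes with time-varying amplitudes, then recover the time-independent modes and their energy ranking via the singular value decomposition. The field, modes, and noise level below are invented; the paper derives its modes from MSIS output for six species.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic space-time "density" snapshots built from two known spatial
# modes with time-varying amplitudes, plus weak noise (all invented).
x = np.linspace(0.0, 2.0 * np.pi, 64)   # 64 spatial points
t = np.linspace(0.0, 10.0, 200)         # 200 time snapshots
field = (np.outer(np.sin(x), np.cos(t))
         + np.outer(np.cos(2 * x), 0.3 * np.sin(3 * t)))
field += 0.01 * rng.standard_normal(field.shape)

# POD: remove the temporal mean, then take the SVD of the snapshot
# matrix. Columns of U are the time-independent modes; rows of Vt
# (scaled by S) are the time-dependent amplitudes; S**2 ranks energy.
U, S, Vt = np.linalg.svd(field - field.mean(axis=1, keepdims=True),
                         full_matrices=False)
energy = S**2 / np.sum(S**2)
```

    Here the first two singular values capture essentially all of the variance, mirroring how a handful of POD modes can summarize the dominant thermospheric variations.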

  18. Development of a new method to characterize low-to-medium energy X-ray beams (E≤150 keV) used in dosimetry

    International Nuclear Information System (INIS)

    Deloule, Sybelle

    2014-01-01

    In the field of dosimetry, knowledge of the whole photon fluence spectrum is an essential parameter. In the low-to-medium energy range (i.e. E≤150 keV), the LNHB possesses five X-ray tubes and iodine-125 brachytherapy seeds, both emitting high fluence rates. The performance of calculations (either Monte Carlo codes or deterministic software) is limited by increasing uncertainties on fundamental parameters at low energies, and by modelling issues. Therefore, direct measurement using a high-purity germanium detector is preferred, even though it requires a time-consuming set-up and mathematical methods to infer the impinging spectrum from the measured one (such as stripping, model fitting or Bayesian inference). Concerning brachytherapy, knowledge of the seeds' parameters has been improved. Moreover, various calculated X-ray tube fluence spectra have been compared to measured ones after unfolding. The results of all these methods have then been assessed, as well as their impact on dosimetric parameters. (author) [fr
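
    Of the unfolding methods mentioned, stripping is the most direct: for an idealized response in which events deposit only at or below their true energy, it reduces to back-substitution from the highest channel downward. The 4-channel response matrix and spectrum below are invented for illustration, not LNHB data.

```python
import numpy as np

# Hypothetical 4-channel response matrix: column j gives the measured
# distribution for monoenergetic photons in channel j (full-energy
# peak on the diagonal, partial deposits in the channels below it).
R = np.array([
    [0.7, 0.2, 0.1, 0.1],
    [0.0, 0.6, 0.2, 0.1],
    [0.0, 0.0, 0.6, 0.2],
    [0.0, 0.0, 0.0, 0.5],
])
true_spectrum = np.array([100.0, 80.0, 60.0, 40.0])
measured = R @ true_spectrum

def strip(measured, R):
    """Walk from the highest channel down: divide out the full-energy
    efficiency, then remove that line's partial-deposit tail."""
    n = len(measured)
    remaining = measured.astype(float).copy()
    spectrum = np.zeros(n)
    for j in range(n - 1, -1, -1):
        spectrum[j] = remaining[j] / R[j, j]
        remaining -= spectrum[j] * R[:, j]
    return spectrum

unfolded = strip(measured, R)
```

    With a triangular response this is exact; real detector responses are only approximately triangular, which is why model fitting and Bayesian unfolding are used alongside stripping.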

  19. Performance of Ga(0.47)In(0.53)As cells over a range of proton energies

    Science.gov (United States)

    Weinberg, I.; Jain, R. K.; Vargasaburto, C.; Wilt, D. M.; Scheiman, D. A.

    1995-01-01

    Ga(0.47)In(0.53)As solar cells were processed by OMVPE and their characteristics determined at proton energies of 0.2, 0.5, and 3 MeV. Emphasis was on characteristics applicable to the use of this cell as the low-bandgap member of a monolithic, two-terminal, high-efficiency InP/GaInAs cell. It was found that the radiation-induced degradation in efficiency, I_SC, V_OC and diffusion length increased with decreasing proton energy. When the efficiency degradations were compared with InP, it was observed that the present cells showed considerably more degradation over the entire energy range. Similar to InP, the carrier removal rate R_C decreased with increasing proton energy. However, numerical values for R_C differed from those observed with InP. The difference is attributed to differing defect behavior between the two cell types. It was concluded that particular attention should be paid to the effects of low-energy protons, especially when the particle's track ends in one cell of the multi-bandgap device.

  20. Inferring Demographic History Using Two-Locus Statistics.

    Science.gov (United States)

    Ragsdale, Aaron P; Gutenkunst, Ryan N

    2017-06-01

    Population demographic history may be learned from contemporary genetic variation data. Methods based on aggregating the statistics of many single loci into an allele frequency spectrum (AFS) have proven powerful, but such methods ignore potentially informative patterns of linkage disequilibrium (LD) between neighboring loci. To leverage such patterns, we developed a composite-likelihood framework for inferring demographic history from aggregated statistics of pairs of loci. Using this framework, we show that two-locus statistics are more sensitive to demographic history than single-locus statistics such as the AFS. In particular, two-locus statistics escape the notorious confounding of depth and duration of a bottleneck, and they provide a means to estimate effective population size based on the recombination rather than mutation rate. We applied our approach to a Zambian population of Drosophila melanogaster. Notably, using both single- and two-locus statistics, we inferred a substantially lower ancestral effective population size than previous works and did not infer a bottleneck history. Together, our results demonstrate the broad potential for two-locus statistics to enable powerful population genetic inference. Copyright © 2017 by the Genetics Society of America.

  1. Statistical Inference on the Canadian Middle Class

    Directory of Open Access Journals (Sweden)

    Russell Davidson

    2018-03-01

    Full Text Available Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class and their share of total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of the sample estimates and the covariance of the estimates. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
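
    The bootstrap procedure for a middle-class share can be sketched as follows, with a synthetic lognormal earnings sample and an invented middle-class definition (incomes within 0.75 to 1.5 times the median); the paper of course uses Canadian census samples and carefully motivated definitions and procedures.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic earnings sample (lognormal, purely illustrative) and an
# invented middle-class definition: income in [0.75, 1.5] x median.
incomes = rng.lognormal(mean=10.0, sigma=0.6, size=2000)

def middle_share(x):
    med = np.median(x)
    return np.mean((x >= 0.75 * med) & (x <= 1.5 * med))

point = middle_share(incomes)

# Percentile bootstrap: resample with replacement, recompute the share,
# and take the empirical 2.5% / 97.5% quantiles as a 95% interval.
stats = np.array([
    middle_share(rng.choice(incomes, size=incomes.size, replace=True))
    for _ in range(999)
])
lo, hi = np.percentile(stats, [2.5, 97.5])
```

    Because the median is re-estimated inside each bootstrap replicate, the interval automatically accounts for the estimated cut-offs, which is awkward to do with plug-in asymptotic formulas.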

  2. The importance of learning when making inferences

    Directory of Open Access Journals (Sweden)

    Jorg Rieskamp

    2008-03-01

    Full Text Available The assumption that people possess a repertoire of strategies to solve the inference problems they face has been made repeatedly. The experimental findings of two previous studies on strategy selection are reexamined from a learning perspective, which argues that people learn to select strategies for making probabilistic inferences. This learning process is modeled with the strategy selection learning (SSL) theory, which assumes that people develop subjective expectancies for the strategies they have. They select strategies with probability proportional to their expectancies, which are updated on the basis of experience. For the study by Newell, Weston, and Shanks (2003) it can be shown that people did not anticipate the success of a strategy from the beginning of the experiment. Instead, the behavior observed at the end of the experiment was the result of a learning process that can be described by the SSL theory. For the second study, by Bröder and Schiffer (2006), the SSL theory is able to provide an explanation for why participants only slowly adapted to new environments in a dynamic inference situation. The reanalysis of the previous studies illustrates the importance of learning for probabilistic inferences.
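
    The expectancy-updating idea behind the SSL theory can be sketched in a few lines. The two strategies, their success probabilities, the initial expectancies, and the simple proportional-selection rule below are invented stand-ins, not the theory's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two invented strategies with invented success probabilities; a
# strategy is selected with probability proportional to its current
# expectancy, and its expectancy is reinforced by the payoff received.
success_prob = np.array([0.9, 0.3])   # hypothetical environment
expectancy = np.array([1.0, 1.0])     # initial expectancies

choices = []
for trial in range(1000):
    p = expectancy / expectancy.sum()
    s = rng.choice(2, p=p)
    payoff = 1.0 if rng.random() < success_prob[s] else 0.0
    expectancy[s] += payoff           # simple reinforcement update
    choices.append(s)

# Late in learning, selection concentrates on the better strategy.
late_share = 1.0 - np.mean(choices[-200:])
```

    Early trials are nearly random; only as the expectancies accumulate does selection become adaptive, matching the studies' observation that good strategy choice emerges over the course of the experiment rather than from the start.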

  3. Bayesian inference of substrate properties from film behavior

    International Nuclear Information System (INIS)

    Aggarwal, R; Demkowicz, M J; Marzouk, Y M

    2015-01-01

    We demonstrate that by observing the behavior of a film deposited on a substrate, certain features of the substrate may be inferred with quantified uncertainty using Bayesian methods. We carry out this demonstration on an illustrative film/substrate model where the substrate is a Gaussian random field and the film is a two-component mixture that obeys the Cahn–Hilliard equation. We construct a stochastic reduced order model to describe the film/substrate interaction and use it to infer substrate properties from film behavior. This quantitative inference strategy may be adapted to other film/substrate systems. (paper)

  4. Brain Imaging, Forward Inference, and Theories of Reasoning

    Science.gov (United States)

    Heit, Evan

    2015-01-01

    This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926

  5. Brain imaging, forward inference, and theories of reasoning.

    Science.gov (United States)

    Heit, Evan

    2014-01-01

    This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.

  6. Data-driven inference for the spatial scan statistic

    Directory of Open Access Journals (Sweden)

    Duczmal Luiz H

    2011-08-01

    Abstract Background Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. Results A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is proposed, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. Conclusions A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
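
    The modified inference question can be sketched as a conditional Monte Carlo p-value: compare the observed statistic only against null replicates whose most likely cluster has the same size k. The add-one Monte Carlo convention and the data layout are assumptions of this sketch, not details from the paper.

```python
def conditional_p_value(observed_stat, observed_size, null_results):
    """Monte Carlo p-value for the scan statistic, conditioned on cluster size.

    null_results holds (statistic, most_likely_cluster_size) pairs from
    replicates generated under the null hypothesis; only replicates whose
    most likely cluster has the same size k as the observed one are kept
    for comparison, following the modified inference question."""
    same_size = [s for s, k in null_results if k == observed_size]
    if not same_size:
        return None  # no null replicates of size k; cannot compare
    exceed = sum(1 for s in same_size if s >= observed_stat)
    return (exceed + 1) / (len(same_size) + 1)  # add-one MC convention
```

    When the unconditional p-value sits near the alpha level, conditioning on k in this way can move the decision in either direction, which is exactly the situation the authors highlight.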

  7. Methane formation during deuteron bombardment of carbon in the energy range of 100 to 1500 eV

    International Nuclear Information System (INIS)

    Sone, K.

    1982-01-01

    Methane (CD4) formation rates during deuteron bombardment of carbon (Papyex) have been measured in the energy range of 100 to 1500 eV. The temperature dependence of the methane formation rate is well explained by the model proposed by Erents et al. in the temperature range of 600 to 1150 K. The model, however, does not explain the dependence of the methane formation rate on the flux of incident deuterons at a certain temperature near Tm, at which the formation rate has a maximum value. An alternative model is proposed in which the methane formation rate is assumed to be proportional to the product of the following three parameters: the surface concentration of deuterium atoms, the chemical reaction rate for the formation of methane, and the rate of production of vacancies on the surface by the deuteron bombardment. This model predicts an energy dependence of methane formation which has a maximum around 900 eV even at different deuteron fluxes, when the calculated result by Weissman and Sigmund is used for the surface deposited energy responsible for the production of vacancies. (author)

  8. Investigation of the 93Nb neutron cross-sections in the resonance energy range

    CERN Document Server

    Grigoriev, Y V; Faikov-Stanchik, H; Ilchev, G; Kim, G N; Kitaev, V Ya; Mezentseva, Z V; Panteleev, T; Sinitsa, V V; Zhuravlev, B V

    2001-01-01

    The results of gamma-ray multiplicity spectra and transmission measurements for 93Nb in the energy range 21.5 eV-100 keV are presented. Gamma spectra of multiplicity 1 to 7 were measured on the 501 m and 121 m flight paths of the IBR-30, using a 16-section scintillation detector with NaI(Tl) crystals of a total volume of 36 l and a 16-section liquid scintillation detector of a total volume of 80 l, for metallic samples 50 and 80 mm in diameter and 1 and 1.5 mm thick with 100% 93Nb. In addition, the total and scattering cross-sections of 93Nb were measured by means of batteries of B-10 and He-3 counters on the 124 m, 504 m and 1006 m flight paths of the IBR-30. Multiplicity distribution spectra were obtained for resolved resonances in the energy region E=30-6000 eV and for energy groups in the region E=21.5 eV-100 keV. They were used for determination of the average multiplicity, resonance parameters and capture cross-section in energy groups and for low-lying resonances of ...

  9. Total cross section measurements for νμ, ν̄μ interactions in the 3-30 GeV energy range with the IHEP-JINR neutrino detector

    International Nuclear Information System (INIS)

    Anikeev, V.B.; Belikov, S.V.; Borisov, A.A.

    1995-01-01

    The results of total cross section measurements for νμ and ν̄μ interactions with an isoscalar target in the 3-30 GeV energy range are presented. The data were obtained with the IHEP-JINR Neutrino Detector in the 'natural' neutrino beams of the U-70 accelerator. A significant deviation from linear dependence of σtot on neutrino energy is observed in the energy range below 15 GeV. 46 refs., 10 figs., 5 tabs

  10. Inferring spatial and temporal behavioral patterns of free-ranging manatees using saltwater sensors of telemetry tags

    Science.gov (United States)

    Castelblanco-Martínez, Delma Nataly; Morales-Vela, Benjamin; Slone, Daniel H.; Padilla-Saldívar, Janneth Adriana; Reid, James P.; Hernández-Arana, Héctor Abuid

    2015-01-01

    Diving or respiratory behavior in aquatic mammals can be used as an indicator of physiological activity and consequently, to infer behavioral patterns. Five Antillean manatees, Trichechus manatus manatus, were captured in Chetumal Bay and tagged with GPS tracking devices. The radios were equipped with a micropower saltwater sensor (SWS), which records the times when the tag assembly was submerged. The information was analyzed to establish individual fine-scale behaviors. For each fix, we established the following variables: distance (D), sampling interval (T), movement rate (D/T), number of dives (N), and total diving duration (TDD). We used logic criteria and simple scatterplots to distinguish between behavioral categories: ‘Travelling’ (D/T ≥ 3 km/h), ‘Surface’ (↓TDD, ↓N), ‘Bottom feeding’ (↑TDD, ↑N) and ‘Bottom resting’ (↑TDD, ↓N). Habitat categories were qualitatively assigned: Lagoon, Channels, Caye shore, City shore, Channel edge, and Open areas. The instrumented individuals displayed a daily rhythm of bottom activities, with surfacing activities more frequent during the night and early in the morning. More investigation into those cycles and other individual fine-scale behaviors related to their proximity to concentrations of human activity would be informative.
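
    The logic criteria above translate directly into a per-fix classifier. The D/T ≥ 3 km/h travel criterion is taken from the abstract; the numeric cut-offs for "high" TDD and N are hypothetical stand-ins for the scatterplot-based thresholds the authors used.

```python
def classify_fix(dist_km, hours, n_dives, tdd_min,
                 tdd_high=30.0, n_high=10):
    """Assign a behavioral category to one GPS fix.

    dist_km / hours gives the movement rate D/T; tdd_min is total diving
    duration. tdd_high and n_high are assumed thresholds, not the study's."""
    rate = dist_km / hours if hours > 0 else 0.0
    if rate >= 3.0:
        return "Travelling"
    deep = tdd_min >= tdd_high   # high total diving duration
    many = n_dives >= n_high     # many dives
    if deep and many:
        return "Bottom feeding"
    if deep and not many:
        return "Bottom resting"
    return "Surface"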

  11. Fossil biogeography: a new model to infer dispersal, extinction and sampling from palaeontological data.

    Science.gov (United States)

    Silvestro, Daniele; Zizka, Alexander; Bacon, Christine D; Cascales-Miñana, Borja; Salamin, Nicolas; Antonelli, Alexandre

    2016-04-05

    Methods in historical biogeography have revolutionized our ability to infer the evolution of ancestral geographical ranges from phylogenies of extant taxa, the rates of dispersals, and biotic connectivity among areas. However, extant taxa are likely to provide limited and potentially biased information about past biogeographic processes, due to extinction, asymmetrical dispersals and variable connectivity among areas. Fossil data hold considerable information about past distribution of lineages, but suffer from largely incomplete sampling. Here we present a new dispersal-extinction-sampling (DES) model, which estimates biogeographic parameters using fossil occurrences instead of phylogenetic trees. The model estimates dispersal and extinction rates while explicitly accounting for the incompleteness of the fossil record. Rates can vary between areas and through time, thus providing the opportunity to assess complex scenarios of biogeographic evolution. We implement the DES model in a Bayesian framework and demonstrate through simulations that it can accurately infer all the relevant parameters. We demonstrate the use of our model by analysing the Cenozoic fossil record of land plants and inferring dispersal and extinction rates across Eurasia and North America. Our results show that biogeographic range evolution is not a time-homogeneous process, as assumed in most phylogenetic analyses, but varies through time and between areas. In our empirical assessment, this is shown by the striking predominance of plant dispersals from Eurasia into North America during the Eocene climatic cooling, followed by a shift in the opposite direction, and finally, a balance in biotic interchange since the middle Miocene. We conclude by discussing the potential of fossil-based analyses to test biogeographic hypotheses and improve phylogenetic methods in historical biogeography. © 2016 The Author(s).

  12. Using neural networks to infer the hydrodynamic yield of aspherical sources

    International Nuclear Information System (INIS)

    Moran, B.; Glenn, L.A.

    1993-07-01

    We distinguish two kinds of difficulties with yield determination from aspherical sources. The first kind, the spoofing difficulty, occurs when a fraction of the energy of the explosion is channeled in such a way that it is not detected by the CORRTEX cable. In this case, neither neural networks nor any expert system can be expected to accurately estimate the yield without detailed information about device emplacement within the canister. Numerical simulations however, can provide an upper bound on the undetected fraction of the explosive energy. In the second instance, the interpretation difficulty, the data appear abnormal when analyzed using similar-explosion-scaling and the assumption of a spherical front. The inferred yield varies with time and the confidence in the yield estimate decreases. It is this kind of problem we address in this paper and for which neural networks can make a contribution. We used a back propagation neural network to infer the hydrodynamic yield of simulated aspherical sources. We trained the network using a subset of simulations from 3 different aspherical sources, with 3 different yields, and 3 satellite offset separations. The trained network was able to predict the yield within 15% in all cases and to identify the correct type of aspherical source in most cases. The predictive capability of the network increased with a larger training set. The neural network approach can easily incorporate information from new calculations or experiments and is therefore flexible and easy to maintain. We describe the potential capabilities and limitations in using such networks for yield estimations.

  13. Using neural networks to infer the hydrodynamic yield of aspherical sources

    International Nuclear Information System (INIS)

    Moran, B.; Glenn, L.

    1993-01-01

    We distinguish two kinds of difficulties with yield determination from aspherical sources. The first kind, the spoofing difficulty, occurs when a fraction of the energy of the explosion is channeled in such a way that it is not detected by the CORRTEX cable. In this case, neither neural networks nor any expert system can be expected to accurately estimate the yield without detailed information about device emplacement within the canister. Numerical simulations however, can provide an upper bound on the undetected fraction of the explosive energy. In the second instance, the interpretation difficulty, the data appear abnormal when analyzed using similar-explosion-scaling and the assumption of a spherical front. The inferred yield varies with time and the confidence in the yield estimate decreases. It is this kind of problem we address in this paper and for which neural networks can make a contribution. We used a back propagation neural network to infer the hydrodynamic yield of simulated aspherical sources. We trained the network using a subset of simulations from 3 different aspherical sources, with 3 different yields, and 3 satellite offset separations. The trained network was able to predict the yield within 15% in all cases and to identify the correct type of aspherical source in most cases. The predictive capability of the network increased with a larger training set. The neural network approach can easily incorporate information from new calculations or experiments and is therefore flexible and easy to maintain. We describe the potential capabilities and limitations in using such networks for yield estimations.

  14. Statistical inference an integrated approach

    CERN Document Server

    Migon, Helio S; Louzada, Francisco

    2014-01-01

    Introduction: Information; The concept of probability; Assessing subjective probabilities; An example; Linear algebra and probability; Notation; Outline of the book. Elements of Inference: Common statistical models; Likelihood-based functions; Bayes theorem; Exchangeability; Sufficiency and exponential family; Parameter elimination. Prior Distribution: Entirely subjective specification; Specification through functional forms; Conjugacy with the exponential family; Non-informative priors; Hierarchical priors. Estimation: Introduction to decision theory; Bayesian point estimation; Classical point estimation; Empirical Bayes estimation; Comparison of estimators; Interval estimation; Estimation in the Normal model. Approximating Methods: The general problem of inference; Optimization techniques; Asymptotic theory; Other analytical approximations; Numerical integration methods; Simulation methods. Hypothesis Testing: Introduction; Classical hypothesis testing; Bayesian hypothesis testing; Hypothesis testing and confidence intervals; Asymptotic tests. Prediction...

  15. Statistical learning and selective inference.

    Science.gov (United States)

    Taylor, Jonathan; Tibshirani, Robert J

    2015-06-23

    We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
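
    The "higher bar" the authors describe can be illustrated with the simplest possible case: testing the largest of m independent z-statistics. A Šidák-style correction is used here as an elementary stand-in for the paper's selective inference machinery for stepwise regression, the lasso, and PCA.

```python
import math

def naive_and_adjusted_p(z_max, m):
    """Two-sided p-values for the largest absolute z-statistic among m
    independent candidates: 'naive' pretends it was a single preplanned
    test, while the Sidak-style adjustment accounts for the search."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    p_naive = 2 * (1 - phi(abs(z_max)))
    p_adjusted = 1 - (1 - p_naive) ** m   # P(max of m nulls exceeds z_max)
    return p_naive, p_adjusted
```

    A z-statistic of 2.0 looks "significant" as a preplanned test but not once one admits it was cherry-picked from ten candidates, which is the core message of selective inference.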

  16. Limited Range Sesame EOS for Ta

    Energy Technology Data Exchange (ETDEWEB)

    Greeff, Carl William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Crockett, Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rudin, Sven Peter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burakovsky, Leonid [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-30

    A new Sesame EOS table for Ta has been released for testing. It is a limited-range table covering T ≤ 26,000 K and ρ ≤ 37.53 g/cc. The EOS is based on earlier analysis using DFT phonon calculations to infer the cold pressure from the Hugoniot. The cold curve has been extended into compression using new DFT calculations. The present EOS covers expansion into the gas phase. It is a multi-phase EOS with distinct liquid and solid phases. A cold shear modulus table (431) is included. This is based on an analytic interpolation of DFT calculations.

  17. Foraging optimally for home ranges

    Science.gov (United States)

    Mitchell, Michael S.; Powell, Roger A.

    2012-01-01

    Economic models predict behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between benefits gained from resources and costs of defending them. Intuitively, home ranges should be similarly structured, but trade-offs are difficult to assess because there are no costs of defense, thus economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under 2 strategies of optimization, resource maximization and area minimization. We show how constraints such as competitors can influence structure of home ranges through resource depression, ultimately structuring density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.
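
    The area-minimization strategy can be sketched as a greedy rule: add the richest patches first until a resource target is met, so the home range covers as few patches as possible. This omits the travel-cost discounting of the authors' models; the patch values and target below are hypothetical.

```python
def area_minimizing_range(resources, target):
    """Greedy sketch of the area-minimization strategy.

    resources maps patch id -> resource value; patches are added in
    decreasing order of value until the cumulative total meets `target`.
    Travel-cost discounting from the original models is omitted."""
    chosen, total = [], 0.0
    for patch, value in sorted(resources.items(), key=lambda kv: -kv[1]):
        if total >= target:
            break
        chosen.append(patch)
        total += value
    return chosen, total
```

    On a landscape with patchily distributed resources, the patches selected this way need not be contiguous, which is why statistically estimated contiguous home ranges can mislead.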

  18. The aggregate site frequency spectrum for comparative population genomic inference.

    Science.gov (United States)

    Xue, Alexander T; Hickerson, Michael J

    2015-12-01

    Understanding how assemblages of species responded to past climate change is a central goal of comparative phylogeography and comparative population genomics, an endeavour that has increasing potential to integrate with community ecology. New sequencing technology now provides the potential to perform complex demographic inference at unprecedented resolution across assemblages of nonmodel species. To this end, we introduce the aggregate site frequency spectrum (aSFS), an expansion of the site frequency spectrum to use single nucleotide polymorphism (SNP) data sets collected from multiple, co-distributed species for assemblage-level demographic inference. We describe how the aSFS is constructed over an arbitrary number of independent population samples and then demonstrate how the aSFS can differentiate various multispecies demographic histories under a wide range of sampling configurations while allowing effective population sizes and expansion magnitudes to vary independently. We subsequently couple the aSFS with a hierarchical approximate Bayesian computation (hABC) framework to estimate degree of temporal synchronicity in expansion times across taxa, including an empirical demonstration with a data set consisting of five populations of the threespine stickleback (Gasterosteus aculeatus). Corroborating what is generally understood about the recent postglacial origins of these populations, the joint aSFS/hABC analysis strongly suggests that the stickleback data are most consistent with synchronous expansion after the Last Glacial Maximum (posterior probability = 0.99). The aSFS will have general application for multilevel statistical frameworks to test models involving assemblages and/or communities, and as large-scale SNP data from nonmodel species become routine, the aSFS expands the potential for powerful next-generation comparative population genomic inference. © 2015 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
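
    The construction of an aggregate spectrum over co-distributed species can be sketched as follows. The folded per-species SFS is standard; the aggregation rule shown (normalize each species' SFS to proportions, then sort values within each frequency bin across species) is a simplified stand-in for the paper's aSFS definition.

```python
def site_frequency_spectrum(snp_counts, n):
    """Folded SFS from derived-allele counts among n haploid samples."""
    sfs = [0] * (n // 2)
    for c in snp_counts:
        f = min(c, n - c)      # fold: minor-allele count
        if f > 0:
            sfs[f - 1] += 1
    return sfs

def aggregate_sfs(species_sfs):
    """Assumed aggregation: normalize each species' SFS to proportions,
    then sort the values within each frequency bin across species, which
    discards species identity as the aSFS construction requires."""
    norm = [[v / sum(s) for v in s] for s in species_sfs]
    bins = list(zip(*norm))
    return [sorted(b, reverse=True) for b in bins]
```

    Sorting within bins is what makes the statistic exchangeable across taxa, so it can be compared against simulations of multispecies demographic histories regardless of which species expanded when.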

  19. Energy of nuclear-active particles in the 1-50 TeV energy range

    International Nuclear Information System (INIS)

    Kotlyarevskij, D.M.; Garsevanishvili, L.P.; Morozov, I.V.

    1979-01-01

    The ''TsKhRA-TsKARO'' installation designed for the investigation of cosmic beams interaction with nuclei in the energy range of 1-50 TeV is described. The installation comprises a magnetic spark spectrometer, photography system, ionization calorimeter and scintillating trigger device. The ionization calorimeter contains 12 rows of ionization chambers interlayed with 4 lead and 8 iron absorbers. The rows of ionization chambers of the calorimeter upper half are placed in the mutually-perpendicular direction to restore the direction and spatial situation of the electron-nuclear cascade. All in all the ionization calorimeter contains 100 channels of preintensifiers from which the information is transmitted to the terminal intensifiers and then to the storage cells. The aperture ratio of the whole installation (taking account of the upper chambers) is 0.5 m 2 /steradian, and the aperture ratio of the calorimeter together with the lower chambers is approximately 2 m 2 /steradian. Presented are integral spectra of the hadrons at the height of mountains and at the height of 2500 m above sea level obtained with the help of the installation described. Comparison of the number of charged and neutral particles recorded with the ''TsKhRA-TsKARO'' installation agrees well with the known ratio of charged and neutral cosmic hadrons on the heights of mountains n/c = 0.52+-0.05

  20. Object-Oriented Type Inference

    DEFF Research Database (Denmark)

    Schwartzbach, Michael Ignatieff; Palsberg, Jens

    1991-01-01

    We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an op...

  1. Investigation of amorphization energies for heavy ion implants into silicon carbide at depths far beyond the projected ranges

    Energy Technology Data Exchange (ETDEWEB)

    Friedland, E., E-mail: erich.friedland@up.ac.za

    2017-01-15

    At ion energies with inelastic stopping powers less than a few keV/nm, radiation damage is thought to be due to atomic displacements by elastic collisions only. However, it is well known that inelastic processes and non-linear effects due to defect interaction within collision cascades can significantly increase or decrease damage efficiencies. The importance of these processes changes significantly along the ion trajectory and becomes negligible at some distance beyond the projected range, where damage is mainly caused by slowly moving secondary recoils. Hence, in this region amorphization energies should become independent of the ion type and only reflect the properties of the target lattice. To investigate this, damage profiles were obtained from α-particle channeling spectra of 6H-SiC wafers implanted at room temperature with ions in the mass range 84 ⩽ M ⩽ 133, employing the computer code DICADA. An average amorphization dose of (0.7 ± 0.2) dpa and critical damage energy of (17 ± 6) eV/atom are obtained from TRIM simulations at the experimentally observed boundary positions of the amorphous zones.

  2. A closed-form formulation for the build-up factor and absorbed energy for photons and electrons in the Compton energy range in Cartesian geometry

    International Nuclear Information System (INIS)

    Borges, Volnei; Vilhena, Marco Tullio; Fernandes, Julio Cesar Lombaldo

    2011-01-01

    In this work, we report on a closed-form formulation for the build-up factor and absorbed energy, in one and two dimensional Cartesian geometry for photons and electrons, in the Compton energy range. For the one-dimensional case we use the LTS N method, assuming the Klein-Nishina scattering kernel for the determination of the angular radiation intensity for photons. We apply the two-dimensional LTS N nodal solution for the averaged angular radiation evaluation for the two-dimensional case, using the Klein-Nishina kernel for photons and the Compton kernel for electrons. From the angular radiation intensity we construct a closed-form solution for the build-up factor and evaluate the absorbed energy. We present numerical simulations and comparisons against results from the literature. (author)

  3. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    Science.gov (United States)

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to and the space to where is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
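
    The core idea, computing the exact distribution of a sum of independent count variables by combining their distributions pairwise in a tree, can be sketched directly. A naive O(n²) inner convolution is used here for clarity where the paper's speedup comes from FFT-based convolution.

```python
def convolve(p, q):
    """Exact distribution of X+Y for independent X~p, Y~q,
    each given as a list of probabilities indexed by value."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def convolution_tree(dists):
    """Combine distributions pairwise, tree-style, so each input takes
    part in only O(log n) convolutions rather than n-1 sequential ones."""
    while len(dists) > 1:
        nxt = [convolve(dists[i], dists[i + 1])
               for i in range(0, len(dists) - 1, 2)]
        if len(dists) % 2:
            nxt.append(dists[-1])  # odd one out passes to the next level
        dists = nxt
    return dists[0]
```

    For example, three Bernoulli(0.5) inputs yield the Binomial(3, 0.5) distribution exactly, which is the adder-node computation the probabilistic convolution tree performs inside a graphical model.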

  4. Indexing the Environmental Quality Performance Based on A Fuzzy Inference Approach

    Science.gov (United States)

    Iswari, Lizda

    2018-03-01

    Environmental performance strongly affects the quality of human life. In Indonesia, this performance is quantified through the Environmental Quality Index (EQI), which consists of three indicators: a river quality index, an air quality index, and coverage of land cover. Currently, the instrument's data are processed by averaging and weighting each index to represent the EQI at the provincial level. However, we found that EQI interpretations may contain uncertainties and cover a range of circumstances that are possibly less appropriate to process with a common statistical approach. In this research, we aim to manage the indicators of the EQI with a more intuitive computation technique and to make inferences about environmental performance in the 33 provinces of Indonesia. The research was conducted in the three stages of a Mamdani Fuzzy Inference System (MAFIS): fuzzification, data inference, and defuzzification. The input consists of 10 environmental parameters and the output is an index of Environmental Quality Performance (EQP). Applied to the 2015 environmental condition data set and quantified on a scale of 0 to 100, the results show 10 provinces with good performance (EQP above 80), dominated by provinces in the eastern part of Indonesia; 22 provinces with EQP between 50 and 80; and one province on Java Island with EQP below 20. This research shows that environmental quality performance can be quantified without eliminating the nature of the data set while simultaneously revealing the environment's behavior along with its spatial distribution pattern.
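
    The three Mamdani stages can be sketched for a toy two-input version of the problem. The triangular membership functions, the two rules, and the 0-100 output universe are hypothetical stand-ins for the study's 10-parameter rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_eqp(river_idx, air_idx):
    """Two-input, one-output Mamdani sketch.

    Stage 1 (fuzzification): membership of each input in 'good' and 'poor'.
    Stage 2 (inference): min for AND, max for OR, clip the consequents.
    Stage 3 (defuzzification): centroid of the aggregated output set."""
    good_r, good_a = tri(river_idx, 50, 100, 150), tri(air_idx, 50, 100, 150)
    poor_r, poor_a = tri(river_idx, -50, 0, 50), tri(air_idx, -50, 0, 50)
    fire_high = min(good_r, good_a)   # rule: both good -> high EQP
    fire_low = max(poor_r, poor_a)    # rule: either poor -> low EQP
    num = den = 0.0
    for y in range(0, 101):           # discretized output universe 0..100
        mu = max(min(fire_high, tri(y, 50, 100, 150)),
                 min(fire_low, tri(y, -50, 0, 50)))
        num += mu * y
        den += mu
    return num / den if den else 50.0
```

    Feeding in high-quality inputs drives the centroid toward the top of the scale and poor inputs toward the bottom, mirroring how the EQP separates provinces above 80 from the one below 20.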

  5. Reward inference by primate prefrontal and striatal neurons.

    Science.gov (United States)

    Pan, Xiaochuan; Fan, Hongwei; Sawa, Kosuke; Tsuda, Ichiro; Tsukada, Minoru; Sakagami, Masamichi

    2014-01-22

    The brain contains multiple yet distinct systems involved in reward prediction. To understand the nature of these processes, we recorded single-unit activity from the lateral prefrontal cortex (LPFC) and the striatum in monkeys performing a reward inference task using an asymmetric reward schedule. We found that neurons both in the LPFC and in the striatum predicted reward values for stimuli that had been previously well experienced with set reward quantities in the asymmetric reward task. Importantly, these LPFC neurons could predict the reward value of a stimulus using transitive inference even when the monkeys had not yet learned the stimulus-reward association directly; whereas these striatal neurons did not show such an ability. Nevertheless, because there were two set amounts of reward (large and small), the selected striatal neurons were able to exclusively infer the reward value (e.g., large) of one novel stimulus from a pair after directly experiencing the alternative stimulus with the other reward value (e.g., small). Our results suggest that although neurons that predict reward value for old stimuli in the LPFC could also do so for new stimuli via transitive inference, those in the striatum could only predict reward for new stimuli via exclusive inference. Moreover, the striatum showed more complex functions than was surmised previously for model-free learning.

  6. Bootstrap inference when using multiple imputation.

    Science.gov (United States)

    Schomaker, Michael; Heumann, Christian

    2018-04-16

    Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
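
    One of the intuitively appealing combinations, resampling the incomplete data first and imputing each bootstrap sample, can be sketched as follows. The mean-imputation routine, the simple averaging of the m per-imputation estimates, and the percentile limits are deliberate simplifications, not the paper's recommended procedure.

```python
import random
import statistics

def boot_mi_ci(data, estimator, impute, n_boot=200, m=5, alpha=0.05, seed=0):
    """'Bootstrap then impute' sketch: resample the incomplete data,
    create m imputed versions of each resample, average the estimates,
    and take percentile confidence limits from the bootstrap distribution.
    impute(sample, rng) must return a completed copy of the sample."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        est = statistics.mean(estimator(impute(sample, rng)) for _ in range(m))
        stats.append(est)
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean_impute(sample, rng):
    """Toy imputation: replace missing values (None) by the observed mean."""
    obs = [x for x in sample if x is not None]
    mu = sum(obs) / len(obs)
    return [mu if x is None else x for x in sample]

data = [1.0, 2.0, 3.0, None, 5.0, 4.0, None, 3.0, 2.0, 4.0]
lo, hi = boot_mi_ci(data, lambda s: sum(s) / len(s), mean_impute)
```

    This kind of scheme is exactly what is needed for estimators like the g-formula, where no analytic standard error exists; the paper's point is that only some orderings of the bootstrap and imputation steps yield valid inference.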

  7. Evolutionary inference via the Poisson Indel Process.

    Science.gov (United States)

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
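
    The key property of the PIP, insertions arriving at a constant total rate while each character is deleted independently, is easy to simulate on a single branch. The sketch below is illustrative only: it ignores substitutions and the phylogeny, the rates and lengths are invented, and the expected-length formula is the standard immigration-death result.

```python
import numpy as np

def pip_length(L0, lam, mu, t, rng):
    """Sequence length after time t under the indel part of the PIP:
    insertions arrive at total rate lam, independent of the current
    length; each character is deleted independently at rate mu."""
    length, clock = L0, 0.0
    while True:
        rate = lam + mu * length
        clock += rng.exponential(1.0 / rate)
        if clock > t:
            return length
        if rng.random() < lam / rate:
            length += 1          # insertion event
        else:
            length -= 1          # deletion of one character

# Made-up branch: initial length 50, lam = 2, mu = 0.1, branch length 5.
rng = np.random.default_rng(3)
L0, lam, mu, t = 50, 2.0, 0.1, 5.0
sims = [pip_length(L0, lam, mu, t, rng) for _ in range(2000)]

# Immigration-death expectation for the mean sequence length.
expected = L0 * np.exp(-mu * t) + (lam / mu) * (1.0 - np.exp(-mu * t))
print(np.mean(sims), expected)   # the two should agree closely
```

It is this length-independent insertion rate (contrasting with the TKF91 model, whose insertion rate grows with sequence length) that gives the PIP its Poisson-process characterization and linear-time marginal likelihood.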

  8. Range-Energy Program for relativistic heavy ions in the region 1 < E < 3000 MeV/amu

    International Nuclear Information System (INIS)

    Salamon, M.H.

    1980-01-01

    This report describes a computer program (developed for analysis of Bevalac data) that calculates stopping power, range, or energy, and which includes the corrections necessary for accuracy in the high-Z_1, high-β regime, as discussed in the theoretical and review papers of S.P. Ahlen. The program is also available on PSS at LBL's Computer Center.
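
    The core computation of such a program is the continuous-slowing-down range, R(E) = ∫ dE'/S(E'). A minimal sketch, using a hypothetical power-law stopping power in place of the corrected Bethe theory the report implements, and checked against the analytic range for that power law:

```python
import numpy as np

def csda_range(E, stopping, n=20000, E_min=1e-6):
    """Continuous-slowing-down range R(E) = integral of dE'/S(E'),
    evaluated with the trapezoidal rule on a uniform energy grid."""
    grid = np.linspace(E_min, E, n)
    f = 1.0 / stopping(grid)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(grid)) / 2.0)

# Hypothetical power-law stopping power S(E) = a * E**p; a real code
# would use Bethe theory with the high-Z_1, beta corrections above.
a, p = 2.0, -0.8
S = lambda E: a * E ** p

E = 100.0
R_num = csda_range(E, S)
R_exact = E ** (1 - p) / (a * (1 - p))   # analytic range for the power law
print(R_num, R_exact)
```

Inverting the same integral numerically gives the energy-from-range mode, which is why such programs typically expose stopping power, range, and energy as three views of one quadrature.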

  9. System Support for Forensic Inference

    Science.gov (United States)

    Gehani, Ashish; Kirchner, Florent; Shankar, Natarajan

    Digital evidence is playing an increasingly important role in prosecuting crimes. The reasons are manifold: financially lucrative targets are now connected online, systems are so complex that vulnerabilities abound and strong digital identities are being adopted, making audit trails more useful. If the discoveries of forensic analysts are to hold up to scrutiny in court, they must meet the standard for scientific evidence. Software systems are currently developed without consideration of this fact. This paper argues for the development of a formal framework for constructing “digital artifacts” that can serve as proxies for physical evidence; a system so imbued would facilitate sound digital forensic inference. A case study involving a filesystem augmentation that provides transparent support for forensic inference is described.

  10. Energy efficiency and CO_2 mitigation potential of the Turkish iron and steel industry using the LEAP (long-range energy alternatives planning) system

    International Nuclear Information System (INIS)

    Ates, Seyithan A.

    2015-01-01

    With the assistance of the LEAP (long-range energy alternatives planning) energy modeling tool, this study explores the energy efficiency and CO_2 emission reduction potential of the iron and steel industry in Turkey. With a share of 35%, the iron and steel industry is considered the most energy-consuming sector in Turkey. The study shows that the energy intensity rate can be lowered by 13%, 38% and 51% in the SEI (slow-speed energy efficiency improvement), AEI (accelerating energy efficiency improvement) and CPT (cleaner production and technology) scenarios, respectively. In particular, the projected aggregated energy savings of the CPT and AEI scenarios are very promising, with saving rates of 33.7% and 23%, respectively. Compared to the baseline scenario, energy efficiency improvements correspond to an economic potential of 0.1 billion dollars for SEI, 1.25 billion dollars for AEI and 1.8 billion dollars for CPT annually. Concerning GHG (greenhouse gas) emissions, in 2030 the iron and steel industry in Turkey is estimated to produce 34.9 MtCO_2 in BAU (the business-as-usual scenario), 32.5 MtCO_2 in SEI, 24.6 MtCO_2 in AEI and 14.5 MtCO_2 in CPT, which corresponds to savings of 9%–39%. The study reveals that energy consumption and GHG emissions of the iron and steel industry can be lowered significantly if the necessary measures are implemented. It is expected that this study will fill knowledge gaps pertaining to energy efficiency potential in Turkish energy-intensive industries and help stakeholders in energy-intensive industries to realize the potential for energy efficiency and GHG mitigation. - Highlights: • This paper explores the energy efficiency potential of the iron and steel industry in Turkey. • We applied LEAP modeling to forecast future developments. • Four different scenarios were developed for the LEAP modeling. • There is a huge potential for energy efficiency and mitigation of GHG emissions.

  11. Geostatistical inference using crosshole ground-penetrating radar

    DEFF Research Database (Denmark)

    Looms, Majken C; Hansen, Thomas Mejer; Cordua, Knud Skou

    2010-01-01

    of the subsurface are used to evaluate the uncertainty of the inversion estimate. We have explored the full potential of the geostatistical inference method using several synthetic models of varying correlation structures and have tested the influence of different assumptions concerning the choice of covariance...... reflection profile. Furthermore, the inferred values of the subsurface global variance and the mean velocity have been corroborated with moisture-content measurements, obtained gravimetrically from samples collected at the field site....

  12. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2016-01-01

    Full Text Available This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.
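
    As a toy illustration of the magnitude-change-point idea (not the BMCPM, BCCPM, or DBVPM models themselves), the posterior over a single change in the mean of a Gaussian series with known noise and a flat prior over locations can be computed in closed form:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over a single change point in the mean of a Gaussian
    series: flat prior over locations, known noise sigma, and the two
    segment means profiled out at their maximum-likelihood values."""
    n = len(y)
    logp = np.full(n, -np.inf)
    for t in range(2, n - 1):                 # change between t-1 and t
        r1 = y[:t] - y[:t].mean()
        r2 = y[t:] - y[t:].mean()
        logp[t] = -0.5 * (np.sum(r1**2) + np.sum(r2**2)) / sigma**2
    post = np.exp(logp - logp.max())
    return post / post.sum()

# Synthetic series: mean jumps from 0 to 2 after sample 60.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
post = changepoint_posterior(y)
print("MAP change point:", int(post.argmax()))
```

The fMRI models reviewed above generalize exactly this construction: multiple change points, multivariate signals, and connectivity rather than magnitude as the quantity that changes across segment boundaries.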

  13. Crossed beam study of He+-O2 charge transfer reactions in the collision energy range 0.5-200 eV

    International Nuclear Information System (INIS)

    Bischof, G.; Linder, F.

    1986-01-01

    Energy spectra and angular distributions of the O⁺ and O₂⁺ product ions resulting from the He⁺-O₂ charge transfer reaction have been measured in the collision energy range 0.5-200 eV using the crossed-beam method. The O₂⁺ ions represent only a minor fraction of the reaction products (0.2-0.6% over the energy range measured). In the dissociative charge transfer reaction, four main processes are identified leading to O+O⁺ reaction products in different electronic states. Two different mechanisms can be distinguished, each being responsible for two of the observed processes: (i) a long-distance energy-resonant charge transfer process involving the c⁴Σu⁻(v'=0) state of O₂⁺ and (ii) a slightly exothermic charge transfer process via the (III)²Πu state of O₂⁺ (with the exothermicity depending on the collision energy). Angle-integrated branching ratios and partial cross sections (in absolute units) have been determined. The branching ratios of the individual processes show a pronounced dependence on the collision energy. At low energies, the O⁺ product ions are preferentially formed in the ²P⁰ and ²D⁰ excited states. The angular distributions of the O⁺ product ions show an anisotropic behaviour indicating an orientation-dependent charge transfer probability in the He⁺-O₂ reaction. (orig.)

  14. Results of TGE Study in 0.03-10 MeV Energy Range in Ground Experiments near Moscow and Aragats

    International Nuclear Information System (INIS)

    Bogomolov, V.; Kovalenko, A.; Panasyuk, M.; Saleev, K.; Svertilov, S.; Maximov, I.; Garipov, G.; Iyudin, A.; Chilingarian, A.; Hovsepyan, G.; Karapetyan, T.; Mntasakanyan, E.

    2017-01-01

    Ground-based experiments with scintillator gamma-spectrometers were conducted to study the spectral, temporal and spatial characteristics of TGEs, as well as to search for the fast hard X-ray and gamma-ray flashes possibly appearing at the moment of lightning. The time of each gamma-quantum interaction was recorded with ∼15 μs accuracy together with detailed spectral data. The measurements are similar to those reported at TEPA-2015, but some important improvements of the instruments were made for the 2016 season. First, a GPS module was used to synchronize the instrument time with UTC. The accuracy of this synchronization allows one to examine the gamma-ray data at the moment of lightning fixed by a radio-wave detector or any other instrument. Second, the energy range of the gamma-spectrometers was shifted to higher energies, where the radiation of natural isotopes is absent. In this case one can see background changes connected with particles accelerated in the thundercloud together with the background increases during rain caused by Rn-222 daughters. Long-term measurements with two instruments placed at different points of the Moscow region were made in the 2016 season. The first, based on 80x80 mm CsI(Tl), has an energy range of 0.03-6 MeV. The range of the second, based on 100x100 mm CsI(Tl), is 0.05-10 MeV. A dozen thunderstorms with increased Rn-222 radiation were detected, but no significant increase of the gamma-ray flux above 3.2 MeV was observed during these periods. A lot of data was obtained from the experiment with a small gamma-ray spectrometer (40x40 mm NaI(Tl)) at mountain altitude in Armenia at the Aragats station. The analysis of readings during the TGE periods indicates the presence of Rn-222 radiation in the low-energy range (E < 1 MeV). The detector was improved during TEPA-2016: a new 50x50 mm NaI(Tl) crystal was used and the energy range was extended up to 5 MeV. Exact timing with a GPS sensor was added, along with fast recording of the output signal at the moments of triggers from UV flash

  15. Stopping power of liquid water for carbon ions in the energy range between 1 MeV and 6 MeV

    International Nuclear Information System (INIS)

    Rahm, J M; Baek, W Y; Rabus, H; Hofsäss, H

    2014-01-01

    The stopping power of liquid water was measured for the first time for carbon ions in the energy range between 1 and 6 MeV using the inverted Doppler shift attenuation method. The feasibility study carried out within the scope of the present work shows that this method is well suited for the quantification of the controversial condensed phase effect in the stopping power for heavy ions in the intermediate energy range. The preliminary results of this work indicate that the stopping power of water for carbon ions with energies prevailing in the Bragg-peak region is significantly lower than that of water vapor. In view of the relatively high uncertainty of the present results, a new experiment with uncertainties less than the predicted difference between the stopping powers of both water phases is planned. (paper)

  16. Optimization of whole-body simulator for photon emitters in the energy range 100 to 3000 KeV

    International Nuclear Information System (INIS)

    Dantas, Bernardo M.; Rosales, Geovana O.

    1996-01-01

    The calibration of the detection system for the in vivo determination of uniformly distributed radionuclides emitting photons in the energy range of 100 to 3000 keV requires the use of phantoms with dimensions close to the human body, to which known amounts of radionuclides are added. After the measurement of these phantoms, the calibration curves, channel x energy and energy x efficiency, are constructed. This type of phantom has been continuously optimized at the IRD-CNEN whole body counter with the objective of approximating its characteristics as closely as possible to the standard man proposed in ICRP 23. Furthermore, efforts have been made to obtain a structure that is safe in terms of leakage and also of low cost. (author)

  17. Origin of the ankle in the ultrahigh energy cosmic ray spectrum, and of the extragalactic protons below it

    Science.gov (United States)

    Unger, Michael; Farrar, Glennys R.; Anchordoqui, Luis A.

    2015-12-01

    The sharp change in slope of the ultrahigh energy cosmic ray (UHECR) spectrum around 10^18.6 eV (the ankle), combined with evidence of a light but extragalactic component near and below the ankle and intermediate composition above, has proved exceedingly challenging to understand theoretically, without fine-tuning. We propose a mechanism whereby photo-disintegration of ultrahigh energy nuclei in the region surrounding a UHECR accelerator accounts for the observed spectrum and inferred composition at Earth. For suitable source conditions, the model reproduces the spectrum and the composition over the entire extragalactic cosmic ray energy range, i.e. above 10^17.5 eV. Predictions for the spectrum and flavors of neutrinos resulting from this process are also presented.

  18. Working memory supports inference learning just like classification learning.

    Science.gov (United States)

    Craig, Stewart; Lewandowsky, Stephan

    2013-08-01

    Recent research has found a positive relationship between people's working memory capacity (WMC) and their speed of category learning. To date, only classification-learning tasks have been considered, in which people learn to assign category labels to objects. It is unknown whether learning to make inferences about category features might also be related to WMC. We report data from a study in which 119 participants undertook classification learning and inference learning, and completed a series of WMC tasks. Working memory capacity was positively related to people's classification and inference learning performance.

  19. Bayesian inference for heterogeneous caprock permeability based on above zone pressure monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Namhata, Argha; Small, Mitchell J.; Dilmore, Robert M.; Nakles, David V.; King, Seth

    2017-02-01

    The presence of faults/fractures or highly permeable zones in the primary sealing caprock of a CO2 storage reservoir can result in leakage of CO2. Monitoring of leakage requires the capability to detect and resolve the onset, location, and volume of leakage in a systematic and timely manner. Pressure-based monitoring possesses such capabilities. This study demonstrates a basis for monitoring network design based on the characterization of CO2 leakage scenarios through an assessment of the integrity and permeability of the caprock inferred from above zone pressure measurements. Four representative heterogeneous fractured seal types are characterized to demonstrate seal permeability ranging from highly permeable to impermeable. Based on Bayesian classification theory, the probability of each fractured caprock scenario given above zone pressure measurements with measurement error is inferred. The sensitivity to injection rate and caprock thickness is also evaluated and the probability of proper classification is calculated. The time required to distinguish between above zone pressure outcomes and the associated leakage scenarios is also computed.
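
    The Bayesian classification step can be illustrated with a one-measurement toy version: each caprock scenario predicts an above-zone pressure response, the measurement carries Gaussian error, and Bayes' rule yields a posterior over the discrete scenario set. The predicted pressures and error level below are invented for illustration.

```python
import numpy as np

def scenario_posterior(measured, predicted, sigma, prior=None):
    """Bayes classification over a discrete set of leakage scenarios:
    Gaussian measurement error around each scenario's predicted
    above-zone pressure response, optional non-uniform prior."""
    predicted = np.asarray(predicted, dtype=float)
    if prior is None:
        prior = np.full(len(predicted), 1.0 / len(predicted))
    like = np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

# Hypothetical predicted pressure rises (MPa) for four seal types,
# ordered from impermeable to highly permeable -- invented numbers.
pred = [0.02, 0.10, 0.40, 1.50]
post = scenario_posterior(measured=0.35, predicted=pred, sigma=0.10)
print(post.round(3))   # posterior mass concentrates on the third scenario
```

In the study this calculation is repeated as pressure time series accumulate, which is what yields the reported time-to-classification for each scenario.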

  20. Elastic electron differential cross sections for argon atom in the intermediate energy range from 40 eV to 300 eV

    Science.gov (United States)

    Ranković, Miloš Lj.; Maljković, Jelena B.; Tökési, Károly; Marinković, Bratislav P.

    2018-02-01

    Measurements and calculations of electron elastic differential cross sections (DCS) for the argon atom in the energy range from 40 to 300 eV are presented. DCS have been measured in the crossed-beam arrangement of the electron spectrometer with an energy resolution of 0.5 eV and an angular resolution of 1.5° in the range of scattering angles from 20° to 126°. Both the angular behaviour and the energy dependence of the DCS are obtained in separate sets of experiments, while the absolute scale is achieved via the relative flow method, using helium as a reference gas. All data are corrected for the energy transmission function, changes of primary electron beam current and target pressure, and effective path length (volume correction). DCSs are calculated in a relativistic framework by expressing the Mott cross section as a partial wave expansion. Our results are compared with other available data.

  1. Adaptive neuro-fuzzy inference system for forecasting rubber milk production

    Science.gov (United States)

    Rahmat, R. F.; Nurmawan; Sembiring, S.; Syahputra, M. F.; Fadli

    2018-02-01

    Natural rubber is classified as the top export commodity in Indonesia. Its high production leads to a significant contribution to Indonesia's foreign exchange. Before natural rubber is ready to be exported to other countries, the production of rubber milk is the primary concern. In this research, we use an adaptive neuro-fuzzy inference system (ANFIS) to forecast rubber milk production. The data presented here are taken from PT. Anglo Eastern Plantation (AEP), which has high data variance and range for rubber milk production. Our data span from January 2009 until December 2015. The best forecasting result is 1.182% in terms of Mean Absolute Percentage Error (MAPE).
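
    For reference, the MAPE figure quoted above is the mean of the absolute relative errors, expressed in percent. A minimal sketch with made-up monthly figures (the production values are invented, not AEP data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Invented monthly rubber-milk production figures, for illustration.
actual = [120.0, 135.0, 128.0, 140.0]
forecast = [118.0, 137.0, 126.0, 143.0]
print(round(mape(actual, forecast), 3))   # -> 1.713
```

Note that MAPE is undefined when an actual value is zero and penalizes over- and under-forecasts asymmetrically, which is worth keeping in mind when comparing forecasting models by this metric alone.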

  2. Statistical inference for stochastic processes

    National Research Council Canada - National Science Library

    Basawa, Ishwar V; Prakasa Rao, B. L. S

    1980-01-01

    The aim of this monograph is to attempt to reduce the gap between theory and applications in the area of stochastic modelling, by directing the interest of future researchers to the inference aspects...

  3. A closed-form formulation for the build-up factor and absorbed energy for photons and electrons in the Compton energy range in Cartesian geometry

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Volnei; Vilhena, Marco Tullio, E-mail: borges@ufrgs.b, E-mail: vilhena@pq.cnpq.b [Universidade Federal do Rio Grande do Sul (PROMEC/UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Fernandes, Julio Cesar Lombaldo, E-mail: julio.lombaldo@ufrgs.b [Universidade Federal do Rio Grande do Sul (DMPA/UFRGS), Porto Alegre, RS (Brazil). Dept. de Matematica Pura e Aplicada. Programa de Pos Graduacao em Matematica Aplicada

    2011-07-01

    In this work, we report on a closed-form formulation for the build-up factor and absorbed energy, in one- and two-dimensional Cartesian geometry, for photons and electrons in the Compton energy range. For the one-dimensional case we use the LTS_N method, assuming the Klein-Nishina scattering kernel for the determination of the angular radiation intensity for photons. We apply the two-dimensional LTS_N nodal solution for the averaged angular radiation evaluation in the two-dimensional case, using the Klein-Nishina kernel for photons and the Compton kernel for electrons. From the angular radiation intensity we construct a closed-form solution for the build-up factor and evaluate the absorbed energy. We present numerical simulations and comparisons against results from the literature. (author)

  4. Bacterial growth laws reflect the evolutionary importance of energy efficiency.

    Science.gov (United States)

    Maitra, Arijit; Dill, Ken A

    2015-01-13

    We are interested in the balance of energy and protein synthesis in bacterial growth. How has evolution optimized this balance? We describe an analytical model that leverages extensive literature data on growth laws to infer the underlying fitness landscape and to draw inferences about what evolution has optimized in Escherichia coli. Is E. coli optimized for growth speed, energy efficiency, or some other property? Experimental data show that at its replication speed limit, E. coli produces about four mass equivalents of nonribosomal proteins for every mass equivalent of ribosomes. This ratio can be explained if the cell's fitness function is the energy efficiency of cells under fast growth conditions, indicating a tradeoff between the high energy costs of ribosomes under fast growth and the high energy costs of turning over nonribosomal proteins under slow growth. This model gives insight into some of the complex nonlinear relationships between energy utilization and ribosomal and nonribosomal production as a function of cell growth conditions.

  5. Inference of Large Phylogenies Using Neighbour-Joining

    DEFF Research Database (Denmark)

    Simonsen, Martin; Mailund, Thomas; Pedersen, Christian Nørgaard Storm

    2011-01-01

    The neighbour-joining method is a widely used method for phylogenetic reconstruction which scales to thousands of taxa. However, advances in sequencing technology have made data sets with more than 10,000 related taxa widely available. Inference of such large phylogenies takes hours or days using...... the Neighbour-Joining method on a normal desktop computer because of the O(n^3) running time. RapidNJ is a search heuristic which reduces the running time of the Neighbour-Joining method significantly, but at the cost of an increased memory consumption, making inference of large phylogenies infeasible. We present...... two extensions for RapidNJ which reduce the memory requirements and allow phylogenies with more than 50,000 taxa to be inferred efficiently on a desktop computer. Furthermore, an improved version of the search heuristic is presented which reduces the running time of RapidNJ on many data...
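
    For context, the canonical O(n^3) Neighbour-Joining loop that RapidNJ accelerates can be written compactly. This sketch recovers only the topology (a full implementation also assigns branch lengths) and is not the RapidNJ heuristic itself, which speeds up the Q-matrix minimum search with sorted candidate lists while producing the same tree:

```python
import numpy as np

def neighbour_joining(D, labels):
    """Plain O(n^3) neighbour joining on a distance matrix D, returning
    the unrooted topology as nested tuples of the input labels."""
    D = np.asarray(D, dtype=float).copy()
    nodes = list(labels)
    while len(nodes) > 2:
        n = len(nodes)
        r = D.sum(axis=1)
        Q = (n - 2) * D - r[:, None] - r[None, :]   # NJ selection criterion
        np.fill_diagonal(Q, np.inf)
        i, j = np.unravel_index(np.argmin(Q), Q.shape)
        if i > j:
            i, j = j, i
        # distances from the new internal node to the remaining nodes
        d_new = 0.5 * (D[i] + D[j] - D[i, j])
        keep = [k for k in range(n) if k not in (i, j)]
        nodes = [nodes[k] for k in keep] + [(nodes[i], nodes[j])]
        D = np.vstack([np.hstack([D[np.ix_(keep, keep)], d_new[keep][:, None]]),
                       np.hstack([d_new[keep], [0.0]])])
    return (nodes[0], nodes[1])

# Additive distances from the tree ((A,B),(C,D)) with unit-scale branches.
D = [[0, 3, 5, 6],
     [3, 0, 6, 7],
     [5, 6, 0, 7],
     [6, 7, 7, 0]]
tree = neighbour_joining(D, "ABCD")
print(tree)
```

On the additive distances above, the first Q-matrix minimum joins A with B and the remaining step joins C with D, recovering the generating topology; the cubic cost comes from rebuilding and scanning Q at every one of the n-3 join steps.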

  6. sick: The Spectroscopic Inference Crank

    Science.gov (United States)

    Casey, Andrew R.

    2016-03-01

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
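
    The generative-model idea can be reduced to a toy one-parameter example: synthesize model spectra, attach a Gaussian noise model, and evaluate the posterior on a parameter grid (where sick would instead run MCMC and marginalize nuisance parameters such as redshift and continuum). The line profile, noise level, and parameter below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-parameter "grid of synthetic spectra": a Gaussian
# absorption line whose depth scales with theta (invented model).
wave = np.linspace(0.0, 1.0, 200)
def model(theta):
    return 1.0 - theta * np.exp(-0.5 * ((wave - 0.5) / 0.05) ** 2)

# Simulated noisy observation with true theta = 0.6.
sigma = 0.05
obs = model(0.6) + rng.normal(0.0, sigma, wave.size)

# Grid posterior under a flat prior and Gaussian noise model.
thetas = np.linspace(0.0, 1.0, 501)
loglike = np.array([-0.5 * np.sum((obs - model(t)) ** 2) / sigma**2
                    for t in thetas])
post = np.exp(loglike - loglike.max())
post /= post.sum()
print("posterior mean theta:", (thetas * post).sum())
```

Real spectra require the extra machinery the abstract lists, multi-dimensional interpolation over model grids, outlier mixtures, and an inflated-variance noise model, but the inference skeleton is the same.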

  7. SICK: THE SPECTROSCOPIC INFERENCE CRANK

    Energy Technology Data Exchange (ETDEWEB)

    Casey, Andrew R., E-mail: arc@ast.cam.ac.uk [Institute of Astronomy, University of Cambridge, Madingley Road, Cambdridge, CB3 0HA (United Kingdom)

    2016-03-15

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal

  8. SICK: THE SPECTROSCOPIC INFERENCE CRANK

    International Nuclear Information System (INIS)

    Casey, Andrew R.

    2016-01-01

    There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal

  9. Calculation of Bremsstrahlung radiation of electrons on atoms in wide energy range of photons

    CERN Document Server

    Romanikhin, V P

    2002-01-01

    The calculation of the complete bremsstrahlung spectra of electrons on krypton atoms in the photon energy range 10-25000 eV, and on lanthanum near the 4d-shell ionization potential, is carried out. The summed polarizability of the atoms is calculated on the basis of a simple semiclassical approximation of the local electron density and experimental data on photoabsorption. A comparison is made with results calculated by the method of distorted partial waves (PDWA) for Kr, and with experimental data for La.

  10. Measurement and uncertainties of energy loss in silicon over a wide Z_1 range using time of flight detector telescopes

    CERN Document Server

    Whitlow, H J; Elliman, R G; Weijers, T D M; Zhang Yan Wen; O'connor, D J

    2002-01-01

    The energy loss of projectiles with Z_1 in the range 3-26 has been experimentally measured in the 0.1-0.7 MeV per nucleon energy range in the same Si stopping foil of 105.5 μg cm⁻² thickness using a time of flight-energy (ToF-E) elastic recoil detection analysis (ERDA) setup. A detailed study of the experimental uncertainties for the ToF-E and ToF-ToF-E configurations has been made. For ERDA configurations where the energy calibration is taken against the edge positions, small uncertainties in the angle at which recoils are detected can introduce significant absolute uncertainty. The relative uncertainty contribution is dominated by the energy calibration of the Si E detector for the ToF-E configuration and by the position of the second ToF detector in ToF-ToF-E measurements. The much smaller calibration uncertainty of the ToF-ToF-E configuration implies this technique is superior to ToF-E measurements with Si E detectors. At low energies the effect of charge changing in the time detector foils can become...
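
    In a ToF-E setup the recoil energy follows directly from the flight time over a known path, E = mv²/2 in the non-relativistic limit appropriate at 0.1-0.7 MeV per nucleon; this is also why timing and path-length calibration dominate the uncertainty budget. A sketch with a hypothetical geometry (the flight path, mass, and time are invented, not taken from the measurement described above):

```python
AMU_MEV = 931.494        # atomic mass unit in MeV/c^2
C_MM_NS = 299.792458     # speed of light in mm/ns

def tof_energy(mass_u, flight_mm, tof_ns):
    """Non-relativistic recoil energy (MeV) from a measured flight
    time (ns) over a known flight path (mm)."""
    beta = flight_mm / tof_ns / C_MM_NS
    return 0.5 * mass_u * AMU_MEV * beta ** 2

# Hypothetical geometry: 500 mm between the ToF foils, a 28Si recoil
# arriving 60 ns after the start signal (numbers invented).
E = tof_energy(28.0, 500.0, 60.0)
print(round(E, 2), "MeV")    # roughly 0.36 MeV per nucleon
```

Differentiating the same relation shows ΔE/E = 2ΔL/L + 2Δt/t, which is the quantitative form of the calibration-sensitivity argument made in the abstract.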

  11. On principles of inductive inference

    OpenAIRE

    Kostecki, Ryszard Paweł

    2011-01-01

    We propose an intersubjective epistemic approach to foundations of probability theory and statistical inference, based on relative entropy and category theory, and aimed to bypass the mathematical and conceptual problems of existing foundational approaches.

  12. On inertial range scaling laws

    International Nuclear Information System (INIS)

    Bowman, J.C.

    1994-12-01

    Inertial-range scaling laws for two- and three-dimensional turbulence are re-examined within a unified framework. A new correction to Kolmogorov's k^(-5/3) scaling is derived for the energy inertial range. A related modification is found to Kraichnan's logarithmically corrected two-dimensional enstrophy cascade law that removes its unexpected divergence at the injection wavenumber. The significance of these corrections is illustrated with steady-state energy spectra from recent high-resolution closure computations. The results also underscore the asymptotic nature of inertial-range scaling laws. Implications for conventional numerical simulations are discussed.
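
    For reference, the classical laws being corrected are Kolmogorov's three-dimensional energy-cascade spectrum and Kraichnan's log-corrected two-dimensional enstrophy-cascade spectrum:

```latex
E(k) = C_K \,\varepsilon^{2/3} k^{-5/3},
\qquad
E(k) = C' \,\eta^{2/3} k^{-3} \left[\ln(k/k_i)\right]^{-1/3},
```

with \varepsilon the energy dissipation rate, \eta the enstrophy dissipation rate, and k_i the injection wavenumber; the divergence mentioned above arises from the logarithm as k approaches k_i.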

  13. Energy savings and increased electric vehicle range through improved battery thermal management

    International Nuclear Information System (INIS)

    Smith, Joshua; Hinterberger, Michael; Schneider, Christoph; Koehler, Juergen

    2016-01-01

Lithium-ion cells are temperature sensitive: operation outside the optimal operating range causes premature aging and correspondingly reduces vehicle range and battery system lifetime. In order to meet consumer demands for electric and hybrid-electric vehicle performance, especially in adverse climates, a battery thermal management system (BTMS) is often required. This work presents a novel experimental method for analyzing BTMS using three sample cooling plate concepts. For each concept, the input parameters (ambient temperature, coolant temperature and coolant flow rate) are varied and the resulting effect on the average temperature and temperature distribution across and between cells is compared. Additionally, the pressure loss along the coolant path is utilized as an indicator of energy efficiency. Using the presented methodology, various cooling plate layouts optimized for alternative production techniques are compared to the state of the art. It is shown that these production-optimized cooling plates provide sufficient thermal performance with the additional benefit of mechanical integration within the battery and/or vehicle system. It is also shown that the coolant flow influences battery cell thermal behavior more than the solid material does, and that pressure drop is more sensitive to geometrical changes in the cooling plate than module temperatures are.

  14. Model calculation of neutron reaction data for 31P in the energy range from 0.1 to 20 MeV

    International Nuclear Information System (INIS)

    Li Jiangting; Ge Zhigang; Sun Xiuquan

    2006-01-01

The neutron data calculation of 31P in the energy range from 0.1 to 20 MeV was carried out. The neutron optical potential parameters for 31P in the energy range from 0.1 to 20 MeV were obtained, based on fitting the available neutron experimental data with the code APOM94. The DWUCK4 code was used to investigate the cross section for neutron direct inelastic scattering. The re-evaluated neutron data are based on the available measured data by using the UNF code. The theoretical results reproduce the experimental data well, and the results were given in ENDF/B-6 format. (authors)

  15. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
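The averaging rule the abstract describes — weight each model's prediction by its normalized evidence — can be sketched numerically (a hypothetical two-model example; the function name and numbers are illustrative, not from the paper):

```python
# Minimal sketch of Bayesian model averaging: each model's prediction is
# weighted by its posterior probability, proportional to its evidence.
import math

def model_average(predictions, log_evidences):
    """Weight per-model predictions by normalized model evidence."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]   # shift to avoid underflow
    z = sum(w)
    weights = [wi / z for wi in w]
    return sum(wi * p for wi, p in zip(weights, predictions)), weights

# Two models predicting a scalar quantity; model A has stronger evidence.
avg, w = model_average([1.0, 3.0], [math.log(0.8), math.log(0.2)])
# weights recover 0.8/0.2; averaged prediction is 0.8*1.0 + 0.2*3.0 = 1.4
```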

  16. Quantification of the validity of simulations based on Geant4 and FLUKA for photo-nuclear interactions in the high energy range

    Science.gov (United States)

    Quintieri, Lina; Pia, Maria Grazia; Augelli, Mauro; Saracco, Paolo; Capogni, Marco; Guarnieri, Guido

    2017-09-01

Photo-nuclear interactions are relevant in many research fields of both fundamental and applied physics, and accurate Monte Carlo simulations of photo-nuclear interactions can therefore provide valuable and indispensable support in a wide range of applications (e.g. from the optimisation of photo-neutron source targets to dosimetric estimation at high-energy accelerators). Unfortunately, few experimental photo-nuclear data are available above 100 MeV, so that in the high energy range (from hundreds of MeV up to the GeV scale) the code predictions are based on physical models. The aim of this work is to compare the predictions of relevant observables involving photon-nuclear interaction modelling, obtained with GEANT4 and FLUKA, to experimental data (where available), in order to assess the reliability of the code estimates over a wide energy range. In particular, the comparison of the estimated photo-neutron yields and energy spectra with the experimental results of the n@BTF experiment (carried out at the Beam Test Facility of the DAΦNE collider in Frascati, Italy) is reported and discussed here. Moreover, preliminary results of the comparison of the cross sections used in the codes with the "evaluated" data recommended by the IAEA are also presented for some selected cases (W, Pb, Zn).

  17. Bootstrapping phylogenies inferred from rearrangement data

    Directory of Open Access Journals (Sweden)

    Lin Yu

    2012-08-01

Full Text Available Abstract Background Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. Results We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Conclusions Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale.
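The classic sequence bootstrap that the authors take as their standard resamples the homologous characters (alignment columns) with replacement; a minimal sketch (the toy alignment and function name are illustrative, not from the paper):

```python
# Sketch of the classic sequence bootstrap: resample alignment columns
# (homologous characters) with replacement to build a pseudo-alignment.
import random

def bootstrap_alignment(alignment, rng=None):
    """Return one bootstrap replicate of a {taxon: sequence} alignment."""
    rng = rng or random.Random(0)
    n_cols = len(next(iter(alignment.values())))
    cols = [rng.randrange(n_cols) for _ in range(n_cols)]   # sample with replacement
    return {taxon: "".join(seq[c] for c in cols)
            for taxon, seq in alignment.items()}

aln = {"human": "ACGTAC", "mouse": "ACGTTC", "fly": "AGGTTA"}
replicate = bootstrap_alignment(aln)
# Each replicate keeps the taxa and alignment length but resamples sites;
# support for a branch is the fraction of replicate trees that contain it.
```

For rearrangement data this recipe fails because the whole genome is a single character, which is the problem the paper's jackknife/bootstrap hybrid addresses.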

  19. Classification versus inference learning contrasted with real-world categories.

    Science.gov (United States)

    Jones, Erin L; Ross, Brian H

    2011-07-01

    Categories are learned and used in a variety of ways, but the research focus has been on classification learning. Recent work contrasting classification with inference learning of categories found important later differences in category performance. However, theoretical accounts differ on whether this is due to an inherent difference between the tasks or to the implementation decisions. The inherent-difference explanation argues that inference learners focus on the internal structure of the categories--what each category is like--while classification learners focus on diagnostic information to predict category membership. In two experiments, using real-world categories and controlling for earlier methodological differences, inference learners learned more about what each category was like than did classification learners, as evidenced by higher performance on a novel classification test. These results suggest that there is an inherent difference between learning new categories by classifying an item versus inferring a feature.

  20. Statistical inference via fiducial methods

    OpenAIRE

    Salomé, Diemer

    1998-01-01

In this thesis the attention is restricted to inductive reasoning using a mathematical probability model. A statistical procedure prescribes, for every theoretically possible set of data, the inference about the unknown of interest. ... See: Summary

  1. Transport properties of gaseous ions over a wide energy range. Part III

    International Nuclear Information System (INIS)

    Ellis, H.W.; Thackston, M.G.; McDaniel, E.W.; Mason, E.A.

    1984-01-01

This paper updates and extends in scope our two previous papers entitled ''Transport Properties of Gaseous Ions over a Wide Energy Range.'' The references to the earlier publications (referred to as ''Part I'' and ''Part II'') are I, H. W. Ellis, R. Y. Pai, E. W. McDaniel, E. A. Mason, and L. A. Viehland, ATOMIC DATA AND NUCLEAR DATA TABLES 17, 177-210 (1976); and II, H. W. Ellis, E. W. McDaniel, D. L. Albritton, L. A. Viehland, S. L. Lin, and E. A. Mason, ATOMIC DATA AND NUCLEAR DATA TABLES 22, 179-217 (1978). Parts I and II contained compilations of experimental data on ionic mobilities and diffusion coefficients (both longitudinal and transverse) for ions in neutral gases (almost exclusively at room temperature) in an externally applied electric field.

  2. Measurement of the cosmic optical background using the long range reconnaissance imager on New Horizons.

    Science.gov (United States)

    Zemcov, Michael; Immel, Poppy; Nguyen, Chi; Cooray, Asantha; Lisse, Carey M; Poppe, Andrew R

    2017-04-11

    The cosmic optical background is an important observable that constrains energy production in stars and more exotic physical processes in the universe, and provides a crucial cosmological benchmark against which to judge theories of structure formation. Measurement of the absolute brightness of this background is complicated by local foregrounds like the Earth's atmosphere and sunlight reflected from local interplanetary dust, and large discrepancies in the inferred brightness of the optical background have resulted. Observations from probes far from the Earth are not affected by these bright foregrounds. Here we analyse the data from the Long Range Reconnaissance Imager (LORRI) instrument on NASA's New Horizons mission acquired during cruise phase outside the orbit of Jupiter, and find a statistical upper limit on the optical background's brightness similar to the integrated light from galaxies. We conclude that a carefully performed survey with LORRI could yield uncertainties comparable to those from galaxy counting measurements.

  3. Information-Theoretic Inference of Large Transcriptional Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Meyer Patrick

    2007-01-01

Full Text Available The paper presents MRNET, an original method for inferring genetic networks from microarray data. The method is based on maximum relevance/minimum redundancy (MRMR), an effective information-theoretic technique for feature selection in supervised learning. The MRMR principle consists in selecting among the least redundant variables the ones that have the highest mutual information with the target. MRNET extends this feature selection principle to networks in order to infer gene-dependence relationships from microarray data. The paper assesses MRNET by benchmarking it against RELNET, CLR, and ARACNE, three state-of-the-art information-theoretic methods for large (up to several thousands of genes) network inference. Experimental results on thirty synthetically generated microarray datasets show that MRNET is competitive with these methods.
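The MRMR principle stated above — greedily pick the variable with the highest mutual information with the target, penalized by its average mutual information with the variables already selected — can be sketched for discrete data (a plug-in MI estimator and illustrative names; this is not the MRNET implementation):

```python
# Illustrative greedy MRMR selection on discrete variables.
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in mutual information estimate (in nats) for discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys)); px = Counter(xs); py = Counter(ys)
    # sum over joint cells: p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return sum(c / n * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def mrmr(features, target, k):
    """Greedily pick k features maximizing relevance minus redundancy."""
    selected = []
    rest = dict(features)
    while rest and len(selected) < k:
        def score(name):
            rel = mutual_information(rest[name], target)
            red = (sum(mutual_information(rest[name], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(rest, key=score)
        selected.append(best)
        del rest[best]
    return selected
```

MRNET applies this selection around each gene in turn to score candidate edges, rather than selecting features for a single fixed target.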

  5. Interactions of hadrons in nuclear emulsion in the energy range 60 GeV - 400 GeV

    International Nuclear Information System (INIS)

    Holynski, R.

    1986-01-01

Interactions of pions and protons in nuclear emulsion in the energy range 60 GeV - 400 GeV have been analysed. The fragmentation process of the struck nucleus as well as the multiple production of relativistic particles have been investigated as a function of the primary energy and the effective thickness of the target. It is shown that both the fragmentation of the target nucleus and particle production can be described by models in which the projectile (or its constituents) undergoes multiple collisions inside the target nucleus. In particular, the particle production in the projectile fragmentation region in pion-nucleus interactions is well described by the additive quark model. 47 refs., 35 figs., 2 tabs. (author)

  6. IMAGINE: Interstellar MAGnetic field INference Engine

    Science.gov (United States)

    Steininger, Theo

    2018-03-01

    IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.

  7. The inclusion of long-range polarisation functions in calculations of low-energy e+-H2 scattering using the Kohn method

    International Nuclear Information System (INIS)

    Armour, E.A.G.; Plummer, M.

    1989-01-01

An explanation is given of why it is necessary to include long-range polarisation functions in the trial function when carrying out Kohn calculations of low-energy positron (and electron) scattering by atoms and simple molecules. The asymptotic form of these functions in low-energy e+-H2 scattering is deduced. Appropriate functions with this asymptotic form are used to represent the closed-channel part of the wavefunction in a Kohn calculation of the lowest partial wave of Σu+ symmetry in e+-H2 scattering at very low energies. For k ≤ 0.03 a0^(-1), the results obtained are in good agreement with those obtained using the Born approximation and the asymptotic forms of the static and polarisation potentials. The relationship is pointed out between this method of taking long-range polarisation into account and the polarised-pseudostate method used in R-matrix calculations. (author)

  8. Greenhouse gas mitigation potential of biomass energy technologies in Vietnam using the long range energy alternative planning system model

    International Nuclear Information System (INIS)

    Kumar, Amit; Bhattacharya, S.C.; Pham, H.L.

    2003-01-01

The greenhouse gas (GHG) mitigation potentials of a number of selected Biomass Energy Technologies (BETs) have been assessed in Vietnam. These include Biomass Integrated Gasification Combined Cycle (BIGCC) based on wood and bagasse, direct combustion plants based on wood, co-firing power plants and Stirling engines based on wood, and cooking stoves. Using the Long-range Energy Alternative Planning (LEAP) model, different scenarios were considered, namely the base case with no mitigation options, replacement of kerosene and liquefied petroleum gas (LPG) by biogas stoves, substitution of gasoline by ethanol in the transport sector, replacement of coal by wood as fuel in industrial boilers, electricity generation with biomass energy technologies, and an integrated scenario including all the options together. Substitution of coal stoves by biogas stoves has a positive abatement cost, as the cost of wood in Vietnam is higher than that of coal. Replacement of kerosene and LPG cookstoves by biomass stoves also has a positive abatement cost. Replacement of gasoline by ethanol can be realized after a few years, as at present the cost of ethanol is more than the cost of gasoline. The replacement of coal by biomass in industrial boilers is also not an attractive option, as wood is more expensive than coal in Vietnam. The substitution of fossil fuel fired plants by packages of BETs has a negative abatement cost. This option, if implemented, would result in mitigation of 10.83 million tonnes (Mt) of CO2 in 2010.

  9. Inferring epidemic network topology from surveillance data.

    Directory of Open Access Journals (Sweden)

    Xiang Wan

Full Text Available The transmission of infectious diseases can be affected by many or even hidden factors, making it difficult to accurately predict when and where outbreaks may emerge. One approach at the moment is to develop and deploy surveillance systems in an effort to detect outbreaks as early as possible. This enables policy makers to modify and implement strategies for the control of the transmission. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model on the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays a significant effect on the propagation of infectious diseases.

  10. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  11. Preface to Special Topic: Marine Renewable Energy

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, F. T.; Iglesias, G.; Santos, P. R.; Deng, Zhiqun

    2015-12-30

Marine renewable energy (MRE) is generated from waves, currents, tides, and thermal resources in the ocean. MRE has been identified as a potential commercial-scale source of renewable energy. This special topic presents a compilation of works selected from the 3rd IAHR Europe Congress, held in Porto, Portugal, in 2014. It covers different subjects relevant to MRE, including resource assessment, marine energy sector policies, energy source comparisons based on levelized cost, proof-of-concept and new-technology development for wave and tidal energy exploitation, and assessment of possible interference between wave energy converters (WECs).

  12. Systematics of neutron-induced fission cross sections over the energy range 0.1 through 15 MeV, and at 0.0253 eV

    International Nuclear Information System (INIS)

    Behrens, J.W.

    1977-01-01

Recent studies have shown straightforward systematic behavior as a function of constant proton and neutron number for neutron-induced fission cross sections of the actinide elements in the incident-neutron energy range 3 to 5 MeV. In this report, the second in a series, fission cross-section values are studied over the MeV incident-neutron energy range, and at 0.0253 eV. Fission-barrier heights and neutron-binding energies are correlated by constant proton and neutron number; however, these systematic behaviors alone do not explain the trends observed in the fission cross-section values.

  13. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures.

    Science.gov (United States)

    Bishop, Kevin P; Roy, Pierre-Nicholas

    2018-03-14

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.
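As a point of reference for one quantity the authors extract from their free energy profiles, the classical second virial coefficient follows from the pair potential by a standard integral, B2(T) = -2π ∫ (e^(-u(r)/kT) - 1) r² dr. A sketch in reduced Lennard-Jones units (illustrative only; this is the classical textbook formula, not the authors' path-integral calculation):

```python
# Classical second virial coefficient for a Lennard-Jones fluid in reduced
# units: B2*(T*) = -2*pi * Integral[(exp(-u/T*) - 1) r^2, {r, 0, inf}],
# with u(r) = 4 (r^-12 - r^-6), evaluated here by simple midpoint quadrature.
import math

def b2_reduced(t_star, r_max=20.0, n=20000):
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        u = 4.0 * (r ** -12 - r ** -6)          # reduced LJ pair potential
        total += (math.exp(-u / t_star) - 1.0) * r * r * dr
    return -2.0 * math.pi * total

# B2 is negative at low temperature (attraction dominates) and changes
# sign near the Lennard-Jones Boyle temperature, T* ~ 3.4.
```

The quantum calculation in the paper replaces the Boltzmann factor with a path-integral average, which is where the free energy profiles enter.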

  14. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    Science.gov (United States)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); ii) assimilate data using Bayes' law with these pdfs; iii) predict the future data that optimally reduce uncertainties; and iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  15. Inertial-range spectrum of whistler turbulence

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2010-02-01

Full Text Available We develop a theoretical model of an inertial-range energy spectrum for homogeneous whistler turbulence. The theory is a generalization of the Iroshnikov-Kraichnan concept of inertial-range magnetohydrodynamic turbulence. In the model the dispersion relation is used to derive scaling laws for whistler waves at highly oblique propagation with respect to the mean magnetic field. The model predicts an energy spectrum for such whistler waves with a spectral index −2.5 in the perpendicular component of the wave vector, and thus provides an interpretation of recent discoveries of a second inertial range of magnetic energy spectra at high frequencies in the solar wind.

  16. Adaptive neuro-fuzzy inference system based automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, S.H.; Etemadi, A.H. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2008-07-15

Fixed gain controllers for automatic generation control are designed at nominal operating conditions and fail to provide best control performance over a wide range of operating conditions. So, to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters to compute control gains. A control scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is trained by the results of off-line studies obtained using particle swarm optimization, is proposed in this paper to optimize and update control gains in real-time according to load variations. Also, frequency relaxation is implemented using ANFIS. The efficiency of the proposed method is demonstrated via simulations. Compliance of the proposed method with the NERC control performance standard is verified. (author)

  17. Hybrid Optical Inference Machines

    Science.gov (United States)

    1991-09-27

With labels, a set of facts can be generated in the dyadic form "u, R 1,2". Eichmann and Caulfield [19] consider the same type of encoding schemes. These architectures are based primarily on optical inner products. [19] G. Eichmann and H. J. Caulfield, "Optical Learning (Inference) Machines".

  18. A Network Inference Workflow Applied to Virulence-Related Processes in Salmonella typhimurium

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Ronald C.; Singhal, Mudita; Weller, Jennifer B.; Khoshnevis, Saeed; Shi, Liang; McDermott, Jason E.

    2009-04-20

Inference of the structure of mRNA transcriptional regulatory networks, protein regulatory or interaction networks, and protein activation/inactivation-based signal transduction networks is a critical task in systems biology. In this article we discuss a workflow for the reconstruction of parts of the transcriptional regulatory network of the pathogenic bacterium Salmonella typhimurium based on the information contained in sets of microarray gene expression data now available for that organism, and describe our results obtained by following this workflow. The primary tool is one of the network inference algorithms deployed in the Software Environment for BIological Network Inference (SEBINI). Specifically, we selected the algorithm called Context Likelihood of Relatedness (CLR), which uses the mutual information contained in the gene expression data to infer regulatory connections. The associated analysis pipeline automatically stores the inferred edges from the CLR runs within SEBINI and, upon request, transfers the inferred edges into either Cytoscape or the plug-in Collective Analysis of Biological Interaction Networks (CABIN) tool for further post-analysis of the inferred regulatory edges. This article presents the outcome of this workflow, as well as the protocols followed for microarray data collection, data cleansing, and network inference. Our analysis revealed several interesting interactions, functional groups, metabolic pathways, and regulons in S. typhimurium.
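The CLR algorithm named above corrects each pairwise mutual-information score against the background distribution of scores involving either gene, combining the two (clipped) z-scores; a compact sketch of that correction step (illustrative, not the SEBINI implementation):

```python
# Sketch of the CLR background correction: z-score each mutual-information
# value against the score distributions of both genes involved, clip
# negative z-scores to zero, and combine them in quadrature.
import math

def clr_scores(mi):
    """mi: symmetric n x n matrix (list of lists) of pairwise MI values."""
    n = len(mi)
    means = [sum(row) / n for row in mi]
    stds = [max(1e-12, math.sqrt(sum((v - means[i]) ** 2 for v in row) / n))
            for i, row in enumerate(mi)]
    scores = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            zi = max(0.0, (mi[i][j] - means[i]) / stds[i])
            zj = max(0.0, (mi[i][j] - means[j]) / stds[j])
            scores[i][j] = math.sqrt(zi * zi + zj * zj)
    return scores
```

A gene pair whose MI stands out against both genes' backgrounds gets a high edge score, which is how CLR suppresses indirect correlations that raise MI uniformly across a row.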

  19. Training Inference Making Skills Using a Situation Model Approach Improves Reading Comprehension

    Directory of Open Access Journals (Sweden)

    Lisanne eBos

    2016-02-01

    Full Text Available This study aimed to enhance third and fourth graders’ text comprehension at the situation model level. We therefore tested a reading strategy training developed to target inference making skills, which are widely considered pivotal to situation model construction. The training was grounded in the contemporary literature on situation-model-based inference making and addressed the source of an inference (text-based versus knowledge-based), its type (necessary versus unnecessary for (re-)establishing coherence), its depth (making single lexical inferences versus combining multiple lexical inferences), and the type of searching strategy (forward versus backward). Results indicated that, compared to a control group (n = 51), children who followed the experimental training (n = 67) improved their inference making skills supportive to situation model construction. Importantly, our training also resulted in increased levels of general reading comprehension and motivation. In sum, this study showed that a ‘level of text representation’ approach can provide a useful framework for teaching inference making skills to third and fourth graders.

  20. Robust Demographic Inference from Genomic and SNP Data

    Science.gov (United States)

    Excoffier, Laurent; Dupanloup, Isabelle; Huerta-Sánchez, Emilia; Sousa, Vitor C.; Foll, Matthieu

    2013-01-01

    We introduce a flexible and robust simulation-based framework to infer demographic parameters from the site frequency spectrum (SFS) computed on large genomic datasets. We show that our composite-likelihood approach allows one to study evolutionary models of arbitrary complexity, which cannot be tackled by other current likelihood-based methods. For simple scenarios, our approach compares favorably in terms of accuracy and speed with , the current reference in the field, while showing better convergence properties for complex models. We first apply our methodology to non-coding genomic SNP data from four human populations. To infer their demographic history, we compare neutral evolutionary models of increasing complexity, including unsampled populations. We further show the versatility of our framework by extending it to the inference of demographic parameters from SNP chips with known ascertainment, such as that recently released by Affymetrix to study human origins. Whereas previous ways of handling ascertained SNPs were either restricted to a single population or only allowed the inference of divergence time between a pair of populations, our framework can correctly infer parameters of more complex models including the divergence of several populations, bottlenecks and migration. We apply this approach to the reconstruction of African demography using two distinct ascertained human SNP panels studied under two evolutionary models. The two SNP panels lead to globally very similar estimates and confidence intervals, and suggest an ancient divergence (>110 Ky) between Yoruba and San populations. Our methodology appears well suited to the study of complex scenarios from large genomic data sets. PMID:24204310
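
    The composite-likelihood idea in the abstract above can be illustrated with the simplest possible case: a Poisson composite log-likelihood of an observed site frequency spectrum against the expectation E[xi_i] = theta / i of a constant-size neutral model. The single parameter theta and the grid search are simplifications for illustration; the actual framework estimates the expected SFS by simulation for arbitrary demographic models.

```python
import numpy as np

def expected_sfs(theta, n_chrom):
    # Expected SFS of a constant-size neutral population: E[xi_i] = theta / i
    i = np.arange(1, n_chrom)
    return theta / i

def composite_loglik(obs_sfs, theta, n_chrom):
    # Poisson composite log-likelihood: SFS entries treated as independent Poissons
    lam = expected_sfs(theta, n_chrom)
    return float(np.sum(obs_sfs * np.log(lam) - lam))

# simulate an "observed" SFS under the model and recover theta with a grid search
rng = np.random.default_rng(1)
true_theta, n_chrom = 50.0, 20
obs = rng.poisson(expected_sfs(true_theta, n_chrom))
grid = np.linspace(10.0, 100.0, 901)
est = float(grid[np.argmax([composite_loglik(obs, t, n_chrom) for t in grid])])
```

    The grid maximiser lands close to the generating value, which is all a composite likelihood promises: consistent point estimates, with confidence intervals obtained elsewhere (e.g. by parametric bootstrap).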

  1. Behavior Intention Derivation of Android Malware Using Ontology Inference

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2018-01-01

    Full Text Available Previous research on Android malware has mainly focused on malware detection, and the rapid evolution of malware means that detection inevitably lags behind. The information presented by these detection results (malice judgment, family classification, and behavior characterization) is of limited use to analysts. Therefore, a method is needed to recover the intention of malware, which reflects the relation between the multiple behaviors of complex malware and its ultimate purpose. This paper proposes a novel description and derivation model of Android malware intention based on the theory of intention and malware reverse engineering. The approach creates an ontology for malware intention to model the semantic relation between behaviors and their objects, and automates intention derivation by using SWRL rules transformed from the intention model together with the Jess inference engine. Experiments on 75 typical samples show that the inference system can derive malware intention effectively, and 89.3% of the inference results are consistent with manual analysis, which demonstrates the feasibility and effectiveness of our theory and inference system.
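
    The rule-based derivation step can be sketched with a toy forward chainer: behavior facts are triples, and rules promote combinations of behaviors to an intention. The real system expresses such rules in SWRL and runs them in the Jess engine; the fact and rule contents below are invented for illustration.

```python
# Observed low-level behaviors as subject-predicate-object triples.
facts = {
    ("app", "reads", "contact_list"),
    ("app", "opens", "network_socket"),
}

# Hypothetical rule: if all premises hold, derive the conclusion triple.
rules = [
    ({("app", "reads", "contact_list"), ("app", "opens", "network_socket")},
     ("app", "intends", "leak_contacts")),
]

def forward_chain(facts, rules):
    """Apply rules until no new triples can be derived (fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

derived = forward_chain(facts, rules)
```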

  2. Genealogical and evolutionary inference with the human Y chromosome.

    Science.gov (United States)

    Stumpf, M P; Goldstein, D B

    2001-03-02

    Population genetics has emerged as a powerful tool for unraveling human history. In addition to the study of mitochondrial and autosomal DNA, attention has recently focused on Y-chromosome variation. Ambiguities and inaccuracies in data analysis, however, pose an important obstacle to further development of the field. Here we review the methods available for genealogical inference using Y-chromosome data. Approaches can be divided into those that do and those that do not use an explicit population model in genealogical inference. We describe the strengths and weaknesses of these model-based and model-free approaches, as well as difficulties associated with the mutation process that affect both methods. In the case of genealogical inference using microsatellite loci, we use coalescent simulations to show that relatively simple generalizations of the mutation process can greatly increase the accuracy of genealogical inference. Because model-free and model-based approaches have different biases and limitations, we conclude that there is considerable benefit in the continued use of both types of approaches.

  3. SDG multiple fault diagnosis by real-time inverse inference

    International Nuclear Information System (INIS)

    Zhang Zhaoqian; Wu Chongguang; Zhang Beike; Xia Tao; Li Anfeng

    2005-01-01

    In the past 20 years, signed directed graphs (SDG), one of the qualitative simulation technologies, have been widely applied in the field of chemical fault diagnosis. However, many earlier researchers assumed a single fault origin. This assumption leads to combinatorial explosion and has limited the application of SDG to real processes. The main reason is that most earlier researchers used the forward inference engine of commercial expert system software to carry out inverse diagnostic inference on the SDG model, which violates the internal principle of the diagnosis mechanism. In this paper, we present a new SDG multiple-fault diagnosis method based on real-time inverse inference: a method of multiple-fault diagnosis in the genuine sense, whose inference engine uses an inverse mechanism. Finally, we give the example of a 65 t/h furnace diagnosis system to demonstrate its applicability and efficiency.
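
    The backward traversal at the heart of inverse inference can be sketched on a toy signed directed graph: starting from an observed deviation, propagate the required sign backwards along each incoming edge to enumerate consistent root-cause candidates. The furnace-like node names and the acyclic graph are assumptions for this example; a production system must also handle cycles, compensatory responses, and multiple simultaneous observations.

```python
# Edges carry +1/-1 influence signs: (source, target) -> sign.
edges = {
    ("fuel_valve", "fuel_flow"): +1,
    ("fuel_flow", "furnace_temp"): +1,
    ("air_leak", "furnace_temp"): -1,
    ("furnace_temp", "steam_pressure"): +1,
}

def inverse_infer(observed_node, observed_sign):
    """Walk edges backwards from an observed deviation; return
    {candidate cause: deviation sign it would need to explain the observation}."""
    candidates = {}
    stack = [(observed_node, observed_sign)]
    while stack:
        node, sign = stack.pop()
        for (src, dst), esign in edges.items():
            if dst == node:
                cause_sign = sign * esign
                if candidates.get(src) != cause_sign:
                    candidates[src] = cause_sign
                    stack.append((src, cause_sign))
    return candidates

causes = inverse_infer("steam_pressure", -1)   # low steam pressure observed
```

    Here a low steam pressure is explained either by a low fuel valve/flow or by a high (present) air leak, which is exactly the multi-candidate output a single-fault forward search cannot produce.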

  4. SDG multiple fault diagnosis by real-time inverse inference

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Zhaoqian; Wu Chongguang; Zhang Beike; Xia Tao; Li Anfeng

    2005-02-01

    In the past 20 years, signed directed graphs (SDG), one of the qualitative simulation technologies, have been widely applied in the field of chemical fault diagnosis. However, many earlier researchers assumed a single fault origin. This assumption leads to combinatorial explosion and has limited the application of SDG to real processes. The main reason is that most earlier researchers used the forward inference engine of commercial expert system software to carry out inverse diagnostic inference on the SDG model, which violates the internal principle of the diagnosis mechanism. In this paper, we present a new SDG multiple-fault diagnosis method based on real-time inverse inference: a method of multiple-fault diagnosis in the genuine sense, whose inference engine uses an inverse mechanism. Finally, we give the example of a 65 t/h furnace diagnosis system to demonstrate its applicability and efficiency.

  5. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, or complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods, and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The
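
    The FuNeL idea of turning rule co-occurrence into network edges can be sketched directly: any two attributes (genes) used together in the condition part of a learned rule become connected, with the edge weight counting co-occurrences. The rule set below is hypothetical; in the protocol it would come from a trained rule-based classifier.

```python
from itertools import combinations
from collections import Counter

# Hypothetical condition parts of learned classification rules (attribute names only).
rules = [
    {"geneA", "geneB"},
    {"geneA", "geneB", "geneC"},
    {"geneD"},            # single-gene rules contribute no edges
]

# Every pair of genes co-occurring in a rule becomes a weighted edge.
edges = Counter()
for rule in rules:
    for pair in combinations(sorted(rule), 2):
        edges[pair] += 1
```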

  6. Neutron capture resonances in 56Fe and 58Fe in the energy range from 10 to 100 keV

    International Nuclear Information System (INIS)

    Kaeppeler, F.; Wisshak, K.; Hong, L.D.

    1982-11-01

    The neutron capture cross sections of 56Fe and 58Fe have been measured in the energy range from 10 to 250 keV relative to the gold standard. A pulsed 3 MV Van de Graaff accelerator and the 7Li(p,n) reaction served as the neutron source. Capture gamma rays were detected by two C6D6 detectors operated in coincidence and anticoincidence modes. Two-dimensional data acquisition allowed the pulse-height weighting technique to be applied off-line. The samples were located at a flight path of 60 cm. The total time resolution was 1.2 ns, allowing for a resolution of 2 ns/m. The experimental set-up was optimized for low background and low neutron sensitivity. The additional flight path of 4 cm from the sample to the detector was sufficient to discriminate against capture of sample-scattered neutrons by their additional time of flight. In this way reliable results were obtained even for the strong s-wave resonances of both isotopes. The experimental capture yield was analyzed with the FANAC code. The energy resolution allowed resonance parameters to be extracted in the energy range from 10 to 100 keV. The individual systematic uncertainties of the experimental method are discussed in detail; they range between 5 and 10%, while the statistical uncertainty is 3-5% for most of the resonances. A comparison with the results of other authors shows systematic differences of 7-11% in the case of 56Fe. For 58Fe the present results differ by up to 50% from the only other measurement for this isotope. (orig.) [de

  7. Was that part of the story or did i just think so? Age and cognitive status differences in inference and story recognition.

    Science.gov (United States)

    Bielak, Allison A M; Hultsch, David F; Kadlec, Helena; Strauss, Esther

    2007-01-01

    This study expanded the inference and story recognition literature by investigating differences within the older age range and differences due to cognitive impairment, no dementia (CIND), and by applying signal detection procedures to the analysis of accuracy data. Old-old adults and those with more severe CIND showed poorer ability to accurately recognize inferences, and less sensitivity in discriminating between statement types. Results support the proposal that participants used two different recognition strategies. Old-old and CIND adults may be less able to recognize that something plausible within an event may not have actually occurred.
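
    The signal detection analysis mentioned above reduces to computing a sensitivity index d' from hit and false-alarm rates. A minimal sketch, with made-up rates for a hypothetical participant:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    # Sensitivity: separation of signal and noise distributions in z-units.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: endorses 80% of statements actually presented
# in the story (hits) but also 30% of never-presented inferences (false alarms).
d = d_prime(0.80, 0.30)
```

    Lower d' means poorer discrimination between presented statements and plausible-but-unstated inferences, the pattern reported for old-old and CIND adults.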

  8. Causal inference based on counterfactuals

    Directory of Open Access Journals (Sweden)

    Höfler M

    2005-09-01

    Full Text Available Abstract Background The counterfactual or potential outcome model has become increasingly standard for causal inference in epidemiological and medical studies. Discussion This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues in estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in the health sciences and relates to many statistical procedures. Summary Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept.
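
    The counterfactual model can be made concrete with a small simulation: when both potential outcomes are generated, the average causal effect E[Y1 - Y0] is known exactly, and a naive treated-vs-untreated contrast under confounded assignment visibly misses it. All numbers below are illustrative.

```python
import random

random.seed(2)
n, effect = 10000, 2.0
y0 = [random.gauss(0, 1) for _ in range(n)]       # outcome without treatment
y1 = [y + effect for y in y0]                     # counterfactual outcome under treatment
ate = sum(b - a for a, b in zip(y0, y1)) / n      # true average causal effect

# Confounded assignment: "sicker" units (low y0) are treated more often.
treated = [random.random() < (0.7 if y < 0 else 0.3) for y in y0]
obs_treated = [b for b, t in zip(y1, treated) if t]
obs_control = [a for a, t in zip(y0, treated) if not t]
naive = (sum(obs_treated) / len(obs_treated)
         - sum(obs_control) / len(obs_control))   # biased downwards here
```

    The naive contrast understates the effect because treatment status is informative about the baseline outcome, which is exactly the confounding problem adjustment methods address.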

  9. Surface radiant flux densities inferred from LAC and GAC AVHRR data

    Science.gov (United States)

    Berger, F.; Klaes, D.

    To infer surface radiant flux densities from current (NOAA-AVHRR, ERS-1/2 ATSR) and future (Envisat AATSR, MSG, METOP) meteorological satellite data, the complex, modular analysis scheme SESAT (Strahlungs- und Energieflüsse aus Satellitendaten) was developed (Berger, 2001). This scheme allows the determination of cloud types, optical and microphysical cloud properties, as well as surface and TOA radiant flux densities. After testing of SESAT in Central Europe and the Baltic Sea catchment (more than 400 scenes, including a detailed validation against various surface measurements), it was applied to a large number of NOAA-16 AVHRR overpasses covering the globe. For the analysis, two different spatial resolutions, local area coverage (LAC) and global area coverage (GAC), were considered. Therefore, all inferred results, like cloud cover, cloud properties and radiant properties, could be intercompared. Specific emphasis is placed on the surface radiant flux densities (all radiative balance components), with results presented for different regions, like South America, southern Africa, North America, Europe, and Indonesia. Applying SESAT, energy flux densities, like latent and sensible heat flux densities, could additionally be determined. A statistical analysis of all results, including a detailed discussion for the two spatial resolutions, closes this study.

  10. Inclusive photoproduction of φ, K*(890) and K*(1420) in the photon energy range 20 to 70 GeV

    International Nuclear Information System (INIS)

    Atkinson, M.; Davenport, M.; Flower, P.; Hutton, J.S.; Kumar, B.R.; Morris, J.A.G.; Morris, J.V.; Sharp, P.H.

    1986-01-01

    Inclusive photoproduction of φ, K*0(890), anti-K*0(890) and K*0(1420), anti-K*0(1420) has been studied in γp collisions with photons of energy 20 to 70 GeV and in the range 0.1 ≤ xF ≤ 0.95 [0.4 ≤ xF ≤ 0.8 for the anti-K*(1420)], xF being the Feynman variable of the vector/tensor meson. The cross-sections for these processes, averaged over the photon energy range and integrated over xF, are given. The inclusive φ production in the forward direction can be described quantitatively by a triple-Regge model calculation. The remaining φ production and the total K*0(890) and anti-K*0(890) production are consistent with a quark-fusion picture. (orig.)

  11. High-resolution integrated germanium Compton polarimeter for the γ-ray energy range 80 keV-1 MeV

    Science.gov (United States)

    Sareen, R. A.; Urban, W.; Barnett, A. R.; Varley, B. J.

    1995-06-01

    Parameters which govern the choice of a detection system to measure the linear polarization of γ rays at low energies are discussed. An integrated polarimeter is described which is constructed from a single crystal of germanium. It is a compact planar device with the sectors defined electrically, and which gives an energy resolution in the add-back mode of 1 keV at 300 keV. Its performance is demonstrated in a series of calibration measurements using both unpolarized radiation from radioactive sources and polarized γ rays from the 168Er(α,2n)170Yb reaction at Eα=25 MeV. Polarization measurements at energies as low as 84 keV have been achieved, where the sensitivity was 0.32±0.09. The sensitivity, efficiency, and energy resolution are reported. Our results indicate that energy resolution should be included in the definition of the figure of merit and we relate the new definition to earlier work. The comparisons show the advantages of the present design in the energy range below 300 keV and its competitiveness up to 1500 keV.

  12. Implementing and analyzing the multi-threaded LP-inference

    Science.gov (United States)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    Logical production equations provide new possibilities for backward inference optimization in intelligent production-type systems. The strategy of relevant backward inference aims to minimize the number of queries to an external information source (either a database or an interactive user). The idea of the method is to compute the set of initial preimages and search for the true preimage. Each stage can be executed independently and in parallel, and the actual work within a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced "pipeline" scheme of parallel computation, which allows the degree of parallelism to be increased. The authors also provide some details of the LP-structures implementation.
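
    The pipeline organisation described above, one stage generating candidate preimages while another tests them against the external source, can be sketched with two threads connected by a queue. The fact names and the stand-in "external source" are assumptions for the example.

```python
import queue
import threading

tasks = queue.Queue()
results = []
TRUE_FACTS = {"f2", "f4"}          # stands in for the external information source

def generate(preimages):
    # Stage 1: stream candidate preimages into the pipeline.
    for p in preimages:
        tasks.put(p)
    tasks.put(None)                # sentinel: no more work

def test_preimages():
    # Stage 2: test candidates as they arrive (one "query" per candidate).
    while (p := tasks.get()) is not None:
        if p in TRUE_FACTS:
            results.append(p)

producer = threading.Thread(target=generate, args=(["f1", "f2", "f3", "f4"],))
consumer = threading.Thread(target=test_preimages)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

    Because the two stages overlap in time, testing of early candidates proceeds while later ones are still being generated, which is the source of the extra parallelism the paper exploits.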

  13. Study of the 2H(p,γ)3He reaction in the BBN energy range at LUNA

    Science.gov (United States)

    Trezzi, Davide; LUNA Collaboration

    2018-01-01

    Using Big Bang Nucleosynthesis with the recent cosmological parameters obtained by the Planck collaboration, a primordial deuterium abundance of D/H = (2.65 ± 0.07) × 10-5 is obtained. This value is in slight tension with astronomical observations of metal-poor damped Lyman-alpha systems, where D/H = (2.53 ± 0.04) × 10-5. In order to reduce the uncertainty of the BBN calculation, a measurement of the 2H(p,γ)3He cross section in the energy range 10-300 keV with 3% accuracy is thus desirable. Thanks to the low background of the underground Gran Sasso Laboratories, and to the experience accumulated in more than twenty years of scientific activity, LUNA (Laboratory for Underground Nuclear Astrophysics) planned to measure the 2H(p,γ)3He fusion cross section in the BBN energy range in 2015-2016. A feasibility test of the measurement was recently performed at LUNA. In this paper the results obtained are shown, and possible cosmological outcomes of the future LUNA data are also discussed.

  14. Inferring Characteristics of Sensorimotor Behavior by Quantifying Dynamics of Animal Locomotion

    Science.gov (United States)

    Leung, KaWai

    Locomotion is one of the most well-studied topics in animal behavior. Much fundamental and clinical research makes use of the locomotion of an animal model to explore various aspects of sensorimotor behavior. In the past, most of these studies focused on the population average of a specific trait due to limitations in data collection and processing power. With recent advances in computer vision and statistical modeling techniques, it is now possible to track and analyze large amounts of behavioral data. In this thesis, I present two projects that aim to infer the characteristics of sensorimotor behavior by quantifying the dynamics of locomotion of the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster, shedding light on the statistical dependence between sensing and behavior. In the first project, I investigate the possibility of inferring noxious sensory information from the behavior of Caenorhabditis elegans. I develop a statistical model to infer the heat stimulus level perceived by individual animals from their stereotyped escape responses after stimulation by an IR laser. The model allows quantification of the analgesic-like effects of chemical agents or genetic mutations in the worm. At the same time, the method is able to identify perturbations of locomotion behavior that go beyond affecting the sensory system. With this model I propose experimental designs that allow statistically significant identification of analgesic-like effects. In the second project, I investigate the relationship between energy budget and stability of locomotion in determining the walking speed distribution of Drosophila melanogaster during aging. The locomotion stability at different ages is estimated from video recordings using Floquet theory, and the power consumption at different locomotion speeds is calculated using a biomechanics model. In conclusion, power consumption, not stability, predicts the locomotion speed distribution at different ages.

  15. International Conference on Trends and Perspectives in Linear Statistical Inference

    CERN Document Server

    Rosen, Dietrich

    2018-01-01

    This volume features selected contributions on a variety of topics related to linear statistical inference. The peer-reviewed papers from the International Conference on Trends and Perspectives in Linear Statistical Inference (LinStat 2016), held in Istanbul, Turkey, 22-25 August 2016, cover topics in both theoretical and applied statistics, such as linear models, high-dimensional statistics, computational statistics, the design of experiments, and multivariate analysis. The book is intended for statisticians, Ph.D. students, and professionals who are interested in statistical inference.

  16. Elastic and inelastic scattering of alpha particles from /sup 40,44/Ca over a broad range of energies and angles

    International Nuclear Information System (INIS)

    Delbar, T.; Gregoire, G.; Paic, G.; Ceuleneer, R.; Michel, F.; Vanderpoorten, R.; Budzanowski, A.; Dabrowski, H.; Freindl, L.; Grotowski, K.; Micek, S.; Planeta, R.; Strzalkowski, A.; Eberhard, K.A.

    1978-01-01

    Angular distributions for α-particle elastic scattering by 40,44Ca and for excitation of the 3.73 MeV 3− collective state of 40Ca were measured for incident energies ranging from 40 to 62 MeV. An extensive optical-model analysis of these elastic scattering cross sections and other available data, using squared Woods-Saxon form factors, results in potentials with fixed geometry for both real and imaginary parts and depths with smooth energy behavior over a broad incident energy range. These results are discussed in the framework of the semi-classical approximation developed by Brink and Takigawa. The sensitivity of the calculated elastic scattering cross sections to the real part of the potentials as a function of the projectile-target distance has been investigated by means of a notch test. Distorted-wave Born-approximation calculations for the excitation of the 3.73 MeV 3− state of 40Ca are presented.

  17. Packaging design as communicator of product attributes: Effects on consumers’ attribute inferences

    NARCIS (Netherlands)

    van Ooijen, I.

    2016-01-01

    This dissertation will focus on two types of attribute inferences that result from packaging design cues. First, the effects of product packaging design on quality related inferences are investigated. Second, the effects of product packaging design on healthiness related inferences are examined (See

  18. Statistical inference using weak chaos and infinite memory

    International Nuclear Information System (INIS)

    Welling, Max; Chen Yutian

    2010-01-01

    We describe a class of deterministic weakly chaotic dynamical systems with infinite memory. These 'herding systems' combine learning and inference into one algorithm, where moments or data-items are converted directly into an arbitrarily long sequence of pseudo-samples. This sequence has infinite range correlations and as such is highly structured. We show that its information content, as measured by sub-extensive entropy, can grow as fast as K log T, which is faster than the usual 1/2 K log T for exchangeable sequences generated by random posterior sampling from a Bayesian model. In one dimension we prove that herding sequences are equivalent to Sturmian sequences which have complexity exactly log(T + 1). More generally, we advocate the application of the rich theoretical framework around nonlinear dynamical systems, chaos theory and fractal geometry to statistical learning.
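
    The one-dimensional case discussed above admits a very short sketch: a deterministic weight update whose binary output sequence reproduces a target moment π exactly in the long run (and is Sturmian for irrational π). This is a schematic reading of the herding update for a single binary variable, not code from the paper.

```python
def herd(pi, steps):
    # Deterministic herding for one binary variable with target moment pi:
    # alternate a greedy "inference" step with a weight "learning" step.
    w, seq = pi, []
    for _ in range(steps):
        s = 1 if w > 0 else 0      # inference: pick the greedy state
        w += pi - s                # learning: keeps w bounded, no randomness
        seq.append(s)
    return seq

seq = herd(0.25, 1000)
rate = sum(seq) / len(seq)         # converges to pi at O(1/T); exact here
```

    For pi = 0.25 the dynamics settle into a period-4 cycle emitting one 1 per cycle, so the empirical rate matches the target moment exactly, illustrating the fast (faster-than-sampling) moment matching the abstract describes.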

  19. Statistical inference using weak chaos and infinite memory

    Energy Technology Data Exchange (ETDEWEB)

    Welling, Max; Chen Yutian, E-mail: welling@ics.uci.ed, E-mail: yutian.chen@uci.ed [Donald Bren School of Information and Computer Science, University of California Irvine CA 92697-3425 (United States)

    2010-06-01

    We describe a class of deterministic weakly chaotic dynamical systems with infinite memory. These 'herding systems' combine learning and inference into one algorithm, where moments or data-items are converted directly into an arbitrarily long sequence of pseudo-samples. This sequence has infinite range correlations and as such is highly structured. We show that its information content, as measured by sub-extensive entropy, can grow as fast as K log T, which is faster than the usual 1/2 K log T for exchangeable sequences generated by random posterior sampling from a Bayesian model. In one dimension we prove that herding sequences are equivalent to Sturmian sequences which have complexity exactly log(T + 1). More generally, we advocate the application of the rich theoretical framework around nonlinear dynamical systems, chaos theory and fractal geometry to statistical learning.

  20. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
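
    The overall recipe, fit a cheap surrogate to a few model runs and then run MCMC against the surrogate instead of the model, can be sketched in one dimension. The toy "model", the polynomial surrogate, and the Metropolis settings below are all assumptions for illustration; the talk's setting uses spectral projections or regularized regressions and ocean-model parameters such as drag and mixing coefficients.

```python
import numpy as np

def model(k):
    # Stand-in for an expensive forward model (monotonic for k > 0.6 on [0, 3]).
    return k**3 - k

# "Offline" stage: fit a polynomial surrogate to a handful of model runs.
train_k = np.linspace(0.0, 3.0, 8)
coeffs = np.polyfit(train_k, model(train_k), deg=4)
surrogate = lambda k: np.polyval(coeffs, k)

k_true, sigma = 1.7, 0.2
obs = model(k_true)                      # synthetic observation

def log_post(k):
    # Flat prior on [0, 3], Gaussian likelihood evaluated on the surrogate.
    if not 0.0 <= k <= 3.0:
        return -np.inf
    return -0.5 * ((obs - surrogate(k)) / sigma) ** 2

# "Online" stage: Metropolis chain that never calls the expensive model.
rng = np.random.default_rng(3)
k, chain = 1.0, []
for _ in range(5000):
    prop = k + 0.2 * rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)
posterior_mean = float(np.mean(chain[1000:]))
```

    After burn-in the chain concentrates near the generating parameter; the key economy is that the 5000 posterior evaluations hit the polynomial, not the model, which is the point of the surrogate-based update.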