WorldWideScience

Sample records for gibbs point processes

  1. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the pseudolikelihood…
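
    As a minimal sketch of how such an estimating function can be set up in practice (assuming a Strauss-type model with a neighbour-count statistic, a unit-square window and a known dummy intensity rho; the pattern, the radius r and all names are illustrative placeholders, not the authors' implementation):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical "observed" pattern on the unit square (here just a binomial pattern).
x = rng.uniform(size=(100, 2))

# Dummy points: a Poisson pattern with known intensity rho.
rho = 400.0
d = rng.uniform(size=(rng.poisson(rho), 2))

def strauss_count(u, pts, r=0.05):
    """t(u, x): number of points of pts within distance r of u, excluding u itself."""
    dist = np.linalg.norm(pts - u, axis=1)
    return np.sum((dist > 0.0) & (dist < r))

quad = np.vstack([x, d])                          # data followed by dummy points
y = np.r_[np.ones(len(x)), np.zeros(len(d))]      # 1 = data point, 0 = dummy point
S = np.array([[1.0, strauss_count(u, x)] for u in quad])
offset = np.full(len(quad), -np.log(rho))

# Logistic regression with offset -log(rho): linear predictor log(beta) + log(gamma)*t(u, x).
fit = sm.GLM(y, S, family=sm.families.Binomial(), offset=offset).fit()
print("log beta, log gamma:", fit.params)
```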

  2. Fast covariance estimation for innovations computed from a spatial Gibbs point process

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Rubak, Ege

    In this paper, we derive an exact formula for the covariance of two innovations computed from a spatial Gibbs point process and suggest a fast method for estimating this covariance. We show how this methodology can be used to estimate the asymptotic covariance matrix of the maximum pseudolikelihood estimator…

  3. The Gibbs Energy Basis and Construction of Boiling Point Diagrams in Binary Systems

    Science.gov (United States)

    Smith, Norman O.

    2004-01-01

    An illustration of how excess Gibbs energies of the components in binary systems can be used to construct boiling point diagrams is given. The underlying causes of the various types of behavior of the systems in terms of intermolecular forces and the method of calculating the coexisting liquid and vapor compositions in boiling point diagrams with…
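
    A rough worked illustration of the construction: assuming a one-parameter excess-Gibbs-energy model G^E/RT = A12·x1·x2 (Margules activity coefficients) and modified Raoult's law, the bubble-point temperature at fixed pressure can be solved numerically. The Antoine constants and A12 below are invented placeholder values, not data from the article.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical Antoine constants (P in kPa, T in K) -- placeholders, not real data.
def psat(T, A, B, C):
    return np.exp(A - B / (T + C))

ant1 = (14.3, 2750.0, -50.0)
ant2 = (14.0, 3000.0, -55.0)

# Assumed excess Gibbs energy G^E/RT = A12*x1*x2 gives Margules activity coefficients.
A12 = 0.9
def gammas(x1):
    return np.exp(A12 * (1 - x1) ** 2), np.exp(A12 * x1 ** 2)

P = 101.325  # total pressure, kPa

def bubble_point(x1):
    """Solve x1*g1*p1sat(T) + x2*g2*p2sat(T) = P for T, then get vapour composition y1."""
    g1, g2 = gammas(x1)
    f = lambda T: x1 * g1 * psat(T, *ant1) + (1 - x1) * g2 * psat(T, *ant2) - P
    T = brentq(f, 250.0, 500.0)
    return T, x1 * g1 * psat(T, *ant1) / P

for x1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    T, y1 = bubble_point(x1)
    print(f"x1 = {x1:.2f}   T = {T:.1f} K   y1 = {y1:.3f}")
```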

  4. Residual analysis for spatial point processes

    DEFF Research Database (Denmark)

    Baddeley, A.; Turner, R.; Møller, Jesper

    We define residuals for point process models fitted to spatial point pattern data, and propose diagnostic plots based on these residuals. The techniques apply to any Gibbs point process model, which may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. … or covariate effects. Q-Q plots of the residuals are effective in diagnosing interpoint interaction. Some existing ad hoc statistics of point patterns (quadrat counts, scan statistic, kernel smoothed intensity, Berman's diagnostic) are recovered as special cases.

  5. Modern Statistics for Spatial Point Processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus

    2007-01-01

    We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...

  6. Modern statistics for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus

    We summarize and discuss the current state of spatial point process theory and directions for future research, making an analogy with generalized linear models and random effect models, and illustrating the theory with various examples of applications. In particular, we consider Poisson, Gibbs...

  7. Boiling point determination using adiabatic Gibbs ensemble Monte Carlo simulations: application to metals described by embedded-atom potentials.

    Science.gov (United States)

    Gelb, Lev D; Chakraborty, Somendra Nath

    2011-12-14

    The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics.

  8. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule–Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers.
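
    A minimal simulation sketch of such a branching cascade, assuming an exponentially decaying offspring rate with sub-critical mean a per event; all parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10.0

def hpp(rate, T):
    """Homogeneous Poisson process on [0, T]."""
    return np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))

def offspring(t0, T, a=0.5, tau=1.0):
    """Nonstationary Poisson offspring of an event at t0, rate (a/tau)*exp(-(s-t0)/tau)."""
    m = a * (1.0 - np.exp(-(T - t0) / tau))          # mean number of offspring in [t0, T]
    u = rng.uniform(0.0, 1.0 - np.exp(-(T - t0) / tau), rng.poisson(m))
    return t0 - tau * np.log(1.0 - u)                # inversion of the truncated CDF

stage = hpp(5.0, T)                                  # initial homogeneous Poisson events
stages = [stage]
for _ in range(100):                                 # branching stages (sub-critical: dies out)
    if len(stage) == 0:
        break
    stage = np.concatenate([offspring(t, T) for t in stage])
    if len(stage):
        stages.append(stage)

points = np.sort(np.concatenate(stages))
print(len(points), "events in the cascade")
```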

  9. Theoretical Understanding the Relations of Melting-point Determination Methods from Gibbs Thermodynamic Surface and Applications on Melting Curves of Lower Mantle Minerals

    Science.gov (United States)

    Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.

    2016-12-01

    The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulation faces a substantial hysteresis problem. To overcome the hysteresis encountered in atomic simulations, a few independently founded melting-point determination methods are available nowadays, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations between these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining this model with experimental data and/or a previous melting-point determination method, we derive the high-pressure melting curves for several lower mantle minerals with less computational effort than previous methods require on their own. In this way, some polyatomic minerals at extreme pressures that were previously almost intractable can now be calculated fully from first principles.

  10. Geometric and Texture Inpainting by Gibbs Sampling

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2007-01-01

    In this paper we use the well-known FRAME (Filters, Random Fields and Maximum Entropy) model for inpainting. We introduce a temperature term in the learned FRAME Gibbs distribution. By sampling using different temperatures in the FRAME Gibbs distribution, different contents of the image are reconstructed. We propose a two-step method for inpainting using FRAME. First the geometric structure of the image is reconstructed by sampling from a cooled Gibbs distribution, then the stochastic component is reconstructed by sampling from a heated Gibbs distribution. Both steps in the reconstruction process are necessary…

  11. Josiah Willard Gibbs

    Indian Academy of Sciences (India)

    The younger Gibbs grew up in the liberal and academic atmosphere at Yale, where … research in the premier European universities at the time when a similar culture … publication in obscure journals, Gibbs' work did not receive wide recognition in …

  12. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    2010-01-01

    In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
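
    The clan-of-ancestors construction itself is involved, but the elementary thinning identity it builds on is easy to illustrate: retaining each point with probability β/λ(u) yields a homogeneous Poisson process of intensity β exactly when λ is the true intensity. A sketch for the simple inhomogeneous Poisson case, with made-up intensities:

```python
import numpy as np

rng = np.random.default_rng(5)

# Inhomogeneous Poisson pattern on [0,1]^2 with intensity lam(u) = 50 + 150*u_x,
# generated by thinning a dominating homogeneous Poisson process of rate 200.
lam = lambda u: 50.0 + 150.0 * u[:, 0]
lam_max = 200.0
u = rng.uniform(size=(rng.poisson(lam_max), 2))
pts = u[rng.uniform(size=len(u)) < lam(u) / lam_max]

# Independent thinning with retention probability beta/lam(u): the result is a
# homogeneous Poisson process of intensity beta -- if and only if the true
# intensity (here known by construction) is used.
beta = 50.0
kept = pts[rng.uniform(size=len(pts)) < beta / lam(pts)]
print(f"{len(pts)} points thinned to {len(kept)} (expected about {beta:.0f})")
```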

  13. Thinning spatial point processes into Poisson processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Schoenberg, Frederic Paik

    This paper describes methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where one simulates backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and thus can be used as a diagnostic for assessing the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.

  14. Detecting determinism from point processes.

    Science.gov (United States)

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  15. Fixed-point signal processing

    CERN Document Server

    Padgett, Wayne T

    2009-01-01

    This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught, and the limited precision implementation skills that are commonly required in fixed-point processors and field programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory.
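
    For a flavour of the limited-precision arithmetic the book targets, here is a small Q15 (16-bit fractional fixed-point) multiply; the rounding and saturation conventions shown are one common choice, not code from the book.

```python
def to_q15(x: float) -> int:
    """Quantize a float in [-1, 1) to 16-bit Q15 with saturation."""
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_mul(a: int, b: int) -> int:
    """Q15 * Q15 -> Q15: form the Q30 product, round to nearest, shift, saturate."""
    prod = a * b                       # Q30 result fits in 32 bits
    prod = (prod + (1 << 14)) >> 15    # round-to-nearest back to Q15
    return max(-32768, min(32767, prod))

a, b = to_q15(0.5), to_q15(-0.25)
print(q15_mul(a, b) / 32768.0)         # approximately -0.125
```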

  16. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.

  17. Gibbs-non-Gibbs transitions and vector-valued integration

    NARCIS (Netherlands)

    Zuijlen, van W.B.

    2016-01-01

    This thesis consists of two distinct topics. The first part of the thesis considers Gibbs-non-Gibbs transitions. Gibbs measures describe the macroscopic state of a system of a large number of components that is in equilibrium. It may happen that when the system is transformed, for example, by…

  18. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics…

  19. Generalization of Gibbs Entropy and Thermodynamic Relation

    OpenAIRE

    Park, Jun Chul

    2010-01-01

    In this paper, we extend Gibbs's approach to quasi-equilibrium thermodynamic processes and calculate the microscopic expression of entropy for general non-equilibrium thermodynamic processes. We also analyze the formal structure of the thermodynamic relation in non-equilibrium thermodynamic processes.

  20. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald; Petrova, Guergana; Hielsberg, Matthew; Owens, Luke; Clack, Billy; Sood, Alok

    2013-01-01

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization.

  1. Inhomogeneous Markov point processes by transformation

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Nielsen, Linda Stougaard

    2000-01-01

    We construct parametrized models for point processes, allowing for both inhomogeneity and interaction. The inhomogeneity is obtained by applying parametrized transformations to homogeneous Markov point processes. An interesting model class, which can be constructed by this transformation approach, is that of exponential inhomogeneous Markov point processes. Statistical inference for such processes is discussed in some detail.

  2. Testing Local Independence between Two Point Processes

    DEFF Research Database (Denmark)

    Allard, Denis; Brix, Anders; Chadæuf, Joël

    2001-01-01

    Keywords: Independence test, Inhomogeneous point processes, Local test, Monte Carlo, Nonstationary, Rotations, Spatial pattern, Tiger bush.

  3. Finite Cycle Gibbs Measures on Permutations of ℤ^d

    Science.gov (United States)

    Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia

    2015-03-01

    We consider Gibbs distributions on the set of permutations of ℤ^d associated to the Hamiltonian H(σ) = Σ_x V(σ(x) − x), where σ is a permutation and V is a strictly convex potential. Call finite-cycle those permutations composed by finite cycles only. We give conditions on V ensuring that for large enough temperature there exists a unique infinite volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct this measure as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss-network, a continuous-time birth and death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. In the Gaussian case V(x) = x², we show that the measures obtained from shift-permutation boundary conditions are ergodic Gibbs measures equal to the corresponding thermodynamic limits of the specifications. For a general potential V, we prove the existence of Gibbs measures when the temperature is bigger than some V-dependent value.

  4. Lévy based Cox point processes

    DEFF Research Database (Denmark)

    Hellmund, Gunnar; Prokesová, Michaela; Jensen, Eva Bjørn Vedel

    2008-01-01

    In this paper we introduce Lévy-driven Cox point processes (LCPs) as Cox point processes with driving intensity function Λ defined by a kernel smoothing of a Lévy basis (an independently scattered, infinitely divisible random measure). We also consider log Lévy-driven Cox point processes (LLCPs) with Λ equal to the exponential of such a kernel smoothing. Special cases are shot noise Cox processes, log Gaussian Cox processes, and log shot noise Cox processes. We study the theoretical properties of Lévy-based Cox processes, including moment properties described by nth-order product densities…

  5. State estimation for temporal point processes

    NARCIS (Netherlands)

    van Lieshout, Maria Nicolette Margaretha

    2015-01-01

    This paper is concerned with combined inference for point processes on the real line observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point processes. For a range of models, the marginal and…

  6. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes, using a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful.

  7. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in an ALS-recovered canopy height model (CHM) as a realization of a point process of circles. Unlike a traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.

  8. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
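
    The coding-set idea can be sketched on a first-order CAR-type GMRF on a torus: the two checkerboard colours share no neighbours, so each colour can be updated in a single vectorized draw. This is a simplified stand-in for the authors' truncated-Gaussian algorithm, with illustrative parameters (|φ| < 1 for validity):

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi, n_sweeps = 64, 0.95, 200

x = np.zeros((n, n))
ii, jj = np.indices((n, n))
coding_sets = [(ii + jj) % 2 == c for c in (0, 1)]  # checkerboard: no shared neighbours

def neighbour_sum(x):
    """4-neighbour sum with periodic (torus) boundary conditions."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1))

for _ in range(n_sweeps):
    for mask in coding_sets:
        # full conditional of a CAR-type GMRF: Normal(phi/4 * neighbour sum, 1)
        mu = (phi / 4.0) * neighbour_sum(x)
        x[mask] = mu[mask] + rng.standard_normal(np.count_nonzero(mask))

print("sample mean:", x.mean(), " sample sd:", x.std())
```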

  9. Poisson point processes imaging, tracking, and sensing

    CERN Document Server

    Streit, Roy L

    2010-01-01

    This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.

  10. Statistical aspects of determinantal point processes

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper; Rubak, Ege

    The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference…

  11. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.

  12. Fingerprint Analysis with Marked Point Processes

    DEFF Research Database (Denmark)

    Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper

    We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs.

  13. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enjoyment…

  14. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes on the d-dimensional unit sphere S^d. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel which we assume is a complex covariance function defined on S^d × S^d. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on S^d, where it becomes essential to specify the eigenvalues and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach.

  15. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting…

  16. Concentration inequalities for functions of Gibbs fields with application to diffraction and random Gibbs measures

    CERN Document Server

    Külske, C

    2003-01-01

    We derive useful general concentration inequalities for functions of Gibbs fields in the uniqueness regime. We also consider expectations of random Gibbs measures that depend on an additional disorder field, and prove concentration w.r.t. the disorder field. Both fields are assumed to be in the uniqueness regime, allowing in particular for a non-independent disorder field. The modification of the bounds compared to the case of an independent field can be expressed in terms of constants that resemble the Dobrushin contraction coefficient, and are explicitly computable. On the basis of these inequalities, we obtain bounds on the deviation of a diffraction pattern created by random scatterers located on a general discrete point set in the Euclidean space, restricted to a finite volume. Here we also allow for thermal dislocations of the scatterers around their equilibrium positions. Extending recent results for independent scatterers, we give a universal upper bound on the probability of a deviation of the random scattering…

  17. Point cloud processing for smart systems

    Directory of Open Access Journals (Sweden)

    Jaromír Landa

    2013-01-01

    High population as well as economic tension emphasises the necessity of effective city management – from land use planning to urban green maintenance. The management effectiveness is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner vicinity. From these data the state of the roads, buildings, trees and other objects important for this decision-making process can be obtained. Generally, they can support the idea of "smart" or at least "smarter" cities. Unfortunately, the point clouds do not provide this type of information automatically. It has to be extracted. This extraction is done by expert personnel or by object recognition software. As the point clouds can represent large areas (streets or even cities), usage of expert personnel to identify the required objects can be very time-consuming, therefore cost ineffective. Object recognition software allows us to detect and identify required objects semi-automatically or automatically. The first part of the article reviews and analyses the state-of-the-art point cloud object recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for extraction of geospatial information about detected objects is proposed. Therefore, the method can be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can be later directly used in various GIS systems for further analyses.

  18. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text is submitted for the volume 'A Handbook of Spatial Statistics', edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title 'Parametric methods'.) This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has through the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. … is studied in Section 4, and Bayesian inference in Section 5. On one hand, as the development in computer technology and computational statistics continues, computationally-intensive simulation-based methods for likelihood inference probably will play an increasing role for statistical analysis of spatial…

  19. Statistical aspects of determinantal point processes

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper; Rubak, Ege Holger

    The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical inference. We pay special attention to stationary DPPs, where we give a simple condition ensuring their existence, construct parametric models, describe how they can be well approximated so that the likelihood can be evaluated and realizations can be simulated, and discuss how statistical inference…

  20. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    A spatial point process X is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking…

  1. Quantum Gibbs Samplers: The Commuting Case

    Science.gov (United States)

    Kastoryano, Michael J.; Brandão, Fernando G. S. L.

    2016-06-01

    We analyze the problem of preparing quantum Gibbs states of lattice spin Hamiltonians with local and commuting terms on a quantum computer and in nature. Our central result is an equivalence between the behavior of correlations in the Gibbs state and the mixing time of the semigroup which drives the system to thermal equilibrium (the Gibbs sampler). We introduce a framework for analyzing the correlation and mixing properties of quantum Gibbs states and quantum Gibbs samplers, which is rooted in the theory of non-commutative L_p spaces. We consider two distinct classes of Gibbs samplers, one of them being the well-studied Davies generator modelling the dynamics of a system due to weak-coupling with a large Markovian environment. We show that their spectral gap is independent of system size if, and only if, a certain strong form of clustering of correlations holds in the Gibbs state. Therefore every Gibbs state of a commuting Hamiltonian that satisfies clustering of correlations in this strong sense can be prepared efficiently on a quantum computer. As concrete applications of our formalism, we show that for every one-dimensional lattice system, or for systems in lattices of any dimension at temperatures above a certain threshold, the Gibbs samplers of commuting Hamiltonians are always gapped, giving an efficient way of preparing the associated Gibbs states on a quantum computer.

  2. Enzyme Catalysis and the Gibbs Energy

    Science.gov (United States)

    Ault, Addison

    2009-01-01

    Gibbs-energy profiles are often introduced during the first semester of organic chemistry, but are less often presented in connection with enzyme-catalyzed reactions. In this article I show how the Gibbs-energy profile corresponds to the characteristic kinetics of a simple enzyme-catalyzed reaction. (Contains 1 figure and 1 note.)

  3. Some probabilistic properties of fractional point processes

    KAUST Repository

    Garra, Roberto

    2017-05-16

    In this article, the first hitting times of generalized Poisson processes N_f(t), related to Bernstein functions f, are studied. For the space-fractional Poisson processes N_α(t), t > 0 (corresponding to f = x^α), the hitting probabilities P{T_k^α < ∞} are explicitly obtained and analyzed. The processes N_f(t) are time-changed Poisson processes N(H_f(t)) with subordinators H_f(t), and here we study N(∑_{j=1}^n H_{f_j}(t)) and obtain probabilistic features of these extended counting processes. A section of the paper is devoted to processes of the form N(G_{H,ν}(t)), where G_{H,ν}(t) are generalized grey Brownian motions. This involves the theory of time-dependent fractional operators of the McBride form. While the time-fractional Poisson process is a renewal process, we prove that the space-time Poisson process is no longer a renewal process.
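
    The time-change construction N(H(t)) is easy to simulate once a subordinator is chosen. The sketch below uses a gamma subordinator (Laplace exponent of the form a·log(1 + x/b), one particular Bernstein function) as an illustrative choice; the grid, rates and names are arbitrary, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gamma subordinator H(t): independent Gamma(dt, 1) increments on a grid.
t_grid = np.linspace(0.0, 10.0, 1001)
dH = rng.gamma(np.diff(t_grid), 1.0)
H = np.concatenate([[0.0], np.cumsum(dH)])

# Unit-rate Poisson process evaluated at H(t): N(H(t)) is the time-changed process.
HT = H[-1]
arrivals = np.sort(rng.uniform(0.0, HT, rng.poisson(HT)))
N_of_H = np.searchsorted(arrivals, H, side="right")   # counts arrivals <= H(t)
print("events by t = 10:", N_of_H[-1])
```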

  4. Some probabilistic properties of fractional point processes

    KAUST Repository

    Garra, Roberto; Orsingher, Enzo; Scavino, Marco

    2017-01-01

    …the hitting probabilities P{T_k^α < ∞} are explicitly obtained and analyzed. The processes N_f(t) are time-changed Poisson processes N(H_f(t)) with subordinators H_f(t), and here we study N(∑_{j=1}^n H_{f_j}(t)) and obtain probabilistic features…

  5. On statistical analysis of compound point process

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2006-01-01

    Vol. 35, No. 2-3 (2006), pp. 389-396, ISSN 1026-597X. R&D Projects: GA ČR(CZ) GA402/04/1294. Institutional research plan: CEZ:AV0Z10750506. Keywords: counting process; compound process; hazard function; Cox model. Subject RIV: BB - Applied Statistics, Operational Research

  6. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in intersecting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a "Mexican hat" or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as "unsharp masking". The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.

  7. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  8. Microbial profile and critical control points during processing of 'robo ...

    African Journals Online (AJOL)

    Microbial profile and critical control points during processing of 'robo' snack from … the relevant critical control points, especially in relation to raw materials and … to the quality of the various raw ingredients used were the roasting using earthen…

  9. Point processes and the position distribution of infinite boson systems

    International Nuclear Information System (INIS)

    Fichtner, K.H.; Freudenberg, W.

    1987-01-01

    It is shown that to each locally normal state of a boson system one can associate a point process that can be interpreted as the position distribution of the state. The point process contains all information one can get by position measurements and is determined by the latter. On the other hand, to each so-called Σ^c-point process Q they relate a locally normal state with position distribution Q.

  10. Oxidation potentials, Gibbs energies, enthalpies and entropies of actinide ions in aqueous solutions

    International Nuclear Information System (INIS)

    1977-01-01

    The values of the Gibbs energy, enthalpy, and entropy of different actinide ions, thermodynamic characteristics of the processes of hydration of these ions, and the presently known ionization potentials of actinides are given. The enthalpy and entropy components of the oxidation potentials of actinide elements are considered. The curves of the dependence of the Gibbs energy of ion formation on the atomic number of the element and the Frost diagrams are analyzed. The diagram proposed by Frost represents the graphical dependence of the Gibbs energy of hydrated ions on the degree of oxidation of the element. Using the Frost diagram it is easy to establish whether a given ion is stable to disproportioning

  11. Modeling adsorption of cationic surfactants at air/water interface without using the Gibbs equation.

    Science.gov (United States)

    Phan, Chi M; Le, Thu N; Nguyen, Cuong V; Yusa, Shin-ichi

    2013-04-16

    The Gibbs adsorption equation has been indispensable in predicting the surfactant adsorption at the interfaces, with many applications in industrial and natural processes. This study uses a new theoretical framework to model surfactant adsorption at the air/water interface without the Gibbs equation. The model was applied to two surfactants, C14TAB and C16TAB, to determine the maximum surface excesses. The obtained values demonstrated a fundamental change, which was verified by simulations, in the molecular arrangement at the interface. The new insights, in combination with recent discoveries in the field, expose the limitations of applying the Gibbs adsorption equation to cationic surfactants at the air/water interface.

  12. Self-exciting point process in modeling earthquake occurrences

    International Nuclear Information System (INIS)

    Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.

    2017-01-01

    In this paper, we present a procedure for modeling earthquakes based on a spatio-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatio-temporal point process that is characterized uniquely by its associated conditional intensity process. Earthquakes can be regarded as point patterns that have a temporal clustering feature, so we use a self-exciting point process for modeling the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables.
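
    The conditional-intensity idea can be illustrated with Ogata's thinning algorithm for a purely temporal self-exciting (Hawkes-type) process with an exponential kernel; the parameters are arbitrary, and this is a simplification of the spatio-temporal, magnitude-marked model discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0):
    """Ogata thinning for lambda(t) = mu + alpha * sum_i exp(-beta*(t - t_i)), alpha < beta."""
    ts = []
    t = 0.0
    while True:
        excite = alpha * np.sum(np.exp(-beta * (t - np.array(ts)))) if ts else 0.0
        lam_bar = mu + excite                  # upper bound: intensity decays between events
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(ts)
        excite = alpha * np.sum(np.exp(-beta * (t - np.array(ts)))) if ts else 0.0
        if rng.uniform() * lam_bar <= mu + excite:
            ts.append(t)                       # accepted: each event further excites the process

events = simulate_hawkes()
print(len(events), "events on [0, 100]")
```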

  13. Notes on the development of the Gibbs potential

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, C.; Dominicis, C. de [Commissariat à l'Énergie Atomique, Saclay (France). Centre d'Études Nucléaires]

    1959-07-01

    A short account is given of some recent work on the perturbation expansion of the Gibbs potential of quantum statistical mechanics. (author)

  14. Unifying hydrotropy under Gibbs phase rule.

    Science.gov (United States)

    Shimizu, Seishi; Matubayasi, Nobuyuki

    2017-09-13

    The task of elucidating the mechanism of solubility enhancement using hydrotropes has been hampered by the wide variety of phase behaviour that hydrotropes can exhibit, encompassing near-ideal aqueous solution, self-association, micelle formation, and micro-emulsions. Instead of taking a field guide or encyclopedic approach to classify hydrotropes into different molecular classes, we take a rational approach aiming at constructing a unified theory of hydrotropy based upon the first principles of statistical thermodynamics. Achieving this aim can be facilitated by the two key concepts: (1) the Gibbs phase rule as the basis of classifying the hydrotropes in terms of the degrees of freedom and the number of variables to modulate the solvation free energy; (2) the Kirkwood-Buff integrals to quantify the interactions between the species and their relative contributions to the process of solubilization. We demonstrate that the application of the two key concepts can in principle be used to distinguish the different molecular scenarios at work under apparently similar solubility curves observed from experiments. In addition, a generalization of our previous approach to solutes beyond dilution reveals the unified mechanism of hydrotropy, driven by a strong solute-hydrotrope interaction which overcomes the apparent per-hydrotrope inefficiency due to hydrotrope self-clustering.

  15. Evolution algebras generated by Gibbs measures

    International Nuclear Information System (INIS)

    Rozikov, Utkir A.; Tian, Jianjun Paul

    2009-03-01

    In this article we study algebraic structures of function spaces defined by graphs and state spaces equipped with Gibbs measures, by associating evolution algebras. We give a constructive description of associating evolution algebras to the function spaces (cell spaces) defined by graphs and state spaces and a Gibbs measure μ. For finite graphs we find some evolution subalgebras and other useful properties of the algebras. We obtain a structure theorem for evolution algebras when graphs are finite and connected. We prove that for a fixed finite graph, the function spaces have a unique algebraic structure, since all evolution algebras are isomorphic to each other whichever Gibbs measures are assigned. When graphs are infinite, our construction allows a natural introduction of thermodynamics into the study of several systems of biology, physics and mathematics by the theory of evolution algebras. (author)

  16. Non-parametric Bayesian inference for inhomogeneous Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper; Johansen, Per Michael

    …is a shot noise process, and the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior using a Metropolis-Hastings algorithm in the "conventional" way…

  17. A tutorial on Palm distributions for spatial point processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This tutorial provides an introduction to Palm distributions for spatial point processes. Initially, in the context of finite point processes, we give an explicit definition of Palm distributions in terms of their density functions. Then we review Palm distributions in the general case. Finally, we...

  18. SHAPE FROM TEXTURE USING LOCALLY SCALED POINT PROCESSES

    Directory of Open Access Journals (Sweden)

    Eva-Maria Didden

    2015-09-01

    Shape from texture refers to the extraction of 3D information from 2D images with irregular texture. This paper introduces a statistical framework to learn shape from texture where convex texture elements in a 2D image are represented through a point process. In a first step, the 2D image is preprocessed to generate a probability map corresponding to an estimate of the unnormalized intensity of the latent point process underlying the texture elements. The latent point process is subsequently inferred from the probability map in a non-parametric, model-free manner. Finally, the 3D information is extracted from the point pattern by applying a locally scaled point process model where the local scaling function represents the deformation caused by the projection of a 3D surface onto a 2D image.

  19. Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.

    Science.gov (United States)

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree-structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
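
    A concrete instance of Gibbs sampling on a factor graph: a chain-structured Ising model with unary weight h and pairwise weight J (a tree, so mixing is benign in this toy case); all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, J, h, n_sweeps = 10, 0.5, 0.2, 5000

x = rng.choice([-1, 1], size=n)
mags = []
for _ in range(n_sweeps):
    for i in range(n):
        # local field from the unary factor and the (at most two) pairwise factors at x_i
        field = h + J * (x[i - 1] if i > 0 else 0) + J * (x[i + 1] if i < n - 1 else 0)
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(x_i = +1 | rest)
        x[i] = 1 if rng.uniform() < p_plus else -1
    mags.append(x.mean())

print("mean magnetization:", np.mean(mags[1000:]))
```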

  20. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  1. Illustrating Enzyme Inhibition Using Gibbs Energy Profiles

    Science.gov (United States)

    Bearne, Stephen L.

    2012-01-01

    Gibbs energy profiles have great utility as teaching and learning tools because they present students with a visual representation of the energy changes that occur during enzyme catalysis. Unfortunately, most textbooks divorce discussions of traditional kinetic topics, such as enzyme inhibition, from discussions of these same topics in terms of…

  2. Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger

    We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the sizes of trees, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum…

  3. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has been shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method … The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes, each represented by a simple shape; here quadrilaterals are chosen for the presented…

  4. Multivariate Product-Shot-noise Cox Point Process Models

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge

    We introduce a new multivariate product-shot-noise Cox process which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot.
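
    A single shot-noise Cox component can be simulated through its cluster representation (Poisson parents with Gaussian offspring, i.e. a Thomas-type process); the sketch below does not reproduce the paper's multivariate product construction, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Shot-noise driving intensity: Lambda(u) = sum_i w * N(u; c_i, s^2 I) over Poisson parents c_i.
kappa, w, s = 10.0, 20.0, 0.03           # parent rate, mean offspring per parent, kernel sd
ext = 1.2 ** 2                           # extended window area to reduce edge effects
parents = rng.uniform(-0.1, 1.1, size=(rng.poisson(kappa * ext), 2))

# Cluster representation: given the parents, each contributes Poisson(w) Gaussian offspring.
clusters = [c + s * rng.standard_normal((rng.poisson(w), 2)) for c in parents]
pts = np.concatenate(clusters) if clusters else np.empty((0, 2))
pts = pts[((pts >= 0.0) & (pts <= 1.0)).all(axis=1)]   # restrict to the unit square
print(len(pts), "points in the unit square")
```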

  5. PROCESSING UAV AND LIDAR POINT CLOUDS IN GRASS GIS

    Directory of Open Access Journals (Sweden)

    V. Petras

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.

  6. Scattering analysis of point processes and random measures

    International Nuclear Information System (INIS)

    Hanisch, K.H.

    1984-01-01

    In the present paper scattering analysis of point processes and random measures is studied. Known formulae which connect the scattering intensity with the pair distribution function of the studied structures are proved in a rigorous manner with tools of the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for 'grain-germ-models', a new formula is proved which yields the pair distribution function of the 'grain-germ-model' in terms of the pair distribution function of the underlying point process (the 'germs') and of the mean structure factor and the mean squared structure factor of the particles (the 'grains'). (author)
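
    For orientation, the classical relation that such results make rigorous connects the scattering intensity (structure factor) of a stationary point process of intensity \rho to its pair correlation function g; in standard textbook notation (ours, not the paper's),

        S(\mathbf{q}) \;=\; 1 + \rho \int_{\mathbb{R}^d} \bigl( g(\mathbf{r}) - 1 \bigr)\, e^{-i\,\mathbf{q}\cdot\mathbf{r}} \,\mathrm{d}\mathbf{r} .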

  7. Dew point vs bubble point : a misunderstood constraint on gravity drainage processes

    Energy Technology Data Exchange (ETDEWEB)

    Nenninger, J. [N-Solv Corp., Calgary, AB (Canada); Gunnewiek, L. [Hatch Ltd., Mississauga, ON (Canada)

    2009-07-01

    This study demonstrated that gravity drainage processes that use blended fluids such as solvents have an inherently unstable material balance due to differences between dew point and bubble point compositions. The instability can lead to the accumulation of volatile components within the chamber, and impair mass and heat transfer processes. Case studies were used to demonstrate the large temperature gradients within the vapour chamber caused by temperature differences between the bubble point and dew point for blended fluids. A review of published data showed that many experiments on in-situ processes do not account for unstable material balances caused by a lack of steam trap control. A study of temperature profiles during steam assisted gravity drainage (SAGD) studies showed significant temperature depressions caused by methane accumulations at the outside perimeter of the steam chamber. It was demonstrated that the condensation of large volumes of purified solvents provided an efficient mechanism for the removal of methane from the chamber. It was concluded that gravity drainage processes can be optimized by using pure propane during the injection process. 22 refs., 1 tab., 18 figs.

  8. An efficient estimator for Gibbs random fields

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin

    2014-01-01

    Roč. 50, č. 6 (2014), s. 883-895 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : Gibbs random field * efficient estimator * empirical estimator Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf

  9. Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality

    Science.gov (United States)

    Ayala, Mario; Carinci, Gioia; Redig, Frank

    2018-06-01

    We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.

  10. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.
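
    To make the optimization step concrete, here is a heavily compressed sketch of multiple birth-and-death dynamics over object configurations, assuming a user-supplied energy function and object sampler. The death-probability form, cooling schedule and all names are illustrative simplifications, not the authors' implementation.

        import numpy as np

        def multiple_birth_death(energy, propose, n_iter=500,
                                 b0=20.0, T0=2.0, cool=0.97, seed=0):
            """Minimize energy(cfg) over lists of objects (e.g. marked rectangles).
            energy: configuration -> float (lower is better); propose: rng -> object."""
            rng = np.random.default_rng(seed)
            cfg, T, delta = [], T0, 1.0
            for _ in range(n_iter):
                # Birth: add a Poisson(b0 * delta) number of random objects.
                cfg += [propose(rng) for _ in range(rng.poisson(b0 * delta))]
                # Death: remove each object with a probability that grows with
                # its energy contribution, so 'bad' objects die first.
                survivors = []
                for i, obj in enumerate(cfg):
                    rest = survivors + cfg[i + 1:]
                    dE = energy(rest + [obj]) - energy(rest)  # obj's contribution
                    a = delta * np.exp(dE / T)
                    if rng.random() >= a / (1.0 + a):         # object survives
                        survivors.append(obj)
                cfg = survivors
                T *= cool; delta *= cool   # cool temperature and birth intensity
            return cfg

    This naive version re-evaluates the energy O(n^2) times per sweep; a real implementation caches per-object data terms.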

  11. Inference with minimal Gibbs free energy in information field theory

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Weig, Cornelius

    2010-01-01

    Non-linear and non-Gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the Gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from Poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown spectrum, and (iii) inference of a Poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how Gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-Gaussian posterior.

  12. Pointo - a Low Cost Solution to Point Cloud Processing

    Science.gov (United States)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as very large packages containing a variety of methods and tools. The result is software that is expensive to acquire and also difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts, nor are they willing to pay the high acquisition costs of this software and hardware. In this paper we introduce a solution for low cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple and user oriented design improves the user experience and empowers us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family as a series of connected programs providing easy to use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a

  13. Investigation of Random Switching Driven by a Poisson Point Process

    DEFF Research Database (Denmark)

    Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef

    2015-01-01

    This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed...... together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly....
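
    A minimal simulation sketch of the setting described above: a planar state that flows under one of two linear subsystems and switches at the events of a homogeneous Poisson process. The matrices, rate and horizon are invented for illustration and are not taken from the paper.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(1)
        A = [np.array([[0.0, 1.0], [-1.0, -0.2]]),   # mode 0 dynamics
             np.array([[-0.5, 0.0], [0.0, -0.5]])]   # mode 1 dynamics
        rate, t_end = 2.0, 10.0

        # Poisson switching times: cumulative sums of Exp(rate) gaps.
        times = np.cumsum(rng.exponential(1.0 / rate, size=1000))
        times = times[times < t_end]

        x, t, mode = np.array([1.0, 0.0]), 0.0, 0
        for s in np.append(times, t_end):
            x = expm(A[mode] * (s - t)) @ x   # flow in the current mode
            t, mode = s, 1 - mode             # switch at the event
        print("state at t_end:", x)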

  14. Numerical implementation and oceanographic application of the Gibbs thermodynamic potential of seawater

    Directory of Open Access Journals (Sweden)

    R. Feistel

    2005-01-01

    The 2003 Gibbs thermodynamic potential function represents a very accurate, compact, consistent and comprehensive formulation of equilibrium properties of seawater. It is expressed in the International Temperature Scale ITS-90 and is fully consistent with the current scientific pure water standard, IAPWS-95. Source code examples in FORTRAN, C++ and Visual Basic are presented for the numerical implementation of the potential function and its partial derivatives, as well as for potential temperature. A collection of thermodynamic formulas and relations is given for possible applications in oceanography, ranging from density and chemical potential, through entropy and potential density, to mixing heat and entropy production. For colligative properties like vapour pressure and freezing points, and for a Gibbs potential of sea ice, the equations relating the Gibbs function of seawater to those of vapour and ice are presented.
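
    The derived quantities follow from standard thermodynamic identities: specific volume is the pressure derivative of the Gibbs function, v = (dg/dp) at fixed S, T, and specific entropy its negative temperature derivative, s = -(dg/dT) at fixed S, p. The sketch below applies these identities to any callable g(S, T, p) via central differences; it is a generic illustration and does not reproduce the published coefficients.

        def density_and_entropy(g, S, T, p, dT=1e-3, dp=1.0):
            """Density (1/v) and specific entropy from a Gibbs function
            g(S, T, p) in J/kg, with T in K and p in Pa."""
            v = (g(S, T, p + dp) - g(S, T, p - dp)) / (2.0 * dp)   # m^3/kg
            s = -(g(S, T + dT, p) - g(S, T - dT, p)) / (2.0 * dT)  # J/(kg K)
            return 1.0 / v, s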

  15. Numerical implementation and oceanographic application of the Gibbs potential of ice

    Directory of Open Access Journals (Sweden)

    R. Feistel

    2005-01-01

    The 2004 Gibbs thermodynamic potential function of naturally abundant water ice is based on much more experimental data than its predecessors, is therefore significantly more accurate and reliable, and for the first time describes the entire temperature and pressure range of existence of this ice phase. It is expressed in the ITS-90 temperature scale and is consistent with the current scientific pure water standard, IAPWS-95, and the 2003 Gibbs potential of seawater. The combination of these formulations provides sublimation pressures, freezing points, and sea ice properties covering the parameter ranges of oceanographic interest. This paper provides source code examples in Visual Basic, Fortran and C++ for the computation of the Gibbs function of ice and its partial derivatives. It reports the most important related thermodynamic equations for ice and sea ice properties.

  16. Gibbs Energy Modeling of Digenite and Adjacent Solid-State Phases

    Science.gov (United States)

    Waldner, Peter

    2017-08-01

    All sulfur potential and phase diagram data available in the literature for solid-state equilibria related to digenite have been assessed. A thorough thermodynamic analysis at 1 bar total pressure has been performed. A three-sublattice approach has been developed to model the Gibbs energy of digenite as a function of composition and temperature using the compound energy formalism. The Gibbs energies of the adjacent solid-state phases covellite and high-temperature chalcocite are also modeled, treating both sulfides as stoichiometric compounds. The novel model for digenite offers a new interpretation of experimental data, may contribute, from a thermodynamic point of view, to elucidating the role of copper species within the crystal structure, and allows extrapolation to composition regimes richer in copper than stoichiometric digenite Cu2S. Preliminary predictions into the ternary Cu-Fe-S system at 1273 K (1000 °C) using the Gibbs energy model of digenite to calculate its iron solubility are promising.

  17. On estimation of the intensity function of a point process

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2010-01-01

    Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and

  18. Spatio-temporal point process filtering methods with an application

    Czech Academy of Sciences Publication Activity Database

    Frcalová, B.; Beneš, V.; Klement, Daniel

    2010-01-01

    Roč. 21, 3-4 (2010), s. 240-252 ISSN 1180-4009 R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords : cox point process * filtering * spatio-temporal modelling * spike Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2010

  19. A case study on point process modelling in disease mapping

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Moller, J.; Waagepetersen, R.

    2005-01-01

    Roč. 24, č. 3 (2005), s. 159-168 ISSN 1580-3139 R&D Projects: GA MŠk 0021620839; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z10750506 Keywords : log Gaussian Cox point process * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research

  20. A J–function for inhomogeneous point processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2010-01-01

    We propose new summary statistics for intensity-reweighted moment stationary point processes that generalise the well known J-, empty space, and nearest-neighbour distance distribution functions, represent them in terms of generating functionals and conditional intensities, and relate

  1. Some properties of point processes in statistical optics

    International Nuclear Information System (INIS)

    Picinbono, B.; Bendjaballah, C.

    2010-01-01

    The analysis of the statistical properties of the point process (PP) of photon detection times can be used to determine whether or not an optical field is classical, in the sense that its statistical description does not require the methods of quantum optics. This determination is, however, more difficult than ordinarily admitted and the first aim of this paper is to illustrate this point by using some results of the PP theory. For example, it is well known that the analysis of the photodetection of classical fields exhibits the so-called bunching effect. But this property alone cannot be used to decide the nature of a given optical field. Indeed, we have presented examples of point processes for which a bunching effect appears and yet they cannot be obtained from a classical field. These examples are illustrated by computer simulations. Similarly, it is often admitted that for fields with very low light intensity the bunching or antibunching can be described by using the statistical properties of the distance between successive events of the point process, which simplifies the experimental procedure. We have shown that, while this property is valid for classical PPs, it has no reason to be true for nonclassical PPs, and we have presented some examples of this situation also illustrated by computer simulations.

  2. Shot-noise-weighted processes : a new family of spatial point processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); I.S. Molchanov (Ilya)

    1995-01-01

    The paper suggests a new family of spatial point process distributions. They are defined by means of densities with respect to the Poisson point process within a bounded set. These densities are given in terms of a functional of the shot-noise process with a given influence

  3. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests.

  4. A case study on point process modelling in disease mapping

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor

    2005-01-01

    of the risk on the covariates. Instead of using the common areal level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo...... methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models and the basic model which is adopted for the population intensity determines what covariates influence...... the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics....

  5. A Marked Point Process Framework for Extracellular Electrical Potentials

    Directory of Open Access Journals (Sweden)

    Carlos A. Loza

    2017-12-01

    Neuromodulations are an important component of extracellular electrical potentials (EEP), such as the electroencephalogram (EEG), electrocorticogram (ECoG) and local field potentials (LFP). This spatially and temporally organized multi-frequency transient (phasic) activity reflects the multiscale spatiotemporal synchronization of neuronal populations in response to external stimuli or internal physiological processes. We propose a novel generative statistical model of a single EEP channel, where the collected signal is regarded as the noisy addition of reoccurring, multi-frequency phasic events over time. One of the main advantages of the proposed framework is the exceptional temporal resolution in the time location of the EEP phasic events, e.g., up to the sampling period utilized in the data collection. For the first time, this allows a description of neuromodulation in EEPs as a Marked Point Process (MPP), with events represented by their amplitude, center frequency, duration, and time of occurrence. The generative model for the multi-frequency phasic events exploits sparseness and involves a shift-invariant implementation of the clustering technique known as k-means. The cost function incorporates a robust estimation component based on correntropy to mitigate the outliers caused by the inherent noise in the EEP. Lastly, the background EEP activity is explicitly modeled as the non-sparse component of the collected signal to further improve the delineation of the multi-frequency phasic events in time. The framework is validated using two publicly available datasets: the DREAMS sleep spindles database and one of the Brain-Computer Interface (BCI) competition datasets. The results achieve benchmark performance and provide novel quantitative descriptions based on power, event rates and timing in order to assess behavioral correlates beyond the classical power spectrum-based analysis. This opens the possibility for a unifying point process framework of

  6. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    Science.gov (United States)

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.

  7. Simple computation of reaction–diffusion processes on point clouds

    KAUST Repository

    Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.

    2013-01-01

    The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.
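
    For readers who want to experiment with surface PDEs on raw point sets, the sketch below runs explicit heat flow through a k-nearest-neighbour graph Laplacian. Note that this is a simpler stand-in, not the closest point method of the paper: it shares the unorganized-point-set input but approximates diffusion combinatorially rather than through Cartesian differential operators.

        import numpy as np
        from scipy.spatial import cKDTree

        def heat_flow_on_cloud(points, u0, k=8, dt=0.1, steps=100):
            """Diffuse the scalar field u0 over an (N, 3) point cloud."""
            tree = cKDTree(points)
            _, nbr = tree.query(points, k=k + 1)  # column 0 is the point itself
            u = np.asarray(u0, dtype=float).copy()
            for _ in range(steps):
                # Discrete Laplacian: neighbour average minus the point value.
                u += dt * (u[nbr[:, 1:]].mean(axis=1) - u)
            return u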

  9. Statistical representation of a spray as a point process

    International Nuclear Information System (INIS)

    Subramaniam, S.

    2000-01-01

    The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed. (c) 2000 American Institute of Physics

  10. Energy risk management through self-exciting marked point process

    International Nuclear Information System (INIS)

    Herrera, Rodrigo

    2013-01-01

    Crude oil is a dynamically traded commodity that affects many economies. We propose a collection of marked self-exciting point processes with dependent arrival rates for extreme events in oil markets, together with related risk measures. The models treat the time among extreme events in oil markets as a stochastic process. The main advantage of this approach is its capability to capture the short, medium and long-term behavior of extremes without involving an arbitrary stochastic volatility model or a prefiltration of the data, as is common in extreme value theory applications. We make use of the proposed model in order to obtain an improved estimate for the Value at Risk in oil markets. Empirical findings suggest that the reliability and stability of Value at Risk estimates improve as a result of the finer modeling approach. This is supported by an empirical application in the representative West Texas Intermediate (WTI) and Brent crude oil markets.

  11. Weak convergence of marked point processes generated by crossings of multivariate jump processes

    DEFF Research Database (Denmark)

    Tamborrino, Massimiliano; Sacerdote, Laura; Jacobsen, Martin

    2014-01-01

    We consider the multivariate point process determined by the crossing times of the components of a multivariate jump process through a multivariate boundary, assuming to reset each component to an initial value after its boundary crossing. We prove that this point process converges weakly...... process converging to a multivariate Ornstein–Uhlenbeck process is discussed as a guideline for applying diffusion limits for jump processes. We apply our theoretical findings to neural network modeling. The proposed model gives a mathematical foundation to the generalization of the class of Leaky...

  12. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions, as well as settings where z is completely or only partially known.

  13. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    2009-01-01

    The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.

  14. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function......(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypothesis from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include goodness-of-fit test with several test functions, goodness-of-fit test...

  15. CLINSULF sub-dew-point process for sulphur recovery

    Energy Technology Data Exchange (ETDEWEB)

    Heisel, M.; Marold, F.

    1988-01-01

    In a 2-reactor system, the CLINSULF process allows very high sulphur recovery rates. When operated at 100 °C at the outlet, i.e. below the sulphur solidification point, a sulphur recovery rate of more than 99.2% was achieved in a 2-reactor series. Assuming a 70% sulphur recovery in an upstream Claus furnace plus sulphur condenser, an overall sulphur recovery of more than 99.8% results for the 2-reactor system. This is approximately 2% higher than in conventional Claus plus SDP units, which mostly consist of 4 reactors or more. This means that the CLINSULF SSP process promises to be an improvement both in terms of efficiency and of low investment cost.

  16. Self-Exciting Point Process Modeling of Conversation Event Sequences

    Science.gov (United States)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to the data of conversation sequences recorded in company offices in Japan. In this way, we can estimate relative magnitudes of the self excitement, its temporal decay, and the base event rate independent of the self excitation. These variables highly depend on individuals. We also point out that the Hawkes model has an important limitation that the correlation in the interevent times and the burstiness cannot be independently modulated.
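
    The burstiness discussed above is easy to reproduce numerically. Below is a standard Ogata-thinning simulation of a Hawkes process with exponential kernel, lambda(t) = mu + alpha * sum over past events t_i of exp(-beta (t - t_i)); mu is the base rate while alpha and beta control the self-excitement and its decay. All parameter values are illustrative only.

        import numpy as np

        def simulate_hawkes(mu, alpha, beta, t_end, seed=0):
            """Simulate event times of a Hawkes process by Ogata's thinning.
            Stationarity requires alpha / beta < 1."""
            rng = np.random.default_rng(seed)
            events, t = [], 0.0

            def lam(s):  # conditional intensity at time s
                past = np.array([ti for ti in events if ti < s])
                return mu + alpha * np.exp(-beta * (s - past)).sum()

            while True:
                lam_bar = lam(t) + alpha  # valid upper bound just after time t
                t += rng.exponential(1.0 / lam_bar)
                if t >= t_end:
                    return np.array(events)
                if rng.random() <= lam(t) / lam_bar:  # thinning acceptance
                    events.append(t)

        gaps = np.diff(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_end=500.0))
        print(gaps.std() / gaps.mean())  # bursty: coefficient of variation > 1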

  17. A brief critique of the Adam-Gibbs entropy model

    DEFF Research Database (Denmark)

    Dyre, J. C.; Hecksher, Tina; Niss, Kristine

    2009-01-01

    This paper critically discusses the entropy model proposed by Adam and Gibbs in 1965 for the dramatic temperature dependence of glass-forming liquids' average relaxation time, a model which has been among the most influential of the last four decades. We discuss the Adam-Gibbs model's theoretical...

  18. Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes

    DEFF Research Database (Denmark)

    Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo

    2012-01-01

    In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. Starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations....... The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach...... our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human...

  19. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks, especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  20. Psychoanalytic Interpretation of Blueberries by Susan Gibb

    Directory of Open Access Journals (Sweden)

    Maya Zalbidea Paniagua

    2014-06-01

    Blueberries (2009) by Susan Gibb, published in the ELO (Electronic Literature Organization), invites the reader to travel inside the protagonist's mind to discover real and imaginary experiences examining notions of gender, sex, body and identity of a traumatised woman. This article explores the verbal and visual modes in this digital short fiction following semiotic patterns as well as interpreting the psychological states that are expressed through poetical and technological components. A comparative study of the consequences of trauma in the protagonist will be developed, including psychoanalytic theories by Sigmund Freud, Jacques Lacan and the feminist psychoanalysts Melanie Klein and Bracha Ettinger. The reactions of the protagonist will be studied: loss of reality, hallucinations and the Electra complex, as well as the rise of defence mechanisms and her use of artistic creativity as a healing therapy. The interactivity of the hypermedia, multiple paths and endings will be analyzed as a literary strategy that increases the reader's capacity to empathize with the speaker.

  1. A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependence on the population intensity models, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
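
    For reference, the model class used here can be written compactly (in our notation, not necessarily the authors'): conditional on a Gaussian random field \Psi, the cases form an inhomogeneous Poisson process with intensity

        \lambda(u) \;=\; \lambda_0(u)\, \exp\{\, z(u)^{\top}\beta + \Psi(u) \,\},

    where \lambda_0 is the background population intensity, z(u) the spatial covariates and \beta the regression parameters; the randomness of \Psi is what makes the point process a (log Gaussian) Cox process.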

  2. Mean-field inference of Hawkes point processes

    International Nuclear Information System (INIS)

    Bacry, Emmanuel; Gaïffas, Stéphane; Mastromatteo, Iacopo; Muzy, Jean-François

    2016-01-01

    We propose a fast and efficient estimation method that is able to accurately recover the parameters of a d-dimensional Hawkes point process from a set of observations. We exploit a mean-field approximation that is valid when the fluctuations of the stochastic intensity are small. We show that this is notably the case in situations when interactions are sufficiently weak, when the dimension of the system is high or when the fluctuations are self-averaging due to the large number of past events they involve. In such a regime the estimation of a Hawkes process can be mapped onto a least-squares problem for which we provide an analytic solution. Though this estimator is biased, we show that its precision can be comparable to that of the maximum likelihood estimator while its computation speed is shown to be improved considerably. We give a theoretical control on the accuracy of our new approach and illustrate its efficiency using synthetic datasets, in order to assess the statistical estimation error of the parameters. (paper)

  3. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize these processing having adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to the actual scene image ones, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the respectful perception of the CP direction of one pixel minority value among the majority value of a 2×2 pixels block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CP resolved on the target, the transformation CP and its inverse transform that make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and a visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with real signature test target and conventional methods for the more linear part (displaying). The application to

  4. Multiplicative point process as a model of trading activity

    Science.gov (United States)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ~ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are contained in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
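
    A numerical sketch of this class of models, assuming (our paraphrase of the multiplicative iteration, with our own parameter values and reflecting bounds) that each interevent time is obtained from the previous one by a small multiplicative perturbation; the periodogram of the resulting event-count signal then exhibits a power-law region.

        import numpy as np

        rng = np.random.default_rng(2)
        gamma, sigma, mu = 1e-4, 0.02, 0.5   # drift, noise, multiplicativity
        tau_min, tau_max = 1e-3, 1.0         # keep interevent times positive

        n = 100_000
        tau = np.empty(n); tau[0] = 0.1
        for k in range(n - 1):
            step = gamma * tau[k] ** (2 * mu - 1) \
                   + sigma * tau[k] ** mu * rng.standard_normal()
            tau[k + 1] = np.clip(tau[k] + step, tau_min, tau_max)

        events = np.cumsum(tau)              # event times of the point process
        counts, _ = np.histogram(events, bins=2 ** 14)
        psd = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
        # The log-log slope of psd vs. frequency approximates -beta.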

  5. Gibbs perturbations of a two-dimensional gauge field

    International Nuclear Information System (INIS)

    Petrova, E.N.

    1981-01-01

    Small Gibbs perturbations of random fields have been investigated up to now for a few initial fields only. Among them there are independent fields, Gaussian fields and some others. The possibility for the investigation of Gibbs modifications of a random field depends essentially on the existence of good estimates for semiinvariants of this field. This is the reason why the class of random fields for which the investigation of Gibbs perturbations with arbitrary potential of bounded support is possible is rather small. The author takes as initial a well-known model: a two-dimensional gauge field. (Auth.)

  6. Seeking a fingerprint: analysis of point processes in actigraphy recording

    Science.gov (United States)

    Gudowska-Nowak, Ewa; Ochab, Jeremi K.; Oleś, Katarzyna; Beldzik, Ewa; Chialvo, Dante R.; Domagalik, Aleksandra; Fąfrowicz, Magdalena; Marek, Tadeusz; Nowak, Maciej A.; Ogińska, Halszka; Szwed, Jerzy; Tyburczyk, Jacek

    2016-05-01

    Motor activity of humans displays complex temporal fluctuations which can be characterised by scale-invariant statistics, thus demonstrating that structure and fluctuations of such kinetics remain similar over a broad range of time scales. Previous studies on humans regularly deprived of sleep or suffering from sleep disorders predicted a change in the invariant scale parameters with respect to those for healthy subjects. In this study we investigate the signal patterns from actigraphy recordings by means of characteristic measures of fractional point processes. We analyse spontaneous locomotor activity of healthy individuals recorded during a week of regular sleep and a week of chronic partial sleep deprivation. Behavioural symptoms of lack of sleep can be evaluated by analysing statistics of duration times during active and resting states, and alteration of behavioural organisation can be assessed by analysis of power laws detected in the event count distribution, distribution of waiting times between consecutive movements and detrended fluctuation analysis of recorded time series. We claim that among different measures characterising complexity of the actigraphy recordings and their variations implied by chronic sleep distress, the exponents characterising slopes of survival functions in resting states are the most effective biomarkers distinguishing between healthy and sleep-deprived groups.

  7. Reflections on Gibbs: From Critical Phenomena to the Amistad

    Science.gov (United States)

    Kadanoff, Leo P.

    2003-03-01

    J. Willard Gibbs, the younger, was the first American theorist. He was one of the inventors of statistical physics. His introduction and development of the concepts of phase space, phase transitions, and thermodynamic surfaces was remarkably correct and elegant. These three concepts form the basis of different but related areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. I shall talk about these connections by using concepts suggested by the work of Michael Berry and explicitly put forward by the philosopher Robert Batterman. This viewpoint relates theory-connection to the applied mathematics concepts of asymptotic analysis and singular perturbations. J. Willard Gibbs, the younger, had all his achievements concentrated in science. His father, also named J. Willard Gibbs and also a professor at Yale, had one great achievement that remains unmatched in our day. I shall describe it.

  8. Boltzmann, Gibbs and Darwin-Fowler approaches in parastatistics

    International Nuclear Information System (INIS)

    Ponczek, R.L.; Yan, C.C.

    1976-01-01

    Derivations of the equilibrium values of occupation numbers are made using three approaches, namely, the Boltzmann 'elementary' one, the ensemble method of Gibbs, and that of Darwin and Fowler as well.

  9. Gibbs phenomenon for dispersive PDEs on the line

    OpenAIRE

    Biondini, Gino; Trogdon, Thomas

    2014-01-01

    We investigate the Cauchy problem for linear, constant-coefficient evolution PDEs on the real line with discontinuous initial conditions (ICs) in the small-time limit. The small-time behavior of the solution near discontinuities is expressed in terms of universal, computable special functions. We show that the leading-order behavior of the solution of dispersive PDEs near a discontinuity of the ICs is characterized by Gibbs-type oscillations and gives exactly the Wilbraham-Gibbs constant.

  10. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    Science.gov (United States)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression at ratios of 2:1 to greater than 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods such as object detection.

  11. Uniqueness of Gibbs measure for Potts model with countable set of spin values

    International Nuclear Information System (INIS)

    Ganikhodjaev, N.N.; Rozikov, U.A.

    2004-11-01

    We consider a nearest-neighbor Potts model with countable spin values 0,1,..., and nonzero external field, on a Cayley tree of order k (with k+1 neighbors). We study translation-invariant 'splitting' Gibbs measures. We reduce the problem to the description of the solutions of some infinite system of equations. For any k≥1 and any fixed probability measure ν with ν(i)>0 on the set of all non-negative integers Φ={0,1,...}, we show that the set of translation-invariant splitting Gibbs measures contains at most one point, independently of the parameters of the Potts model with countable set of spin values on the Cayley tree. We also give a full description of the class of measures ν on Φ such that, with respect to each element of this class, our infinite system of equations has a unique solution {a_i : i=1,2,...}, where each a_i is an element of (0,1). (author)

  12. Discrete Approximations of Determinantal Point Processes on Continuous Spaces: Tree Representations and Tail Triviality

    Science.gov (United States)

    Osada, Hirofumi; Osada, Shota

    2018-01-01

    We prove tail triviality of determinantal point processes μ on continuous spaces. Tail triviality has been proved for such processes only on discrete spaces, and hence we have generalized the result to continuous spaces. To do this, we construct tree representations, that is, discrete approximations of determinantal point processes enjoying a determinantal structure. There are many interesting examples of determinantal point processes on continuous spaces such as zero points of the hyperbolic Gaussian analytic function with Bergman kernel, and the thermodynamic limit of eigenvalues of Gaussian random matrices for the Sine_2, Airy_2, Bessel_2, and Ginibre point processes. Our main theorem proves all these point processes are tail trivial.

  13. The Role of Shearing Energy and Interfacial Gibbs Free Energy in the Emulsification Mechanism of Waxy Crude Oil

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2017-05-01

    Crude oil is generally produced together with water, and the water cut produced by oil wells typically increases over their lifetime, so the creation of emulsions during oil production is inevitable. However, the formation of emulsions presents a costly problem, particularly in surface processing, both in terms of transportation energy consumption and separation efficiency. To deal with the production and operational problems related to crude oil emulsions, and especially to ensure the separation and transportation of crude oil-water systems, it is necessary to better understand the emulsification mechanism of crude oil under different conditions from the aspects of bulk and interfacial properties. The concept of shearing energy was introduced in this study to reveal the driving force for emulsification. The relationship between shearing stress in the flow field and interfacial tension (IFT) was established, and the correlation between shearing energy and interfacial Gibbs free energy was developed. The potential of the developed correlation model was validated using experimental and field data on emulsification behavior. It was also shown how droplet deformation can be predicted from a random deformation degree and orientation angle. The results indicated that shearing energy, as the energy produced by shearing stress working in the flow field, is the driving force activating the emulsification behavior. The deformation degree and orientation angle of a dispersed phase droplet are associated with the interfacial properties, the rheological properties and the degree of turbulence experienced. The correlation between shearing stress and IFT can be quantified if droplet deformation degree vs. droplet orientation angle data are available. When the water cut is close to the inversion point of a waxy crude oil emulsion, the interfacial Gibbs free energy change decreases and the shearing energy increases. This feature is also presented in the special regions where

  14. Equivalence of functional limit theorems for stationary point processes and their Palm distributions

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1989-01-01

    Let P be the distribution of a stationary point process on the real line and let P0 be its Palm distribution. In this paper we consider two types of functional limit theorems, those in terms of the number of points of the point process in (0, t] and those in terms of the location of the nth point

  15. Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox

    Directory of Open Access Journals (Sweden)

    Oleg Borodiouk

    1999-05-01

    We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, infinite numbers of cycles or infinite work, in order to provide an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.

  16. Entropy Calculation of Reversible Mixing of Ideal Gases Shows Absence of Gibbs Paradox

    OpenAIRE

    Oleg Borodiouk; Vasili Tatarin

    1999-01-01

    We consider the work of reversible mixing of ideal gases using a real process. No assumptions were made concerning infinite shifts, infinite numbers of cycles or infinite work, in order to provide an accurate calculation of the entropy resulting from reversible mixing of ideal gases. We derived an equation showing the dependence of this entropy on the difference in potential of the mixed gases, which is evidence for the absence of Gibbs' paradox.

  17. Microbial profile and critical control points during processing of 'robo ...

    African Journals Online (AJOL)

    2009-05-18

    ... frying, surface fat draining, open-air cooling, and holding/packaging in polyethylene films during sales and distribution. The product was, however, classified under category III with respect to risk and the significance of monitoring and evaluation of quality using the hazard analysis critical control point.

  18. Discussion of "Modern statistics for spatial point processes"

    DEFF Research Database (Denmark)

    Jensen, Eva Bjørn Vedel; Prokesová, Michaela; Hellmund, Gunnar

    2007-01-01

    The paper ‘Modern statistics for spatial point processes’ by Jesper Møller and Rasmus P. Waagepetersen is based on a special invited lecture given by the authors at the 21st Nordic Conference on Mathematical Statistics, held at Rebild, Denmark, in June 2006. At the conference, Antti...

  19. Geometric anisotropic spatial point pattern analysis and Cox processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Toftaker, Håkon

    ... In particular we study Cox process models with an elliptical pair correlation function, including shot noise Cox processes and log Gaussian Cox processes, and we develop estimation procedures using summary statistics and Bayesian methods. Our methodology is illustrated on real and synthetic datasets of spatial...

  20. Process for structural geologic analysis of topography and point data

    Science.gov (United States)

    Eliason, Jay R.; Eliason, Valerie L. C.

    1987-01-01

    A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent underlying geologic structure. Point data such as fracture phenomena which can be related to fracture planes in 3-dimensional space can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.
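
    A small sketch of the coplanarity test implied above, under one plausible reading: two valley vector segments define a common plane exactly when the vector joining points on them is orthogonal to the cross product of their directions (or when the directions are parallel). The function name and tolerance are illustrative, not the patented procedure.

        import numpy as np

        def coplanar(p1, v1, p2, v2, tol=1e-6):
            """p1, p2: points on each segment; v1, v2: direction vectors.
            Returns (is_coplanar, unit plane normal or None)."""
            n = np.cross(v1, v2)
            if np.linalg.norm(n) < tol:        # parallel directions: coplanar
                return True, None
            n = n / np.linalg.norm(n)
            gap = np.asarray(p2, float) - np.asarray(p1, float)
            return bool(abs(n @ gap) < tol), n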

  1. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....

  2. Time-dependent generalized Gibbs ensembles in open quantum systems

    Science.gov (United States)

    Lange, Florian; Lenarčič, Zala; Rosch, Achim

    2018-04-01

    Generalized Gibbs ensembles have been used as powerful tools to describe the steady state of integrable many-particle quantum systems after a sudden change of the Hamiltonian. Here, we demonstrate numerically that they can be used for a much broader class of problems. We consider integrable systems in the presence of weak perturbations which break both integrability and drive the system to a state far from equilibrium. Under these conditions, we show that the steady state and the time evolution on long timescales can be accurately described by a (truncated) generalized Gibbs ensemble with time-dependent Lagrange parameters, determined from simple rate equations. We compare the numerically exact time evolutions of density matrices for small systems with a theory based on block-diagonal density matrices (diagonal ensemble) and a time-dependent generalized Gibbs ensemble containing only a small number of approximately conserved quantities, using the one-dimensional Heisenberg model with perturbations described by Lindblad operators as an example.

  3. INHOMOGENEITY IN SPATIAL COX POINT PROCESSES – LOCATION DEPENDENT THINNING IS NOT THE ONLY OPTION

    Directory of Open Access Journals (Sweden)

    Michaela Prokešová

    2010-11-01

    Full Text Available In the literature on point processes, by far the most popular option for introducing inhomogeneity into a point process model is location-dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model and there are several fast estimation procedures available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a special way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature, but this is not so for Cox point processes, the canonical models for clustered point patterns. In this contribution we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate the possible parameter estimation procedures for such models.
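
    For concreteness, the baseline operation the abstract contrasts against, location-dependent thinning of a homogeneous clustered pattern, can be sketched as follows. This is a toy illustration under our own parameter choices; the retention probability f should map into [0, 1].

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def thin(points, f):
        """Independent location-dependent thinning: retain point u with probability f(u)."""
        keep = rng.random(len(points)) < np.apply_along_axis(f, 1, points)
        return points[keep]

    # homogeneous parent-offspring (cluster) pattern on the unit square
    parents = rng.random((30, 2))
    offspring = np.vstack([p + 0.03 * rng.standard_normal((5, 2)) for p in parents])

    # inhomogeneity via the intensity ramp f(x, y) = x
    inhom = thin(offspring, lambda u: u[0])
    print(len(offspring), "->", len(inhom))
    ```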

  4. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    Science.gov (United States)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. The occurrence of an earthquake at a given location is still difficult to forecast, so earthquake forecast methodology continues to be developed from both the seismological and the stochastic point of view. To describe such random phenomena, in both space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a sequence of times, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m) where x is the location of a point and m is the mark attached to the point at that location. This study aims to model a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function, conditional on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that the seismic activity in Sumatra Island is greater than in Java Island.
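
    A minimal sketch of this kind of model, a self-exciting marked temporal point process with an exponential kernel and magnitude-dependent productivity, is given below. The parameterization and the magnitude scaling are illustrative assumptions on our part, not the authors' specification; event times are assumed sorted in ascending order.

    ```python
    import numpy as np

    def intensity(t, times, mags, mu, alpha, beta, c=1.0, m0=5.0):
        """Conditional intensity lambda*(t): baseline mu plus exponentially fading
        contributions of past events, scaled by an (assumed) exponential magnitude
        productivity exp(c*(m - m0))."""
        past = times < t                        # events strictly before t
        boost = np.exp(c * (mags[past] - m0))
        return mu + np.sum(alpha * boost * beta * np.exp(-beta * (t - times[past])))

    def neg_log_likelihood(params, times, mags, T, c=1.0, m0=5.0):
        """-log L = -sum_i log lambda*(t_i) + integral_0^T lambda*(s) ds; the
        exponential kernel makes the compensator integral closed-form."""
        mu, alpha, beta = params
        log_term = sum(np.log(intensity(t, times, mags, mu, alpha, beta, c, m0))
                       for t in times)
        boost = np.exp(c * (mags - m0))
        compensator = mu * T + np.sum(alpha * boost * (1.0 - np.exp(-beta * (T - times))))
        return compensator - log_term
    ```

    Passing neg_log_likelihood to a numerical optimizer (e.g., scipy.optimize.minimize with positivity bounds on mu, alpha, beta) yields the maximum likelihood estimates referred to in the abstract.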

  5. Lasso and probabilistic inequalities for multivariate point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2015-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select...... for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our...... methodology with the adaptive Lasso procedure proposed by Zou in (J. Amer. Statist. Assoc. 101 (2006) 1418–1429). We observe an excellent behavior of our procedure. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike adaptive Lasso of (J. Amer. Statist. Assoc. 101 (2006...

  6. Modelling financial high frequency data using point processes

    DEFF Research Database (Denmark)

    Hautsch, Nikolaus; Bauwens, Luc

    In this chapter written for a forthcoming Handbook of Financial Time Series to be published by Springer-Verlag, we review the econometric literature on dynamic duration and intensity processes applied to high frequency financial data, which was boosted by the work of Engle and Russell (1997...

  7. Lasso and probabilistic inequalities for multivariate point processes

    OpenAIRE

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2012-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive $\\ell_{1}$-penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities...

  8. Chemical Disequilibria and Sources of Gibbs Free Energy Inside Enceladus

    Science.gov (United States)

    Zolotov, M. Y.

    2010-12-01

    Non-photosynthetic organisms use chemical disequilibria in the environment to gain metabolic energy from enzyme catalyzed oxidation-reduction (redox) reactions. The presence of carbon dioxide, ammonia, formaldehyde, methanol, methane and other hydrocarbons in the eruptive plume of Enceladus [1] implies diverse redox disequilibria in the interior. In the history of the moon, redox disequilibria could have been activated through melting of a volatile-rich ice and following water-rock-organic interactions. Previous and/or present aqueous processes are consistent with the detection of NaCl and Na2CO3/NaHCO3-bearing grains emitted from Enceladus [2]. A low K/Na ratio in the grains [2] and a low upper limit for N2 in the plume [3] indicate low temperature (possibly enzymes if organisms were (are) present. The redox conditions in aqueous systems and amounts of available Gibbs free energy should have been affected by the production, consumption and escape of hydrogen. Aqueous oxidation of minerals (Fe-Ni metal, Fe-Ni phosphides, etc.) accreted on Enceladus should have led to H2 production, which is consistent with H2 detection in the plume [1]. Numerical evaluations based on concentrations of plume gases [1] reveal sufficient energy sources available to support metabolically diverse life at a wide range of activities (a) of dissolved H2 (log aH2 from 0 to -10). Formaldehyde, carbon dioxide [c.f. 4], HCN (if it is present), methanol, acetylene and other hydrocarbons have the potential to react with H2 to form methane. Aqueous hydrogenations of acetylene, HCN and formaldehyde to produce methanol are energetically favorable as well. Both favorable hydrogenation and hydration of HCN lead to formation of ammonia. Condensed organic species could also participate in redox reactions. Methane and ammonia are the final products of these putative redox transformations. Sulfates may have not formed in cold and/or short-term aqueous environments with a limited H2 escape. In contrast to

  9. The S-Process Branching-Point at 205Pb

    Science.gov (United States)

    Tonchev, Anton; Tsoneva, N.; Bhatia, C.; Arnold, C. W.; Goriely, S.; Hammond, S. L.; Kelley, J. H.; Kwan, E.; Lenske, H.; Piekarewicz, J.; Raut, R.; Rusev, G.; Shizuma, T.; Tornow, W.

    2017-09-01

    Accurate neutron-capture cross sections for radioactive nuclei near the line of beta stability are crucial for understanding s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. We consider photon scattering using monoenergetic and 100% linearly polarized photon beams to obtain the photoabsorption cross section on 206Pb below the neutron separation energy. This observable becomes an essential ingredient in the Hauser-Feshbach statistical model for calculations of capture cross sections on 205Pb. The newly obtained photoabsorption information is also used to estimate the Maxwellian-averaged radiative cross section of 205Pb(n,g)206Pb at 30 keV. The astrophysical impact of this measurement on s-process nucleosynthesis will be discussed. This work was performed under the auspices of US DOE by LLNL under Contract DE-AC52-07NA27344.

  10. First-Year University Chemistry Textbooks' Misrepresentation of Gibbs Energy

    Science.gov (United States)

    Quilez, Juan

    2012-01-01

    This study analyzes the misrepresentation of Gibbs energy by college chemistry textbooks. The article reports the way first-year university chemistry textbooks handle the concepts of spontaneity and equilibrium. Problems with terminology are found; confusion arises in the meaning given to ΔG, ΔrG, ΔG°, and…
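
    For reference, the relations whose conflation the study documents can be stated compactly (standard thermodynamics, not material from the paper): Δ_rG is the slope of G with respect to the extent of reaction ξ and governs spontaneity at the current composition, whereas Δ_rG° refers to standard states only.

    ```latex
    \Delta_r G \;=\; \left(\frac{\partial G}{\partial \xi}\right)_{T,P}
               \;=\; \Delta_r G^{\circ} + RT\ln Q ,
    \qquad
    \Delta_r G^{\circ} \;=\; -RT\ln K .
    ```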

  11. Virial theorem and Gibbs thermodynamic potential for Coulomb systems

    International Nuclear Information System (INIS)

    Bobrov, V. B.; Trigger, S. A.

    2014-01-01

    Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction

  12. Virial theorem and Gibbs thermodynamic potential for Coulomb systems

    OpenAIRE

    Bobrov, V. B.; Trigger, S. A.

    2013-01-01

    Using the grand canonical ensemble and the virial theorem, we show that the Gibbs thermodynamic potential of the non-relativistic system of charged particles is uniquely defined by single-particle Green functions of electrons and nuclei. This result is valid beyond the perturbation theory with respect to the interparticle interaction.

  13. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  14. Exploring Fourier Series and Gibbs Phenomenon Using Mathematica

    Science.gov (United States)

    Ghosh, Jonaki B.

    2011-01-01

    This article describes a laboratory module on Fourier series and Gibbs phenomenon which was undertaken by 32 Year 12 students. It shows how the use of CAS played the role of an "amplifier" by making higher level mathematical concepts accessible to students of year 12. Using Mathematica students were able to visualise Fourier series of…

  15. Thermodynamic fluctuations within the Gibbs and Einstein approaches

    International Nuclear Information System (INIS)

    Rudoi, Yurii G; Sukhanov, Alexander D

    2000-01-01

    A comparative analysis of the descriptions of fluctuations in statistical mechanics (the Gibbs approach) and in statistical thermodynamics (the Einstein approach) is given. On this basis solutions are obtained for the Gibbs and Einstein problems that arise in pressure fluctuation calculations for a spatially limited equilibrium (or slightly nonequilibrium) macroscopic system. A modern formulation of the Gibbs approach which allows one to calculate equilibrium pressure fluctuations without making any additional assumptions is presented; to this end the generalized Bogolyubov - Zubarev and Hellmann - Feynman theorems are proved for the classical and quantum descriptions of a macrosystem. A statistical version of the Einstein approach is developed which shows a fundamental difference in pressure fluctuation results obtained within the context of two approaches. Both the 'genetic' relation between the Gibbs and Einstein approaches and the conceptual distinction between their physical grounds are demonstrated. To illustrate the results, which are valid for any thermodynamic system, an ideal nondegenerate gas of microparticles is considered, both classically and quantum mechanically. Based on the results obtained, the correspondence between the micro- and macroscopic descriptions is considered and the prospects of statistical thermodynamics are discussed. (reviews of topical problems)

  16. The Hinkley Point decision: An analysis of the policy process

    International Nuclear Information System (INIS)

    Thomas, Stephen

    2016-01-01

    In 2006, the British government launched a policy to build nuclear power reactors based on a claim that the power produced would be competitive with fossil fuel and would require no public subsidy. A decade later, it is not clear how many, if any, orders will be placed and the claims on costs and subsidies have proved false. Despite this failure to deliver, the policy is still being pursued with undiminished determination. The finance model that is now proposed is seen as a model other European countries can follow so the success or otherwise of the British nuclear programme will have implications outside the UK. This paper contends that the checks and balances that should weed out misguided policies, have failed. It argues that the most serious failure is with the civil service and its inability to provide politicians with high quality advice – truth to power. It concludes that the failure is likely to be due to the unwillingness of politicians to listen to opinions that conflict with their beliefs. Other weaknesses include the lack of energy expertise in the media, the unwillingness of the public to engage in the policy process and the impotence of Parliamentary Committees. - Highlights: •Britain's nuclear power policy is failing due to high costs and problems of finance. •This has implications for European countries who want to use the same financing model. •The continued pursuit of a failing policy is due to poor advice from civil servants. •Lack of expertise in the media and lack of public engagement have contributed. •Parliamentary processes have not provided proper critical scrutiny.

  17. ALTERNATIVE METHODOLOGIES FOR THE ESTIMATION OF LOCAL POINT DENSITY INDEX: MOVING TOWARDS ADAPTIVE LIDAR DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-07-01

    Full Text Available Over the past few years, LiDAR systems have been established as a leading technology for the acquisition of high density point clouds over physical surfaces. These point clouds will be processed for the extraction of geo-spatial information. Local point density is one of the most important properties of the point cloud that highly affects the performance of data processing techniques and the quality of the information extracted from these data. Therefore, it is necessary to define a standard methodology for the estimation of local point density indices to be considered for the precise processing of LiDAR data. Current definitions of local point density indices, which only consider the 2D neighbourhood of individual points, are not appropriate for 3D LiDAR data and cannot be applied to laser scans from different platforms. In order to resolve the drawbacks of these methods, this paper proposes several approaches for the estimation of the local point density index which take the 3D relationship among the points and the physical properties of the surfaces they belong to into account. In the simplest approach, an approximate value of the local point density for each point is defined while considering the 3D relationship among the points. In the other approaches, the local point density is estimated by considering the 3D neighbourhood of the point in question and the physical properties of the surface which encloses this point. The physical properties of the surfaces enclosing the LiDAR points are assessed through eigen-value analysis of the 3D neighbourhood of individual points and adaptive cylinder methods. This paper discusses these approaches and highlights their impact on various LiDAR data processing activities (i.e., neighbourhood definition, region growing, segmentation, boundary detection, and classification). Experimental results from airborne and terrestrial LiDAR data verify the efficacy of considering local point density variation for
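
    As a point of comparison for the approaches described, the simplest 3D-aware density index, k nearest neighbours divided by the volume of the enclosing sphere, can be computed in a few lines. This is only a spherical-neighbourhood baseline, not the paper's adaptive-cylinder or eigen-value-based variants:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def local_point_density(points, k=10):
        """3D local density index: k neighbours divided by the volume of the
        sphere reaching the k-th neighbour."""
        tree = cKDTree(points)
        dist, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
        r = dist[:, -1]
        return k / ((4.0 / 3.0) * np.pi * r**3)

    pts = np.random.default_rng(1).random((2000, 3))
    print(local_point_density(pts).mean())      # ~2000 for a unit cube, edge effects aside
    ```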

  18. The Gibbs Function, Spontaneity, and Walls

    Science.gov (United States)

    Tykodi, R. J.

    1996-05-01

    For the expansion-into-the-vacuum process involving a saturated vapor, previously analyzed by Schomaker and Waser and by myself, I assert that in general ΔG (composite) is undefined and that for the special case of bulbs with perfectly rigid walls ΔG (composite) is weakly positive. I show that the seemingly contradictory results of Schomaker and Waser are merely the consequences of their use of eccentric or anti-conventional terminology: they calculate the change in the Availability function for the process and call that change "ΔG (composite)".

  19. Development and evaluation of spatial point process models for epidermal nerve fibers.

    Science.gov (United States)

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
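
    A toy simulation of the second (cluster) model is sketched below: base points form a Poisson process, and each base receives a Poisson number of end points displaced isotropically. All parameter values are illustrative assumptions, not estimates from the biopsy data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_enf(lam_base=50.0, mean_fibers=3.0, spread=0.02, window=1.0):
        """Poisson base points on a square window; each base gets a Poisson number
        of end points with isotropic Gaussian displacements (toy parameters)."""
        n_base = rng.poisson(lam_base * window**2)
        bases = rng.random((n_base, 2)) * window
        ends, parent = [], []
        for j, b in enumerate(bases):
            for _ in range(rng.poisson(mean_fibers)):
                ends.append(b + spread * rng.standard_normal(2))
                parent.append(j)
        return bases, np.array(ends), np.array(parent)

    bases, ends, parent = simulate_enf()
    print(len(bases), "bases,", len(ends), "end points,",
          "mean fibers/base =", len(ends) / max(len(bases), 1))
    ```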

  20. Critical Control Points in the Processing of Cassava Tuber for Ighu ...

    African Journals Online (AJOL)

    Determination of the critical control points in the processing of cassava tuber into Ighu was carried out. The critical control points were determined according to the Codex guidelines for the application of the HACCP system by conducting hazard analysis. Hazard analysis involved proper examination of each processing step ...

  1. Distinguishing different types of inhomogeneity in Neyman-Scott point processes

    Czech Academy of Sciences Publication Activity Database

    Mrkvička, Tomáš

    2014-01-01

    Roč. 16, č. 2 (2014), s. 385-395 ISSN 1387-5841 Institutional support: RVO:60077344 Keywords : clustering * growing clusters * inhomogeneous cluster centers * inhomogeneous point process * location dependent scaling * Neyman-Scott point process Subject RIV: BA - General Mathematics Impact factor: 0.913, year: 2014

  2. The importance of topographically corrected null models for analyzing ecological point processes.

    Science.gov (United States)

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
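
    The correction the authors call for can be illustrated with a simple weighting argument: a pattern that is homogeneous on the terrain surface has planar intensity proportional to the local surface-area element sqrt(1 + z_x^2 + z_y^2). The sketch below is our own minimal construction over a gridded DEM, not the authors' code; it samples "topographically corrected" null points with exactly these weights:

    ```python
    import numpy as np

    def surface_area_weights(z, dx=1.0):
        """Per-cell weight proportional to the true surface area of each DEM cell,
        sqrt(1 + z_x^2 + z_y^2); sampling planar locations with these weights gives
        a pattern homogeneous on the surface rather than on the x-y plane."""
        zy, zx = np.gradient(z, dx)
        return np.sqrt(1.0 + zx**2 + zy**2)

    def sample_corrected_null(z, n, dx=1.0, seed=0):
        rng = np.random.default_rng(seed)
        w = surface_area_weights(z, dx).ravel()
        idx = rng.choice(w.size, size=n, p=w / w.sum())
        rows, cols = np.unravel_index(idx, z.shape)
        # jitter uniformly within each selected cell
        return np.column_stack([cols + rng.random(n), rows + rng.random(n)]) * dx

    z = np.add.outer(np.zeros(60), np.linspace(0.0, 30.0, 80))   # tilted-plane DEM
    print(sample_corrected_null(z, 1000).shape)
    ```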

  3. The use of computational thermodynamics for the determination of surface tension and Gibbs-Thomson coefficient of multicomponent alloys

    Science.gov (United States)

    Ferreira, D. J. S.; Bezerra, B. N.; Collyer, M. N.; Garcia, A.; Ferreira, I. L.

    2018-04-01

    The simulation of casting processes demands accurate information on the thermophysical properties of the alloy; however, such information is scarce in the literature for multicomponent alloys. Generally, metallic alloys applied in industry have more than three solute components. In the present study, a general solution of Butler's formulation for surface tension is presented for multicomponent alloys and is applied to quaternary Al-Cu-Si-Fe alloys, thus permitting the Gibbs-Thomson coefficient to be determined. Such a coefficient is a determining factor in the reliability of predictions furnished by microstructure growth models and by numerical computations of solidification thermal parameters, which depend on the thermophysical properties assumed in the calculations. The Gibbs-Thomson coefficient for ternary and quaternary alloys is seldom reported in the literature. A numerical model based on Powell's hybrid algorithm and a finite difference Jacobian approximation has been coupled to a Thermo-Calc TCAPI interface to assess the excess Gibbs energy of the liquid phase, permitting the liquidus temperature, latent heat, alloy density, surface tension and Gibbs-Thomson coefficient of Al-Cu-Si-Fe hypoeutectic alloys to be calculated, as an example of the calculation capabilities of the proposed method for multicomponent alloys. The computed results are compared with thermophysical properties of binary Al-Cu and ternary Al-Cu-Si alloys found in the literature and are presented as a function of the Cu solute composition.
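
    For reference, Butler's formulation referred to above is commonly written as the requirement that every component yields the same surface tension (notation as usually stated in the literature; the paper's multicomponent solution strategy is not reproduced here):

    ```latex
    \sigma \;=\; \sigma_i \;+\; \frac{RT}{A_i}\,
    \ln\!\left(\frac{x_i^{S}\,\gamma_i^{S}}{x_i^{B}\,\gamma_i^{B}}\right),
    \qquad i = 1,\dots,n ,
    ```

    where σ_i and A_i are the surface tension and molar surface area of pure component i, and x and γ are mole fractions and activity coefficients in the surface (S) and bulk (B) phases. Solving these n equations together with Σ_i x_i^S = 1 gives σ, with the activity coefficients supplied by the excess Gibbs energy of the liquid (the Thermo-Calc step above); the Gibbs-Thomson coefficient then follows from σ and the volumetric entropy of fusion, commonly as Γ = σ/ΔS_f.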

  4. Inferring the Gibbs state of a small quantum system

    International Nuclear Information System (INIS)

    Rau, Jochen

    2011-01-01

    Gibbs states are familiar from statistical mechanics, yet their use is not limited to that domain. For instance, they also feature in the maximum entropy reconstruction of quantum states from incomplete measurement data. Outside the macroscopic realm, however, estimating a Gibbs state is a nontrivial inference task, due to two complicating factors: the proper set of relevant observables might not be evident a priori; and whenever data are gathered from a small sample only, the best estimate for the Lagrange parameters is invariably affected by the experimenter's prior bias. I show how the two issues can be tackled with the help of Bayesian model selection and Bayesian interpolation, respectively, and illustrate the use of these Bayesian techniques with a number of simple examples.

  5. On the Tsallis Entropy for Gibbs Random Fields

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin

    2014-01-01

    Roč. 21, č. 33 (2014), s. 59-69 ISSN 1212-074X R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional research plan: CEZ:AV0Z1075907 Keywords : Tsallis entropy * Gibbs random fields * phase transitions * Tsallis entropy rate Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/SI/janzura-0441885.pdf

  6. Extensivity of entropy and modern form of Gibbs paradox

    International Nuclear Information System (INIS)

    Home, D.; Sengupta, S.

    1981-01-01

    The extensivity property of entropy is clarified in the light of a critical examination of the entropy formula based on quantum statistics and the relevant thermodynamic requirement. The modern form of the Gibbs paradox, related to the discontinuous jump in entropy due to identity or non-identity of particles, is critically investigated. Qualitative framework of a new resolution of this paradox, which analyses the general effect of distinction mark on the Hamiltonian of a system of identical particles, is outlined. (author)

  7. Gibbs' theorem for open systems with incomplete statistics

    International Nuclear Information System (INIS)

    Bagci, G.B.

    2009-01-01

    Gibbs' theorem, which was originally intended for canonical ensembles with complete statistics, has been generalized to open systems with incomplete statistics. As a result of this generalization, it is shown that the stationary equilibrium distribution of inverse power law form associated with the incomplete statistics has maximum entropy even for open systems with energy or matter influx. The renormalized entropy definition given in this paper can also serve as a measure of self-organization in open systems described by incomplete statistics.

  8. GibbsCluster: unsupervised clustering and alignment of peptide sequences

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten

    2017-01-01

    motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large......-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0....

  9. Consistent estimation of Gibbs energy using component contributions.

    Directory of Open Access Journals (Sweden)

    Elad Noor

    Full Text Available Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.
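
    Schematically, the decomposition described above can be written as follows (our notation, a sketch of the idea rather than the authors' exact formulation): the stoichiometric vector x is split by a projector P_R onto the subspace where reactant-contribution estimates are available, and the remainder is estimated through a group-incidence matrix G:

    ```latex
    \Delta_r G^{\circ}_{\mathrm{cc}}
      \;\approx\; x^{\top} P_{\mathcal{R}}\,\Delta_f G^{\circ}_{\mathrm{rc}}
      \;+\; x^{\top}\bigl(I - P_{\mathcal{R}}\bigr)\, G\,\Delta_{g} G^{\circ}_{\mathrm{gc}} .
    ```

    Because both parts derive from one consistent set of formation energies, estimates combined this way cannot contradict each other around a thermodynamic cycle, which is the consistency property claimed in the abstract.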

  10. Novel evaluation metrics for sparse spatio-temporal point process hotspot predictions - a crime case study

    OpenAIRE

    Adepeju, M.; Rosser, G.; Cheng, T.

    2016-01-01

    Many physical and sociological processes are represented as discrete events in time and space. These spatio-temporal point processes are often sparse, meaning that they cannot be aggregated and treated with conventional regression models. Models based on the point process framework may be employed instead for prediction purposes. Evaluating the predictive performance of these models poses a unique challenge, as the same sparseness prevents the use of popular measures such as the root mean squ...

  11. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have ship components' accuracy evaluated efficiently during most of the manufacturing steps. Evaluating a component's accuracy by comparing its point cloud data, scanned by laser scanners, with the ship's design data in CAD format cannot be processed efficiently when (1) the components extracted from the point cloud data include irregular obstacles, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of a component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm registers the two data sets after a proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data and registered against the designed CAD data using the proposed methods for accuracy evaluation. Results show that the proposed methods support accuracy-evaluation-oriented point cloud data processing efficiently in practice.
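
    The extraction step of the pipeline can be sketched in a few lines: starting from a seed index, expand through k-d-tree neighbours whose unit normals are nearly parallel, so the continuous part of one plate is collected. The thresholds are illustrative, and the curved-surface/B-spline edge handling and the PCA-guided ICP registration are omitted:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def region_grow(points, normals, seed, radius=0.05, angle_deg=10.0):
        """Grow a region from `seed`, accepting neighbours within `radius` whose
        normals deviate by less than `angle_deg` from the current point's normal."""
        tree = cKDTree(points)
        cos_tol = np.cos(np.radians(angle_deg))
        region, frontier = {seed}, [seed]
        while frontier:
            i = frontier.pop()
            for j in tree.query_ball_point(points[i], radius):
                if j not in region and abs(float(np.dot(normals[i], normals[j]))) >= cos_tol:
                    region.add(j)
                    frontier.append(j)
        return np.fromiter(region, dtype=int)
    ```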

  12. Edit distance for marked point processes revisited: An implementation by binary integer programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)

    2015-12-15

    We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, by using the proposed implementation, we can apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, the proposed implementation runs faster than the previous implementation when the difference between the numbers of events in two time windows for a marked point process is large.
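
    A minimal version of such a formulation (unmarked events for simplicity, SciPy's milp as the solver, our own variable names, not the authors' code) might look as follows: binary x[i, j] matches event i of one window to event j of the other, each event is matched at most once, and unmatched events pay a fixed insertion/deletion cost:

    ```python
    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    def edit_distance_bip(s, t, shift_cost=1.0, indel_cost=1.0):
        """Edit distance between two event-time sequences as a binary integer
        program: a match saves two indels and pays a time-shift cost."""
        n, m = len(s), len(t)
        c = (shift_cost * np.abs(np.subtract.outer(s, t)) - 2.0 * indel_cost).ravel()
        rows = []
        for i in range(n):                       # each s-event in at most one match
            r = np.zeros(n * m); r[i * m:(i + 1) * m] = 1; rows.append(r)
        for j in range(m):                       # each t-event in at most one match
            r = np.zeros(n * m); r[j::m] = 1; rows.append(r)
        res = milp(c=c, constraints=LinearConstraint(np.array(rows), -np.inf, 1),
                   integrality=np.ones(n * m), bounds=Bounds(0, 1))
        return indel_cost * (n + m) + res.fun    # baseline: delete/insert everything

    print(edit_distance_bip(np.array([0.1, 0.5, 0.9]), np.array([0.12, 0.55])))  # 1.07
    ```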

  13. Local thermodynamics and the generalized Gibbs-Duhem equation in systems with long-range interactions.

    Science.gov (United States)

    Latella, Ivan; Pérez-Madrid, Agustín

    2013-10-01

    The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
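
    For reference, the relation recovered in the marginal case mentioned at the end is the ordinary Gibbs-Duhem equation (standard thermodynamics, not the paper's generalized form, which adds the total potential energy as an extra variable alongside T, P and μ):

    ```latex
    S\,\mathrm{d}T \;-\; V\,\mathrm{d}P \;+\; N\,\mathrm{d}\mu \;=\; 0 .
    ```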

  14. The cylindrical K-function and Poisson line cluster point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Safavimanesh, Farzaneh; Rasmussen, Jakob G.

    Poisson line cluster point processes, is also introduced. Parameter estimation based on moment methods or Bayesian inference for this model is discussed when the underlying Poisson line process and the cluster memberships are treated as hidden processes. To illustrate the methodologies, we analyze two...

  15. Bridging the gap between a stationary point process and its Palm distribution

    NARCIS (Netherlands)

    Nieuwenhuis, G.

    1994-01-01

    In the context of stationary point processes measurements are usually made from a time point chosen at random or from an occurrence chosen at random. That is, either the stationary distribution P or its Palm distribution P° is the ruling probability measure. In this paper an approach is presented to

  16. Hierarchical spatial point process analysis for a plant community with high biodiversity

    DEFF Research Database (Denmark)

    Illian, Janine B.; Møller, Jesper; Waagepetersen, Rasmus

    2009-01-01

    A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maxim...

  17. Definition of distance for nonlinear time series analysis of marked point process data

    Energy Technology Data Exchange (ETDEWEB)

    Iwayama, Koji, E-mail: koji@sat.t.u-tokyo.ac.jp [Research Institute for Food and Agriculture, Ryukoku Univeristy, 1-5 Yokotani, Seta Oe-cho, Otsu-Shi, Shiga 520-2194 (Japan); Hirata, Yoshito; Aihara, Kazuyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)

    2017-01-30

    Marked point process data are time series of discrete events accompanied with some values, such as economic trades, earthquakes, and lightning strikes. A distance for marked point process data allows us to apply nonlinear time series analysis to such data. We propose a distance for marked point process data which can be calculated much faster than the existing distance when the number of marks is small. Furthermore, under some assumptions, the Kullback–Leibler divergences between posterior distributions for neighbors defined by this distance are small. We performed some numerical simulations showing that analysis based on the proposed distance is effective. - Highlights: • A new distance for marked point process data is proposed. • The distance can be computed fast enough for a small number of marks. • The method to optimize parameter values of the distance is also proposed. • Numerical simulations indicate that the analysis based on the distance is effective.

  18. Process and results of analytical framework and typology development for POINT

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Lehtonen, Markku; Bauler, Tom

    2009-01-01

    POINT is a project about how indicators are used in practice; to what extent and in what way indicators actually influence, support, or hinder policy and decision making processes, and what could be done to enhance the positive role of indicators in such processes. The project needs an analytical......, a set of core concepts and associated typologies, a series of analytic schemes proposed, and a number of research propositions and questions for the subsequent empirical work in POINT....

  19. Near-Optimal Detection in MIMO Systems using Gibbs Sampling

    DEFF Research Database (Denmark)

    Hansen, Morten; Hassibi, Babak; Dimakis, Georgios Alexandros

    2009-01-01

    In this paper we study a Markov Chain Monte Carlo (MCMC) Gibbs sampler for solving the integer least-squares problem. In digital communication the problem is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. While the use of MCMC methods...... sampler provides a computationally efficient way of achieving approximative ML detection in MIMO systems having a huge number of transmit and receive dimensions. In fact, they further suggest that the Markov chain is rapidly mixing. Thus, it has been observed that even in cases where ML detection using, e
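
    A bare-bones version of such a sampler (our own toy construction for BPSK symbols, not the paper's algorithm or its mixing-time analysis) sweeps the coordinates of the symbol vector, resampling each from the Boltzmann-type conditional at a fixed inverse temperature:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_detect(y, H, symbols=(-1.0, 1.0), beta=3.0, sweeps=300):
        """Approximate ML detection, min_x ||y - Hx||^2 over a finite alphabet,
        via coordinate-wise Gibbs sampling; track the best vector visited."""
        n = H.shape[1]
        x = rng.choice(symbols, size=n)
        best, best_cost = x.copy(), float(np.sum((y - H @ x) ** 2))
        for _ in range(sweeps):
            for i in range(n):
                costs = []
                for s in symbols:           # residual cost for each candidate symbol
                    x[i] = s
                    costs.append(np.sum((y - H @ x) ** 2))
                costs = np.array(costs)
                p = np.exp(-beta * (costs - costs.min()))
                x[i] = symbols[rng.choice(len(symbols), p=p / p.sum())]
                cost = float(np.sum((y - H @ x) ** 2))
                if cost < best_cost:
                    best, best_cost = x.copy(), cost
        return best

    H = rng.standard_normal((8, 8))
    x_true = rng.choice([-1.0, 1.0], size=8)
    y = H @ x_true + 0.1 * rng.standard_normal(8)
    print("symbol agreement:", np.mean(gibbs_detect(y, H) == x_true))
    ```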

  20. Excess Gibbs Energy for Ternary Lattice Solutions of Nonrandom Mixing

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hae Young [DukSung Womens University, Seoul (Korea, Republic of)

    2008-12-15

    It is assumed for a three-component lattice solution that the number of ways of arranging particles randomly on the lattice follows a normal distribution of a linear combination of N12, N23 and N13, the numbers of nearest-neighbor interactions between unlike molecules. It is shown by random number simulations that this assumption is reasonable. From this distribution, an approximate equation for the excess Gibbs energy of a three-component lattice solution is derived. Using this equation, several liquid-vapor equilibria are calculated and compared with the results from other equations.
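
    The random-number check mentioned in the abstract is easy to reproduce in spirit: place three species at random on a periodic lattice, count the unlike nearest-neighbour pairs, and inspect the distribution of the counts. The sketch below is our own toy version (2D lattice, illustrative sizes):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def unlike_pair_counts(L=20, comp=(1/3, 1/3, 1/3), trials=2000):
        """Counts of unlike nearest-neighbour pairs (N12, N13, N23) over many
        random arrangements of three species on a periodic L x L lattice."""
        labels = rng.choice(3, size=(trials, L * L), p=comp)
        grid = labels.reshape(trials, L, L)
        counts = np.zeros((trials, 3))
        for axis in (1, 2):                          # horizontal and vertical edges
            a, b = grid, np.roll(grid, 1, axis=axis)
            for k, (s, t) in enumerate([(0, 1), (0, 2), (1, 2)]):
                counts[:, k] += np.sum((a == s) & (b == t) | (a == t) & (b == s),
                                       axis=(1, 2))
        return counts

    c = unlike_pair_counts()
    print("means:", c.mean(axis=0), "std devs:", c.std(axis=0))
    ```

    Histogramming any column of c (or a linear combination of the three) should look approximately Gaussian, which is the assumption the abstract tests.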

  1. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogenous, and so is modelled via flexible nonparametric Bayesian mixture models. Analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context here of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel region level and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.

  2. GPU-accelerated Gibbs ensemble Monte Carlo simulations of Lennard-Jonesium

    Science.gov (United States)

    Mick, Jason; Hailat, Eyad; Russo, Vincent; Rushaidat, Kamel; Schwiebert, Loren; Potoff, Jeffrey

    2013-12-01

    This work describes an implementation of canonical and Gibbs ensemble Monte Carlo simulations on graphics processing units (GPUs). The pair-wise energy calculations, which consume the majority of the computational effort, are parallelized using the energetic decomposition algorithm. While energetic decomposition is relatively inefficient for traditional CPU-bound codes, the algorithm is ideally suited to the architecture of the GPU. The performance of the CPU and GPU codes are assessed for a variety of CPU and GPU combinations for systems containing between 512 and 131,072 particles. For a system of 131,072 particles, the GPU-enabled canonical and Gibbs ensemble codes were 10.3 and 29.1 times faster (GTX 480 GPU vs. i5-2500K CPU), respectively, than an optimized serial CPU-bound code. Due to overhead from memory transfers from system RAM to the GPU, the CPU code was slightly faster than the GPU code for simulations containing less than 600 particles. The critical temperature Tc∗=1.312(2) and density ρc∗=0.316(3) were determined for the tail corrected Lennard-Jones potential from simulations of 10,000 particle systems, and found to be in exact agreement with prior mixed field finite-size scaling calculations [J.J. Potoff, A.Z. Panagiotopoulos, J. Chem. Phys. 109 (1998) 10914].

  3. Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv

    Directory of Open Access Journals (Sweden)

    Simon N. Wood

    2016-12-01

    Full Text Available The BUGS language offers a very flexible way of specifying complex statistical models for the purposes of Gibbs sampling, while its JAGS variant offers very convenient R integration via the rjags package. However, including smoothers in JAGS models can involve some quite tedious coding, especially for multivariate or adaptive smoothers. Further, if an additive smooth structure is required then some care is needed, in order to centre smooths appropriately, and to find appropriate starting values. R package mgcv implements a wide range of smoothers, all in a manner appropriate for inclusion in JAGS code, and automates centring and other smooth setup tasks. The purpose of this note is to describe an interface between mgcv and JAGS, based around an R function, jagam, which takes a generalized additive model (GAM as specified in mgcv and automatically generates the JAGS model code and data required for inference about the model via Gibbs sampling. Although the auto-generated JAGS code can be run as is, the expectation is that the user would wish to modify it in order to add complex stochastic model components readily specified in JAGS. A simple interface is also provided for visualisation and further inference about the estimated smooth components using standard mgcv functionality. The methods described here will be unnecessarily inefficient if all that is required is fully Bayesian inference about a standard GAM, rather than the full flexibility of JAGS. In that case the BayesX package would be more efficient.

  4. Reflections on Gibbs: From Statistical Physics to the Amistad V3.0

    Science.gov (United States)

    Kadanoff, Leo P.

    2014-07-01

    This note is based upon a talk given at an APS meeting in celebration of the achievements of J. Willard Gibbs. J. Willard Gibbs, the younger, was the first American physical sciences theorist. He was one of the inventors of statistical physics. He introduced and developed the concepts of phase space, phase transitions, and thermodynamic surfaces in a remarkably correct and elegant manner. These three concepts form the basis of different areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. This talk therefore celebrated Gibbs by describing modern ideas about how different parts of physics fit together. I finished with a more personal note. Our own J. Willard Gibbs had all his many achievements concentrated in science. His father, also J. Willard Gibbs, also a Professor at Yale, had one great non-academic achievement that remains unmatched in our day. I describe it.

  5. Second-order analysis of structured inhomogeneous spatio-temporal point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    Statistical methodology for spatio-temporal point processes is in its infancy. We consider second-order analysis based on pair correlation functions and K-functions for, first, general inhomogeneous spatio-temporal point processes and, second, inhomogeneous spatio-temporal Cox processes. Assuming spatio-temporal separability of the intensity function, we clarify different meanings of second-order spatio-temporal separability. One is second-order spatio-temporal independence and relates e.g. to log-Gaussian Cox processes with an additive covariance structure of the underlying spatio-temporal Gaussian process. Another concerns shot-noise Cox processes with a separable spatio-temporal covariance density. We propose diagnostic procedures for checking hypotheses of second-order spatio-temporal separability, which we apply on simulated and real data (the UK 2001 epidemic foot and mouth disease data).

  6. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    Science.gov (United States)

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. Published by Elsevier Ltd.
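
    As a minimal illustration of the discrete-time PP-GLM framework reviewed here (toy dimensions and an exponential link; not the authors' estimation code), the conditional intensity and the point-process log likelihood can be written as:

    ```python
    import numpy as np

    def pp_glm_intensity(spikes, weights, bias):
        """Discrete-time multivariate nonlinear Hawkes / PP-GLM intensity:
        lambda_k[t] = exp(bias_k + sum_{j,l} w[k,j,l] * spikes[j, t-1-l]).
        spikes: (n_neurons, n_bins) 0/1 array; weights: (n, n, n_lags); bias: (n,)."""
        n, T = spikes.shape
        lags = weights.shape[2]
        lam = np.zeros((n, T))
        for t in range(T):
            h = spikes[:, max(0, t - lags):t][:, ::-1]   # most recent bin first
            drive = np.einsum('kjl,jl->k', weights[:, :, :h.shape[1]], h)
            lam[:, t] = np.exp(bias + drive)
        return lam

    def log_likelihood(spikes, lam, dt=0.001):
        # discrete-time point-process log likelihood (Poisson approximation per bin)
        return np.sum(spikes * np.log(lam * dt + 1e-12)) - np.sum(lam * dt)

    rng = np.random.default_rng(0)
    spikes = (rng.random((3, 1000)) < 0.02).astype(float)    # 1 ms bins, ~20 Hz
    weights = 0.1 * rng.standard_normal((3, 3, 10))
    lam = pp_glm_intensity(spikes, weights, bias=np.log(20.0) * np.ones(3))
    print(log_likelihood(spikes, lam))
    ```

    Maximizing this log likelihood in (weights, bias), e.g. with a generic gradient-based optimizer, is the estimation step the PP-GLM framework refers to.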

  7. Determination of the impact of RGB points cloud attribute quality on color-based segmentation process

    Directory of Open Access Journals (Sweden)

    Bartłomiej Kraszewski

    2015-06-01

    Full Text Available The article presents the results of research on the effect that the radiometric quality of point cloud RGB attributes has on color-based segmentation. In the research, a point cloud with a resolution of 5 mm, acquired with a FARO Photon 120 scanner, described a fragment of an office room, and color images were taken by various digital cameras. The images were acquired by an SLR Nikon D3X and an SLR Canon D200 integrated with the laser scanner, a Panasonic TZ-30 compact camera and a mobile phone digital camera. Color information from the images was spatially related to the point cloud in FARO Scene software. The color-based segmentation of the test data was performed with a developed application named "RGB Segmentation". The application was based on the public Point Cloud Library (PCL) and allowed subsets of points fulfilling the segmentation criteria to be extracted from the source point cloud using the region growing method. Using the developed application, the segmentation of four test point clouds containing different RGB attributes from the various images was performed. The segmentation process was evaluated by comparing segments acquired using the developed application with segments extracted manually by an operator. The following items were compared: the number of obtained segments, the number of correctly identified objects and the correctness of the segmentation process. The best segmentation correctness and the most identified objects were obtained using the data with RGB attributes from the Nikon D3X images. Based on the results it was found that the quality of the RGB attributes of the point cloud had an impact only on the number of identified objects. For the correctness of the segmentation, as well as its error, no apparent relationship with the quality of the color information was found. Keywords: terrestrial laser scanning, color-based segmentation, RGB attribute, region growing method, digital images, point clouds

  8. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    International Nuclear Information System (INIS)

    Holmberg, J.

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant

  9. Probabilistic safety assessment and optimal control of hazardous technological systems. A marked point process approach

    Energy Technology Data Exchange (ETDEWEB)

    Holmberg, J [VTT Automation, Espoo (Finland)

    1997-04-01

    The thesis models risk management as an optimal control problem for a stochastic process. The approach classes the decisions made by management into three categories according to the control methods of a point process: (1) planned process lifetime, (2) modification of the design, and (3) operational decisions. The approach is used for optimization of plant shutdown criteria and surveillance test strategies of a hypothetical nuclear power plant. 62 refs. The thesis also includes five previous publications by the author.

  10. Apparatus and method for implementing power saving techniques when processing floating point values

    Science.gov (United States)

    Kim, Young Moon; Park, Sang Phill

    2017-10-03

    An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.

  11. Effect of processing conditions on oil point pressure of moringa oleifera seed.

    Science.gov (United States)

    Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A

    2015-07-01

    Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques of carrying out the operation are not only energy-sapping and time-consuming but also wasteful. In order to reduce the tedium involved in the expression of oil from Moringa oleifera seed and to develop efficient equipment for carrying out the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00 % wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increase in seed moisture content, but decreased with increase in heating temperature and heating time within the above ranges. The highest oil point pressure, 1.1239 MPa, was obtained at 10.00 % moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure, 0.3164 MPa, occurred at 4.78 % moisture content, 100 °C heating temperature and 30 min heating time. Analysis of Variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of Moringa oleifera seed at the 1 % level of significance. This was further demonstrated using Response Surface Methodology (RSM). Tukey's test and Duncan's Multiple Range Analysis successfully separated the means, and a multiple regression equation was used to express the relationship between the oil point pressure of Moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.
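
    The regression step can be sketched as an ordinary least-squares fit of a response surface in the three factors. In the snippet below only the two pressure values quoted above are taken from the study; the remaining design points and responses are invented purely for illustration:

    ```python
    import numpy as np

    def fit_response_surface(X, y):
        """Least-squares fit of p = b0 + main effects + two-way interactions in
        (moisture %, temperature C, heating time min)."""
        mc, temp, tme = X.T
        D = np.column_stack([np.ones(len(y)), mc, temp, tme,
                             mc * temp, mc * tme, temp * tme])
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        return beta

    # first two rows/responses from the abstract; the rest are hypothetical
    X = np.array([[10.00, 50, 15], [4.78, 100, 30], [8.00, 85, 25], [6.00, 70, 20],
                  [10.00, 100, 30], [4.78, 50, 15], [8.00, 50, 30], [6.00, 100, 15]])
    y = np.array([1.1239, 0.3164, 0.55, 0.70, 0.45, 0.95, 0.60, 0.50])
    print(fit_response_surface(X, y).round(4))
    ```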

  12. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    Science.gov (United States)

    Barrios, M. I.

    2013-12-01

    The hydrological sciences require the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Understanding scaling is therefore a key issue in advancing this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration, from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues
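
    For reference, a minimal sketch of the point-scale model named above: Green-Ampt cumulative infiltration solved by fixed-point iteration. The soil parameters are illustrative, not values from the study.

```python
# Solve F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)) for cumulative
# infiltration F (m) under ponded conditions, then the infiltration rate f.
import math

K = 1.0e-6        # saturated hydraulic conductivity (m/s), illustrative
psi = 0.16        # wetting-front suction head (m), illustrative
dtheta = 0.25     # soil moisture deficit (-), illustrative

def green_ampt_F(t, tol=1e-12, max_iter=200):
    pd = psi * dtheta
    F = max(K * t, 1e-9)                  # starting guess
    for _ in range(max_iter):
        F_new = K * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

t = 3600.0                                # one hour of ponded infiltration
F = green_ampt_F(t)
f = K * (1.0 + psi * dtheta / F)          # instantaneous rate (m/s)
print(f"F = {F:.4e} m, f = {f:.4e} m/s")
```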

  14. Putting to point the production process of iodine-131 by dry distillation (Preoperational tests)

    International Nuclear Information System (INIS)

    Alanis M, J.

    2002-12-01

    With the purpose of fine-tuning the iodine-131 production process, one objective of the operational tests was to verify the operation of each of the following components: the heating systems, the vacuum system, the mechanical system and the peripheral equipment that form part of the production process. Another objective was to establish the optimal parameters to be applied in each step during the production of iodine-131. It is necessary to point out that this objective is very important, since the components of the equipment are new and their behavior during the process differs from that of the equipment on which the experimental studies were carried out. (Author)

  15. Hazard analysis and critical control point (HACCP) for an ultrasound food processing operation.

    Science.gov (United States)

    Chemat, Farid; Hoarau, Nicolas

    2004-05-01

    Emerging technologies, such as ultrasound (US), used for food and drink production can introduce product-safety hazards. Classical quality control methods are inadequate to control these hazards. Hazard analysis and critical control points (HACCP) is the most secure and cost-effective method for controlling possible product contamination or cross-contamination due to physical or chemical hazards during production. The following case study on the application of HACCP to an US food-processing operation demonstrates how the hazards at the critical control points of the process are effectively controlled through the implementation of HACCP.

  16. The application of prototype point processes for the summary and description of California wildfires

    Science.gov (United States)

    Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.

    2011-01-01

    A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season.

  17. Mass measurement on the rp-process waiting point {sup 72}Kr

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, D. [Gesellschaft fuer Schwerionenforschung mbH, Darmstadt (Germany); Kolhinen, V.S. [Jyvaeskylae Univ. (Finland); Audi, G. [CSNSM-IN2P3-Centre National de la Recherche Scientifique (CNRS), 91 - Orsay (FR)] [and others]

    2004-06-01

    The mass of {sup 72}Kr, one of the three major waiting points in the astrophysical rp-process, was measured for the first time with the Penning trap mass spectrometer ISOLTRAP. The measurement yielded a relative mass uncertainty of {delta}m/m=1.2 x 10{sup -7} ({delta}m=8 keV). Other Kr isotopes, also needed for astrophysical calculations, were measured with more than one order of magnitude improved accuracy. We use the ISOLTRAP masses of {sup 72-74}Kr to reanalyze the role of the {sup 72}Kr waiting point in the rp-process during X-ray bursts. (orig.)

  18. Gibbs free energy of formation of UPb(s) compound

    International Nuclear Information System (INIS)

    Samui, Pradeep; Agarwal, Renu; Mishra, Ratikanta

    2012-01-01

    Liquid lead and lead-bismuth eutectic (LBE) are being explored as primary candidate coolants for accelerator driven systems and advanced nuclear reactors due to their favorable thermo-physical and chemical properties. They are also proposed for use as a spallation neutron source in ADS reactor systems. However, corrosion of structural materials (i.e. steel) presents a critical challenge for the use of liquid lead or LBE in advanced nuclear reactors. The interaction of liquid lead or LBE with clad and fuel is of great scientific and technological importance in the development of advanced nuclear reactors. Clad failure/breach can lead to reaction of coolant elements with fuel components. Thus the study of the fuel-coolant interaction of U with Pb/Bi is important. The paper deals with the determination of the Gibbs free energy of formation of the U-rich phase UPb in the Pb-U system, employing the Knudsen effusion mass-loss technique.

  19. Work and entropy production in generalised Gibbs ensembles

    International Nuclear Information System (INIS)

    Perarnau-Llobet, Martí; Riera, Arnau; Gallego, Rodrigo; Wilming, Henrik; Eisert, Jens

    2016-01-01

    Recent years have seen an enormously revived interest in the study of thermodynamic notions in the quantum regime. This applies both to the study of notions of work extraction in thermal machines in the quantum regime, as well as to questions of equilibration and thermalisation of interacting quantum many-body systems as such. In this work we bring together these two lines of research by studying work extraction in a closed system that undergoes a sequence of quenches and equilibration steps concomitant with free evolutions. In this way, we incorporate an important insight from the study of the dynamics of quantum many body systems: the evolution of closed systems is expected to be well described, for relevant observables and most times, by a suitable equilibrium state. We will consider three kinds of equilibration, namely to (i) the time averaged state, (ii) the Gibbs ensemble and (iii) the generalised Gibbs ensemble, reflecting further constants of motion in integrable models. For each effective description, we investigate notions of entropy production, the validity of the minimal work principle and properties of optimal work extraction protocols. While we keep the discussion general, much room is dedicated to the discussion of paradigmatic non-interacting fermionic quantum many-body systems, for which we identify significant differences with respect to the role of the minimal work principle. Our work not only has implications for experiments with cold atoms, but also can be viewed as suggesting a mindset for quantum thermodynamics where the role of the external heat baths is instead played by the system itself, with its internal degrees of freedom bringing coarse-grained observables to equilibrium. (paper)

  20. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    Science.gov (United States)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.

  1. End point detection in ion milling processes by sputter-induced optical emission spectroscopy

    International Nuclear Information System (INIS)

    Lu, C.; Dorian, M.; Tabei, M.; Elsea, A.

    1984-01-01

    The characteristic optical emission from the sputtered material during ion milling processes can provide an unambiguous indication of the presence of the specific etched species. By monitoring the intensity of a representative emission line, the etching process can be precisely terminated at an interface. Enhancement of the etching end point is possible by using a dual-channel photodetection system operating in a ratio or difference mode. The installation of the optical detection system to an existing etching chamber has been greatly facilitated by the use of optical fibers. Using a commercial ion milling system, experimental data for a number of etching processes have been obtained. The result demonstrates that sputter-induced optical emission spectroscopy offers many advantages over other techniques in detecting the etching end point of ion milling processes

  2. Digital analyzer for point processes based on first-in-first-out memories

    Science.gov (United States)

    Basano, Lorenzo; Ottonello, Pasquale; Schiavi, Enore

    1992-06-01

    We present an entirely new version of a multipurpose instrument designed for the statistical analysis of point processes, especially those characterized by high bunching. A long sequence of pulses can be recorded in the RAM bank of a personal computer via a suitably designed front end which employs a pair of first-in-first-out (FIFO) memories; these allow one to build an analyzer that, besides being simpler from the electronic point of view, is capable of sustaining much higher intensity fluctuations of the point process. The overflow risk of the device is evaluated by treating the FIFO pair as a queueing system. The apparatus was tested using both a deterministic signal and a sequence of photoelectrons obtained from laser light scattered by random surfaces.

  3. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
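
    For illustration, the sketch below implements the simple reduced-sample (minus-sampling) border correction for the nearest-neighbour distance distribution D of a pattern in the unit square; the Hanisch estimator recommended in the paper is a refined Horvitz-Thompson-weighted variant of the same idea. The pattern is simulated.

```python
# Reduced-sample estimate of D(r): at range r, use only points farther than
# r from the window boundary, whose nearest neighbour is fully observed.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((200, 2))                # binomial (Poisson-like) pattern

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nnd = d.min(axis=1)                       # nearest-neighbour distances
b = np.minimum(pts, 1 - pts).min(axis=1)  # distance to boundary

def D_minus(r):
    ok = b > r
    return float(np.mean(nnd[ok] <= r)) if ok.any() else float("nan")

for r in (0.01, 0.02, 0.05):
    print(r, round(D_minus(r), 3))
```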

  4. Analysis of the stochastic channel model by Saleh & Valenzuela via the theory of point processes

    DEFF Research Database (Denmark)

    Jakobsen, Morten Lomholt; Pedersen, Troels; Fleury, Bernard Henri

    2012-01-01

    and underlying features, like the intensity function of the component delays and the delay-power intensity. The flexibility and clarity of the mathematical instruments utilized to obtain these results lead us to conjecture that the theory of spatial point processes provides a unifying mathematical framework

  5. Kaplan-Meier estimators of distance distributions for spatial point processes

    NARCIS (Netherlands)

    Baddeley, A.J.; Gill, R.D.

    1997-01-01

    When a spatial point process is observed through a bounded window, edge effects hamper the estimation of characteristics such as the empty space function $F$, the nearest neighbour distance distribution $G$, and the reduced second order moment function $K$. Here we propose and study product-limit

  6. Two step estimation for Neyman-Scott point process with inhomogeneous cluster centers

    Czech Academy of Sciences Publication Activity Database

    Mrkvička, T.; Muška, Milan; Kubečka, Jan

    2014-01-01

    Roč. 24, č. 1 (2014), s. 91-100 ISSN 0960-3174 R&D Projects: GA ČR(CZ) GA206/07/1392 Institutional support: RVO:60077344 Keywords : bayesian method * clustering * inhomogeneous point process Subject RIV: EH - Ecology, Behaviour Impact factor: 1.623, year: 2014

  7. Dense range images from sparse point clouds using multi-scale processing

    NARCIS (Netherlands)

    Do, Q.L.; Ma, L.; With, de P.H.N.

    2013-01-01

    Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation etc. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such

  8. A Systematic Approach to Process Evaluation in the Central Oklahoma Turning Point (COTP) Partnership

    Science.gov (United States)

    Tolma, Eleni L.; Cheney, Marshall K.; Chrislip, David D.; Blankenship, Derek; Troup, Pam; Hann, Neil

    2011-01-01

    Formation is an important stage of partnership development. Purpose: To describe the systematic approach to process evaluation of a Turning Point initiative in central Oklahoma during the formation stage. The nine-month collaborative effort aimed to develop an action plan to promote health. Methods: A sound planning framework was used in the…

  9. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants

    Science.gov (United States)

    Vargas, Francisco M.

    2014-01-01

    The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
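
    A short worked example of the underlying calculation, assuming a temperature-independent reaction enthalpy: integrating the Gibbs-Helmholtz (van 't Hoff) relation to move an equilibrium constant from one temperature to another. All numbers are illustrative.

```python
# d(ln K)/d(1/T) = -dH/R, so ln K2 = ln K1 - (dH/R) * (1/T2 - 1/T1)
# when dH is taken as constant over [T1, T2].
import math

R = 8.314                # J/(mol K)
K1, T1 = 1.8e-5, 298.15  # equilibrium constant at T1, illustrative
dH = 50_000.0            # J/mol, assumed temperature-independent
T2 = 323.15

lnK2 = math.log(K1) - dH / R * (1.0 / T2 - 1.0 / T1)
print("K(T2) ~", f"{math.exp(lnK2):.3e}")
```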

  10. Continuous spin mean-field models : Limiting kernels and Gibbs properties of local transforms

    NARCIS (Netherlands)

    Kulske, Christof; Opoku, Alex A.

    2008-01-01

    We extend the notion of Gibbsianness for mean-field systems to the setup of general (possibly continuous) local state spaces. We investigate the Gibbs properties of systems arising from an initial mean-field Gibbs measure by application of given local transition kernels. This generalizes previous

  11. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks we find that the mixing ability decreases drastically with the network size, clearly indicating a need

  12. One of Gibbs's ideas that has gone unnoticed (comment on chapter IX of his classic book)

    International Nuclear Information System (INIS)

    Sukhanov, Alexander D; Rudoi, Yurii G

    2006-01-01

    We show that contrary to the commonly accepted view, Chapter IX of Gibbs's book [1] contains the prolegomena to a macroscopic statistical theory that is qualitatively different from his own microscopic statistical mechanics. The formulas obtained by Gibbs were the first results in the history of physics related to the theory of fluctuations in any macroparameters, including temperature. (from the history of physics)

  13. Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases

    CERN Document Server

    Askerov, Bahram M

    2010-01-01

    This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.

  14. Reduction efficiency prediction of CENIBRA's recovery boiler by direct minimization of gibbs free energy

    Directory of Open Access Journals (Sweden)

    W. L. Silva

    2008-09-01

    The reduction efficiency is an important variable during the black liquor burning process in the Kraft recovery boiler. Its value is obtained by slow experimental routines, and the delay in this measurement disturbs the customary control of the pulp and paper industry. This paper describes an optimization approach for determining the reduction efficiency in the furnace bottom of the recovery boiler, based on the minimization of the Gibbs free energy. The industrial data used in this study were obtained directly from CENIBRA's data acquisition system. The resulting approach is able to predict the steady-state behavior of the chemical composition of the recovery boiler furnace, especially the reduction efficiency, when different operational conditions are used. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.
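
    To illustrate the underlying computation on a toy system (not the recovery-boiler chemistry), the sketch below finds an equilibrium composition by direct minimization of the total Gibbs energy subject to element balances. The species set and standard chemical potentials are placeholders.

```python
# Minimize G(n) = sum_i n_i (mu0_i + RT ln(n_i / n_tot)) s.t. element balance.
import numpy as np
from scipy.optimize import minimize

RT = 8.314 * 1100.0 / 1000.0                   # kJ/mol at 1100 K
mu0 = np.array([-200.0, -396.0, -190.0, 0.0])  # CO, CO2, H2O, H2; illustrative
A = np.array([[1, 1, 0, 0],                    # C balance
              [1, 2, 1, 0],                    # O balance
              [0, 0, 2, 2]])                   # H balance
b = A @ np.array([1.0, 0.0, 1.0, 0.0])         # feed: 1 CO + 1 H2O

def gibbs(n):
    n = np.maximum(n, 1e-12)
    return float(np.sum(n * (mu0 + RT * np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.full(4, 0.5), method="SLSQP",
               bounds=[(1e-12, None)] * 4,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print("equilibrium moles (CO, CO2, H2O, H2):", res.x.round(3))
```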

  15. A course on large deviations with an introduction to Gibbs measures

    CERN Document Server

    Rassoul-Agha, Firas

    2015-01-01

    This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one semester or quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interactions potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities and then a second time through the ...

  16. A Combined Control Chart for Identifying Out–Of–Control Points in Multivariate Processes

    Directory of Open Access Journals (Sweden)

    Marroquín–Prado E.

    2010-10-01

    The Hotelling's T2 control chart is widely used to identify out-of-control signals in multivariate processes. However, this chart is not sensitive to small shifts in the process mean vector. In this work we propose a control chart to identify out-of-control signals. The proposed chart is a combination of Hotelling's T2 chart, the M chart proposed by Hayter et al. (1994) and a new chart based on Principal Components. The combination of these charts identifies any type and size of change in the process mean vector. Using simulation and the Average Run Length (ARL), the performance of the proposed control chart is evaluated. The ARL is the average number of in-control points observed before an out-of-control point is detected. The results of the simulation show that the proposed chart is more sensitive than each of the three charts individually.
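
    A minimal sketch of the Hotelling T² statistic that the combined chart builds on, with an approximate chi-square control limit; the data are simulated, and the M and principal-component charts of the proposal are not reproduced here.

```python
# T^2_i = (x_i - mu)' S^-1 (x_i - mu) against a chi-square-based limit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
phase1 = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
mu = phase1.mean(axis=0)
S_inv = np.linalg.inv(np.cov(phase1, rowvar=False))

new = rng.multivariate_normal([0.5, 0, 0], np.eye(3), size=50)  # small shift
d = new - mu
T2 = np.einsum("ij,jk,ik->i", d, S_inv, d)

ucl = stats.chi2.ppf(0.99, df=3)        # approximate upper control limit
print("flagged points:", np.flatnonzero(T2 > ucl))
```

    Note how a small mean shift of 0.5 in one coordinate triggers few alarms; this is exactly the insensitivity the combined chart is designed to remedy.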

  17. Steam generators secondary side chemical cleaning at Point Lepreau using the Siemen's high temperature process

    International Nuclear Information System (INIS)

    Verma, K.; MacNeil, C.; Odar, S.

    1996-01-01

    The secondary sides of all four steam generators at the Point Lepreau Nuclear Generating Station were cleaned during the 1995 annual outage run-down using the Siemens high temperature chemical cleaning process. Traditionally, all secondary side chemical cleaning exercises in CANDU as well as the other nuclear power stations in North America have been conducted using a process developed in conjunction with the Electric Power Research Institute (EPRI). The Siemens high temperature process was applied for the first time in North America at the Point Lepreau Nuclear Generating Station (PLGS). The paper discusses experiences related to the pre- and post-award chemical cleaning activities, the chemical cleaning application, post-cleaning inspection results and waste handling activities. (author)

  18. Dynamical predictive power of the generalized Gibbs ensemble revealed in a second quench.

    Science.gov (United States)

    Zhang, J M; Cui, F C; Hu, Jiangping

    2012-04-01

    We show that a quenched and relaxed completely integrable system is hardly distinguishable from the corresponding generalized Gibbs ensemble in a dynamical sense. To be specific, the response of the quenched and relaxed system to a second quench can be accurately reproduced by using the generalized Gibbs ensemble as a substitute. Remarkably, as demonstrated with the transverse Ising model and the hard-core bosons in one dimension, not only the steady values but even the transient, relaxation dynamics of the physical variables can be accurately reproduced by using the generalized Gibbs ensemble as a pseudoinitial state. This result is an important complement to the previously established result that a quenched and relaxed system is hardly distinguishable from the generalized Gibbs ensemble in a static sense. The relevance of the generalized Gibbs ensemble in the nonequilibrium dynamics of completely integrable systems is then greatly strengthened.

  19. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...

  20. The Oil Point Method - A tool for indicative environmental evaluation in material and process selection

    DEFF Research Database (Denmark)

    Bey, Niki

    2000-01-01

    to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...... of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method, which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself...

  1. Spatial point process analysis for a plant community with high biodiversity

    DEFF Research Database (Denmark)

    Illian, Janine; Møller, Jesper; Waagepetersen, Rasmus Plenge

    A complex multivariate spatial point pattern for a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants. We next demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both

  2. Analysis of residual stress state in sheet metal parts processed by single point incremental forming

    Science.gov (United States)

    Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.

    2018-05-01

    The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper the significance of kinematic hardening effects on the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of the residual stress amplitudes between the clamped and unclamped conditions of 18 % reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.

  3. Analysis of multi-species point patterns using multivariate log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao; Jalilian, Abdollah

    Multivariate log Gaussian Cox processes are flexible models for multivariate point patterns. However, they have so far only been applied in bivariate cases. In this paper we move beyond the bivariate case in order to model multi-species point patterns of tree locations. In particular we address the problems of identifying parsimonious models and of extracting biologically relevant information from the fitted models. The latent multivariate Gaussian field is decomposed into components given in terms of random fields common to all species and components which are species specific. This allows ... of the data. The selected number of common latent fields provides an index of complexity of the multivariate covariance structure. Hierarchical clustering is used to identify groups of species with similar patterns of dependence on the common latent fields.
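
    As a concrete illustration of the model class (univariate rather than multivariate, with invented parameters), a log Gaussian Cox process can be simulated on a grid by exponentiating a Gaussian random field and drawing Poisson counts:

```python
# Latent Gaussian field with exponential covariance -> random intensity
# Lambda = exp(mu + Z) -> Poisson counts per grid cell.
import numpy as np

rng = np.random.default_rng(3)
n = 32                                        # coarse grid on the unit square
x = (np.arange(n) + 0.5) / n
xx, yy = np.meshgrid(x, x)
coords = np.column_stack([xx.ravel(), yy.ravel()])

sigma2, scale = 1.0, 0.1
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = sigma2 * np.exp(-D / scale) + 1e-8 * np.eye(n * n)
Z = np.linalg.cholesky(C) @ rng.standard_normal(n * n)

mu = np.log(100.0) - sigma2 / 2               # so E[total points] is ~100
Lam = np.exp(mu + Z)                          # intensity per unit area
counts = rng.poisson(Lam / n**2)              # cell area = 1/n^2
print("total points:", counts.sum())
```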

  4. Prospects for direct neutron capture measurements on s-process branching point isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, C.; Lerendegui-Marco, J.; Quesada, J.M. [Universidad de Sevilla, Dept. de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Domingo-Pardo, C. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Kaeppeler, F. [Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Palomo, F.R. [Universidad de Sevilla, Dept. de Ingenieria Electronica, Sevilla (Spain); Reifarth, R. [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany)

    2017-05-15

    The neutron capture cross sections of several unstable key isotopes acting as branching points in the s-process are crucial for stellar nucleosynthesis studies, but they are very challenging to measure directly due to the difficult production of sufficient sample material, the high activity of the resulting samples, and the actual (n, γ) measurement, where high neutron fluxes and effective background rejection capabilities are required. At present there are about 21 relevant s-process branching point isotopes whose cross section could not be measured yet over the neutron energy range of interest for astrophysics. However, the situation is changing with some very recent developments and upcoming technologies. This work introduces three techniques that will change the current paradigm in the field: the use of γ-ray imaging techniques in (n, γ) experiments, the production of moderated neutron beams using high-power lasers, and double capture experiments in Maxwellian neutron beams. (orig.)

  5. Instantaneous nonlinear assessment of complex cardiovascular dynamics by Laguerre-Volterra point process models.

    Science.gov (United States)

    Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo

    2013-01-01

    We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on the Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first-order moment, respectively. Here, these measures are evaluated on heartbeat series coming from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.

  6. MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    We present a stochastic model of an experiment monitoring the spiking activity of a place cell in the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy-based random field, while the animal's path is simplified to a discrete random walk. In a simulation study, a previously suggested method is used first. It is then shown that a solution of the filtering problem yields the desired inference for the random intensity. Two approaches are suggested, and the new one, based on a finite point process density, is applied. Using Markov chain Monte Carlo we obtain numerical results from the simulated model. The methodology is discussed.

  7. Point process analyses of variations in smoking rate by setting, mood, gender, and dependence

    Science.gov (United States)

    Shiffman, Saul; Rathbun, Stephen L.

    2010-01-01

    The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate.

  8. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  9. A business process model as a starting point for tight cooperation among organizations

    Directory of Open Access Journals (Sweden)

    O. Mysliveček

    2006-01-01

    Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success in all markets (markets of high-technology products are particularly affected). Thus it is important for companies to be able to effectively set up all kinds of cooperation. A business process model (BPM) is a suitable starting point for this future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.

  10. Weak interaction rates for Kr and Sr waiting-point nuclei under rp-process conditions

    International Nuclear Information System (INIS)

    Sarriguren, P.

    2009-01-01

    Weak interaction rates are studied in neutron deficient Kr and Sr waiting-point isotopes in ranges of densities and temperatures relevant for the rp process. The nuclear structure is described within a microscopic model (deformed QRPA) that reproduces not only the half-lives but also the Gamow-Teller strength distributions recently measured. The various sensitivities of the decay rates to both density and temperature are discussed. Continuum electron capture is shown to contribute significantly to the weak rates at rp-process conditions.

  11. A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J.-S. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Vojinovic, V. [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland); Patino, R. [Cinvestav-Merida, Departamento de Fisica Aplicada, Km. 6 carretera antigua a Progreso, AP 73 Cordemex, 97310 Merida, Yucatan (Mexico); Maskow, Th. [UFZ Centre for Environmental Research, Department of Environmental Microbiology, Permoserstrasse 15, D-04318 Leipzig (Germany); Stockar, U. von [Laboratory of Chemical and Biochemical Engineering, Swiss Federal Institute of Technology, EPFL, CH-1015 Lausanne (Switzerland)]. E-mail: urs.vonStockar@epfl.ch

    2007-06-25

    Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol{sup -1} of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.

  12. A comparison of various Gibbs energy dissipation correlations for predicting microbial growth yields

    International Nuclear Information System (INIS)

    Liu, J.-S.; Vojinovic, V.; Patino, R.; Maskow, Th.; Stockar, U. von

    2007-01-01

    Thermodynamic analysis may be applied in order to predict microbial growth yields roughly, based on an empirical correlation of the Gibbs energy of the overall growth reaction or Gibbs energy dissipation. Due to the well-known trade-off between high biomass yield and high Gibbs energy dissipation necessary for fast growth, an optimal range of Gibbs energy dissipation exists and it can be correlated to physical characteristics of the growth substrates. A database previously available in the literature has been extended significantly in order to test such correlations. An analysis of the relationship between biomass yield and Gibbs energy dissipation reveals that one does not need a very precise estimation of the latter to predict the former roughly. Approximating the Gibbs energy dissipation with a constant universal value of -500 kJ C-mol{sup -1} of dry biomass grown predicts many experimental growth yields nearly as well as a carefully designed, complex correlation available from the literature, even though a number of predictions are grossly out of range. A new correlation for Gibbs energy dissipation is proposed which is just as accurate as the complex literature correlation despite its dramatically simpler structure.

  13. PARALLEL PROCESSING OF BIG POINT CLOUDS USING Z-ORDER-BASED PARTITIONING

    Directory of Open Access Journals (Sweden)

    C. Alis

    2016-06-01

    As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest

  14. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    Science.gov (United States)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm
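
    The bit-interleaving described in both records above takes only a few lines of code; the sketch below reproduces the worked example (x = 1, y = 3) → 11.

```python
def morton2d(ix: int, iy: int, bits: int = 16) -> int:
    """2-D Morton (Z-order) code: interleave the bits of the grid indices,
    with the y bit placed above the x bit at each level."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

# the example from the abstract: x = 1 = 01, y = 3 = 11 -> 1011 = 11
assert morton2d(1, 3, bits=2) == 0b1011 == 11
print(morton2d(1, 3, bits=2))
```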

  15. Developing a Business Intelligence Process for a Training Module in SharePoint 2010

    Science.gov (United States)

    Schmidtchen, Bryce; Solano, Wanda M.; Albasini, Colby

    2015-01-01

    Prior to this project, training information for the employees of the National Center for Critical Information Processing and Storage (NCCIPS) was stored in an array of unrelated spreadsheets and SharePoint lists that had to be manually updated. By developing a content management system through a web application platform named SharePoint, this training system is now highly automated and provides a much less intensive method of storing training data and scheduling training courses. This system was developed by using SharePoint Designer and laying out the data structure for the interaction between different lists of data about the employees. The automation of data population inside of the lists was accomplished by implementing SharePoint workflows, which essentially lay out the logic for how data is connected and calculated between certain lists. The resulting training system is constructed from a combination of five lists of data, with a single list acting as the user-friendly interface. This interface is populated with the courses required for each employee and includes past and future information about course requirements. The employees of NCCIPS now have the ability to view, log, and schedule their training information and courses with much more ease. This system will relieve a significant amount of manual input and serve as a powerful informational resource for the employees of NCCIPS in the future.

  16. EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK

    Directory of Open Access Journals (Sweden)

    C. Wang

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.
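
    The sketch below is not the authors' Hadoop/PCL implementation; it only illustrates the map/shuffle/reduce pattern such a framework relies on, partitioning points into spatial tiles and computing a per-tile statistic in parallel with Python's standard library.

```python
from collections import defaultdict
from multiprocessing import Pool
import random

def to_tile(p, size=100.0):
    """Map step: assign a point (x, y, z) to a square tile key."""
    return (int(p[0] // size), int(p[1] // size))

def mean_height(item):
    """Reduce step: one statistic per tile (here, mean elevation)."""
    key, pts = item
    return key, sum(p[2] for p in pts) / len(pts)

if __name__ == "__main__":
    random.seed(0)
    points = [(random.random() * 1000, random.random() * 1000,
               random.random() * 50) for _ in range(100_000)]

    tiles = defaultdict(list)            # shuffle: group points by tile key
    for p in points:
        tiles[to_tile(p)].append(p)

    with Pool() as pool:                 # reduce tiles in parallel
        result = dict(pool.map(mean_height, tiles.items()))
    print(len(result), "tiles; mean z in tile (0, 0):",
          round(result[(0, 0)], 2))
```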

  17. Implementation of hazard analysis and critical control point (HACCP) in dried anchovy production process

    Science.gov (United States)

    Citraresmi, A. D. P.; Wahyuni, E. E.

    2018-03-01

    The aim of this study was to inspect the implementation of Hazard Analysis and Critical Control Point (HACCP) for the identification and prevention of potential hazards in the production process of dried anchovy at PT. Kelola Mina Laut (KML), Lobuk unit, Sumenep. Cold storage is needed at each anchovy processing step in order to maintain the product's physical and chemical condition. In addition, a quality assurance system should be implemented to maintain product quality. The research was conducted using a survey method, following the whole process of making dried anchovy from the receipt of raw materials to the packaging of the final product. The method of data analysis used was descriptive analysis. Implementation of HACCP at PT. KML, Lobuk unit, Sumenep was conducted by applying Pre-Requisite Programs (PRP) and a preparation stage consisting of 5 initial stages and the 7 principles of HACCP. The results showed that a CCP was found in the boiling step, with the significant hazard of Listeria monocytogenes, and in the final sorting step, with the significant hazard of foreign material contamination in the product. The actions taken were controlling the boiling temperature at 100-105 °C for 3-5 minutes and training for the sorting process employees.

  18. Optimization of the single point incremental forming process for titanium sheets by using response surface

    Directory of Open Access Journals (Sweden)

    Saidi Badreddine

    2016-01-01

    The single point incremental forming process is well known to be perfectly suited for prototyping and small series. One of its fields of applicability is medicine, for the forming of titanium prostheses or titanium medical implants. However, this process is not yet very industrialized, mainly due to its geometrical inaccuracy and its inhomogeneous thickness distribution. Moreover, considerable forces can occur; they must be controlled in order to preserve the tooling. In this paper, a numerical approach is proposed in order to minimize the maximum force reached during the incremental forming of titanium sheets and to maximize the minimal thickness. A response surface methodology is used to find the optimal values of two input parameters of the process: the punch diameter and the vertical step size of the tool path.
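
    A sketch of the response-surface step with invented sample data: fit a quadratic surface in punch diameter d and vertical step size z, then locate its stationary point from the fitted coefficients.

```python
# F ~ b0 + b1 d + b2 z + b3 d^2 + b4 z^2 + b5 d z, fitted by least squares;
# the stationary point solves the 2x2 linear system grad F = 0.
import numpy as np

d = np.array([8, 8, 12, 12, 10, 10, 10, 14, 6], dtype=float)     # mm
z = np.array([0.2, 0.6, 0.2, 0.6, 0.4, 0.2, 0.6, 0.4, 0.4])      # mm
F = np.array([1.9, 2.6, 2.3, 3.1, 2.2, 2.0, 2.7, 2.8, 2.1])      # kN, invented

X = np.column_stack([np.ones_like(d), d, z, d**2, z**2, d * z])
b, *_ = np.linalg.lstsq(X, F, rcond=None)

H = np.array([[2 * b[3], b[5]],
              [b[5], 2 * b[4]]])
stationary = np.linalg.solve(H, -b[1:3])
print("coefficients:", b.round(4))
print("stationary point (d, z):", stationary.round(3))
```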

  19. Marked point process framework for living probabilistic safety assessment and risk follow-up

    International Nuclear Information System (INIS)

    Arjas, Elja; Holmberg, Jan

    1995-01-01

    We construct a model for living probabilistic safety assessment (PSA) by applying the general framework of marked point processes. The framework provides a theoretically rigorous approach for considering risk follow-up of posterior hazards. In risk follow-up, the hazard of core damage is evaluated synthetically at time points in the past, by using some observed events as logged history and combining it with re-evaluated potential hazards. There are several alternatives for doing this, of which we consider three here, calling them initiating event approach, hazard rate approach, and safety system approach. In addition, for a comparison, we consider a core damage hazard arising in risk monitoring. Each of these four definitions draws attention to a particular aspect in risk assessment, and this is reflected in the behaviour of the consequent risk importance measures. Several alternative measures are again considered. The concepts and definitions are illustrated by a numerical example

  20. Quality control for electron beam processing of polymeric materials by end-point analysis

    International Nuclear Information System (INIS)

    DeGraff, E.; McLaughlin, W.L.

    1981-01-01

    Properties of certain plastics, e.g. polytetrafluoroethylene, polyethylene, ethylene vinyl acetate copolymer, can be modified selectively by ionizing radiation. One of the advantages of this treatment over chemical methods is better control of the process and the end-product properties. The most convenient method of dosimetry for monitoring quality control is post-irradiation evaluation of the plastic itself, e.g., melt index and melt point determination. It is shown that by proper calibration in terms of total dose and sufficiently reproducible radiation effects, such product test methods provide convenient and meaningful analyses. Other appropriate standardized analytical methods include stress-crack resistance, stress-strain-to-fracture testing and solubility determination. Standard routine dosimetry over the dose and dose rate ranges of interest confirm that measured product end points can be correlated with calibrated values of absorbed dose in the product within uncertainty limits of the measurements. (author)

  1. Application of random-point processes to the detection of radiation sources

    International Nuclear Information System (INIS)

    Woods, J.W.

    1978-01-01

    In this report the mathematical theory of random-point processes is reviewed and it is shown how use of the theory can obtain optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line ''learning'' of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori

  2. Nuclear binding around the RP-process waiting points $^{68}$Se and $^{72}$Kr

    CERN Multimedia

    2002-01-01

    Encouraged by the success of mass determinations of nuclei close to the Z=N line performed at ISOLTRAP during the year 2000 and of the recent decay spectroscopy studies on neutron-deficient Kr isotopes (IS351 collaboration), we aim to measure masses and proton separation energies of the bottleneck nuclei defining the flow of the astrophysical rp-process beyond A$\sim$70. In detail, the program includes mass measurements of the rp-process waiting point nuclei $^{68}$Se and $^{72}$Kr and determination of proton separation energies of the proton-unbound $^{69}$Br and $^{73}$Rb via $\beta$-decays of $^{69}$Kr and $^{73}$Sr, respectively. The aim of the project is to complete the experimental database for astrophysical network calculations and for the liquid-drop type of mass models typically used in the modelling of the astrophysical rp process in the region. The first beamtime is scheduled for August 2001 and the aim is to measure the absolute mass of the waiting-point nucleus $^{72}$Kr.

  3. Assessment of Peer Mediation Process from Conflicting Students’ Point of Views

    Directory of Open Access Journals (Sweden)

    Fulya TÜRK

    2016-12-01

    The purpose of this study was to analyze a peer mediation process applied in a high school from the conflicting students' points of view. This research was carried out in a high school in Denizli. After ten sessions of training in peer mediation, peer mediators mediated their peers' real conflicts. In the research, 41 students (28 girls, 13 boys) who got help at least once were interviewed as parties to a conflict. The mediation process was evaluated through semi-structured interviews with the conflicting students. Eight questions were asked of the conflicting parties. Verbal data obtained from the interviews were analyzed using content analysis. When the conflicting students' opinions and experiences of peer mediation were analyzed, it was seen that they were satisfied with the process, had resolved their conflicts in a constructive and peaceful way, and that their friendships continued as before. All of these results also indicate that peer mediation is an effective method of resolving student conflicts constructively.

  4. An Introduction to the DA-T Gibbs Sampler for the Two-Parameter Logistic (2PL) Model and Beyond

    Directory of Open Access Journals (Sweden)

    Gunter Maris

    2005-01-01

    The DA-T Gibbs sampler is proposed by Maris and Maris (2002) as a Bayesian estimation method for a wide variety of Item Response Theory (IRT) models. The present paper provides an expository account of the DA-T Gibbs sampler for the 2PL model. However, the scope is not limited to the 2PL model. It is demonstrated how the DA-T Gibbs sampler for the 2PL may be used to build, quite easily, Gibbs samplers for other IRT models. Furthermore, the paper contains a novel, intuitive derivation of the Gibbs sampler and could be read for a graduate course on sampling.
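
    The DA-T sampler itself is beyond a few lines, but the basic Gibbs-sampling loop it builds on is easy to show: alternately draw each block from its full conditional. Here the target is a bivariate normal with correlation rho, whose conditionals are known exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
rho, n_iter = 0.8, 10_000
s = np.sqrt(1 - rho**2)

x, y = 0.0, 0.0
draws = np.empty((n_iter, 2))
for i in range(n_iter):
    x = rng.normal(rho * y, s)    # draw x | y ~ N(rho*y, 1 - rho^2)
    y = rng.normal(rho * x, s)    # draw y | x ~ N(rho*x, 1 - rho^2)
    draws[i] = x, y

print("sample correlation:", np.corrcoef(draws[1000:].T)[0, 1].round(3))
```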

  5. Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter

    Science.gov (United States)

    Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.

    2016-01-01

    Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision, such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs as well as or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549

  6. On the estimation of the spherical contact distribution Hs(y) for spatial point processes

    International Nuclear Information System (INIS)

    Doguwa, S.I.

    1990-08-01

    Ripley (1977, Journal of the Royal Statistical Society B 39, 172-212) proposed an estimator for the spherical contact distribution $H_s(y)$ of a spatial point process observed in a bounded planar region. However, this estimator is not defined for some distances of interest in this bounded region. A new estimator for $H_s(y)$ is proposed for use with a regular grid of sampling locations. This new estimator is defined for all distances of interest. It also appears to have a smaller bias and a smaller mean squared error than the previously suggested alternative. (author). 11 refs, 4 figs, 1 tab
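
    A schematic version of such an estimator is easy to state: take a regular grid of sampling locations, record each location's distance to the nearest point of the pattern, and report the empirical distribution of those distances. The sketch below deliberately omits the edge corrections that distinguish the proposed estimator from Ripley's.

```python
import numpy as np

def spherical_contact_cdf(points, grid, r):
    """Naive empirical spherical contact distribution H_s(y): for each
    sampling location, the distance to the nearest event of the pattern;
    H_s(y) is the fraction of locations with distance <= y. No edge
    correction is applied (a schematic sketch, not the paper's estimator)."""
    d = np.sqrt(((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    nearest = d.min(axis=1)              # distance to the nearest event
    return np.array([(nearest <= y).mean() for y in r])

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(200, 2))        # toy point pattern
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])  # regular sampling grid
print(spherical_contact_cdf(pts, grid, np.linspace(0.0, 0.1, 11)))
```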

  7. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  8. Determination of Gibbs energies of formation in aqueous solution using chemical engineering tools.

    Science.gov (United States)

    Toure, Oumar; Dussap, Claude-Gilles

    2016-08-01

    Standard Gibbs energies of formation are of primary importance in the field of biothermodynamics. In the absence of directly measured values, thermodynamic calculations are required to determine the missing data. For several biochemical species, this study shows that knowledge of the standard Gibbs energy of formation of the pure compounds (in the gaseous, solid or liquid state) makes it possible to determine the corresponding standard Gibbs energies of formation in aqueous solution. To do so, using chemical engineering tools (thermodynamic tables and a model for predicting activity coefficients, solvation Gibbs energies and pKa data), it becomes possible to determine the partial chemical potential of neutral and charged components under real metabolic conditions, even in concentrated mixtures.

  9. Extension of Gibbs-Duhem equation including influences of external fields

    Science.gov (United States)

    Guangze, Han; Jianjia, Meng

    2018-03-01

    The Gibbs-Duhem equation is one of the fundamental equations of thermodynamics, describing the relation among changes in temperature, pressure and chemical potential. A thermodynamic system can be affected by external fields, and this effect should be reflected in the thermodynamic equations. Based on the energy postulate and the first law of thermodynamics, the differential equation for the internal energy is extended to include the properties of external fields. Then, using the homogeneous function theorem and a redefinition of the Gibbs energy, a generalized Gibbs-Duhem equation including the influences of external fields is derived. As a demonstration of the application of this generalized equation, the influences of temperature and external electric field on surface tension, surface adsorption controlled by an external electric field, and the derivation of a generalized chemical potential expression are discussed. These examples show that the extended Gibbs-Duhem equation developed in this paper is capable of capturing the influences of external fields on a thermodynamic system.
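
    For reference, the standard Gibbs-Duhem relation is shown below, together with a schematic form of the field-extended relation the abstract describes; the X_k, Y_k notation for the conjugate pairs is mine, and the paper's exact formulation may differ.

```latex
% Standard Gibbs--Duhem relation:
S\,\mathrm{d}T - V\,\mathrm{d}P + \sum_i N_i\,\mathrm{d}\mu_i = 0
% Schematic extension with external fields, where X_k is the generalized
% displacement conjugate to the external field variable Y_k:
S\,\mathrm{d}T - V\,\mathrm{d}P + \sum_i N_i\,\mathrm{d}\mu_i + \sum_k X_k\,\mathrm{d}Y_k = 0
```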

  10. The Impact of the Delivery of Prepared Power Point Presentations on the Learning Process

    Directory of Open Access Journals (Sweden)

    Auksė Marmienė

    2011-04-01

    Full Text Available This article describes the process of preparing and delivering Power Point presentations and how it can be used by teachers as a resource for classroom teaching. The advantages of this classroom activity are also outlined, covering some of the problems it raises and providing a few suggestions for dealing with those difficulties. The major objective of the present paper is to investigate students’ ability to choose the material and content of Power Point presentations on professional topics via the Internet, as well as their ability to prepare and deliver the presentation in front of an audience. The factors which determine the choice of the presentation subject are also analysed in this paper. After the delivery, students were asked to self- and peer-assess, in written reports, the difficulties they faced in preparing and delivering the presentations. Learners’ attitudes to the choice of Power Point presentation topics were surveyed by administering a self-assessment questionnaire.

  11. The MaxEnt extension of a quantum Gibbs family, convex geometry and geodesics

    International Nuclear Information System (INIS)

    Weis, Stephan

    2015-01-01

    We discuss methods to analyze a quantum Gibbs family in the ultra-cold regime, where the norm closure of the Gibbs family fails due to discontinuities of the maximum-entropy inference. The current discussion of maximum-entropy inference and irreducible correlation in the area of quantum phase transitions is a major motivation for this research. We extend a representation of the irreducible correlation from finite temperatures to absolute zero.

  12. Uniqueness of Gibbs Measure for Models with Uncountable Set of Spin Values on a Cayley Tree

    International Nuclear Information System (INIS)

    Eshkabilov, Yu. Kh.; Haydarov, F. H.; Rozikov, U. A.

    2013-01-01

    We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. It is known that the ‘splitting Gibbs measures’ of the model can be described by solutions of a nonlinear integral equation. For arbitrary k ≥ 2 we find a sufficient condition under which the integral equation has a unique solution; hence, under this condition, the corresponding model has a unique splitting Gibbs measure.

  13. Process for quality assurance of welded joints for electrical resistance point welding

    International Nuclear Information System (INIS)

    Schaefer, R.; Singh, S.

    1977-01-01

    In order to guarantee the reproducibility of welded joints of uniform quality (above all in the metalworking industry), it is proposed that, before resistance point welding begins, a preheating current be allowed to flow at the site of the weld. A given reduction of the total resistance at the site of the weld determines the moment at which the preheating current is switched over to the welding current; this value is always determined empirically. Further possibilities for controlling the welding process are described in which measurement of the thermal expansion of the parts is used. A standard welding time is given. The rated course of electrode movement during the process can be predicted, and a running comparison of nominal and actual values can be carried out.

  14. Implementation of 5S tools as a starting point in business process reengineering

    Directory of Open Access Journals (Sweden)

    Vorkapić Miloš

    2017-01-01

    Full Text Available The paper deals with the analysis of elements which represent a starting point in the implementation of business process reengineering (BPR). In our research we have used Lean tools through an analysis of the 5S model. Using the example of finalization of the finished transmitter in IHMT-CMT production, 5S tools were implemented with a focus on quality elements, although theory treats BPR and TQM as two opposite activities in an enterprise. We wanted to highlight the significance of employees’ self-discipline, which helps the process of product finalization develop on time and without waste and losses; in addition, the employees keep their workplace clean, tidy and functional.

  15. Comparison of plastic strains on AA5052 by single point incremental forming process using digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Mugendiran, V.; Gnanavelbabu, A. [Anna University, Chennai, Tamilnadu (India)

    2017-06-15

    In this study, a surface-based strain measurement was used to determine the formability of sheet metal. Strain measurement may rely on manual calculation of plastic strains from a reference circle and its deformed counterpart, but the manual method carries a larger margin of error in practical applications. In this paper, an attempt has been made to compare formability using three different approaches: the conventional method, the least squares method, and digital-image-based strain measurement. As the sheet metal is formed by the single point incremental process, the etched circles deform into approximately elliptical shapes; image acquisition was therefore performed before and after forming. The plastic strains of the deformed circle grids are calculated with respect to the non-deformed reference, and the coordinates of the deformed circles are measured through a series of image processing steps. Finally, the strains obtained from the deformed circles are used to plot the forming limit diagram. To evaluate the accuracy of the system, the forming limit diagrams predicted by the conventional, least squares and digital methods were compared. The conventional and least squares methods show a marginal error compared with the digital image processing method. Strain measurement based on image processing agrees well and can be used to improve accuracy and reduce measurement error in the prediction of forming limit diagrams.
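
    The digital measurement step can be sketched as follows: fit ellipses to the deformed grid circles in a binarized image, then convert the fitted axes into true (logarithmic) major and minor strains. This is a minimal illustration assuming OpenCV; the function name, the binary-image input and the undeformed-diameter parameter are my placeholders, not the paper's procedure.

```python
import cv2
import numpy as np

def grid_strains(binary_image, d0_px):
    """Schematic circle-grid strain analysis: fit an ellipse to each
    deformed grid circle and convert its axes to true strains,
    e_major = ln(d_major / d0) and e_minor = ln(d_minor / d0).
    `binary_image` is a binarized image of the etched grid after forming
    and `d0_px` the undeformed circle diameter in pixels (assumptions of
    this sketch)."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    strains = []
    for c in contours:
        if len(c) < 5:                   # fitEllipse needs >= 5 points
            continue
        (_, _), (a, b), _ = cv2.fitEllipse(c)
        d_major, d_minor = max(a, b), min(a, b)
        strains.append((np.log(d_major / d0_px),    # major true strain
                        np.log(d_minor / d0_px)))   # minor true strain
    return np.array(strains)
```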

  17. Neutron capture at the s-process branching points $^{171}$Tm and $^{204}$Tl

    CERN Multimedia

    Branching points in the s-process are very special isotopes for which there is competition between neutron capture and the subsequent β-decay in the chain producing the heavy elements beyond Fe. Typically, knowledge of the associated capture cross sections is very poor, due to the difficulty of obtaining enough material of these radioactive isotopes and of measuring the cross section of a sample with intrinsic activity; indeed, only 2 out of the 21 s-process branching points have ever been measured using the time-of-flight method. In this experiment we aim at measuring for the first time the capture cross sections of $^{171}$Tm and $^{204}$Tl, both of crucial importance for understanding the nucleosynthesis of heavy elements in AGB stars. The combination of the (n,$\\gamma$) measurements on $^{171}$Tm and $^{204}$Tl will allow one to accurately constrain the neutron density and the strength of the $^{13}$C(α,n) source in low mass AGB stars. Additionally, the cross section of $^{204}$Tl is also of cosmo-chrono...

  18. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

    Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions, the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results from those based on PS.
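
    A minimal forward simulation of the HSMM description makes the contrast with an HMM concrete: gamma-distributed dwell times (more regular than the exponential dwell times of a Markov chain) alternate between the two states, with state-dependent Poisson spiking within each dwell. All rate and dwell parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state semi-Markov spike train: gamma dwell times per state,
# homogeneous Poisson spiking within each state (illustrative values).
rates = {"non-burst": 2.0, "burst": 40.0}                 # spikes per second
dwell = {"non-burst": (4.0, 0.5), "burst": (6.0, 0.05)}   # gamma (shape, scale)

spikes, t, state, T = [], 0.0, "non-burst", 60.0
while t < T:
    shape, scale = dwell[state]
    d = rng.gamma(shape, scale)              # dwell time in the current state
    n = rng.poisson(rates[state] * d)        # spike count within the dwell
    spikes.extend(np.sort(t + rng.uniform(0.0, d, n)))
    t += d
    state = "burst" if state == "non-burst" else "non-burst"

print(f"{len(spikes)} spikes over {T:.0f} s")
```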

  19. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    Science.gov (United States)

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  20. Students’ Algebraic Thinking Process in Context of Point and Line Properties

    Science.gov (United States)

    Nurrahmi, H.; Suryadi, D.; Fatimah, S.

    2017-09-01

    Learning of school algebra is often limited to symbols and operating procedures, so students are able to work on problems that only require the ability to operate with symbols but are unable to generalize a pattern, which is one part of algebraic thinking. The purpose of this study is to create a didactic design that facilitates students’ algebraic thinking through the generalization of patterns, especially in the context of point and line properties. This study used a qualitative method based on Didactical Design Research (DDR). The result is that students are able to make factual, contextual, and symbolic generalizations. This happens because generalization arises from facts expressed in local terms; the generalization then produces an algebraic formula described in the context and perspective of each student, after which the formula uses algebraic letter symbols drawn from the students’ own language. It can be concluded that the design facilitated students’ algebraic thinking through the generalization of patterns, especially in the context of point and line properties. This design can therefore be used as alternative teaching material in school algebra.

  1. Comment on "Inference with minimal Gibbs free energy in information field theory".

    Science.gov (United States)

    Iatsenko, D; Stefanovska, A; McClintock, P V E

    2012-03-01

    Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.

  2. Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment

    Science.gov (United States)

    Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner

    2017-04-01

    Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling out the gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: a geomorphometric and a morphological classification. The classification methods were originally developed for a small test area, but in this work we have used them to classify the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and

  3. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    Science.gov (United States)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decision in real-time (for example, to stimulate the neurons or not) based on various sources of information present in

  4. Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions

    Science.gov (United States)

    Nabi, Jameel-Un; Böyükata, Mahmut

    2017-01-01

    The structure and the weak-interaction-mediated rates of the heavy waiting point (WP) nuclei $^{80}$Zr, $^{84}$Mo, $^{88}$Ru, $^{92}$Pd and $^{96}$Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were later calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. For rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for $^{80}$Zr; for the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the view that electron capture rates form an integral part of the weak rates under rp-process conditions and play an important role in nuclear model calculations.

  5. Radial Basis Functional Model of Multi-Point Dieless Forming Process for Springback Reduction and Compensation

    Directory of Open Access Journals (Sweden)

    Misganaw Abebe

    2017-11-01

    Full Text Available Springback in multi-point dieless forming (MDF) is a common problem because of the small deformation and the blank-holder-free boundary condition. Numerical simulations are widely used in sheet metal forming to predict springback; however, their computational cost makes searching for optimal process parameter values time-consuming. This study proposes a radial basis function (RBF) model to replace the numerical simulation, using statistical analyses based on a design of experiments (DOE). Punch holding time, blank thickness, and curvature radius are chosen as the effective process parameters for determining springback. The Latin hypercube DOE method facilitates statistical analyses and the extraction of a prediction model over the experimental process parameter domain. A finite element (FE) simulation model is built in the commercial software ABAQUS to generate the springback responses of the training and testing samples. A genetic algorithm is applied to the developed RBF prediction model to find the optimal parameter values for reducing and compensating the induced springback for the different blank thicknesses. Finally, the RBF result is verified by comparison with the FE simulation at the optimal process parameters; both results show that the springback deviation from the target shape is almost negligible.
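
    The surrogate workflow can be sketched in a few lines: sample the three process parameters, evaluate a springback response (here a synthetic stand-in for the ABAQUS runs), fit an RBF model, and search it for the parameters minimizing springback. Differential evolution is used below only as a convenient stand-in for the paper's genetic algorithm, and the response function is invented for the example.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)

# Synthetic stand-in for the FE springback response (NOT the paper's model);
# columns of x: punch holding time (s), blank thickness (mm), radius (mm).
def springback(x):
    t_hold, thick, radius = x.T
    return 0.5 / thick + 0.02 * radius - 0.1 * np.log1p(t_hold)

bounds = [(1.0, 30.0), (0.5, 3.0), (100.0, 400.0)]
X = np.column_stack([rng.uniform(lo, hi, 60) for lo, hi in bounds])  # DOE samples
y = springback(X)

surrogate = RBFInterpolator(X, y)        # radial basis function surrogate
result = differential_evolution(lambda x: abs(surrogate(x[None])[0]),
                                bounds, seed=4)
print("parameters minimizing |springback|:", result.x)
```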

  6. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs.

    Science.gov (United States)

    Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson

    2017-02-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a

  7. Predicting seizures in untreated temporal lobe epilepsy using point-process nonlinear models of heartbeat dynamics.

    Science.gov (United States)

    Valenza, G; Romigi, A; Citi, L; Placidi, F; Izzi, F; Albanese, M; Scilingo, E P; Marciani, M G; Duggento, A; Guerrisi, M; Toschi, N; Barbieri, R

    2016-08-01

    Symptoms of temporal lobe epilepsy (TLE) are frequently associated with autonomic dysregulation, whose underlying biological processes are thought to strongly contribute to sudden unexpected death in epilepsy (SUDEP). While abnormal cardiovascular patterns commonly occur during ictal events, putative patterns of autonomic cardiac effects during pre-ictal (PRE) periods (i.e. periods preceding seizures) are still unknown. In this study, we investigated TLE-related heart rate variability (HRV) through instantaneous, nonlinear estimates of cardiovascular oscillations during inter-ictal (INT) and PRE periods. ECG recordings from 12 patients with TLE were processed to extract standard HRV indices, as well as indices of instantaneous HRV complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra) obtained through definition of inhomogeneous point-process nonlinear models, employing Volterra-Laguerre expansions of linear, quadratic, and cubic kernels. Experimental results demonstrate that the best INT vs. PRE classification performance (balanced accuracy: 73.91%) was achieved only when retaining the time-varying, nonlinear, and non-stationary structure of heartbeat dynamical features. The proposed approach opens novel important avenues in predicting ictal events using information gathered from cardiovascular signals exclusively.

  8. A customizable stochastic state point process filter (SSPPF) for neural spiking activity.

    Science.gov (United States)

    Xin, Yao; Li, Will X Y; Min, Biao; Han, Yan; Cheung, Ray C C

    2013-01-01

    The Stochastic State Point Process Filter (SSPPF) is effective for adaptive signal processing. In particular, it has been successfully applied to neural signal coding/decoding in recent years. Recent work has proven its efficiency in nonparametric coefficient tracking when modeling the mammalian nervous system. However, existing SSPPF implementations have been realized only on commercial software platforms, which limits their computational capability. In this paper, the first hardware architecture of the SSPPF has been designed and successfully implemented on a field-programmable gate array (FPGA), providing a more efficient means for coefficient tracking in a well-established generalized Laguerre-Volterra model for mammalian hippocampal spiking activity research. By exploiting the intrinsic parallelism of the FPGA, the proposed architecture is able to process matrices or vectors of arbitrary size, and is efficiently scalable. Experimental results show superior performance compared to the software implementation, while maintaining numerical precision. This architecture could also be utilized in future hippocampal cognitive neural prosthesis designs.

  9. Continuous quality improvement process pin-points delays, speeds STEMI patients to life-saving treatment.

    Science.gov (United States)

    2011-11-01

    Using a multidisciplinary team approach, the University of California, San Diego, Health System has been able to significantly reduce average door-to-balloon angioplasty times for patients with the most severe form of heart attacks, beating national recommendations by more than a third. The multidisciplinary team meets monthly to review all cases involving patients with ST-segment-elevation myocardial infarctions (STEMI) to see where process improvements can be made. Using this continuous quality improvement (CQI) process, the health system has reduced average door-to-balloon times from 120 minutes to less than 60 minutes, and administrators are now aiming for further progress. Among the improvements instituted by the multidisciplinary team are the implementation of a "greeter" with enough clinical expertise to quickly pick up on potential STEMI heart attacks as soon as patients walk into the ED, and the purchase of an electrocardiogram (EKG) machine so that evaluations can be done in the triage area. ED staff have prepared "STEMI" packets, including items such as special IV tubing and disposable leads, so that patients headed for the catheterization laboratory are prepared to undergo the procedure soon after arrival. All the clocks and devices used in the ED are synchronized so that analysts can later review how long it took to complete each step of the care process. Points of delay can then be targeted for improvement.

  10. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    Directory of Open Access Journals (Sweden)

    Zhe eChen

    2012-02-01

    Full Text Available In recent years, time-varying inhomogeneous point process models have been introduced for the assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model's statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order nonlinearities, with subsequent bispectral quantification in the frequency domain, further allows for the definition of instantaneous metrics of nonlinearity. Here we provide a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing to our mathematical approach as a promising monitoring tool for accurate, noninvasive assessment in clinical practice.

  11. Catalysts macroporosity and their efficiency in sulphur sub-dew point Claus tail gas treating processes

    Energy Technology Data Exchange (ETDEWEB)

    Tsybulevski, A.M.; Pearson, M. [Alcoa Industrial Chemicals, 16010 Barker's Point Lane, Houston, TX (United States); Morgun, L.V.; Filatova, O.E. [All-Russian Research Institute of Natural Gases and Gas Technologies VNIIGAZ, Moscow (Russian Federation); Sharp, M. [Porocel Corporation, Westheimer, Houston, TX (United States)

    1996-10-08

    The efficiency of four samples of alumina catalyst has been studied experimentally in the course of Claus 'tail gas' treating processes at the sulphur sub-dew point (TGTP). The samples were characterized by the same chemical and crystallographic composition, the same volume of micropores, the same surface area and the same catalytic activity, but differed appreciably in the volume of macropores. An increase in the effective operation time of the catalysts before breakthrough of unrecoverable sulphur-containing compounds with increasing macropore volume has been established. A theoretical model of the TGTP has been considered, and it has been shown that the increase in the sulphur capacity of the catalysts with a larger volume of macropores is due to an increase in the catalysts' efficiency factor and a slower decrease in their diffusive permeability during the filling of micropores by sulphur.

  12. Quantification of annual wildfire risk; A spatio-temporal point process approach.

    Directory of Open Access Journals (Sweden)

    Paula Pereira

    2013-10-01

    Full Text Available Policy responses for local and global fire management depend heavily on a proper understanding of fire extent as well as its spatio-temporal variation across any given study area. Annual fire risk maps are important tools for such policy responses, supporting strategic decisions such as the location-allocation of equipment and human resources. Here, we define risk of fire in the narrow sense as the probability of its occurrence, without addressing the loss component. In this paper, we study the spatio-temporal point patterns of wildfires and model them by a log Gaussian Cox process. The mean of the predictive distribution of the random intensity function is used, in this narrow sense, as the annual fire risk map for the next year.

  13. The (n, α) reaction in the s-process branching point $^{59}$Ni

    CERN Multimedia

    We propose to measure the $^{59}$Ni(n,α)$^{56}$Fe cross section at the neutron time-of-flight (n_TOF) facility with a dedicated chemical vapor deposition (CVD) diamond detector. The (n,α) reaction in the radioactive $^{59}$Ni is of relevance in nuclear astrophysics, as it can be seen as a first branching point in the astrophysical s-process. Its relevance in nuclear technology is especially related to material embrittlement in stainless steel. There is a strong discrepancy between the available experimental data and the evaluated nuclear data files for this isotope; the aim of the measurement is to clarify this disagreement. The clear energy separation of the reaction products of neutron induced reactions in $^{59}$Ni makes it a very suitable candidate for a first cross section measurement with the CVD diamond detector, which should serve in the future for similar measurements at n_TOF.

  14. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks of subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty regarding the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough and Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, where faults are only partially visible and many are missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone tectonics similar to those of the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
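
    To make the prior concrete: a plain (unmarked) Strauss process is a repulsive point process with density proportional to β^n γ^{s(x)}, where s(x) counts point pairs closer than R, and it can be sampled by birth-death Metropolis-Hastings. The sketch below simulates this simplified, unmarked version on the unit square; the marks and fault geometries of the paper's model are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def strauss_mh(beta, gamma, R, n_steps=20000):
    """Birth-death Metropolis-Hastings for a Strauss process on the unit
    square (density ~ beta^n * gamma^{s(x)}, s(x) = close pairs within R).
    A simplified unmarked sketch of the repulsive model family used above."""
    pts = []
    for _ in range(n_steps):
        n = len(pts)
        if rng.random() < 0.5:                       # propose a birth
            u = rng.uniform(0.0, 1.0, 2)
            t = sum(np.hypot(*(u - p)) < R for p in pts)
            if rng.random() < beta * gamma ** t / (n + 1):
                pts.append(u)
        elif n > 0:                                  # propose a death
            i = rng.integers(n)
            rest = pts[:i] + pts[i + 1:]
            t = sum(np.hypot(*(pts[i] - p)) < R for p in rest)
            if rng.random() < n / (beta * gamma ** t):
                pts = rest
    return np.array(pts)

pattern = strauss_mh(beta=100.0, gamma=0.2, R=0.05)
print(len(pattern), "points in the sampled pattern")
```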

  15. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  16. A Multi-Point Method Considering the Maximum Power Point Tracking Dynamic Process for Aerodynamic Optimization of Variable-Speed Wind Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zhiqiang Yang

    2016-05-01

    Full Text Available Due to the dynamic process of maximum power point tracking (MPPT) caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs) cannot maintain the optimal tip speed ratio (TSR) from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL) 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.

  17. Activity coefficients and excess Gibbs' free energy of some binary mixtures formed by p-cresol at 95.23 kPa

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, T.E. Vittal [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India); Venkanna, N. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Kumar, Y. Naveen [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Ashok, K. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Sirisha, N.M. [Swamy Ramanandateertha Institute of Science and Technology, Hyderabad 508 004 (India); Prasad, D.H.L. [Properties Group, Chemical Engineering Laboratory, Indian Institute of Chemical Technology, Hyderabad 500 007 (India)]. E-mail: dasika@iict.res.in

    2007-07-15

    Bubble point temperatures at 95.23 kPa, over the entire composition range, are measured for the binary mixtures formed by p-cresol with 1,2-dichloroethane, 1,1,2,2-tetrachloroethane, trichloroethylene, tetrachloroethylene, and o-, m-, and p-xylenes, making use of a Swietoslawski-type ebulliometer. Liquid phase mole fraction ($x_1$) versus bubble point temperature (T) measurements are found to be well represented by the Wilson model. The optimum Wilson parameters are used to calculate the vapor phase composition, activity coefficients, and excess Gibbs free energy. The results are discussed.
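
    For a binary mixture the Wilson model gives closed-form activity coefficients and the molar excess Gibbs energy; a small sketch follows. The parameter values in the example call are placeholders, not the fitted values reported for these p-cresol systems.

```python
import numpy as np

R = 8.314  # J/(mol K)

def wilson_binary(x1, L12, L21, T):
    """Wilson model for a binary liquid mixture: returns (gamma1, gamma2)
    and the molar excess Gibbs energy gE. L12 and L21 are the Wilson
    parameters (placeholder values in the example call below)."""
    x2 = 1.0 - x1
    term = L12 / (x1 + L12 * x2) - L21 / (x2 + L21 * x1)
    ln_g1 = -np.log(x1 + L12 * x2) + x2 * term
    ln_g2 = -np.log(x2 + L21 * x1) - x1 * term
    gE = R * T * (-x1 * np.log(x1 + L12 * x2) - x2 * np.log(x2 + L21 * x1))
    return np.exp(ln_g1), np.exp(ln_g2), gE

g1, g2, gE = wilson_binary(x1=0.4, L12=0.35, L21=1.2, T=390.0)
print(f"gamma1 = {g1:.3f}, gamma2 = {g2:.3f}, gE = {gE:.1f} J/mol")
```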

  18. Insights into mortality patterns and causes of death through a process point of view model.

    Science.gov (United States)

    Anderson, James J; Li, Ting; Sharrow, David J

    2017-02-01

    Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the twentieth century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high magnitude disease challenges on individuals at all vitality levels to low magnitude stress challenges on low vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young adult mortality hump or cancer in old age, are discussed.

  19. Interevent Time Distribution of Renewal Point Process, Case Study: Extreme Rainfall in South Sulawesi

    Science.gov (United States)

    Sunusi, Nurtiti

    2018-03-01

    The study of the time distribution of occurrences of extreme rain phenomena plays a very important role in weather analysis and forecasting for an area. The timing of extreme rainfall is difficult to predict because its occurrence is random. This paper aims to determine the interevent time distribution of extreme rain events, and the minimum waiting time until the next extreme event, through a point process approach. The phenomenon of extreme rain events over a given period of time follows a renewal process in which the time between events is a random variable τ. The distribution of the random variable τ is assumed to be Pareto, Log Normal, or Gamma, and a moment method is used to estimate the model parameters. Consider $R_t$, the time elapsed since the last extreme rain event at a location: if no extreme rain event has occurred up to $t_0$, there is a probability of an extreme rainfall event in ($t_0$, $t_0 + \delta t_0$). Furthermore, from the three models reviewed, the minimum waiting time until the next extreme rainfall is determined. The results show that the Log Normal model is better than the Pareto and Gamma models for predicting the next extreme rainfall in South Sulawesi, while the Pareto model cannot be used.
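
    The moment method is particularly compact in the Log Normal case: from the sample mean m and variance v of the interevent times, σ² = ln(1 + v/m²) and μ = ln m − σ²/2. A sketch with synthetic data follows; the real input would be the gaps between successive extreme-rainfall events at a station.

```python
import numpy as np

def lognormal_moment_fit(tau):
    """Method-of-moments fit of a lognormal to interevent times tau:
    sigma^2 = ln(1 + v / m^2), mu = ln(m) - sigma^2 / 2."""
    m, v = tau.mean(), tau.var()
    sigma2 = np.log1p(v / m ** 2)
    mu = np.log(m) - sigma2 / 2.0
    return mu, np.sqrt(sigma2)

# Synthetic interevent times (days) standing in for observed gaps
# between extreme rainfall events.
rng = np.random.default_rng(6)
tau = rng.lognormal(mean=3.0, sigma=0.8, size=500)
print("fitted (mu, sigma):", lognormal_moment_fit(tau))
```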

  20. The neutron capture cross section of the ${s}$-process branch point isotope $^{63}$Ni

    CERN Multimedia

    Neutron capture nucleosynthesis in massive stars plays an important role in Galactic chemical evolution as well as in the analysis of abundance patterns in very old metal-poor halo stars. The so-called weak $s$-process component, which is responsible for most of the $s$ abundances between Fe and Sr, turned out to be very sensitive to the stellar neutron capture cross sections in this mass region and, in particular, to those of isotopes near the seed distribution around Fe. In this context, the unstable isotope $^{63}$Ni is of particular interest because it represents the first branching point in the reaction path of the $s$-process. We propose to measure this cross section at n_TOF from thermal energies up to 500 keV, covering the entire range of astrophysical interest. These data are needed to replace uncertain theoretical predictions by first experimental information to understand the consequences of the $^{63}$Ni branching for the abundance pattern of the subsequent isotopes, especially for $^{63}$Cu and $^{...

  1. A study of the Boltzmann and Gibbs entropies in the context of a stochastic toy model

    Science.gov (United States)

    Malgieri, Massimiliano; Onorato, Pasquale; De Ambrosis, Anna

    2018-05-01

    In this article we reconsider a stochastic toy model of thermal contact, first introduced in Onorato et al (2017, Eur. J. Phys. 38 045102), showing its educational potential for clarifying some current issues in the foundations of thermodynamics. The toy model can be realized in practice using dice and coins, and can be seen as representing the thermal coupling of two subsystems with energy bounded from above. The system is used as a playground for studying the different behaviours of the Boltzmann and Gibbs temperatures and entropies in the approach to steady state. The process that models thermal contact between the two subsystems can be proved to be an ergodic, reversible Markov chain; thus the dynamics produces an equilibrium distribution in which the weight of each state is proportional to its multiplicity in terms of microstates. Each of the two subsystems, taken separately, is formally equivalent to an Ising spin system in the non-interacting limit. The model is intended for educational purposes, and the article is aimed at advanced undergraduates.
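
    A minimal simulation in this spirit is easy to set up, though the rules below (two sets of two-level "spins" sharing a fixed number of energy quanta, with random pairwise state swaps) are my assumptions rather than the article's exact dice-and-coins protocol. The swap move is ergodic and reversible over microstates, and the Boltzmann and cumulative (Gibbs) entropies of one subsystem can then be compared at the typical equilibrium energy.

```python
import numpy as np
from math import comb, log

rng = np.random.default_rng(7)

# Two subsystems of N1 and N2 two-level "spins" share E energy quanta;
# each step swaps the states of two randomly chosen spins (a reversible,
# ergodic move over the microstates with fixed total energy).
N1, N2, E, steps = 20, 20, 12, 20000
spins = np.zeros(N1 + N2, dtype=int)
spins[:E] = 1
rng.shuffle(spins)

e1_trace = []
for _ in range(steps):
    i, j = rng.integers(0, N1 + N2, size=2)
    spins[i], spins[j] = spins[j], spins[i]
    e1_trace.append(spins[:N1].sum())        # energy of subsystem 1

e1 = int(round(float(np.mean(e1_trace[steps // 2:]))))  # typical e1
S_boltzmann = log(comb(N1, e1))                         # ln multiplicity
S_gibbs = log(sum(comb(N1, k) for k in range(e1 + 1)))  # ln cumulative count
print(f"e1 ~ {e1}: S_Boltzmann = {S_boltzmann:.2f}, S_Gibbs = {S_gibbs:.2f}")
```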

  2. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    Science.gov (United States)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-12-01

    In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for the top-blown steel converter which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates of the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components, and the results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions with measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values, and the accuracy of the model compares favorably with models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.
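
    The principle behind equilibrium calculation by Gibbs energy minimization can be illustrated with a drastically simplified ideal-gas example: minimize the total Gibbs energy over species amounts subject to element balances. This sketch is not the PGE/MQM implementation described above (which uses non-ideal activity models for the metal and slag); the species set and the g_i/RT values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Toy CO/CO2/O2 system at fixed T and P, ideal-gas mixing only.
species = ["CO", "CO2", "O2"]
g_rt = np.array([-23.0, -47.0, 0.0])        # placeholder g_i/RT values
A = np.array([[1, 1, 0],                    # mol C per mol of each species
              [1, 2, 2]])                   # mol O per mol of each species
b = A @ np.array([1.0, 0.0, 0.6])           # element totals of the initial mix

def total_g(n):
    n = np.maximum(n, 1e-12)                # keep the logarithms finite
    return float(n @ (g_rt + np.log(n / n.sum())))

res = minimize(total_g, x0=np.array([0.5, 0.5, 0.3]),
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(0.0, None)] * 3, method="SLSQP")
print(dict(zip(species, np.round(res.x, 4))))
```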

  3. Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble

    Science.gov (United States)

    Bicci, Alberto

    2016-12-01

    In the domain of so-called econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and price formation process of a given stock by the grand-canonical Gibbs ensemble for the bid and ask orders. Applying Bose-Einstein statistics to this ensemble then allows one to derive the distributions of the sell and buy orders as functions of price. As a consequence, we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are functions of the minimum bid, maximum ask and closure prices of the stock, as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in the technical analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well-defined validation against real data, giving the model a falsifiable character.

  4. Demonstration and resolution of the Gibbs paradox of the first kind

    International Nuclear Information System (INIS)

    Peters, Hjalmar

    2014-01-01

    The Gibbs paradox of the first kind (GP1) refers to the false increase in entropy which, in statistical mechanics, is calculated from the process of combining two gas systems S1 and S2 consisting of distinguishable particles. Presented in a somewhat modified form, the GP1 manifests as a contradiction to the second law of thermodynamics. Contrary to popular belief, this contradiction affects not only classical but also quantum statistical mechanics. This paper resolves the GP1 by considering two effects. (i) The uncertainty about which particles are located in S1 and which in S2 contributes to the entropies of S1 and S2. (ii) S1 and S2 are correlated by the fact that if a certain particle is located in one system, it cannot be located in the other. As a consequence, the entropy of the total system consisting of S1 and S2 is not the sum of the entropies of S1 and S2.

  5. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    Science.gov (United States)

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn independently from the same distribution. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and, among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past.
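
    The CPP construction translates directly into a simulation recipe: draw n - 1 i.i.d. node depths, let consecutive tips coalesce at those depths, and split the tips recursively at the deepest depth. The sketch below builds the resulting ultrametric tree in Newick form; the exponential depth distribution is only a placeholder for the model-derived distribution of speciation times.

```python
import numpy as np

rng = np.random.default_rng(8)

def cpp_newick(depths, labels, top):
    """Ultrametric tree encoded by a CPP: consecutive tips i and i+1
    coalesce at depths[i]; the deepest depth splits the tips into two
    subtrees, handled recursively."""
    if len(labels) == 1:
        return f"{labels[0]}:{top:.3f}"
    k = int(np.argmax(depths))
    left = cpp_newick(depths[:k], labels[:k + 1], depths[k])
    right = cpp_newick(depths[k + 1:], labels[k + 1:], depths[k])
    return f"({left},{right}):{top - depths[k]:.3f}"

# n tips require n - 1 i.i.d. node depths; the exponential is a placeholder.
n = 6
depths = rng.exponential(1.0, n - 1)
print(cpp_newick(depths, [f"t{i}" for i in range(n)], depths.max()) + ";")
```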

  6. Proposed Empirical Entropy and Gibbs Energy Based on Observations of Scale Invariance in Open Nonequilibrium Systems.

    Science.gov (United States)

    Tuck, Adrian F

    2017-09-07

    There is no widely agreed definition of entropy, and consequently Gibbs energy, in open systems far from equilibrium. One recent approach has sought to formulate an entropy and Gibbs energy based on observed scale invariances in geophysical variables, particularly in atmospheric quantities, including the molecules constituting stratospheric chemistry. The Hamiltonian flux dynamics of energy in macroscopic open nonequilibrium systems maps to energy in equilibrium statistical thermodynamics, and corresponding equivalences of scale invariant variables with other relevant statistical mechanical variables, such as entropy, Gibbs energy, and 1/(k_B T), are not just formally analogous but are also mappings. Three proof-of-concept representative examples from available adequate stratospheric chemistry observations (temperature, wind speed and ozone) are calculated, with the aim of applying these mappings and equivalences. Potential applications of the approach to scale invariant observations from the literature, involving scales from molecular through laboratory to astronomical, are considered. Theoretical support for the approach from the literature is discussed.

  7. Approaching the r-process "waiting point" nuclei below ¹³²Sn: quadrupole collectivity in ¹²⁸Cd

    CERN Multimedia

    Reiter, P; Blazhev, A A; Nardelli, S; Voulot, D; Habs, D; Schwerdtfeger, W; Iwanicki, J S

    We propose to investigate the nucleus ¹²⁸Cd, neighbouring the r-process "waiting point" ¹³⁰Cd. A possible explanation for the peak in the solar r-abundances at A ≈ 130 is a quenching of the N = 82 shell closure for spherical nuclei below ¹³²Sn. This explanation seems to be in agreement with recent β-decay measurements performed at ISOLDE. In contrast to this picture, a beyond-mean-field approach would explain the anomaly in the excitation energy observed for ¹²⁸Cd rather with a quite large quadrupole collectivity. Therefore, we propose to measure the reduced transition strengths B(E2) between the ground state and the first excited 2⁺ state in ¹²⁸Cd, applying γ-spectroscopy with MINIBALL after "safe" Coulomb excitation of a post-accelerated beam obtained from REX-ISOLDE. Such a measurement came into reach only because of the source developments made in 2006 for experiment IS411, in particular the use of a heated quartz transfer line. The result from the proposed measure...

  8. Process-based coastal erosion modeling for Drew Point (North Slope, Alaska)

    Science.gov (United States)

    Ravens, Thomas M.; Jones, Benjamin M.; Zhang, Jinlin; Arp, Christopher D.; Schmutz, Joel A.

    2012-01-01

    A predictive, coastal erosion/shoreline change model has been developed for a small coastal segment near Drew Point, Beaufort Sea, Alaska. This coastal setting has experienced a dramatic increase in erosion since the early 2000s. The bluffs at this site are 3-4 m tall and consist of ice-wedge-bounded blocks of fine-grained sediments cemented by ice-rich permafrost and capped with a thin organic layer. The bluffs are typically fronted by a narrow (~5 m wide) beach or none at all. During a storm surge, the sea contacts the base of the bluff and a niche is formed through thermal and mechanical erosion. The niche grows both vertically and laterally and eventually undermines the bluff, leading to block failure or collapse. The fallen block is then eroded both thermally and mechanically by waves and currents, which must occur before a new niche-forming episode may begin. The erosion model explicitly accounts for and integrates a number of these processes, including: (1) storm surge generation resulting from wind and atmospheric forcing, (2) erosional niche growth resulting from wave-induced turbulent heat transfer and sediment transport (using the Kobayashi niche erosion model), and (3) thermal and mechanical erosion of the fallen block. The model was calibrated with historic shoreline change data for one time period (1979-2002), and validated with a later time period (2002-2007).

  9. Generating Impact Maps from Automatically Detected Bomb Craters in Aerial Wartime Images Using Marked Point Processes

    Science.gov (United States)

    Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian

    2018-04-01

    The aftermath of wartime attacks is often felt long after the war ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images that were taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing them from the current configuration, changing object positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favoured and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. For generating the impact map, a probability map is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map in a heterogeneous image stock.
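
    A toy sketch of the sampling loop just described, assuming circles instead of ellipses and a synthetic gradient image. A full reversible-jump implementation would additionally weight the acceptance ratio with proposal densities and dimension-matching terms; those are omitted here, so this is only the simulated-annealing skeleton of the method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gradient-magnitude image standing in for an aerial photo;
# in the paper this would come from the wartime image itself.
H = W = 64
grad = rng.random((H, W))

def border_score(c):
    """Mean gradient magnitude sampled along a circle (ellipse in the paper)."""
    x0, y0, r = c
    t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
    xs = np.clip((x0 + r * np.cos(t)).astype(int), 0, W - 1)
    ys = np.clip((y0 + r * np.sin(t)).astype(int), 0, H - 1)
    return grad[ys, xs].mean()

def energy(conf, overlap_penalty=1.0):
    """Low energy = strong borders and little overlap between objects."""
    e = -sum(border_score(c) for c in conf)
    for i in range(len(conf)):
        for j in range(i + 1, len(conf)):
            d = np.hypot(conf[i][0] - conf[j][0], conf[i][1] - conf[j][1])
            if d < conf[i][2] + conf[j][2]:
                e += overlap_penalty
    return e

conf, temp = [], 1.0
for it in range(5000):
    prop = list(conf)
    move = rng.integers(3)
    if move == 0 or not prop:                       # birth
        prop.append((rng.uniform(0, W), rng.uniform(0, H), rng.uniform(2, 8)))
    elif move == 1:                                 # death
        prop.pop(rng.integers(len(prop)))
    else:                                           # perturb one object
        i = rng.integers(len(prop))
        x, y, r = prop[i]
        prop[i] = (x + rng.normal(), y + rng.normal(), max(1.0, r + rng.normal()))
    dE = energy(prop) - energy(conf)
    if dE < 0 or rng.random() < np.exp(-dE / temp): # Metropolis acceptance
        conf = prop
    temp *= 0.999                                   # simulated-annealing cooling
```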

  10. A marked point process approach for identifying neural correlates of tics in Tourette Syndrome.

    Science.gov (United States)

    Loza, Carlos A; Shute, Jonathan B; Principe, Jose C; Okun, Michael S; Gunduz, Aysegul

    2017-07-01

    We propose a novel interpretation of local field potentials (LFP) based on a marked point process (MPP) framework that models relevant neuromodulations as shifted, weighted versions of prototypical temporal patterns. In particular, the MPP samples are categorized according to the well-known oscillatory rhythms of the brain in an effort to elucidate spectrally specific behavioral correlates. The result is a transient model for LFP. We exploit data-driven techniques to fully estimate the model parameters, with the added feature of exceptional temporal resolution of the resulting events. We utilize the learned features in the alpha and beta bands to assess correlations to tic events in patients with Tourette Syndrome (TS). The final results show stronger coupling between LFP recorded from the centromedian-parafascicular complex of the thalamus and the tic marks, in comparison to electrocorticogram (ECoG) recordings from the hand area of the primary motor cortex (M1), in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC) curve.

  11. An Optimized Multicolor Point-Implicit Solver for Unstructured Grid Applications on Graphics Processing Units

    Science.gov (United States)

    Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana

    2016-01-01

    In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large, tightly coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
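
    A minimal NumPy sketch of the multicolor point-implicit idea: grid points are grouped into colors such that points of one color share no coupling, so their small dense block solves are mutually independent (GPU-parallel in the paper; merely vectorized here). The data layout and names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def multicolor_point_implicit_sweep(D, offdiag, colors, b, x, n_sweeps=10):
    """Multicolor point-implicit relaxation sketch.

    D:       (n, bs, bs) dense diagonal blocks, one per grid point
    offdiag: dict mapping (i, j) -> (bs, bs) off-diagonal coupling block
    colors:  list of index arrays; points within one color share no edge,
             so their block solves can proceed independently.
    b, x:    (n, bs) right-hand side and solution iterate
    """
    for _ in range(n_sweeps):
        for group in colors:
            r = b.copy()
            for (i, j), blk in offdiag.items():
                r[i] -= blk @ x[j]          # gather neighbour contributions
            # Independent dense bs-x-bs solves for every point of this color.
            x[group] = np.linalg.solve(D[group], r[group, :, None])[..., 0]
    return x

# Tiny demo: two uncoupled points, block size 2.
D = np.stack([np.eye(2) * 4, np.eye(2) * 4])
x = multicolor_point_implicit_sweep(D, {}, [np.array([0]), np.array([1])],
                                    b=np.ones((2, 2)), x=np.zeros((2, 2)))
```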

  12. Plasmon point spread functions: How do we model plasmon-mediated emission processes?

    Science.gov (United States)

    Willets, Katherine A.

    2014-02-01

    A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ~5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
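
    A short sketch of the conventional localization step the Perspective critiques: fitting a symmetric 2D Gaussian to a diffraction-limited spot to extract the emission centroid. The dipole-PSF alternative it advocates requires a vectorial optical model and is not reproduced here; the synthetic spot and the initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian2d(p, xx, yy):
    A, x0, y0, s, bg = p
    return A * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * s**2)) + bg

def fit_centroid(image):
    """Fit a symmetric 2D Gaussian and return the centroid (x0, y0)."""
    yy, xx = np.indices(image.shape)
    p0 = [image.max() - image.min(),        # amplitude guess
          xx[image == image.max()].mean(),  # initial x0 at brightest pixel
          yy[image == image.max()].mean(),  # initial y0
          2.0,                              # width guess (pixels)
          image.min()]                      # background guess
    res = least_squares(lambda p: (gaussian2d(p, xx, yy) - image).ravel(), p0)
    return res.x[1], res.x[2]

# Synthetic noisy diffraction-limited spot, then recover its centroid.
yy, xx = np.indices((21, 21))
spot = gaussian2d([1.0, 10.3, 9.6, 2.5, 0.1], xx, yy)
spot += np.random.default_rng(2).normal(0, 0.01, spot.shape)
print(fit_centroid(spot))
```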

  13. ISRIA statement: ten-point guidelines for an effective process of research impact assessment.

    Science.gov (United States)

    Adam, Paula; Ovseiko, Pavel V; Grant, Jonathan; Graham, Kathryn E A; Boukhris, Omar F; Dowd, Anne-Maree; Balling, Gert V; Christensen, Rikke N; Pollitt, Alexandra; Taylor, Mark; Sued, Omar; Hinrichs-Krapels, Saba; Solans-Domènech, Maite; Chorzempa, Heidi

    2018-02-08

    As governments, funding agencies and research organisations worldwide seek to maximise both the financial and non-financial returns on investment in research, the way the research process is organised and funded is coming increasingly under scrutiny. There are growing demands and aspirations to measure research impact (beyond academic publications), to understand how science works, and to optimise its societal and economic impact. In response, a multidisciplinary practice called research impact assessment is rapidly developing. Given that the practice is still in its formative stage, systematised recommendations or accepted standards for practitioners (such as funders and those responsible for managing research projects) across countries or disciplines to guide research impact assessment are not yet available. In this statement, we propose initial guidelines for a rigorous and effective process of research impact assessment applicable to all research disciplines and oriented towards practice. This statement systematises expert knowledge and practitioner experience from designing and delivering the International School on Research Impact Assessment (ISRIA). It brings together insights from over 450 experts and practitioners from 34 countries, who participated in the school during its 5-year run (from 2013 to 2017), and shares a set of core values from the school's learning programme. These insights are distilled into ten-point guidelines, which relate to (1) context, (2) purpose, (3) stakeholders' needs, (4) stakeholder engagement, (5) conceptual frameworks, (6) methods and data sources, (7) indicators and metrics, (8) ethics and conflicts of interest, (9) communication, and (10) community of practice. The guidelines can help practitioners improve and standardise the process of research impact assessment, but they are by no means exhaustive and require evaluation and continuous improvement. The prima facie effectiveness of the guidelines is based on the systematised expert knowledge and practitioner experience from which they were derived.

  14. Existence and uniqueness of Gibbs states for a statistical mechanical polyacetylene model

    International Nuclear Information System (INIS)

    Park, Y.M.

    1987-01-01

    One-dimensional polyacetylene is studied as a model of statistical mechanics. In a semiclassical approximation the system is equivalent to a quantum XY model interacting with unbounded classical spins in one-dimensional lattice space Z. By establishing uniform estimates, an infinite-volume-limit Hilbert space, a strongly continuous time evolution group of unitary operators, and an invariant vector are constructed. Moreover, it is proven that any infinite-limit state satisfies Gibbs conditions. Finally, a modification of Araki's relative entropy method is used to establish the uniqueness of Gibbs states

  15. Determination of standard molar Gibbs energy of formation of Sm6UO12(s)

    International Nuclear Information System (INIS)

    Sahu, Manjulata; Dash, Smruti

    2015-01-01

    The standard molar Gibbs energies of formation of Sm₆UO₁₂(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as the solid electrolyte. Δ_f G°_m(T) for Sm₆UO₁₂(s) has been calculated using the measured values together with the required thermodynamic data from the literature. The fitted Gibbs energy expression in the temperature range 899 K to 1127 K is Δ_f G°_m(Sm₆UO₁₂, s, T) / (kJ·mol⁻¹) = −6681 + 1.099 (T/K), with a quoted uncertainty of ±2.3 kJ·mol⁻¹. (author)
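
    The fitted expression is a straight line in T, so evaluating it is a one-liner; a small sketch, with the range check mirroring the stated validity window:

```python
def delta_f_G_Sm6UO12(T_K: float) -> float:
    """Standard molar Gibbs energy of formation of Sm6UO12(s) in kJ/mol,
    from the linear fit quoted above (valid for 899 K <= T <= 1127 K,
    quoted uncertainty +/- 2.3 kJ/mol)."""
    if not 899 <= T_K <= 1127:
        raise ValueError("outside the fitted temperature range")
    return -6681 + 1.099 * T_K

print(delta_f_G_Sm6UO12(1000.0))  # -5582.0 kJ/mol
```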

  16. Gibbs free energy of formation of liquid lanthanide-bismuth alloys

    International Nuclear Information System (INIS)

    Sheng Jiawei; Yamana, Hajimu; Moriyama, Hirotake

    2001-01-01

    The linear free-energy relationship developed by Sverjensky and Molling provides a way to predict the Gibbs free energies of formation of liquid Ln-Bi alloys from the known thermodynamic properties of the aqueous trivalent lanthanide ions (Ln³⁺). The Ln-Bi alloys divide into two isostructural families, LnBi₂ (Ln = La, Ce, Pr, Nd and Pm) and LnBi (Ln = Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb). The calculated Gibbs free energy values agree well with experimental data.
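
    For orientation, the Sverjensky-Molling relationship has the general form below, stated from the broader literature rather than from this record's text; a, b and β are fitted separately for each isostructural family, and r is the radius of the aqueous cation:

```latex
\Delta G^{\circ}_{f,\mathrm{LnBi}_x}
  \;=\; a\,\Delta G^{\circ}_{f,\mathrm{Ln}^{3+}(\mathrm{aq})}
  \;+\; b
  \;+\; \beta\, r_{\mathrm{Ln}^{3+}}
```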

  17. Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Worning, Peder

    2004-01-01

    Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution, complicating such predictions. Thus, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates...
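
    A toy version of the Gibbs sampling alignment idea: repeatedly pick one peptide, build a position-specific scoring matrix from the core alignments of all the others, and resample that peptide's core start position. Real implementations add background frequencies, sequence weighting and an annealed sampling temperature; everything below (core length, pseudocounts, beta) is an illustrative assumption, and all peptides are assumed to be at least CORE residues long.

```python
import numpy as np

rng = np.random.default_rng(3)
AA = "ACDEFGHIKLMNPQRSTVWY"
CORE = 9  # MHC class II binding cores are typically 9-mers

def gibbs_align(peptides, n_iter=2000, beta=1.0):
    """Sample core start offsets, one peptide at a time."""
    offsets = [rng.integers(len(p) - CORE + 1) for p in peptides]
    for _ in range(n_iter):
        i = rng.integers(len(peptides))
        # Position-specific counts from all peptides except i (pseudocount 1).
        counts = np.ones((CORE, len(AA)))
        for j, p in enumerate(peptides):
            if j != i:
                for k in range(CORE):
                    counts[k, AA.index(p[offsets[j] + k])] += 1
        logp = np.log(counts / counts.sum(axis=1, keepdims=True))
        # Score every possible core start in peptide i and sample one.
        scores = np.array([
            sum(logp[k, AA.index(peptides[i][o + k])] for k in range(CORE))
            for o in range(len(peptides[i]) - CORE + 1)])
        w = np.exp(beta * (scores - scores.max()))
        offsets[i] = rng.choice(len(w), p=w / w.sum())
    return offsets
```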

  18. Point process models for localization and interdependence of punctate cellular structures.

    Science.gov (United States)

    Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F

    2016-07-01

    Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.

  19. Bubble point pressures of the selected model system for CatLiq® bio-oil process

    DEFF Research Database (Denmark)

    Toor, Saqib Sohail; Rosendahl, Lasse; Baig, Muhammad Noman

    2010-01-01

    The CatLiq® process is a second-generation catalytic liquefaction process for the production of bio-oil from WDGS (Wet Distillers Grains with Solubles) at subcritical conditions (280-350 °C and 225-250 bar) in the presence of a homogeneous alkaline and a heterogeneous zirconia catalyst. In this work, the bubble point pressures of a selected model mixture (CO2 + H2O + ethanol + acetic acid + octanoic acid) were measured to investigate the phase boundaries of the CatLiq® process. The bubble points were measured in the JEFRI-DBR high-pressure PVT phase behaviour system. The experimental results...

  20. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....
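
    A shot-noise Cox process with a Gaussian kernel coincides with a (modified) Thomas cluster process, which gives a direct way to simulate it; a minimal spatial-only sketch on the unit square, with all parameter values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_sncp(kappa, mu, sigma, win=1.0):
    """Simulate a shot-noise Cox process on [0, win]^2.

    With a Gaussian kernel the driving intensity is a sum of 'shots'
    centred at Poisson cluster centres, so the process can be simulated
    cluster-wise: each centre produces Poisson(mu) Gaussian offspring.
    (Boundary effects from parents outside the window are ignored here.)
    """
    centres = rng.uniform(0, win, size=(rng.poisson(kappa * win**2), 2))
    pts = [c + rng.normal(0, sigma, size=(rng.poisson(mu), 2)) for c in centres]
    pts = np.vstack(pts) if pts else np.empty((0, 2))
    inside = np.all((pts >= 0) & (pts <= win), axis=1)  # clip to the window
    return pts[inside]

points = simulate_sncp(kappa=20, mu=10, sigma=0.03)  # illustrative parameters
```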

  1. Equilibrium statistical mechanics for self-gravitating systems: local ergodicity and extended Boltzmann-Gibbs/White-Narayan statistics

    Science.gov (United States)

    He, Ping

    2012-01-01

    The long-standing puzzle surrounding the statistical mechanics of self-gravitating systems has not yet been solved successfully. We formulate a systematic theoretical framework of entropy-based statistical mechanics for spherically symmetric collisionless self-gravitating systems. We use an approach that is very different from that of the conventional statistical mechanics of short-range interaction systems. We demonstrate that the equilibrium states of self-gravitating systems consist of both mechanical and statistical equilibria, with the former characterized by a series of velocity-moment equations and the latter by statistical equilibrium equations, which should be derived from the entropy principle. The velocity-moment equations of all orders are derived from the steady-state collisionless Boltzmann equation. We point out that the ergodicity is invalid for the whole self-gravitating system, but it can be re-established locally. Based on the local ergodicity, using Fermi-Dirac-like statistics, with the non-degenerate condition and the spatial independence of the local microstates, we rederive the Boltzmann-Gibbs entropy. This is consistent with the validity of the collisionless Boltzmann equation, and should be the correct entropy form for collisionless self-gravitating systems. Apart from the usual constraints of mass and energy conservation, we demonstrate that the series of moment or virialization equations must be included as additional constraints on the entropy functional when performing the variational calculus; this is an extension to the original prescription by White & Narayan. Any possible velocity distribution can be produced by the statistical-mechanical approach that we have developed with the extended Boltzmann-Gibbs/White-Narayan statistics. Finally, we discuss the questions of negative specific heat and ensemble inequivalence for self-gravitating systems.

  2. Discrete tomographic reconstruction of 2D polycrystal orientation maps from X-ray diffraction projections using Gibbs priors

    DEFF Research Database (Denmark)

    Rodek, L.; Knudsen, E.; Poulsen, H.F.

    2005-01-01

    discrete tomographic algorithm, applying image-modelling Gibbs priors and a homogeneity condition. The optimization of the objective function is accomplished via the Gibbs Sampler in conjunction with simulated annealing. In order to express the structure of the orientation map, the similarity...

  3. Experimental Determination of Third Derivative of the Gibbs Free Energy, G II

    DEFF Research Database (Denmark)

    Koga, Yoshikata; Westh, Peter; Inaba, Akira

    2010-01-01

    We have been evaluating third derivative quantities of the Gibbs free energy, G, by graphically differentiating the second derivatives that are accessible experimentally, and demonstrated their power in elucidating the mixing schemes in aqueous solutions. Here we determine directly one of the third...

  4. Specification and comparative calculation of enthalpies and Gibbs formation energies of anhydrous lanthanide nitrates

    International Nuclear Information System (INIS)

    Del' Pino, Kh.; Chukurov, P.M.; Drakin, S.I.

    1980-01-01

    The results of experimental determinations of the formation enthalpies of the anhydrous nitrates of lanthanum, cerium, praseodymium, neodymium and samarium are analyzed. Using the method of comparative calculation, the enthalpies of formation of the anhydrous lanthanide and yttrium nitrates are computed. The calculated values of the enthalpies and Gibbs energies of formation of the anhydrous lanthanide nitrates are tabulated.

  5. Experimental Pragmatics and What Is Said: A Response to Gibbs and Moise.

    Science.gov (United States)

    Nicolle, Steve; Clark, Billy

    1999-01-01

    Attempts to replicate the experiments of Gibbs and Moise (1997) regarding the recognition of a distinction between what is said and what is implicated. Results showed that, under certain conditions, subjects selected implicatures when asked to select the paraphrase best reflecting what a speaker had said. Suggests that the results can be explained with the…

  6. Uniqueness of Gibbs states and global Markov property for Euclidean fields

    International Nuclear Information System (INIS)

    Albeverio, S.; Høegh-Krohn, R.

    1981-01-01

    The authors briefly discuss the proof of the uniqueness of solutions of the DLR equations (uniqueness of Gibbs states) in the class of regular generalized random fields (in the sense of having second moments bounded by those of some Euclidean field), for the Euclidean fields with trigonometric interaction. (Auth.)

  7. Estimates of Gibbs free energies of formation of chlorinated aliphatic compounds

    NARCIS (Netherlands)

    Dolfing, Jan; Janssen, Dick B.

    1994-01-01

    The Gibbs free energy of formation of chlorinated aliphatic compounds was estimated with Mavrovouniotis' group contribution method. The group contribution of chlorine was estimated from the scarce data available on chlorinated aliphatics in the literature, and found to vary somewhat according to the

  8. Standard Gibbs free energies of reactions of ozone with free radicals in aqueous solution: quantum-chemical calculations.

    Science.gov (United States)

    Naumov, Sergej; von Sonntag, Clemens

    2011-11-01

    Free radicals are common intermediates in the chemistry of ozone in aqueous solution. Their reactions with ozone have been probed by calculating the standard Gibbs free energies of such reactions using density functional theory (Jaguar 7.6 program). O₂ reacts fast and irreversibly only with simple carbon-centered radicals. In contrast, ozone also reacts irreversibly with conjugated carbon-centered radicals such as bisallylic (hydroxycyclohexadienyl) radicals, with conjugated carbon/oxygen-centered radicals such as phenoxyl radicals, and even with nitrogen-, oxygen-, sulfur-, and halogen-centered radicals. In these reactions, further ozone-reactive radicals are generated. Chain reactions may destroy ozone without giving rise to products other than O₂. This may be of importance when ozonation is used in pollution control, and reactions of free radicals with ozone have to be taken into account in modeling such processes.

  9. The standard Gibbs free energy of formation of lithium manganese oxides at the temperatures of (680, 740 and 800) K

    International Nuclear Information System (INIS)

    Rog, G.; Kucza, W.; Kozlowska-Rog, A.

    2004-01-01

    The standard Gibbs free energies of formation of LiMnO₂ and LiMn₂O₄ at the temperatures of (680, 740 and 800) K have been determined with the help of solid-state galvanic cells involving a lithium-β-alumina electrolyte. The equilibrium electrical potentials of a cathode containing LiₓMn₂O₄ spinel, in the composition ranges 0 ≤ x ≤ 1 and 1 ≤ x ≤ 2, vs. metallic lithium in the reversible intercalation galvanic cell have been calculated. The existence of two voltage plateaus, which appear during the charging and discharging processes in the reversible intercalation of lithium into LiₓMn₂O₄ spinel, is discussed.
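
    The thermodynamic link behind such cell measurements is the standard EMF relation, together with the standard result that the open-circuit voltage of an intercalation cell against lithium metal is a composition derivative of the Gibbs energy. Both are textbook relations quoted here as background, not equations taken from the paper itself:

```latex
\Delta_r G^{\circ} = -zFE^{\circ},
\qquad
E(x) = -\frac{1}{F}\,
       \frac{\partial G\!\left(\mathrm{Li}_x\mathrm{Mn}_2\mathrm{O}_4\right)}{\partial x}
```

    A composition range over which the derivative of G with respect to x is nearly constant appears as one of the voltage plateaus mentioned above.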

  10. Strong approximations and sequential change-point analysis for diffusion processes

    DEFF Research Database (Denmark)

    Mihalache, Stefan-Radu

    2012-01-01

    In this paper ergodic diffusion processes depending on a parameter in the drift are considered under the assumption that the processes can be observed continuously. Strong approximations by Wiener processes for a stochastic integral and for the estimator process constructed by the one...

  11. On P-Adic Quasi Gibbs Measures for Q + 1-State Potts Model on the Cayley Tree

    International Nuclear Information System (INIS)

    Mukhamedov, Farrukh

    2010-06-01

    In the present paper we introduce a new class of p-adic measures associated with the q+1-state Potts model, called p-adic quasi Gibbs measures, which are totally different from the p-adic Gibbs measures. We establish the existence of p-adic quasi Gibbs measures for the model on a Cayley tree. If q is divisible by p, then we prove the occurrence of a strong phase transition. If q and p are relatively prime, then there is a quasi phase transition. These results are totally different from the results of [F.M. Mukhamedov, U.A. Rozikov, Indag. Math. N.S. 15 (2005) 85-100], since here q is divisible by p, which means that q+1 is not divisible by p; according to a main result of the mentioned paper, there is then a unique and bounded p-adic Gibbs measure (different from the p-adic quasi Gibbs measure). (author)

  12. Prediction of Gibbs energies of formation and stability constants of some secondary uranium minerals containing the uranyl group

    International Nuclear Information System (INIS)

    Genderen, A.C.G. van; Weijden, C.H. van der

    1984-01-01

    For a group of minerals containing a common anion there exists a linear relationship between two parameters called ΔO and ΔF. ΔO is defined as the difference between the Gibbs energy of formation of a solid oxide and the Gibbs energy of formation of its aqueous cation, while ΔF is defined as the Gibbs energy of reaction of the formation of a mineral from the constituting oxide(s) and the acid. Using the Gibbs energies of formation of a number of known minerals, the corresponding ΔO's and ΔF's were calculated, and with the resulting regression equation it is possible to predict values for the Gibbs energies of formation of other minerals containing the same anion. This was done for 29 minerals containing the uranyl ion together with phosphate, vanadate, arsenate or carbonate. (orig.)
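
    The prediction step is ordinary least-squares regression of ΔF on ΔO followed by back-substitution; a small sketch with made-up (ΔO, ΔF) pairs standing in for the calibration minerals used in the paper:

```python
import numpy as np

# Hypothetical (ΔO, ΔF) pairs in kJ/mol for minerals sharing the same
# anion -- stand-ins for the known uranyl minerals used in the paper.
delta_O = np.array([-120.0, -95.0, -60.0, -30.0])
delta_F = np.array([-310.0, -270.0, -215.0, -168.0])

# Linear relationship ΔF = a·ΔO + b fitted by least squares.
a, b = np.polyfit(delta_O, delta_F, 1)

# Predict ΔF for a new mineral from its cation's ΔO; the Gibbs energy of
# formation is then recovered from ΔF via its definition, i.e. by adding
# back the Gibbs energies of the constituting oxide(s) and the acid.
delta_F_new = a * (-80.0) + b
```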

  13. Concerning the acid dew point in waste gases from combustion processes

    Energy Technology Data Exchange (ETDEWEB)

    Knoche, K.F.; Deutz, W.; Hein, K.; Derichs, W.

    1986-09-01

    The paper discusses the problems associated with the measurement of the acid dew point and of sulphuric acid (i.e., SO₃) concentrations in the flue gas from brown-coal-fired boiler plants. The sulphuric acid content in brown coal flue gas has been measured at 0.5 to 3 vpm at SO₂ concentrations of 200 to 800 vpm. Using a conditional equation, the derivation of which from new formulae for phase stability is described in the paper, an acid dew point temperature of 115 to 125 °C is produced.
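
    The paper's own conditional equation is not reproduced in this record, but the kind of calculation involved can be illustrated with the Verhoff-Banchero dew-point correlation as it is commonly reported in the literature. The coefficients below are quoted from that general literature and should be treated as an assumption to be checked, not as this paper's equation:

```python
import numpy as np

def acid_dew_point_K(p_h2o_mmHg, p_h2so4_mmHg):
    """Sulphuric-acid dew point via the Verhoff-Banchero correlation as
    commonly reported (partial pressures in mmHg).  Shown only to
    illustrate the calculation; NOT the conditional equation derived in
    the paper above."""
    lw, ls = np.log(p_h2o_mmHg), np.log(p_h2so4_mmHg)
    inv_T = (2.276 - 0.0294 * lw - 0.0858 * ls + 0.0062 * lw * ls) / 1000.0
    return 1.0 / inv_T

# Roughly 8 % water vapour and ~1 vpm H2SO4 at atmospheric pressure.
T = acid_dew_point_K(p_h2o_mmHg=0.08 * 760, p_h2so4_mmHg=1e-6 * 760)
print(T - 273.15, "degC")   # ~113 degC, close to the range quoted above
```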

  14. Comparison of Clothing Cultures from the View Point of Funeral Procession

    OpenAIRE

    増田, 美子; 大枝, 近子; 梅谷, 知世; 杉本, 浄; 内村, 理奈

    2011-01-01

    The object of this study was to research dress at funeral ceremonies and to clarify the points of difference and commonality between the respective cultural spheres of Buddhism, Hinduism, Islam and Christianity. In the year 21, we tried to grasp the reality of the costumes of funeral courtesy in modern times and the present day. As a result it became clear that in Japan, the Buddhist cultural sphere, and in China and Taiwan, the Buddhism, the Confucianism and the Taoism intermingled cultura...

  15. Fixed-point Characterization of Compositionality Properties of Probabilistic Processes Combinators

    Directory of Open Access Journals (Sweden)

    Daniel Gebler

    2014-08-01

    Bisimulation metric is a robust behavioural semantics for probabilistic processes. Given any SOS specification of probabilistic processes, we provide a method to compute for each operator of the language its respective metric compositionality property. The compositionality property of an operator is defined as its modulus of continuity, which gives the relative increase of the distance between processes when they are combined by that operator. The compositionality property of an operator is computed by recursively counting how many times the combined processes are copied along their evolution. The compositionality properties allow one to derive an upper bound on the distance between processes by purely inspecting the operators used to specify those processes.

  16. Focal Points, Endogenous Processes, and Exogenous Shocks in the Autism Epidemic

    Science.gov (United States)

    Liu, Kayuet; Bearman, Peter S.

    2015-01-01

    Autism prevalence has increased rapidly in the United States during the past two decades. We have previously shown that the diffusion of information about autism through spatially proximate social relations has contributed significantly to the epidemic. This study expands on this finding by identifying the focal points for interaction that drive…

  17. Multiscale change-point analysis of inhomogeneous Poisson processes using unbalanced wavelet decompositions

    NARCIS (Netherlands)

    Jansen, M.H.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    We present a continuous wavelet analysis of count data with time-varying intensities. The objective is to extract intervals with significant intensities from background intervals. This includes the precise starting point of the significant interval, its exact duration and the (average) level of

  18. Phase-equilibria for design of coal-gasification processes: dew points of hot gases containing condensible tars. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Prausnitz, J.M.

    1980-05-01

    This research is concerned with the fundamental physical chemistry and thermodynamics of condensation of tars (dew points) from the vapor phase at advanced temperatures and pressures. Fundamental quantitative understanding of dew points is important for rational design of heat exchangers to recover sensible heat from hot, tar-containing gases that are produced in coal gasification. This report includes essentially six contributions toward establishing the desired understanding: (1) Characterization of Coal Tars for Dew-Point Calculations; (2) Fugacity Coefficients for Dew-Point Calculations in Coal-Gasification Process Design; (3) Vapor Pressures of High-Molecular-Weight Hydrocarbons; (4) Estimation of Vapor Pressures of High-Boiling Fractions in Liquefied Fossil Fuels Containing Heteroatoms Nitrogen or Sulfur; and (5) Vapor Pressures of Heavy Liquid Hydrocarbons by a Group-Contribution Method.

  19. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    Science.gov (United States)

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
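
    A CPU sketch of the two steps described above: shift-based back-projection using a precomputed lookup table, then variance-based classification of focus versus off-focus pixels. Array shapes, the quantile threshold and the use of np.roll are illustrative assumptions; the paper performs this per depth plane, in parallel, on a GPU.

```python
import numpy as np

def reconstruct_depth(elemental, shift_lut):
    """Back-project elemental images with precomputed integer shifts for
    one depth plane; classify pixels as focus/off-focus by sample variance.

    elemental: (K, H, W) array of K two-dimensional elemental images
    shift_lut: (K, 2) integer (dy, dx) shifts for this depth plane (the
               lookup table mentioned in the abstract)
    """
    K, H, W = elemental.shape
    stack = np.empty_like(elemental, dtype=float)
    for k in range(K):
        dy, dx = shift_lut[k]
        stack[k] = np.roll(elemental[k], (dy, dx), axis=(0, 1))
    depth_img = stack.mean(axis=0)
    variance = stack.var(axis=0)        # low variance -> consistent samples
    focus_mask = variance < np.quantile(variance, 0.2)  # illustrative cut
    return np.where(focus_mask, depth_img, 0.0), focus_mask
```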

  20. Second-order analysis of inhomogeneous spatial point processes with proportional intensity functions

    DEFF Research Database (Denmark)

    Guan, Yongtao; Waagepetersen, Rasmus; Beale, Colin M.

    2008-01-01

    of the intensity functions. The first approach is based on nonparametric kernel-smoothing, whereas the second approach uses a conditional likelihood estimation approach to fit a parametric model for the pair correlation function. A great advantage of the proposed methods is that they do not require the often...... to two spatial point patterns regarding the spatial distributions of birds in the U.K.'s Peak District in 1990 and 2004....

  1. Fractal Point Process and Queueing Theory and Application to Communication Networks

    National Research Council Canada - National Science Library

    Wornell, Gregory

    1999-01-01

    .... A unifying theme in the approaches to these problems has been an integration of interrelated perspectives from communication theory, information theory, signal processing theory, and control theory...

  2. Process of extracting oil from stones and sands. [heating below cracking temperature and above boiling point of oil

    Energy Technology Data Exchange (ETDEWEB)

    Bergfeld, K

    1935-03-09

    A process of extracting oil from stones or sands bearing oils is characterized by the stones and sands being heated in a suitable furnace to a temperature below that of cracking and preferably slightly higher than the boiling-point of the oils. The oily vapors are removed from the treating chamber by means of flushing gas.

  3. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source

    DEFF Research Database (Denmark)

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten

    2015-01-01

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial...... the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use....

  4. Estimating functions for inhomogeneous spatial point processes with incomplete covariate data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    and this leads to parameter estimation error which is difficult to quantify. In this paper we introduce a Monte Carlo version of the estimating function used in "spatstat" for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function it is feasible...

  5. Estimating functions for inhomogeneous spatial point processes with incomplete covariate data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2008-01-01

    and this leads to parameter estimation error which is difficult to quantify. In this paper, we introduce a Monte Carlo version of the estimating function used in spatstat for fitting inhomogeneous Poisson processes and certain inhomogeneous cluster processes. For this modified estimating function, it is feasible...

  6. Hazard rate model and statistical analysis of a compound point process

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2005-01-01

    Roč. 41, č. 6 (2005), s. 773-786 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * Cox regression model * intensity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.343, year: 2005

  7. Congruence from the operator's point of view: compositionality requirements on process semantics

    NARCIS (Netherlands)

    Gazda, M.; Fokkink, W.J.

    2010-01-01

    One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this

  8. Congruence from the operator's point of view : compositionality requirements on process semantics

    NARCIS (Netherlands)

    Gazda, M.W.; Fokkink, W.J.; Aceto, L.; Sobocinski, P.

    2010-01-01

    One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this

  9. Scientific evidence is just the starting point: A generalizable process for developing sports injury prevention interventions

    Directory of Open Access Journals (Sweden)

    Alex Donaldson

    2016-09-01

    Conclusion: This systematic yet pragmatic and iterative intervention development process is potentially applicable to any injury prevention topic across all sports settings and levels. It will guide researchers wishing to undertake intervention development.

  10. Main points of research in crude oil processing and petrochemistry. [German Democratic Republic

    Energy Technology Data Exchange (ETDEWEB)

    Keil, G.; Nowak, S.; Fiedrich, G.; Klare, H.; Apelt, E.

    1982-04-01

    This article analyzes general aspects in the development of petrochemistry and carbochemistry on a global scale and for industry in the German Democratic Republic. Diagrams are given for liquid and solid carbon resources and their natural hydrogen content showing the increasing hydrogen demand for chemical fuel conversion processes. The petrochemical and carbochemical industry must take a growing level of hydrogen demand into account, which is at present 25 Mt/a on a global scale and which increases by 7% annually. Various methods for chemical processing of crude oil and crude oil residues are outlined. Advanced coal conversion processes with prospects for future application in the GDR are also explained, including the methanol carbonylation process, which achieves 90% selectivity and which is based on carbon monoxide hydrogenation, further the Transcat process, using ethane for vinyl chloride production. Acetylene and carbide carbochemistry in the GDR is a further major line in research and development. Technological processes for the pyrolysis of vacuum gas oil are also evaluated. (27 refs.)

  11. Gibbs free energy of formation of lanthanum rhodate by quadrupole mass spectrometer

    International Nuclear Information System (INIS)

    Prasad, R.; Banerjee, Aparna; Venugopal, V.

    2003-01-01

    The ternary oxide in the system La-Rh-O is of considerable importance because of its application in catalysis. Phase equilibria in the pseudo-binary system La₂O₃-Rh₂O₃ have been investigated by Shevyakov et al. The Gibbs free energy of LaRhO₃(s) was determined by Jacob et al. using a solid-state galvanic cell in the temperature range 890 to 1310 K. No other thermodynamic data were available in the literature. Hence it was decided to determine the Gibbs free energy of formation of LaRhO₃(s) by an independent technique, viz. a quadrupole mass spectrometer (QMS) coupled with a Knudsen effusion cell, and the results are presented

  12. Molar Surface Gibbs Energy of the Aqueous Solution of Ionic Liquid [C4mim][OAc]

    Institute of Scientific and Technical Information of China (English)

    TONG Jing; ZHENG Xu; TONG Jian; QU Ye; LIU Lu; LI Hui

    2017-01-01

    The values of density and surface tension for aqueous solutions of the ionic liquid (IL) 1-butyl-3-methylimidazolium acetate ([C4mim][OAc]) with various molalities were measured in the range of 288.15-318.15 K at intervals of 5 K. On the basis of thermodynamics, a semi-empirical model, the molar surface Gibbs energy model of the ionic liquid solution, which can be used to predict the surface tension or molar volume of solutions, was put forward. The predicted values of the surface tension for aqueous [C4mim][OAc] and the corresponding experimental ones were highly correlated and extremely similar. In terms of the concept of the molar surface Gibbs energy, a new Eötvös equation was obtained, and each parameter of the new equation has a clear physical meaning.

  13. Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Lund, Ole; Nielsen, Morten

    2013-01-01

    Motivation: Proteins recognizing short peptide fragments play a central role in cellular signaling. As a result of high-throughput technologies, peptide-binding protein specificities can be studied using large peptide libraries at dramatically lower cost and time. Interpretation of such large...... peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs, and/or the motifs are found at different locations within distinct peptides.Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities...... of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule.Availability: The Gibbs clustering method...

  14. Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus

    2012-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given....... We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
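
    The key trick described above is that, when the proposal re-simulates part of the model from the prior conditional on the rest, the prior cancels from the Metropolis ratio and only the likelihood ratio remains. A schematic sketch of one such step, where prior_simulator and loglik are user-supplied stand-ins (assumptions, not the authors' API):

```python
import numpy as np

rng = np.random.default_rng(5)

def sequential_gibbs_step(model, resim_frac, prior_simulator, loglik):
    """One extended-Metropolis step: re-simulate a random subset of the
    model from the prior conditional on the rest (sequential simulation),
    then accept or reject using the likelihood ratio only -- the prior is
    sampled implicitly by the proposal mechanism."""
    n = model.size
    idx = rng.choice(n, size=max(1, int(resim_frac * n)), replace=False)
    proposal = model.copy()
    proposal[idx] = prior_simulator(model, idx)   # conditional re-simulation
    if np.log(rng.random()) < loglik(proposal) - loglik(model):
        return proposal
    return model
```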

  15. The thermodynamic properties of the upper continental crust: Exergy, Gibbs free energy and enthalpy

    International Nuclear Information System (INIS)

    Valero, Alicia; Valero, Antonio; Vieillard, Philippe

    2012-01-01

    This paper shows a comprehensive database of the thermodynamic properties of the most abundant minerals of the upper continental crust. For those substances whose thermodynamic properties are not listed in the literature, their enthalpy and Gibbs free energy are calculated with 11 different estimation methods described in this study, with associated errors of up to 10% with respect to values published in the literature. Thanks to this procedure we have been able to make a first estimation of the enthalpy, Gibbs free energy and exergy of the bulk upper continental crust and of each of the nearly 300 most abundant minerals contained in it. Finally, the chemical exergy of the continental crust is compared to the exergy of the concentrated mineral resources. The numbers obtained indicate the huge chemical exergy wealth of the crust: 6 × 10⁶ Gtoe. However, this study shows that approximately only 0.01% of that amount can be effectively used by man.

  16. Direct measurements of the Gibbs free energy of OH using a CW tunable laser

    Science.gov (United States)

    Killinger, D. K.; Wang, C. C.

    1979-01-01

    The paper describes an absorption measurement for determining the Gibbs free energy of OH generated in a mixture of water and oxygen vapor. These measurements afford a direct verification of the accuracy of thermochemical data of H2O at high temperatures and pressures. The results indicate that values for the heat capacity of H2O obtained through numerical computations are correct within an experimental uncertainty of 0.15 cal/(mol·K).

  17. Standard Gibbs free energies for transfer of actinyl ions at the aqueous/organic solution interface

    International Nuclear Information System (INIS)

    Kitatsuji, Yoshihiro; Okugaki, Tomohiko; Kasuno, Megumi; Kubota, Hiroki; Maeda, Kohji; Kimura, Takaumi; Yoshida, Zenko; Kihara, Sorin

    2011-01-01

    Research highlights: → Standard Gibbs free energies of transfer of tri- to hexavalent actinide ions. → Determination is based on a distribution method combined with ion-transfer voltammetry. → Organic solvents examined are nitrobenzene, DCE, benzonitrile, acetophenone and NPOE. → Gibbs free energies of U(VI), Np(VI) and Pu(VI) are similar to each other. → The Gibbs free energy of Np(V) is very large compared with those of ordinary monovalent cations. - Abstract: Standard Gibbs free energies of transfer (ΔG°tr) of actinyl ions (AnO₂^z+; z = 2 or 1; An = U, Np or Pu) between an aqueous solution and an organic solution were determined by a distribution method combined with voltammetry for ion transfer at the interface of two immiscible electrolyte solutions. The organic solutions examined were nitrobenzene, 1,2-dichloroethane, benzonitrile, acetophenone, and 2-nitrophenyl octyl ether. Irrespective of the type of organic solution, the ΔG°tr of UO₂²⁺, NpO₂²⁺ and PuO₂²⁺ were nearly equal to each other and slightly larger than that of Mg²⁺. The ΔG°tr of NpO₂⁺ was extraordinarily large compared with those of ordinary monovalent cations. The dependence of the ΔG°tr of AnO₂^z+ on the type of organic solution was similar to that of H⁺ or Mg²⁺. The ΔG°tr of An³⁺ and An⁴⁺ were also discussed briefly.

  18. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....

  19. Modelling estimation and analysis of dynamic processes from image sequences using temporal random closed sets and point processes with application to the cell exocytosis and endocytosis

    OpenAIRE

    Díaz Fernández, Ester

    2010-01-01

    In this thesis, new models and methodologies are introduced for the analysis of dynamic processes characterized by image sequences with spatial temporal overlapping. The spatial temporal overlapping exists in many natural phenomena and should be addressed properly in several Science disciplines such as Microscopy, Material Sciences, Biology, Geostatistics or Communication Networks. This work is related to the Point Process and Random Closed Set theories, within Stochastic Ge...

  20. Steam generators secondary side chemical cleaning at Point Lepreau using the Siemens high temperature process

    International Nuclear Information System (INIS)

    Verma, K.; MacNeil, C.; Odar, S.; Kuhnke, K.

    1997-01-01

    This paper describes the chemical cleaning of the four steam generators at the Point Lepreau facility, which was accomplished as a part of a normal service outage. The steam generators had been in service for twelve years. Sludge samples showed the main elements were Fe, P and Na, with minor amounts of Ca, Mg, Mn, Cr, Zn, Cl, Cu, Ni, Ti, Si, and Pb, 90% in the form of Magnetite, substantial phosphate, and trace amounts of silicates. The steam generators were experiencing partial blockage of broached holes in the TSPs, and corrosion on tube ODs in the form of pitting and wastage. In addition heat transfer was clearly deteriorating. More than 1000 kg of magnetite and 124 kg of salts were removed from the four steam generators

  1. The role of point defects and defect complexes in silicon device processing. Summary report and papers

    Energy Technology Data Exchange (ETDEWEB)

    Sopori, B.; Tan, T.Y.

    1994-08-01

    This report is a summary of a workshop held on August 24-26, 1992. Session 1 of the conference discussed the characteristics of various commercial photovoltaic silicon substrates, the nature of impurities and defects in them, and how they are related to the material growth. Session 2 on point defects reviewed the capabilities of theoretical approaches to determine the equilibrium structure of defects in the silicon lattice arising from transition-metal impurities and hydrogen. Session 3 was devoted to a discussion of the surface photovoltage method for characterizing bulk wafer lifetimes, and to detailed studies on the effectiveness of various gettering operations in reducing the deleterious effects of transition metals. Papers presented at the conference are also included in this summary report.

  2. Congruence from the Operator's Point of View: Compositionality Requirements on Process Semantics

    Directory of Open Access Journals (Sweden)

    Maciej Gazda

    2010-08-01

    One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this paper we suggest a novel, orthogonal approach. Namely, we focus on a number of process operators, and for each of them attempt to find the widest possible class of congruences. To this end, we impose restrictions on sublanguages of Hennessy-Milner logic, so that a semantics whose modal characterization satisfies a given criterion is guaranteed to be a congruence with respect to the operator in question. We investigate action prefix, alternative composition, two restriction operators, and parallel composition.

  3. Gibbs paradox of entropy of mixing experimental facts. Its rejection, and the theoretical consequences

    International Nuclear Information System (INIS)

    Lin, Shu-Kun

    1996-01-01

    The Gibbs paradox statement of the entropy of mixing has been regarded as a theoretical foundation of statistical mechanics, quantum theory and biophysics. However, all the relevant chemical experimental observations and logical analyses indicate that the Gibbs paradox statement is false. I prove that this statement is wrong: the Gibbs paradox statement implies that entropy decreases with increasing symmetry (as represented by a symmetry number σ; see any statistical mechanics textbook). From group theory, any system has at least the symmetry number σ = 1, corresponding to the identity operation for a strictly asymmetric system. It follows that the entropy of a system would be equal to, or less than, zero. However, from either the von Neumann-Shannon entropy formula (S = −Σᵢ pᵢ ln pᵢ) or the Boltzmann entropy formula (S = ln W) and the original definition, entropy is non-negative. Therefore, this statement is false. It should not be a surprise that, for the first time, many outstanding problems, such as the validity of Pauling's resonance theory, the explanation of second-order phase transition phenomena, the biophysical problem of protein folding and the related hydrophobic effect, etc., can be solved. Empirical principles such as the Pauli principle (and Hund's rule) and the HSAB principle, etc., can also be given a theoretical explanation

  4. Screening disrupted molecular functions and pathways associated with clear cell renal cell carcinoma using Gibbs sampling.

    Science.gov (United States)

    Nan, Ning; Chen, Qi; Wang, Yu; Zhai, Xu; Yang, Chuan-Ce; Cao, Bin; Chong, Tie

    2017-10-01

    To explore the disturbed molecular functions and pathways in clear cell renal cell carcinoma (ccRCC) using Gibbs sampling. Gene expression data of ccRCC samples and adjacent non-tumor renal tissues were recruited from a publicly available database. Then, the molecular functions of genes with changed expression in ccRCC were classified using the Gene Ontology (GO) project, and these molecular functions were converted into Markov chains. A Markov chain Monte Carlo (MCMC) algorithm was implemented to perform posterior inference and identify the probability distributions of molecular functions in Gibbs sampling. Differentially expressed molecular functions were selected under a posterior value of more than 0.95, and genes appearing in differentially expressed molecular functions ≥5 times were defined as pivotal genes. Functional analysis was employed to explore the pathways of the pivotal genes and their strongly co-regulated genes. In this work, we obtained 396 molecular functions, and 13 of them were differentially expressed. Oxidoreductase activity showed the highest posterior value. Gene composition analysis identified 79 pivotal genes, and survival analysis indicated that these pivotal genes could be used as a strong independent predictor of poor prognosis in patients with ccRCC. Pathway analysis identified one pivotal pathway: oxidative phosphorylation. We identified the differentially expressed molecular functions and a pivotal pathway in ccRCC using Gibbs sampling. The results could be considered as potential signatures for early detection and therapy of ccRCC. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The Gibbs-Thomson equation for a spherical coherent precipitate with applications to nucleation

    International Nuclear Information System (INIS)

    Rottman, C.; Voorhees, P.W.; Johnson, W.C.

    1988-01-01

    The conditions for interfacial thermodynamic equilibrium form the basis for the derivation of a number of basic equations in materials science, including the various forms of the Gibbs-Thomson equation. The equilibrium conditions pertaining to a curved interface in a two-phase fluid system are well-known. In contrast, the conditions for thermodynamic equilibrium at a curved interface in nonhydrostatically stressed solids have only recently been examined. These conditions can be much different from those at a fluid interface and, as a result, the Gibbs-Thomson equation appropriate to coherent solids is likely to be considerably different from that for fluids. In this paper, the authors first derive the conditions necessary for thermodynamic equilibrium at the precipitate-matrix interface of a coherent spherical precipitate. The authors' derivation of these equilibrium conditions includes a correction to the equilibrium conditions of Johnson and Alexander for a spherical precipitate in an isotropic matrix. They then use these conditions to derive the dependence of the interfacial precipitate and matrix concentrations on precipitate radius (Gibbs-Thomson equation) for a such a precipitate. In addition, these relationships are then used to calculate the critical radius for the nucleation of a coherent misfitting precipitate

  6. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    Science.gov (United States)

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points used to detect the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of a smartphone or with a photoplethysmograph. An optimization of the signal preprocessing stage was also carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and for a photoplethysmograph, and assess whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
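A rough sketch of the winning recipe in Python, assuming a band-pass built from the reported cutoffs (0.8 Hz high-pass, 2.7 Hz low-pass) applied to a synthetic PPG trace; the function name and test signal are invented for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pulse_fiducials(ppg, fs, hp=0.8, lp=2.7):
    """Band-pass the PPG, then mark one fiducial per beat at the peak of
    the first derivative (the lowest-error point reported above)."""
    b, a = butter(2, [hp / (fs / 2), lp / (fs / 2)], btype="band")
    x = filtfilt(b, a, ppg)
    dx = np.gradient(x) * fs                           # first derivative, 1/s
    peaks, _ = find_peaks(dx, distance=int(0.3 * fs))  # beats >= 0.3 s apart
    return peaks

fs = 100.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
ppg += 0.05 * np.random.default_rng(1).standard_normal(t.size)
beats = pulse_fiducials(ppg, fs)
print("mean inter-beat interval (s):", round(float(np.diff(beats).mean() / fs), 3))
```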

  7. A Traffic Model for Machine-Type Communications Using Spatial Point Processes

    DEFF Research Database (Denmark)

    Thomsen, Henning; Manchón, Carles Navarro; Fleury, Bernard Henri

    2018-01-01

    , where the generated traffic by a given device depends on its position and event positions. We first consider the case where devices and events are static and devices generate traffic according to a Bernoulli process, where we derive the total rate from the devices at the base station. We then extend...

  8. Bayesian analysis of spatial point processes in the neighbourhood of Voronoi networks

    DEFF Research Database (Denmark)

    Skare, Øivind; Møller, Jesper; Jensen, Eva Bjørn Vedel

    2007-01-01

    A model for an inhomogeneous Poisson process with high intensity near the edges of a Voronoi tessellation in 2D or 3D is proposed. The model is analysed in a Bayesian setting with priors on nuclei of the Voronoi tessellation and other model parameters. An MCMC algorithm is constructed to sample...

  9. Optimal estimation of the intensity function of a spatial point process

    DEFF Research Database (Denmark)

    Guan, Yongtao; Jalilian, Abdollah; Waagepetersen, Rasmus

    easily computable estimating functions. We derive the optimal estimating function in a class of first-order estimating functions. The optimal estimating function depends on the solution of a certain Fredholm integral equation and reduces to the likelihood score in case of a Poisson process. We discuss...

  10. Bayesian analysis of spatial point processes in the neighbourhood of Voronoi networks

    DEFF Research Database (Denmark)

    Skare, Øivind; Møller, Jesper; Vedel Jensen, Eva B.

    A model for an inhomogeneous Poisson process with high intensity near the edges of a Voronoi tessellation in 2D or 3D is proposed. The model is analysed in a Bayesian setting with priors on nuclei of the Voronoi tessellation and other model parameters. An MCMC algorithm is constructed to sample...

  11. Entry points to stimulation of expansion in hides and skins processing

    African Journals Online (AJOL)

    Only 3.4% of respondents add value to hides and skins by processing. ... Given this status of the chain, it was proposed that a workable intervention model has to encompass the placement of tanneries and slaughter slabs in the chain as new actors, linking chain actors, and improving livestock services, especially dipping, and ...

  12. Mentoring Novice Teachers: Motives, Process, and Outcomes from the Mentor's Point of View

    Science.gov (United States)

    Iancu-Haddad, Debbie; Oplatka, Izhar

    2009-01-01

    The purpose of this paper is to present the major motives leading senior teachers to be involved in a mentoring process of newly appointed teachers and its benefits for the mentor teacher. Based on semi-structured interviews with 12 experienced teachers who participated in a university-based mentoring program in Israel, the current study found a…

  13. Stressors and Turning Points in High School and Dropout: A Stress Process, Life Course Framework

    Science.gov (United States)

    Dupéré, Véronique; Leventhal, Tama; Dion, Eric; Crosnoe, Robert; Archambault, Isabelle; Janosz, Michel

    2015-01-01

    High school dropout is commonly seen as the result of a long-term process of failure and disengagement. As useful as it is, this view has obscured the heterogeneity of pathways leading to dropout. Research suggests, for instance, that some students leave school not as a result of protracted difficulties but in response to situations that emerge…

  14. Meeting points in the VPL process - a key challenge for VPL activities

    DEFF Research Database (Denmark)

    Aagaard, Kirsten; Enggaard, Ellen

    2014-01-01

    , a step up the career ladder, personal development or threat of losing his job and the work place’s demand for new competences? There are three main players on this scene: the individual, the (HE) educational institution and the work place. There may be more players involved in the process......The right to have your competences recognized and validated as a mean to gain access to or exemptions of a higher education has existed since 2007, but the knowledge of this opportunity is still not very well spread and the potentials of the law are not exploited. This goes for individuals as well...... the individual in his or her individual career strategies benefit from the option of VPL in the process of managing his or her career strategy? What are the main barriers and obstacles the individual might meet in his or her attempt to move on in his career whether the motivation is change of career direction...

  15. Paleocurrents in the Charlie-Gibbs Fracture Zone during the Late Quaternary

    Science.gov (United States)

    Bashirova, L. D.; Dorokhova, E.; Sivkov, V.; Andersen, N.; Kuleshova, L. A.; Matul, A.

    2017-12-01

    The sedimentary processes prevailing in the Charlie-Gibbs Fracture Zone (CGFZ) are gravity flows. They rework pelagic sediments and contourites, and thereby partly mask the paleoceanographic information. The aim of this work is to study the sediments of the AMK-4515 core taken in the eastern part of the CGFZ. The sediment core AMK-4515 (52°03.14' N, 29°00.12' W; 370 cm length, water depth 3590 m) is located in the southern valley of the CGFZ. This natural deep corridor is influenced by both the westward Iceland-Scotland Overflow Water and an underlying counterflow from the Newfoundland Basin. An alternation of calcareous silty clays and hemipelagic clayey muds in the studied section indicates similarity between our core and long cores taken from the CGFZ. A sharp facies shift was found at 80 cm depth in the investigated core. Only the upper section (0-80 cm) is valid for paleoreconstruction. The planktonic foraminiferal distribution and the sea-surface temperatures (SST) derived from it allow the latitudinal migrations of the PF and NAC to be traced during the investigated period. The so-called sortable silt mean size (SS) was used as a proxy for reconstructing bottom current intensity. The age model is based on δ18O and AMS 14C dating, as well as ice-rafted debris (IRD) counts and CaCO3 content. Stratigraphic subdivision of this section allows two marine isotope stages (MIS) to be distinguished, covering the last 27 ka. We refer the sediments below this level (80-370 cm) to the upper part of a turbidite, which was formed as a result of a massive slide in the southern channel of the CGFZ. Sandy particles were deposited first, underlying silts and clays. This short-term event occurred so quickly that pelagic sedimentation played no role and is not reflected in the grain size distributions. There is evidence for the significant role of gravity flows in sedimentation in the southern channel of the CGFZ. According to our data, the massive sediment slide occurred in the CGFZ about 27 ka ago.

  16. Chosen Aspects of Modernization Processes in EU Countries and in Poland - Classical Point of View

    OpenAIRE

    Dworak Edyta; Malarska Anna

    2010-01-01

    The aim of this paper is an evaluation of changes in the sectoral structure of employment in EU countries over time. Against this background, changes in the Polish economy in the period 1997-2008 are highlighted. Classical tools of statistical analysis were used to illustrate and initially verify the three-sector theory of A. Fisher, C. Clark and J. Fourastié, oriented towards the evaluation of the modernization process of EU economies.

  17. The Development of Point Doppler Velocimeter Data Acquisition and Processing Software

    Science.gov (United States)

    Cavone, Angelo A.

    2008-01-01

    In order to develop efficient and quiet aircraft and validate Computational Fluid Dynamics predictions, aerodynamic researchers require flow parameter measurements to characterize flow fields about wind tunnel models and jet flows. A one-component Point Doppler Velocimeter (pDv), a non-intrusive, laser-based instrument, was constructed using a design/develop/test/validate/deploy approach. A primary component of the instrument is the software required for system control/management and data collection/reduction. This software, along with evaluation algorithms, advanced pDv from a laboratory curiosity to a production-level instrument. Simultaneous pDv and pitot-probe velocity measurements obtained at the centerline of a flow exiting a two-inch jet matched within 0.4%. Flow turbulence spectra obtained with pDv and a hot-wire detected, with equal dynamic range, the primary and secondary harmonics produced by the fan driving the flow. Novel hardware and software methods were developed, tested and incorporated into the system to eliminate and/or minimize error sources and improve system reliability.

  18. From Takeoff to Landing: Looking at the Design Process for the Development of NASA Blast at Thanksgiving Point

    Directory of Open Access Journals (Sweden)

    Stephen Ashton

    2011-01-01

    In this article we discuss the design process used to develop the NASA Blast exhibition at Thanksgiving Point, a museum complex in Lehi, Utah. This was a class project for the Advanced Instructional Design class at Brigham Young University. In an attempt to create a new discourse (Krippendorff, 2006) for Thanksgiving Point visitors and staff members, the design class used a very fluid design approach, utilizing brainstorming, researching, class-member personas, and prototyping to create ideas for the new exhibition. Because of the nature of the experience, the design class developed their own techniques to enhance the process of their design. The result was a compelling narrative that brought all the elements of the exhibition together in a cohesive piece.

  19. Using Graphs of Gibbs Energy versus Temperature in General Chemistry Discussions of Phase Changes and Colligative Properties

    Science.gov (United States)

    Hanson, Robert M.; Riley, Patrick; Schwinefus, Jeff; Fischer, Paul J.

    2008-01-01

    The use of qualitative graphs of Gibbs energy versus temperature is described in the context of chemical demonstrations involving phase changes and colligative properties at the general chemistry level. (Contains 5 figures and 1 note.)

  20. Solid oxide galvanic cell for determination of Gibbs energy of formation of Tb6UO12(s)

    International Nuclear Information System (INIS)

    Sahu, Manjulata; Dash, Smruti

    2013-01-01

    Citrate-nitrate combustion method was used to synthesise Tb 6 UO 12 (s). Gibbs energy of formation of Tb 6 UO 12 (s) was measured using solid oxide galvanic cell in the temperature range 957-1175 K. (author)

  1. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    Science.gov (United States)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat's motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown at baseline and at extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean squared error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and, therefore, spike timing methodologies and estimation of appropriate tuning curves need to be undertaken for better EMG decoding in motor BMIs.
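For reference, the error metric quoted above can be written in a few lines; the normalization by the variance of the target signal is one common convention and is assumed here:

```python
import numpy as np

def nmse(estimate, target):
    """Normalized mean squared error: 0 is perfect reconstruction,
    1 is no better than predicting the target's mean."""
    estimate, target = np.asarray(estimate), np.asarray(target)
    return float(np.mean((estimate - target) ** 2) / np.var(target))

def relative_improvement(nmse_baseline, nmse_new):
    # e.g. the reported 57-66% gains of MCPP over the adaptive PP filter
    return 1.0 - nmse_new / nmse_baseline
```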

  2. The effect of starting point placement technique on thoracic transverse process strength: an ex vivo biomechanical study

    Directory of Open Access Journals (Sweden)

    Burton Douglas C

    2010-07-01

    Background: The use of thoracic pedicle screws in spinal deformity, trauma, and tumor reconstruction is becoming more common. Unsuccessful screw placement may require salvage techniques utilizing transverse process hooks. The effect of different starting point placement techniques on the strength of the transverse process has not previously been reported. The purpose of this paper is to determine the biomechanical properties of the thoracic transverse process following various pedicle screw starting point placement techniques. Methods: Forty-seven fresh-frozen human cadaveric thoracic vertebrae from T2 to T9 were disarticulated and matched by bone mineral density (BMD) and transverse process (TP) cross-sectional area. Specimens were randomized to one of four groups: A, control, and three others based on thoracic pedicle screw placement technique: B, straightforward; C, funnel; and D, in-out-in. Initial cortical bone removal for pedicle screw placement was made using a burr at the location on the transverse process or transverse process-laminar junction as published in the original description of each technique. The transverse process was tested by measuring load-to-failure, simulating a hook in compression mode. Analysis of covariance and Pearson correlation coefficients were used to examine the data. Results: Technique was a significant predictor of load-to-failure (P = 0.0007). The least squares (LS) mean load-to-failure of group A (control) was 377 N; group B (straightforward), 355 N; group C (funnel), 229 N; and group D (in-out-in), 301 N. Significant differences were noted between groups A and C, A and D, B and C, and C and D. BMD (0.925 g/cm2 [range, 0.624-1.301 g/cm2]) was also a significant predictor of load-to-failure for all specimens grouped together (P < 0.05). Level and side tested were not found to significantly correlate with load-to-failure. Conclusions: The residual coronal plane compressive strength of the thoracic transverse process

  3. Design and fabrication of a diffractive beam splitter for dual-wavelength and concurrent irradiation of process points.

    Science.gov (United States)

    Amako, Jun; Shinozaki, Yu

    2016-07-11

    We report on a dual-wavelength diffractive beam splitter designed for use in parallel laser processing. This novel optical element generates two beam arrays of different wavelengths and allows their overlap at the process points on a workpiece. To design the deep surface-relief profile of a splitter using a simulated annealing algorithm, we introduce a heuristic but practical scheme to determine the maximum depth and the number of quantization levels. The designed corrugations were fabricated in a photoresist by maskless grayscale exposure using a high-resolution spatial light modulator. We characterized the photoresist splitter, thereby validating the proposed beam-splitting concept.
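The optimization loop behind such a design can be sketched as plain simulated annealing over quantized relief levels. The merit function below (uniformity of a handful of FFT-computed diffraction orders) and all parameters are illustrative placeholders, not the authors' dual-wavelength objective:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_levels, depth = 256, 16, 2 * np.pi   # quantized relief profile

def non_uniformity(levels):
    # Far-field order intensities from the FFT of the phase-only profile;
    # merit = spread of the first 8 orders (a stand-in for the real target).
    orders = np.abs(np.fft.fft(np.exp(1j * levels * depth / n_levels)))**2
    work = orders[:8]
    return work.std() / work.mean()

profile = rng.integers(0, n_levels, n_pixels)
cost, T = non_uniformity(profile), 1.0
for _ in range(20_000):
    trial = profile.copy()
    trial[rng.integers(n_pixels)] = rng.integers(n_levels)  # mutate one pixel
    c = non_uniformity(trial)
    if c < cost or rng.random() < np.exp((cost - c) / T):   # Metropolis accept
        profile, cost = trial, c
    T *= 0.9997                                             # geometric cooling
print("final non-uniformity:", round(cost, 4))
```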

  4. Can the Hazard Assessment and Critical Control Points (HACCP) system be used to design process-based hygiene concepts?

    Science.gov (United States)

    Hübner, N-O; Fleßa, S; Haak, J; Wilke, F; Hübner, C; Dahms, C; Hoffmann, W; Kramer, A

    2011-01-01

    Recently, the HACCP (Hazard Analysis and Critical Control Points) concept was proposed as a possible way to implement process-based hygiene concepts in clinical practice, but the extent to which this food safety concept can be transferred into the health care setting is unclear. We therefore discuss possible ways to translate the principles of HACCP to health care settings. While a direct implementation of food-processing concepts into health care is unlikely to be feasible and will probably not readily yield the intended results, the underlying principles of process orientation, in-process safety control and hazard-analysis-based countermeasures are transferable to clinical settings. In model projects the proposed concepts should be implemented, monitored, and evaluated under real-world conditions.

  5. Mass customization process for the Social Housing. Potentiality, critical points, research lines

    Directory of Open Access Journals (Sweden)

    Michele Di Sivo

    2012-10-01

    The demand for lengthening the life cycle of the residential estate, engendered by the economic and housing crisis of the last few years, brings out, over time, the need for conservation and improvement works on housing performance, through the direct involvement of the users. The possibility of reducing maintenance and adjustment costs may develop into a design resource, consistent with the participation and cooperation principles that identify social housing interventions. With this aim, the BETHA group of the d'Annunzio University is investigating the potential for technological transfer of the 'mass customization' process from the industrial products field to the social housing segment, by identifying issues, strategies and opportunities.

  6. Stochastic dynamical model of a growing citation network based on a self-exciting point process.

    Science.gov (United States)

    Golosovsky, Michael; Solomon, Sorin

    2012-08-31

    We put under experimental scrutiny the preferential attachment model that is commonly accepted as a generating mechanism of the scale-free complex networks. To this end we chose a citation network of physics papers and traced the citation history of 40,195 papers published in one year. Contrary to common belief, we find that the citation dynamics of the individual papers follows the superlinear preferential attachment, with the exponent α=1.25-1.3. Moreover, we show that the citation process cannot be described as a memoryless Markov chain since there is a substantial correlation between the present and recent citation rates of a paper. Based on our findings we construct a stochastic growth model of the citation network, perform numerical simulations based on this model and achieve an excellent agreement with the measured citation distributions.
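A minimal growth simulation of the superlinear rule is easy to write down; the sketch below uses the reported exponent range and seeds every paper with unit weight so that uncited papers can still attract their first citation (an assumption of this toy version):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n = 1.28, 5_000           # exponent within the reported 1.25-1.3
cites = np.ones(n)               # weight = citations + 1 (unit seed)
for k in range(1, n):
    w = cites[:k] ** alpha       # superlinear preferential attachment
    target = rng.choice(k, p=w / w.sum())
    cites[target] += 1
print("most attractive paper accumulated", int(cites.max() - 1), "citations")
```

Note that this memoryless sketch deliberately omits the paper's second finding, the correlation between present and recent citation rates, which is what motivates the self-exciting (rather than Markovian) formulation.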

  7. PET and diagnostic technology evaluation in a global clinical process. DGN's point of view

    International Nuclear Information System (INIS)

    Kotzerke, J.; Dietlein, M.; Gruenwald, F.; Bockisch, A.

    2010-01-01

    The German Society of Nuclear Medicine (DGN) criticizes the methodological approach of the IQWiG for the evaluation of PET and its conclusions, which represent the opposite point of view compared with most other European countries and health companies in the USA: (1) Real integration of experienced physicians into the interpretation of data and the evaluation of effectiveness should be used for the best possible reporting, instead of only a formal hearing. (2) Data of the National Oncologic PET Registry (NOPR) from the USA have shown that PET has changed the therapeutic management in 38% of patients. (3) The decision of the IQWiG to accept outcome data only for its benefit analyses is controversial. Medical knowledge is generated by different methods, and a current analysis of the scientific guidelines has shown that only 15% of all guidelines are based on the level of evidence demanded by the IQWiG. Health economics has created different assessment methods for the evaluation of a diagnostic procedure. The strategy chosen by the IQWiG overestimates the perspective of the population and undervalues the benefit for the individual patient. (4) PET evaluates the effectiveness of a therapeutic procedure, but does not create an effective therapy. When the predictive value of PET is already implemented in a specific study design and the result of PET defines a specific management, the trial evaluates the whole algorithm, and PET is only part of this algorithm. When PET is implemented as a test during chemotherapy or at the end of chemotherapy, the predictive value of PET will depend decisively on the effectiveness of the therapy: the better the therapy, the smaller the differences in survival detected by PET. (5) The significance of optimal staging by the integration of PET will increase. The rationale is the current development of 'titration' of chemotherapy intensity and radiation dose towards the lowest possible, just about effective dosage. (6) The medical therapy of

  8. [Pharmaceutical Assistance in the Family Healthcare Program: points of affinity and discord in the organization process].

    Science.gov (United States)

    Silva Oliveira, Tatiana de Alencar; Maria, Tatiane de Oliveira Silva; Alves do Nascimento, Angela Maria; do Nascimento, Angela Alves

    2011-09-01

    The scope of this study was to discuss the organization of the pharmaceutical assistance service in the family healthcare program. Qualitative research from a critical/analytical perspective was conducted in family healthcare units in a municipality of the state of Bahia, Brazil. Data were collected on the basis of systematic observation, semi-structured interviews and document analysis from a dialectic standpoint. The organization of pharmaceutical assistance consisted of selection, planning, acquisition, storage and dispensing activities. The process was studied in the implementation phase, which was occurring in a centralized and uncoordinated fashion, without the proposed team work. An excess of activity was observed among the healthcare workers, and there was an absence of a continuing education policy for the workers. To transform this situation and to ensure the organization of pharmaceutical assistance with quality and in an integrated manner, a reworking of the ways of thinking and acting of the players concerned (managers, health workers and users), who participate directly in the organization, is necessary.

  9. Aftershock identification problem via the nearest-neighbor analysis for marked point processes

    Science.gov (United States)

    Gabrielov, A.; Zaliapin, I.; Wong, H.; Keilis-Borok, V.

    2007-12-01

    The centennial observations of world seismicity have revealed a wide variety of clustering phenomena that unfold in the space-time-energy domain and provide the most reliable information about earthquake dynamics. However, there is neither a unifying theory nor a convenient statistical apparatus that would naturally account for the different types of seismic clustering. In this talk we present a theoretical framework for nearest-neighbor analysis of marked point processes and obtain new results on the hierarchical approach to studying seismic clustering introduced by Baiesi and Paczuski (2004). Recall that under this approach one defines an asymmetric distance D in the space-time-energy domain such that the nearest-neighbor spanning graph with respect to D becomes a time-oriented tree. We demonstrate how this approach can be used to detect earthquake clustering. We apply our analysis to the observed seismicity of California and to synthetic catalogs from the ETAS model, and show that the earthquake clustering part is statistically different from the homogeneous part. This finding may serve as a basis for an objective aftershock identification procedure.

  10. Archiving, sharing, processing and publishing historical earthquakes data: the IT point of view

    Science.gov (United States)

    Locati, Mario; Rovida, Andrea; Albini, Paola

    2014-05-01

    Digital tools devised for seismological data are mostly designed for handling instrumentally recorded data. Researchers working on historical seismology are forced to perform their daily job using general-purpose tools and/or coding their own to address their specific tasks. The lack of out-of-the-box tools expressly conceived to deal with historical data leads to a huge amount of time lost in performing tedious tasks to search for the data and to manually reformat it in order to jump from one tool to the other, sometimes causing a loss of the original data. This reality is common to all activities related to the study of earthquakes of the past centuries, from the interpretation of historical sources to the compilation of earthquake catalogues. A platform able to preserve historical earthquake data, trace back their source, and fulfil many common tasks was very much needed. In the framework of two European projects (NERIES and SHARE) and one global project (Global Earthquake History, GEM), two new data portals were designed and implemented. The European portal "Archive of Historical Earthquakes Data" (AHEAD) and the worldwide "Global Historical Earthquake Archive" (GHEA) are aimed at addressing at least some of the above-mentioned issues. The availability of these new portals and their well-defined standards makes the development of side tools for archiving, publishing and processing the available historical earthquake data easier than before. The AHEAD and GHEA portals, their underlying technologies and the developed side tools are presented.

  11. Dissolution Dominating Calcification Process in Polar Pteropods Close to the Point of Aragonite Undersaturation

    Science.gov (United States)

    Bednaršek, Nina; Tarling, Geraint A.; Bakker, Dorothee C. E.; Fielding, Sophie; Feely, Richard A.

    2014-01-01

    Thecosome pteropods are abundant upper-ocean zooplankton that build aragonite shells. Ocean acidification results in the lowering of aragonite saturation levels in the surface layers, and several incubation studies have shown that rates of calcification in these organisms decrease as a result. This study provides a weight-specific net calcification rate function for thecosome pteropods that includes both rates of dissolution and calcification over a range of plausible future aragonite saturation states (Ωar). We measured gross dissolution in the pteropod Limacina helicina antarctica in the Scotia Sea (Southern Ocean) by incubating living specimens across a range of aragonite saturation states for a maximum of 14 days. Specimens started dissolving almost immediately upon exposure to undersaturated conditions (Ωar∼0.8), losing 1.4% of shell mass per day. The observed rate of gross dissolution was different from that predicted by rate law kinetics of aragonite dissolution, in being higher at Ωar levels slightly above 1 and lower at Ωar levels of between 1 and 0.8. This indicates that shell mass is affected by even transitional levels of saturation, but there is, nevertheless, some partial means of protection for shells when in undersaturated conditions. A function for gross dissolution against Ωar derived from the present observations was compared to a function for gross calcification derived by a different study, and showed that dissolution became the dominating process even at Ωar levels close to 1, with net shell growth ceasing at an Ωar of 1.03. Gross dissolution increasingly dominated net change in shell mass as saturation levels decreased below 1. As well as influencing their viability, such dissolution of pteropod shells in the surface layers will result in slower sinking velocities and decreased carbon and carbonate fluxes to the deep ocean. PMID:25285916
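The headline number (net growth ceasing near Ωar = 1.03) amounts to finding the zero crossing of net rate = calcification - dissolution. The sketch below uses invented linear rate laws purely to show the computation; the coefficients are placeholders, not the paper's fitted functions:

```python
from scipy.optimize import brentq

def gross_calcification(omega):      # % shell mass per day (placeholder)
    return 0.020 * omega

def gross_dissolution(omega):        # % shell mass per day (placeholder)
    return 0.035 * max(0.0, 1.6 - omega)

def net_rate(omega):
    return gross_calcification(omega) - gross_dissolution(omega)

omega_zero = brentq(net_rate, 0.5, 2.0)   # root of the net-rate function
print(f"net shell growth ceases near omega_ar = {omega_zero:.2f}")
```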

  12. The effect of post-processing treatments on inflection points in current–voltage curves of roll-to-roll processed polymer photovoltaics

    DEFF Research Database (Denmark)

    Lilliedal, Mathilde Raad; Medford, Andrew James; Vesterager Madsen, Morten

    2010-01-01

    Inflection point behaviour is often observed in the current-voltage (IV) curve of polymer solar cells. This phenomenon is examined in the context of flexible roll-to-roll (R2R) processed polymer solar cells in a large series of devices with a layer structure of PET–ITO–ZnO–P3HT:PCBM–PEDOT:PSS–Ag. The devices were manufactured using a combination of slot-die coating and screen printing; they were then encapsulated by lamination using a polymer-based barrier material. All manufacturing steps were carried out in ambient air. The freshly prepared devices showed a consistent inflection point in the IV curve. Characterization of device interfaces was carried out in order to identify possible chemical processes that are related to photo-annealing. A possible mechanism based on ZnO photoconductivity, photooxidation and redistribution of oxygen inside the cell is proposed, and it is anticipated that the findings...

  13. Phase relations and Gibbs energies of spinel phases and solid solutions in the system Mg-Rh-O

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, K.T., E-mail: katob@materials.iisc.ernet.in [Department of Materials Engineering, Indian Institute of Science, Bangalore 560 012 (India); Prusty, Debadutta [Department of Materials Engineering, Indian Institute of Science, Bangalore 560 012 (India); Kale, G.M. [Institute for Materials Research, University of Leeds, Leeds, LS2 9JT (United Kingdom)

    2012-02-05

    Highlights: (1) Refinement of the phase diagram for the system Mg-Rh-O and thermodynamic data for the spinel compounds MgRh2O4 and Mg2RhO4 are presented. (2) A solid-state electrochemical cell is used for the thermodynamic measurements. (3) An advanced design of the solid-state electrochemical cell incorporating buffer electrodes is deployed to minimize polarization of the working electrode. (4) A regular solution model for the spinel solid solution MgRh2O4-Mg2RhO4, based on ideal mixing of cations on the octahedral site, is proposed. (5) Factors responsible for the stabilization of tetravalent rhodium in spinel compounds are identified. Abstract: Pure stoichiometric MgRh2O4 could not be prepared by solid-state reaction from an equimolar mixture of MgO and Rh2O3 in air. The spinel phase formed always contained an excess of Mg and traces of Rh or Rh2O3. The spinel phase can be considered as a solid solution of Mg2RhO4 in MgRh2O4. The compositions of the spinel solid solution in equilibrium with different phases in the ternary system Mg-Rh-O were determined by electron probe microanalysis. The oxygen potential established by the equilibrium between Rh + MgO + Mg1+xRh2-xO4 was measured as a function of temperature using a solid-state cell incorporating yttria-stabilized zirconia as an electrolyte and pure oxygen at 0.1 MPa as the reference electrode. To avoid polarization of the working electrode during the measurements, an improved design of the cell with a buffer electrode was used. The standard Gibbs energies of formation of MgRh2O4 and Mg2RhO4 were deduced from the measured electromotive force (e.m.f.) by invoking a model for the spinel solid solution. The parameters of the model were optimized using the measured

  14. On the diffusion process of irradiation-induced point defects in the stress field of a moving dislocation

    International Nuclear Information System (INIS)

    Steinbach, E.

    1987-01-01

    The cellular model of a dislocation is used for an investigation of the time-dependent diffusion process of irradiation-induced point defects interacting with the stress field of a moving dislocation. An analytic solution is given taking into account the elastic interaction due to the first-order size effect and the stress-induced interaction, the kinematic interaction due to the dislocation motion as well as the presence of secondary neutral sinks. The results for the space and time-dependent point defect concentration, represented in terms of Mathieu-Bessel and Mathieu-Hankel functions, emphasize the influence of the parameters which have been taken into consideration. Proceeding from these solutions, formulae for the diffusion flux reaching unit length of the dislocation, which plays an important role with regard to void swelling and irradiation-induced creep, are derived

  15. The signer and the sign: cortical correlates of person identity and language processing from point-light displays.

    Science.gov (United States)

    Campbell, Ruth; Capek, Cheryl M; Gazarian, Karine; MacSweeney, Mairéad; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J

    2011-09-01

    In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light displays under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, although such body parts were invisible in the display. Right frontal regions were also recruited, a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of identify-signer and identify-sign conditions showed that these tasks relied to different extents on posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer>sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole-body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001), it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer

  16. New Comment on Gibbs Density Surface of Fluid Argon: Revised Critical Parameters, L. V. Woodcock, Int. J. Thermophys. (2014) 35, 1770-1784

    Science.gov (United States)

    Umirzakov, I. H.

    2018-01-01

    The author comments on an article by Woodcock (Int J Thermophys 35:1770-1784, 2014), who investigates the idea of a critical line instead of a single critical point, using argon as an example. In the introduction, Woodcock states that "The Van der Waals critical point does not comply with the Gibbs phase rule. Its existence is based upon a hypothesis rather than a thermodynamic definition". The present comment is a response to this statement. It demonstrates mathematically that the critical point is not merely a hypothesis used to fix the values of two parameters of the Van der Waals equation of state; rather, it is a direct consequence of the thermodynamic phase-equilibrium conditions, which result in a single critical point. It is shown that these thermodynamic conditions require the first and second partial derivatives of pressure with respect to volume at constant temperature to vanish at the critical point, which are the usual conditions for the existence of a critical point.
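In standard notation, the two equilibrium-derived conditions referred to above read (a LaTeX transcription of the statement in the abstract):

```latex
\left(\frac{\partial p}{\partial V}\right)_{T}\bigg|_{T_c,\,V_c} = 0,
\qquad
\left(\frac{\partial^{2} p}{\partial V^{2}}\right)_{T}\bigg|_{T_c,\,V_c} = 0,
```

i.e., the critical isotherm has a horizontal inflection at the critical point.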

  17. Zoeal morphology of Pachygrapsus transversus (Gibbes (Decapoda, Grapsidae reared in the laboratory

    Directory of Open Access Journals (Sweden)

    Ana Luiza Brossi-Garcia

    1997-12-01

    Ovigerous females of Pachygrapsus transversus (Gibbes, 1850) were collected on the Praia Dura and Saco da Ribeira beaches, Ubatuba, São Paulo, Brazil. Larvae were individually reared in a climate-controlled room at a temperature of 25°C, salinities of 28, 32 and 35‰, and under natural photoperiod conditions. The best rearing results were observed at 35‰ salinity. Seven zoeal instars were observed, drawn and described in detail. The data are compared with those obtained for P. gracilis (Saussure, 1858).

  18. ASTEM, Evaluation of Gibbs, Helmholtz and Saturation Line Function for Thermodynamics Calculation

    International Nuclear Information System (INIS)

    Moore, K.V.; Burgess, M.P.; Fuller, G.L.; Kaiser, A.H.; Jaeger, D.L.

    1974-01-01

    1 - Description of problem or function: ASTEM is a modular set of FORTRAN IV subroutines to evaluate the Gibbs, Helmholtz, and saturation line functions as published by the American Society of Mechanical Engineers (1967). Any thermodynamic quantity including derivative properties can be obtained from these routines by a user-supplied main program. PROPS is an auxiliary routine available for the IBM360 version which makes it easier to apply the ASTEM routines to power station models. 2 - Restrictions on the complexity of the problem: Unless re-dimensioned by the user, the highest derivative allowed is order 9. All arrays within ASTEM are one-dimensional to save storage area

  19. Size and shape dependent Gibbs free energy and phase stability of titanium and zirconium nanoparticles

    International Nuclear Information System (INIS)

    Xiong Shiyun; Qi Weihong; Huang Baiyun; Wang Mingpu; Li Yejun

    2010-01-01

    The Debye model of the Helmholtz free energy for bulk materials is generalized to a Gibbs free energy (GFE) model for nanomaterials, while a shape factor is introduced to characterize the shape effect on the GFE. The structural transitions of Ti and Zr nanoparticles are predicted based on the GFE. It is further found that the GFE decreases with the shape factor and increases with decreasing particle size. The critical size for structural transformation of nanoparticles goes up as temperature increases when the shape factor is unchanged. For a specified temperature, the critical size climbs with increasing shape factor. The present predictions agree well with experimental values.

  20. LA CASA GIBBS Y EL MONOPOLIO SALITRERO PERUANO: 1876-1878

    Directory of Open Access Journals (Sweden)

    Manuel Ravest Mora

    2008-06-01

    The object of this brief study is to show the readiness of Anthony Gibbs & Sons, and of its subsidiaries, to support Peru's nitrate monopoly project with monetary resources, and the manoeuvres of its directors in the only company which, given its production capacity, could make the project fail: the Compañía de Salitres y Ferrocarril de Antofagasta, of which Gibbs was the second-largest shareholder. For the Chilean government, the primary cause of the war of 1879 was Peru's attempt to monopolize nitrate production. Bolivia, its secret ally since 1873, collaborated by renting and selling Peru its nitrate deposits and by imposing on nitrate exports a tax that infringed the condition, stipulated in a Border Treaty, under which Chile had ceded territory. Its recovery manu militari initiated the conflict. From the second half of the twentieth century onwards, this economic-legalistic thesis has been questioned in Chile and abroad, shifting the causal emphasis to the reordering of the raw-materials markets, of which the belligerents were exporters, as a consequence of the world crisis of the 1870s.

  1. Neutron-rich isotopes around the r-process 'waiting-point' nuclei 79Cu (Z=29, N=50) and 80Zn (Z=30, N=50)

    International Nuclear Information System (INIS)

    Kratz, K.L.; Gabelmann, H.; Pfeiffer, B.; Woehr, A.

    1991-01-01

    Beta-decay half-lives (T1/2) and delayed-neutron emission probabilities (Pn) of very neutron-rich Cu to As nuclei have been measured, among them the new isotopes 77Cu (N=48), 79Cu (N=50), 81Zn (N=51) and 84Ga (N=53). With the T1/2 and Pn values of four N≅50 'waiting-point' nuclei now known, our hypothesis that the r-process has attained a local β-flow equilibrium around A≅80 is further strengthened. (orig.)

  2. Thermodynamic analysis of ethanol/water system in a fuel cell reformer with the Gibbs energy minimization method

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; De Fraga Malfatti, Celia; Heck, Nestor Cesar

    2003-01-01

    The use of fuel cells is a promising technology for the conversion of chemical to electrical energy. Due to environmental concerns related to the reduction of atmospheric pollution and emissions of greenhouse gases such as CO2, NOx and hydrocarbons, there has been much research on fuel cells using hydrogen as fuel. Hydrogen gas can be produced by several routes; a promising one is the steam reforming of ethanol. This route may become an important industrial process, especially for sugarcane-producing countries. Ethanol is a renewable energy source and presents several advantages over other sources related to natural availability, storage and handling safety. In order to contribute to the understanding of the steam reforming of ethanol inside the reformer, this work presents a detailed thermodynamic analysis of the ethanol/water system, in the temperature range of 500-1200 K, considering different H2O/ethanol reforming ratios. The equilibrium determinations were done with the help of the Gibbs energy minimization method using the Generalized Reduced Gradient (GRG) algorithm. Based on literature data, the species considered in the calculations were H2, H2O, CO, CO2, CH4, C2H4, CH3CHO and C2H5OH (gas phase), and C(gr) (graphite phase). The thermodynamic conditions for carbon deposition (probably soot) on the catalyst during gas reforming were analyzed, in order to establish temperature ranges and H2O/ethanol ratios where carbon precipitation is not thermodynamically feasible. Experimental results from the literature show that carbon deposition causes catalyst deactivation during reforming. This deactivation is due to encapsulating carbon that covers the active phases on a catalyst substrate, e.g. Ni over Al2O3. In the present study, a mathematical relationship between the Lagrange multipliers and the carbon activity (with reference to the graphite phase) was deduced, unveiling the carbon activity in the reformer atmosphere. From this, it is possible to foresee whether soot
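A stripped-down version of such an equilibrium calculation can be written with a general-purpose constrained minimizer. The sketch below substitutes SLSQP for the GRG algorithm used in the paper, restricts the species set, and uses placeholder standard chemical potentials, so it only illustrates the structure of the method:

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 800.0                      # J/(mol K), K
species = ["H2", "H2O", "CO", "CO2", "CH4"]
mu0 = np.array([0.0, -220e3, -180e3, -400e3, -25e3])  # placeholder values
A = np.array([[0, 0, 1, 1, 1],           # C balance
              [2, 2, 0, 0, 4],           # H balance
              [0, 1, 1, 2, 0]], float)   # O balance
b = np.array([2.0, 12.0, 4.0])           # feed: 1 C2H5OH + 3 H2O

def gibbs(n):                            # ideal-gas mixture at 1 bar
    ntot = n.sum()
    return float(n @ (mu0 + R * T * np.log(np.maximum(n, 1e-12) / ntot)))

res = minimize(gibbs, np.full(5, 1.0), method="SLSQP",
               bounds=[(1e-10, None)] * 5,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
print(dict(zip(species, res.x.round(3))))
```

The Lagrange multipliers of the element-balance constraints are exactly the quantities the authors relate to the carbon activity when checking whether graphite (soot) can precipitate.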

  3. Point processes statistics of stable isotopes: analysing water uptake patterns in a mixed stand of Aleppo pine and Holm oak

    Directory of Open Access Journals (Sweden)

    Carles Comas

    2015-04-01

    Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m2). We tested the hypothesis that both species take up water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency, depending on whether neighbouring trees belong to one species or the other. Material and methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair-correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a marked point pattern. Main results: Values for Q. ilex (δ18O = -5.3 ± 0.2‰, δ2H = -54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = -1.2 ± 0.2‰, δ2H = -25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.

  4. Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles

    KAUST Repository

    Du, Shouhong

    2012-01-01

    This thesis presents Monte Carlo methods for simulations of the phase behavior of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state by Johnson et al., and results with L-J parameters of methane agree well with the experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and the number of Monte Carlo cycles, error propagation and random number generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement and particle transfer. The simulation models and the simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to the experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis work lies in the study of the error analysis with respect to the number of Monte Carlo cycles and the number of particles in some interesting aspects.
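The blocking method mentioned above is compact enough to quote in full; this is the standard Flyvbjerg-Petersen recursion, shown here on a synthetic correlated chain:

```python
import numpy as np

def blocking_errors(samples):
    """Naive standard error of the mean at successive blocking levels;
    for correlated data the estimates grow and then plateau near the
    true standard error."""
    x = np.asarray(samples, float)
    out = []
    while x.size >= 4:
        out.append(float(np.sqrt(x.var(ddof=1) / x.size)))
        x = 0.5 * (x[0:-1:2] + x[1::2])   # average neighbouring pairs
    return out

rng = np.random.default_rng(2)
x = np.empty(2**16)
x[0] = 0.0
for i in range(1, x.size):                # AR(1) chain, strongly correlated
    x[i] = 0.95 * x[i - 1] + rng.standard_normal()
print([f"{e:.4f}" for e in blocking_errors(x)[:10]])
```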

  5. Gibbs Measures Over Locally Tree-Like Graphs and Percolative Entropy Over Infinite Regular Trees

    Science.gov (United States)

    Austin, Tim; Podder, Moumanti

    2018-03-01

    Consider a statistical physics model on the d-regular infinite tree T_d described by a set of interactions Φ. Let G_n be a sequence of finite graphs with vertex sets V_n that locally converge to T_d. From Φ one can construct a sequence of corresponding models on the graphs G_n. Let μ_n be the resulting Gibbs measures. Here we assume that μ_n converges to some limiting Gibbs measure μ on T_d in the local weak* sense, and study the consequences of this convergence for the specific entropies |V_n|^{-1} H(μ_n). We show that the limit supremum of |V_n|^{-1} H(μ_n) is bounded above by the percolative entropy H_perc(μ), a function of μ itself, and that |V_n|^{-1} H(μ_n) actually converges to H_perc(μ) in case Φ exhibits strong spatial mixing on T_d. When it is known to exist, the limit of |V_n|^{-1} H(μ_n) is most commonly shown to be given by the Bethe ansatz. Percolative entropy gives a different formula, and we do not know how to connect it to the Bethe ansatz directly. We discuss a few examples of well-known models for which the latter result holds in the high-temperature regime.


  7. Excess Gibbs energy for six binary solid solutions of molecularly simple substances

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, L J; Staveley, L A.K.

    1985-01-01

    In this paper we apply the method developed in a previous study of Ar + CH4 to the evaluation of the excess Gibbs energy G^{E,S} for solid solutions of two molecularly simple components. The method depends on combining information on the excess Gibbs energy G^{E,L} for the liquid mixture of the two components with a knowledge of the (T, x) solid-liquid phase diagram. Certain thermal properties of the pure substances are also needed. G^{E,S} has been calculated for binary mixtures of Ar + Kr, Kr + CH4, CO + N2, Kr + Xe, Ar + N2, and Ar + CO. In general, but not always, the solid mixtures are more non-ideal than the liquid mixtures of the same composition at the same temperature. Except for the Kr + CH4 system, the ratio r = G^{E,S}/G^{E,L} is larger the richer the solution in the component with the smaller molecules.

  8. A Bayesian approach to PET reconstruction using image-modeling Gibbs priors: Implementation and comparison

    International Nuclear Information System (INIS)

    Chan, M.T.; Herman, G.T.; Levitan, E.

    1996-01-01

    We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate from it the original image (based on a model of distortion and on a Gibbs distribution as the prior) and (ii) by selecting an "image-modeling" prior distribution (i.e., one which is such that it is likely that a random sample from it shares important characteristics of the images of the application area) one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach.

  9. Influence of Wilbraham-Gibbs Phenomenon on Digital Stochastic Measurement of EEG Signal Over an Interval

    Directory of Open Access Journals (Sweden)

    Sovilj P.

    2014-10-01

    Measurement methods based on the approach named digital stochastic measurement have been introduced, and several prototype and small-series commercial instruments have been developed based on these methods. These methods have been investigated mostly for various types of stationary signals, but also for non-stationary signals. This paper presents, analyzes and discusses digital stochastic measurement of the electroencephalography (EEG) signal in the time domain, emphasizing the problem of the influence of the Wilbraham-Gibbs phenomenon. An increase of the measurement error related to the Wilbraham-Gibbs phenomenon is found. If the EEG signal is measured over a 20 ms wide measurement interval, the average maximal error relative to the range of the input signal is 16.84%. If the measurement interval is extended to 2 s, the average maximal error relative to the range of the input signal is significantly lowered, down to 1.37%. Absolute errors are compared with the error limit recommended by the Organisation Internationale de Métrologie Légale (OIML) and with the quantization steps of advanced EEG instruments with 24-bit A/D conversion.
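The phenomenon itself is easy to reproduce: partial Fourier sums of a discontinuous signal overshoot near the jump by a fixed fraction, no matter how many terms are kept. A minimal demonstration:

```python
import numpy as np

# Partial Fourier sums of a unit square wave: the peak tends to
# (2/pi) * Si(pi) ~ 1.179, an overshoot of ~9% of the jump size,
# independent of the number of retained terms.
t = np.linspace(-np.pi, np.pi, 20_001)
for n_terms in (10, 100, 1000):
    s = 4 / np.pi * sum(np.sin((2 * k + 1) * t) / (2 * k + 1)
                        for k in range(n_terms))
    print(n_terms, "terms -> peak value", round(float(s.max()), 4))
```

It is this persistent overshoot at sharp transients that inflates the short-interval EEG measurement error quoted above.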

  10. Gibbs Free Energy of Formation for Selected Platinum Group Minerals (PGM

    Directory of Open Access Journals (Sweden)

    Spiros Olivotos

    2016-01-01

    Full Text Available Thermodynamic data for platinum group (Os, Ir, Ru, Rh, Pd and Pt) minerals are very limited. The present study is focused on the calculation of the Gibbs free energy of formation (ΔfG°) for selected PGM occurring in layered intrusions and ophiolite complexes worldwide, applying available experimental data on their constituent elements at their standard state (ΔG = G(species) − ΔG(elements)), using the computer program HSC Chemistry 6.0. The accuracy of the calculation method was evaluated by calculating ΔfG° of rhodium sulfide phases. The calculated values were found to be in good agreement with those measured in the binary system (Rh + S) as a function of temperature by previous authors (Jacob and Gupta, 2014). The calculated Gibbs free energy of formation (ΔfG°) followed the order RuS₂ < (Ir,Os)S₂ < (Pt,Pd)S < (Pd,Pt)Te₂, increasing from compatible to incompatible noble metals and from sulfides to tellurides.
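
    The bookkeeping behind ΔG = G(species) − ΣG(elements) is simple enough to show in a few lines. The numbers below are placeholders for illustration, not values from the paper or from HSC Chemistry.

      # Formation energy from standard-state Gibbs energies:
      # dGf(compound) = G(compound) - sum of G(element) * stoichiometry.
      def gibbs_of_formation(g_compound, g_elements):
          return g_compound - sum(g_elements)

      # hypothetical standard-state values in kJ/mol at a fixed temperature
      g_RuS2, g_Ru, g_S = -250.0, -30.0, -20.0
      print(gibbs_of_formation(g_RuS2, [g_Ru, 2 * g_S]))   # -> -180.0 kJ/mol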

  11. Combining the AFLOW GIBBS and elastic libraries to efficiently and robustly screen thermomechanical properties of solids

    Science.gov (United States)

    Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    2017-06-01

    Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles to the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The former extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined, and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.

  12. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    Science.gov (United States)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = Σ_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independent, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in two-point statistics such as ⟨u_z^4(x)⟩^{1/2}, ⟨u_z^6(x)⟩^{1/3}, etc. can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the
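
    The HRAP ansatz is easy to check numerically: drawing N_z ≈ ln(δ/z) i.i.d. additives per sample makes the variance of u_z^+ grow linearly in ln(δ/z), which is the attached-eddy log law. Gaussian additives and the constants below are choices made here for illustration, not taken from the paper (HRAP only requires the additives to be i.i.d.).

      import numpy as np

      # u_z^+ = sum_{i=1..N_z} a_i with i.i.d. unit-variance additives, so
      # var(u_z^+) = N_z ~ ln(delta/z): a logarithmic wall-normal profile.
      rng = np.random.default_rng(1)
      delta = 1.0
      for z in (0.30, 0.10, 0.03, 0.01):
          n_z = max(1, round(np.log(delta / z)))
          u = rng.standard_normal((200_000, n_z)).sum(axis=1)  # one sample per row
          print(f"z/delta = {z:.2f}  N_z = {n_z}  var(u+) = {u.var():.2f}")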

  13. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan; Bachl, Fabian E.; Lindgren, Finn; Borchers, David L.; Illian, Janine B.; Buckland, Stephen T.; Rue, Haavard; Gerrodette, Tim

    2017-01-01

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
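
    The "thinned spatial point process" construction can be sketched in a few lines: animals follow an inhomogeneous Poisson process, and detection keeps each point with a probability that decays with distance from the transect. The intensity surface and half-normal detection scale below are invented, and the sketch carries none of the SPDE/INLA machinery of the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      lam_max = 50.0                                 # bound for rejection sampling
      lam = lambda x, y: lam_max * np.exp(-(x - 0.5) ** 2 / 0.1)   # toy intensity

      # inhomogeneous Poisson process on the unit square, simulated by thinning
      n = rng.poisson(lam_max)                       # dominating homogeneous count
      pts = rng.random((n, 2))
      pts = pts[rng.random(len(pts)) < lam(pts[:, 0], pts[:, 1]) / lam_max]

      # second thinning: half-normal detection around the transect line y = 0.5
      d = np.abs(pts[:, 1] - 0.5)
      detected = pts[rng.random(len(pts)) < np.exp(-d**2 / (2 * 0.05**2))]
      print(len(pts), "animals,", len(detected), "detected")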

  15. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    Science.gov (United States)

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate the neural signals to movement parameters. They usually assume the connection between the neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, and related processes, and it leads to degradation of the decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which suggests a promising way to design a long-term-performing model for Brain Machine Interface decoders.
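
    A dual estimation scheme of this flavour can be sketched with a bootstrap particle filter whose particles carry both the latent kinematic state and a slowly drifting tuning gain, with spike counts observed as Poisson with a log-linear rate. Everything below (the 1-D state, drift rates, Gaussian random walks) is a toy stand-in, not the paper's decoder.

      import numpy as np

      rng = np.random.default_rng(3)
      T, P, b = 200, 2000, 0.5
      x, a = 0.0, 1.0                                     # true state and tuning gain
      px = rng.standard_normal(P)                         # particles for the state
      pa = 1.0 + 0.1 * rng.standard_normal(P)             # particles for the gain
      err = []
      for t in range(T):
          x = 0.98 * x + 0.2 * rng.standard_normal()      # latent kinematics
          a += 0.005 * rng.standard_normal()              # non-stationary tuning
          y = rng.poisson(np.exp(a * x + b))              # observed spike count
          px = 0.98 * px + 0.2 * rng.standard_normal(P)   # propagate both the
          pa += 0.005 * rng.standard_normal(P)            # state and the parameter
          lam = np.exp(pa * px + b)
          ll = y * np.log(lam) - lam                      # Poisson log-likelihood
          w = np.exp(ll - ll.max()); w /= w.sum()
          idx = rng.choice(P, size=P, p=w)                # resample jointly
          px, pa = px[idx], pa[idx]
          err.append((px.mean() - x) ** 2)
      print("mean squared decoding error:", round(float(np.mean(err)), 3))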

  16. Oxygen concentration cell for the measurements of the standard molar Gibbs energy of formation of Nd6UO12(s)

    International Nuclear Information System (INIS)

    Sahu, Manjulata; Dash, Smruti

    2011-01-01

    The standard molar Gibbs energies of formation of Nd₆UO₁₂(s) have been measured using an oxygen concentration cell with yttria-stabilized zirconia as the solid electrolyte. Δ_fG°_m(T) for Nd₆UO₁₂(s) has been calculated using the measured data and the required thermodynamic data from the literature. The calculated Gibbs energy expression can be given as: Δ_fG°_m(Nd₆UO₁₂, s, T)/(kJ·mol⁻¹) ± 2.3 = -6660.1 + 1.0898 (T/K). (author)

  17. Investigation of the s-process branch-point nucleus {sup 86}Rb at HIγS

    Energy Technology Data Exchange (ETDEWEB)

    Erbacher, Philipp; Glorius, Jan; Reifarth, Rene; Sonnabend, Kerstin [Goethe Universitaet Frankfurt am Main (Germany); Isaak, Johann; Loeher, Bastian; Savran, Deniz [GSI Helmholzzentrum fuer Schwerionenforschung (Germany); Tornow, Werner [Duke University (United States)

    2016-07-01

    The branch-point nucleus ⁸⁶Rb determines the isotopic abundance ratio ⁸⁶Sr/⁸⁷Sr in s-process nucleosynthesis. Thus, stellar parameters such as temperature and neutron density and their evolution in time as simulated by modern s-process network calculations can be constrained by a comparison of the calculated isotopic ratio with the one observed in SiC meteoritic grains. To this end, the radiative neutron-capture cross section of the unstable isotope ⁸⁶Rb has to be known with sufficient accuracy. Since the short half-life of ⁸⁶Rb prohibits a direct measurement, the nuclear-physics input to a calculation of the cross section has to be measured. For this reason, the γ-ray strength function of ⁸⁷Rb was measured using the γ³ setup at the High Intensity γ-ray Source facility at TUNL in Durham, USA. First experimental results are presented.

  18. Coupling aerosol-cloud-radiative processes in the WRF-Chem model: Investigating the radiative impact of elevated point sources

    Directory of Open Access Journals (Sweden)

    E. G. Chapman

    2009-02-01

    Full Text Available The local and regional influence of elevated point sources on summertime aerosol forcing and cloud-aerosol interactions in northeastern North America was investigated using the WRF-Chem community model. The direct effects of aerosols on incoming solar radiation were simulated using existing modules to relate aerosol sizes and chemical composition to aerosol optical properties. Indirect effects were simulated by adding a prognostic treatment of cloud droplet number and adding modules that activate aerosol particles to form cloud droplets, simulate aqueous-phase chemistry, and tie a two-moment treatment of cloud water (cloud water mass and cloud droplet number) to precipitation and an existing radiation scheme. Fully interactive feedbacks thus were created within the modified model, with aerosols affecting cloud droplet number and cloud radiative properties, and clouds altering aerosol size and composition via aqueous processes, wet scavenging, and gas-phase-related photolytic processes. Comparisons of a baseline simulation with observations show that the model captured the general temporal cycle of aerosol optical depths (AODs) and produced clouds of comparable thickness to observations at approximately the proper times and places. The model overpredicted SO2 mixing ratios and PM2.5 mass, but reproduced the range of observed SO2 to sulfate aerosol ratios, suggesting that atmospheric oxidation processes leading to aerosol sulfate formation are captured in the model. The baseline simulation was compared to a sensitivity simulation in which all emissions at model levels above the surface layer were set to zero, thus removing stack emissions. Instantaneous, site-specific differences for aerosol and cloud related properties between the two simulations could be quite large, as removing above-surface emission sources influenced when and where clouds formed within the modeling domain. When summed spatially over the finest

  19. Dynamics of macro-observables and space-time inhomogeneous Gibbs ensembles

    International Nuclear Information System (INIS)

    Lanz, L.; Lupieri, G.

    1978-01-01

    The relationship between the classical description of a macro-system and quantum mechanics of its particles is considered within the framework recently developed by Ludwig. A procedure is given to define probability measures on the trajectory space of a macrosystem which yields a statistical description of the dynamics of a macrosystem. The basic tool in this treatment is a new concept of space-time inhomogeneous Gibbs ensemble, defined in N-body quantum mechanics. In the Gaussian approximation of the probabilities the results of Zubarev's theory based on the ''nonequilibrium statistical operator'' are recovered. The present ''embedding'' of the description of a macrosystem inside the N-body theory allows for a joint description of a macrosystem and a microsubsystem of it, and a ''macroscopical'' calculation of the statistical operator of the microsystem is indicated. (author)

  20. The osmotic second virial coefficient and the Gibbs-McMillan-Mayer framework

    DEFF Research Database (Denmark)

    Mollerup, J.M.; Breil, Martin Peter

    2009-01-01

    The osmotic second virial coefficient is a key parameter in light scattering, protein crystallisation, self-interaction chromatography, and osmometry. The interpretation of the osmotic second virial coefficient depends on the set of independent variables. This commonly includes the independent variables associated with the Kirkwood-Buff, the McMillan-Mayer, and the Lewis-Randall solution theories. In this paper we analyse the osmotic second virial coefficient using a Gibbs-McMillan-Mayer framework, which is similar to the McMillan-Mayer framework with the exception that pressure rather than volume is an independent variable. A Taylor expansion is applied to the osmotic pressure of a solution where one of the solutes is a small molecule, a salt for instance, that equilibrates between the two phases. Other solutes are retained. Solvents are small molecules that equilibrate between the two phases.

  1. Generalized Gibbs distribution and energy localization in the semiclassical FPU problem

    Science.gov (United States)

    Hipolito, Rafael; Danshita, Ippei; Oganesyan, Vadim; Polkovnikov, Anatoli

    2011-03-01

    We investigate the dynamics of the weakly interacting quantum mechanical Fermi-Pasta-Ulam (qFPU) model in the semiclassical limit below the stochasticity threshold. Within this limit we find that initial quantum fluctuations lead to the damping of FPU oscillations and relaxation of the system to a slowly evolving steady state with energy localized within a few momentum modes. We find that in large systems this state can be described by the generalized Gibbs ensemble (GGE), with the Lagrange multipliers being very weak functions of time. This ensemble gives an accurate description of the instantaneous correlation functions, both quadratic and quartic. Based on these results we conjecture that the GGE generically appears as a prethermalized state in weakly non-integrable systems.

  2. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.

    Science.gov (United States)

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
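
    The two scan orders are easy to compare on a toy target. Below, both samplers draw from a bivariate Gaussian with correlation ρ using exact conditionals; they agree on the stationary distribution and differ only in how coordinates are visited. The target and step counts are illustrative choices, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      rho, n_steps = 0.9, 100_000

      def gibbs(scan):
          # conditionals of the target: x_i | x_j ~ N(rho * x_j, 1 - rho^2)
          x, out = np.zeros(2), np.empty((n_steps, 2))
          for t in range(n_steps):
              coords = (0, 1) if scan == "systematic" else (rng.integers(2),)
              for i in coords:
                  x[i] = rho * x[1 - i] + np.sqrt(1 - rho**2) * rng.standard_normal()
              out[t] = x
          return out

      for scan in ("systematic", "random"):
          s = gibbs(scan)
          print(scan, "sample correlation:", round(np.corrcoef(s.T)[0, 1], 3))
      # both print ~0.9: same target; the scan order affects mixing speed only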

  3. Standard molar Gibbs free energy of formation of URh3(s)

    International Nuclear Information System (INIS)

    Prasad, Rajendra; Sayi, Y.S.; Radhakrishna, J.; Yadav, C.S.; Shankaran, P.S.; Chhapru, G.C.

    1992-01-01

    Equilibrium partial pressures of CO(g) over the system (UO₂(s) + C(s) + Rh(s) + URh₃(s)) were measured in the temperature range 1327-1438 K. The standard molar Gibbs free energy of formation of URh₃ (Δ_fG°_m) in the above temperature range can be expressed as Δ_fG°_m(URh₃, s, T) ± 3.0 (kJ/mol) = -348.165 + 0.03144 T(K). The second- and third-law enthalpies of formation, Δ_fH°_m(URh₃, s, 298.15 K), are (-318.4 ± 3.0) and (-298.3 ± 2.5) kJ/mol, respectively. (author). 7 refs., 3 tabs

  4. Ergodic time-reversible chaos for Gibbs' canonical oscillator

    International Nuclear Information System (INIS)

    Hoover, William Graham; Sprott, Julien Clinton; Patra, Puneet Kumar

    2015-01-01

    Nosé's pioneering 1984 work inspired a variety of time-reversible deterministic thermostats. Though several groups have developed successful doubly-thermostated models, single-thermostat models have failed to generate Gibbs' canonical distribution for the one-dimensional harmonic oscillator. A 2001 doubly-thermostated model, claimed to be ergodic, has a singly-thermostated version. Though neither of these models is ergodic this work has suggested a successful route toward singly-thermostated ergodicity. We illustrate both ergodicity and its lack for these models using phase-space cross sections and Lyapunov instability as diagnostic tools. - Highlights: • We develop cross-section and Lyapunov methods for diagnosing ergodicity. • We apply these methods to several thermostatted-oscillator problems. • We demonstrate the nonergodicity of previous work. • We find a novel family of ergodic thermostatted-oscillator problems.

  5. Extrapolation procedures for calculating high-temperature gibbs free energies of aqueous electrolytes

    International Nuclear Information System (INIS)

    Tremaine, P.R.

    1979-01-01

    Methods for calculating high-temperature Gibbs free energies of mononuclear cations and anions from room-temperature data are reviewed. Emphasis is given to species required for oxide solubility calculations relevant to mass transport situations in the nuclear industry. Free energies predicted by each method are compared to selected values calculated from recently reported solubility studies and other literature data. Values for monatomic ions estimated using the assumption C̄°p(T) = C̄°p(298) agree best with experiment up to 423 K. From 423 K to 523 K, free energies from an electrostatic model for ion hydration are more accurate. Extrapolations for hydrolyzed species are limited by a lack of room-temperature entropy data, and expressions for estimating these entropies are discussed. (orig.) [de]

  6. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Full Text Available Epistasis that may explain a large portion of the phenotypic variation for complex economic traits of animals has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs by using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.

  7. Robust identification of transcriptional regulatory networks using a Gibbs sampler on outlier sum statistic.

    Science.gov (United States)

    Gu, Jinghua; Xuan, Jianhua; Riggins, Rebecca B; Chen, Li; Wang, Yue; Clarke, Robert

    2012-08-01

    Identification of transcriptional regulatory networks (TRNs) is of significant importance in computational biology for cancer research, providing a critical building block to unravel disease pathways. However, existing methods for TRN identification suffer from the inclusion of excessive 'noise' in microarray data and false-positives in binding data, especially when applied to human tumor-derived cell line studies. More robust methods that can counteract the imperfection of data sources are therefore needed for reliable identification of TRNs in this context. In this article, we propose to establish a link between the quality of one target gene to represent its regulator and the uncertainty of its expression to represent other target genes. Specifically, an outlier sum statistic was used to measure the aggregated evidence for regulation events between target genes and their corresponding transcription factors. A Gibbs sampling method was then developed to estimate the marginal distribution of the outlier sum statistic, hence, to uncover underlying regulatory relationships. To evaluate the effectiveness of our proposed method, we compared its performance with that of an existing sampling-based method using both simulation data and yeast cell cycle data. The experimental results show that our method consistently outperforms the competing method in different settings of signal-to-noise ratio and network topology, indicating its robustness for biological applications. Finally, we applied our method to breast cancer cell line data and demonstrated its ability to extract biologically meaningful regulatory modules related to estrogen signaling and action in breast cancer. The Gibbs sampler MATLAB package is freely available at http://www.cbil.ece.vt.edu/software.htm. Contact: xuan@vt.edu. Supplementary data are available at Bioinformatics online.
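
    For readers unfamiliar with the outlier-sum statistic, one common form (following Tibshirani and Hastie, 2007) standardizes with the median and MAD and sums the values lying beyond the upper fence; the exact form used in the paper may differ, and its Gibbs sampler targets the marginal distribution of such a statistic. The sketch below only computes the statistic itself, on synthetic data.

      import numpy as np

      def outlier_sum(x):
          # standardize robustly, then sum values above Q3 + IQR
          med = np.median(x)
          mad = np.median(np.abs(x - med)) or 1.0      # guard against zero MAD
          z = (x - med) / mad
          q1, q3 = np.percentile(z, [25, 75])
          return z[z > q3 + (q3 - q1)].sum()

      rng = np.random.default_rng(2)
      background = rng.standard_normal(200)
      with_outliers = np.concatenate([background, [6.0, 7.5, 9.0]])
      print(outlier_sum(background), outlier_sum(with_outliers))
      # the aggregated evidence is near zero without outliers, large with them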

  8. Point process-based modeling of multiple debris flow landslides using INLA: an application to the 2009 Messina disaster

    KAUST Repository

    Lombardo, Luigi

    2018-02-13

    We develop a stochastic modeling approach based on spatial point processes of log-Gaussian Cox type for a collection of around 5000 landslide events provoked by a precipitation trigger in Sicily, Italy. Through the embedding into a hierarchical Bayesian estimation framework, we can use the integrated nested Laplace approximation methodology to make inference and obtain the posterior estimates of spatially distributed covariate and random effects. Several mapping units are useful to partition a given study area in landslide prediction studies. These units hierarchically subdivide the geographic space from the highest grid-based resolution to the stronger morphodynamic-oriented slope units. Here we integrate both mapping units into a single hierarchical model, by treating the landslide triggering locations as a random point pattern. This approach diverges fundamentally from the universally used presence-absence structure for areal units since we focus on modeling the expected landslide count jointly within the two mapping units. Predicting this landslide intensity provides more detailed and complete information as compared to the classically used susceptibility mapping approach based on relative probabilities. To illustrate the model's versatility, we compute absolute probability maps of landslide occurrences and check their predictive power over space. While the landslide community typically produces spatial predictive models for landslides only in the sense that covariates are spatially distributed, no actual spatial dependence has been explicitly integrated so far. Our novel approach features a spatial latent effect defined at the slope unit level, allowing us to assess the spatial influence that remains unexplained by the covariates in the model. For rainfall-induced landslides in regions where the raingauge network is not sufficient to capture the spatial distribution of the triggering precipitation event, this latent effect provides valuable imaging support

  9. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images

    Science.gov (United States)

    Tournaire, O.; Paparoditis, N.

    Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 70s. Many approaches dealing with various sensor resolutions, the nature of the scene or the desired accuracy of the extracted objects have been presented. This topic remains challenging today as the need for accurate and up-to-date data is becoming more and more important. In this context, we study in this paper the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, these are very useful clues, both as evidence of a road and as input for higher-level tasks. For instance, they can be used to enhance the quality of road databases and to improve them. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, and in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term measuring the consistency of the image with respect to the objects and a regularising term managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented showing that our
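
    The optimization loop described above (birth, death, and move proposals judged by an energy and accepted by Metropolis under a cooling temperature) can be sketched as below. The data term, repulsion term, and cooling schedule are invented for illustration; the real model scores dashed-line objects against image evidence.

      import numpy as np

      rng = np.random.default_rng(5)
      truth = rng.random((6, 2))          # hypothetical "true" mark centres

      def energy(pts):
          # data-attachment term: each point should sit near some true mark;
          # regularising term: penalise near-duplicate points and wrong counts
          d = np.linalg.norm(pts[:, None] - truth[None], axis=2).min(axis=1)
          gap = np.linalg.norm(pts[:, None] - pts[None], axis=2)
          rep = (gap[np.triu_indices(len(pts), 1)] < 0.05).sum()
          return d.sum() + 0.5 * rep + 0.1 * abs(len(pts) - len(truth))

      pts, T = rng.random((3, 2)), 1.0
      for _ in range(20_000):
          prop, move = pts.copy(), rng.integers(3)
          if move == 0:                                    # birth
              prop = np.vstack([prop, rng.random(2)])
          elif move == 1 and len(prop) > 1:                # death
              prop = np.delete(prop, rng.integers(len(prop)), axis=0)
          else:                                            # perturb one point
              prop[rng.integers(len(prop))] += 0.02 * rng.standard_normal(2)
          dE = energy(prop) - energy(pts)
          if dE <= 0 or rng.random() < np.exp(-dE / T):    # Metropolis rule
              pts = prop
          T = max(1e-3, T * 0.9997)                        # geometric cooling
      print(len(pts), "points, final energy", round(float(energy(pts)), 3))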

  10. Adaptation to Elastic Loads and BMI Robot Controls During Rat Locomotion examined with Point-Process GLMs.

    Directory of Open Access Journals (Sweden)

    Weiguo eSong

    2015-04-01

    Full Text Available Currently little is known about how a mechanically coupled BMI system's actions are integrated into ongoing body dynamics. We tested a locomotor task augmented with a BMI system driving a robot mechanically interacting with a rat under three conditions: control locomotion (BL), 'simple elastic load' (E), and 'BMI with elastic load' (BMI/E). The effect of the BMI was to allow compensation of the elastic load as a function of the neural drive. Neurons recorded here were close to one another in cortex, all within a 200 micron diameter horizontal distance of one another. The interactions of these close assemblies of neurons may differ from those among neurons at longer distances in BMI tasks and thus are important to explore. A point process generalized linear model (GLM) was used to examine connectivity at two different binning timescales (1 ms vs. 10 ms). We used GLMs to fit non-Poisson neural dynamics solely using other neurons' prior neural activity as covariates. Models at the different timescales were compared based on Kolmogorov-Smirnov (KS) goodness-of-fit and parsimony. About 15% of cells with non-Poisson firing were well fitted with the neuron-to-neuron models alone. More such cells were fitted at 1 ms binning than at 10 ms. Positive connection parameters ('excitation', ~70%) exceeded negative parameters ('inhibition', ~30%). Significant connectivity changes in the GLM-determined networks of well-fitted neurons occurred between the conditions. However, a common core of connections comprising at least ~15% of connections persisted between any two of the three conditions. Significantly, almost twice as many connections were in common between the two load conditions (~27%) compared to between either load condition and the baseline. This local point process GLM identified neural correlation structure, and the changes seen across task conditions in this neural subset may be intrinsic to cortex or due to feedback and input
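
    A point-process GLM of this kind can be sketched with off-the-shelf tools: bin the spikes, build lagged covariates from another neuron's spike train, and fit a Poisson GLM with a log link. The data below are synthetic, and the 1 ms bin size and 3-bin lag depth are arbitrary choices, not the paper's settings.

      import numpy as np
      import statsmodels.api as sm

      # Neuron A's count per bin is Poisson with log-rate driven by neuron B's
      # spikes in the preceding 3 bins; a positive fitted lag coefficient is
      # an "excitatory" connection in the sense used above.
      rng = np.random.default_rng(4)
      T, lags = 20_000, 3
      spikes_b = (rng.random(T) < 0.05).astype(float)
      hist = sum(np.roll(spikes_b, k) for k in range(1, lags + 1))  # B's recent past
      spikes_a = rng.poisson(0.01 * np.exp(1.2 * hist))             # A's response

      X = np.column_stack([np.roll(spikes_b, k) for k in range(1, lags + 1)])
      fit = sm.GLM(spikes_a[lags:], sm.add_constant(X[lags:]),
                   family=sm.families.Poisson()).fit()
      print(fit.params.round(2))   # intercept ~ log(0.01), positive lag weights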

  11. The Concentration Dependence of the ΔS Term in the Gibbs Free Energy Function: Application to Reversible Reactions in Biochemistry

    Science.gov (United States)

    Gary, Ronald K.

    2004-01-01

    The concentration dependence of the ΔS term in the Gibbs free energy function is described in relation to its application to reversible reactions in biochemistry. An intuitive and non-mathematical argument for the concentration dependence of the ΔS term in the Gibbs free energy equation is derived, and the applicability of the equation to…

  12. A simple approach to the solvent reorganisation Gibbs free energy in electron transfer reactions of redox metalloproteins

    DEFF Research Database (Denmark)

    Ulstrup, Jens

    1999-01-01

    We discuss a simple model for the environmental reorganisation Gibbs free energy, E_r, in electron transfer between a metalloprotein and a small reaction partner. The protein is represented as a dielectric globule with a low dielectric constant, the metal centres as conducting spheres, all embedded...

  13. Gibbs Ensemble Simulation on Polarizable Models: Vapor-liquid Equilibrium in Baranyai-Kiss Models of Water

    Czech Academy of Sciences Publication Activity Database

    Moučka, F.; Nezbeda, Ivo

    2013-01-01

    Roč. 360, DEC 25 (2013), s. 472-476 ISSN 0378-3812 Grant - others:GA MŠMT(CZ) LH12019 Institutional support: RVO:67985858 Keywords : multi-particle move monte carlo * Gibbs ensemble * vapor-liquid-equilibria Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.241, year: 2013

  14. The importance of the photosynthetic Gibbs effect in the elucidation of the Calvin-Benson-Bassham cycle.

    Science.gov (United States)

    Ebenhöh, Oliver; Spelberg, Stephanie

    2018-02-19

    The photosynthetic carbon reduction cycle, or Calvin-Benson-Bassham (CBB) cycle, is now contained in every standard biochemistry textbook. Although the cycle was already proposed in 1954, it is still the subject of intense research, and even the structure of the cycle, i.e. the exact series of reactions, is still under debate. The controversy about the cycle's structure was fuelled by the findings of Gibbs and Kandler in 1956 and 1957, when they observed that radioactive ¹⁴CO₂ was dynamically incorporated into hexoses in a very atypical and asymmetrical way, a phenomenon later termed the 'photosynthetic Gibbs effect'. Now, it is widely accepted that the photosynthetic Gibbs effect is not in contradiction to the reaction scheme proposed by CBB, but the arguments given have been largely qualitative and hand-waving. To fully appreciate the controversy and to understand the difficulties in interpreting the Gibbs effect, it is illustrative to illuminate the history of the discovery of the CBB cycle. We here give an account of central scientific advances and discoveries which were essential prerequisites for the elucidation of the cycle. Placing the historic discoveries in the context of the modern textbook pathway scheme illustrates the complexity of the cycle and demonstrates why dynamic labelling experiments in particular are far from easy to interpret. We conclude by arguing that sound theoretical approaches are required to resolve conflicting interpretations and to provide consistent quantitative explanations. © 2018 The Author(s).

  15. Algunas Precisiones en torno a las funciones termodinámicas energía libre de Gibbs

    OpenAIRE

    Solaz Portolés, Joan Josep; Quílez Pardo, Juan

    2001-01-01

    The aim of this study is to elucidate some didactic misunderstandings related to the use and applicability of the delta functions ΔG, ΔrG and ΔrG°, which derive from the thermodynamic potential Gibbs free energy, G.

  16. Comparison of second-generation processes for the conversion of sugarcane bagasse to liquid biofuels in terms of energy efficiency, pinch point analysis and Life Cycle Analysis

    International Nuclear Information System (INIS)

    Petersen, A.M.; Melamu, Rethabi; Knoetze, J.H.; Görgens, J.F.

    2015-01-01

    Highlights: • Process evaluation of thermochemical and biological routes for bagasse to fuels. • Pinch point analysis increases overall efficiencies by reducing utility consumption. • Advanced biological route increased efficiency and local environmental impacts. • Thermochemical routes have the highest efficiencies and low life cycle impacts. - Abstract: Three alternative processes for the production of liquid transportation biofuels from sugarcane bagasse were compared from the perspective of energy efficiency, using process modelling, Process Environmental Assessments and Life Cycle Assessment. Bio-ethanol via two biological processes was considered, i.e. Separate Hydrolysis and Fermentation (Process 1) and Simultaneous Saccharification and Fermentation (Process 2), in comparison to Gasification and Fischer-Tropsch synthesis for the production of synthetic fuels (Process 3). The energy efficiency of each process scenario was maximised by pinch point analysis for heat integration. The more advanced bio-ethanol process was Process 2, with a higher energy efficiency of 42.3%. Heat integration was critical for Process 3, whereby the energy efficiency was increased from 51.6% to 55.7%. For both the Process Environmental Assessment and the Life Cycle Assessment, Process 3 had the least potential for detrimental environmental impacts, due to its relatively high energy efficiency. Process 2 had the greatest Process Environmental Impact due to the intensive use of processing chemicals. Regarding the Life Cycle Assessments, Process 1 was the most severe due to its low energy efficiency.

  17. How Does the Gibbs Inequality Condition Affect the Stability and Detachment of Floating Spheres from the Free Surface of Water?

    Science.gov (United States)

    Feng, Dong-xia; Nguyen, Anh V

    2016-03-01

    Floating objects on air-water interfaces are central to a number of everyday activities, from walking on water by insects to flotation separation of valuable minerals using air bubbles. The available theories show that a fine sphere can float if the forces of surface tension and buoyancy can support the sphere at the interface, with the apical angle subtended by the circle of contact being larger than the contact angle. Here we show that the pinning of the contact line at a sharp edge, known as the Gibbs inequality condition, also plays a significant role in controlling the stability and detachment of floating spheres. Specifically, we truncated the spheres at different angles and used a force sensor device to measure the force required to push the truncated spheres from the interface into water. We also developed a theoretical model to calculate the pushing force which, in combination with the experimental results, shows the different effects of the Gibbs inequality condition on the stability and detachment of the spheres from the water surface. For small angles of truncation, the Gibbs inequality condition does not affect the sphere detachment, and hence the classical theories on the floatability of spheres are valid. For large truncated angles, the Gibbs inequality condition determines the tenacity of the particle-meniscus contact and the stability and detachment of floating spheres. In this case, the classical theories on the floatability of spheres are no longer valid. A critical truncated angle for the transition from the classical to the Gibbs inequality regimes of detachment was also established. The outcomes of this research advance our understanding of the behavior of floating objects, in particular the flotation separation of valuable minerals, which often contain various sharp edges on their crystal faces.

  18. Hygienic-sanitary working practices and implementation of a Hazard Analysis and Critical Control Point (HACCP plan in lobster processing industries

    Directory of Open Access Journals (Sweden)

    Cristina Farias da Fonseca

    2013-03-01

    Full Text Available This study aimed to verify the hygienic-sanitary working practices and to create and implement a Hazard Analysis and Critical Control Point (HACCP) plan in two lobster processing industries in Pernambuco State, Brazil. The industries studied process frozen whole lobsters, frozen whole cooked lobsters, and frozen lobster tails for exportation. The application of the hygienic-sanitary checklist in the industries analyzed achieved conformity rates over 96% for the aspects evaluated. The application of the Hazard Analysis and Critical Control Point (HACCP) plan resulted in the detection of two critical control points (CCPs), comprising the receiving and classification steps in the processing of frozen lobster and frozen lobster tails, and an additional critical control point (CCP) detected during the cooking step of the processing of the whole frozen cooked lobster. The proper implementation of the Hazard Analysis and Critical Control Point (HACCP) plan in the lobster processing industries studied proved to be the safest and most cost-effective method to monitor the hazards at each critical control point (CCP).

  19. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as they are for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr₂O₃(s) and FeO·Cr₂O₃(s). The atmosphere consists of O₂, CO and CO₂ constituents. The liquid iron
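
    The same constrained minimization that Solver performs in a spreadsheet can be sketched with a generic optimizer. The toy system below (CO + 1/2 O₂ <-> CO₂ in an ideal-gas mixture, with placeholder standard Gibbs energies) is an assumption made for illustration; it is not the Fe-Cr-O-C-Ni system of the paper.

      import numpy as np
      from scipy.optimize import minimize

      # Minimize G = sum n_i * (g_i + RT ln(n_i / n_tot)) subject to the
      # elemental mass balance A @ n = b, mirroring the Solver setup.
      R, T = 8.314e-3, 1500.0                         # kJ/(mol K), K
      g0 = np.array([-250.0, 0.0, -396.0])            # hypothetical g_i for CO, O2, CO2
      A = np.array([[1, 0, 1],                        # carbon in CO, O2, CO2
                    [1, 2, 2]])                       # oxygen in CO, O2, CO2
      b = A @ np.array([1.0, 0.5, 0.0])               # start from 1 CO + 0.5 O2

      def gibbs(n):
          n = np.clip(n, 1e-10, None)
          return float(n @ (g0 + R * T * np.log(n / n.sum())))

      res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]),
                     bounds=[(1e-10, None)] * 3,
                     constraints={"type": "eq", "fun": lambda n: A @ n - b})
      print(dict(zip(["CO", "O2", "CO2"], res.x.round(4))))   # mostly CO2 here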

  20. Design and development of cell queuing, processing, and scheduling modules for the iPOINT input-buffered ATM testbed

    Science.gov (United States)

    Duan, Haoran

    1997-12-01

    This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices and SRAM modules, integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves the output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time. The algorithm originates from a

  1. Quantifying the effect of sea level rise and flood defence - a point process perspective on coastal flood damage

    Science.gov (United States)

    Boettle, M.; Rybski, D.; Kropp, J. P.

    2016-02-01

    In contrast to recent advances in projecting sea levels, estimates of the economic impact of sea level rise remain vague. Nonetheless, they are of great importance for policy making with regard to adaptation and greenhouse-gas mitigation. Since the damage is mainly caused by extreme events, we propose a stochastic framework to estimate the monetary losses from coastal floods in a confined region. For this purpose, we follow a Peak-over-Threshold approach employing a Poisson point process and the Generalised Pareto Distribution. By considering the effect of sea level rise as well as potential adaptation scenarios on the involved parameters, we are able to study the development of the annual damage. An application to the city of Copenhagen shows that a doubling of losses can be expected from a mean sea level increase of only 11 cm. In general, we find that for varying parameters the expected losses can be well approximated by one of three analytical expressions, depending on the extreme value parameters. These findings reveal the complex interplay of the involved parameters and allow conclusions of fundamental relevance. For instance, we show that the damage typically increases faster than the sea level rise itself. This in turn can be of great importance for the assessment of sea level rise impacts on the global scale. Our results are accompanied by an assessment of uncertainty, which reflects the stochastic nature of extreme events. While the absolute value of uncertainty about the flood damage increases with rising mean sea levels, we find that it decreases in relation to the expected damage.
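
    The Peak-over-Threshold machinery can be sketched by Monte Carlo: events arrive as a Poisson process, exceedances over the defence height follow a Generalised Pareto Distribution, and damage grows with the water level. All parameter values and the damage function below are invented for illustration, and shifting each exceedance by the sea level rise is a simplification (a rise also changes the exceedance rate).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      lam, xi, sigma = 0.5, 0.2, 0.3          # events/yr, GPD shape and scale (m)
      damage = lambda h: 10.0 * h**1.5        # toy loss vs level above defence

      def mean_annual_damage(sea_level_rise, n_years=200_000):
          n_events = rng.poisson(lam * n_years)
          excess = stats.genpareto.rvs(xi, scale=sigma, size=n_events,
                                       random_state=rng) + sea_level_rise
          return damage(excess).sum() / n_years

      for slr in (0.0, 0.11, 0.25):
          print(f"mean sea level rise {slr:.2f} m -> "
                f"annual damage {mean_annual_damage(slr):.2f}")
      # the damage grows faster than the sea level rise itself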

  2. The Gibbs free energy of homogeneous nucleation: From atomistic nuclei to the planar limit.

    Science.gov (United States)

    Cheng, Bingqing; Tribello, Gareth A; Ceriotti, Michele

    2017-09-14

    In this paper we discuss how the information contained in atomistic simulations of homogeneous nucleation should be used when fitting the parameters in macroscopic nucleation models. We show how the number of solid and liquid atoms in such simulations can be determined unambiguously by using a Gibbs dividing surface and how the free energy as a function of the number of solid atoms in the nucleus can thus be extracted. We then show that the parameters (the chemical potential, the interfacial free energy, and a Tolman correction) of a model based on classical nucleation theory can be fitted using the information contained in these free-energy profiles but that the parameters in such models are highly correlated. This correlation is unfortunate as it ensures that small errors in the computed free energy surface can give rise to large errors in the extrapolated properties of the fitted model. To resolve this problem we thus propose a method for fitting macroscopic nucleation models that uses simulations of planar interfaces and simulations of three-dimensional nuclei in tandem. We show that when the chemical potentials and the interface energy are pinned to their planar-interface values, more precise estimates for the Tolman length are obtained. Extrapolating the free energy profile obtained from small simulation boxes to larger nuclei is thus more reliable.
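
    The parameter correlation discussed above can be reproduced on synthetic data: fitting a classical-nucleation-theory profile G(n) = -Δμ·n + γ_eff·n^(2/3) is linear in the parameters, and the two estimates come out strongly correlated because the basis functions are nearly collinear over typical nucleus sizes. The profile, noise level, and the absence of a Tolman correction are assumptions of this sketch.

      import numpy as np

      # synthetic free-energy profile with known dmu = 0.10, gamma_eff = 0.80
      n = np.arange(20, 400, dtype=float)
      rng = np.random.default_rng(8)
      g = -0.10 * n + 0.80 * n ** (2.0 / 3.0) + 0.3 * rng.standard_normal(n.size)

      X = np.column_stack([-n, n ** (2.0 / 3.0)])      # linear-in-parameters fit
      coef, *_ = np.linalg.lstsq(X, g, rcond=None)
      cov = np.linalg.inv(X.T @ X) * 0.3**2            # parameter covariance
      corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
      print("dmu, gamma_eff =", coef.round(3), " estimate correlation =",
            round(float(corr), 3))                     # correlation close to 1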

  3. A Gibbs potential expansion with a quantic system made up of a large number of particles

    International Nuclear Information System (INIS)

    Bloch, Claude; Dominicis, Cyrano de

    1959-01-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low-density systems at low temperature. In the zero-density limit, it reduces to the Beth-Uhlenbeck expression for the second virial coefficient. For a system of fermions in the zero-temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground-state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe-Goldstone equation to arbitrary temperature. Reprint of a paper published in Nuclear Physics 10, 1959, p. 181-196 [fr]

  4. Periodic p-adic Gibbs Measures of q-State Potts Model on Cayley Trees I: The Chaos Implies the Vastness of the Set of p-Adic Gibbs Measures

    Science.gov (United States)

    Ahmad, Mohd Ali Khameini; Liao, Lingmin; Saburov, Mansoor

    2018-06-01

    We study the set of p-adic Gibbs measures of the q-state Potts model on the Cayley tree of order three. We prove the vastness of the set of the periodic p-adic Gibbs measures for such a model by showing the chaotic behavior of the corresponding Potts-Bethe mapping over Q_p for the prime numbers p ≡ 1 (mod 3). In fact, for 0 < |θ − 1|_p < |q|_p² < 1, where θ = exp_p(J) and J is a coupling constant, there exists a subsystem that is isometrically conjugate to the full shift on three symbols. Meanwhile, for 0 < |q|_p² ≤ |θ − 1|_p < |q|_p < 1, there exists a subsystem that is isometrically conjugate to a subshift of finite type on r symbols, where r ≥ 4. However, these subshifts on r symbols are all topologically conjugate to the full shift on three symbols. The p-adic Gibbs measures of the same model for the prime numbers p = 2, 3 and the corresponding Potts-Bethe mapping are also discussed. On the other hand, for 0 < |θ − 1|_p < |q|_p < 1, we remark that the Potts-Bethe mapping is not chaotic when p = 3 and p ≡ 2 (mod 3), and we could not conclude the vastness of the set of the periodic p-adic Gibbs measures. In a forthcoming paper with the same title, we will treat the case 0 < |q|_p ≤ |θ − 1|_p < 1 for all prime numbers p.

  5. Gibbs energy of the resolvation of glycylglycine and its anion in aqueous solutions of dimethylsulfoxide at 298.15 K

    Science.gov (United States)

    Naumov, V. V.; Isaeva, V. A.; Kuzina, E. N.; Sharnin, V. A.

    2012-12-01

    Gibbs energies for the transfer of glycylglycine and glycylglycinate ions from water to water-dimethylsulfoxide solvents were determined from the distribution of the substances between immiscible phases in the composition range of 0.00 to 0.20 molar fractions of DMSO at 298.15 K. It is shown that, with a rise in the concentration of the nonaqueous component in solution, the solvation of the dipeptide and its anion weakens, due mainly to the destabilization of the carboxyl group.

  6. The thermodynamic approach to boron chemical vapour deposition based on a computer minimization of the total Gibbs free energy

    International Nuclear Information System (INIS)

    Naslain, R.; Thebault, J.; Hagenmuller, P.; Bernard, C.

    1979-01-01

    A thermodynamic approach based on the minimization of the total Gibbs free energy of the system is used to study the chemical vapour deposition (CVD) of boron from BCl₃-H₂ or BBr₃-H₂ mixtures on various types of substrates (at 1000 < T < 1900 K and 1 atm). In this approach it is assumed that states close to equilibrium are reached in the boron CVD apparatus. (Auth.)

  7. MINIMALISM IN A PSYCHOLINGUISTIC POINT OF VIEW: BINDING PRINCIPLES AND ITS OPERATION IN ON-LINE PROCESSING OF COREFERENCE

    Directory of Open Access Journals (Sweden)

    José Ferrari Neto

    2014-12-01

    Full Text Available This article aims to evaluate how well a formal model of Grammar can be applied to the on-line mental processes underlying sentential processing. To this end, an experiment was carried out to observe how the Binding Principles act in the processing of coreferential relations in Brazilian Portuguese (BP). The results suggest that there is a convergence between linguistic computation and theories about linguistic processing.

  8. NJOY processed multigroup library for fast reactor applications and point data library for MCNP - Experience and validation

    International Nuclear Information System (INIS)

    Kim Jung-Do; Gil Choong-Sup

    1996-01-01

    A JEF-1-based 50-group cross-section library for fast reactor applications and a point data library for the continuous-energy Monte Carlo code MCNP have been generated using the NJOY91.38 system. They have been examined by analyzing measured integral quantities such as criticality and central reaction rate ratios for 8 small fast critical assemblies. (author). 9 refs, 2 figs, 10 tabs

  9. The Charlie-Gibbs Fracture Zone: A Crossroads of the Atlantic Meridional Overturning Circulation

    Science.gov (United States)

    Bower, A. S.; Furey, H. H.; Xu, X.

    2016-02-01

    The Charlie-Gibbs Fracture Zone (CGFZ), a deep gap in the Mid-Atlantic Ridge at 52°N, is the primary conduit for westward-flowing Iceland-Scotland Overflow Water (ISOW), which merges with Denmark Strait Overflow Water to form the Deep Western Boundary Current. The CGFZ has also been shown to "funnel" the path of the northern branch of the eastward-flowing North Atlantic Current (NAC), thereby bringing these two branches of the AMOC into close proximity. A recent two-year time series of hydrographic properties and currents from eight tall moorings across the CGFZ offers the first opportunity to investigate the NAC as a source of variability for ISOW transport. The two-year mean and standard deviation of ISOW transport were -1.7 ± 1.5 Sv, compared to -2.4 ± 3.0 Sv reported by Saunders for a 13-month period in 1988-1989. Differences in the two estimates are partly explained by limitations of the Saunders array, but more importantly reflect the strong low-frequency variability in ISOW transport through the CGFZ (which includes complete reversals). Both the observations and output from a multi-decadal simulation of the North Atlantic using the Hybrid Coordinate Ocean Model (HYCOM), forced with interannually varying wind and buoyancy fields, indicate a strong positive correlation between ISOW transport and the strength of the NAC through the CGFZ (a stronger eastward NAC is related to weaker westward ISOW transport). The vertical structure of the low-frequency current variability and the water mass structure in the CGFZ will also be discussed. The results have implications regarding the interaction of the upper and lower limbs of the AMOC, and the downstream propagation of ISOW transport variability in the Deep Western Boundary Current.

  10. Modeling Electric Double-Layer Capacitors Using Charge Variation Methodology in Gibbs Ensemble

    Directory of Open Access Journals (Sweden)

    Ganeshprasad Pavaskar

    2018-01-01

    Full Text Available Supercapacitors deliver higher power than batteries and find applications in grid integration and electric vehicles. Recent work by Chmiola et al. (2006) has revealed an unexpected increase in the capacitance of porous carbon electrodes using ionic liquids as electrolytes. The work has generated curiosity among both experimentalists and theoreticians. Here, we have performed molecular simulations using a recently developed technique (Punnathanam, 2014) for simulating supercapacitor systems. In this technique, the two electrodes (each containing electrolyte in a slit pore) are simulated in two different boxes using the Gibbs ensemble methodology. This reduces the number of particles required and the interfacial interactions, which helps in reducing the computational load. The method simulates an electric double-layer capacitor (EDLC) with macroscopic electrodes using much smaller system sizes. In addition, the charges on individual electrode atoms are allowed to vary in response to the movement of electrolyte ions (i.e., the electrode is polarizable) while ensuring these atoms are at the same electric potential. We also present the application of our technique to EDLCs with the electrodes modeled as slit pores and as complex three-dimensional pore networks for different electrolyte geometries. The smallest pore geometry showed an increase in capacitance toward the potential of zero charge. This is in agreement with the new understanding of the electrical double layer in regions of dense ionic packing, as noted in Kornyshev's theoretical model (Kornyshev, 2007), which also showed a similar trend. This is not addressed by the classical Gouy-Chapman theory for the electric double layer. Furthermore, the electrode polarizability simulated in the model improved the accuracy of the calculated capacitance. However, its addition did not significantly alter the capacitance values in the voltage range considered.

  11. Gibbs Free-Energy Gradient along the Path of Glucose Transport through Human Glucose Transporter 3.

    Science.gov (United States)

    Liang, Huiyun; Bourdon, Allen K; Chen, Liao Y; Phelix, Clyde F; Perry, George

    2018-06-11

    Fourteen glucose transporters (GLUTs) play essential roles in human physiology by facilitating glucose diffusion across the cell membrane. Due to its central role in the energy metabolism of the central nervous system, GLUT3 has been thoroughly investigated. However, the Gibbs free-energy gradient (what drives the facilitated diffusion of glucose) has not been mapped out along the transport path. Some fundamental questions remain. Here we present a molecular dynamics study of GLUT3 embedded in a lipid bilayer to quantify the free-energy profile along the entire transport path of attracting a β-d-glucose from the interstitium to the inside of GLUT3 and, from there, releasing it to the cytoplasm by Arrhenius thermal activation. From the free-energy profile, we elucidate the unique Michaelis-Menten characteristics of GLUT3, low K_M and high V_max, specifically suitable for neurons' high and constant demand of energy from their low-glucose environments. We compute GLUT3's binding free energy for β-d-glucose to be -4.6 kcal/mol, in agreement with the experimental value of -4.4 kcal/mol (K_M = 1.4 mM). We also compute the hydration energy of β-d-glucose, -18.0 kcal/mol, vs the experimental data, -17.8 kcal/mol. In this, we establish a dynamics-based connection from GLUT3's crystal structure to its cellular thermodynamics with quantitative accuracy. We predict equal Arrhenius barriers for glucose uptake and efflux through GLUT3, to be tested in future experiments.
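
    As a rough consistency check (the relation below is a standard approximation supplied by us, not taken from the abstract), the binding free energy and the Michaelis constant can be connected through the standard-state expression

        $$ \Delta G^{\circ}_{\mathrm{bind}} \approx RT \ln\!\left(\frac{K_M}{c^{\circ}}\right), \qquad c^{\circ} = 1\ \mathrm{M}, $$

    which with K_M = 1.4 mM and T = 310 K gives roughly -4 kcal/mol, the same order as the -4.4 kcal/mol quoted above; the residual difference reflects the approximation of treating K_M as an equilibrium dissociation constant.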

  12. Modelling metal-humate interactions: an approach based on the Gibbs-Donnan concept

    International Nuclear Information System (INIS)

    Ephraim, J.H.

    1995-01-01

    Humic and fulvic acids constitute an appreciable portion of the organic substances in both aquatic and terrestrial environments. Their ability to sequester metal ions and other trace elements has engaged the interest of numerous environmental scientists, and even though considerable advances have been made, much remains unknown in this area. Their high molecular weight fractions and functional-group heterogeneity endow these substances with ion-exchange characteristics; for example, the cation exchange capacities of some humic substances have been compared to those of smectites. Recent developments in solution chemistry have also indicated that humic substances can interact with other anions because of their amphiphilic nature. In this paper, metal-humate interaction is described by relying heavily on information obtained from treating the solution chemistry of ion exchangers as typical polymers. In such a treatment, perturbations to the metal-humate interaction are estimated by recourse to the Gibbs-Donnan concept, in which the humic substance molecule is envisaged as having a potential counter-ion-concentrating region around its molecular domain, into which diffusible components can enter or leave depending on their corresponding electrochemical potentials. Information from studies with ion exchangers has been adapted to describe ionic equilibria involving these substances, making it possible to characterise the configuration/conformation of these natural organic acids and to correct for electrostatic effects in the metal-humate interaction. The resulting unified physicochemical approach has facilitated the identification and estimation of the complications in the solution chemistry of humic substances. (authors). 15 refs., 1 fig

  13. SUBLIMATION-DRIVEN ACTIVITY IN MAIN-BELT COMET 313P/GIBBS

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Henry H. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Hainaut, Olivier [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching bei München (Germany); Novaković, Bojan [Department of Astronomy, Faculty of Mathematics, University of Belgrade, Studentski trg 16, 11000 Belgrade (Serbia); Bolin, Bryce [Observatoire de la Côte d’Azur, Boulevard de l’Observatoire, B.P. 4229, F-06304 Nice Cedex 4 (France); Denneau, Larry; Haghighipour, Nader; Kleyna, Jan; Meech, Karen J.; Schunova, Eva; Wainscoat, Richard J. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Fitzsimmons, Alan [Astrophysics Research Centre, Queens University Belfast, Belfast BT7 1NN (United Kingdom); Kokotanekova, Rosita; Snodgrass, Colin [Planetary and Space Sciences, Department of Physical Sciences, The Open University, Milton Keynes MK7 6AA (United Kingdom); Lacerda, Pedro [Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Micheli, Marco [ESA SSA NEO Coordination Centre, Frascati, RM (Italy); Moskovitz, Nick; Wasserman, Lawrence [Lowell Observatory, 1400 W. Mars Hill Road, Flagstaff, AZ 86001 (United States); Waszczak, Adam, E-mail: hhsieh@asiaa.sinica.edu.tw [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)

    2015-02-10

    We present an observational and dynamical study of newly discovered main-belt comet 313P/Gibbs. We find that the object is clearly active both in observations obtained in 2014 and in precovery observations obtained in 2003 by the Sloan Digital Sky Survey, strongly suggesting that its activity is sublimation-driven. This conclusion is supported by a photometric analysis showing an increase in the total brightness of the comet over the 2014 observing period, and dust modeling results showing that the dust emission persists over at least three months during both active periods, where we find start dates for emission no later than 2003 July 24 ± 10 for the 2003 active period and 2014 July 28 ± 10 for the 2014 active period. From serendipitous observations by the Subaru Telescope in 2004 when the object was apparently inactive, we estimate that the nucleus has an absolute R-band magnitude of H_R = 17.1 ± 0.3, corresponding to an effective nucleus radius of r_e ∼ 1.00 ± 0.15 km. The object's faintness at that time means we cannot rule out the presence of activity, and so this computed radius should be considered an upper limit. We find that 313P's orbit is intrinsically chaotic, having a Lyapunov time of T_l = 12,000 yr and being located near two three-body mean-motion resonances with Jupiter and Saturn, 11J-1S-5A and 10J+12S-7A, yet appears stable over >50 Myr in an apparent example of stable chaos. We furthermore find that 313P is the second main-belt comet, after P/2012 T1 (PANSTARRS), to belong to the ∼155 Myr old Lixiaohua asteroid family.

  14. Comparison of Boltzmann and Gibbs entropies for the analysis of single-chain phase transitions

    Science.gov (United States)

    Shakirov, T.; Zablotskiy, S.; Böker, A.; Ivanov, V.; Paul, W.

    2017-03-01

    In the last 10 years, flat histogram Monte Carlo simulations have contributed strongly to our understanding of the phase behavior of simple generic models of polymers. These simulations result in an estimate for the density of states of a model system. To connect this result with thermodynamics, one has to relate the density of states to the microcanonical entropy. In a series of publications, Dunkel, Hilbert and Hänggi argued that it would lead to a more consistent thermodynamic description of small systems, when one uses the Gibbs definition of entropy instead of the Boltzmann one. The latter is the logarithm of the density of states at a certain energy, the former is the logarithm of the integral of the density of states over all energies smaller than or equal to this energy. We will compare the predictions using these two definitions for two polymer models, a coarse-grained model of a flexible-semiflexible multiblock copolymer and a coarse-grained model of the protein poly-alanine. Additionally, it is important to note that while Monte Carlo techniques are normally concerned with the configurational energy only, the microcanonical ensemble is defined for the complete energy. We will show how taking the kinetic energy into account alters the predictions from the analysis. Finally, the microcanonical ensemble is supposed to represent a closed mechanical N-particle system. But due to Galilei invariance such a system has two additional conservation laws, in general: momentum and angular momentum. We will also show, how taking these conservation laws into account alters the results.
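
    In symbols (a standard formulation consistent with the description above; ω denotes the density of states and ε₀ is a constant fixing dimensions), the two competing definitions are

        $$ S_{\mathrm{B}}(E) = k_{\mathrm{B}} \ln\big[\varepsilon_0\,\omega(E)\big], \qquad S_{\mathrm{G}}(E) = k_{\mathrm{B}} \ln \Omega(E), \quad \Omega(E) = \int_{E_{\min}}^{E} \omega(E')\,\mathrm{d}E', $$

    so the Gibbs entropy is built from the integrated density of states at or below E, while the Boltzmann entropy uses the density of states at E itself.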

  15. Gibbs-Thomson Law for Singular Step Segments: Thermodynamics Versus Kinetics

    Science.gov (United States)

    Chernov, A. A.

    2003-01-01

    Classical Burton-Cabrera-Frank theory presumes that thermal fluctuations are so fast that at any time the density of kinks on a step is comparable with the reciprocal intermolecular distance, so that the step rate is approximately isotropic within the crystal plane. Such azimuthal isotropy is, however, often not the case: kink density may be much lower. In particular, it was recently found on the (010) face of orthorhombic lysozyme that the interkink distance may exceed 500-600 intermolecular distances. Under such conditions, the Gibbs-Thomson law (GTL) may not be applicable: on a straight step segment between two corners, communication between the corners occurs exclusively by kink exchange, and annihilation between kinks of opposite sign generated at the corners is what brings the step energy into the GTL. If the step segment length l ≫ D/v, where D and v are the kink diffusivity and propagation rate, respectively, the opposite kinks have practically no chance to annihilate and the GTL is not applicable. The opposite condition of GTL applicability, l ≪ D/v, is equivalent to the requirement that the relative supersaturation Δμ/kT ≪ α/l, where α is the molecular size. Thus, the GTL may be applied to a segment of 10³α ≈ 3 × 10⁻⁵ cm ≈ 0.3 μm only if the supersaturation is less than 0.1%, while the driving forces used in practice for crystallization are much larger. Relationships alternative to the GTL for different, but low, kink densities are discussed. They confirm experimental evidence that the Burton-Cabrera-Frank theory of spiral growth predicts growth rates about twice as low as the observed figures. Also, application of the GTL results in unrealistic step energies, while the suggested kinetic laws give reasonable figures.

  16. Estimation of Genetic Parameters for Direct and Maternal Effects in Growth Traits of Sangsari Sheep Using Gibbs Sampling

    Directory of Open Access Journals (Sweden)

    Zohreh Yousefi

    2016-11-01

    Full Text Available Introduction Small ruminants, especially native breed types, play an important role in the livelihoods of a considerable part of the human population in the tropics, from socio-economic aspects. Therefore, an integrated attempt in terms of management and genetic improvement to enhance production is of crucial importance. Knowledge of genetic variation and co-variation among traits is required both for the design of effective sheep breeding programs and for the accurate prediction of genetic progress from these programs. Body weight and growth traits are among the economically important traits in sheep production, especially in Iran, where lamb sale is the main source of income for sheep breeders while other products are of secondary importance. Although mutton is the most important source of protein in Iran, meat production from sheep does not cover the increasing consumer demand. On the other hand, increasing sheep numbers to increase meat production has been limited by the low quality and quantity of forage range. Therefore, enhancing meat production should be achieved by selecting the animals that have maximum genetic merit as next-generation parents. To design an efficient improvement program and genetic evaluation system that maximizes the response to selection for economically important traits, accurate estimates of the genetic parameters and the genetic relationships between the traits are necessary. Studies of various sheep breeds have shown that both direct and maternal genetic influences are important for lamb growth. When growth traits are included in the breeding goal, both direct and maternal genetic effects should be taken into account in order to achieve optimum genetic progress. The objective of this study was to estimate the variance components and heritability for growth traits, by fitting six animal models in the Sangsari sheep using Gibbs sampling. Material and Method Sangsari is a fat-tailed and relatively small-sized breed of sheep

  17. Effect of the temperature and dew point of the decarburization process on the oxide subscale of a 3% silicon steel

    Energy Technology Data Exchange (ETDEWEB)

    Cesar, Maria das Gracas M.M. E-mail: gracamelo@acesita.com.br; Mantel, Marc J

    2003-01-01

    The oxide subscale formed on the decarburization annealing of 3% Si-Fe was investigated using microscopy and spectroscopy techniques. It was found that the morphology as well as the molecular structure of the subscale are affected by temperature and dew point. The results suggest that there is an optimum level of internal oxidation and an optimum fayalite/silica ratio in the subscale to achieve an oriented-grain silicon steel having a continuous and smooth ceramic film and low core loss.

  18. Predicting wildfire occurrence distribution with spatial point process models and its uncertainty assessment: a case study in the Lake Tahoe Basin, USA

    Science.gov (United States)

    Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner

    2015-01-01

    Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...

  19. Seed Dispersal, Microsites or Competition—What Drives Gap Regeneration in an Old-Growth Forest? An Application of Spatial Point Process Modelling

    Directory of Open Access Journals (Sweden)

    Georg Gratzer

    2018-04-01

    Full Text Available The spatial structure of trees is a template for forest dynamics and the outcome of a variety of processes in ecosystems. Identifying the contribution and magnitude of the different drivers is an age-old task in plant ecology. Recently, spatial point process modelling has been used to identify factors driving the spatial distribution of trees at stand scales. Processes driving the coexistence of trees, however, frequently unfold within gaps, and questions on the role of resource heterogeneity within gaps have become central issues in community ecology. We tested the applicability of a spatial point process modelling approach for quantifying the effects of seed dispersal, within-gap light environment, microsite heterogeneity, and competition on the generation of within-gap spatial structure of small tree seedlings in a temperate, old-growth, mixed-species forest. By fitting a non-homogeneous Neyman–Scott point process model, we could disentangle the role of seed dispersal from niche partitioning for within-gap tree establishment and did not detect seed densities as a factor explaining the clustering of small trees. We found only a very weak indication of partitioning of within-gap light among the three species and detected a clear niche segregation of Picea abies (L.) Karst. on nurse logs. The other two dominating species, Abies alba Mill. and Fagus sylvatica L., did not show signs of within-gap segregation.
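
    For orientation, the homogeneous special case of such a cluster model (a Thomas process: Poisson parents, Gaussian offspring displacements) can be simulated in a few lines; the study itself fits a non-homogeneous variant with covariates, and all parameter values below are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def thomas_process(kappa, mu, sigma, win=(0.0, 1.0, 0.0, 1.0)):
            # Homogeneous Thomas process: Poisson(kappa) parents on a rectangle,
            # Poisson(mu) offspring per parent, Gaussian(sigma) displacements.
            xmin, xmax, ymin, ymax = win
            n_par = rng.poisson(kappa * (xmax - xmin) * (ymax - ymin))
            parents = np.column_stack([rng.uniform(xmin, xmax, n_par),
                                       rng.uniform(ymin, ymax, n_par)])
            offspring = [p + rng.normal(0.0, sigma, size=(rng.poisson(mu), 2))
                         for p in parents]
            pts = np.vstack(offspring) if offspring else np.empty((0, 2))
            inside = ((pts[:, 0] >= xmin) & (pts[:, 0] <= xmax) &
                      (pts[:, 1] >= ymin) & (pts[:, 1] <= ymax))
            return pts[inside]  # offspring pattern clipped to the window

        seedlings = thomas_process(kappa=20, mu=8, sigma=0.03)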

  20. Observation and investigation of a dynamic inflection point in current-voltage curves for roll-to-roll processed polymer photovoltaics

    DEFF Research Database (Denmark)

    Medford, Andrew James; Lilliedal, Mathilde Raad

    2010-01-01

    Inflection point behaviour is often observed in the current-voltage (IV) curve of polymer and organic solar cells. This phenomenon is examined in the context of flexible roll-to-roll (R2R) processed polymer solar cells in a large series of devices with a layer structure of: PET-ITO-ZnO-P3HT...... of this “photo-annealing” behaviour was further investigated by studying the effects of several key factors: temperature, illumination, and atmosphere. The results consistently showed that the inflection point is a dynamic interface phenomenon which can be removed under specific conditions. Subsequently...

  1. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    International Nuclear Information System (INIS)

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-01-01

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5 and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various
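
    The extrapolation to the critical point mentioned above is conventionally done by fitting the coexistence densities to the order-parameter scaling law together with the law of rectilinear diameters. A minimal sketch with synthetic illustrative data (not the paper's values):

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic illustrative coexistence data (not from the paper)
        T = np.array([1.10, 1.15, 1.20, 1.25, 1.28])
        rho_l = np.array([0.628, 0.596, 0.559, 0.511, 0.470])
        rho_v = np.array([0.081, 0.094, 0.114, 0.144, 0.174])

        beta_c = 0.325  # 3D Ising order-parameter exponent

        # Order-parameter scaling law: rho_l - rho_v = B (1 - T/Tc)^beta
        def order_param(T, B, Tc):
            return B * (1.0 - T / Tc) ** beta_c

        # Initial guess keeps Tc above the data range
        (B, Tc), _ = curve_fit(order_param, T, rho_l - rho_v, p0=(1.0, 1.32))

        # Law of rectilinear diameters: (rho_l + rho_v)/2 = rho_c + A (Tc - T)
        (rho_c, A), _ = curve_fit(lambda T, rho_c, A: rho_c + A * (Tc - T),
                                  T, 0.5 * (rho_l + rho_v), p0=(0.31, 0.1))

        print(f"Tc = {Tc:.4f}, rho_c = {rho_c:.4f}")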

  2. Phase relations and gibbs energies in the system Mn-Rh-O

    Science.gov (United States)

    Jacob, K. T.; Sriram, M. V.

    1994-07-01

    Phase relations in the system Mn-Rh-O are established at 1273 K by equilibrating different compositions either in evacuated quartz ampules or in pure oxygen at a pressure of 1.01 × 10⁵ Pa. The quenched samples are examined by optical microscopy, X-ray diffraction, and energy-dispersive X-ray analysis (EDAX). The alloys and intermetallics in the binary Mn-Rh system are found to be in equilibrium with MnO. There is only one ternary compound, MnRh2O4, with normal spinel structure in the system. The compound Mn3O4 has a tetragonal structure at 1273 K. A solid solution is formed between MnRh2O4 and Mn3O4. The solid solution has the cubic structure over a large range of composition and coexists with metallic rhodium. The partial pressure of oxygen corresponding to this two-phase equilibrium is measured as a function of the composition of the spinel solid solution and temperature. A new solid-state cell, with three separate electrode compartments, is designed to measure accurately the chemical potential of oxygen in the two-phase mixture, Rh + Mn3-2xRh2xO4, which has 1 degree of freedom at constant temperature. From the electromotive force (emf), thermodynamic mixing properties of the Mn3O4-MnRh2O4 solid solution and Gibbs energy of formation of MnRh2O4 are deduced. The activities exhibit negative deviations from Raoult's law for most of the composition range, except near Mn3O4, where a two-phase region exists. In the cubic phase, the entropy of mixing of the two Rh3+ and Mn3+ ions on the octahedral site of the spinel is ideal, and the enthalpy of mixing is positive and symmetric with respect to composition. For the formation of the spinel (sp) from component oxides with rock salt (rs) and orthorhombic (orth) structures according to the reaction, MnO (rs) + Rh2O3 (orth) → MnRh2O4 (sp), ΔG° = -49,680 + 1.56T (±500) J mol⁻¹ The oxygen potentials corresponding to MnO + Mn3O4 and Rh + Rh2O3 equilibria are also obtained from potentiometric measurements on galvanic

  3. COST VOLUME PROFIT MODEL, THE BREAK -EVEN POINT AND THE DECISION MAKING PROCESS IN THE HOSPITALITY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Scorte Carmen

    2010-12-01

    Full Text Available Management accounting and cost calculation in the hospitality industry is a pathless land. The present article is the starting point of a long scientific approach to the domain of the hospitality industry and managerial accounting in this area. Our intention is to put the spotlight back on the thorny problem of applying financial accounting, and specifically its implementation in the hospitality industry. One aim of this article is to provide a picture of CVP analysis in decision making, customized to the hospitality industry. To cope with the crisis period and the competition, and to achieve the expected profits in the hospitality industry, managers can apply CVP analysis, one of the simplest and most useful analytical tools. This paper addresses the basic version of the CVP model, exemplifying the main indicators of the model for the hospitality industry that can help guide decision-making.
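
    As a generic illustration of the basic CVP model discussed above (all numbers are hypothetical, not taken from the article), the break-even point is the sales volume at which the contribution margin covers fixed costs:

        $$ Q^{*} = \frac{F}{p - v}, \qquad \text{e.g.}\quad F = 60\,000,\ p = 80,\ v = 30 \;\Rightarrow\; Q^{*} = \frac{60\,000}{80 - 30} = 1\,200\ \text{room-nights}. $$

    Above Q*, each additional unit sold adds the contribution margin p - v to profit, which is the quantity a hospitality manager monitors when judging pricing and discounting decisions.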

  4. Theory of transport processes in wood below the fiber saturation point. Physical background on the microscale and its macroscopic description

    DEFF Research Database (Denmark)

    Eitelberger, Johannes; Svensson, Staffan; Hofstetter, Karin

    2011-01-01

    transport when used to describe transient processes. A suitable modeling approach was found by distinguishing between the two phases of water in wood, namely bound water in the cell walls and water vapor in the lumens. Such models are capable of reproducing transient moisture transport processes......, but the physical origin of the coupling between the two phases remains unclear. In this paper, the physical background on the microscale is clarified and transformed into a comprehensive macroscopic description, ending up with a dual-scale model comprising three coupled differential equations for bound water...

  5. Critical points for spread-out self-avoiding walk, percolation and the contact process above the upper critical dimensions

    NARCIS (Netherlands)

    Hofstad, van der R.W.; Sakai, A.

    2005-01-01

    We consider self-avoiding walk and percolation in ℤ^d, oriented percolation in ℤ^d × ℤ_+, and the contact process in ℤ^d, with pD(·) being the coupling function whose range is proportional to L. For percolation, for example, each bond is independently occupied with probability pD(y-x). The above models are

  6. Flooding the Zone: A Ten-Point Approach to Assessing Critical Thinking as Part of the AACSB Accreditation Process

    Science.gov (United States)

    Cavaliere, Frank; Mayer, Bradley W.

    2012-01-01

    Undergoing the accreditation process of the Association to Advance Collegiate Schools of Business (AACSB) can be quite daunting and stressful. It requires prodigious amounts of planning, record-keeping, and document preparation. It is not something that can be thrown together at the last minute. The same is true of the five-year reaccreditation…

  7. Equilibrium modeling of gasification: Gibbs free energy minimization approach and its application to spouted bed and spout-fluid bed gasifiers

    International Nuclear Information System (INIS)

    Jarungthammachote, S.; Dutta, A.

    2008-01-01

    Spouted beds have been found in many applications, one of which is gasification. In this paper, the gasification processes of conventional and modified spouted bed gasifiers were considered. The conventional spouted bed is a central jet spouted bed, while the modified spouted beds are a circular split spouted bed and a spout-fluid bed. The Gibbs free energy minimization method was used to predict the composition of the producer gas. The six major components, CO, CO2, CH4, H2O, H2 and N2, were determined in the producer gas mixture. The results showed that the carbon conversion in the gasification process plays an important role in the model. A modified model was developed by considering the carbon conversion in the constraint equations and in the energy balance calculation. The results from the modified model showed improvements. The higher heating values (HHV) were also calculated and compared with those from experiments. Good agreement between the calculated and experimental values of HHV was observed, especially in the case of the circular split spouted bed and the spout-fluid bed.
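
    To make the method concrete, the sketch below minimizes the total Gibbs energy of an ideal-gas mixture of these six species subject to element-balance constraints, using NumPy/SciPy; the temperature, feed composition and formation energies (rough estimates from 298 K enthalpies and entropies) are illustrative stand-ins, not the paper's inputs:

        import numpy as np
        from scipy.optimize import minimize

        R, T = 8.314, 1073.0  # J/(mol K); illustrative gasifier temperature

        species = ["CO", "CO2", "CH4", "H2O", "H2", "N2"]
        # Rough standard Gibbs energies of formation near 1073 K, J/mol; a real
        # model would use temperature-dependent correlations.
        g_f = np.array([-206e3, -396e3, 12e3, -194e3, 0.0, 0.0])

        # Element balance matrix (rows: C, H, O, N) and an illustrative feed
        A = np.array([[1, 1, 1, 0, 0, 0],
                      [0, 0, 4, 2, 2, 0],
                      [1, 2, 0, 1, 0, 0],
                      [0, 0, 0, 0, 0, 2]], dtype=float)
        b = np.array([1.0, 2.4, 1.2, 1.5])  # mol of C, H, O, N entering

        def total_gibbs(n):
            # Dimensionless Gibbs energy of an ideal-gas mixture at 1 atm
            return np.sum(n * (g_f / (R * T) + np.log(n / n.sum())))

        res = minimize(total_gibbs, np.full(6, 0.5),
                       constraints={"type": "eq", "fun": lambda n: A @ n - b},
                       bounds=[(1e-8, None)] * 6, method="SLSQP")
        print(dict(zip(species, np.round(res.x, 4))))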

  8. Latin-american and maghrebian women migratory process and psychological adjustment: from a gender point of view

    Directory of Open Access Journals (Sweden)

    Edurne Elgorriaga

    2013-02-01

    Full Text Available This study examines the migratory process and psychological adjustment of immigrant women currently residing in the Basque Country. Perceived stress is analyzed in relation to relevant psychosocial variables from a gender perspective. The sample consisted of 206 immigrant women from Latin America (61.2%) and the Maghreb (38.8%). The participants' self-assessment of the migratory process and of their well-being was overall positive; however, the difficulties derived from this process, and the migratory changes, influence the psychological adjustment of immigrant women. Results revealed that perceived stress is affected by the migratory process, educational level, residential status, and the balance of their situation, elements crossed by factors such as gender and/or cultural origin.

  9. Choosing between Higher Moment Maximum Entropy Models and Its Application to Homogeneous Point Processes with Random Effects

    Directory of Open Access Journals (Sweden)

    Lotfi Khribi

    2017-12-01

    Full Text Available In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.
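
    For context, the conjugate baseline referred to above is the standard gamma-Poisson update (a textbook fact, not a result of the article):

        $$ \lambda \sim \mathrm{Gamma}(\alpha,\beta), \quad x_1,\dots,x_n \mid \lambda \overset{\text{iid}}{\sim} \mathrm{Poisson}(\lambda) \;\Longrightarrow\; \lambda \mid x_{1:n} \sim \mathrm{Gamma}\Big(\alpha + \textstyle\sum_{i=1}^{n} x_i,\; \beta + n\Big), $$

    and the higher-moment maximum entropy priors studied in the article generalize this choice by constraining additional moments.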

  10. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    Science.gov (United States)

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations.

  11. Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy

    Science.gov (United States)

    Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.

    1993-01-01

    The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases have been measured for a Ni24Zr76 alloy. The data are used to calculate the Gibbs free-energy difference, ΔG_AC, between the real glass and the crystal on the assumption that the liquid-glass transition is second order. The result shows that ΔG_AC continuously increases as the temperature decreases, in contrast to the ideal glass case, where ΔG_AC is assumed to be independent of temperature.

  12. Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)

    Science.gov (United States)

    Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene

    2012-07-01

    We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
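
    For orientation (notation is ours; the abstract itself gives no formulas), the classical Gibbs free energy for a circular two-dimensional nucleus of n molecules, in terms of the supersaturation Δμ and line tension κ that the study extracts, reads

        $$ \Delta G(n) = -n\,\Delta\mu + 2\kappa\sqrt{\pi\Omega n}, \qquad \frac{\partial\,\Delta G}{\partial n}\bigg|_{n^{*}} = 0 \;\Rightarrow\; n^{*} = \frac{\pi\Omega\kappa^{2}}{\Delta\mu^{2}}, $$

    with Ω the area per molecule; the observation that nuclei 4-6 times larger than n* still decay is consistent with the flatness of ΔG(n) near its maximum at low supersaturation.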

  13. On the temperature dependence of the Adam-Gibbs equation around the crossover region in the glass transition

    Science.gov (United States)

    Duque, Michel; Andraca, Adriana; Goldstein, Patricia; del Castillo, Luis Felipe

    2018-04-01

    The Adam-Gibbs equation has been used for more than five decades, and still a question remains unanswered about the temperature dependence of the chemical-potential barrier it includes. Nowadays, it is well known that in fragile glass formers the behavior of the system depends on the temperature region in which it is studied. Transport coefficients change due to the appearance of heterogeneity in the liquid as it is supercooled. Using the different forms of the logarithmic shift factor and the form of the configurational entropy, we evaluate this temperature dependence and present a discussion of our results.
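
    For reference, the Adam-Gibbs relation in its usual form (standard in the glass literature; symbols ours) connects the structural relaxation time τ to the configurational entropy S_c:

        $$ \tau(T) = \tau_{0}\,\exp\!\left(\frac{C}{T\,S_{c}(T)}\right), \qquad C \propto \Delta\mu, $$

    where Δμ is the chemical-potential barrier per particle whose temperature dependence is at issue in this work.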

  14. The Relationship of Dynamical Heterogeneity to the Adam-Gibbs and Random First-Order Transition Theories of Glass Formation

    OpenAIRE

    Starr, Francis W.; Douglas, Jack F.; Sastry, Srikanth

    2013-01-01

    We carefully examine common measures of dynamical heterogeneity for a model polymer melt and test how these scales compare with those hypothesized by the Adam and Gibbs (AG) and random first-order transition (RFOT) theories of relaxation in glass-forming liquids. To this end, we first analyze clusters of highly mobile particles, the string-like collective motion of these mobile particles, and clusters of relative low mobility. We show that the time scale of the high-mobility clusters and stri...

  15. On the thermochemical conversions of hard coal pitches in the process of raising the softening point to 358-363 K

    Energy Technology Data Exchange (ETDEWEB)

    Kekin, N.A.; Belkina, T.V.; Stepanenko, M.A.; Gordienko, V.G.

    1983-09-01

    High-resolution proton magnetic resonance and infrared spectroscopy are used to obtain data on the nature of changes in the hydrogen content of various groups in the soluble fractions of raw pitch and its thermoproducts during the process of producing binders with an increased softening point of 358-363 K. It was shown that thermal treatment of pitch during the process of raising the softening point leads to enrichment of the pitch structure with aromatic hydrogen and to a reduction of hydrogen in aliphatic bonds. The basis of these conversions is the splitting off of CH3 groups and the formation of new structures containing CH2 groups. (11 refs.)

  16. ASSESSMENT OF THE INQUIRY-BASED PROJECT IMPLEMENTATION PROCESS IN SCIENCE EDUCATION UPON STUDENTS’ POINTS OF VIEWS

    Directory of Open Access Journals (Sweden)

    Orhan AKINOGLU

    2008-01-01

    Full Text Available The aim of the study is to assess how students in the 6th, 7th and 8th grades of primary education view project work in science education and its implementation process. The study followed a descriptive survey model for data collection. Participants were 100 students with project-implementation experience in science education, drawn from 24 primary schools in 7 randomly chosen districts of Istanbul, Turkey. Data were collected using a semi-structured interview form administered to students during the 2005-2006 school year. The research examined the following: the extent to which students are inspired by previously completed projects during their own project selection process, the level of scientific literature survey, and the effects of current events, science and technology class topics, and students' areas of interest. The internet was found to be the most frequently used source of information. For students, one of the most problematic issues faced during project implementation is the time limit set by the teacher. The most obvious benefit students gained from project work was increased interest in the science and technology class. The most significant change students observed regarding project preparation was an increase in their exam grades during and after the project work.

  17. Variations in the Holocene North Atlantic Bottom Current Strength in the Charlie Gibbs Fracture Zone

    Science.gov (United States)

    Kissel, C.; Van Toer, A.; Cortijo, E.; Turon, J.

    2011-12-01

    Changes in the strength of the North Atlantic bottom current during the Holocene are presented via the study of cores located at the western termination of the northern deep channel of the Charlie-Gibbs Fracture Zone. This natural, roughly E-W corridor is bathed by the Iceland-Scotland Overflow Water (ISOW) as it passes westward out of the Iceland Basin into the western North Atlantic basin. At present, it is also described as the place where southern-sourced, silicate-rich Lower Deep Water (LDW), derived from Antarctic Bottom Water (AABW), passes westward, mixing with the ISOW. We conducted a deep-water multiproxy analysis on two nearby cores, coupling magnetic properties, anisotropy, sortable silt and benthic foraminifera isotopes. The first core was taken by the R.V. Charcot in 1977 and the second is a CASQ core taken during the IMAGES-AMOCINT MD168 cruise in the framework of the 06-EuroMARC-FP-008 Project on board the R.V. Marion Dufresne (French Polar Institute, IPEV) in 2008. The radiocarbon ages indicate an average sedimentation rate of about 50 cm/kyr through the middle and late Holocene, allowing a data resolution ranging from 40 to 100 years depending on the proxy. In each core, we observe long-term and short-term changes in the strength of the bottom currents. On the long term, the amount of magnetic particles (normalized by the carbonate content) decreases first from 10 kyr to 8.6 kyr and then between 6 and 2 kyr before reaching a steady state. Following Kissel et al. (2009), this indicates a decrease in the ISOW strength. The mean sortable silt shows exactly the same pattern, indicating that not only the intensity of the ISOW but the whole deep water mass bathing the sites has decreased. On the short term, a first very prominent event centered at about 8.4 kyr (cal. ages) is marked by a pronounced minimum in magnetic content and the smallest mean sortable silt sizes. This is typical of an abrupt reduction in deep flow

  19. Interaction between α-calcium sulfate hemihydrate and superplasticizer from the point of adsorption characteristics, hydration and hardening process

    International Nuclear Information System (INIS)

    Guan Baohong; Ye Qingqing; Zhang Jiali; Lou Wenbin; Wu Zhongbiao

    2010-01-01

    Superplasticizers (SPs), namely sulfonated melamine formaldehyde (SMF) and polycarboxylate (PC), were independently admixed with α-calcium sulfate hemihydrate-based plaster to improve the material's performance. SMF and PC gave, respectively, 38% and 25% increases in the 2 h bending strength at the optimum dosages of 0.5 wt.% and 0.3 wt.%, which are determined essentially by the maximum water-reducing efficiency. The shift in the Ca 2p3/2 binding energy detected by X-ray photoelectron spectroscopy (XPS) suggests that SPs are chemically adsorbed on the gypsum surface. A careful examination of the strength development of the set plaster allowed the hydration and hardening process to be divided roughly into five stages. SMF accelerates early hydration, while PC decelerates it. Both SPs allowed similar maximum water reductions, giving a more compact structure and a decrease in total pore volume and average pore diameter, and thus leading to higher strengths in the hardened plasters with SPs.

  20. PET and diagnostic technology evaluation in a global clinical process. DGN's point of view

    Energy Technology Data Exchange (ETDEWEB)

    Kotzerke, J. [Klinik und Poliklinik fuer Nuklearmedizin der Univ. Dresden (Germany); Dietlein, M. [Klinik und Poliklinik fuer Nuklearmedizin der Univ. Koeln (Germany); Gruenwald, F. [Klinik und Poliklinik fuer Nuklearmedizin der Univ. Frankfurt am Main (Germany); Bockisch, A. [Klinik und Poliklinik fuer Nuklearmedizin der Univ. Essen (Germany)

    2010-07-01

    The German Society of Nuclear Medicine (DGN) criticizes the methodological approach of the IQWiG for the evaluation of PET and its conclusions, which represent the opposite point of view to that of most other European countries and health companies in the USA: (1) Real integration of experienced physicians into the interpretation of data and the evaluation of effectiveness should be used for best possible reporting, instead of only formal hearings. (2) Data from the National Oncologic PET Registry (NOPR) in the USA have shown that PET changed the therapeutic management in 38% of patients. (3) The decision of the IQWiG to accept only outcome data for its benefit analyses is controversial. Medical knowledge is generated by different methods, and a current analysis of scientific guidelines has shown that only 15% of all guidelines are based on the level of evidence demanded by the IQWiG. Health economics has created different assessment methods for the evaluation of a diagnostic procedure. The strategy chosen by the IQWiG overestimates the perspective of the population and undervalues the benefit for the individual patient. (4) PET evaluates the effectiveness of a therapeutic procedure, but does not create an effective therapy. When the predictive value of PET is already implemented in a specific study design and the result of PET defines a specific management, the trial evaluates the whole algorithm, and PET is only part of this algorithm. When PET is implemented as a test during chemotherapy or at the end of chemotherapy, the predictive value of PET will depend decisively on the effectiveness of the therapy: the better the therapy, the smaller the differences in survival detected by PET. (5) The significance of optimal staging by the integration of PET will increase. The rationale is the current development of 'titration' of chemotherapy intensity and radiation dose towards the lowest possible, just about effective dosage. (6) The medical

  1. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    Science.gov (United States)

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
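
    To make the sampler structure concrete, here is a schematic toy sketch (our own illustration with harmonic stand-in potentials, not the authors' CHARMM implementation) of the two alternating moves in GSLD together with the Rao-Blackwell averaging of the conditional state probabilities:

        import numpy as np

        rng = np.random.default_rng(1)
        beta = 1.0  # inverse temperature (toy units)

        # Two toy end-state potentials standing in for two ligands; the real
        # method uses molecular force fields, not harmonic wells.
        def U(state, x):
            centers = (0.0, 1.5)
            return 0.5 * (x - centers[state]) ** 2

        def gsld(n_steps=100000, step=0.5):
            x, state = 0.0, 0
            rb = np.zeros(2)  # Rao-Blackwell accumulators of p(state | x)
            for _ in range(n_steps):
                # Metropolis move of the coordinate at the current alchemical state
                x_new = x + rng.uniform(-step, step)
                if rng.random() < np.exp(-beta * (U(state, x_new) - U(state, x))):
                    x = x_new
                # Gibbs draw of the discrete alchemical state from its exact conditional
                w = np.exp(-beta * np.array([U(0, x), U(1, x)]))
                p = w / w.sum()
                state = rng.choice(2, p=p)
                rb += p  # average the conditional itself, not the visit indicator
            return -np.log(rb[1] / rb[0]) / beta  # free energy difference estimate

        print(gsld())  # exact answer is 0 for these symmetric toy wells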

  2. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by B-spline curve fitting based on the knot addition method (KAM). In KAM, the selection pattern of the initial knot vector has been associated with the ultimately necessary number of knots. This paper provides a novel initial-knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a dense, uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots required. (paper)
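
    A schematic sketch of the knot-placement idea follows (our own simplification: the feature integral is accumulated directly from discrete chord lengths and turning angles rather than from the intermediate fitted curve Gs, and the weighting w is illustrative):

        import numpy as np

        def select_initial_knots(pts, n_knots, w=1.0):
            # pts: (N, 2) ordered sampling points; returns n_knots parameter
            # values in [0, 1]. The weight w balancing length vs. bending is
            # an illustrative choice.
            d = np.diff(pts, axis=0)
            seg = np.linalg.norm(d, axis=1)                # chord lengths
            t = d / seg[:, None]                           # unit tangents
            ang = np.zeros(len(pts))
            # turning angle at interior points as a discrete bending measure
            ang[1:-1] = np.arccos(np.clip((t[:-1] * t[1:]).sum(axis=1), -1.0, 1.0))
            feature = np.concatenate([[0.0], seg]) + w * ang
            F = np.cumsum(feature)
            F = (F - F[0]) / (F[-1] - F[0])                # normalized feature integral
            u = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()  # chord-length params
            # equal increments of the feature integral -> knot parameters
            return np.interp(np.linspace(0.0, 1.0, n_knots), F, u)

        theta = np.linspace(0.0, np.pi, 200)
        half_circle = np.column_stack([np.cos(theta), np.sin(theta)])
        knots = select_initial_knots(half_circle, n_knots=8)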

  3. Martin Gibbs (1922-2006): Pioneer of ¹⁴C research, sugar metabolism & photosynthesis; vigilant Editor-in-Chief of Plant Physiology; sage Educator; and humanistic Mentor.

    Science.gov (United States)

    Black, Clanton C

    2008-01-01

    The very personal touch of Professor Martin Gibbs as a worldwide advocate for photosynthesis and plant physiology was lost with his death in July 2006. Widely known for his engaging, humorous personality and his humanitarian lifestyle, Martin Gibbs excelled as a strong international science diplomat; like a personal science family patriarch, he encouraged science and plant scientists around the world. Immediately after World War II he was a pioneer at the Brookhaven National Laboratory in the use of ¹⁴C to elucidate carbon flow in metabolism and particularly carbon pathways in photosynthesis. His leadership on carbon metabolism and photosynthesis extended over four decades of work in collaboration with a host of students and colleagues. In 1962, he was selected as the Editor-in-Chief of Plant Physiology. That appointment initiated three decades of strong directional influence by Gibbs on plant research and photosynthesis. Plant Physiology became and remains a premier source of new knowledge about the vital and primary roles of plants in the earth's environmental history and the energetics of our green-blue planet. His leadership and charismatic humanitarian character became the quintessence of excellence worldwide. Martin Gibbs was in every sense the personification of a model mentor, not only for scientists but also in his devotion to family. Here we pay tribute and honor to an exemplary humanistic mentor, Martin Gibbs.

  4. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

    International Nuclear Information System (INIS)

    Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.

    2011-01-01

    Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.

  6. A Comparative Study of Applying Active-Set and Interior Point Methods in MPC for Controlling Nonlinear pH Process

    Directory of Open Access Journals (Sweden)

    Syam Syafiie

    2014-06-01

    Full Text Available A comparative study of Model Predictive Control (MPC) using the active-set method and interior point methods is proposed as a control technique for a highly non-linear pH process. The process is a strong acid-strong base system. A strong acid, hydrochloric acid (HCl), and a strong base, sodium hydroxide (NaOH), in the presence of the buffer solution sodium bicarbonate (NaHCO3), are used in a neutralization process flowing into a reactor. The non-linear pH neutralization model governing this process is represented by multi-linear models. The performance of both controllers is studied by evaluating their set-point tracking and disturbance rejection. In addition, the optimization times of the two methods are compared; both MPC controllers show similar performance with no overshoot, offset, or oscillation. However, the conventional active-set method gives a shorter control action time for small-scale optimization problems compared to MPC using the interior point method.

  7. Collision Visualization of a Laser-Scanned Point Cloud of Streets and a Festival Float Model Used for the Revival of a Traditional Procession Route

    Science.gov (United States)

    Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.

    2017-09-01

    Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed from measured point clouds. In this paper, we consider the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float collides with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that is helpful for robustly determining the collision areas, (2) the ability to visualize areas of high collision risk as well as actual collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.

  8. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    Science.gov (United States)

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays is proposed. Compared with slow-tool-servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions, based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. With this method, each micro-lenslet is treated as a microstructure cell, with the axis of the virtual spindle passing through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was used to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economical, precise optical fabrication technology for high-volume production of infrared optics.

  10. Cognitive and emotional processing of pleasant and unpleasant experiences in major depression: A matter of vantage point?

    Science.gov (United States)

    Pfaltz, Monique C; Wu, Gwyneth W Y; Liu, Guanyu; Tankersley, Amelia P; Stilley, Ashley M; Plichta, Michael M; McNally, Richard J

    2017-03-01

    In nonclinical populations, adopting a third-person perspective, as opposed to a first-person perspective, while analyzing negative emotional experiences fosters understanding of these experiences and reduces negative emotional reactivity. We assessed whether this generalizes to people with major depression (MD). Additionally, we assessed whether the emotion-reducing effects of adopting a third-person perspective also occur when subjects with MD and healthy control (HC) subjects analyze positive experiences. Seventy-two MD subjects and 82 HC subjects analyzed a happy and a negative experience from either a first-person or a third-person perspective. Unexpectedly, we found no emotion-reducing effects of the third-person perspective in either group when thinking about negative events. However, across groups, the third-person perspective was associated with less recounting of negative experiences and with a clearer, more coherent understanding of them. Negative affect decreased and positive affect increased in both groups analyzing happy experiences. In MD subjects, decreases in depressive affect were stronger for the third-person perspective. In both groups, positive affect increased and negative affect decreased more strongly for the third-person perspective. While reflecting on their positive memory, MD subjects adopted their assigned perspective for a shorter proportion of time (70%) than HC subjects (78%). However, the percentage of time participants adopted their assigned perspective was unrelated to the significant effects we found. Both people suffering from MD and healthy individuals may benefit from processing pleasant experiences, especially when adopting a self-distanced perspective.

  11. Food safety and nutritional quality for the prevention of non communicable diseases: the Nutrient, hazard Analysis and Critical Control Point process (NACCP).

    Science.gov (United States)

    Di Renzo, Laura; Colica, Carmen; Carraro, Alberto; Cenci Goga, Beniamino; Marsella, Luigi Tonino; Botta, Roberto; Colombo, Maria Laura; Gratteri, Santo; Chang, Ting Fa Margherita; Droli, Maurizio; Sarlo, Francesca; De Lorenzo, Antonino

    2015-04-23

    The important role of food and nutrition in public health is being increasingly recognized as crucial for its potential impact on health-related quality of life and the economy, at both the societal and individual levels. The prevalence of non-communicable diseases calls for a reformulation of our view of food. The Hazard Analysis and Critical Control Point (HACCP) system, first implemented in the EU with Directive 43/93/CEE and later replaced by Regulation CE 178/2002 and Regulation CE 852/2004, is the internationally agreed approach to food safety control. Our aim is to develop a new procedure for the assessment of the Nutrient, hazard Analysis and Critical Control Point (NACCP) process, for total quality management (TQM), and to optimize nutritional levels. NACCP is based on four general principles: i) guarantee of health maintenance; ii) evaluation and assurance of the nutritional quality of food and TQM; iii) correct information to consumers; iv) an ethical profit. There are three stages in the application of the NACCP process: 1) application of NACCP for quality principles; 2) application of NACCP for health principles; 3) implementation of the NACCP process. The actions are: 1) identification of nutritional markers, which must remain intact throughout the food supply chain; 2) identification of critical control points, which must be monitored in order to minimize the likelihood of a reduction in quality; 3) establishment of critical limits to maintain adequate nutrient levels; 4) establishment and implementation of effective monitoring procedures for critical control points; 5) establishment of corrective actions; 6) identification of metabolic biomarkers; 7) evaluation of the effects of food intake through the application of specific clinical trials; 8) establishment of procedures for consumer information; 9) implementation of the Health Claim Regulation EU 1924/2006; 10) starting a training program. We calculate the risk assessment as follows

  12. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    Science.gov (United States)

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
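
    The classical hard core point process that the proposed OHCPP modifies can be sketched with Matérn type-II thinning, in which a point (here read: a transmitter) survives only if no lower-marked point lies within its carrier-sense range. This is a generic illustration of the technique, not the paper's optical variant:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def matern_type2(lam, r, side=10.0):
        """Matern type-II hard-core thinning on a [0, side]^2 region.

        Start from a Poisson point process of intensity lam; give every point
        a random mark and keep a point only if no other point with a smaller
        mark lies within the hard-core distance r (the carrier-sense range)."""
        n = rng.poisson(lam * side * side)
        pts = rng.uniform(0.0, side, size=(n, 2))
        marks = rng.uniform(size=n)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            d = np.hypot(pts[:, 0] - pts[i, 0], pts[:, 1] - pts[i, 1])
            if np.any((d > 0.0) & (d < r) & (marks < marks[i])):
                keep[i] = False
        return pts[keep]

    # concurrent_transmitters = matern_type2(lam=2.0, r=0.5)
    ```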

  13. Fixed Points

    Indian Academy of Sciences (India)

    Fixed Points - From Russia with Love - A Primer of Fixed Point Theory, by A K Vijaykumar. Book review, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 101-102.

  14. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, a CPSC OnSafety '60 Seconds of Safety' video. ... For young children whose home is a playground, it's the best way to ...

  15. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009. A '60 Seconds of Safety' video.

  16. Coefficients of interphase distribution and Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide

    Science.gov (United States)

    Grazhdan, K. V.; Gamov, G. A.; Dushina, S. V.; Sharnin, V. A.

    2012-11-01

    Coefficients of the interphase distribution of nicotinic acid are determined in aqueous solution systems of ethanol-hexane and DMSO-hexane at 25.0 ± 0.1°C. They are used to calculate the Gibbs energy of the transfer of nicotinic acid from water into aqueous solutions of ethanol and dimethylsulfoxide. The Gibbs energy values for the transfer of the molecular and zwitterionic forms of nicotinic acid are obtained by means of UV spectroscopy. The diametrically opposite effect of the composition of binary solvents on the transfer of the molecular and zwitterionic forms of nicotinic acid is noted.
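
    As a rough sketch of the calculation described, the transfer Gibbs energy follows from the ratio of distribution coefficients measured against the common hexane reference phase; the sign convention and the example K values below are assumptions for illustration, not the paper's data:

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    def delta_g_transfer(K_water, K_mixed, T=298.15):
        """Gibbs energy of transfer (water -> water/cosolvent), in kJ/mol.

        K is the distribution coefficient of the solute between the hexane
        phase and the aqueous phase (c_hexane / c_aqueous); with the opposite
        convention the sign of the result flips."""
        return R * T * np.log(K_mixed / K_water) / 1000.0

    # delta_g_transfer(K_water=0.020, K_mixed=0.055)   # hypothetical K values
    ```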

  17. Galaxy clustering: a point process

    OpenAIRE

    Hurtado Gil, Lluis

    2016-01-01

    Galaxy clustering is the aggregation of galaxies in the universe produced by the force of gravity. Galaxies tend to form larger structures, such as clusters or filaments, that make up the cosmic web ('Cosmic Web'). This Large-Scale Structure of the Universe can be understood as the result of the distribution of galaxies, a process in which all galaxies are subject to common forces and share universal properties. The analysis of this distribution is ...

  18. System identification to characterize human use of ethanol based on generative point-process models of video games with ethanol rewards.

    Science.gov (United States)

    Ozil, Ipek; Plawecki, Martin H; Doerschuk, Peter C; O'Connor, Sean J

    2011-01-01

    The influence of family history and genetics on the risk for the development of abuse or dependence is a major theme in alcoholism research. Recent research has used endophenotypes and behavioral paradigms to help detect further genetic contributions to this disease. Electronic tasks, essentially video games, which provide alcohol as a reward in controlled environments and with specified exposures, have been developed to explore some of the behavioral and subjective characteristics of individuals with, or at risk for, alcohol use disorders. A generative model (containing parameters with unknown values) of a simple game involving a progressive work paradigm is described, along with the associated point-process signal processing that allows system identification of the model. The system is demonstrated on human subject data. The same human subject completing the task under different circumstances, e.g., with larger and smaller alcohol reward values, is assigned different parameter values. Potential meanings of the different parameter values are described.
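
    A generative point-process model of this kind is typically identified by maximizing the point-process log-likelihood over the unknown parameters. The sketch below fits a simple inhomogeneous Poisson intensity, lambda(t) = a*exp(-b*t), to observed event times (e.g., button presses); this parametric form is an illustrative stand-in, not the paper's model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(log_params, times, T):
        """Negative log-likelihood of an inhomogeneous Poisson process with
        intensity lambda(t) = a*exp(-b*t) observed on [0, T]:
        -logL = integral(lambda) - sum_i log(lambda(t_i))."""
        a, b = np.exp(log_params)            # log scale keeps a, b > 0
        integral = a * (1.0 - np.exp(-b * T)) / b
        return integral - np.sum(np.log(a) - b * times)

    def fit_rate(times, T):
        res = minimize(neg_log_lik, x0=np.log([1.0, 0.1]), args=(times, T))
        return np.exp(res.x)                 # (a_hat, b_hat)

    # times = np.sort(np.random.default_rng(0).uniform(0, 60, 40))
    # a_hat, b_hat = fit_rate(times, T=60.0)
    ```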

  19. Gibbs free energy difference between the undercooled liquid and the beta phase of a Ti-Cr alloy

    Science.gov (United States)

    Ohsaka, K.; Trinh, E. H.; Holzer, J. C.; Johnson, W. L.

    1992-01-01

    The heat of fusion and the specific heats of the solid and liquid have been experimentally determined for a Ti60Cr40 alloy. The data are used to evaluate the Gibbs free energy difference, ΔG, between the liquid and the beta phase as a function of temperature, to verify a reported spontaneous vitrification (SV) of the beta phase in Ti-Cr alloys. The results show that SV of an undistorted beta phase in the Ti60Cr40 alloy at 873 K is not feasible because ΔG is positive at that temperature. However, ΔG may become negative if additional excess free energy is stored in the beta phase in the form of defects.
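
    The evaluation of ΔG(T) from the measured heat of fusion and specific heats can be sketched with the standard relation below, here simplified to a constant specific-heat difference; the numbers in the usage line are placeholders, not the paper's data:

    ```python
    import numpy as np

    def delta_g(T, dHf, Tm, dCp=0.0):
        """G_liquid - G_solid (J/mol) for an undercooled melt, taking the
        specific-heat difference dCp = Cp_liq - Cp_sol as constant:

            dG(T) = dHf*(1 - T/Tm) + dCp*((T - Tm) - T*ln(T/Tm))

        With dCp = 0 this reduces to the Turnbull approximation."""
        T = np.asarray(T, dtype=float)
        return dHf * (1.0 - T / Tm) + dCp * ((T - Tm) - T * np.log(T / Tm))

    # delta_g(873.0, dHf=14_000.0, Tm=1700.0, dCp=8.0)   # placeholder numbers
    ```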

  20. Standard enthalpy, entropy and Gibbs free energy of formation of «A» type carbonate phosphocalcium hydroxyapatites

    International Nuclear Information System (INIS)

    Jebri, Sonia; Khattech, Ismail; Jemal, Mohamed

    2017-01-01

    Highlights: • A-type carbonate hydroxyapatites with 0 ⩽ x ⩽ 1 were prepared and characterized by XRD, IR spectroscopy and CHN analysis. • The heat of solution was measured in 9 wt% HNO3 using an isoperibol calorimeter. • The standard enthalpy of formation was determined by a thermochemical cycle. • The Gibbs free energy was deduced by estimating the standard entropy of formation. • Carbonatation increases the stability up to x = 0.6 mol. - Abstract: «A» type carbonate phosphocalcium hydroxyapatites, with the general formula Ca10(PO4)6(OH)2-2x(CO3)x and 0 ⩽ x ⩽ 1, were prepared by solid-gas reaction in the temperature range 700-1000 °C. The obtained materials were characterized by X-ray diffraction and infrared spectroscopy. The carbonate content was determined by C-H-N analysis. The heat of solution of these products was measured at T = 298 K in 9 wt% nitric acid solution using an isoperibol calorimeter. A thermochemical cycle was proposed, and complementary experiments were performed in order to obtain the standard enthalpies of formation of these phosphates. The results were compared with those previously obtained for apatites containing strontium and barium, and show a decrease with the amount of carbonate introduced into the lattice. This quantity becomes more negative as the substitution ratio increases. Estimation of the entropy of formation allowed the determination of the standard Gibbs free energy of formation of these compounds. The study showed that the substitution of hydroxyl by carbonate ions contributes to the stabilisation of the apatite structure.

  1. Dew Point

    OpenAIRE

    Goldsmith, Shelly

    1999-01-01

    Dew Point was a solo exhibition originating at PriceWaterhouseCoopers Headquarters Gallery, London, UK and toured to the Centre de Documentacio i Museu Textil, Terrassa, Spain and Gallery Aoyama, Tokyo, Japan.

  2. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009.

  3. Tipping Point

    Science.gov (United States)

    ... The Tipping Point, by CPSC Blogger, September 22, 2009.

  4. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009. ... see news reports about horrible accidents involving young children and furniture, appliance and tv tip-overs. The ...

  5. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009. ... a TV falls with about the same force as a child falling from the third story of a building. ...

  6. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009. ... about horrible accidents involving young children and furniture, appliance and tv tip-overs. The force of a ...

  7. Thermodynamics of Micellar Systems: Comparison of Mass Action and Phase Equilibrium Models for the Calculation of Standard Gibbs Energies of Micelle Formation

    NARCIS (Netherlands)

    Blandamer, Michael J.; Cullis, Paul M.; Soldi, L. Giorgio; Engberts, Jan B.F.N.; Kacperska, Anna; Os, Nico M. van

    1995-01-01

    Micellar colloids are distinguished from other colloids by their association-dissociation equilibrium in solution between monomers, counter-ions and micelles. According to classical thermodynamics, the standard Gibbs energy of formation of micelles at fixed temperature and pressure can be related to

  8. About the choice of Gibbs' potential for modelling of FCC ↔ HCP transformation in FeMnSi-based shape memory alloys

    Science.gov (United States)

    Evard, Margarita E.; Volkov, Aleksandr E.; Belyaev, Fedor S.; Ignatova, Anna D.

    2018-05-01

    The choice of Gibbs' potential for microstructural modeling of the FCC ↔ HCP martensitic transformation in FeMn-based shape memory alloys is discussed. The threefold symmetry of the HCP phase is taken into account in specifying the internal variables characterizing the volume fractions of martensite variants. Constraints imposed on the model constants by thermodynamic equilibrium conditions are formulated.

  9. Origin of the correlation between the standard Gibbs energies of ion transfer from water to a hydrophobic ionic liquid and to a molecular solvent

    Czech Academy of Sciences Publication Activity Database

    Langmaier, Jan; Záliš, Stanislav; Samec, Zdeněk; Bovtun, Viktor; Kempa, Martin

    2013-01-01

    Roč. 87, JAN 2013 (2013), s. 591-598 ISSN 0013-4686 R&D Projects: GA ČR GAP206/11/0707 Institutional support: RVO:61388955 ; RVO:68378271 Keywords: ionic liquids * cyclic voltammetry * standard Gibbs energy of ion transfer Subject RIV: CG - Electrochemistry Impact factor: 4.086, year: 2013

  10. Process analytical technology (PAT) approach to the formulation of thermosensitive protein-loaded pellets: Multi-point monitoring of temperature in a high-shear pelletization.

    Science.gov (United States)

    Kristó, Katalin; Kovács, Orsolya; Kelemen, András; Lajkó, Ferenc; Klivényi, Gábor; Jancsik, Béla; Pintye-Hódi, Klára; Regdon, Géza

    2016-12-01

    In the literature there are some publications about the effect of impeller and chopper speeds on product parameters; however, there is no information about the effect of temperature. Therefore, our main aim was to investigate the elevated temperature and temperature distribution during pelletization in a high-shear granulator, in accordance with process analytical technology. During our experimental work, pellets containing pepsin were formulated with a high-shear granulator. A specially designed chamber (Opulus Ltd.) was used for pelletization. This chamber contained four PyroButton-TH® sensors built into the wall and three PyroDiff® sensors at 1, 2 and 3 cm from the wall. The sensors were located at three different heights. The impeller and chopper speeds were set on the basis of a 3² factorial design. The temperature was measured continuously at 7 different points during pelletization, and the results were compared with the temperature values measured by the thermal sensor of the high-shear granulator. The optimization parameters were enzyme activity, average size, breaking hardness, surface free energy and aspect ratio. One novelty was the application of the specially designed chamber (Opulus Ltd.) for monitoring the temperature continuously at 7 different points during high-shear granulation. The other novelty of this study was the evaluation of the effect of temperature on the properties of pellets containing protein during high-shear pelletization. Copyright © 2016 Elsevier B.V. All rights reserved.
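
    The effects in a 3² factorial design of this kind can be estimated with an ordinary least-squares fit on coded factor levels. The sketch below uses a toy response column, not the study's measurements:

    ```python
    import numpy as np

    # 3^2 factorial design: impeller and chopper speed at coded levels -1, 0, +1.
    levels = np.array([(i, c) for i in (-1, 0, 1) for c in (-1, 0, 1)], float)
    y = np.array([4.1, 4.4, 4.3, 4.8, 5.0, 5.1, 5.2, 5.6, 5.9])  # toy response

    # Main effects plus interaction, by ordinary least squares.
    X = np.column_stack([np.ones(9), levels[:, 0], levels[:, 1],
                         levels[:, 0] * levels[:, 1]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(dict(zip(["intercept", "impeller", "chopper", "interaction"], coef)))
    ```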

  11. Temperature dependence of the mechanical properties of melt-processed Dy-Ba-Cu-O bulk superconductors evaluated by three point bending tests

    International Nuclear Information System (INIS)

    Katagiri, K; Nyilas, A; Sato, T; Hatakeyama, Y; Hokari, T; Teshima, H; Iwamoto, A; Mito, T

    2006-01-01

    Dy-Ba-Cu-O bulk superconductor has an excellent capability of trapping magnetic flux and a lower heat conductivity at cryogenic temperatures compared with Y-Ba-Cu-O bulk superconductor. The Young's modulus and bending strength in the range from room temperature down to 7 K were measured by three-point bending tests using specimens cut from a melt-processed Dy-Ba-Cu-O bulk superconductor. They were tested in a helium gas flow cryostat at Forschungszentrum Karlsruhe and in a liquid nitrogen bath at Iwate University. The Young's modulus was calculated from either the slope of the stress-strain curve or that of the load-deflection curve of the specimen. Although the bending strengths measured at the two institutes coincided well, there was a significant discrepancy in the Young's modulus. The Young's modulus and bending strength increased with decreasing temperature down to 7 K; the increases were about 32% and 36% of the room-temperature values, respectively. The scatter of data for each run was significant and did not depend on temperature. The temperature dependence of the Young's modulus coincided with that obtained in Y-Ba-Cu-O by ultrasonic velocity. The temperature dependence of the Young's modulus and the bending strength was discussed from the viewpoint of the interatomic distance of the bulk crystal
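
    For a rectangular beam, the standard three-point-bend relations used in such evaluations are E = L³(dF/dδ)/(4bd³) and σ = 3F·L/(2bd²); a small sketch, with toy specimen dimensions in the usage line:

    ```python
    def youngs_modulus(load_slope, span, width, thickness):
        """E from the load-deflection slope of a three-point bend test on a
        rectangular beam:  E = L^3 * (dF/d_delta) / (4 * b * d^3)."""
        return span**3 * load_slope / (4.0 * width * thickness**3)

    def bending_strength(F_max, span, width, thickness):
        """Flexural strength at fracture:  sigma = 3 * F_max * L / (2 * b * d^2)."""
        return 3.0 * F_max * span / (2.0 * width * thickness**2)

    # E = youngs_modulus(2.0e5, span=0.02, width=0.004, thickness=0.003)  # toy values
    ```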

  12. Is point of care testing in Irish hospitals ready for the laboratory modernisation process? An audit against the current national Irish guidelines.

    LENUS (Irish Health Repository)

    O'Kelly, R A

    2013-04-11

    BACKGROUND: The Laboratory modernisation process in Ireland will include point of care testing (POCT) as one of its central tenets. However, a previous baseline survey showed that POCT was under-resourced particularly with respect to information technology (IT) and staffing. AIMS: An audit was undertaken to see if POCT services had improved since the publication of National Guidelines and if such services were ready for the major changes in laboratory medicine as envisaged by the Health Service Executive. METHODS: The 15 recommendations of the 2007 Guidelines were used as a template for a questionnaire, which was distributed by the Irish External Quality Assessment Scheme. RESULTS: Thirty-nine of a possible 45 acute hospitals replied. Only a quarter of respondent hospitals had POCT committees, however, allocation of staff to POCT had doubled since the first baseline survey. Poor IT infrastructure, the use of unapproved devices, and low levels of adverse incident reporting were still major issues. CONCLUSIONS: Point of care testing remains under-resourced, despite the roll out of such devices throughout the health service including primary care. The present high standards of laboratory medicine may not be maintained if the quality and cost-effectiveness of POCT is not controlled. Adherence to national Guidelines and adequate resourcing is essential to ensure patient safety.

  13. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated manually by optometrists. The proposed work aims to provide an efficient imaging solution which can help automate the process of glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point-contour joining are proposed to construct smooth contours of the optic disc. Following the clinical approach used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend from the optic disc boundary, and connects them to obtain the contours of the optic cup. The proposed method has been compared with ground truth marked by medical experts, and the similarity parameters used to determine its performance have yielded a high segmentation similarity. The proposed method achieved a macro-averaged f-score of 0.9485 and an accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for real-time glaucoma screening over a large population. Copyright © 2017 Elsevier B.V. All rights reserved.
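
    The reported classification metrics can be reproduced for any labeled test set with standard tooling; a small sketch with toy labels, not the study's data:

    ```python
    from sklearn.metrics import accuracy_score, f1_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # expert labels (1 = glaucomatous), toy
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]   # algorithm output, toy

    print("accuracy      :", accuracy_score(y_true, y_pred))
    print("macro F-score :", f1_score(y_true, y_pred, average="macro"))
    ```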

  14. Exfoliating and Dispersing Few-Layered Graphene in Low-Boiling-Point Organic Solvents towards Solution-Processed Optoelectronic Device Applications.

    Science.gov (United States)

    Zhang, Lu; Miao, Zhongshuo; Hao, Zhen; Liu, Jun

    2016-05-06

    With normal organic surfactants, graphene can only be dispersed in water and cannot be dispersed in low-boiling-point organic solvents, which hampers its application in solution-processed organic optoelectronic devices. Herein, we report the exfoliation of graphite into graphene in low-boiling-point organic solvents, for example, methanol and acetone, by using edge-carboxylated graphene quantum dots (ECGQD) as the surfactant. The great capability of ECGQD for graphene dispersion is due to its ultralarge π-conjugated unit that allows tight adhesion on the graphene surface through strong π-π interactions, its edge-carboxylated structure that diminishes the steric effects of the oxygen-containing functional groups on the basal plane of ECGQD, and its abundance of carboxylic acid groups for solubility. The graphene dispersion in methanol enables the application of graphene:ECGQD as a cathode interlayer in polymer solar cells (PSCs). Moreover, the PSC device performance of graphene:ECGQD is better than that of Ca, the state-of-the-art cathode interlayer material. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. [Design of a Hazard Analysis and Critical Control Points (HACCP) plan to assure the safety of a bologna product produced by a meat processing plant].

    Science.gov (United States)

    Bou Rached, Lizet; Ascanio, Norelis; Hernández, Pilar

    2004-03-01

    The Hazard Analysis and Critical Control Point (HACCP) system is a systematic, integral program used to identify and estimate the hazards (microbiological, chemical and physical) and the risks generated during the primary production, processing, storage, distribution, sale and consumption of foods. Establishing an HACCP program has advantages, among them: more emphasis on prevention than on detection, lower costs, a minimized risk of manufacturing faulty products, greater management confidence, and strengthened national and international competitiveness. The present work is a proposal based on the design of an HACCP program to guarantee the safety of the Special Type Bologna elaborated by a meat products plant, through the determination of hazards (microbiological, chemical or physical), the identification of critical control points (CCP), the establishment of critical limits, the planning of corrective actions, and the establishment of documentation and verification procedures. The methodology was based on the application of the seven basic principles settled by the Codex Alimentarius, yielding the design of this program. In view of the fact that meat products have recently been linked with pathogens like E. coli O157:H7 and Listeria monocytogenes, these were considered as microbiological hazards for the establishment of the HACCP plan, whose application will guarantee a safe product.

  16. Quasi-Phase Diagrams at Air/Oil Interfaces and Bulk Oil Phases for Crystallization of Small-Molecular Semiconductors by Adjusting Gibbs Adsorption.

    Science.gov (United States)

    Watanabe, Satoshi; Ohta, Takahisa; Urata, Ryota; Sato, Tetsuya; Takaishi, Kazuto; Uchiyama, Masanobu; Aoyama, Tetsuya; Kunitake, Masashi

    2017-09-12

    The temperature and concentration dependencies of the crystallization of two small-molecular semiconductors were clarified by constructing quasi-phase diagrams at air/oil interfaces and in bulk oil phases. A quinoidal quaterthiophene derivative with four alkyl chains (QQT(CN)4) in 1,1,2,2-tetrachloroethane (TCE) and a thienoacene derivative with two alkyl chains (C8-BTBT) in o-dichlorobenzene were used. The apparent crystal nucleation temperature (Tn) and dissolution temperature (Td) of the molecules were determined by optical microscopy in closed glass capillaries and open dishes during slow cooling and heating processes, respectively. Tn and Td were considered estimates of the critical temperatures for nuclear formation and crystal growth, respectively. The Tn values of QQT(CN)4 and C8-BTBT at the air/oil interfaces were higher than those in the bulk oil phases, whereas the Td values at the air/oil interfaces were almost the same as those in the bulk oil phases. These Gibbs adsorption phenomena were attributed to the solvophobic effect of the alkyl chain moieties. The temperature range between Tn and Td corresponds to suitable supercooling conditions for ideal crystal growth based on the suppression of nucleation. The Tn values at the water/oil and oil/glass interfaces did not shift compared with those of the bulk phases, indicating that adsorption did not occur at the hydrophilic interfaces. Promotion and inhibition of nuclear formation for crystal growth of the semiconductors were thus achieved at the air/oil and hydrophilic interfaces, respectively.

  17. Co-ordination of federal and provincial environmental assessment processes for the Point Lepreau Generating Station Solid Radioactive Waste Management Facility modifications

    Energy Technology Data Exchange (ETDEWEB)

    Hickman, C.; Thompson, P.D. [Point Lepreau Generating Station, Point Lepreau Refurbishment Project, Lepreau, New Brunswick (Canada); Barnes, J. [Jacques Whitford Environment Ltd., Fredericton, New Brunswick (Canada)

    2006-07-01

    Modification of the Solid Radioactive Waste Management Facility at Point Lepreau Generating Station is required to accommodate waste generated during and after an 18-month maintenance outage during which the station would be Refurbished. The modification of the facility triggered both federal and provincial environmental assessment requirements, and these assessments were conducted in a 'coordinated' and cooperative fashion. In this project, the coordinated approach worked well, and provided some significant advantages to the proponent, the public and the regulators. However, there are opportunities for further improvement in future projects, and this paper explores the advantages and disadvantages of this 'co-ordinated' approach. As part of this exploration, there is a discussion of administrative and regulatory changes that the province is considering for the environmental assessment process, and a discussion of the need for a formal 'harmonization' agreement. (author)

  18. Co-ordination of federal and provincial environmental assessment processes for the Point Lepreau Generating Station Solid Radioactive Waste Management Facility modifications

    International Nuclear Information System (INIS)

    Hickman, C.; Thompson, P.D.; Barnes, J.

    2006-01-01

    Modification of the Solid Radioactive Waste Management Facility at Point Lepreau Generating Station is required to accommodate waste generated during and after an 18-month maintenance outage during which the station would be Refurbished. The modification of the facility triggered both federal and provincial environmental assessment requirements, and these assessments were conducted in a 'coordinated' and cooperative fashion. In this project, the coordinated approach worked well, and provided some significant advantages to the proponent, the public and the regulators. However, there are opportunities for further improvement in future projects, and this paper explores the advantages and disadvantages of this 'co-ordinated' approach. As part of this exploration, there is a discussion of administrative and regulatory changes that the province is considering for the environmental assessment process, and a discussion of the need for a formal 'harmonization' agreement. (author)

  19. Assessment of attenuation processes in a chlorinated ethene plume by use of stream bed Passive Flux Meters, streambed Point Velocity Probes and contaminant mass balances

    DEFF Research Database (Denmark)

    Rønde, Vinni Kampman; McKnight, Ursula S.; Annable, Michael

    Chlorinated ethenes (CE) are abundant groundwater contaminants and pose risk to both groundwater and surface water bodies, as plumes can migrate through aquifers to streams. After release to the environment, CE may undergo attenuation. The hyporheic zone is believed to enhance CE attenuation; however, studies contradicting this have also been reported. Since dilution commonly reduces contaminant concentrations in streams to below quantification limits, use of mass balances along the pathway from groundwater to stream is unusual. Our study is conducted at the low-land Grindsted stream, Denmark, which is impacted by a contaminant plume. CE have been observed in the stream water; hence our study site provides an unusual opportunity to study attenuation processes in a CE plume as it migrates through the groundwater at the stream bank, through the stream bed and further to the point of fully mixed conditions in the stream.

  20. A Gibbs potential expansion for a quantum system made up of a large number of particles

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, Claude; Dominicis, Cyrano de [Commissariat a l' energie atomique et aux energies alternatives - CEA, Centre d' Etudes Nucleaires de Saclay, Gif-sur-Yvette (France)

    1959-07-01

    Starting from an expansion derived in a previous work, we study the contribution to the Gibbs potential of the two-body dynamical correlations, taking into account the statistical correlations. Such a contribution is of interest for low density systems at low temperature. In the zero density limit, it reduces to the Beth Uhlenbeck expression of the second virial coefficient. For a system of fermions in the zero temperature limit, it yields the contribution of the Brueckner reaction matrix to the ground state energy, plus, under certain conditions, additional terms of the form exp(β|Δ|), where the Δ are the binding energies of 'bound states' of the type first discussed by L. Cooper. Finally, we study the wave function of two particles immersed in a medium (defined by its temperature and chemical potential). It satisfies an equation generalizing the Bethe Goldstone equation for an arbitrary temperature. Reprint of a paper published in 'Nuclear Physics' 10, 1959, p. 181-196

  1. Estimation of genetic parameters for growth and backfat thickness of Large White pigs using the Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Leandro Barbosa

    2008-07-01

    Full Text Available Data consisting of 38,865 records of Large White pigs were used to estimate genetic parameters for days to 100 kg (DAYS) and backfat thickness adjusted to 100 kg (BF). Covariance components were estimated by a bivariate mixed model including the fixed effect of contemporary group and the direct and maternal additive genetic, common litter and residual random effects, using the Gibbs Sampling algorithm of the MTGSAM program. Estimates of the direct heritability for DAYS and BF were 0.33 and 0.44, and estimates of the common litter effect were 0.09 and 0.02, respectively. The additive genetic correlation between DAYS and BF was close to zero (-0.015). The heritability estimates indicate that satisfactory genetic gains may be obtained for these traits in Large White pigs, and that simultaneous selection for both traits can be carried out, since the direct additive genetic correlation is low.
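
    The Gibbs sampler behind programs such as MTGSAM alternates draws from each parameter's full conditional distribution. A toy two-variable illustration of the algorithm itself (not a variance-component model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_bivariate_normal(rho, n_iter=5000):
        """Draw from a standard bivariate normal with correlation rho by
        alternating the full conditionals
            x | y ~ N(rho*y, 1 - rho^2),   y | x ~ N(rho*x, 1 - rho^2)."""
        x = y = 0.0
        s = np.sqrt(1.0 - rho**2)           # conditional standard deviation
        out = np.empty((n_iter, 2))
        for i in range(n_iter):
            x = rng.normal(rho * y, s)
            y = rng.normal(rho * x, s)
            out[i] = x, y
        return out

    draws = gibbs_bivariate_normal(rho=-0.015)  # correlation near the estimate above
    ```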

  2. Assessment of attenuation processes in a chlorinated ethene plume by use of stream bed Passive Flux Meters, streambed Point Velocity Probes and contaminant mass balances

    Science.gov (United States)

    Rønde, V.; McKnight, U. S.; Annable, M. D.; Devlin, J. F.; Cremeans, M.; Sonne, A. T.; Bjerg, P. L.

    2017-12-01

    Chlorinated ethenes (CE) are abundant groundwater contaminants and pose risk to both groundwater and surface water bodies, as plumes can migrate through aquifers to streams. After release to the environment, CE may undergo attenuation. The hyporheic zone is believed to enhance CE attenuation, however studies contradicting this have also been reported. Since dilution commonly reduces contaminant concentrations in streams to below quantification limits, use of mass balances along the pathway from groundwater to stream is unusual. Our study is conducted at the low-land Grindsted stream, Denmark, which is impacted by a contaminant plume. CE have been observed in the stream water; hence our study site provides an unusual opportunity to study attenuation processes in a CE plume as it migrates through the groundwater at the stream bank, through the stream bed and further to the point of fully mixed conditions in the stream. The study undertook the determination of redox conditions and CE distribution from bank to stream; streambed contaminant flux estimation using streambed Passive Flux Meters (sPFM); and quantification of streambed water fluxes using temperature profiling and streambed Point Velocity Probes (SBPVP). The advantage of the sPFM is that it directly measures the contaminant flux without the need for water samples, while the advantage of the SBPVP is its ability to measure the vertical seepage velocity without the need for additional geological parameters. Finally, a mass balance assessment along the plume pathway was conducted to account for any losses or accumulations. The results show consistencies in spatial patterns between redox conditions and extent of dechlorination; between contaminant fluxes from sPFM and concentrations from water samples; and between seepage velocities from SBPVP and temperature-based water fluxes. Mass balances and parent-metabolite compound ratios indicate limited degradation between the bank and the point of fully mixed stream
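
    Comparing such measurements along the plume pathway reduces to a simple mass-discharge bookkeeping across streambed control planes; a minimal sketch, with illustrative unit conventions and toy values:

    ```python
    import numpy as np

    def mass_discharge(flux, area):
        """Total contaminant mass discharge across a control plane:
        Md = sum_i J_i * A_i, with J_i the contaminant flux through cell i
        (measured directly by an sPFM, or taken as C_i * q_i from a
        concentration and a seepage flux) and A_i the cell area."""
        return float(np.sum(np.asarray(flux) * np.asarray(area)))

    # J from concentration and Darcy flux, toy values in g/(m^2 day):
    # J = np.array([12e-3, 4.5e-3]) * np.array([0.08, 0.11])
    # md = mass_discharge(J, [2.0, 2.0])   # cell areas in m^2
    ```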

  3. [Eight-step structured decision-making process to assign criminal responsibility and seven focal points for describing relationship between psychopathology and offense].

    Science.gov (United States)

    Okada, Takayuki

    2013-01-01

    The author suggested that it is essential for lawyers and psychiatrists to have a common understanding of the mutual division of roles between them when determining criminal responsibility (CR) and, for this purpose, proposed an 8-step structured CR decision-making process. The 8 steps are: (1) gathering of information related to mental function and condition, (2) recognition of mental function and condition,(3) psychiatric diagnosis, (4) description of the relationship between psychiatric symptom or psychopathology and index offense, (5) focus on capacities of differentiation between right and wrong and behavioral control, (6) specification of elements of cognitive/volitional prong in legal context, (7) legal evaluation of degree of cognitive/volitional prong, and (8) final interpretation of CR as a legal conclusion. The author suggested that the CR decision-making process should proceed not in a step-like pattern from (1) to (2) to (3) to (8), but in a step-like pattern from (1) to (2) to (4) to (5) to (6) to (7) to (8), and that not steps after (5), which require the interpretation or the application of section 39 of the Penal Code, but Step (4), must be the core of psychiatric expert evidence. When explaining the relationship between the mental disorder and offense described in Step (4), the Seven Focal Points (7FP) are often used. The author urged basic precautions to prevent the misuse of 7FP, which are: (a) the priority of each item is not equal and the relative importance differs from case to case; (b) each item is not exclusively independent, there may be overlap between items; (c) the criminal responsibility shall not be judged because one item is applicable or because a number of items are applicable, i. e., 7FP are not "criteria," for example, the aim is not to decide such things as 'the motive is understandable' or 'the conduct is appropriate', but should be to describe how psychopathological factors affected the offense specifically in the context of

  4. Teaching the Concept of Gibbs Energy Minimization through Its Application to Phase-Equilibrium Calculation

    Science.gov (United States)

    Privat, Romain; Jaubert, Jean-Noe¨l; Berger, Etienne; Coniglio, Lucie; Lemaitre, Ce´cile; Meimaroglou, Dimitrios; Warth, Vale´rie

    2016-01-01

    Robust and fast methods for chemical or multiphase equilibrium calculation are routinely needed by chemical-process engineers working on sizing or simulation aspects. Yet, while industrial applications essentially require calculation tools capable of discriminating between stable and nonstable states and converging to nontrivial solutions,…
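
    A compact classroom-style example of the idea: for a binary liquid mixture with a one-parameter Margules excess model, the equilibrium phase split is the composition pair that minimizes the total Gibbs energy of mixing subject to the overall mass balance. The model and parameter value below are illustrative choices, not the paper's worked examples:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    A = 2.5   # Margules parameter in RT units; A > 2 opens a miscibility gap
    z = 0.5   # overall mole fraction of component 1

    def g_mix(x):
        """Dimensionless molar Gibbs energy of mixing (ideal + one-parameter excess)."""
        return x * np.log(x) + (1 - x) * np.log(1 - x) + A * x * (1 - x)

    def total_g(v):
        x1, x2 = v
        if x2 - x1 < 1e-9:
            return g_mix(z)                    # degenerate split: one phase
        beta = (z - x2) / (x1 - x2)            # lever rule: fraction of phase 1
        if not 0.0 < beta < 1.0:
            return g_mix(z)                    # infeasible split: one phase
        return beta * g_mix(x1) + (1 - beta) * g_mix(x2)

    res = minimize(total_g, x0=[0.1, 0.9], method="L-BFGS-B",
                   bounds=[(1e-6, 0.5), (0.5, 1 - 1e-6)])
    print(res.x)   # equilibrium compositions, symmetric about 0.5 for this model
    ```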

  5. Development of Bi-phase sodium-oxygen-hydrogen chemical equilibrium calculation program (BISHOP) using Gibbs free energy minimization method

    International Nuclear Information System (INIS)

    Okano, Yasushi

    1999-08-01

    In order to analyze the reaction heat and compounds due to sodium combustion, a multiphase chemical equilibrium calculation program for chemical reactions among sodium, oxygen and hydrogen was developed in this study. The numerical program is named BISHOP, which denotes 'Bi-Phase Sodium-Oxygen-Hydrogen Chemical Equilibrium Calculation Program'. The Gibbs free energy minimization method is used because of its particular merits: chemical species can easily be added or changed, and many thermochemical reaction systems can be treated in general, beyond the constant-temperature, constant-pressure case. Three new methods are developed for solving the multi-phase sodium reaction system in this study: the first is to construct the equation system by simplifying phases, the second is to extend the Gibbs free energy minimization method to multi-phase systems, and the third is to establish an effective search method for the minimum value. The chemical compounds formed by the combustion of sodium in air are calculated using BISHOP. The calculated temperature and moisture conditions under which sodium oxide and hydroxide are formed agree qualitatively with experiments. The decomposition of sodium hydride is calculated by the program; the estimated relationship between the decomposition temperature and pressure agrees closely with the well-known experimental equation of Roy and Rodgers. It is concluded that BISHOP can be used to evaluate the combustion and decomposition behaviors of sodium and its compounds. The hydrogen formation condition of the dump-tank room in the sodium leak event of an FBR is quantitatively evaluated by BISHOP. It can be concluded that keeping the temperature of the dump-tank room lower is an effective way to suppress the formation of hydrogen. In the case of choosing the lower flammability limit of 4.1 mol% as the hydrogen concentration criterion, the formation reaction of sodium hydride from sodium and hydrogen is facilitated below a temperature of 800 K, and the concentration of hydrogen
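
    The core of a Gibbs-minimization equilibrium code of this kind is a constrained minimization of the total Gibbs energy subject to element balances. The toy ideal-gas sketch below uses placeholder chemical-potential data and is not BISHOP itself:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy ideal-gas system H2, O2, H2O at fixed T and 1 bar.  mu0 holds
    # dimensionless standard chemical potentials mu0_i/RT; placeholder values,
    # not BISHOP's thermochemical data.
    mu0 = np.array([0.0, 0.0, -35.0])
    E = np.array([[2, 0, 2],      # H balance
                  [0, 2, 1]])     # O balance
    b = np.array([2.0, 1.0])      # element totals (here from 1 H2 + 0.5 O2)

    def G(n):
        """Total Gibbs energy G/RT of an ideal-gas mixture with mole numbers n."""
        n = np.clip(n, 1e-12, None)
        return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

    res = minimize(G, x0=np.array([0.4, 0.2, 0.4]), method="SLSQP",
                   bounds=[(1e-12, None)] * 3,
                   constraints={"type": "eq", "fun": lambda n: E @ n - b})
    print(res.x)   # equilibrium mole numbers: essentially all H2O here
    ```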

  6. Thermodynamics of non-ionic surfactant Triton X-100-cationic surfactants mixtures at the cloud point

    International Nuclear Information System (INIS)

    Batigoec, Cigdem; Akbas, Halide; Boz, Mesut

    2011-01-01

    Highlights: → Non-ionic surfactants are used as emulsifiers and solubilizers in industries such as textiles, detergents and cosmetics. → Non-ionic surfactants undergo phase separation in solution at a temperature known as the cloud point. → Dimeric surfactants have attracted increasing attention due to their superior surface activity. → The positive values of ΔG°cp indicate that the process proceeds non-spontaneously. - Abstract: This study investigates the effects of gemini and conventional cationic surfactants on the cloud point (CP) of the non-ionic surfactant Triton X-100 (TX-100) in aqueous solutions. Instead of visual observation, a spectrophotometer was used to measure the cloud point temperatures. The thermodynamic parameters of these mixtures were calculated at different cationic surfactant concentrations. The gemini surfactants of the alkanediyl-α,ω-bis(alkyldimethylammonium) dibromide type, on the one hand with different alkyl groups containing m carbon atoms and an ethanediyl spacer, referred to as 'm-2-m' (m = 10, 12, and 16), and on the other hand with C16 alkyl groups and different spacers containing s carbon atoms, referred to as '16-s-16' (s = 6 and 10), were synthesized, purified and characterized. Addition of the cationic surfactants to the TX-100 solution increased its cloud point temperature. The solubility of a non-ionic surfactant containing a polyoxyethylene (POE) hydrophilic chain is taken to be at a maximum at the cloud point, so the thermodynamic parameters were calculated at this temperature. The results showed that the standard Gibbs free energy (ΔG°cp), enthalpy (ΔH°cp) and entropy (ΔS°cp) of the clouding phenomenon were positive in all cases. The standard free energy (ΔG°cp) increased with increasing hydrophobic alkyl chain length for both gemini and conventional cationic surfactants; however, it decreased with increasing surfactant concentration.
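
    Papers in this area commonly evaluate the clouding parameters from the composition and temperature at the cloud point. One convention sometimes used (assumed here; the paper's exact convention may differ) takes ΔG°cp = -RT ln x_s, which is positive for dilute solutions, consistent with the values reported above, with ΔS°cp recovered from ΔH°cp:

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    def clouding_parameters(x_s, T_cp, dH):
        """One common treatment of clouding thermodynamics (convention assumed):
        dG0 = -R*T_cp*ln(x_s), positive for dilute solutions, and
        dS0 = (dH0 - dG0)/T_cp, with dH0 from a van't Hoff-type plot."""
        dG = -R * T_cp * np.log(x_s)
        return dG, (dH - dG) / T_cp

    # dG0, dS0 = clouding_parameters(x_s=2e-4, T_cp=340.0, dH=60_000.0)  # toy values
    ```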

  7. β-Decay half-lives and nuclear structure of exotic proton-rich waiting point nuclei under rp-process conditions

    Science.gov (United States)

    Nabi, Jameel-Un; Böyükata, Mahmut

    2016-03-01

    We investigate even-even nuclei in the A ~ 70 mass region within the framework of the proton-neutron quasi-particle random phase approximation (pn-QRPA) and the interacting boson model-1 (IBM-1). Our work includes calculation of the energy spectra and the potential energy surfaces V(β, γ) of Zn, Ge, Se, Kr and Sr nuclei with the same proton and neutron number, N = Z. The parametrization of the IBM-1 Hamiltonian was performed for the calculation of the energy levels in the ground-state bands. The geometric shape of the nuclei was predicted by plotting the potential energy surfaces V(β, γ) obtained from the IBM-1 Hamiltonian in the classical limit. The pn-QRPA model was later used to compute half-lives of the neutron-deficient nuclei, which were found to be in very good agreement with the measured ones. The pn-QRPA model was also used to calculate the Gamow-Teller strength distributions, which were found to be in decent agreement with the measured data. We further calculate the electron capture and positron decay rates for these N = Z waiting point (WP) nuclei in the stellar environment employing the pn-QRPA model. For rp-process conditions, our total weak rates are within a factor of two of the Skyrme HF+BCS+QRPA calculation. All calculated electron capture rates are comparable to the competing positron decay rates under rp-process conditions. Our study confirms the finding that electron capture rates form an integral part of the weak rates under rp-process conditions and should not be neglected in nuclear network calculations.

  8. Contamination and Critical Control Points (CCPs) along the processing line of sale of frozen poultry foods in retail outlets of a typical market in Ibadan, Nigeria

    Directory of Open Access Journals (Sweden)

    Adetunji, V. O

    2013-12-01

    Full Text Available Aim: Over the years, there have been considerable increases in the consumption of frozen poultry foods across Nigeria. Little attention has been paid to the microbial quality of these foods, which hence constitutes a threat to public health. The contamination levels (Enterobacteriaceae and Listeria counts) and the presence of pathogenic E. coli, Salmonella and Listeria along the processing line of sale of frozen poultry foods were assayed in retail outlets. Methodology and results: Bacteriological counts and bacterial isolation were carried out using standard plate methods, while the direct slide agglutination technique was utilized for serology. The bacteriological assay revealed extremely high counts (Listeria count (LC): 7.784±1.109 - 9.586±0.016 log cfu/cm2; Enterobacteriaceae count (EC): 7.151±0.213 - 9.318±0.161 log cfu/cm2), higher than stipulated by international food standard agencies. The highest counts for EC (9.318±0.161 log cfu/cm2) and LC (9.586±0.016 log cfu/cm2) were from the weighing scale and processing table. On average, LC (8.598±0.733 log cfu/cm2) was higher than EC (8.145±0.936 log cfu/cm2). The weighing scale had counts significantly different (p < 0.05) from all others for EC, but there were no significant differences in LC. The weighing scale and meat tables were critical control points (CCPs) in the processing line for the sale of frozen poultry meats in the retail outlets. E. coli spp., E. coli O157:H7, Salmonella spp., Salmonella Enteritidis, Listeria spp. and Listeria monocytogenes were isolated along the processing line. Conclusion, significance and impact of the study: The results of this study indicate that poultry meat is easily contaminated along the processing line of sale and may pose a potential risk to public health if counteractive measures are not applied to reduce microbial contamination during storage, sale and distribution to consumers.

  9. Gibbs energy calculation of electrolytic plasma channel with inclusions of copper and copper oxide with Al-base

    Science.gov (United States)

    Posuvailo, V. M.; Klapkiv, M. D.; Student, M. M.; Sirak, Y. Y.; Pokhmurska, H. V.

    2017-03-01

    The oxide ceramic coating with copper inclusions was synthesized by the method of plasma electrolytic oxidation (PEO). Calculations of the Gibbs energies of reactions between the plasma channel elements and the inclusions of copper and copper oxide were carried out. Two methods of forming the oxide-ceramic coatings with copper inclusions on an aluminum base in electrolytic plasma were established: the first consists of introducing copper into the aluminum matrix, the second, copper oxide. In the first case, during the synthesis of the oxide-ceramic coating, the plasma channel does not react with the copper, which is included in the coating unchanged. In the second case, copper oxide is reduced through interaction with elements of the plasma channel. The composition of the oxide-ceramic layer was investigated by X-ray and X-ray microelement analysis. Inclusions of copper, CuAl2 and Cu9Al4 were found in the oxide-ceramic coatings. It was established that in the spark plasma channels, alongside the oxidation reaction, an aluminothermic metal-reduction reaction also occurs, which allows the oxide-ceramic coating to be doped with a metal whose isobaric-isothermal potential of oxidation is less negative than that of aluminum oxide.

  10. The prediction of pH by Gibbs free energy minimization in the sump solution under LOCA condition of PWR

    Directory of Open Access Journals (Sweden)

    HYOUNGJU YOON

    2013-02-01

    Full Text Available It is required that the pH of the sump solution be above 7.0, to retain iodine in the liquid phase, and within the material compatibility constraints under LOCA conditions in a PWR. The pH of the sump solution can be determined by conventional chemical equilibrium constants or by the minimization of Gibbs free energy. The latter method, implemented in a computer code called SOLGASMIX-PV, is more convenient than the former, since various chemical components can easily be treated under LOCA conditions. In this study, the SOLGASMIX-PV code was modified to accommodate the acidic and basic materials produced by radiolysis reactions and to calculate the pH of the sump solution. When the computed pH was compared with that measured in the ORNL experiment to verify the reliability of the modified code, the error between the two values was within 0.3 pH units. Finally, two cases of calculation were performed, for the SKN 3&4 and UCN 1&2 plants. As results, the pH of the sump solution was between 7.02 and 7.45 for SKN 3&4, and between 8.07 and 9.41 for UCN 1&2. Furthermore, it was found that the radiolysis reactions have an insignificant effect on pH because the relative concentrations of HCl, HNO3, and Cs are very low.

  11. Forest wildfire hazard analysis: a point process approach

    Directory of Open Access Journals (Sweden)

    Rafael González de Gouveia

    2017-06-01

    Full Text Available Point stochastic processes represent a very useful tool for the analysis of hazard factors in wildfires. In this article, the occurrence of wildfires is studied using a spatio-temporal Poisson process, in which the intensity function is considered a characterization of wildfire hazard, estimated by parametric and non-parametric techniques. Finally, a set of real data is considered, provided by the Ministerio del Poder Popular para el Ambiente through the Instituto Nacional de Meteorología e Hidrología (INAMEH) of Venezuela, relating to wildfires occurring on a particular day. Hazard functions are estimated based on the proposed model, and wildfire hazard maps are generated which conform to the geographical and climatic characteristics of the country.
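
    One way to read the non-parametric side of such an intensity estimate: kernel-smooth the event locations and rescale so the surface integrates to the number of events. A sketch, with an illustrative grid and the default bandwidth:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def intensity_surface(xy, grid_x, grid_y):
        """First-order intensity estimate for a spatial point pattern: Gaussian
        kernel density of event locations rescaled by the number of events, so
        the surface integrates to N (events per unit area)."""
        kde = gaussian_kde(xy.T)                          # xy: (N, 2) coordinates
        gx, gy = np.meshgrid(grid_x, grid_y)
        dens = kde(np.vstack([gx.ravel(), gy.ravel()]))
        return len(xy) * dens.reshape(gx.shape)           # hazard-map raster

    # lam = intensity_surface(fires_xy, np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    ```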

  12. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    We consider a class of point processes whose realizations contain linear structures. Such a point process is constructed sequentially, by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We consider simulations of this model and compare them with real data.
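
    A toy simulation in the spirit of this sequential construction, with assumed probabilities and step sizes (not the paper's model):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def sequential_linear_pattern(n, p_extend=0.8, step=0.03, jitter=0.005):
        """Place n points one at a time: with probability p_extend the next
        point continues a slowly turning line from the previous point (plus a
        small jitter); otherwise it starts a new structure uniformly at random."""
        pts = [rng.uniform(0.0, 1.0, 2)]
        theta = rng.uniform(0.0, 2.0 * np.pi)
        for _ in range(n - 1):
            if rng.uniform() < p_extend:
                theta += rng.normal(0.0, 0.2)
                new = pts[-1] + step * np.array([np.cos(theta), np.sin(theta)])
                pts.append(np.clip(new + rng.normal(0.0, jitter, 2), 0.0, 1.0))
            else:
                pts.append(rng.uniform(0.0, 1.0, 2))
                theta = rng.uniform(0.0, 2.0 * np.pi)
        return np.array(pts)

    pattern = sequential_linear_pattern(300)
    ```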

  14. Neutron-Rich Silver Isotopes Produced by a Chemically Selective Laser Ion-Source: Test of the R-Process " Waiting-Point " Concept

    CERN Multimedia

    2002-01-01

    The r-process is an important nucleosynthesis mechanism for several reasons: (1) it is crucial to an understanding of about half of the A > 60 elemental composition of the Galaxy; (2) it is the mechanism that forms the long-lived Th-U-Pu nuclear chronometers which are used for cosmochronology; (3) it provides an important probe for the temperature (T9) - neutron density (nn) conditions in explosive events; and, last but not least, (4) it may serve to provide useful clues to and constraints upon the nuclear properties of very neutron-rich heavy nuclei. With regard to nuclear-physics data, of particular interest are the T1/2 and Pn values of certain "waiting-point" isotopes in the regions of the A ≈ 80 and 130 r-abundance peaks. Previous studies of the β-decay properties of 130Cd82 and 79Cu50 at ISOLDE using a hot plasma ion source were strongly complicated by isobar and molecular-ion c...

  15. Site formation processes at Pinnacle Point Cave 13B (Mossel Bay, Western Cape Province, South Africa): resolving stratigraphic and depositional complexities with micromorphology.

    Science.gov (United States)

    Karkanas, Panagiotis; Goldberg, Paul

    2010-01-01

    Site PP13B is a cave located on the steep cliffs of Pinnacle Point near Mossel Bay in Western Cape Province, South Africa. The depositional sequence of the cave, predating Marine Isotopic Stage 11 (MIS 11) and continuing to present, is in the form of isolated sediment exposures with different depositional facies and vertical and lateral variations. Micromorphological analysis demonstrated that a suite of natural sedimentation processes operated during the development of the sequence ranging from water action to aeolian activity, and from speleothem formations to plant colonization and root encrustation. At the same time, anthropogenic sediments that are mainly in the form of burnt remains from combustion features (e.g., wood ash, charcoal, and burnt bone) were accumulating. Several erosional episodes have resulted in a complicated stratigraphy, as discerned from different depositional and post-depositional features. The cave is associated with a fluctuating coastal environment, frequent changes in sea level and climate controlled patterns of sedimentation, and the presence or absence of humans. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Model-based testing for space-time interaction using point processes: An application to psychiatric hospital admissions in an urban area.

    Science.gov (United States)

    Meyer, Sebastian; Warnke, Ingeborg; Rössler, Wulf; Held, Leonhard

    2016-05-01

    Spatio-temporal interaction is inherent to cases of infectious diseases and occurrences of earthquakes, whereas the spread of other events, such as cancer or crime, is less evident. Statistical significance tests of space-time clustering usually assess the correlation between the spatial and temporal (transformed) distances of the events. Although appealing in their simplicity, these classical tests do not adjust for the underlying population, nor can they account for a distance decay of interaction. We propose to use the framework of an endemic-epidemic point process model to jointly estimate a background event rate explained by seasonal and areal characteristics, as well as a superposed epidemic component representing the hypothesis of interest. We illustrate this new model-based test for space-time interaction by analysing psychiatric inpatient admissions in Zurich, Switzerland (2007-2012). Several socio-economic factors were found to be associated with the admission rate, but there was no evidence of general clustering of the cases. Copyright © 2016 Elsevier Ltd. All rights reserved.
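
    The endemic-epidemic framework referred to here writes the conditional intensity of the point process as a background rate plus a self-exciting component. A generic form of this model class (a sketch of the framework, not the exact specification fitted in the paper) is

    ```latex
    \lambda(s, t) = \nu(s, t)
      + \sum_{j \,:\, t_j < t} \eta_j \, g(t - t_j) \, f\bigl( \lVert s - s_j \rVert \bigr)
    ```

    where $\nu(s,t)$ carries the seasonal and areal (e.g. socio-economic) covariate effects, and the kernels $g$ and $f$ describe the temporal and spatial decay of interaction; testing for space-time interaction then amounts to testing whether the epidemic sum is needed at all.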

  17. Gibbs free energy of reactions involving SiC, Si3N4, H2, and H2O as a function of temperature and pressure

    Science.gov (United States)

    Isham, M. A.

    1992-01-01

    Silicon carbide and silicon nitride are considered for application as structural materials and coatings in advanced propulsion systems, including nuclear thermal propulsion. Three-dimensional Gibbs free energy surfaces were constructed for reactions involving these materials in H2 and H2/H2O atmospheres; the surfaces are functions of temperature and pressure. Calculations used the definition of Gibbs free energy, with the spontaneity of reactions evaluated as a function of temperature and pressure. Silicon carbide decomposes to Si and CH4 in pure H2 and forms a SiO2 scale in a wet atmosphere. Silicon nitride remains stable under all conditions. There was no apparent difference in reaction thermodynamics between ideal and van der Waals treatments of the gaseous species.
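
    The type of calculation described here is straightforward to reproduce for an ideal-gas treatment: the Gibbs energy of reaction at temperature T and given partial pressures is ΔG = ΔG°(T) + RT ln Q. The Python sketch below uses the SiC decomposition reaction mentioned in the abstract, but the standard enthalpy and entropy values are rough placeholders, not data from the record.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def delta_g(T, p_h2, p_ch4, dH0=-2.0e3, dS0=-75.0):
        """Gibbs energy change (J/mol) for SiC(s) + 2 H2(g) -> Si(s) + CH4(g)
        at temperature T (K) and partial pressures in bar, ideal-gas treatment.
        dH0 (J/mol) and dS0 (J/(mol K)) are placeholder standard values."""
        dG0 = dH0 - T * dS0           # standard-state Gibbs energy at T
        Q = p_ch4 / p_h2**2           # reaction quotient; pure solids omitted
        return dG0 + R * T * np.log(Q)

    # A negative value means the decomposition is spontaneous at these conditions.
    print(delta_g(T=1200.0, p_h2=1.0, p_ch4=1e-5))  # about -2.7e4 J/mol
    ```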

  18. Gibbs free energy of transfer of a methylene group on {UCON + (sodium or potassium) phosphate salts} aqueous two-phase systems: Hydrophobicity effects

    International Nuclear Information System (INIS)

    Silverio, Sara C.; Rodriguez, Oscar; Teixeira, Jose A.; Macedo, Eugenia A.

    2010-01-01

    The Gibbs free energy of transfer of a suitable hydrophobic probe can be regarded as a measure of the relative hydrophobicity of the different phases. The methylene group (CH2) can be considered hydrophobic and is thus a suitable probe for hydrophobicity. In this work, the partition coefficients of a series of five dinitrophenylated amino acids were experimentally determined, at 23 °C, in three different tie-lines of the biphasic systems: (UCON + K2HPO4), (UCON + potassium phosphate buffer, pH 7), (UCON + KH2PO4), (UCON + Na2HPO4), (UCON + sodium phosphate buffer, pH 7), and (UCON + NaH2PO4). The Gibbs free energy of transfer of CH2 units was calculated from the partition coefficients and used to compare the relative hydrophobicity of the equilibrium phases. The largest relative hydrophobicity was found for the ATPS formed by dihydrogen phosphate salts.
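
    The usual way to extract such a value is to regress ln K on the number of methylene groups across the homologous series of probes: for ln K = C + E n, the transfer free energy per CH2 group is ΔG(CH2) = -RT E. A minimal sketch with invented partition coefficients (the real measurements are in the paper) follows.

    ```python
    import numpy as np

    R = 8.314    # J/(mol K)
    T = 296.15   # 23 degrees C in kelvin

    # Invented data: CH2 count of each DNP-amino acid side chain and its
    # measured partition coefficient K in one tie-line (NOT the paper's data).
    n_ch2 = np.array([1, 2, 3, 4, 5])
    K = np.array([0.80, 0.95, 1.13, 1.34, 1.60])

    # ln K is linear in n for a homologous series: ln K = C + E * n.
    E, C = np.polyfit(n_ch2, np.log(K), 1)
    dG_ch2 = -R * T * E
    print(f"slope E = {E:.3f}, dG(CH2) = {dG_ch2:.0f} J/mol")
    ```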

  19. Gibbs energies of formation of zircon (ZrSiO4), thorite (ThSiO4), and phenacite (Be2SiO4)

    International Nuclear Information System (INIS)

    Schuiling, R.D.; Vergouwen, L.; Rijst, H. van der

    1976-01-01

    Zircon, thorite, and phenacite are very refractory compounds which do not yield to solution calorimetry. In order to obtain approximate Gibbs energies of formation for these minerals, their reactions with a number of silica-undersaturated compounds (silicates or oxides) were studied. Conversely, baddeleyite (ZrO2), thorianite (ThO2), and bromellite (BeO) were reacted with the appropriate silicates. As the Gibbs energies of reaction of the undersaturated compounds with SiO2 are known, the experiments yield the following data: ΔG°(298 K, 1 bar) = -459.02 ± 1.04 kcal for zircon, -489.67 ± 1.04 for thorite, and -480.20 ± 1.01 for phenacite.
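
    The bookkeeping behind such a determination is a Hess's-law combination: once the energy of a reaction like baddeleyite + quartz -> zircon is bracketed by the observed phase relations, the formation energy follows from the known oxide values. The numbers below are illustrative placeholders chosen only to show the arithmetic, not the paper's inputs.

    ```python
    # ZrO2 (baddeleyite) + SiO2 (quartz) -> ZrSiO4 (zircon)
    dGf_ZrO2 = -249.2   # kcal/mol, placeholder formation energy
    dGf_SiO2 = -204.7   # kcal/mol, placeholder formation energy
    dGr      = -5.1     # kcal/mol, placeholder reaction energy from phase relations

    dGf_zircon = dGf_ZrO2 + dGf_SiO2 + dGr
    print(f"dGf(ZrSiO4) ~ {dGf_zircon:.2f} kcal/mol")  # ~ -459.0
    ```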

  20. Solubility and standard Gibbs energies of transfer of alkali metal, tetramethyl-, and tetraethylammonium perchlorates from water to water-acetone solvents

    International Nuclear Information System (INIS)

    Kireev, A.A.; Pak, T.G.; Bezuglyj, V.D.

    1996-01-01

    Solubilities of KClO4, RbClO4, CsClO4, (CH3)4NClO4, and (C2H5)4NClO4 in water and water-acetone mixtures were determined by the method of isothermal saturation at 298.15 K. Dissociation constants of the alkali metal perchlorates were found by the conductometric method. Solubility products and standard Gibbs energies of transfer of the corresponding electrolytes from water into water-acetone solvents were calculated. The dependence of the transfer Gibbs energy on solvent composition is explained by preferred solvation of the cations by acetone molecules and of the anions by water molecules. The behaviour of the tetraalkylammonium ions is explained by large changes in the energy of cavity formation for these ions.
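
    The reported transfer energies follow directly from the solubility products: for a fully dissociated 1:1 electrolyte, ΔGt°(water -> solvent) = RT ln[Ksp(water)/Ksp(solvent)]. A sketch with invented solubility products (not the paper's data) follows; a positive result means the salt is less stabilised in the mixed solvent than in water.

    ```python
    import numpy as np

    R, T = 8.314, 298.15  # J/(mol K), K

    def transfer_gibbs_energy(ksp_water, ksp_solvent):
        """Standard Gibbs energy of transfer (J/mol) of a 1:1 electrolyte
        from water to a mixed solvent, computed from solubility products."""
        return R * T * np.log(ksp_water / ksp_solvent)

    # Invented values for illustration only:
    print(transfer_gibbs_energy(ksp_water=1.05e-2, ksp_solvent=2.1e-4))
    # about 9.7e3 J/mol
    ```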